public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2021-12-05 23:43 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2021-12-05 23:43 UTC (permalink / raw)
  To: gentoo-commits

commit:     cf66e2805fb422a66cc137fab18b936b1c769569
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Dec  5 23:35:14 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Dec  5 23:43:27 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cf66e280

Remove KSPP setting for HARDENED_USERCOPY_FALLBACK

This config option has been removed in 5.16.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 74e80d3e..b51dd21b 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -290,19 +290,9 @@
 +		See the settings that become available for more details and fine-tuning.
 +
 +endmenu
-diff --git a/security/Kconfig b/security/Kconfig
-index 7561f6f99..01f0bf73f 100644
---- a/security/Kconfig
-+++ b/security/Kconfig
-@@ -166,6 +166,7 @@ config HARDENED_USERCOPY
- config HARDENED_USERCOPY_FALLBACK
- 	bool "Allow usercopy whitelist violations to fallback to object size"
- 	depends on HARDENED_USERCOPY
-+	depends on !GENTOO_KERNEL_SELF_PROTECTION
- 	default y
- 	help
- 	  This is a temporary option that allows missing usercopy whitelists
-@@ -181,6 +182,7 @@ config HARDENED_USERCOPY_PAGESPAN
+--- a/security/Kconfig	2021-12-05 18:20:55.655677710 -0500
++++ b/security/Kconfig	2021-12-05 18:23:42.404251618 -0500
+@@ -167,6 +167,7 @@ config HARDENED_USERCOPY_PAGESPAN
  	bool "Refuse to copy allocations that span multiple pages"
  	depends on HARDENED_USERCOPY
  	depends on EXPERT



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2021-12-05 23:46 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2021-12-05 23:46 UTC (permalink / raw)
  To: gentoo-commits

commit:     a8f0e431469a1ed2817fac5ebea32d67aa50c9ac
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Dec  5 23:45:52 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Dec  5 23:45:52 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a8f0e431

Add genpatches to 5.16

Patches:
Patch to support namespace user.pax.* on tmpfs.
Patch to enable link security restrictions by default.
Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
Revert: drm/i915: Implement Wa_1508744258
tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
sign-file: full functionality with modern LibreSSL

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  32 +
 1500_XATTR_USER_PREFIX.patch                       |  67 ++
 ...ble-link-security-restrictions-by-default.patch |  20 +
 ...zes-only-if-Secure-Simple-Pairing-enabled.patch |  37 ++
 2700_drm-i915-revert-Implement-Wa-1508744258.patch |  51 ++
 ...3-Fix-build-issue-by-selecting-CONFIG_REG.patch |  30 +
 2920_sign-file-patch-for-libressl.patch            |  16 +
 3000_Support-printing-firmware-info.patch          |  14 +
 4567_distro-Gentoo-Kconfig.patch                   |   2 +-
 5010_enable-cpu-optimizations-universal.patch      | 680 +++++++++++++++++++++
 10 files changed, 948 insertions(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index 90189932..419ae990 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,38 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1500_XATTR_USER_PREFIX.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc:   Support for namespace user.pax.* on tmpfs.
+
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default.
+
+Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch:  2700_drm-i915-revert-Implement-Wa-1508744258.patch
+From:   https://bugs.gentoo.org/828080
+Desc:   Revert: drm/i915: Implement Wa_1508744258
+
+Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+From:   https://bugs.gentoo.org/710790
+Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+
+Patch:  2920_sign-file-patch-for-libressl.patch
+From:   https://bugs.gentoo.org/717166
+Desc:   sign-file: full functionality with modern LibreSSL
+
+Patch:  3000_Support-printing-firmware-info.patch
+From:   https://bugs.gentoo.org/732852
+Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5010_enable-cpu-optimizations-universal.patch
+From:   https://github.com/graysky2/kernel_compiler_patch
+Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.

diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 00000000..245dcc29
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,67 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags.  The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed even on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs.  Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT  "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+ 
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+ 
+ #endif /* _UAPI_LINUX_XATTR_H */
+--- a/mm/shmem.c	2020-05-04 15:30:27.042035334 -0400
++++ b/mm/shmem.c	2020-05-04 15:34:57.013881725 -0400
+@@ -3238,6 +3238,14 @@ static int shmem_xattr_handler_set(const
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 
+ 	name = xattr_full_name(handler, name);
++
++	if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++		if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++			return -EOPNOTSUPP;
++		if (size > 8)
++			return -EINVAL;
++	}
++
+ 	return simple_xattr_set(&info->xattrs, name, value, size, flags, NULL);
+ }
+ 
+@@ -3253,6 +3261,12 @@ static const struct xattr_handler shmem_
+ 	.set = shmem_xattr_handler_set,
+ };
+ 
++static const struct xattr_handler shmem_user_xattr_handler = {
++	.prefix = XATTR_USER_PREFIX,
++	.get = shmem_xattr_handler_get,
++	.set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ 	&posix_acl_access_xattr_handler,
+@@ -3260,6 +3274,7 @@ static const struct xattr_handler *shmem
+ #endif
+ 	&shmem_security_xattr_handler,
+ 	&shmem_trusted_xattr_handler,
++	&shmem_user_xattr_handler,
+ 	NULL
+ };
+ 

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 00000000..f0ed144f
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,20 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+--- a/fs/namei.c	2018-09-28 07:56:07.770005006 -0400
++++ b/fs/namei.c	2018-09-28 07:56:43.370349204 -0400
+@@ -885,8 +885,8 @@ static inline void put_link(struct namei
+ 		path_put(&last->link);
+ }
+ 
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ int sysctl_protected_fifos __read_mostly;
+ int sysctl_protected_regular __read_mostly;
+ 
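On kernels that do not carry this patch, the same defaults can be applied at runtime through sysctl; a drop-in fragment such as the following (path and filename are illustrative) gives equivalent behavior:

```
# /etc/sysctl.d/50-link-restrictions.conf
# Equivalent of the patched defaults for unpatched kernels
fs.protected_symlinks = 1
fs.protected_hardlinks = 1
```

The patch simply makes 1 the compiled-in default, so systems without a sysctl override are protected out of the box.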

diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 00000000..394ad48f
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+The encryption is only mandatory to be enforced when both sides are using
+Secure Simple Pairing and this means the key size check makes only sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices like mice the encryption was
+optional and thus causing an issue if the key size check is not bound to
+using Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 			return 0;
+ 	}
+ 
+-	if (hci_conn_ssp_enabled(conn) &&
+-	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++	/* If Secure Simple Pairing is not enabled, then legacy connection
++	 * setup is used and no encryption or key sizes can be enforced.
++	 */
++	if (!hci_conn_ssp_enabled(conn))
++		return 1;
++
++	if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
+ 	/* The minimum encryption key size needs to be enforced by the
+-- 
+2.20.1

diff --git a/2700_drm-i915-revert-Implement-Wa-1508744258.patch b/2700_drm-i915-revert-Implement-Wa-1508744258.patch
new file mode 100644
index 00000000..37e60701
--- /dev/null
+++ b/2700_drm-i915-revert-Implement-Wa-1508744258.patch
@@ -0,0 +1,51 @@
+From 72641d8d60401a5f1e1a0431ceaf928680d34418 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Jos=C3=A9=20Roberto=20de=20Souza?= <jose.souza@intel.com>
+Date: Fri, 19 Nov 2021 06:09:30 -0800
+Subject: Revert "drm/i915: Implement Wa_1508744258"
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This workarounds are causing hangs, because I missed the fact that it
+needs to be enabled for all cases and disabled when doing a resolve
+pass.
+
+So KMD only needs to whitelist it and UMD will be the one setting it
+on per case.
+
+This reverts commit 28ec02c9cbebf3feeaf21a59df9dfbc02bda3362.
+
+Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4145
+Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
+Fixes: 28ec02c9cbeb ("drm/i915: Implement Wa_1508744258")
+Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20211119140931.32791-1-jose.souza@intel.com
+(cherry picked from commit f3799ff16fcfacd44aee55db162830df461b631f)
+Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
+---
+ drivers/gpu/drm/i915/gt/intel_workarounds.c | 7 -------
+ 1 file changed, 7 deletions(-)
+
+(limited to 'drivers/gpu/drm/i915/gt/intel_workarounds.c')
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index e1f3625308891..ed73d9bc9d40b 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -621,13 +621,6 @@ static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine,
+ 	       FF_MODE2_GS_TIMER_MASK,
+ 	       FF_MODE2_GS_TIMER_224,
+ 	       0, false);
+-
+-	/*
+-	 * Wa_14012131227:dg1
+-	 * Wa_1508744258:tgl,rkl,dg1,adl-s,adl-p
+-	 */
+-	wa_masked_en(wal, GEN7_COMMON_SLICE_CHICKEN1,
+-		     GEN9_RHWO_OPTIMIZATION_DISABLE);
+ }
+ 
+ static void dg1_ctx_workarounds_init(struct intel_engine_cs *engine,
+-- 
+cgit 1.2.3-1.el7
+

diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
new file mode 100644
index 00000000..43356857
--- /dev/null
+++ b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
@@ -0,0 +1,30 @@
+From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Mon, 23 Mar 2020 08:20:06 -0400
+Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build.  Select it by
+ default in Kconfig. Reported at gentoo bugzilla:
+ https://bugs.gentoo.org/710790
+Cc: mpagano@gentoo.org
+
+Reported-by: Phil Stracchino <phils@caerllewys.net>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hwmon/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..530b4f29ba85 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
+ config SENSORS_TMP513
+ 	tristate "Texas Instruments TMP513 and compatibles"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for Texas Instruments TMP512,
+ 	  and TMP513 temperature and power supply sensor chips.
+-- 
+2.24.1
+

diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 00000000..e6ec017d
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c	2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c	2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+  * signing with anything other than SHA1 - so we're stuck with that if such is
+  * the case.
+  */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+-	OPENSSL_VERSION_NUMBER < 0x10000000L || \
+-	defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++	( defined(LIBRESSL_VERSION_NUMBER) \
++	&& (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++	OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7

diff --git a/3000_Support-printing-firmware-info.patch b/3000_Support-printing-firmware-info.patch
new file mode 100644
index 00000000..a630cfbe
--- /dev/null
+++ b/3000_Support-printing-firmware-info.patch
@@ -0,0 +1,14 @@
+--- a/drivers/base/firmware_loader/main.c	2021-08-24 15:42:07.025482085 -0400
++++ b/drivers/base/firmware_loader/main.c	2021-08-24 15:44:40.782975313 -0400
+@@ -809,6 +809,11 @@ _request_firmware(const struct firmware
+ 
+ 	ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ 					offset, opt_flags);
++
++#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
++        printk(KERN_NOTICE "Loading firmware: %s\n", name);
++#endif
++
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index b51dd21b..05570254 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -124,7 +124,6 @@
 +	select BPF_SYSCALL
 +	select CGROUP_BPF
 +	select CGROUPS
-+	select CHECKPOINT_RESTORE
 +	select CRYPTO_HMAC 
 +	select CRYPTO_SHA256
 +	select CRYPTO_USER_API_HASH
@@ -136,6 +135,7 @@
 +	select FILE_LOCKING
 +	select INOTIFY_USER
 +	select IPV6
++	select KCMP
 +	select NET
 +	select NET_NS
 +	select PROC_FS

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
new file mode 100644
index 00000000..becfda36
--- /dev/null
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -0,0 +1,680 @@
+From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
+From: graysky <graysky@archlinux.us>
+Date: Tue, 14 Sep 2021 15:35:34 -0400
+Subject: [PATCH] more uarches for kernel 5.15+
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+With the release of gcc 11.1 and clang 12.0, several generic 64-bit levels are
+offered which are good for supported Intel or AMD CPUs:
+• x86-64-v2
+• x86-64-v3
+• x86-64-v4
+
+Users of glibc 2.33 and above can see which level is supported by current
+hardware by running:
+  /lib/ld-linux-x86-64.so.2 --help | grep supported
+
+Alternatively, compare the flags from /proc/cpuinfo to this list.[1]
+
+CPU-specific microarchitectures include:
+• AMD Improved K8-family
+• AMD K10-family
+• AMD Family 10h (Barcelona)
+• AMD Family 14h (Bobcat)
+• AMD Family 16h (Jaguar)
+• AMD Family 15h (Bulldozer)
+• AMD Family 15h (Piledriver)
+• AMD Family 15h (Steamroller)
+• AMD Family 15h (Excavator)
+• AMD Family 17h (Zen)
+• AMD Family 17h (Zen 2)
+• AMD Family 19h (Zen 3)†
+• Intel Silvermont low-power processors
+• Intel Goldmont low-power processors (Apollo Lake and Denverton)
+• Intel Goldmont Plus low-power processors (Gemini Lake)
+• Intel 1st Gen Core i3/i5/i7 (Nehalem)
+• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+• Intel 4th Gen Core i3/i5/i7 (Haswell)
+• Intel 5th Gen Core i3/i5/i7 (Broadwell)
+• Intel 6th Gen Core i3/i5/i7 (Skylake)
+• Intel 6th Gen Core i7/i9 (Skylake X)
+• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+• Intel 10th Gen Core i7/i9 (Ice Lake)
+• Intel Xeon (Cascade Lake)
+• Intel Xeon (Cooper Lake)*
+• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
+• Intel 3rd Gen 10nm++ Xeon (Sapphire Rapids)‡
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)‡
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
+
+Notes: If not otherwise noted, gcc >=9.1 is required for support.
+       *Requires gcc >=10.1 or clang >=10.0
+       †Required gcc >=10.3 or clang >=12.0
+       ‡Required gcc >=11.1 or clang >=12.0
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
+CPUs should select the 'AMD-Native' option.
+
+MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
+This patch also changes -march=atom to -march=bonnell in accordance with the
+gcc v4.9 changes. Upstream is using the deprecated -match=atom flags when I
+believe it should use the newer -march=bonnell flag for atom processors.[3]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.15
+gcc version >=9.0 or clang version >=9.0
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
+2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+3.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+4.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+5.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+Signed-off-by: graysky <graysky@archlinux.us>
+---
+From 1bfa1ef4e3a93e540a64cd1020863019dff3046e Mon Sep 17 00:00:00 2001
+From: graysky <graysky@archlinux.us>
+Date: Sun, 14 Nov 2021 16:08:29 -0500
+Subject: [PATCH] iiii
+
+---
+ arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile               |  40 +++-
+ arch/x86/include/asm/vermagic.h |  66 +++++++
+ 3 files changed, 424 insertions(+), 14 deletions(-)
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index eefc434351db..331f7631339a 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -157,7 +157,7 @@ config MPENTIUM4
+
+
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ 	  flags to GCC.
+
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -173,12 +173,98 @@ config MK7
+ 	  flags to GCC.
+
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	help
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	help
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	help
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	help
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	help
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	help
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	help
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	help
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	help
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	help
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	help
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
++config MZEN2
++	bool "AMD Zen 2"
++	help
++	  Select this for AMD Family 17h Zen 2 processors.
++
++	  Enables -march=znver2
++
++config MZEN3
++	bool "AMD Zen 3"
++	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++	  Select this for AMD Family 19h Zen 3 processors.
++
++	  Enables -march=znver3
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -270,7 +356,7 @@ config MPSC
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
+ 	help
+
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,6 +364,8 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+
++	  Enables -march=core2
++
+ config MATOM
+ 	bool "Intel Atom"
+ 	help
+@@ -287,6 +375,182 @@ config MATOM
+ 	  accordingly optimized code. Use a recent GCC with specific Atom
+ 	  support in order to fully benefit from selecting this option.
+
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
++	help
++
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MGOLDMONT
++	bool "Intel Goldmont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++	  Enables -march=goldmont
++
++config MGOLDMONTPLUS
++	bool "Intel Goldmont Plus"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++	  Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Cannon Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 8th Gen Core processors
++
++	  Enables -march=cannonlake
++
++config MICELAKE
++	bool "Intel Ice Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 10th Gen Core processors in the Ice Lake family.
++
++	  Enables -march=icelake-client
++
++config MCASCADELAKE
++	bool "Intel Cascade Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for Xeon processors in the Cascade Lake family.
++
++	  Enables -march=cascadelake
++
++config MCOOPERLAKE
++	bool "Intel Cooper Lake"
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	select X86_P6_NOP
++	help
++
++	  Select this for Xeon processors in the Cooper Lake family.
++
++	  Enables -march=cooperlake
++
++config MTIGERLAKE
++	bool "Intel Tiger Lake"
++	depends on  (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	select X86_P6_NOP
++	help
++
++	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++	  Enables -march=tigerlake
++
++config MSAPPHIRERAPIDS
++	bool "Intel Sapphire Rapids"
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	select X86_P6_NOP
++	help
++
++	  Select this for third-generation 10 nm process processors in the Sapphire Rapids family.
++
++	  Enables -march=sapphirerapids
++
++config MROCKETLAKE
++	bool "Intel Rocket Lake"
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	select X86_P6_NOP
++	help
++
++	  Select this for eleventh-generation processors in the Rocket Lake family.
++
++	  Enables -march=rocketlake
++
++config MALDERLAKE
++	bool "Intel Alder Lake"
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	select X86_P6_NOP
++	help
++
++	  Select this for twelfth-generation processors in the Alder Lake family.
++
++	  Enables -march=alderlake
++
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+ 	depends on X86_64
+@@ -294,6 +558,50 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+
++config GENERIC_CPU2
++	bool "Generic-x86-64-v2"
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	depends on X86_64
++	help
++	  Generic x86-64 CPU.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v2.
++
++config GENERIC_CPU3
++	bool "Generic-x86-64-v3"
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	depends on X86_64
++	help
++	  Generic x86-64-v3 CPU with v3 instructions.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v3.
++
++config GENERIC_CPU4
++	bool "Generic-x86-64-v4"
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	depends on X86_64
++	help
++	  Generic x86-64 CPU with v4 instructions.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v4.
++
++config MNATIVE_INTEL
++	bool "Intel-Native optimizations autodetected by the compiler"
++	help
++
++	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. Do NOT use this
++	  for AMD CPUs.  Intel Only!
++
++	  Enables -march=native
++
++config MNATIVE_AMD
++	bool "AMD-Native optimizations autodetected by the compiler"
++	help
++
++	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. Do NOT use this
++	  for Intel CPUs.  AMD Only!
++
++	  Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
+ 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
+
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
+
+ config X86_USE_3DNOW
+ 	def_bool y
+@@ -360,26 +668,26 @@ config X86_USE_3DNOW
+ config X86_P6_NOP
+ 	def_bool y
+ 	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
+
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
+
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
++	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
+
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
+ 	default "5" if X86_32 && X86_CMPXCHG64
+ 	default "4"
+
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 42243869216d..ab1ad6959b96 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -119,8 +119,44 @@ else
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
+         cflags-$(CONFIG_MK8)		+= -march=k8
+         cflags-$(CONFIG_MPSC)		+= -march=nocona
+-        cflags-$(CONFIG_MCORE2)		+= -march=core2
+-        cflags-$(CONFIG_MATOM)		+= -march=atom
++        cflags-$(CONFIG_MK8SSE3)	+= -march=k8-sse3
++        cflags-$(CONFIG_MK10) 		+= -march=amdfam10
++        cflags-$(CONFIG_MBARCELONA) 	+= -march=barcelona
++        cflags-$(CONFIG_MBOBCAT) 	+= -march=btver1
++        cflags-$(CONFIG_MJAGUAR) 	+= -march=btver2
++        cflags-$(CONFIG_MBULLDOZER) 	+= -march=bdver1
++        cflags-$(CONFIG_MPILEDRIVER)	+= -march=bdver2 -mno-tbm
++        cflags-$(CONFIG_MSTEAMROLLER) 	+= -march=bdver3 -mno-tbm
++        cflags-$(CONFIG_MEXCAVATOR) 	+= -march=bdver4 -mno-tbm
++        cflags-$(CONFIG_MZEN) 		+= -march=znver1
++        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
++        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
++        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
++        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
++        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
++        cflags-$(CONFIG_MCORE2) 	+= -march=core2
++        cflags-$(CONFIG_MNEHALEM) 	+= -march=nehalem
++        cflags-$(CONFIG_MWESTMERE) 	+= -march=westmere
++        cflags-$(CONFIG_MSILVERMONT) 	+= -march=silvermont
++        cflags-$(CONFIG_MGOLDMONT) 	+= -march=goldmont
++        cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
++        cflags-$(CONFIG_MSANDYBRIDGE) 	+= -march=sandybridge
++        cflags-$(CONFIG_MIVYBRIDGE) 	+= -march=ivybridge
++        cflags-$(CONFIG_MHASWELL) 	+= -march=haswell
++        cflags-$(CONFIG_MBROADWELL) 	+= -march=broadwell
++        cflags-$(CONFIG_MSKYLAKE) 	+= -march=skylake
++        cflags-$(CONFIG_MSKYLAKEX) 	+= -march=skylake-avx512
++        cflags-$(CONFIG_MCANNONLAKE) 	+= -march=cannonlake
++        cflags-$(CONFIG_MICELAKE) 	+= -march=icelake-client
++        cflags-$(CONFIG_MCASCADELAKE) 	+= -march=cascadelake
++        cflags-$(CONFIG_MCOOPERLAKE) 	+= -march=cooperlake
++        cflags-$(CONFIG_MTIGERLAKE) 	+= -march=tigerlake
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
++        cflags-$(CONFIG_MROCKETLAKE) 	+= -march=rocketlake
++        cflags-$(CONFIG_MALDERLAKE) 	+= -march=alderlake
++        cflags-$(CONFIG_GENERIC_CPU2) 	+= -march=x86-64-v2
++        cflags-$(CONFIG_GENERIC_CPU3) 	+= -march=x86-64-v3
++        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
+         cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
+         KBUILD_CFLAGS += $(cflags-y)
+
+diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
+index 75884d2cdec3..4e6a08d4c7e5 100644
+--- a/arch/x86/include/asm/vermagic.h
++++ b/arch/x86/include/asm/vermagic.h
+@@ -17,6 +17,48 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE_INTEL
++#define MODULE_PROC_FAMILY "NATIVE_INTEL "
++#elif defined CONFIG_MNATIVE_AMD
++#define MODULE_PROC_FAMILY "NATIVE_AMD "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
++#elif defined CONFIG_MCOOPERLAKE
++#define MODULE_PROC_FAMILY "COOPERLAKE "
++#elif defined CONFIG_MTIGERLAKE
++#define MODULE_PROC_FAMILY "TIGERLAKE "
++#elif defined CONFIG_MSAPPHIRERAPIDS
++#define MODULE_PROC_FAMILY "SAPPHIRERAPIDS "
++#elif defined CONFIG_MROCKETLAKE
++#define MODULE_PROC_FAMILY "ROCKETLAKE "
++#elif defined CONFIG_MALDERLAKE
++#define MODULE_PROC_FAMILY "ALDERLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +77,30 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
++#elif defined CONFIG_MZEN3
++#define MODULE_PROC_FAMILY "ZEN3 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--
+2.33.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread
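[Editorial note: the MNATIVE_INTEL and MNATIVE_AMD options in the patch above both pass -march=native to the compiler. A quick way to see which concrete target that resolves to on a given machine is the common driver-verbosity trick sketched below; it assumes a gcc-compatible compiler in PATH and is not part of the patch itself.]

```shell
# Sketch: show which concrete -march= target "native" resolves to on this
# machine, before choosing MNATIVE_INTEL or MNATIVE_AMD.
# Assumes a gcc-compatible compiler; prints "unknown" if probing fails.
CC=${CC:-cc}
native_march=unknown
if command -v "$CC" >/dev/null 2>&1; then
    # The driver expands -march=native before invoking cc1, so the cc1
    # command line in the -v output carries the resolved CPU name.
    probed=$("$CC" -march=native -E -v - </dev/null 2>&1 \
        | grep cc1 | grep -m1 -o -e '-march=[a-zA-Z0-9._-]*')
    [ -n "$probed" ] && native_march=${probed#-march=}
fi
echo "native target: $native_march"
```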

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2021-12-07 19:42 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2021-12-07 19:42 UTC (permalink / raw
  To: gentoo-commits

commit:     56267a53319a8d6cc12c891d59d7c437003343bf
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Dec  7 19:41:51 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Dec  7 19:41:51 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=56267a53

Remove redundant patch

Removed:
2700_drm-i915-revert-Implement-Wa-1508744258.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 --
 2700_drm-i915-revert-Implement-Wa-1508744258.patch | 51 ----------------------
 2 files changed, 55 deletions(-)

diff --git a/0000_README b/0000_README
index 419ae990..efde5c7d 100644
--- a/0000_README
+++ b/0000_README
@@ -55,10 +55,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2700_drm-i915-revert-Implement-Wa-1508744258.patch
-From:   https://bugs.gentoo.org/828080
-Desc:   Revert: drm/i915: Implement Wa_1508744258
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_drm-i915-revert-Implement-Wa-1508744258.patch b/2700_drm-i915-revert-Implement-Wa-1508744258.patch
deleted file mode 100644
index 37e60701..00000000
--- a/2700_drm-i915-revert-Implement-Wa-1508744258.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From 72641d8d60401a5f1e1a0431ceaf928680d34418 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Jos=C3=A9=20Roberto=20de=20Souza?= <jose.souza@intel.com>
-Date: Fri, 19 Nov 2021 06:09:30 -0800
-Subject: Revert "drm/i915: Implement Wa_1508744258"
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-This workarounds are causing hangs, because I missed the fact that it
-needs to be enabled for all cases and disabled when doing a resolve
-pass.
-
-So KMD only needs to whitelist it and UMD will be the one setting it
-on per case.
-
-This reverts commit 28ec02c9cbebf3feeaf21a59df9dfbc02bda3362.
-
-Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4145
-Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
-Fixes: 28ec02c9cbeb ("drm/i915: Implement Wa_1508744258")
-Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com>
-Link: https://patchwork.freedesktop.org/patch/msgid/20211119140931.32791-1-jose.souza@intel.com
-(cherry picked from commit f3799ff16fcfacd44aee55db162830df461b631f)
-Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
----
- drivers/gpu/drm/i915/gt/intel_workarounds.c | 7 -------
- 1 file changed, 7 deletions(-)
-
-(limited to 'drivers/gpu/drm/i915/gt/intel_workarounds.c')
-
-diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
-index e1f3625308891..ed73d9bc9d40b 100644
---- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
-+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
-@@ -621,13 +621,6 @@ static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine,
- 	       FF_MODE2_GS_TIMER_MASK,
- 	       FF_MODE2_GS_TIMER_224,
- 	       0, false);
--
--	/*
--	 * Wa_14012131227:dg1
--	 * Wa_1508744258:tgl,rkl,dg1,adl-s,adl-p
--	 */
--	wa_masked_en(wal, GEN7_COMMON_SLICE_CHICKEN1,
--		     GEN9_RHWO_OPTIMIZATION_DISABLE);
- }
- 
- static void dg1_ctx_workarounds_init(struct intel_engine_cs *engine,
--- 
-cgit 1.2.3-1.el7
-


^ permalink raw reply related	[flat|nested] 35+ messages in thread
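[Editorial note: the bookkeeping this commit performs — dropping a "Patch:" entry from 0000_README together with the patch file — can be cross-checked mechanically. A small sketch; the README body embedded below is a hypothetical sample, not the real file.]

```shell
# Sketch: extract the patch files a linux-patches 0000_README declares,
# so a stale "Patch:" entry (like the one this commit removes) is easy
# to spot. The sample README content is hypothetical.
readme=$(mktemp)
cat > "$readme" <<'EOF'
Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled.
Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
Desc:   tmp513 requires REGMAP_I2C to build.
EOF
# Every declared patch is the second field of a "Patch:" line.
patches=$(awk '/^Patch:/ { print $2 }' "$readme")
printf '%s\n' "$patches"
rm -f "$readme"
```

Comparing that list against `ls *.patch` in the patchset directory would flag entries whose files no longer exist.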

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-01-02 15:29 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-01-02 15:29 UTC (permalink / raw
  To: gentoo-commits

commit:     4bb4205fa2bb031ebb6a5373a24eb66d67048081
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 21 19:26:40 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jan  2 15:29:05 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4bb4205f

Move X86 and ARM only config settings to their respective sections

Thanks to gyakovlev

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 05570254..24b75095 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-24 15:34:24.700702871 -0400
-+++ b/distro/Kconfig	2021-08-24 15:49:16.965525424 -0400
-@@ -0,0 +1,281 @@
+--- /dev/null	2021-12-21 08:57:43.779324794 -0500
++++ b/distro/Kconfig	2021-12-21 14:12:07.964572417 -0500
+@@ -0,0 +1,283 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -211,7 +211,6 @@
 +	select PAGE_POISONING_ZERO
 +	select INIT_ON_ALLOC_DEFAULT_ON
 +	select INIT_ON_FREE_DEFAULT_ON
-+	select VMAP_STACK
 +	select REFCOUNT_FULL
 +	select FORTIFY_SOURCE
 +	select SECURITY_DMESG_RESTRICT
@@ -219,7 +218,6 @@
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
-+	select GCC_PLUGIN_STACKLEAK
 +	select GCC_PLUGIN_RANDSTRUCT
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 +
@@ -239,6 +237,8 @@
 +	select RELOCATABLE
 +	select LEGACY_VSYSCALL_NONE
 + 	select PAGE_TABLE_ISOLATION
++	select GCC_PLUGIN_STACKLEAK
++	select VMAP_STACK
 +
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM64
@@ -251,6 +251,8 @@
 +	select RELOCATABLE
 +	select ARM64_SW_TTBR0_PAN
 +	select UNMAP_KERNEL_AT_EL0
++	select GCC_PLUGIN_STACKLEAK
++	select VMAP_STACK
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_32
 +	bool "X86_32 KSPP Settings"


^ permalink raw reply related	[flat|nested] 35+ messages in thread
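[Editorial note: whether the arch-specific selects this commit moves around actually landed can be verified against the final kernel .config. A minimal sketch; the .config fragment embedded below is hypothetical sample data.]

```shell
# Sketch: given a kernel .config, verify that the two options this commit
# moves into the arch-specific KSPP sections were actually selected.
# The sample .config fragment is hypothetical.
config_file=$(mktemp)
cat > "$config_file" <<'EOF'
CONFIG_GENTOO_KERNEL_SELF_PROTECTION=y
CONFIG_VMAP_STACK=y
CONFIG_GCC_PLUGIN_STACKLEAK=y
EOF
missing=0
for opt in CONFIG_VMAP_STACK CONFIG_GCC_PLUGIN_STACKLEAK; do
    grep -q "^${opt}=y" "$config_file" || { echo "missing: $opt"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "KSPP arch selects present"
rm -f "$config_file"
```

On a real system the same check would be run against the build tree's .config (or /proc/config.gz) instead of the sample file.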

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-01-16 10:19 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-01-16 10:19 UTC (permalink / raw
  To: gentoo-commits

commit:     856d11ee1caf5f42639c1727cd391e3fe53822ee
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 14 14:11:12 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 14 14:11:12 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=856d11ee

mt76: mt7921e: fix possible probe failure after reboot

See bug #830977

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 ...e-fix-possible-probe-failure-after-reboot.patch | 436 +++++++++++++++++++++
 2 files changed, 440 insertions(+)

diff --git a/0000_README b/0000_README
index efde5c7d..c012760e 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
+From:   https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
+Desc:   mt76: mt7921e: fix possible probe failure after reboot
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch b/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
new file mode 100644
index 00000000..4440e910
--- /dev/null
+++ b/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
@@ -0,0 +1,436 @@
+From patchwork Fri Jan  7 07:30:03 2022
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 7bit
+X-Patchwork-Submitter: Sean Wang <sean.wang@mediatek.com>
+X-Patchwork-Id: 12706336
+X-Patchwork-Delegate: nbd@nbd.name
+Return-Path: <linux-wireless-owner@kernel.org>
+X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
+	aws-us-west-2-korg-lkml-1.web.codeaurora.org
+Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
+	by smtp.lore.kernel.org (Postfix) with ESMTP id 21BAFC433F5
+	for <linux-wireless@archiver.kernel.org>;
+ Fri,  7 Jan 2022 07:30:14 +0000 (UTC)
+Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
+        id S234590AbiAGHaN (ORCPT
+        <rfc822;linux-wireless@archiver.kernel.org>);
+        Fri, 7 Jan 2022 02:30:13 -0500
+Received: from mailgw01.mediatek.com ([60.244.123.138]:50902 "EHLO
+        mailgw01.mediatek.com" rhost-flags-OK-FAIL-OK-FAIL) by vger.kernel.org
+        with ESMTP id S234589AbiAGHaM (ORCPT
+        <rfc822;linux-wireless@vger.kernel.org>);
+        Fri, 7 Jan 2022 02:30:12 -0500
+X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
+X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
+Received: from mtkcas10.mediatek.inc [(172.21.101.39)] by
+ mailgw01.mediatek.com
+        (envelope-from <sean.wang@mediatek.com>)
+        (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-SHA384 256/256)
+        with ESMTP id 982418171; Fri, 07 Jan 2022 15:30:06 +0800
+Received: from mtkcas10.mediatek.inc (172.21.101.39) by
+ mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server
+ (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.792.3;
+ Fri, 7 Jan 2022 15:30:05 +0800
+Received: from mtkswgap22.mediatek.inc (172.21.77.33) by mtkcas10.mediatek.inc
+ (172.21.101.73) with Microsoft SMTP Server id 15.0.1497.2 via Frontend
+ Transport; Fri, 7 Jan 2022 15:30:04 +0800
+From: <sean.wang@mediatek.com>
+To: <nbd@nbd.name>, <lorenzo.bianconi@redhat.com>
+CC: <sean.wang@mediatek.com>, <Soul.Huang@mediatek.com>,
+        <YN.Chen@mediatek.com>, <Leon.Yen@mediatek.com>,
+        <Eric-SY.Chang@mediatek.com>, <Mark-YW.Chen@mediatek.com>,
+        <Deren.Wu@mediatek.com>, <km.lin@mediatek.com>,
+        <jenhao.yang@mediatek.com>, <robin.chiu@mediatek.com>,
+        <Eddie.Chen@mediatek.com>, <ch.yeh@mediatek.com>,
+        <posh.sun@mediatek.com>, <ted.huang@mediatek.com>,
+        <Eric.Liang@mediatek.com>, <Stella.Chang@mediatek.com>,
+        <Tom.Chou@mediatek.com>, <steve.lee@mediatek.com>,
+        <jsiuda@google.com>, <frankgor@google.com>, <jemele@google.com>,
+        <abhishekpandit@google.com>, <shawnku@google.com>,
+        <linux-wireless@vger.kernel.org>,
+        <linux-mediatek@lists.infradead.org>,
+        "Deren Wu" <deren.wu@mediatek.com>
+Subject: [PATCH] mt76: mt7921e: fix possible probe failure after reboot
+Date: Fri, 7 Jan 2022 15:30:03 +0800
+Message-ID: 
+ <70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com>
+X-Mailer: git-send-email 1.7.9.5
+MIME-Version: 1.0
+X-MTK: N
+Precedence: bulk
+List-ID: <linux-wireless.vger.kernel.org>
+X-Mailing-List: linux-wireless@vger.kernel.org
+
+From: Sean Wang <sean.wang@mediatek.com>
+
+It is not guaranteed that the mt7921e starts in ASPM L0 after each
+machine reboot on every platform.
+
+If the mt7921e does not start in ASPM L0, the driver can intermittently
+fail in mt7921_pci_probe, for example by reading a weird chip
+identifier
+
+[  215.514503] mt7921e 0000:05:00.0: ASIC revision: feed0000
+[  216.604741] mt7921e: probe of 0000:05:00.0 failed with error -110
+
+or by failing to init the hardware, because the driver is not allowed to
+access the registers until the device is in the ASPM L0 state. So, we call
+__mt7921e_mcu_drv_pmctrl early in mt7921_pci_probe to force the device
+back to the L0 state so that we can safely access registers in any
+case.
+
+In the patch, we move all functions from dma.c to pci.c and register the
+mt76 bus operations earlier, which __mt7921e_mcu_drv_pmctrl depends on.
+
+Fixes: bf3747ae2e25 ("mt76: mt7921: enable aspm by default")
+Reported-by: Kai-Chuan Hsieh <kaichuan.hsieh@canonical.com>
+Co-developed-by: Deren Wu <deren.wu@mediatek.com>
+Signed-off-by: Deren Wu <deren.wu@mediatek.com>
+Signed-off-by: Sean Wang <sean.wang@mediatek.com>
+---
+ .../net/wireless/mediatek/mt76/mt7921/dma.c   | 119 -----------------
+ .../wireless/mediatek/mt76/mt7921/mt7921.h    |   1 +
+ .../net/wireless/mediatek/mt76/mt7921/pci.c   | 124 ++++++++++++++++++
+ .../wireless/mediatek/mt76/mt7921/pci_mcu.c   |  18 ++-
+ 4 files changed, 139 insertions(+), 123 deletions(-)
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index cdff1fd52d93..39d6ce4ecddd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -78,110 +78,6 @@ static void mt7921_dma_prefetch(struct mt7921_dev *dev)
+ 	mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
+ }
+ 
+-static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
+-{
+-	static const struct {
+-		u32 phys;
+-		u32 mapped;
+-		u32 size;
+-	} fixed_map[] = {
+-		{ 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
+-		{ 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
+-		{ 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
+-		{ 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
+-		{ 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
+-		{ 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
+-		{ 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
+-		{ 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
+-		{ 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
+-		{ 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
+-		{ 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
+-		{ 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
+-		{ 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
+-		{ 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
+-		{ 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
+-		{ 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
+-		{ 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
+-		{ 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
+-		{ 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
+-		{ 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
+-		{ 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
+-		{ 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
+-		{ 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
+-		{ 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
+-		{ 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
+-		{ 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
+-		{ 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
+-		{ 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
+-		{ 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
+-		{ 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
+-		{ 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
+-		{ 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
+-		{ 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
+-		{ 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
+-		{ 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
+-		{ 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
+-		{ 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
+-		{ 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
+-		{ 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
+-		{ 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
+-		{ 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
+-		{ 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
+-		{ 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
+-	};
+-	int i;
+-
+-	if (addr < 0x100000)
+-		return addr;
+-
+-	for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
+-		u32 ofs;
+-
+-		if (addr < fixed_map[i].phys)
+-			continue;
+-
+-		ofs = addr - fixed_map[i].phys;
+-		if (ofs > fixed_map[i].size)
+-			continue;
+-
+-		return fixed_map[i].mapped + ofs;
+-	}
+-
+-	if ((addr >= 0x18000000 && addr < 0x18c00000) ||
+-	    (addr >= 0x70000000 && addr < 0x78000000) ||
+-	    (addr >= 0x7c000000 && addr < 0x7c400000))
+-		return mt7921_reg_map_l1(dev, addr);
+-
+-	dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
+-		addr);
+-
+-	return 0;
+-}
+-
+-static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
+-{
+-	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	u32 addr = __mt7921_reg_addr(dev, offset);
+-
+-	return dev->bus_ops->rr(mdev, addr);
+-}
+-
+-static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
+-{
+-	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	u32 addr = __mt7921_reg_addr(dev, offset);
+-
+-	dev->bus_ops->wr(mdev, addr, val);
+-}
+-
+-static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
+-{
+-	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	u32 addr = __mt7921_reg_addr(dev, offset);
+-
+-	return dev->bus_ops->rmw(mdev, addr, mask, val);
+-}
+-
+ static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
+ {
+ 	if (force) {
+@@ -341,23 +237,8 @@ int mt7921_wpdma_reinit_cond(struct mt7921_dev *dev)
+ 
+ int mt7921_dma_init(struct mt7921_dev *dev)
+ {
+-	struct mt76_bus_ops *bus_ops;
+ 	int ret;
+ 
+-	dev->phy.dev = dev;
+-	dev->phy.mt76 = &dev->mt76.phy;
+-	dev->mt76.phy.priv = &dev->phy;
+-	dev->bus_ops = dev->mt76.bus;
+-	bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
+-			       GFP_KERNEL);
+-	if (!bus_ops)
+-		return -ENOMEM;
+-
+-	bus_ops->rr = mt7921_rr;
+-	bus_ops->wr = mt7921_wr;
+-	bus_ops->rmw = mt7921_rmw;
+-	dev->mt76.bus = bus_ops;
+-
+ 	mt76_dma_attach(&dev->mt76);
+ 
+ 	ret = mt7921_dma_disable(dev, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 8b674e042568..63e3c7ef5e89 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -443,6 +443,7 @@ int mt7921e_mcu_init(struct mt7921_dev *dev);
+ int mt7921s_wfsys_reset(struct mt7921_dev *dev);
+ int mt7921s_mac_reset(struct mt7921_dev *dev);
+ int mt7921s_init_reset(struct mt7921_dev *dev);
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 1ae0d5826ca7..a0c82d19c4d9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -121,6 +121,110 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
+ 	mt76_free_device(&dev->mt76);
+ }
+ 
++static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
++{
++	static const struct {
++		u32 phys;
++		u32 mapped;
++		u32 size;
++	} fixed_map[] = {
++		{ 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
++		{ 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
++		{ 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
++		{ 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
++		{ 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
++		{ 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
++		{ 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
++		{ 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
++		{ 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
++		{ 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
++		{ 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
++		{ 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
++		{ 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
++		{ 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
++		{ 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
++		{ 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
++		{ 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
++		{ 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
++		{ 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
++		{ 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
++		{ 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
++		{ 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
++		{ 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
++		{ 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
++		{ 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
++		{ 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
++		{ 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
++		{ 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
++		{ 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
++		{ 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
++		{ 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
++		{ 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
++		{ 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
++		{ 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
++		{ 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
++		{ 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
++		{ 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
++		{ 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
++		{ 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
++		{ 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
++		{ 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
++		{ 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
++		{ 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
++	};
++	int i;
++
++	if (addr < 0x100000)
++		return addr;
++
++	for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
++		u32 ofs;
++
++		if (addr < fixed_map[i].phys)
++			continue;
++
++		ofs = addr - fixed_map[i].phys;
++		if (ofs > fixed_map[i].size)
++			continue;
++
++		return fixed_map[i].mapped + ofs;
++	}
++
++	if ((addr >= 0x18000000 && addr < 0x18c00000) ||
++	    (addr >= 0x70000000 && addr < 0x78000000) ||
++	    (addr >= 0x7c000000 && addr < 0x7c400000))
++		return mt7921_reg_map_l1(dev, addr);
++
++	dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
++		addr);
++
++	return 0;
++}
++
++static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
++{
++	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++	u32 addr = __mt7921_reg_addr(dev, offset);
++
++	return dev->bus_ops->rr(mdev, addr);
++}
++
++static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
++{
++	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++	u32 addr = __mt7921_reg_addr(dev, offset);
++
++	dev->bus_ops->wr(mdev, addr, val);
++}
++
++static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
++{
++	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++	u32 addr = __mt7921_reg_addr(dev, offset);
++
++	return dev->bus_ops->rmw(mdev, addr, mask, val);
++}
++
+ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 			    const struct pci_device_id *id)
+ {
+@@ -152,6 +256,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 		.fw_own = mt7921e_mcu_fw_pmctrl,
+ 	};
+ 
++	struct mt76_bus_ops *bus_ops;
+ 	struct mt7921_dev *dev;
+ 	struct mt76_dev *mdev;
+ 	int ret;
+@@ -189,6 +294,25 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 
+ 	mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
+ 	tasklet_init(&dev->irq_tasklet, mt7921_irq_tasklet, (unsigned long)dev);
++
++	dev->phy.dev = dev;
++	dev->phy.mt76 = &dev->mt76.phy;
++	dev->mt76.phy.priv = &dev->phy;
++	dev->bus_ops = dev->mt76.bus;
++	bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
++			       GFP_KERNEL);
++	if (!bus_ops)
++		return -ENOMEM;
++
++	bus_ops->rr = mt7921_rr;
++	bus_ops->wr = mt7921_wr;
++	bus_ops->rmw = mt7921_rmw;
++	dev->mt76.bus = bus_ops;
++
++	ret = __mt7921e_mcu_drv_pmctrl(dev);
++	if (ret)
++		return ret;
++
+ 	mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
+ 		    (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
+ 	dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+index f9e350b67fdc..36669e5aeef3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+@@ -59,10 +59,8 @@ int mt7921e_mcu_init(struct mt7921_dev *dev)
+ 	return err;
+ }
+ 
+-int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ {
+-	struct mt76_phy *mphy = &dev->mt76.phy;
+-	struct mt76_connac_pm *pm = &dev->pm;
+ 	int i, err = 0;
+ 
+ 	for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
+@@ -75,9 +73,21 @@ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ 	if (i == MT7921_DRV_OWN_RETRY_COUNT) {
+ 		dev_err(dev->mt76.dev, "driver own failed\n");
+ 		err = -EIO;
+-		goto out;
+ 	}
+ 
++	return err;
++}
++
++int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++{
++	struct mt76_phy *mphy = &dev->mt76.phy;
++	struct mt76_connac_pm *pm = &dev->pm;
++	int err;
++
++	err = __mt7921e_mcu_drv_pmctrl(dev);
++	if (err < 0)
++		goto out;
++
+ 	mt7921_wpdma_reinit_cond(dev);
+ 	clear_bit(MT76_STATE_PM, &mphy->state);
+ 


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-01-16 10:20 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-01-16 10:20 UTC (permalink / raw
  To: gentoo-commits

commit:     4e5e1b080f2be5453274fb18361a359006b5f5ed
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 16 10:20:04 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jan 16 10:20:04 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4e5e1b08

Linux patch 5.16.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1000_linux-5.16.1.patch | 1121 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1125 insertions(+)

diff --git a/0000_README b/0000_README
index c012760e..4a1aa215 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-5.16.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-5.16.1.patch b/1000_linux-5.16.1.patch
new file mode 100644
index 00000000..a6ba1125
--- /dev/null
+++ b/1000_linux-5.16.1.patch
@@ -0,0 +1,1121 @@
+diff --git a/Makefile b/Makefile
+index 08510230b42f3..fdbd06daf2af1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts
+index 19bb7dc98b339..21b791150697d 100644
+--- a/arch/arm/boot/dts/exynos4210-i9100.dts
++++ b/arch/arm/boot/dts/exynos4210-i9100.dts
+@@ -828,7 +828,7 @@
+ 		compatible = "brcm,bcm4330-bt";
+ 
+ 		shutdown-gpios = <&gpl0 4 GPIO_ACTIVE_HIGH>;
+-		reset-gpios = <&gpl1 0 GPIO_ACTIVE_HIGH>;
++		reset-gpios = <&gpl1 0 GPIO_ACTIVE_LOW>;
+ 		device-wakeup-gpios = <&gpx3 1 GPIO_ACTIVE_HIGH>;
+ 		host-wakeup-gpios = <&gpx2 6 GPIO_ACTIVE_HIGH>;
+ 	};
+diff --git a/arch/parisc/include/uapi/asm/pdc.h b/arch/parisc/include/uapi/asm/pdc.h
+index acc633c157221..e794e143ec5f8 100644
+--- a/arch/parisc/include/uapi/asm/pdc.h
++++ b/arch/parisc/include/uapi/asm/pdc.h
+@@ -4,7 +4,7 @@
+ 
+ /*
+  *	PDC return values ...
+- *	All PDC calls return a subset of these errors. 
++ *	All PDC calls return a subset of these errors.
+  */
+ 
+ #define PDC_WARN		  3	/* Call completed with a warning */
+@@ -165,7 +165,7 @@
+ #define PDC_PSW_GET_DEFAULTS	1	/* Return defaults              */
+ #define PDC_PSW_SET_DEFAULTS	2	/* Set default                  */
+ #define PDC_PSW_ENDIAN_BIT	1	/* set for big endian           */
+-#define PDC_PSW_WIDE_BIT	2	/* set for wide mode            */ 
++#define PDC_PSW_WIDE_BIT	2	/* set for wide mode            */
+ 
+ #define PDC_SYSTEM_MAP	22		/* find system modules		*/
+ #define PDC_FIND_MODULE 	0
+@@ -274,7 +274,7 @@
+ #define PDC_PCI_PCI_INT_ROUTE_SIZE	13
+ #define PDC_PCI_GET_INT_TBL_SIZE	PDC_PCI_PCI_INT_ROUTE_SIZE
+ #define PDC_PCI_PCI_INT_ROUTE		14
+-#define PDC_PCI_GET_INT_TBL		PDC_PCI_PCI_INT_ROUTE 
++#define PDC_PCI_GET_INT_TBL		PDC_PCI_PCI_INT_ROUTE
+ #define PDC_PCI_READ_MON_TYPE		15
+ #define PDC_PCI_WRITE_MON_TYPE		16
+ 
+@@ -345,7 +345,7 @@
+ 
+ /* constants for PDC_CHASSIS */
+ #define OSTAT_OFF		0
+-#define OSTAT_FLT		1 
++#define OSTAT_FLT		1
+ #define OSTAT_TEST		2
+ #define OSTAT_INIT		3
+ #define OSTAT_SHUT		4
+@@ -403,7 +403,7 @@ struct zeropage {
+ 	int	vec_pad1[6];
+ 
+ 	/* [0x040] reserved processor dependent */
+-	int	pad0[112];
++	int	pad0[112];              /* in QEMU pad0[0] holds "SeaBIOS\0" */
+ 
+ 	/* [0x200] reserved */
+ 	int	pad1[84];
+@@ -691,6 +691,22 @@ struct pdc_hpmc_pim_20 { /* PDC_PIM */
+ 	unsigned long long fr[32];
+ };
+ 
++struct pim_cpu_state_cf {
++	union {
++	unsigned int
++		iqv : 1,	/* IIA queue Valid */
++		iqf : 1,	/* IIA queue Failure */
++		ipv : 1,	/* IPRs Valid */
++		grv : 1,	/* GRs Valid */
++		crv : 1,	/* CRs Valid */
++		srv : 1,	/* SRs Valid */
++		trv : 1,	/* CR24 through CR31 valid */
++		pad : 24,	/* reserved */
++		td  : 1;	/* TOC did not cause any damage to the system state */
++	unsigned int val;
++	};
++};
++
+ struct pdc_toc_pim_11 {
+ 	unsigned int gr[32];
+ 	unsigned int cr[32];
+@@ -698,8 +714,7 @@ struct pdc_toc_pim_11 {
+ 	unsigned int iasq_back;
+ 	unsigned int iaoq_back;
+ 	unsigned int check_type;
+-	unsigned int hversion;
+-	unsigned int cpu_state;
++	struct pim_cpu_state_cf cpu_state;
+ };
+ 
+ struct pdc_toc_pim_20 {
+@@ -709,8 +724,7 @@ struct pdc_toc_pim_20 {
+ 	unsigned long long iasq_back;
+ 	unsigned long long iaoq_back;
+ 	unsigned int check_type;
+-	unsigned int hversion;
+-	unsigned int cpu_state;
++	struct pim_cpu_state_cf cpu_state;
+ };
+ 
+ #endif /* !defined(__ASSEMBLY__) */
+diff --git a/drivers/bluetooth/bfusb.c b/drivers/bluetooth/bfusb.c
+index 5a321b4076aab..cab93935cc7f1 100644
+--- a/drivers/bluetooth/bfusb.c
++++ b/drivers/bluetooth/bfusb.c
+@@ -628,6 +628,9 @@ static int bfusb_probe(struct usb_interface *intf, const struct usb_device_id *i
+ 	data->bulk_out_ep   = bulk_out_ep->desc.bEndpointAddress;
+ 	data->bulk_pkt_size = le16_to_cpu(bulk_out_ep->desc.wMaxPacketSize);
+ 
++	if (!data->bulk_pkt_size)
++		goto done;
++
+ 	rwlock_init(&data->lock);
+ 
+ 	data->reassembly = NULL;
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index e4182acee488c..d9ceca7a7935c 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/module.h>
+ #include <linux/firmware.h>
++#include <linux/dmi.h>
+ #include <asm/unaligned.h>
+ 
+ #include <net/bluetooth/bluetooth.h>
+@@ -343,6 +344,52 @@ static struct sk_buff *btbcm_read_usb_product(struct hci_dev *hdev)
+ 	return skb;
+ }
+ 
++static const struct dmi_system_id disable_broken_read_transmit_power[] = {
++	{
++		 .matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro16,1"),
++		},
++	},
++	{
++		 .matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro16,2"),
++		},
++	},
++	{
++		 .matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro16,4"),
++		},
++	},
++	{
++		 .matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookAir8,1"),
++		},
++	},
++	{
++		 .matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookAir8,2"),
++		},
++	},
++	{
++		 .matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "iMac20,1"),
++		},
++	},
++	{
++		 .matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "iMac20,2"),
++		},
++	},
++	{ }
++};
++
+ static int btbcm_read_info(struct hci_dev *hdev)
+ {
+ 	struct sk_buff *skb;
+@@ -363,6 +410,10 @@ static int btbcm_read_info(struct hci_dev *hdev)
+ 	bt_dev_info(hdev, "BCM: features 0x%2.2x", skb->data[1]);
+ 	kfree_skb(skb);
+ 
++	/* Read DMI and disable broken Read LE Min/Max Tx Power */
++	if (dmi_first_match(disable_broken_read_transmit_power))
++		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 9359bff472965..b11567b0fd9ab 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2353,8 +2353,15 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 	 * As a workaround, send HCI Reset command first which will reset the
+ 	 * number of completed commands and allow normal command processing
+ 	 * from now on.
++	 *
++	 * Regarding the INTEL_BROKEN_SHUTDOWN_LED flag, these devices maybe
++	 * in the SW_RFKILL ON state as a workaround of fixing LED issue during
++	 * the shutdown() procedure, and once the device is in SW_RFKILL ON
++	 * state, the only way to exit out of it is sending the HCI_Reset
++	 * command.
+ 	 */
+-	if (btintel_test_flag(hdev, INTEL_BROKEN_INITIAL_NCMD)) {
++	if (btintel_test_flag(hdev, INTEL_BROKEN_INITIAL_NCMD) ||
++	    btintel_test_flag(hdev, INTEL_BROKEN_SHUTDOWN_LED)) {
+ 		skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL,
+ 				     HCI_INIT_TIMEOUT);
+ 		if (IS_ERR(skb)) {
+@@ -2426,12 +2433,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 				set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ 					&hdev->quirks);
+ 
+-			/* These devices have an issue with LED which doesn't
+-			 * go off immediately during shutdown. Set the flag
+-			 * here to send the LED OFF command during shutdown.
+-			 */
+-			btintel_set_flag(hdev, INTEL_BROKEN_LED);
+-
+ 			err = btintel_legacy_rom_setup(hdev, &ver);
+ 			break;
+ 		case 0x0b:      /* SfP */
+@@ -2562,9 +2563,10 @@ static int btintel_shutdown_combined(struct hci_dev *hdev)
+ 
+ 	/* Some platforms have an issue with BT LED when the interface is
+ 	 * down or BT radio is turned off, which takes 5 seconds to BT LED
+-	 * goes off. This command turns off the BT LED immediately.
++	 * goes off. As a workaround, sends HCI_Intel_SW_RFKILL to put the
++	 * device in the RFKILL ON state which turns off the BT LED immediately.
+ 	 */
+-	if (btintel_test_flag(hdev, INTEL_BROKEN_LED)) {
++	if (btintel_test_flag(hdev, INTEL_BROKEN_SHUTDOWN_LED)) {
+ 		skb = __hci_cmd_sync(hdev, 0xfc3f, 0, NULL, HCI_INIT_TIMEOUT);
+ 		if (IS_ERR(skb)) {
+ 			ret = PTR_ERR(skb);
+diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h
+index e500c0d7a7298..c9b24e9299e2a 100644
+--- a/drivers/bluetooth/btintel.h
++++ b/drivers/bluetooth/btintel.h
+@@ -150,7 +150,7 @@ enum {
+ 	INTEL_FIRMWARE_FAILED,
+ 	INTEL_BOOTING,
+ 	INTEL_BROKEN_INITIAL_NCMD,
+-	INTEL_BROKEN_LED,
++	INTEL_BROKEN_SHUTDOWN_LED,
+ 	INTEL_ROM_LEGACY,
+ 
+ 	__INTEL_NUM_FLAGS,
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 75c83768c2573..c923c38658baa 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -59,6 +59,7 @@ static struct usb_driver btusb_driver;
+ #define BTUSB_WIDEBAND_SPEECH	0x400000
+ #define BTUSB_VALID_LE_STATES   0x800000
+ #define BTUSB_QCA_WCN6855	0x1000000
++#define BTUSB_INTEL_BROKEN_SHUTDOWN_LED	0x2000000
+ #define BTUSB_INTEL_BROKEN_INITIAL_NCMD 0x4000000
+ 
+ static const struct usb_device_id btusb_table[] = {
+@@ -295,6 +296,24 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0cf3, 0xe600), .driver_info = BTUSB_QCA_WCN6855 |
+ 						     BTUSB_WIDEBAND_SPEECH |
+ 						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0cc), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0d6), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0e3), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x10ab, 0x9309), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x10ab, 0x9409), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0d0), .driver_info = BTUSB_QCA_WCN6855 |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
+ 
+ 	/* Broadcom BCM2035 */
+ 	{ USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
+@@ -365,10 +384,13 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x8087, 0x0033), .driver_info = BTUSB_INTEL_COMBINED },
+ 	{ USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR },
+ 	{ USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL_COMBINED |
+-						     BTUSB_INTEL_BROKEN_INITIAL_NCMD },
+-	{ USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL_COMBINED },
++						     BTUSB_INTEL_BROKEN_INITIAL_NCMD |
++						     BTUSB_INTEL_BROKEN_SHUTDOWN_LED },
++	{ USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL_COMBINED |
++						     BTUSB_INTEL_BROKEN_SHUTDOWN_LED },
+ 	{ USB_DEVICE(0x8087, 0x0a2b), .driver_info = BTUSB_INTEL_COMBINED },
+-	{ USB_DEVICE(0x8087, 0x0aa7), .driver_info = BTUSB_INTEL_COMBINED },
++	{ USB_DEVICE(0x8087, 0x0aa7), .driver_info = BTUSB_INTEL_COMBINED |
++						     BTUSB_INTEL_BROKEN_SHUTDOWN_LED },
+ 	{ USB_DEVICE(0x8087, 0x0aaa), .driver_info = BTUSB_INTEL_COMBINED },
+ 
+ 	/* Other Intel Bluetooth devices */
+@@ -384,6 +406,8 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Realtek 8852AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0xc852), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0bda, 0x385a), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0bda, 0x4852), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x04c5, 0x165c), .driver_info = BTUSB_REALTEK |
+@@ -423,6 +447,14 @@ static const struct usb_device_id blacklist_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH |
+ 						     BTUSB_VALID_LE_STATES },
+ 
++	/* MediaTek MT7922A Bluetooth devices */
++	{ USB_DEVICE(0x0489, 0xe0d8), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++	{ USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH |
++						     BTUSB_VALID_LE_STATES },
++
+ 	/* Additional Realtek 8723AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3394), .driver_info = BTUSB_REALTEK },
+@@ -2236,7 +2268,7 @@ static int btusb_set_bdaddr_mtk(struct hci_dev *hdev, const bdaddr_t *bdaddr)
+ 	struct sk_buff *skb;
+ 	long ret;
+ 
+-	skb = __hci_cmd_sync(hdev, 0xfc1a, sizeof(bdaddr), bdaddr, HCI_INIT_TIMEOUT);
++	skb = __hci_cmd_sync(hdev, 0xfc1a, 6, bdaddr, HCI_INIT_TIMEOUT);
+ 	if (IS_ERR(skb)) {
+ 		ret = PTR_ERR(skb);
+ 		bt_dev_err(hdev, "changing Mediatek device address failed (%ld)",
+@@ -2265,6 +2297,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 		skb = bt_skb_alloc(HCI_WMT_MAX_EVENT_SIZE, GFP_ATOMIC);
+ 		if (!skb) {
+ 			hdev->stat.err_rx++;
++			kfree(urb->setup_packet);
+ 			return;
+ 		}
+ 
+@@ -2285,6 +2318,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 			data->evt_skb = skb_clone(skb, GFP_ATOMIC);
+ 			if (!data->evt_skb) {
+ 				kfree_skb(skb);
++				kfree(urb->setup_packet);
+ 				return;
+ 			}
+ 		}
+@@ -2293,6 +2327,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 		if (err < 0) {
+ 			kfree_skb(data->evt_skb);
+ 			data->evt_skb = NULL;
++			kfree(urb->setup_packet);
+ 			return;
+ 		}
+ 
+@@ -2303,6 +2338,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 			wake_up_bit(&data->flags,
+ 				    BTUSB_TX_WAIT_VND_EVT);
+ 		}
++		kfree(urb->setup_packet);
+ 		return;
+ 	} else if (urb->status == -ENOENT) {
+ 		/* Avoid suspend failed when usb_kill_urb */
+@@ -2323,6 +2359,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ 	usb_anchor_urb(urb, &data->ctrl_anchor);
+ 	err = usb_submit_urb(urb, GFP_ATOMIC);
+ 	if (err < 0) {
++		kfree(urb->setup_packet);
+ 		/* -EPERM: urb is being killed;
+ 		 * -ENODEV: device got disconnected
+ 		 */
+@@ -2877,6 +2914,7 @@ static int btusb_mtk_setup(struct hci_dev *hdev)
+ 		}
+ 
+ 		hci_set_msft_opcode(hdev, 0xFD30);
++		hci_set_aosp_capable(hdev);
+ 		goto done;
+ 	default:
+ 		bt_dev_err(hdev, "Unsupported hardware variant (%08x)",
+@@ -3857,6 +3895,9 @@ static int btusb_probe(struct usb_interface *intf,
+ 
+ 		if (id->driver_info & BTUSB_INTEL_BROKEN_INITIAL_NCMD)
+ 			btintel_set_flag(hdev, INTEL_BROKEN_INITIAL_NCMD);
++
++		if (id->driver_info & BTUSB_INTEL_BROKEN_SHUTDOWN_LED)
++			btintel_set_flag(hdev, INTEL_BROKEN_SHUTDOWN_LED);
+ 	}
+ 
+ 	if (id->driver_info & BTUSB_MARVELL)
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 605969ed0f965..7470ee24db2f9 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -461,6 +461,7 @@ static struct crng_state primary_crng = {
+  * its value (from 0->1->2).
+  */
+ static int crng_init = 0;
++static bool crng_need_final_init = false;
+ #define crng_ready() (likely(crng_init > 1))
+ static int crng_init_cnt = 0;
+ static unsigned long crng_global_init_time = 0;
+@@ -828,6 +829,36 @@ static void __init crng_initialize_primary(struct crng_state *crng)
+ 	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
+ }
+ 
++static void crng_finalize_init(struct crng_state *crng)
++{
++	if (crng != &primary_crng || crng_init >= 2)
++		return;
++	if (!system_wq) {
++		/* We can't call numa_crng_init until we have workqueues,
++		 * so mark this for processing later. */
++		crng_need_final_init = true;
++		return;
++	}
++
++	invalidate_batched_entropy();
++	numa_crng_init();
++	crng_init = 2;
++	process_random_ready_list();
++	wake_up_interruptible(&crng_init_wait);
++	kill_fasync(&fasync, SIGIO, POLL_IN);
++	pr_notice("crng init done\n");
++	if (unseeded_warning.missed) {
++		pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
++			  unseeded_warning.missed);
++		unseeded_warning.missed = 0;
++	}
++	if (urandom_warning.missed) {
++		pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
++			  urandom_warning.missed);
++		urandom_warning.missed = 0;
++	}
++}
++
+ #ifdef CONFIG_NUMA
+ static void do_numa_crng_init(struct work_struct *work)
+ {
+@@ -843,8 +874,8 @@ static void do_numa_crng_init(struct work_struct *work)
+ 		crng_initialize_secondary(crng);
+ 		pool[i] = crng;
+ 	}
+-	mb();
+-	if (cmpxchg(&crng_node_pool, NULL, pool)) {
++	/* pairs with READ_ONCE() in select_crng() */
++	if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) {
+ 		for_each_node(i)
+ 			kfree(pool[i]);
+ 		kfree(pool);
+@@ -857,8 +888,26 @@ static void numa_crng_init(void)
+ {
+ 	schedule_work(&numa_crng_init_work);
+ }
++
++static struct crng_state *select_crng(void)
++{
++	struct crng_state **pool;
++	int nid = numa_node_id();
++
++	/* pairs with cmpxchg_release() in do_numa_crng_init() */
++	pool = READ_ONCE(crng_node_pool);
++	if (pool && pool[nid])
++		return pool[nid];
++
++	return &primary_crng;
++}
+ #else
+ static void numa_crng_init(void) {}
++
++static struct crng_state *select_crng(void)
++{
++	return &primary_crng;
++}
+ #endif
+ 
+ /*
+@@ -962,38 +1011,23 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
+ 		crng->state[i+4] ^= buf.key[i] ^ rv;
+ 	}
+ 	memzero_explicit(&buf, sizeof(buf));
+-	crng->init_time = jiffies;
++	WRITE_ONCE(crng->init_time, jiffies);
+ 	spin_unlock_irqrestore(&crng->lock, flags);
+-	if (crng == &primary_crng && crng_init < 2) {
+-		invalidate_batched_entropy();
+-		numa_crng_init();
+-		crng_init = 2;
+-		process_random_ready_list();
+-		wake_up_interruptible(&crng_init_wait);
+-		kill_fasync(&fasync, SIGIO, POLL_IN);
+-		pr_notice("crng init done\n");
+-		if (unseeded_warning.missed) {
+-			pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
+-				  unseeded_warning.missed);
+-			unseeded_warning.missed = 0;
+-		}
+-		if (urandom_warning.missed) {
+-			pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
+-				  urandom_warning.missed);
+-			urandom_warning.missed = 0;
+-		}
+-	}
++	crng_finalize_init(crng);
+ }
+ 
+ static void _extract_crng(struct crng_state *crng,
+ 			  __u8 out[CHACHA_BLOCK_SIZE])
+ {
+-	unsigned long v, flags;
+-
+-	if (crng_ready() &&
+-	    (time_after(crng_global_init_time, crng->init_time) ||
+-	     time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL)))
+-		crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);
++	unsigned long v, flags, init_time;
++
++	if (crng_ready()) {
++		init_time = READ_ONCE(crng->init_time);
++		if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
++		    time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
++			crng_reseed(crng, crng == &primary_crng ?
++				    &input_pool : NULL);
++	}
+ 	spin_lock_irqsave(&crng->lock, flags);
+ 	if (arch_get_random_long(&v))
+ 		crng->state[14] ^= v;
+@@ -1005,15 +1039,7 @@ static void _extract_crng(struct crng_state *crng,
+ 
+ static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE])
+ {
+-	struct crng_state *crng = NULL;
+-
+-#ifdef CONFIG_NUMA
+-	if (crng_node_pool)
+-		crng = crng_node_pool[numa_node_id()];
+-	if (crng == NULL)
+-#endif
+-		crng = &primary_crng;
+-	_extract_crng(crng, out);
++	_extract_crng(select_crng(), out);
+ }
+ 
+ /*
+@@ -1042,15 +1068,7 @@ static void _crng_backtrack_protect(struct crng_state *crng,
+ 
+ static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used)
+ {
+-	struct crng_state *crng = NULL;
+-
+-#ifdef CONFIG_NUMA
+-	if (crng_node_pool)
+-		crng = crng_node_pool[numa_node_id()];
+-	if (crng == NULL)
+-#endif
+-		crng = &primary_crng;
+-	_crng_backtrack_protect(crng, tmp, used);
++	_crng_backtrack_protect(select_crng(), tmp, used);
+ }
+ 
+ static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
+@@ -1775,6 +1793,8 @@ static void __init init_std_data(struct entropy_store *r)
+ int __init rand_initialize(void)
+ {
+ 	init_std_data(&input_pool);
++	if (crng_need_final_init)
++		crng_finalize_init(&primary_crng);
+ 	crng_initialize_primary(&primary_crng);
+ 	crng_global_init_time = jiffies;
+ 	if (ratelimit_disable) {
+@@ -1949,7 +1969,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ 		if (crng_init < 2)
+ 			return -ENODATA;
+ 		crng_reseed(&primary_crng, &input_pool);
+-		crng_global_init_time = jiffies - 1;
++		WRITE_ONCE(crng_global_init_time, jiffies - 1);
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+@@ -2283,7 +2303,8 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
+ 	 * We'll be woken up again once below random_write_wakeup_thresh,
+ 	 * or when the calling thread is about to terminate.
+ 	 */
+-	wait_event_interruptible(random_write_wait, kthread_should_stop() ||
++	wait_event_interruptible(random_write_wait,
++			!system_wq || kthread_should_stop() ||
+ 			ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits);
+ 	mix_pool_bytes(poolp, buffer, count);
+ 	credit_entropy_bits(poolp, entropy);
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index ecbb3d1416321..201477ca408a5 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -3062,9 +3062,9 @@ static void snb_wm_latency_quirk(struct drm_i915_private *dev_priv)
+ 	 * The BIOS provided WM memory latency values are often
+ 	 * inadequate for high resolution displays. Adjust them.
+ 	 */
+-	changed = ilk_increase_wm_latency(dev_priv, dev_priv->wm.pri_latency, 12) |
+-		ilk_increase_wm_latency(dev_priv, dev_priv->wm.spr_latency, 12) |
+-		ilk_increase_wm_latency(dev_priv, dev_priv->wm.cur_latency, 12);
++	changed = ilk_increase_wm_latency(dev_priv, dev_priv->wm.pri_latency, 12);
++	changed |= ilk_increase_wm_latency(dev_priv, dev_priv->wm.spr_latency, 12);
++	changed |= ilk_increase_wm_latency(dev_priv, dev_priv->wm.cur_latency, 12);
+ 
+ 	if (!changed)
+ 		return;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 7c007426e0827..058d28a0344b1 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2193,7 +2193,6 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 			      const struct v4l2_file_operations *fops,
+ 			      const struct v4l2_ioctl_ops *ioctl_ops)
+ {
+-	const char *name;
+ 	int ret;
+ 
+ 	/* Initialize the video buffers queue. */
+@@ -2222,20 +2221,16 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ 	default:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+-		name = "Video Capture";
+ 		break;
+ 	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
+-		name = "Video Output";
+ 		break;
+ 	case V4L2_BUF_TYPE_META_CAPTURE:
+ 		vdev->device_caps = V4L2_CAP_META_CAPTURE | V4L2_CAP_STREAMING;
+-		name = "Metadata";
+ 		break;
+ 	}
+ 
+-	snprintf(vdev->name, sizeof(vdev->name), "%s %u", name,
+-		 stream->header.bTerminalLink);
++	strscpy(vdev->name, dev->name, sizeof(vdev->name));
+ 
+ 	/*
+ 	 * Set the driver data before calling video_register_device, otherwise
+diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
+index 3f1d976eb67cb..f2ea6540a01e1 100644
+--- a/drivers/mfd/intel-lpss-acpi.c
++++ b/drivers/mfd/intel-lpss-acpi.c
+@@ -136,6 +136,7 @@ static int intel_lpss_acpi_probe(struct platform_device *pdev)
+ {
+ 	struct intel_lpss_platform_info *info;
+ 	const struct acpi_device_id *id;
++	int ret;
+ 
+ 	id = acpi_match_device(intel_lpss_acpi_ids, &pdev->dev);
+ 	if (!id)
+@@ -149,10 +150,14 @@ static int intel_lpss_acpi_probe(struct platform_device *pdev)
+ 	info->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	info->irq = platform_get_irq(pdev, 0);
+ 
++	ret = intel_lpss_probe(&pdev->dev, info);
++	if (ret)
++		return ret;
++
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	return intel_lpss_probe(&pdev->dev, info);
++	return 0;
+ }
+ 
+ static int intel_lpss_acpi_remove(struct platform_device *pdev)
+diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
+index a872b4485eacf..f70464ce8e3d7 100644
+--- a/drivers/mfd/intel-lpss-pci.c
++++ b/drivers/mfd/intel-lpss-pci.c
+@@ -254,7 +254,7 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x34eb), (kernel_ulong_t)&bxt_i2c_info },
+ 	{ PCI_VDEVICE(INTEL, 0x34fb), (kernel_ulong_t)&spt_info },
+ 	/* ICL-N */
+-	{ PCI_VDEVICE(INTEL, 0x38a8), (kernel_ulong_t)&bxt_uart_info },
++	{ PCI_VDEVICE(INTEL, 0x38a8), (kernel_ulong_t)&spt_uart_info },
+ 	/* TGL-H */
+ 	{ PCI_VDEVICE(INTEL, 0x43a7), (kernel_ulong_t)&bxt_uart_info },
+ 	{ PCI_VDEVICE(INTEL, 0x43a8), (kernel_ulong_t)&bxt_uart_info },
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 6f9877546830b..ed53276f6ad90 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -1866,6 +1866,7 @@ static const struct pci_device_id pci_ids[] = {
+ 	SDHCI_PCI_DEVICE(INTEL, JSL_SD,    intel_byt_sd),
+ 	SDHCI_PCI_DEVICE(INTEL, LKF_EMMC,  intel_glk_emmc),
+ 	SDHCI_PCI_DEVICE(INTEL, LKF_SD,    intel_byt_sd),
++	SDHCI_PCI_DEVICE(INTEL, ADL_EMMC,  intel_glk_emmc),
+ 	SDHCI_PCI_DEVICE(O2, 8120,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8220,     o2),
+ 	SDHCI_PCI_DEVICE(O2, 8221,     o2),
+diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
+index 5e3193278ff99..3661a224fb04a 100644
+--- a/drivers/mmc/host/sdhci-pci.h
++++ b/drivers/mmc/host/sdhci-pci.h
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_JSL_SD	0x4df8
+ #define PCI_DEVICE_ID_INTEL_LKF_EMMC	0x98c4
+ #define PCI_DEVICE_ID_INTEL_LKF_SD	0x98f8
++#define PCI_DEVICE_ID_INTEL_ADL_EMMC	0x54c4
+ 
+ #define PCI_DEVICE_ID_SYSKONNECT_8000	0x8000
+ #define PCI_DEVICE_ID_VIA_95D0		0x95d0
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index 1b400de00f517..4d43aca2ff568 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -321,7 +321,7 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 
+ 	/* device reports out of range channel id */
+ 	if (hf->channel >= GS_MAX_INTF)
+-		goto resubmit_urb;
++		goto device_detach;
+ 
+ 	dev = usbcan->canch[hf->channel];
+ 
+@@ -406,6 +406,7 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 
+ 	/* USB failure take down all interfaces */
+ 	if (rc == -ENODEV) {
++ device_detach:
+ 		for (rc = 0; rc < GS_MAX_INTF; rc++) {
+ 			if (usbcan->canch[rc])
+ 				netif_device_detach(usbcan->canch[rc]->netdev);
+@@ -507,6 +508,8 @@ static netdev_tx_t gs_can_start_xmit(struct sk_buff *skb,
+ 
+ 	hf->echo_id = idx;
+ 	hf->channel = dev->channel;
++	hf->flags = 0;
++	hf->reserved = 0;
+ 
+ 	cf = (struct can_frame *)skb->data;
+ 
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index 2acdb8ad6c713..ecbc09cbe2590 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -342,7 +342,6 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		 */
+ 		use_napi = rcu_access_pointer(rq->napi) &&
+ 			   veth_skb_is_eligible_for_gro(dev, rcv, skb);
+-		skb_record_rx_queue(skb, rxq);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 5ae2ef4680d6c..04238c29419b5 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -2069,7 +2069,7 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
+ 	void *ptr;
+ 	int i, ret, len;
+ 	u32 *tmp_ptr;
+-	u8 extraie_len_with_pad = 0;
++	u16 extraie_len_with_pad = 0;
+ 	struct hint_short_ssid *s_ssid = NULL;
+ 	struct hint_bssid *hint_bssid = NULL;
+ 
+@@ -2088,7 +2088,7 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
+ 		len += sizeof(*bssid) * params->num_bssid;
+ 
+ 	len += TLV_HDR_SIZE;
+-	if (params->extraie.len)
++	if (params->extraie.len && params->extraie.len <= 0xFFFF)
+ 		extraie_len_with_pad =
+ 			roundup(params->extraie.len, sizeof(u32));
+ 	len += extraie_len_with_pad;
+@@ -2195,7 +2195,7 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
+ 		      FIELD_PREP(WMI_TLV_LEN, len);
+ 	ptr += TLV_HDR_SIZE;
+ 
+-	if (params->extraie.len)
++	if (extraie_len_with_pad)
+ 		memcpy(ptr, params->extraie.ptr,
+ 		       params->extraie.len);
+ 
+diff --git a/drivers/platform/x86/intel/hid.c b/drivers/platform/x86/intel/hid.c
+index 13f8cf70b9aee..41a2a026f1568 100644
+--- a/drivers/platform/x86/intel/hid.c
++++ b/drivers/platform/x86/intel/hid.c
+@@ -106,6 +106,13 @@ static const struct dmi_system_id button_array_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
+ 		},
+ 	},
++	{
++		.ident = "Microsoft Surface Go 3",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/staging/greybus/audio_topology.c b/drivers/staging/greybus/audio_topology.c
+index 1e613d42d8237..7f7d558b76d04 100644
+--- a/drivers/staging/greybus/audio_topology.c
++++ b/drivers/staging/greybus/audio_topology.c
+@@ -974,6 +974,44 @@ static int gbaudio_widget_event(struct snd_soc_dapm_widget *w,
+ 	return ret;
+ }
+ 
++static const struct snd_soc_dapm_widget gbaudio_widgets[] = {
++	[snd_soc_dapm_spk]	= SND_SOC_DAPM_SPK(NULL, gbcodec_event_spk),
++	[snd_soc_dapm_hp]	= SND_SOC_DAPM_HP(NULL, gbcodec_event_hp),
++	[snd_soc_dapm_mic]	= SND_SOC_DAPM_MIC(NULL, gbcodec_event_int_mic),
++	[snd_soc_dapm_output]	= SND_SOC_DAPM_OUTPUT(NULL),
++	[snd_soc_dapm_input]	= SND_SOC_DAPM_INPUT(NULL),
++	[snd_soc_dapm_switch]	= SND_SOC_DAPM_SWITCH_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_pga]	= SND_SOC_DAPM_PGA_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_mixer]	= SND_SOC_DAPM_MIXER_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_mux]	= SND_SOC_DAPM_MUX_E(NULL, SND_SOC_NOPM,
++					0, 0, NULL,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_aif_in]	= SND_SOC_DAPM_AIF_IN_E(NULL, NULL, 0,
++					SND_SOC_NOPM, 0, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++	[snd_soc_dapm_aif_out]	= SND_SOC_DAPM_AIF_OUT_E(NULL, NULL, 0,
++					SND_SOC_NOPM, 0, 0,
++					gbaudio_widget_event,
++					SND_SOC_DAPM_PRE_PMU |
++					SND_SOC_DAPM_POST_PMD),
++};
++
+ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module,
+ 				      struct snd_soc_dapm_widget *dw,
+ 				      struct gb_audio_widget *w, int *w_size)
+@@ -1052,77 +1090,37 @@ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module,
+ 
+ 	switch (w->type) {
+ 	case snd_soc_dapm_spk:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_SPK(w->name, gbcodec_event_spk);
++		*dw = gbaudio_widgets[w->type];
+ 		module->op_devices |= GBAUDIO_DEVICE_OUT_SPEAKER;
+ 		break;
+ 	case snd_soc_dapm_hp:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_HP(w->name, gbcodec_event_hp);
++		*dw = gbaudio_widgets[w->type];
+ 		module->op_devices |= (GBAUDIO_DEVICE_OUT_WIRED_HEADSET
+ 					| GBAUDIO_DEVICE_OUT_WIRED_HEADPHONE);
+ 		module->ip_devices |= GBAUDIO_DEVICE_IN_WIRED_HEADSET;
+ 		break;
+ 	case snd_soc_dapm_mic:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_MIC(w->name, gbcodec_event_int_mic);
++		*dw = gbaudio_widgets[w->type];
+ 		module->ip_devices |= GBAUDIO_DEVICE_IN_BUILTIN_MIC;
+ 		break;
+ 	case snd_soc_dapm_output:
+-		*dw = (struct snd_soc_dapm_widget)SND_SOC_DAPM_OUTPUT(w->name);
+-		break;
+ 	case snd_soc_dapm_input:
+-		*dw = (struct snd_soc_dapm_widget)SND_SOC_DAPM_INPUT(w->name);
+-		break;
+ 	case snd_soc_dapm_switch:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_SWITCH_E(w->name, SND_SOC_NOPM, 0, 0,
+-					      widget_kctls,
+-					      gbaudio_widget_event,
+-					      SND_SOC_DAPM_PRE_PMU |
+-					      SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_pga:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_PGA_E(w->name, SND_SOC_NOPM, 0, 0, NULL, 0,
+-					   gbaudio_widget_event,
+-					   SND_SOC_DAPM_PRE_PMU |
+-					   SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_mixer:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_MIXER_E(w->name, SND_SOC_NOPM, 0, 0, NULL,
+-					     0, gbaudio_widget_event,
+-					     SND_SOC_DAPM_PRE_PMU |
+-					     SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_mux:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_MUX_E(w->name, SND_SOC_NOPM, 0, 0,
+-					   widget_kctls, gbaudio_widget_event,
+-					   SND_SOC_DAPM_PRE_PMU |
+-					   SND_SOC_DAPM_POST_PMD);
++		*dw = gbaudio_widgets[w->type];
+ 		break;
+ 	case snd_soc_dapm_aif_in:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_AIF_IN_E(w->name, w->sname, 0,
+-					      SND_SOC_NOPM,
+-					      0, 0, gbaudio_widget_event,
+-					      SND_SOC_DAPM_PRE_PMU |
+-					      SND_SOC_DAPM_POST_PMD);
+-		break;
+ 	case snd_soc_dapm_aif_out:
+-		*dw = (struct snd_soc_dapm_widget)
+-			SND_SOC_DAPM_AIF_OUT_E(w->name, w->sname, 0,
+-					       SND_SOC_NOPM,
+-					       0, 0, gbaudio_widget_event,
+-					       SND_SOC_DAPM_PRE_PMU |
+-					       SND_SOC_DAPM_POST_PMD);
++		*dw = gbaudio_widgets[w->type];
++		dw->sname = w->sname;
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+ 		goto error;
+ 	}
++	dw->name = w->name;
+ 
+ 	dev_dbg(module->dev, "%s: widget of type %d created\n", dw->name,
+ 		dw->id);
+diff --git a/drivers/staging/r8188eu/core/rtw_led.c b/drivers/staging/r8188eu/core/rtw_led.c
+index 0e3453639a8b0..a6dc268397438 100644
+--- a/drivers/staging/r8188eu/core/rtw_led.c
++++ b/drivers/staging/r8188eu/core/rtw_led.c
+@@ -54,6 +54,7 @@ void DeInitLed871x(struct LED_871x *pLed)
+ 	_cancel_workitem_sync(&pLed->BlinkWorkItem);
+ 	_cancel_timer_ex(&pLed->BlinkTimer);
+ 	ResetLedStatus(pLed);
++	SwLedOff(pLed->padapter, pLed);
+ }
+ 
+ static void SwLedBlink1(struct LED_871x *pLed)
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 4d326ee12c36a..8948b3bf7622b 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -753,6 +753,7 @@ void usb_hcd_poll_rh_status(struct usb_hcd *hcd)
+ {
+ 	struct urb	*urb;
+ 	int		length;
++	int		status;
+ 	unsigned long	flags;
+ 	char		buffer[6];	/* Any root hubs with > 31 ports? */
+ 
+@@ -770,11 +771,17 @@ void usb_hcd_poll_rh_status(struct usb_hcd *hcd)
+ 		if (urb) {
+ 			clear_bit(HCD_FLAG_POLL_PENDING, &hcd->flags);
+ 			hcd->status_urb = NULL;
++			if (urb->transfer_buffer_length >= length) {
++				status = 0;
++			} else {
++				status = -EOVERFLOW;
++				length = urb->transfer_buffer_length;
++			}
+ 			urb->actual_length = length;
+ 			memcpy(urb->transfer_buffer, buffer, length);
+ 
+ 			usb_hcd_unlink_urb_from_ep(hcd, urb);
+-			usb_hcd_giveback_urb(hcd, urb, 0);
++			usb_hcd_giveback_urb(hcd, urb, status);
+ 		} else {
+ 			length = 0;
+ 			set_bit(HCD_FLAG_POLL_PENDING, &hcd->flags);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 00070a8a65079..3bc4a86c3d0a5 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1225,7 +1225,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 			 */
+ 			if (portchange || (hub_is_superspeed(hub->hdev) &&
+ 						port_resumed))
+-				set_bit(port1, hub->change_bits);
++				set_bit(port1, hub->event_bits);
+ 
+ 		} else if (udev->persist_enabled) {
+ #ifdef CONFIG_PM
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 63065bc01b766..383342efcdc46 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -246,6 +246,15 @@ enum {
+ 	 * HCI after resume.
+ 	 */
+ 	HCI_QUIRK_NO_SUSPEND_NOTIFIER,
++
++	/*
++	 * When this quirk is set, LE tx power is not queried on startup
++	 * and the min/max tx power values default to HCI_TX_POWER_INVALID.
++	 *
++	 * This quirk can be set before hci_register_dev is called or
++	 * during the hdev->setup vendor callback.
++	 */
++	HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER,
+ };
+ 
+ /* HCI device flags */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b532f1058d35f..4e51bf3f9603a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7229,16 +7229,16 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		fallthrough;
+ 	case PTR_TO_PACKET_END:
+ 	case PTR_TO_SOCKET:
+-	case PTR_TO_SOCKET_OR_NULL:
+ 	case PTR_TO_SOCK_COMMON:
+-	case PTR_TO_SOCK_COMMON_OR_NULL:
+ 	case PTR_TO_TCP_SOCK:
+-	case PTR_TO_TCP_SOCK_OR_NULL:
+ 	case PTR_TO_XDP_SOCK:
++reject:
+ 		verbose(env, "R%d pointer arithmetic on %s prohibited\n",
+ 			dst, reg_type_str[ptr_reg->type]);
+ 		return -EACCES;
+ 	default:
++		if (reg_type_may_be_null(ptr_reg->type))
++			goto reject;
+ 		break;
+ 	}
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 613917bbc4e73..c55004cbf48ee 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -868,8 +868,17 @@ void wq_worker_running(struct task_struct *task)
+ 
+ 	if (!worker->sleeping)
+ 		return;
++
++	/*
++	 * If preempted by unbind_workers() between the WORKER_NOT_RUNNING check
++	 * and the nr_running increment below, we may ruin the nr_running reset
++	 * and leave with an unexpected pool->nr_running == 1 on the newly unbound
++	 * pool. Protect against such race.
++	 */
++	preempt_disable();
+ 	if (!(worker->flags & WORKER_NOT_RUNNING))
+ 		atomic_inc(&worker->pool->nr_running);
++	preempt_enable();
+ 	worker->sleeping = 0;
+ }
+ 
+@@ -903,6 +912,16 @@ void wq_worker_sleeping(struct task_struct *task)
+ 	worker->sleeping = 1;
+ 	raw_spin_lock_irq(&pool->lock);
+ 
++	/*
++	 * Recheck in case unbind_workers() preempted us. We don't
++	 * want to decrement nr_running after the worker is unbound
++	 * and nr_running has been reset.
++	 */
++	if (worker->flags & WORKER_NOT_RUNNING) {
++		raw_spin_unlock_irq(&pool->lock);
++		return;
++	}
++
+ 	/*
+ 	 * The counterpart of the following dec_and_test, implied mb,
+ 	 * worklist not empty test sequence is in insert_work().
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 8d33aa64846b1..2cf77d76c50be 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -619,7 +619,8 @@ static int hci_init3_req(struct hci_request *req, unsigned long opt)
+ 			hci_req_add(req, HCI_OP_LE_READ_ADV_TX_POWER, 0, NULL);
+ 		}
+ 
+-		if (hdev->commands[38] & 0x80) {
++		if ((hdev->commands[38] & 0x80) &&
++		    !test_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks)) {
+ 			/* Read LE Min/Max Tx Power*/
+ 			hci_req_add(req, HCI_OP_LE_READ_TRANSMIT_POWER,
+ 				    0, NULL);
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index df6968b28bf41..02cbcb2ecf0db 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -119,8 +119,8 @@ enum {
+ };
+ 
+ struct tpcon {
+-	int idx;
+-	int len;
++	unsigned int idx;
++	unsigned int len;
+ 	u32 state;
+ 	u8 bs;
+ 	u8 sn;



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-01-20 13:45 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-01-20 13:45 UTC (permalink / raw
  To: gentoo-commits

commit:     bac505d232815166e38c0d9ca27763fb12fc664d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 20 13:45:08 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 20 13:45:08 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bac505d2

Linux patch 5.16.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1001_linux-5.16.2.patch | 1219 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1223 insertions(+)

diff --git a/0000_README b/0000_README
index 4a1aa215..41c5d786 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-5.16.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.1
 
+Patch:  1001_linux-5.16.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.16.2.patch b/1001_linux-5.16.2.patch
new file mode 100644
index 00000000..33650f08
--- /dev/null
+++ b/1001_linux-5.16.2.patch
@@ -0,0 +1,1219 @@
+diff --git a/Makefile b/Makefile
+index fdbd06daf2af1..dd98debc26048 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
+index 3b69a76d341e7..1626dfc6f6ce6 100644
+--- a/arch/arm/kernel/perf_callchain.c
++++ b/arch/arm/kernel/perf_callchain.c
+@@ -62,9 +62,10 @@ user_backtrace(struct frame_tail __user *tail,
+ void
+ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct frame_tail __user *tail;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -98,9 +99,10 @@ callchain_trace(struct stackframe *fr,
+ void
+ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe fr;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -111,18 +113,21 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return instruction_pointer(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
+index 4a72c27273097..86d9f20131723 100644
+--- a/arch/arm64/kernel/perf_callchain.c
++++ b/arch/arm64/kernel/perf_callchain.c
+@@ -102,7 +102,9 @@ compat_user_backtrace(struct compat_frame_tail __user *tail,
+ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 			 struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -147,9 +149,10 @@ static bool callchain_trace(void *data, unsigned long pc)
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 			   struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe frame;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -160,18 +163,21 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return instruction_pointer(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
+index ab55e98ee8f62..35318a635a5fa 100644
+--- a/arch/csky/kernel/perf_callchain.c
++++ b/arch/csky/kernel/perf_callchain.c
+@@ -86,10 +86,11 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 			 struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	unsigned long fp = 0;
+ 
+ 	/* C-SKY does not support virtualization. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
++	if (guest_cbs && guest_cbs->is_in_guest())
+ 		return;
+ 
+ 	fp = regs->regs[4];
+@@ -110,10 +111,11 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 			   struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe fr;
+ 
+ 	/* C-SKY does not support virtualization. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		pr_warn("C-SKY does not support perf in guest mode!");
+ 		return;
+ 	}
+diff --git a/arch/nds32/kernel/perf_event_cpu.c b/arch/nds32/kernel/perf_event_cpu.c
+index 0ce6f9f307e6a..f387919607813 100644
+--- a/arch/nds32/kernel/perf_event_cpu.c
++++ b/arch/nds32/kernel/perf_event_cpu.c
+@@ -1363,6 +1363,7 @@ void
+ perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 		    struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	unsigned long fp = 0;
+ 	unsigned long gp = 0;
+ 	unsigned long lp = 0;
+@@ -1371,7 +1372,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 
+ 	leaf_fp = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -1479,9 +1480,10 @@ void
+ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 		      struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stackframe fr;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -1493,20 +1495,23 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
+ 	/* However, NDS32 does not support virtualization */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return instruction_pointer(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+ 	/* However, NDS32 does not support virtualization */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
+index 0bb1854dce833..8ecfc4c128bc5 100644
+--- a/arch/riscv/kernel/perf_callchain.c
++++ b/arch/riscv/kernel/perf_callchain.c
+@@ -56,10 +56,11 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 			 struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	unsigned long fp = 0;
+ 
+ 	/* RISC-V does not support perf in guest mode. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
++	if (guest_cbs && guest_cbs->is_in_guest())
+ 		return;
+ 
+ 	fp = regs->s0;
+@@ -78,8 +79,10 @@ static bool fill_callchain(void *entry, unsigned long pc)
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ 			   struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
+ 	/* RISC-V does not support perf in guest mode. */
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		pr_warn("RISC-V does not support perf in guest mode!");
+ 		return;
+ 	}
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index c3bd993fdd0cf..0576d5c991384 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -2115,6 +2115,13 @@ int kvm_s390_is_stop_irq_pending(struct kvm_vcpu *vcpu)
+ 	return test_bit(IRQ_PEND_SIGP_STOP, &li->pending_irqs);
+ }
+ 
++int kvm_s390_is_restart_irq_pending(struct kvm_vcpu *vcpu)
++{
++	struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int;
++
++	return test_bit(IRQ_PEND_RESTART, &li->pending_irqs);
++}
++
+ void kvm_s390_clear_stop_irq(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int;
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 14a18ba5ff2c8..ef299aad40090 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4645,10 +4645,15 @@ int kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu)
+ 		}
+ 	}
+ 
+-	/* SIGP STOP and SIGP STOP AND STORE STATUS has been fully processed */
++	/*
++	 * Set the VCPU to STOPPED and THEN clear the interrupt flag,
++	 * now that the SIGP STOP and SIGP STOP AND STORE STATUS orders
++	 * have been fully processed. This will ensure that the VCPU
++	 * is kept BUSY if another VCPU is inquiring with SIGP SENSE.
++	 */
++	kvm_s390_set_cpuflags(vcpu, CPUSTAT_STOPPED);
+ 	kvm_s390_clear_stop_irq(vcpu);
+ 
+-	kvm_s390_set_cpuflags(vcpu, CPUSTAT_STOPPED);
+ 	__disable_ibs_on_vcpu(vcpu);
+ 
+ 	for (i = 0; i < online_vcpus; i++) {
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index c07a050d757d3..1876ab0c293fe 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -427,6 +427,7 @@ void kvm_s390_destroy_adapters(struct kvm *kvm);
+ int kvm_s390_ext_call_pending(struct kvm_vcpu *vcpu);
+ extern struct kvm_device_ops kvm_flic_ops;
+ int kvm_s390_is_stop_irq_pending(struct kvm_vcpu *vcpu);
++int kvm_s390_is_restart_irq_pending(struct kvm_vcpu *vcpu);
+ void kvm_s390_clear_stop_irq(struct kvm_vcpu *vcpu);
+ int kvm_s390_set_irq_state(struct kvm_vcpu *vcpu,
+ 			   void __user *buf, int len);
+diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c
+index cf4de80bd5410..8aaee2892ec35 100644
+--- a/arch/s390/kvm/sigp.c
++++ b/arch/s390/kvm/sigp.c
+@@ -276,6 +276,34 @@ static int handle_sigp_dst(struct kvm_vcpu *vcpu, u8 order_code,
+ 	if (!dst_vcpu)
+ 		return SIGP_CC_NOT_OPERATIONAL;
+ 
++	/*
++	 * SIGP RESTART, SIGP STOP, and SIGP STOP AND STORE STATUS orders
++	 * are processed asynchronously. Until the affected VCPU finishes
++	 * its work and calls back into KVM to clear the (RESTART or STOP)
++	 * interrupt, we need to return any new non-reset orders "busy".
++	 *
++	 * This is important because a single VCPU could issue:
++	 *  1) SIGP STOP $DESTINATION
++	 *  2) SIGP SENSE $DESTINATION
++	 *
++	 * If the SIGP SENSE would not be rejected as "busy", it could
++	 * return an incorrect answer as to whether the VCPU is STOPPED
++	 * or OPERATING.
++	 */
++	if (order_code != SIGP_INITIAL_CPU_RESET &&
++	    order_code != SIGP_CPU_RESET) {
++		/*
++		 * Lockless check. Both SIGP STOP and SIGP (RE)START
++		 * properly synchronize everything while processing
++		 * their orders, while the guest cannot observe a
++		 * difference when issuing other orders from two
++		 * different VCPUs.
++		 */
++		if (kvm_s390_is_stop_irq_pending(dst_vcpu) ||
++		    kvm_s390_is_restart_irq_pending(dst_vcpu))
++			return SIGP_CC_BUSY;
++	}
++
+ 	switch (order_code) {
+ 	case SIGP_SENSE:
+ 		vcpu->stat.instruction_sigp_sense++;
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 38b2c779146f1..32cec290d3ad6 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2768,10 +2768,11 @@ static bool perf_hw_regs(struct pt_regs *regs)
+ void
+ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct unwind_state state;
+ 	unsigned long addr;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* TODO: We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -2871,10 +2872,11 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+ void
+ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	struct stack_frame frame;
+ 	const struct stack_frame __user *fp;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
++	if (guest_cbs && guest_cbs->is_in_guest()) {
+ 		/* TODO: We don't support guest os callchain now */
+ 		return;
+ 	}
+@@ -2951,18 +2953,21 @@ static unsigned long code_segment_base(struct pt_regs *regs)
+ 
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
+-		return perf_guest_cbs->get_guest_ip();
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
++
++	if (guest_cbs && guest_cbs->is_in_guest())
++		return guest_cbs->get_guest_ip();
+ 
+ 	return regs->ip + code_segment_base(regs);
+ }
+ 
+ unsigned long perf_misc_flags(struct pt_regs *regs)
+ {
++	struct perf_guest_info_callbacks *guest_cbs = perf_get_guest_cbs();
+ 	int misc = 0;
+ 
+-	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
+-		if (perf_guest_cbs->is_user_mode())
++	if (guest_cbs && guest_cbs->is_in_guest()) {
++		if (guest_cbs->is_user_mode())
+ 			misc |= PERF_RECORD_MISC_GUEST_USER;
+ 		else
+ 			misc |= PERF_RECORD_MISC_GUEST_KERNEL;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index ec6444f2c9dcb..1e33c75ffa260 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2835,6 +2835,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
+ {
+ 	struct perf_sample_data data;
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	struct perf_guest_info_callbacks *guest_cbs;
+ 	int bit;
+ 	int handled = 0;
+ 	u64 intel_ctrl = hybrid(cpuc->pmu, intel_ctrl);
+@@ -2901,9 +2902,11 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
+ 	 */
+ 	if (__test_and_clear_bit(GLOBAL_STATUS_TRACE_TOPAPMI_BIT, (unsigned long *)&status)) {
+ 		handled++;
+-		if (unlikely(perf_guest_cbs && perf_guest_cbs->is_in_guest() &&
+-			perf_guest_cbs->handle_intel_pt_intr))
+-			perf_guest_cbs->handle_intel_pt_intr();
++
++		guest_cbs = perf_get_guest_cbs();
++		if (unlikely(guest_cbs && guest_cbs->is_in_guest() &&
++			     guest_cbs->handle_intel_pt_intr))
++			guest_cbs->handle_intel_pt_intr();
+ 		else
+ 			intel_pt_interrupt();
+ 	}
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 555f4de47ef29..59fc339ba5282 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1519,6 +1519,7 @@ struct kvm_x86_init_ops {
+ 	int (*disabled_by_bios)(void);
+ 	int (*check_processor_compatibility)(void);
+ 	int (*hardware_setup)(void);
++	bool (*intel_pt_intr_in_guest)(void);
+ 
+ 	struct kvm_x86_ops *runtime_ops;
+ };
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index f206fc35deff6..7c009867d6f23 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -676,31 +676,25 @@ static inline bool pv_eoi_enabled(struct kvm_vcpu *vcpu)
+ static bool pv_eoi_get_pending(struct kvm_vcpu *vcpu)
+ {
+ 	u8 val;
+-	if (pv_eoi_get_user(vcpu, &val) < 0) {
+-		printk(KERN_WARNING "Can't read EOI MSR value: 0x%llx\n",
+-			   (unsigned long long)vcpu->arch.pv_eoi.msr_val);
++	if (pv_eoi_get_user(vcpu, &val) < 0)
+ 		return false;
+-	}
++
+ 	return val & KVM_PV_EOI_ENABLED;
+ }
+ 
+ static void pv_eoi_set_pending(struct kvm_vcpu *vcpu)
+ {
+-	if (pv_eoi_put_user(vcpu, KVM_PV_EOI_ENABLED) < 0) {
+-		printk(KERN_WARNING "Can't set EOI MSR value: 0x%llx\n",
+-			   (unsigned long long)vcpu->arch.pv_eoi.msr_val);
++	if (pv_eoi_put_user(vcpu, KVM_PV_EOI_ENABLED) < 0)
+ 		return;
+-	}
++
+ 	__set_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention);
+ }
+ 
+ static void pv_eoi_clr_pending(struct kvm_vcpu *vcpu)
+ {
+-	if (pv_eoi_put_user(vcpu, KVM_PV_EOI_DISABLED) < 0) {
+-		printk(KERN_WARNING "Can't clear EOI MSR value: 0x%llx\n",
+-			   (unsigned long long)vcpu->arch.pv_eoi.msr_val);
++	if (pv_eoi_put_user(vcpu, KVM_PV_EOI_DISABLED) < 0)
+ 		return;
+-	}
++
+ 	__clear_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention);
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 0dbf94eb954fd..7f4e6f625abcf 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7905,6 +7905,7 @@ static struct kvm_x86_init_ops vmx_init_ops __initdata = {
+ 	.disabled_by_bios = vmx_disabled_by_bios,
+ 	.check_processor_compatibility = vmx_check_processor_compat,
+ 	.hardware_setup = hardware_setup,
++	.intel_pt_intr_in_guest = vmx_pt_mode_is_host_guest,
+ 
+ 	.runtime_ops = &vmx_x86_ops,
+ };
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e50e97ac44084..0b5c61bb24a17 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8560,7 +8560,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
+ 	.is_in_guest		= kvm_is_in_guest,
+ 	.is_user_mode		= kvm_is_user_mode,
+ 	.get_guest_ip		= kvm_get_guest_ip,
+-	.handle_intel_pt_intr	= kvm_handle_intel_pt_intr,
++	.handle_intel_pt_intr	= NULL,
+ };
+ 
+ #ifdef CONFIG_X86_64
+@@ -8676,8 +8676,6 @@ int kvm_arch_init(void *opaque)
+ 
+ 	kvm_timer_init();
+ 
+-	perf_register_guest_info_callbacks(&kvm_guest_cbs);
+-
+ 	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
+ 		host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
+ 		supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;
+@@ -8709,7 +8707,6 @@ void kvm_arch_exit(void)
+ 		clear_hv_tscchange_cb();
+ #endif
+ 	kvm_lapic_exit();
+-	perf_unregister_guest_info_callbacks(&kvm_guest_cbs);
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
+ 		cpufreq_unregister_notifier(&kvmclock_cpufreq_notifier_block,
+@@ -11269,6 +11266,10 @@ int kvm_arch_hardware_setup(void *opaque)
+ 	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
+ 	kvm_ops_static_call_update();
+ 
++	if (ops->intel_pt_intr_in_guest && ops->intel_pt_intr_in_guest())
++		kvm_guest_cbs.handle_intel_pt_intr = kvm_handle_intel_pt_intr;
++	perf_register_guest_info_callbacks(&kvm_guest_cbs);
++
+ 	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
+ 		supported_xss = 0;
+ 
+@@ -11296,6 +11297,9 @@ int kvm_arch_hardware_setup(void *opaque)
+ 
+ void kvm_arch_hardware_unsetup(void)
+ {
++	perf_unregister_guest_info_callbacks(&kvm_guest_cbs);
++	kvm_guest_cbs.handle_intel_pt_intr = NULL;
++
+ 	static_call(kvm_x86_hardware_unsetup)();
+ }
+ 
+diff --git a/drivers/base/devtmpfs.c b/drivers/base/devtmpfs.c
+index 8be352ab4ddbf..fa13ad49d2116 100644
+--- a/drivers/base/devtmpfs.c
++++ b/drivers/base/devtmpfs.c
+@@ -59,8 +59,15 @@ static struct dentry *public_dev_mount(struct file_system_type *fs_type, int fla
+ 		      const char *dev_name, void *data)
+ {
+ 	struct super_block *s = mnt->mnt_sb;
++	int err;
++
+ 	atomic_inc(&s->s_active);
+ 	down_write(&s->s_umount);
++	err = reconfigure_single(s, flags, data);
++	if (err < 0) {
++		deactivate_locked_super(s);
++		return ERR_PTR(err);
++	}
+ 	return dget(s->s_root);
+ }
+ 
+diff --git a/drivers/firmware/qemu_fw_cfg.c b/drivers/firmware/qemu_fw_cfg.c
+index 172c751a4f6c2..f08e056ed0ae4 100644
+--- a/drivers/firmware/qemu_fw_cfg.c
++++ b/drivers/firmware/qemu_fw_cfg.c
+@@ -388,9 +388,7 @@ static void fw_cfg_sysfs_cache_cleanup(void)
+ 	struct fw_cfg_sysfs_entry *entry, *next;
+ 
+ 	list_for_each_entry_safe(entry, next, &fw_cfg_entry_cache, list) {
+-		/* will end up invoking fw_cfg_sysfs_cache_delist()
+-		 * via each object's release() method (i.e. destructor)
+-		 */
++		fw_cfg_sysfs_cache_delist(entry);
+ 		kobject_put(&entry->kobj);
+ 	}
+ }
+@@ -448,7 +446,6 @@ static void fw_cfg_sysfs_release_entry(struct kobject *kobj)
+ {
+ 	struct fw_cfg_sysfs_entry *entry = to_entry(kobj);
+ 
+-	fw_cfg_sysfs_cache_delist(entry);
+ 	kfree(entry);
+ }
+ 
+@@ -601,20 +598,18 @@ static int fw_cfg_register_file(const struct fw_cfg_file *f)
+ 	/* set file entry information */
+ 	entry->size = be32_to_cpu(f->size);
+ 	entry->select = be16_to_cpu(f->select);
+-	memcpy(entry->name, f->name, FW_CFG_MAX_FILE_PATH);
++	strscpy(entry->name, f->name, FW_CFG_MAX_FILE_PATH);
+ 
+ 	/* register entry under "/sys/firmware/qemu_fw_cfg/by_key/" */
+ 	err = kobject_init_and_add(&entry->kobj, &fw_cfg_sysfs_entry_ktype,
+ 				   fw_cfg_sel_ko, "%d", entry->select);
+-	if (err) {
+-		kobject_put(&entry->kobj);
+-		return err;
+-	}
++	if (err)
++		goto err_put_entry;
+ 
+ 	/* add raw binary content access */
+ 	err = sysfs_create_bin_file(&entry->kobj, &fw_cfg_sysfs_attr_raw);
+ 	if (err)
+-		goto err_add_raw;
++		goto err_del_entry;
+ 
+ 	/* try adding "/sys/firmware/qemu_fw_cfg/by_name/" symlink */
+ 	fw_cfg_build_symlink(fw_cfg_fname_kset, &entry->kobj, entry->name);
+@@ -623,9 +618,10 @@ static int fw_cfg_register_file(const struct fw_cfg_file *f)
+ 	fw_cfg_sysfs_cache_enlist(entry);
+ 	return 0;
+ 
+-err_add_raw:
++err_del_entry:
+ 	kobject_del(&entry->kobj);
+-	kfree(entry);
++err_put_entry:
++	kobject_put(&entry->kobj);
+ 	return err;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index e727f1dd2a9a7..05f7ffd6a28da 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -6065,6 +6065,7 @@ static void update_dsc_caps(struct amdgpu_dm_connector *aconnector,
+ 							struct dsc_dec_dpcd_caps *dsc_caps)
+ {
+ 	stream->timing.flags.DSC = 0;
++	dsc_caps->is_dsc_supported = false;
+ 
+ 	if (aconnector->dc_link && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT) {
+ 		dc_dsc_parse_dsc_dpcd(aconnector->dc_link->ctx->dc,
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 9f37eaf28ce7e..1b4cc934109e8 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -1963,6 +1963,10 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
+ 		if (ep == NULL)
+ 			return -EIO;
+ 
++		/* Reject broken descriptors. */
++		if (usb_endpoint_maxp(&ep->desc) == 0)
++			return -EIO;
++
+ 		ret = uvc_init_video_bulk(stream, ep, gfp_flags);
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
+index 6312fddd9c00a..eaba661133280 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
+@@ -1000,6 +1000,7 @@ int rtl92cu_hw_init(struct ieee80211_hw *hw)
+ 	_initpabias(hw);
+ 	rtl92c_dm_init(hw);
+ exit:
++	local_irq_disable();
+ 	local_irq_restore(flags);
+ 	return err;
+ }
+diff --git a/drivers/remoteproc/qcom_pil_info.c b/drivers/remoteproc/qcom_pil_info.c
+index 7c007dd7b2000..aca21560e20b8 100644
+--- a/drivers/remoteproc/qcom_pil_info.c
++++ b/drivers/remoteproc/qcom_pil_info.c
+@@ -104,7 +104,7 @@ int qcom_pil_info_store(const char *image, phys_addr_t base, size_t size)
+ 	return -ENOMEM;
+ 
+ found_unused:
+-	memcpy_toio(entry, image, PIL_RELOC_NAME_LEN);
++	memcpy_toio(entry, image, strnlen(image, PIL_RELOC_NAME_LEN));
+ found_existing:
+ 	/* Use two writel() as base is only aligned to 4 bytes on odd entries */
+ 	writel(base, entry + PIL_RELOC_NAME_LEN);
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 03857dc9cdc12..120c16b14223b 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -652,6 +652,7 @@ static const struct adsp_data sm8350_cdsp_resource = {
+ 	.auto_boot = true,
+ 	.proxy_pd_names = (char*[]){
+ 		"cx",
++		"mxc",
+ 		NULL
+ 	},
+ 	.load_state = "cdsp",
+diff --git a/drivers/video/fbdev/vga16fb.c b/drivers/video/fbdev/vga16fb.c
+index e2757ff1c23d2..96e312a3eac75 100644
+--- a/drivers/video/fbdev/vga16fb.c
++++ b/drivers/video/fbdev/vga16fb.c
+@@ -184,6 +184,25 @@ static inline void setindex(int index)
+ 	vga_io_w(VGA_GFX_I, index);
+ }
+ 
++/* Check if the video mode is supported by the driver */
++static inline int check_mode_supported(void)
++{
++	/* non-x86 architectures treat orig_video_isVGA as a boolean flag */
++#if defined(CONFIG_X86)
++	/* only EGA and VGA in 16 color graphic mode are supported */
++	if (screen_info.orig_video_isVGA != VIDEO_TYPE_EGAC &&
++	    screen_info.orig_video_isVGA != VIDEO_TYPE_VGAC)
++		return -ENODEV;
++
++	if (screen_info.orig_video_mode != 0x0D &&	/* 320x200/4 (EGA) */
++	    screen_info.orig_video_mode != 0x0E &&	/* 640x200/4 (EGA) */
++	    screen_info.orig_video_mode != 0x10 &&	/* 640x350/4 (EGA) */
++	    screen_info.orig_video_mode != 0x12)	/* 640x480/4 (VGA) */
++		return -ENODEV;
++#endif
++	return 0;
++}
++
+ static void vga16fb_pan_var(struct fb_info *info, 
+ 			    struct fb_var_screeninfo *var)
+ {
+@@ -1422,6 +1441,11 @@ static int __init vga16fb_init(void)
+ 
+ 	vga16fb_setup(option);
+ #endif
++
++	ret = check_mode_supported();
++	if (ret)
++		return ret;
++
+ 	ret = platform_driver_register(&vga16fb_driver);
+ 
+ 	if (!ret) {
+diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
+index fac918ccb3051..1d554d0b6e583 100644
+--- a/fs/9p/vfs_addr.c
++++ b/fs/9p/vfs_addr.c
+@@ -42,6 +42,11 @@ static void v9fs_req_issue_op(struct netfs_read_subrequest *subreq)
+ 	iov_iter_xarray(&to, READ, &rreq->mapping->i_pages, pos, len);
+ 
+ 	total = p9_client_read(fid, pos, &to, &err);
++
++	/* if we just extended the file size, any portion not in
++	 * cache won't be on server and is zeroes */
++	__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
++
+ 	netfs_subreq_terminated(subreq, err ?: total, false);
+ }
+ 
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 7dee89ba32e7b..52f8ae79db219 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -551,7 +551,10 @@ int v9fs_vfs_setattr_dotl(struct user_namespace *mnt_userns,
+ {
+ 	int retval, use_dentry = 0;
+ 	struct p9_fid *fid = NULL;
+-	struct p9_iattr_dotl p9attr;
++	struct p9_iattr_dotl p9attr = {
++		.uid = INVALID_UID,
++		.gid = INVALID_GID,
++	};
+ 	struct inode *inode = d_inode(dentry);
+ 
+ 	p9_debug(P9_DEBUG_VFS, "\n");
+@@ -561,14 +564,22 @@ int v9fs_vfs_setattr_dotl(struct user_namespace *mnt_userns,
+ 		return retval;
+ 
+ 	p9attr.valid = v9fs_mapped_iattr_valid(iattr->ia_valid);
+-	p9attr.mode = iattr->ia_mode;
+-	p9attr.uid = iattr->ia_uid;
+-	p9attr.gid = iattr->ia_gid;
+-	p9attr.size = iattr->ia_size;
+-	p9attr.atime_sec = iattr->ia_atime.tv_sec;
+-	p9attr.atime_nsec = iattr->ia_atime.tv_nsec;
+-	p9attr.mtime_sec = iattr->ia_mtime.tv_sec;
+-	p9attr.mtime_nsec = iattr->ia_mtime.tv_nsec;
++	if (iattr->ia_valid & ATTR_MODE)
++		p9attr.mode = iattr->ia_mode;
++	if (iattr->ia_valid & ATTR_UID)
++		p9attr.uid = iattr->ia_uid;
++	if (iattr->ia_valid & ATTR_GID)
++		p9attr.gid = iattr->ia_gid;
++	if (iattr->ia_valid & ATTR_SIZE)
++		p9attr.size = iattr->ia_size;
++	if (iattr->ia_valid & ATTR_ATIME_SET) {
++		p9attr.atime_sec = iattr->ia_atime.tv_sec;
++		p9attr.atime_nsec = iattr->ia_atime.tv_nsec;
++	}
++	if (iattr->ia_valid & ATTR_MTIME_SET) {
++		p9attr.mtime_sec = iattr->ia_mtime.tv_sec;
++		p9attr.mtime_nsec = iattr->ia_mtime.tv_nsec;
++	}
+ 
+ 	if (iattr->ia_valid & ATTR_FILE) {
+ 		fid = iattr->ia_file->private_data;
+diff --git a/fs/fs_context.c b/fs/fs_context.c
+index b7e43a780a625..24ce12f0db32e 100644
+--- a/fs/fs_context.c
++++ b/fs/fs_context.c
+@@ -548,7 +548,7 @@ static int legacy_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 			      param->key);
+ 	}
+ 
+-	if (len > PAGE_SIZE - 2 - size)
++	if (size + len + 2 > PAGE_SIZE)
+ 		return invalf(fc, "VFS: Legacy: Cumulative options too large");
+ 	if (strchr(param->key, ',') ||
+ 	    (param->type == fs_value_is_string &&
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index 15dac36ca852e..8ef53f6726ec8 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -202,15 +202,11 @@ nfsd3_proc_write(struct svc_rqst *rqstp)
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->committed = argp->stable;
+ 	nvecs = svc_fill_write_vector(rqstp, &argp->payload);
+-	if (!nvecs) {
+-		resp->status = nfserr_io;
+-		goto out;
+-	}
++
+ 	resp->status = nfsd_write(rqstp, &resp->fh, argp->offset,
+ 				  rqstp->rq_vec, nvecs, &cnt,
+ 				  resp->committed, resp->verf);
+ 	resp->count = cnt;
+-out:
+ 	return rpc_success;
+ }
+ 
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index de282f3273c50..312fd289be583 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -235,10 +235,6 @@ nfsd_proc_write(struct svc_rqst *rqstp)
+ 		argp->len, argp->offset);
+ 
+ 	nvecs = svc_fill_write_vector(rqstp, &argp->payload);
+-	if (!nvecs) {
+-		resp->status = nfserr_io;
+-		goto out;
+-	}
+ 
+ 	resp->status = nfsd_write(rqstp, fh_copy(&resp->fh, &argp->fh),
+ 				  argp->offset, rqstp->rq_vec, nvecs,
+@@ -247,7 +243,6 @@ nfsd_proc_write(struct svc_rqst *rqstp)
+ 		resp->status = fh_getattr(&resp->fh, &resp->stat);
+ 	else if (resp->status == nfserr_jukebox)
+ 		return rpc_drop_reply;
+-out:
+ 	return rpc_success;
+ }
+ 
+diff --git a/fs/orangefs/orangefs-bufmap.c b/fs/orangefs/orangefs-bufmap.c
+index 538e839590ef5..b501dc07f9222 100644
+--- a/fs/orangefs/orangefs-bufmap.c
++++ b/fs/orangefs/orangefs-bufmap.c
+@@ -176,7 +176,7 @@ orangefs_bufmap_free(struct orangefs_bufmap *bufmap)
+ {
+ 	kfree(bufmap->page_array);
+ 	kfree(bufmap->desc_array);
+-	kfree(bufmap->buffer_index_array);
++	bitmap_free(bufmap->buffer_index_array);
+ 	kfree(bufmap);
+ }
+ 
+@@ -226,8 +226,7 @@ orangefs_bufmap_alloc(struct ORANGEFS_dev_map_desc *user_desc)
+ 	bufmap->desc_size = user_desc->size;
+ 	bufmap->desc_shift = ilog2(bufmap->desc_size);
+ 
+-	bufmap->buffer_index_array =
+-		kzalloc(DIV_ROUND_UP(bufmap->desc_count, BITS_PER_LONG), GFP_KERNEL);
++	bufmap->buffer_index_array = bitmap_zalloc(bufmap->desc_count, GFP_KERNEL);
+ 	if (!bufmap->buffer_index_array)
+ 		goto out_free_bufmap;
+ 
+@@ -250,7 +249,7 @@ orangefs_bufmap_alloc(struct ORANGEFS_dev_map_desc *user_desc)
+ out_free_desc_array:
+ 	kfree(bufmap->desc_array);
+ out_free_index_array:
+-	kfree(bufmap->buffer_index_array);
++	bitmap_free(bufmap->buffer_index_array);
+ out_free_bufmap:
+ 	kfree(bufmap);
+ out:
+diff --git a/fs/super.c b/fs/super.c
+index 3bfc0f8fbd5bc..a6405d44d4ca2 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -1423,8 +1423,8 @@ struct dentry *mount_nodev(struct file_system_type *fs_type,
+ }
+ EXPORT_SYMBOL(mount_nodev);
+ 
+-static int reconfigure_single(struct super_block *s,
+-			      int flags, void *data)
++int reconfigure_single(struct super_block *s,
++		       int flags, void *data)
+ {
+ 	struct fs_context *fc;
+ 	int ret;
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index 6b54982fc5f37..13fa6f3df8e46 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -142,6 +142,8 @@ extern void put_fs_context(struct fs_context *fc);
+ extern int vfs_parse_fs_param_source(struct fs_context *fc,
+ 				     struct fs_parameter *param);
+ extern void fc_drop_locked(struct fs_context *fc);
++int reconfigure_single(struct super_block *s,
++		       int flags, void *data);
+ 
+ /*
+  * sget() wrappers to be called from the ->get_tree() op.
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 0dcfd265beed5..318c489b735bc 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1240,7 +1240,18 @@ extern void perf_event_bpf_event(struct bpf_prog *prog,
+ 				 enum perf_bpf_event_type type,
+ 				 u16 flags);
+ 
+-extern struct perf_guest_info_callbacks *perf_guest_cbs;
++extern struct perf_guest_info_callbacks __rcu *perf_guest_cbs;
++static inline struct perf_guest_info_callbacks *perf_get_guest_cbs(void)
++{
++	/*
++	 * Callbacks are RCU-protected and must be READ_ONCE to avoid reloading
++	 * the callbacks between a !NULL check and dereferences, to ensure
++	 * pending stores/changes to the callback pointers are visible before a
++	 * non-NULL perf_guest_cbs is visible to readers, and to prevent a
++	 * module from unloading callbacks while readers are active.
++	 */
++	return rcu_dereference(perf_guest_cbs);
++}
+ extern int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
+ extern int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 30d94f68c5bdb..63f0414666438 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6526,18 +6526,25 @@ static void perf_pending_event(struct irq_work *entry)
+  * Later on, we might change it to a list if there is
+  * another virtualization implementation supporting the callbacks.
+  */
+-struct perf_guest_info_callbacks *perf_guest_cbs;
++struct perf_guest_info_callbacks __rcu *perf_guest_cbs;
+ 
+ int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
+ {
+-	perf_guest_cbs = cbs;
++	if (WARN_ON_ONCE(rcu_access_pointer(perf_guest_cbs)))
++		return -EBUSY;
++
++	rcu_assign_pointer(perf_guest_cbs, cbs);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(perf_register_guest_info_callbacks);
+ 
+ int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
+ {
+-	perf_guest_cbs = NULL;
++	if (WARN_ON_ONCE(rcu_access_pointer(perf_guest_cbs) != cbs))
++		return -EINVAL;
++
++	rcu_assign_pointer(perf_guest_cbs, NULL);
++	synchronize_rcu();
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(perf_unregister_guest_info_callbacks);
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index ea700395bef40..773f4903550a0 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -68,14 +68,20 @@
+  */
+ #define TEGRA194_NUM_SDO_LINES	  4
+ 
++struct hda_tegra_soc {
++	bool has_hda2codec_2x_reset;
++};
++
+ struct hda_tegra {
+ 	struct azx chip;
+ 	struct device *dev;
+-	struct reset_control *reset;
++	struct reset_control_bulk_data resets[3];
+ 	struct clk_bulk_data clocks[3];
++	unsigned int nresets;
+ 	unsigned int nclocks;
+ 	void __iomem *regs;
+ 	struct work_struct probe_work;
++	const struct hda_tegra_soc *soc;
+ };
+ 
+ #ifdef CONFIG_PM
+@@ -170,7 +176,7 @@ static int __maybe_unused hda_tegra_runtime_resume(struct device *dev)
+ 	int rc;
+ 
+ 	if (!chip->running) {
+-		rc = reset_control_assert(hda->reset);
++		rc = reset_control_bulk_assert(hda->nresets, hda->resets);
+ 		if (rc)
+ 			return rc;
+ 	}
+@@ -187,7 +193,7 @@ static int __maybe_unused hda_tegra_runtime_resume(struct device *dev)
+ 	} else {
+ 		usleep_range(10, 100);
+ 
+-		rc = reset_control_deassert(hda->reset);
++		rc = reset_control_bulk_deassert(hda->nresets, hda->resets);
+ 		if (rc)
+ 			return rc;
+ 	}
+@@ -427,9 +433,17 @@ static int hda_tegra_create(struct snd_card *card,
+ 	return 0;
+ }
+ 
++static const struct hda_tegra_soc tegra30_data = {
++	.has_hda2codec_2x_reset = true,
++};
++
++static const struct hda_tegra_soc tegra194_data = {
++	.has_hda2codec_2x_reset = false,
++};
++
+ static const struct of_device_id hda_tegra_match[] = {
+-	{ .compatible = "nvidia,tegra30-hda" },
+-	{ .compatible = "nvidia,tegra194-hda" },
++	{ .compatible = "nvidia,tegra30-hda", .data = &tegra30_data },
++	{ .compatible = "nvidia,tegra194-hda", .data = &tegra194_data },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, hda_tegra_match);
+@@ -449,6 +463,8 @@ static int hda_tegra_probe(struct platform_device *pdev)
+ 	hda->dev = &pdev->dev;
+ 	chip = &hda->chip;
+ 
++	hda->soc = of_device_get_match_data(&pdev->dev);
++
+ 	err = snd_card_new(&pdev->dev, SNDRV_DEFAULT_IDX1, SNDRV_DEFAULT_STR1,
+ 			   THIS_MODULE, 0, &card);
+ 	if (err < 0) {
+@@ -456,11 +472,20 @@ static int hda_tegra_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
+-	hda->reset = devm_reset_control_array_get_exclusive(&pdev->dev);
+-	if (IS_ERR(hda->reset)) {
+-		err = PTR_ERR(hda->reset);
++	hda->resets[hda->nresets++].id = "hda";
++	hda->resets[hda->nresets++].id = "hda2hdmi";
++	/*
++	 * "hda2codec_2x" reset is not present on Tegra194. Though DT would
++	 * be updated to reflect this, but to have backward compatibility
++	 * below is necessary.
++	 */
++	if (hda->soc->has_hda2codec_2x_reset)
++		hda->resets[hda->nresets++].id = "hda2codec_2x";
++
++	err = devm_reset_control_bulk_get_exclusive(&pdev->dev, hda->nresets,
++						    hda->resets);
++	if (err)
+ 		goto out_free;
+-	}
+ 
+ 	hda->clocks[hda->nclocks++].id = "hda";
+ 	hda->clocks[hda->nclocks++].id = "hda2hdmi";
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 28255e752c4a1..fa80a79e9f966 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1924,6 +1924,7 @@ enum {
+ 	ALC887_FIXUP_ASUS_BASS,
+ 	ALC887_FIXUP_BASS_CHMAP,
+ 	ALC1220_FIXUP_GB_DUAL_CODECS,
++	ALC1220_FIXUP_GB_X570,
+ 	ALC1220_FIXUP_CLEVO_P950,
+ 	ALC1220_FIXUP_CLEVO_PB51ED,
+ 	ALC1220_FIXUP_CLEVO_PB51ED_PINS,
+@@ -2113,6 +2114,29 @@ static void alc1220_fixup_gb_dual_codecs(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc1220_fixup_gb_x570(struct hda_codec *codec,
++				     const struct hda_fixup *fix,
++				     int action)
++{
++	static const hda_nid_t conn1[] = { 0x0c };
++	static const struct coef_fw gb_x570_coefs[] = {
++		WRITE_COEF(0x1a, 0x01c1),
++		WRITE_COEF(0x1b, 0x0202),
++		WRITE_COEF(0x43, 0x3005),
++		{}
++	};
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn1), conn1);
++		snd_hda_override_conn_list(codec, 0x1b, ARRAY_SIZE(conn1), conn1);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		alc_process_coef_fw(codec, gb_x570_coefs);
++		break;
++	}
++}
++
+ static void alc1220_fixup_clevo_p950(struct hda_codec *codec,
+ 				     const struct hda_fixup *fix,
+ 				     int action)
+@@ -2415,6 +2439,10 @@ static const struct hda_fixup alc882_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc1220_fixup_gb_dual_codecs,
+ 	},
++	[ALC1220_FIXUP_GB_X570] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc1220_fixup_gb_x570,
++	},
+ 	[ALC1220_FIXUP_CLEVO_P950] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc1220_fixup_clevo_p950,
+@@ -2517,7 +2545,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x13fe, 0x1009, "Advantech MIT-W101", ALC886_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+-	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_GB_X570),
+ 	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
+@@ -6784,6 +6812,8 @@ enum {
+ 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_NO_AUDIO_JACK,
+ 	ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME,
++	ALC285_FIXUP_LEGION_Y9000X_SPEAKERS,
++	ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8380,6 +8410,18 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ 	},
++	[ALC285_FIXUP_LEGION_Y9000X_SPEAKERS] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_ideapad_s740_coef,
++		.chained = true,
++		.chain_id = ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE,
++	},
++	[ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc287_fixup_legion_15imhg05_speakers,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++	},
+ 	[ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		//.v.verbs = legion_15imhg05_coefs,
+@@ -8730,6 +8772,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x89c3, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+@@ -8921,13 +8964,16 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
++	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
++	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
++	SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+-	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x384a, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+-	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
+index e81c2493efdf9..44ba900828f6c 100644
+--- a/tools/perf/ui/browsers/annotate.c
++++ b/tools/perf/ui/browsers/annotate.c
+@@ -966,6 +966,7 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
+ 		.opts = opts,
+ 	};
+ 	int ret = -1, err;
++	int not_annotated = list_empty(&notes->src->source);
+ 
+ 	if (sym == NULL)
+ 		return -1;
+@@ -973,13 +974,15 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
+ 	if (ms->map->dso->annotate_warned)
+ 		return -1;
+ 
+-	err = symbol__annotate2(ms, evsel, opts, &browser.arch);
+-	if (err) {
+-		char msg[BUFSIZ];
+-		ms->map->dso->annotate_warned = true;
+-		symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
+-		ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
+-		goto out_free_offsets;
++	if (not_annotated) {
++		err = symbol__annotate2(ms, evsel, opts, &browser.arch);
++		if (err) {
++			char msg[BUFSIZ];
++			ms->map->dso->annotate_warned = true;
++			symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
++			ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
++			goto out_free_offsets;
++		}
+ 	}
+ 
+ 	ui_helpline__push("Press ESC to exit");
+@@ -994,9 +997,11 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
+ 
+ 	ret = annotate_browser__run(&browser, evsel, hbt);
+ 
+-	annotated_source__purge(notes->src);
++	if(not_annotated)
++		annotated_source__purge(notes->src);
+ 
+ out_free_offsets:
+-	zfree(&notes->offsets);
++	if(not_annotated)
++		zfree(&notes->offsets);
+ 	return ret;
+ }

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-01-27 11:35 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-01-27 11:35 UTC (permalink / raw
  To: gentoo-commits

commit:     6a4115eb2cdf1b9b7b1a6002e8f244af6a322873
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 27 11:35:35 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 27 11:35:35 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6a4115eb

Linux patch 5.16.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1002_linux-5.16.3.patch | 42994 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 42998 insertions(+)

diff --git a/0000_README b/0000_README
index 41c5d786..1eb44c73 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.16.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.2
 
+Patch:  1002_linux-5.16.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.16.3.patch b/1002_linux-5.16.3.patch
new file mode 100644
index 00000000..9946b6fa
--- /dev/null
+++ b/1002_linux-5.16.3.patch
@@ -0,0 +1,42994 @@
+diff --git a/Documentation/admin-guide/cifs/usage.rst b/Documentation/admin-guide/cifs/usage.rst
+index f170d88202588..3766bf8a1c20e 100644
+--- a/Documentation/admin-guide/cifs/usage.rst
++++ b/Documentation/admin-guide/cifs/usage.rst
+@@ -734,10 +734,9 @@ SecurityFlags		Flags which control security negotiation and
+ 			using weaker password hashes is 0x37037 (lanman,
+ 			plaintext, ntlm, ntlmv2, signing allowed).  Some
+ 			SecurityFlags require the corresponding menuconfig
+-			options to be enabled (lanman and plaintext require
+-			CONFIG_CIFS_WEAK_PW_HASH for example).  Enabling
+-			plaintext authentication currently requires also
+-			enabling lanman authentication in the security flags
++			options to be enabled.  Enabling plaintext
++			authentication currently requires also enabling
++			lanman authentication in the security flags
+ 			because the cifs module only supports sending
+ 			laintext passwords using the older lanman dialect
+ 			form of the session setup SMB.  (e.g. for authentication
+diff --git a/Documentation/admin-guide/devices.txt b/Documentation/admin-guide/devices.txt
+index 922c23bb4372a..c07dc0ee860e7 100644
+--- a/Documentation/admin-guide/devices.txt
++++ b/Documentation/admin-guide/devices.txt
+@@ -2339,13 +2339,7 @@
+ 		disks (see major number 3) except that the limit on
+ 		partitions is 31.
+ 
+- 162 char	Raw block device interface
+-		  0 = /dev/rawctl	Raw I/O control device
+-		  1 = /dev/raw/raw1	First raw I/O device
+-		  2 = /dev/raw/raw2	Second raw I/O device
+-		    ...
+-		 max minor number of raw device is set by kernel config
+-		 MAX_RAW_DEVS or raw module parameter 'max_raw_devs'
++ 162 char	Used for (now removed) raw block device interface
+ 
+  163 char
+ 
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index ab7d402c16779..a2b22d5640ec9 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -468,7 +468,7 @@ Spectre variant 2
+    before invoking any firmware code to prevent Spectre variant 2 exploits
+    using the firmware.
+ 
+-   Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y
++   Using kernel address space randomization (CONFIG_RANDOMIZE_BASE=y
+    and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
+    attacks on the kernel generally more difficult.
+ 
+diff --git a/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml b/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
+index cf5a208f2f105..343598c9f473b 100644
+--- a/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
++++ b/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
+@@ -10,6 +10,9 @@ title: Amlogic specific extensions to the Synopsys Designware HDMI Controller
+ maintainers:
+   - Neil Armstrong <narmstrong@baylibre.com>
+ 
++allOf:
++  - $ref: /schemas/sound/name-prefix.yaml#
++
+ description: |
+   The Amlogic Meson Synopsys Designware Integration is composed of
+   - A Synopsys DesignWare HDMI Controller IP
+@@ -99,6 +102,8 @@ properties:
+   "#sound-dai-cells":
+     const: 0
+ 
++  sound-name-prefix: true
++
+ required:
+   - compatible
+   - reg
+diff --git a/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml b/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+index 851cb07812173..047fd69e03770 100644
+--- a/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
++++ b/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+@@ -78,6 +78,10 @@ properties:
+   interrupts:
+     maxItems: 1
+ 
++  amlogic,canvas:
++    description: should point to a canvas provider node
++    $ref: /schemas/types.yaml#/definitions/phandle
++
+   power-domains:
+     maxItems: 1
+     description: phandle to the associated power domain
+@@ -106,6 +110,7 @@ required:
+   - port@1
+   - "#address-cells"
+   - "#size-cells"
++  - amlogic,canvas
+ 
+ additionalProperties: false
+ 
+@@ -118,6 +123,7 @@ examples:
+         interrupts = <3>;
+         #address-cells = <1>;
+         #size-cells = <0>;
++        amlogic,canvas = <&canvas>;
+ 
+         /* CVBS VDAC output port */
+         port@0 {
+diff --git a/Documentation/devicetree/bindings/input/hid-over-i2c.txt b/Documentation/devicetree/bindings/input/hid-over-i2c.txt
+index c76bafaf98d2f..34c43d3bddfd1 100644
+--- a/Documentation/devicetree/bindings/input/hid-over-i2c.txt
++++ b/Documentation/devicetree/bindings/input/hid-over-i2c.txt
+@@ -32,6 +32,8 @@ device-specific compatible properties, which should be used in addition to the
+ - vdd-supply: phandle of the regulator that provides the supply voltage.
+ - post-power-on-delay-ms: time required by the device after enabling its regulators
+   or powering it on, before it is ready for communication.
++- touchscreen-inverted-x: See touchscreen.txt
++- touchscreen-inverted-y: See touchscreen.txt
+ 
+ Example:
+ 
+diff --git a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+index a07de5ed0ca6a..2d34f3ccb2572 100644
+--- a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
++++ b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+@@ -199,12 +199,11 @@ patternProperties:
+ 
+               contribution:
+                 $ref: /schemas/types.yaml#/definitions/uint32
+-                minimum: 0
+-                maximum: 100
+                 description:
+-                  The percentage contribution of the cooling devices at the
+-                  specific trip temperature referenced in this map
+-                  to this thermal zone
++                  The cooling contribution to the thermal zone of the referred
++                  cooling device at the referred trip point. The contribution is
++                  a ratio of the sum of all cooling contributions within a
++                  thermal zone.
+ 
+             required:
+               - trip
+diff --git a/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml b/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
+index 76cb9586ee00c..93cd77a6e92c0 100644
+--- a/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
++++ b/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
+@@ -39,8 +39,8 @@ properties:
+   samsung,syscon-phandle:
+     $ref: /schemas/types.yaml#/definitions/phandle
+     description:
+-      Phandle to the PMU system controller node (in case of Exynos5250
+-      and Exynos5420).
++      Phandle to the PMU system controller node (in case of Exynos5250,
++      Exynos5420 and Exynos7).
+ 
+ required:
+   - compatible
+@@ -58,6 +58,7 @@ allOf:
+             enum:
+               - samsung,exynos5250-wdt
+               - samsung,exynos5420-wdt
++              - samsung,exynos7-wdt
+     then:
+       required:
+         - samsung,syscon-phandle
+diff --git a/Documentation/driver-api/dmaengine/dmatest.rst b/Documentation/driver-api/dmaengine/dmatest.rst
+index ee268d445d38b..d2e1d8b58e7dc 100644
+--- a/Documentation/driver-api/dmaengine/dmatest.rst
++++ b/Documentation/driver-api/dmaengine/dmatest.rst
+@@ -143,13 +143,14 @@ Part 5 - Handling channel allocation
+ Allocating Channels
+ -------------------
+ 
+-Channels are required to be configured prior to starting the test run.
+-Attempting to run the test without configuring the channels will fail.
++Channels do not need to be configured prior to starting a test run. Attempting
++to run the test without configuring the channels will result in testing any
++channels that are available.
+ 
+ Example::
+ 
+     % echo 1 > /sys/module/dmatest/parameters/run
+-    dmatest: Could not start test, no channels configured
++    dmatest: No channels configured, continue with any
+ 
+ Channels are registered using the "channel" parameter. Channels can be requested by their
+ name, once requested, the channel is registered and a pending thread is added to the test list.
+diff --git a/Documentation/driver-api/firewire.rst b/Documentation/driver-api/firewire.rst
+index 94a2d7f01d999..d3cfa73cbb2b4 100644
+--- a/Documentation/driver-api/firewire.rst
++++ b/Documentation/driver-api/firewire.rst
+@@ -19,7 +19,7 @@ of kernel interfaces is available via exported symbols in `firewire-core` module
+ Firewire char device data structures
+ ====================================
+ 
+-.. include:: /ABI/stable/firewire-cdev
++.. include:: ../ABI/stable/firewire-cdev
+     :literal:
+ 
+ .. kernel-doc:: include/uapi/linux/firewire-cdev.h
+@@ -28,7 +28,7 @@ Firewire char device data structures
+ Firewire device probing and sysfs interfaces
+ ============================================
+ 
+-.. include:: /ABI/stable/sysfs-bus-firewire
++.. include:: ../ABI/stable/sysfs-bus-firewire
+     :literal:
+ 
+ .. kernel-doc:: drivers/firewire/core-device.c
+diff --git a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+index b7ad47df49de0..8b65b32e6e40e 100644
+--- a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
++++ b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+@@ -5,7 +5,7 @@
+ Referencing hierarchical data nodes
+ ===================================
+ 
+-:Copyright: |copy| 2018 Intel Corporation
++:Copyright: |copy| 2018, 2021 Intel Corporation
+ :Author: Sakari Ailus <sakari.ailus@linux.intel.com>
+ 
+ ACPI in general allows referring to device objects in the tree only.
+@@ -52,12 +52,14 @@ the ANOD object which is also the final target node of the reference.
+ 	    Name (NOD0, Package() {
+ 		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ 		Package () {
++		    Package () { "reg", 0 },
+ 		    Package () { "random-property", 3 },
+ 		}
+ 	    })
+ 	    Name (NOD1, Package() {
+ 		ToUUID("dbb8e3e6-5886-4ba6-8795-1319f52a966b"),
+ 		Package () {
++		    Package () { "reg", 1 },
+ 		    Package () { "anothernode", "ANOD" },
+ 		}
+ 	    })
+@@ -74,7 +76,11 @@ the ANOD object which is also the final target node of the reference.
+ 	    Name (_DSD, Package () {
+ 		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ 		Package () {
+-		    Package () { "reference", ^DEV0, "node@1", "anothernode" },
++		    Package () {
++			"reference", Package () {
++			    ^DEV0, "node@1", "anothernode"
++			}
++		    },
+ 		}
+ 	    })
+ 	}
+diff --git a/Documentation/trace/coresight/coresight-config.rst b/Documentation/trace/coresight/coresight-config.rst
+index a4e3ef2952401..6ed13398ca2ce 100644
+--- a/Documentation/trace/coresight/coresight-config.rst
++++ b/Documentation/trace/coresight/coresight-config.rst
+@@ -211,19 +211,13 @@ also declared in the perf 'cs_etm' event infrastructure so that they can
+ be selected when running trace under perf::
+ 
+     $ ls /sys/devices/cs_etm
+-    configurations  format  perf_event_mux_interval_ms  sinks  type
+-    events  nr_addr_filters  power
++    cpu0  cpu2  events  nr_addr_filters		power  subsystem  uevent
++    cpu1  cpu3  format  perf_event_mux_interval_ms	sinks  type
+ 
+-Key directories here are 'configurations' - which lists the loaded
+-configurations, and 'events' - a generic perf directory which allows
+-selection on the perf command line.::
++The key directory here is 'events' - a generic perf directory which allows
++selection on the perf command line. As with the sinks entries, this provides
++a hash of the configuration name.
+ 
+-    $ ls configurations/
+-    autofdo
+-    $ cat configurations/autofdo
+-    0xa7c3dddd
+-
+-As with the sinks entries, this provides a hash of the configuration name.
+ The entry in the 'events' directory uses perfs built in syntax generator
+ to substitute the syntax for the name when evaluating the command::
+ 
+diff --git a/Makefile b/Makefile
+index dd98debc26048..acb8ffee65dc5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
+index 98436702e0c7e..644875d73ba15 100644
+--- a/arch/arm/Kconfig.debug
++++ b/arch/arm/Kconfig.debug
+@@ -410,12 +410,12 @@ choice
+ 		  Say Y here if you want kernel low-level debugging support
+ 		  on i.MX25.
+ 
+-	config DEBUG_IMX21_IMX27_UART
+-		bool "i.MX21 and i.MX27 Debug UART"
+-		depends on SOC_IMX21 || SOC_IMX27
++	config DEBUG_IMX27_UART
++		bool "i.MX27 Debug UART"
++		depends on SOC_IMX27
+ 		help
+ 		  Say Y here if you want kernel low-level debugging support
+-		  on i.MX21 or i.MX27.
++		  on i.MX27.
+ 
+ 	config DEBUG_IMX28_UART
+ 		bool "i.MX28 Debug UART"
+@@ -1481,7 +1481,7 @@ config DEBUG_IMX_UART_PORT
+ 	int "i.MX Debug UART Port Selection"
+ 	depends on DEBUG_IMX1_UART || \
+ 		   DEBUG_IMX25_UART || \
+-		   DEBUG_IMX21_IMX27_UART || \
++		   DEBUG_IMX27_UART || \
+ 		   DEBUG_IMX31_UART || \
+ 		   DEBUG_IMX35_UART || \
+ 		   DEBUG_IMX50_UART || \
+@@ -1540,12 +1540,12 @@ config DEBUG_LL_INCLUDE
+ 	default "debug/icedcc.S" if DEBUG_ICEDCC
+ 	default "debug/imx.S" if DEBUG_IMX1_UART || \
+ 				 DEBUG_IMX25_UART || \
+-				 DEBUG_IMX21_IMX27_UART || \
++				 DEBUG_IMX27_UART || \
+ 				 DEBUG_IMX31_UART || \
+ 				 DEBUG_IMX35_UART || \
+ 				 DEBUG_IMX50_UART || \
+ 				 DEBUG_IMX51_UART || \
+-				 DEBUG_IMX53_UART ||\
++				 DEBUG_IMX53_UART || \
+ 				 DEBUG_IMX6Q_UART || \
+ 				 DEBUG_IMX6SL_UART || \
+ 				 DEBUG_IMX6SX_UART || \
+diff --git a/arch/arm/boot/compressed/efi-header.S b/arch/arm/boot/compressed/efi-header.S
+index c0e7a745103e2..230030c130853 100644
+--- a/arch/arm/boot/compressed/efi-header.S
++++ b/arch/arm/boot/compressed/efi-header.S
+@@ -9,16 +9,22 @@
+ #include <linux/sizes.h>
+ 
+ 		.macro	__nop
+-#ifdef CONFIG_EFI_STUB
+-		@ This is almost but not quite a NOP, since it does clobber the
+-		@ condition flags. But it is the best we can do for EFI, since
+-		@ PE/COFF expects the magic string "MZ" at offset 0, while the
+-		@ ARM/Linux boot protocol expects an executable instruction
+-		@ there.
+-		.inst	MZ_MAGIC | (0x1310 << 16)	@ tstne r0, #0x4d000
+-#else
+  AR_CLASS(	mov	r0, r0		)
+   M_CLASS(	nop.w			)
++		.endm
++
++		.macro __initial_nops
++#ifdef CONFIG_EFI_STUB
++		@ This is a two-instruction NOP, which happens to bear the
++		@ PE/COFF signature "MZ" in the first two bytes, so the kernel
++		@ is accepted as an EFI binary. Booting via the UEFI stub
++		@ will not execute those instructions, but the ARM/Linux
++		@ boot protocol does, so we need some NOPs here.
++		.inst	MZ_MAGIC | (0xe225 << 16)	@ eor r5, r5, 0x4d000
++		eor	r5, r5, 0x4d000			@ undo previous insn
++#else
++		__nop
++		__nop
+ #endif
+ 		.endm
+ 
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index b1cb1972361b8..bf79f2f78d232 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -203,7 +203,8 @@ start:
+ 		 * were patching the initial instructions of the kernel, i.e
+ 		 * had started to exploit this "patch area".
+ 		 */
+-		.rept	7
++		__initial_nops
++		.rept	5
+ 		__nop
+ 		.endr
+ #ifndef CONFIG_THUMB2_KERNEL
+diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi
+index 9b1a24cc5e91f..df3c8d1d8f641 100644
+--- a/arch/arm/boot/dts/armada-38x.dtsi
++++ b/arch/arm/boot/dts/armada-38x.dtsi
+@@ -168,7 +168,7 @@
+ 			};
+ 
+ 			uart0: serial@12000 {
+-				compatible = "marvell,armada-38x-uart";
++				compatible = "marvell,armada-38x-uart", "ns16550a";
+ 				reg = <0x12000 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+@@ -178,7 +178,7 @@
+ 			};
+ 
+ 			uart1: serial@12100 {
+-				compatible = "marvell,armada-38x-uart";
++				compatible = "marvell,armada-38x-uart", "ns16550a";
+ 				reg = <0x12100 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/gemini-nas4220b.dts b/arch/arm/boot/dts/gemini-nas4220b.dts
+index 13112a8a5dd88..6544c730340fa 100644
+--- a/arch/arm/boot/dts/gemini-nas4220b.dts
++++ b/arch/arm/boot/dts/gemini-nas4220b.dts
+@@ -84,7 +84,7 @@
+ 			partitions {
+ 				compatible = "redboot-fis";
+ 				/* Eraseblock at 0xfe0000 */
+-				fis-index-block = <0x1fc>;
++				fis-index-block = <0x7f>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index 32335d4ce478b..d40c3d2c4914e 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -8,6 +8,7 @@
+ 
+ #include "omap34xx.dtsi"
+ #include <dt-bindings/input/input.h>
++#include <dt-bindings/leds/common.h>
+ 
+ /*
+  * Default secure signed bootloader (Nokia X-Loader) does not enable L3 firewall
+@@ -630,63 +631,92 @@
+ 	};
+ 
+ 	lp5523: lp5523@32 {
++		#address-cells = <1>;
++		#size-cells = <0>;
+ 		compatible = "national,lp5523";
+ 		reg = <0x32>;
+ 		clock-mode = /bits/ 8 <0>; /* LP55XX_CLOCK_AUTO */
+-		enable-gpio = <&gpio2 9 GPIO_ACTIVE_HIGH>; /* 41 */
++		enable-gpios = <&gpio2 9 GPIO_ACTIVE_HIGH>; /* 41 */
+ 
+-		chan0 {
++		led@0 {
++			reg = <0>;
+ 			chan-name = "lp5523:kb1";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan1 {
++		led@1 {
++			reg = <1>;
+ 			chan-name = "lp5523:kb2";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan2 {
++		led@2 {
++			reg = <2>;
+ 			chan-name = "lp5523:kb3";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan3 {
++		led@3 {
++			reg = <3>;
+ 			chan-name = "lp5523:kb4";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan4 {
++		led@4 {
++			reg = <4>;
+ 			chan-name = "lp5523:b";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_BLUE>;
++			function = LED_FUNCTION_STATUS;
+ 		};
+ 
+-		chan5 {
++		led@5 {
++			reg = <5>;
+ 			chan-name = "lp5523:g";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_GREEN>;
++			function = LED_FUNCTION_STATUS;
+ 		};
+ 
+-		chan6 {
++		led@6 {
++			reg = <6>;
+ 			chan-name = "lp5523:r";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_RED>;
++			function = LED_FUNCTION_STATUS;
+ 		};
+ 
+-		chan7 {
++		led@7 {
++			reg = <7>;
+ 			chan-name = "lp5523:kb5";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 
+-		chan8 {
++		led@8 {
++			reg = <8>;
+ 			chan-name = "lp5523:kb6";
+ 			led-cur = /bits/ 8 <50>;
+ 			max-cur = /bits/ 8 <100>;
++			color = <LED_COLOR_ID_WHITE>;
++			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom-sdx55.dtsi
+index 44526ad9d210b..eee2f63b9bbab 100644
+--- a/arch/arm/boot/dts/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx55.dtsi
+@@ -333,12 +333,10 @@
+ 			clocks = <&rpmhcc RPMH_IPA_CLK>;
+ 			clock-names = "core";
+ 
+-			interconnects = <&system_noc MASTER_IPA &system_noc SLAVE_SNOC_MEM_NOC_GC>,
+-					<&mem_noc MASTER_SNOC_GC_MEM_NOC &mc_virt SLAVE_EBI_CH0>,
++			interconnects = <&system_noc MASTER_IPA &mc_virt SLAVE_EBI_CH0>,
+ 					<&system_noc MASTER_IPA &system_noc SLAVE_OCIMEM>,
+ 					<&mem_noc MASTER_AMPSS_M0 &system_noc SLAVE_IPA_CFG>;
+-			interconnect-names = "memory-a",
+-					     "memory-b",
++			interconnect-names = "memory",
+ 					     "imem",
+ 					     "config";
+ 
+diff --git a/arch/arm/boot/dts/sama7g5-pinfunc.h b/arch/arm/boot/dts/sama7g5-pinfunc.h
+index 22fe9e522a97b..4eb30445d2057 100644
+--- a/arch/arm/boot/dts/sama7g5-pinfunc.h
++++ b/arch/arm/boot/dts/sama7g5-pinfunc.h
+@@ -765,7 +765,7 @@
+ #define PIN_PD20__PCK0			PINMUX_PIN(PIN_PD20, 1, 3)
+ #define PIN_PD20__FLEXCOM2_IO3		PINMUX_PIN(PIN_PD20, 2, 2)
+ #define PIN_PD20__PWMH3			PINMUX_PIN(PIN_PD20, 3, 4)
+-#define PIN_PD20__CANTX4		PINMUX_PIN(PIN_PD20, 5, 2)
++#define PIN_PD20__CANTX4		PINMUX_PIN(PIN_PD20, 4, 2)
+ #define PIN_PD20__FLEXCOM5_IO0		PINMUX_PIN(PIN_PD20, 6, 5)
+ #define PIN_PD21			117
+ #define PIN_PD21__GPIO			PINMUX_PIN(PIN_PD21, 0, 0)
+diff --git a/arch/arm/boot/dts/stm32f429-disco.dts b/arch/arm/boot/dts/stm32f429-disco.dts
+index 075ac57d0bf4a..6435e099c6326 100644
+--- a/arch/arm/boot/dts/stm32f429-disco.dts
++++ b/arch/arm/boot/dts/stm32f429-disco.dts
+@@ -192,7 +192,7 @@
+ 
+ 	display: display@1{
+ 		/* Connect panel-ilitek-9341 to ltdc */
+-		compatible = "st,sf-tc240t-9370-t";
++		compatible = "st,sf-tc240t-9370-t", "ilitek,ili9341";
+ 		reg = <1>;
+ 		spi-3wire;
+ 		spi-max-frequency = <10000000>;
+diff --git a/arch/arm/configs/cm_x300_defconfig b/arch/arm/configs/cm_x300_defconfig
+index 502a9d870ca44..45769d0ddd4ef 100644
+--- a/arch/arm/configs/cm_x300_defconfig
++++ b/arch/arm/configs/cm_x300_defconfig
+@@ -146,7 +146,6 @@ CONFIG_NFS_V3_ACL=y
+ CONFIG_NFS_V4=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_NLS_CODEPAGE_437=m
+ CONFIG_NLS_ISO8859_1=m
+diff --git a/arch/arm/configs/ezx_defconfig b/arch/arm/configs/ezx_defconfig
+index a49e699e52de3..ec84d80096b1c 100644
+--- a/arch/arm/configs/ezx_defconfig
++++ b/arch/arm/configs/ezx_defconfig
+@@ -314,7 +314,6 @@ CONFIG_NFSD_V3_ACL=y
+ CONFIG_SMB_FS=m
+ CONFIG_CIFS=m
+ CONFIG_CIFS_STATS=y
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_CODEPAGE_437=m
+diff --git a/arch/arm/configs/imote2_defconfig b/arch/arm/configs/imote2_defconfig
+index 118c4c927f264..6db871d4e0775 100644
+--- a/arch/arm/configs/imote2_defconfig
++++ b/arch/arm/configs/imote2_defconfig
+@@ -288,7 +288,6 @@ CONFIG_NFSD_V3_ACL=y
+ CONFIG_SMB_FS=m
+ CONFIG_CIFS=m
+ CONFIG_CIFS_STATS=y
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_CODEPAGE_437=m
+diff --git a/arch/arm/configs/nhk8815_defconfig b/arch/arm/configs/nhk8815_defconfig
+index 23595fc5a29a9..907d6512821ad 100644
+--- a/arch/arm/configs/nhk8815_defconfig
++++ b/arch/arm/configs/nhk8815_defconfig
+@@ -127,7 +127,6 @@ CONFIG_NFS_FS=y
+ CONFIG_NFS_V3_ACL=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_NLS_CODEPAGE_437=y
+ CONFIG_NLS_ASCII=y
+ CONFIG_NLS_ISO8859_1=y
+diff --git a/arch/arm/configs/pxa_defconfig b/arch/arm/configs/pxa_defconfig
+index 58f4834289e63..dedaaae3d0d8a 100644
+--- a/arch/arm/configs/pxa_defconfig
++++ b/arch/arm/configs/pxa_defconfig
+@@ -699,7 +699,6 @@ CONFIG_NFSD_V3_ACL=y
+ CONFIG_NFSD_V4=y
+ CONFIG_CIFS=m
+ CONFIG_CIFS_STATS=y
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_DEFAULT="utf8"
+diff --git a/arch/arm/configs/spear13xx_defconfig b/arch/arm/configs/spear13xx_defconfig
+index 3b206a31902ff..065553326b391 100644
+--- a/arch/arm/configs/spear13xx_defconfig
++++ b/arch/arm/configs/spear13xx_defconfig
+@@ -61,7 +61,6 @@ CONFIG_SERIAL_AMBA_PL011=y
+ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
+ # CONFIG_HW_RANDOM is not set
+ CONFIG_RAW_DRIVER=y
+-CONFIG_MAX_RAW_DEVS=8192
+ CONFIG_I2C=y
+ CONFIG_I2C_DESIGNWARE_PLATFORM=y
+ CONFIG_SPI=y
+diff --git a/arch/arm/configs/spear3xx_defconfig b/arch/arm/configs/spear3xx_defconfig
+index fc5f71c765edc..afca722d6605c 100644
+--- a/arch/arm/configs/spear3xx_defconfig
++++ b/arch/arm/configs/spear3xx_defconfig
+@@ -41,7 +41,6 @@ CONFIG_SERIAL_AMBA_PL011=y
+ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
+ # CONFIG_HW_RANDOM is not set
+ CONFIG_RAW_DRIVER=y
+-CONFIG_MAX_RAW_DEVS=8192
+ CONFIG_I2C=y
+ CONFIG_I2C_DESIGNWARE_PLATFORM=y
+ CONFIG_SPI=y
+diff --git a/arch/arm/configs/spear6xx_defconfig b/arch/arm/configs/spear6xx_defconfig
+index 52a56b8ce6a71..bc32c02cb86b1 100644
+--- a/arch/arm/configs/spear6xx_defconfig
++++ b/arch/arm/configs/spear6xx_defconfig
+@@ -36,7 +36,6 @@ CONFIG_INPUT_FF_MEMLESS=y
+ CONFIG_SERIAL_AMBA_PL011=y
+ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
+ CONFIG_RAW_DRIVER=y
+-CONFIG_MAX_RAW_DEVS=8192
+ CONFIG_I2C=y
+ CONFIG_I2C_DESIGNWARE_PLATFORM=y
+ CONFIG_SPI=y
+diff --git a/arch/arm/include/debug/imx-uart.h b/arch/arm/include/debug/imx-uart.h
+index c8eb83d4b8964..3edbb3c5b42bf 100644
+--- a/arch/arm/include/debug/imx-uart.h
++++ b/arch/arm/include/debug/imx-uart.h
+@@ -11,13 +11,6 @@
+ #define IMX1_UART_BASE_ADDR(n)	IMX1_UART##n##_BASE_ADDR
+ #define IMX1_UART_BASE(n)	IMX1_UART_BASE_ADDR(n)
+ 
+-#define IMX21_UART1_BASE_ADDR	0x1000a000
+-#define IMX21_UART2_BASE_ADDR	0x1000b000
+-#define IMX21_UART3_BASE_ADDR	0x1000c000
+-#define IMX21_UART4_BASE_ADDR	0x1000d000
+-#define IMX21_UART_BASE_ADDR(n)	IMX21_UART##n##_BASE_ADDR
+-#define IMX21_UART_BASE(n)	IMX21_UART_BASE_ADDR(n)
+-
+ #define IMX25_UART1_BASE_ADDR	0x43f90000
+ #define IMX25_UART2_BASE_ADDR	0x43f94000
+ #define IMX25_UART3_BASE_ADDR	0x5000c000
+@@ -26,6 +19,13 @@
+ #define IMX25_UART_BASE_ADDR(n)	IMX25_UART##n##_BASE_ADDR
+ #define IMX25_UART_BASE(n)	IMX25_UART_BASE_ADDR(n)
+ 
++#define IMX27_UART1_BASE_ADDR	0x1000a000
++#define IMX27_UART2_BASE_ADDR	0x1000b000
++#define IMX27_UART3_BASE_ADDR	0x1000c000
++#define IMX27_UART4_BASE_ADDR	0x1000d000
++#define IMX27_UART_BASE_ADDR(n)	IMX27_UART##n##_BASE_ADDR
++#define IMX27_UART_BASE(n)	IMX27_UART_BASE_ADDR(n)
++
+ #define IMX31_UART1_BASE_ADDR	0x43f90000
+ #define IMX31_UART2_BASE_ADDR	0x43f94000
+ #define IMX31_UART3_BASE_ADDR	0x5000c000
+@@ -112,10 +112,10 @@
+ 
+ #ifdef CONFIG_DEBUG_IMX1_UART
+ #define UART_PADDR	IMX_DEBUG_UART_BASE(IMX1)
+-#elif defined(CONFIG_DEBUG_IMX21_IMX27_UART)
+-#define UART_PADDR	IMX_DEBUG_UART_BASE(IMX21)
+ #elif defined(CONFIG_DEBUG_IMX25_UART)
+ #define UART_PADDR	IMX_DEBUG_UART_BASE(IMX25)
++#elif defined(CONFIG_DEBUG_IMX27_UART)
++#define UART_PADDR	IMX_DEBUG_UART_BASE(IMX27)
+ #elif defined(CONFIG_DEBUG_IMX31_UART)
+ #define UART_PADDR	IMX_DEBUG_UART_BASE(IMX31)
+ #elif defined(CONFIG_DEBUG_IMX35_UART)
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index ee949255ced3f..09ef73b99dd86 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -154,8 +154,10 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		return -ENODEV;
+ 
+ 	for_each_matching_node_and_match(np, rcar_gen2_quirk_match, &id) {
+-		if (!of_device_is_available(np))
++		if (!of_device_is_available(np)) {
++			of_node_put(np);
+ 			break;
++		}
+ 
+ 		ret = of_property_read_u32(np, "reg", &addr);
+ 		if (ret)	/* Skip invalid entry and continue */
+@@ -164,6 +166,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		quirk = kzalloc(sizeof(*quirk), GFP_KERNEL);
+ 		if (!quirk) {
+ 			ret = -ENOMEM;
++			of_node_put(np);
+ 			goto err_mem;
+ 		}
+ 
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index eeb6dc0ecf463..e59b41e9ab0c1 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -1199,7 +1199,8 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 
+ 	/* tmp2[0] = array, tmp2[1] = index */
+ 
+-	/* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
++	/*
++	 * if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
+ 	 *	goto out;
+ 	 * tail_call_cnt++;
+ 	 */
+@@ -1208,7 +1209,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	tc = arm_bpf_get_reg64(tcc, tmp, ctx);
+ 	emit(ARM_CMP_I(tc[0], hi), ctx);
+ 	_emit(ARM_COND_EQ, ARM_CMP_I(tc[1], lo), ctx);
+-	_emit(ARM_COND_HI, ARM_B(jmp_offset), ctx);
++	_emit(ARM_COND_CS, ARM_B(jmp_offset), ctx);
+ 	emit(ARM_ADDS_I(tc[1], tc[1], 1), ctx);
+ 	emit(ARM_ADC_I(tc[0], tc[0], 0), ctx);
+ 	arm_bpf_put_reg64(tcc, tmp, ctx);
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 00c6f53290d43..428449d98c0ae 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -58,7 +58,7 @@
+ 		secure-monitor = <&sm>;
+ 	};
+ 
+-	gpu_opp_table: gpu-opp-table {
++	gpu_opp_table: opp-table-gpu {
+ 		compatible = "operating-points-v2";
+ 
+ 		opp-124999998 {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index e8a00a2f88128..3e968b2441918 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -609,7 +609,7 @@
+ 	pinctrl-0 = <&nor_pins>;
+ 	pinctrl-names = "default";
+ 
+-	mx25u64: spi-flash@0 {
++	mx25u64: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "mxicy,mx25u6435f", "jedec,spi-nor";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+index a350fee1264d7..a4d34398da358 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include "meson-gxbb.dtsi"
++#include <dt-bindings/gpio/gpio.h>
+ 
+ / {
+ 	aliases {
+@@ -64,6 +65,7 @@
+ 		regulator-name = "VDDIO_AO18";
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <1800000>;
++		regulator-always-on;
+ 	};
+ 
+ 	vcc_3v3: regulator-vcc_3v3 {
+@@ -161,6 +163,7 @@
+ 	status = "okay";
+ 	pinctrl-0 = <&hdmi_hpd_pins>, <&hdmi_i2c_pins>;
+ 	pinctrl-names = "default";
++	hdmi-supply = <&vddio_ao18>;
+ };
+ 
+ &hdmi_tx_tmds_port {
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+index 6e2a1da662fb4..4597848598df0 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+@@ -272,11 +272,6 @@
+ 				vcc-supply = <&sb_3v3>;
+ 			};
+ 
+-			rtc@51 {
+-				compatible = "nxp,pcf2129";
+-				reg = <0x51>;
+-			};
+-
+ 			eeprom@56 {
+ 				compatible = "atmel,24c512";
+ 				reg = <0x56>;
+@@ -318,6 +313,15 @@
+ 
+ };
+ 
++&i2c1 {
++	status = "okay";
++
++	rtc@51 {
++		compatible = "nxp,pcf2129";
++		reg = <0x51>;
++	};
++};
++
+ &enetc_port1 {
+ 	phy-handle = <&qds_phy1>;
+ 	phy-mode = "rgmii-id";
+diff --git a/arch/arm64/boot/dts/marvell/cn9130.dtsi b/arch/arm64/boot/dts/marvell/cn9130.dtsi
+index a2b7e5ec979d3..327b04134134f 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130.dtsi
+@@ -11,6 +11,13 @@
+ 	model = "Marvell Armada CN9130 SoC";
+ 	compatible = "marvell,cn9130", "marvell,armada-ap807-quad",
+ 		     "marvell,armada-ap807";
++
++	aliases {
++		gpio1 = &cp0_gpio1;
++		gpio2 = &cp0_gpio2;
++		spi1 = &cp0_spi0;
++		spi2 = &cp0_spi1;
++	};
+ };
+ 
+ /*
+@@ -35,3 +42,11 @@
+ #undef CP11X_PCIE0_BASE
+ #undef CP11X_PCIE1_BASE
+ #undef CP11X_PCIE2_BASE
++
++&cp0_gpio1 {
++	status = "okay";
++};
++
++&cp0_gpio2 {
++	status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+index 9ac4f0140700f..8ab83b4ac0373 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+@@ -1199,7 +1199,7 @@
+ 
+ 	ccplex@e000000 {
+ 		compatible = "nvidia,tegra186-ccplex-cluster";
+-		reg = <0x0 0x0e000000 0x0 0x3fffff>;
++		reg = <0x0 0x0e000000 0x0 0x400000>;
+ 
+ 		nvidia,bpmp = <&bpmp>;
+ 	};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 851e049b3519c..dcc0e55d6bdbb 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -976,9 +976,8 @@
+ 				 <&bpmp TEGRA194_CLK_HDA2CODEC_2X>;
+ 			clock-names = "hda", "hda2hdmi", "hda2codec_2x";
+ 			resets = <&bpmp TEGRA194_RESET_HDA>,
+-				 <&bpmp TEGRA194_RESET_HDA2HDMICODEC>,
+-				 <&bpmp TEGRA194_RESET_HDA2CODEC_2X>;
+-			reset-names = "hda", "hda2hdmi", "hda2codec_2x";
++				 <&bpmp TEGRA194_RESET_HDA2HDMICODEC>;
++			reset-names = "hda", "hda2hdmi";
+ 			power-domains = <&bpmp TEGRA194_POWER_DOMAIN_DISP>;
+ 			interconnects = <&mc TEGRA194_MEMORY_CLIENT_HDAR &emc>,
+ 					<&mc TEGRA194_MEMORY_CLIENT_HDAW &emc>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 933b56103a464..66ec5615651d4 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -220,7 +220,7 @@
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-controller;
+ 			#gpio-cells = <2>;
+-			gpio-ranges = <&tlmm 0 80>;
++			gpio-ranges = <&tlmm 0 0 80>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index c1c42f26b61e0..8be601275e9b4 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -19,8 +19,8 @@
+ 	#size-cells = <2>;
+ 
+ 	aliases {
+-		sdhc1 = &sdhc_1; /* SDC1 eMMC slot */
+-		sdhc2 = &sdhc_2; /* SDC2 SD card slot */
++		mmc0 = &sdhc_1; /* SDC1 eMMC slot */
++		mmc1 = &sdhc_2; /* SDC2 SD card slot */
+ 	};
+ 
+ 	chosen { };
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index bccc2d0b35a84..1ac78d9909abf 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -987,9 +987,6 @@
+ 			nvmem-cells = <&speedbin_efuse>;
+ 			nvmem-cell-names = "speed_bin";
+ 
+-			qcom,gpu-quirk-two-pass-use-wfi;
+-			qcom,gpu-quirk-fault-detect-mask;
+-
+ 			operating-points-v2 = <&gpu_opp_table>;
+ 
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 365a2e04e285b..6e27a1beaa33a 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -576,7 +576,7 @@
+ 				 <&rpmhcc RPMH_CXO_CLK_A>, <&sleep_clk>,
+ 				 <0>, <0>, <0>, <0>, <0>, <0>;
+ 			clock-names = "bi_tcxo", "bi_tcxo_ao", "sleep_clk",
+-				      "pcie_0_pipe_clk", "pcie_1_pipe-clk",
++				      "pcie_0_pipe_clk", "pcie_1_pipe_clk",
+ 				      "ufs_phy_rx_symbol_0_clk", "ufs_phy_rx_symbol_1_clk",
+ 				      "ufs_phy_tx_symbol_0_clk",
+ 				      "usb3_phy_wrapper_gcc_usb30_pipe_clk";
+@@ -1592,10 +1592,10 @@
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map = <0 0 0 1 &intc 0 434 IRQ_TYPE_LEVEL_HIGH>,
+-					<0 0 0 2 &intc 0 435 IRQ_TYPE_LEVEL_HIGH>,
+-					<0 0 0 3 &intc 0 438 IRQ_TYPE_LEVEL_HIGH>,
+-					<0 0 0 4 &intc 0 439 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-map = <0 0 0 1 &intc 0 0 0 434 IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 2 &intc 0 0 0 435 IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 3 &intc 0 0 0 438 IRQ_TYPE_LEVEL_HIGH>,
++					<0 0 0 4 &intc 0 0 0 439 IRQ_TYPE_LEVEL_HIGH>;
+ 
+ 			clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
+ 				 <&gcc GCC_PCIE_1_PIPE_CLK_SRC>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index d6b2ba4396f68..2e882a977e2c4 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -523,6 +523,10 @@
+ 	dai@1 {
+ 		reg = <1>;
+ 	};
++
++	dai@2 {
++		reg = <2>;
++	};
+ };
+ 
+ &sound {
+@@ -535,6 +539,7 @@
+ 		"SpkrLeft IN", "SPK1 OUT",
+ 		"SpkrRight IN", "SPK2 OUT",
+ 		"MM_DL1",  "MultiMedia1 Playback",
++		"MM_DL3",  "MultiMedia3 Playback",
+ 		"MultiMedia2 Capture", "MM_UL2";
+ 
+ 	mm1-dai-link {
+@@ -551,6 +556,13 @@
+ 		};
+ 	};
+ 
++	mm3-dai-link {
++		link-name = "MultiMedia3";
++		cpu {
++			sound-dai = <&q6asmdai  MSM_FRONTEND_DAI_MULTIMEDIA3>;
++		};
++	};
++
+ 	slim-dai-link {
+ 		link-name = "SLIM Playback";
+ 		cpu {
+@@ -580,6 +592,21 @@
+ 			sound-dai = <&wcd9340 1>;
+ 		};
+ 	};
++
++	slim-wcd-dai-link {
++		link-name = "SLIM WCD Playback";
++		cpu {
++			sound-dai = <&q6afedai SLIMBUS_1_RX>;
++		};
++
++		platform {
++			sound-dai = <&q6routing>;
++		};
++
++		codec {
++			sound-dai =  <&wcd9340 2>;
++		};
++	};
+ };
+ 
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 973e18fe3b674..cd55797facf69 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -631,7 +631,7 @@
+ 			reg = <0 0x0c263000 0 0x1ff>, /* TM */
+ 			      <0 0x0c222000 0 0x8>; /* SROT */
+ 			#qcom,sensors = <16>;
+-			interrupts = <&pdc 26 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&pdc 26 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <&pdc 28 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "uplow", "critical";
+ 			#thermal-sensor-cells = <1>;
+@@ -642,7 +642,7 @@
+ 			reg = <0 0x0c265000 0 0x1ff>, /* TM */
+ 			      <0 0x0c223000 0 0x8>; /* SROT */
+ 			#qcom,sensors = <16>;
+-			interrupts = <&pdc 27 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&pdc 27 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <&pdc 29 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "uplow", "critical";
+ 			#thermal-sensor-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index d134280e29390..c13858cf50dd2 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -910,7 +910,7 @@
+ 			reg = <0 0x0c263000 0 0x1ff>, /* TM */
+ 			      <0 0x0c222000 0 0x8>; /* SROT */
+ 			#qcom,sensors = <15>;
+-			interrupts = <&pdc 26 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&pdc 26 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <&pdc 28 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "uplow", "critical";
+ 			#thermal-sensor-cells = <1>;
+@@ -921,7 +921,7 @@
+ 			reg = <0 0x0c265000 0 0x1ff>, /* TM */
+ 			      <0 0x0c223000 0 0x8>; /* SROT */
+ 			#qcom,sensors = <14>;
+-			interrupts = <&pdc 27 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&pdc 27 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <&pdc 29 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "uplow", "critical";
+ 			#thermal-sensor-cells = <1>;
+@@ -2447,7 +2447,7 @@
+ 			};
+ 		};
+ 
+-		camera-thermal-bottom {
++		cam-thermal-bottom {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 
+diff --git a/arch/arm64/boot/dts/renesas/cat875.dtsi b/arch/arm64/boot/dts/renesas/cat875.dtsi
+index a69d24e9c61db..8c9da8b4bd60b 100644
+--- a/arch/arm64/boot/dts/renesas/cat875.dtsi
++++ b/arch/arm64/boot/dts/renesas/cat875.dtsi
+@@ -18,6 +18,7 @@
+ 	pinctrl-names = "default";
+ 	renesas,no-ether-link;
+ 	phy-handle = <&phy0>;
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index 6f4fffacfca21..e70aa5a087402 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -2784,7 +2784,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -2799,7 +2799,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -2814,7 +2814,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+index 0f7bdfc90a0dc..6c5694fa66900 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+@@ -2629,7 +2629,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -2644,7 +2644,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -2659,7 +2659,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774e1.dtsi b/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
+index 379a1300272ba..62209ab6deb9a 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774e1.dtsi
+@@ -2904,7 +2904,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -2919,7 +2919,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -2934,7 +2934,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77951.dtsi b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+index 1768a3e6bb8da..193d81be40fc4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77951.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+@@ -3375,7 +3375,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -3390,7 +3390,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -3405,7 +3405,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+index 2bd8169735d35..b526e4f0ee6a8 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+@@ -2972,7 +2972,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -2987,7 +2987,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -3002,7 +3002,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77961.dtsi b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+index 86d59e7e1a876..b1a00f5df4311 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77961.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+@@ -2730,7 +2730,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -2745,7 +2745,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -2760,7 +2760,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index 08df75606430b..f9679a4dd85fa 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -2784,7 +2784,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -2799,7 +2799,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -2814,7 +2814,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+index 6347d15e66b64..21fe602bd25af 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+@@ -1580,7 +1580,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		thermal-sensor-1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -1599,7 +1599,7 @@
+ 			};
+ 		};
+ 
+-		thermal-sensor-2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+index 43bf2cbfbd8f7..770a23b769d86 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+@@ -2607,7 +2607,7 @@
+ 	};
+ 
+ 	thermal-zones {
+-		sensor_thermal1: sensor-thermal1 {
++		sensor1_thermal: sensor1-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 0>;
+@@ -2621,7 +2621,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal2: sensor-thermal2 {
++		sensor2_thermal: sensor2-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 1>;
+@@ -2635,7 +2635,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal3: sensor-thermal3 {
++		sensor3_thermal: sensor3-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 2>;
+@@ -2649,7 +2649,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal4: sensor-thermal4 {
++		sensor4_thermal: sensor4-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 3>;
+@@ -2663,7 +2663,7 @@
+ 			};
+ 		};
+ 
+-		sensor_thermal5: sensor-thermal5 {
++		sensor5_thermal: sensor5-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <1000>;
+ 			thermal-sensors = <&tsc 4>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b-plus.dts b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b-plus.dts
+index dfad13d2ab249..5bd2b8db3d51a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b-plus.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b-plus.dts
+@@ -35,13 +35,16 @@
+ 	status = "okay";
+ 
+ 	bluetooth {
+-		compatible = "brcm,bcm43438-bt";
++		compatible = "brcm,bcm4345c5";
+ 		clocks = <&rk808 1>;
+-		clock-names = "ext_clock";
++		clock-names = "lpo";
+ 		device-wakeup-gpios = <&gpio2 RK_PD3 GPIO_ACTIVE_HIGH>;
+ 		host-wakeup-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_HIGH>;
+ 		shutdown-gpios = <&gpio0 RK_PB1 GPIO_ACTIVE_HIGH>;
++		max-speed = <1500000>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&bt_host_wake_l &bt_wake_l &bt_enable_h>;
++		vbat-supply = <&vcc3v3_sys>;
++		vddio-supply = <&vcc_1v8>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b.dts b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b.dts
+index 6c63e617063c9..cf48746a3ad81 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4b.dts
+@@ -34,13 +34,16 @@
+ 	status = "okay";
+ 
+ 	bluetooth {
+-		compatible = "brcm,bcm43438-bt";
++		compatible = "brcm,bcm4345c5";
+ 		clocks = <&rk808 1>;
+-		clock-names = "ext_clock";
++		clock-names = "lpo";
+ 		device-wakeup-gpios = <&gpio2 RK_PD3 GPIO_ACTIVE_HIGH>;
+ 		host-wakeup-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_HIGH>;
+ 		shutdown-gpios = <&gpio0 RK_PB1 GPIO_ACTIVE_HIGH>;
++		max-speed = <1500000>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&bt_host_wake_l &bt_wake_l &bt_enable_h>;
++		vbat-supply = <&vcc3v3_sys>;
++		vddio-supply = <&vcc_1v8>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4c.dts b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4c.dts
+index 99169bcd51c03..57ddf55ee6930 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4c.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4c.dts
+@@ -35,14 +35,17 @@
+ 	status = "okay";
+ 
+ 	bluetooth {
+-		compatible = "brcm,bcm43438-bt";
++		compatible = "brcm,bcm4345c5";
+ 		clocks = <&rk808 1>;
+-		clock-names = "ext_clock";
++		clock-names = "lpo";
+ 		device-wakeup-gpios = <&gpio2 RK_PD3 GPIO_ACTIVE_HIGH>;
+ 		host-wakeup-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_HIGH>;
+ 		shutdown-gpios = <&gpio0 RK_PB1 GPIO_ACTIVE_HIGH>;
++		max-speed = <1500000>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&bt_host_wake_l &bt_wake_l &bt_enable_h>;
++		vbat-supply = <&vcc3v3_sys>;
++		vddio-supply = <&vcc_1v8>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am642.dtsi b/arch/arm64/boot/dts/ti/k3-am642.dtsi
+index e2b397c884018..8a76f4821b11b 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am642.dtsi
+@@ -60,6 +60,6 @@
+ 		cache-level = <2>;
+ 		cache-size = <0x40000>;
+ 		cache-line-size = <64>;
+-		cache-sets = <512>;
++		cache-sets = <256>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index d60ef4f7dd0b7..05a627ad6cdc4 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -32,7 +32,7 @@
+ 		#size-cells = <1>;
+ 		ranges = <0x00 0x00 0x00100000 0x1c000>;
+ 
+-		serdes_ln_ctrl: serdes-ln-ctrl@4080 {
++		serdes_ln_ctrl: mux-controller@4080 {
+ 			compatible = "mmio-mux";
+ 			#mux-control-cells = <1>;
+ 			mux-reg-masks = <0x4080 0x3>, <0x4084 0x3>, /* SERDES0 lane0/1 select */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200.dtsi b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+index 47567cb260c2b..64fef4e67d76a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+@@ -62,7 +62,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 
+@@ -76,7 +76,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 	};
+@@ -86,7 +86,7 @@
+ 		cache-level = <2>;
+ 		cache-size = <0x100000>;
+ 		cache-line-size = <64>;
+-		cache-sets = <2048>;
++		cache-sets = <1024>;
+ 		next-level-cache = <&msmc_l3>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index 08c8d1b47dcd9..e85c89eebfa31 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -42,7 +42,7 @@
+ 		#size-cells = <1>;
+ 		ranges = <0x0 0x0 0x00100000 0x1c000>;
+ 
+-		serdes_ln_ctrl: mux@4080 {
++		serdes_ln_ctrl: mux-controller@4080 {
+ 			compatible = "mmio-mux";
+ 			reg = <0x00004080 0x50>;
+ 			#mux-control-cells = <1>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e.dtsi b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+index 214359e7288b2..4a3872fce5339 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+@@ -64,7 +64,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 
+@@ -78,7 +78,7 @@
+ 			i-cache-sets = <256>;
+ 			d-cache-size = <0x8000>;
+ 			d-cache-line-size = <64>;
+-			d-cache-sets = <128>;
++			d-cache-sets = <256>;
+ 			next-level-cache = <&L2_0>;
+ 		};
+ 	};
+@@ -88,7 +88,7 @@
+ 		cache-level = <2>;
+ 		cache-size = <0x100000>;
+ 		cache-line-size = <64>;
+-		cache-sets = <2048>;
++		cache-sets = <1024>;
+ 		next-level-cache = <&msmc_l3>;
+ 	};
+ 
+diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
+index 478b9bcf69ad1..e4704a403237e 100644
+--- a/arch/arm64/include/asm/mte-kasan.h
++++ b/arch/arm64/include/asm/mte-kasan.h
+@@ -84,10 +84,12 @@ static inline void __dc_gzva(u64 p)
+ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
+ 					 bool init)
+ {
+-	u64 curr, mask, dczid_bs, end1, end2, end3;
++	u64 curr, mask, dczid, dczid_bs, dczid_dzp, end1, end2, end3;
+ 
+ 	/* Read DC G(Z)VA block size from the system register. */
+-	dczid_bs = 4ul << (read_cpuid(DCZID_EL0) & 0xf);
++	dczid = read_cpuid(DCZID_EL0);
++	dczid_bs = 4ul << (dczid & 0xf);
++	dczid_dzp = (dczid >> 4) & 1;
+ 
+ 	curr = (u64)__tag_set(addr, tag);
+ 	mask = dczid_bs - 1;
+@@ -106,7 +108,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
+ 	 */
+ #define SET_MEMTAG_RANGE(stg_post, dc_gva)		\
+ 	do {						\
+-		if (size >= 2 * dczid_bs) {		\
++		if (!dczid_dzp && size >= 2 * dczid_bs) {\
+ 			do {				\
+ 				curr = stg_post(curr);	\
+ 			} while (curr < end1);		\
+diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
+index b5ec010c481f3..309a27553c875 100644
+--- a/arch/arm64/kernel/module.c
++++ b/arch/arm64/kernel/module.c
+@@ -36,7 +36,7 @@ void *module_alloc(unsigned long size)
+ 		module_alloc_end = MODULES_END;
+ 
+ 	p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+-				module_alloc_end, gfp_mask, PAGE_KERNEL, 0,
++				module_alloc_end, gfp_mask, PAGE_KERNEL, VM_DEFER_KMEMLEAK,
+ 				NUMA_NO_NODE, __builtin_return_address(0));
+ 
+ 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
+@@ -58,7 +58,7 @@ void *module_alloc(unsigned long size)
+ 				PAGE_KERNEL, 0, NUMA_NO_NODE,
+ 				__builtin_return_address(0));
+ 
+-	if (p && (kasan_module_alloc(p, size) < 0)) {
++	if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
+ 		vfree(p);
+ 		return NULL;
+ 	}
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index aacf2f5559a8b..271d4bbf468e3 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -439,34 +439,26 @@ static void entry_task_switch(struct task_struct *next)
+ 
+ /*
+  * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT.
+- * Assuming the virtual counter is enabled at the beginning of times:
+- *
+- * - disable access when switching from a 64bit task to a 32bit task
+- * - enable access when switching from a 32bit task to a 64bit task
++ * Ensure access is disabled when switching to a 32bit task, ensure
++ * access is enabled when switching to a 64bit task.
+  */
+-static void erratum_1418040_thread_switch(struct task_struct *prev,
+-					  struct task_struct *next)
++static void erratum_1418040_thread_switch(struct task_struct *next)
+ {
+-	bool prev32, next32;
+-	u64 val;
+-
+-	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040))
+-		return;
+-
+-	prev32 = is_compat_thread(task_thread_info(prev));
+-	next32 = is_compat_thread(task_thread_info(next));
+-
+-	if (prev32 == next32 || !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
++	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) ||
++	    !this_cpu_has_cap(ARM64_WORKAROUND_1418040))
+ 		return;
+ 
+-	val = read_sysreg(cntkctl_el1);
+-
+-	if (!next32)
+-		val |= ARCH_TIMER_USR_VCT_ACCESS_EN;
++	if (is_compat_thread(task_thread_info(next)))
++		sysreg_clear_set(cntkctl_el1, ARCH_TIMER_USR_VCT_ACCESS_EN, 0);
+ 	else
+-		val &= ~ARCH_TIMER_USR_VCT_ACCESS_EN;
++		sysreg_clear_set(cntkctl_el1, 0, ARCH_TIMER_USR_VCT_ACCESS_EN);
++}
+ 
+-	write_sysreg(val, cntkctl_el1);
++static void erratum_1418040_new_exec(void)
++{
++	preempt_disable();
++	erratum_1418040_thread_switch(current);
++	preempt_enable();
+ }
+ 
+ /*
+@@ -501,7 +493,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+ 	contextidr_thread_switch(next);
+ 	entry_task_switch(next);
+ 	ssbs_thread_switch(next);
+-	erratum_1418040_thread_switch(prev, next);
++	erratum_1418040_thread_switch(next);
+ 	ptrauth_thread_switch_user(next);
+ 
+ 	/*
+@@ -611,6 +603,7 @@ void arch_setup_new_exec(void)
+ 	current->mm->context.flags = mmflags;
+ 	ptrauth_thread_init_user();
+ 	mte_thread_init_user();
++	erratum_1418040_new_exec();
+ 
+ 	if (task_spec_ssb_noexec(current)) {
+ 		arch_prctl_spec_ctrl_set(current, PR_SPEC_STORE_BYPASS,
+diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
+index b84b179edba3a..1fd5d790ab800 100644
+--- a/arch/arm64/lib/clear_page.S
++++ b/arch/arm64/lib/clear_page.S
+@@ -16,6 +16,7 @@
+  */
+ SYM_FUNC_START_PI(clear_page)
+ 	mrs	x1, dczid_el0
++	tbnz	x1, #4, 2f	/* Branch if DC ZVA is prohibited */
+ 	and	w1, w1, #0xf
+ 	mov	x2, #4
+ 	lsl	x1, x2, x1
+@@ -25,5 +26,14 @@ SYM_FUNC_START_PI(clear_page)
+ 	tst	x0, #(PAGE_SIZE - 1)
+ 	b.ne	1b
+ 	ret
++
++2:	stnp	xzr, xzr, [x0]
++	stnp	xzr, xzr, [x0, #16]
++	stnp	xzr, xzr, [x0, #32]
++	stnp	xzr, xzr, [x0, #48]
++	add	x0, x0, #64
++	tst	x0, #(PAGE_SIZE - 1)
++	b.ne	2b
++	ret
+ SYM_FUNC_END_PI(clear_page)
+ EXPORT_SYMBOL(clear_page)
+diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
+index e83643b3995f4..f531dcb95174a 100644
+--- a/arch/arm64/lib/mte.S
++++ b/arch/arm64/lib/mte.S
+@@ -43,17 +43,23 @@ SYM_FUNC_END(mte_clear_page_tags)
+  *	x0 - address to the beginning of the page
+  */
+ SYM_FUNC_START(mte_zero_clear_page_tags)
++	and	x0, x0, #(1 << MTE_TAG_SHIFT) - 1	// clear the tag
+ 	mrs	x1, dczid_el0
++	tbnz	x1, #4, 2f	// Branch if DC GZVA is prohibited
+ 	and	w1, w1, #0xf
+ 	mov	x2, #4
+ 	lsl	x1, x2, x1
+-	and	x0, x0, #(1 << MTE_TAG_SHIFT) - 1	// clear the tag
+ 
+ 1:	dc	gzva, x0
+ 	add	x0, x0, x1
+ 	tst	x0, #(PAGE_SIZE - 1)
+ 	b.ne	1b
+ 	ret
++
++2:	stz2g	x0, [x0], #(MTE_GRANULE_SIZE * 2)
++	tst	x0, #(PAGE_SIZE - 1)
++	b.ne	2b
++	ret
+ SYM_FUNC_END(mte_zero_clear_page_tags)
+ 
+ /*
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 3a8a7140a9bfb..1090a957b3abc 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -287,13 +287,14 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	emit(A64_CMP(0, r3, tmp), ctx);
+ 	emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+ 
+-	/* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
++	/*
++	 * if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
+ 	 *     goto out;
+ 	 * tail_call_cnt++;
+ 	 */
+ 	emit_a64_mov_i64(tmp, MAX_TAIL_CALL_CNT, ctx);
+ 	emit(A64_CMP(1, tcc, tmp), ctx);
+-	emit(A64_B_(A64_COND_HI, jmp_offset), ctx);
++	emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+ 	emit(A64_ADD_I(1, tcc, tcc, 1), ctx);
+ 
+ 	/* prog = array->ptrs[index];
+@@ -791,7 +792,10 @@ emit_cond_jmp:
+ 		u64 imm64;
+ 
+ 		imm64 = (u64)insn1.imm << 32 | (u32)imm;
+-		emit_a64_mov_i64(dst, imm64, ctx);
++		if (bpf_pseudo_func(insn))
++			emit_addr_mov_i64(dst, imm64, ctx);
++		else
++			emit_a64_mov_i64(dst, imm64, ctx);
+ 
+ 		return 1;
+ 	}
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 0215dc1529e9a..c5826236d913a 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -1907,6 +1907,10 @@ config SYS_HAS_CPU_MIPS64_R1
+ config SYS_HAS_CPU_MIPS64_R2
+ 	bool
+ 
++config SYS_HAS_CPU_MIPS64_R5
++	bool
++	select ARCH_HAS_SYNC_DMA_FOR_CPU if DMA_NONCOHERENT
++
+ config SYS_HAS_CPU_MIPS64_R6
+ 	bool
+ 	select ARCH_HAS_SYNC_DMA_FOR_CPU if DMA_NONCOHERENT
+@@ -2065,7 +2069,7 @@ config CPU_SUPPORTS_ADDRWINCFG
+ 	bool
+ config CPU_SUPPORTS_HUGEPAGES
+ 	bool
+-	depends on !(32BIT && (ARCH_PHYS_ADDR_T_64BIT || EVA))
++	depends on !(32BIT && (PHYS_ADDR_T_64BIT || EVA))
+ config MIPS_PGD_C0_CONTEXT
+ 	bool
+ 	depends on 64BIT
+diff --git a/arch/mips/bcm63xx/clk.c b/arch/mips/bcm63xx/clk.c
+index 1c91064cb448b..6e6756e8fa0a9 100644
+--- a/arch/mips/bcm63xx/clk.c
++++ b/arch/mips/bcm63xx/clk.c
+@@ -387,6 +387,12 @@ struct clk *clk_get_parent(struct clk *clk)
+ }
+ EXPORT_SYMBOL(clk_get_parent);
+ 
++int clk_set_parent(struct clk *clk, struct clk *parent)
++{
++	return 0;
++}
++EXPORT_SYMBOL(clk_set_parent);
++
+ unsigned long clk_get_rate(struct clk *clk)
+ {
+ 	if (!clk)
+diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
+index f27cf31b41401..38e233f7fd7a4 100644
+--- a/arch/mips/boot/compressed/Makefile
++++ b/arch/mips/boot/compressed/Makefile
+@@ -52,7 +52,7 @@ endif
+ 
+ vmlinuzobjs-$(CONFIG_KERNEL_XZ) += $(obj)/ashldi3.o
+ 
+-vmlinuzobjs-$(CONFIG_KERNEL_ZSTD) += $(obj)/bswapdi.o $(obj)/ashldi3.o
++vmlinuzobjs-$(CONFIG_KERNEL_ZSTD) += $(obj)/bswapdi.o $(obj)/ashldi3.o $(obj)/clz_ctz.o
+ 
+ targets := $(notdir $(vmlinuzobjs-y))
+ 
+diff --git a/arch/mips/boot/compressed/clz_ctz.c b/arch/mips/boot/compressed/clz_ctz.c
+new file mode 100644
+index 0000000000000..b4a1b6eb2f8ad
+--- /dev/null
++++ b/arch/mips/boot/compressed/clz_ctz.c
+@@ -0,0 +1,2 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include "../../../../lib/clz_ctz.c"
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index d56e9b9d2e434..a994022e32c9f 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -328,6 +328,7 @@ static int __init octeon_ehci_device_init(void)
+ 
+ 	pd->dev.platform_data = &octeon_ehci_pdata;
+ 	octeon_ehci_hw_start(&pd->dev);
++	put_device(&pd->dev);
+ 
+ 	return ret;
+ }
+@@ -391,6 +392,7 @@ static int __init octeon_ohci_device_init(void)
+ 
+ 	pd->dev.platform_data = &octeon_ohci_pdata;
+ 	octeon_ohci_hw_start(&pd->dev);
++	put_device(&pd->dev);
+ 
+ 	return ret;
+ }
+diff --git a/arch/mips/cavium-octeon/octeon-usb.c b/arch/mips/cavium-octeon/octeon-usb.c
+index 6e4d3619137af..4df919d26b082 100644
+--- a/arch/mips/cavium-octeon/octeon-usb.c
++++ b/arch/mips/cavium-octeon/octeon-usb.c
+@@ -537,6 +537,7 @@ static int __init dwc3_octeon_device_init(void)
+ 			devm_iounmap(&pdev->dev, base);
+ 			devm_release_mem_region(&pdev->dev, res->start,
+ 						resource_size(res));
++			put_device(&pdev->dev);
+ 		}
+ 	} while (node != NULL);
+ 
+diff --git a/arch/mips/configs/fuloong2e_defconfig b/arch/mips/configs/fuloong2e_defconfig
+index 5c24ac7fdf56d..ba47c5e929b7f 100644
+--- a/arch/mips/configs/fuloong2e_defconfig
++++ b/arch/mips/configs/fuloong2e_defconfig
+@@ -206,7 +206,6 @@ CONFIG_NFSD_V3_ACL=y
+ CONFIG_NFSD_V4=y
+ CONFIG_CIFS=m
+ CONFIG_CIFS_STATS2=y
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_CIFS_DEBUG2=y
+diff --git a/arch/mips/configs/malta_qemu_32r6_defconfig b/arch/mips/configs/malta_qemu_32r6_defconfig
+index 614af02d83e6e..6fb9bc29f4a03 100644
+--- a/arch/mips/configs/malta_qemu_32r6_defconfig
++++ b/arch/mips/configs/malta_qemu_32r6_defconfig
+@@ -165,7 +165,6 @@ CONFIG_TMPFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_CODEPAGE_437=m
+diff --git a/arch/mips/configs/maltaaprp_defconfig b/arch/mips/configs/maltaaprp_defconfig
+index 9c051f8fd3300..eb72df528243a 100644
+--- a/arch/mips/configs/maltaaprp_defconfig
++++ b/arch/mips/configs/maltaaprp_defconfig
+@@ -166,7 +166,6 @@ CONFIG_TMPFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_CODEPAGE_437=m
+diff --git a/arch/mips/configs/maltasmvp_defconfig b/arch/mips/configs/maltasmvp_defconfig
+index 2e90d97551d6f..1fb40d310f49c 100644
+--- a/arch/mips/configs/maltasmvp_defconfig
++++ b/arch/mips/configs/maltasmvp_defconfig
+@@ -167,7 +167,6 @@ CONFIG_TMPFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_CODEPAGE_437=m
+diff --git a/arch/mips/configs/maltasmvp_eva_defconfig b/arch/mips/configs/maltasmvp_eva_defconfig
+index d1f7fdb27284b..75cb778c61496 100644
+--- a/arch/mips/configs/maltasmvp_eva_defconfig
++++ b/arch/mips/configs/maltasmvp_eva_defconfig
+@@ -169,7 +169,6 @@ CONFIG_TMPFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_CODEPAGE_437=m
+diff --git a/arch/mips/configs/maltaup_defconfig b/arch/mips/configs/maltaup_defconfig
+index 48e5bd4924522..7b4f247dc60cc 100644
+--- a/arch/mips/configs/maltaup_defconfig
++++ b/arch/mips/configs/maltaup_defconfig
+@@ -165,7 +165,6 @@ CONFIG_TMPFS=y
+ CONFIG_NFS_FS=y
+ CONFIG_ROOT_NFS=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+ CONFIG_NLS_CODEPAGE_437=m
+diff --git a/arch/mips/include/asm/local.h b/arch/mips/include/asm/local.h
+index ecda7295ddcd1..3fa6340903882 100644
+--- a/arch/mips/include/asm/local.h
++++ b/arch/mips/include/asm/local.h
+@@ -5,6 +5,7 @@
+ #include <linux/percpu.h>
+ #include <linux/bitops.h>
+ #include <linux/atomic.h>
++#include <asm/asm.h>
+ #include <asm/cmpxchg.h>
+ #include <asm/compiler.h>
+ #include <asm/war.h>
+@@ -39,7 +40,7 @@ static __inline__ long local_add_return(long i, local_t * l)
+ 		"	.set	arch=r4000				\n"
+ 			__SYNC(full, loongson3_war) "			\n"
+ 		"1:"	__LL	"%1, %2		# local_add_return	\n"
+-		"	addu	%0, %1, %3				\n"
++			__stringify(LONG_ADDU)	"	%0, %1, %3	\n"
+ 			__SC	"%0, %2					\n"
+ 		"	beqzl	%0, 1b					\n"
+ 		"	addu	%0, %1, %3				\n"
+@@ -55,7 +56,7 @@ static __inline__ long local_add_return(long i, local_t * l)
+ 		"	.set	"MIPS_ISA_ARCH_LEVEL"			\n"
+ 			__SYNC(full, loongson3_war) "			\n"
+ 		"1:"	__LL	"%1, %2		# local_add_return	\n"
+-		"	addu	%0, %1, %3				\n"
++			__stringify(LONG_ADDU)	"	%0, %1, %3	\n"
+ 			__SC	"%0, %2					\n"
+ 		"	beqz	%0, 1b					\n"
+ 		"	addu	%0, %1, %3				\n"
+@@ -88,7 +89,7 @@ static __inline__ long local_sub_return(long i, local_t * l)
+ 		"	.set	arch=r4000				\n"
+ 			__SYNC(full, loongson3_war) "			\n"
+ 		"1:"	__LL	"%1, %2		# local_sub_return	\n"
+-		"	subu	%0, %1, %3				\n"
++			__stringify(LONG_SUBU)	"	%0, %1, %3	\n"
+ 			__SC	"%0, %2					\n"
+ 		"	beqzl	%0, 1b					\n"
+ 		"	subu	%0, %1, %3				\n"
+@@ -104,7 +105,7 @@ static __inline__ long local_sub_return(long i, local_t * l)
+ 		"	.set	"MIPS_ISA_ARCH_LEVEL"			\n"
+ 			__SYNC(full, loongson3_war) "			\n"
+ 		"1:"	__LL	"%1, %2		# local_sub_return	\n"
+-		"	subu	%0, %1, %3				\n"
++			__stringify(LONG_SUBU)	"	%0, %1, %3	\n"
+ 			__SC	"%0, %2					\n"
+ 		"	beqz	%0, 1b					\n"
+ 		"	subu	%0, %1, %3				\n"
+diff --git a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
+index 13373c5144f89..efb41b3519747 100644
+--- a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
++++ b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
+@@ -32,7 +32,7 @@
+ 	nop
+ 	/* Loongson-3A R2/R3 */
+ 	andi	t0, (PRID_IMP_MASK | PRID_REV_MASK)
+-	slti	t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
++	slti	t0, t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
+ 	bnez	t0, 2f
+ 	nop
+ 1:
+@@ -63,7 +63,7 @@
+ 	nop
+ 	/* Loongson-3A R2/R3 */
+ 	andi	t0, (PRID_IMP_MASK | PRID_REV_MASK)
+-	slti	t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
++	slti	t0, t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
+ 	bnez	t0, 2f
+ 	nop
+ 1:
+diff --git a/arch/mips/include/asm/octeon/cvmx-bootinfo.h b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
+index 0e6bf220db618..6c61e0a639249 100644
+--- a/arch/mips/include/asm/octeon/cvmx-bootinfo.h
++++ b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
+@@ -318,7 +318,7 @@ enum cvmx_chip_types_enum {
+ 
+ /* Functions to return string based on type */
+ #define ENUM_BRD_TYPE_CASE(x) \
+-	case x: return(#x + 16);	/* Skip CVMX_BOARD_TYPE_ */
++	case x: return (&#x[16]);	/* Skip CVMX_BOARD_TYPE_ */
+ static inline const char *cvmx_board_type_to_string(enum
+ 						    cvmx_board_types_enum type)
+ {
+@@ -410,7 +410,7 @@ static inline const char *cvmx_board_type_to_string(enum
+ }
+ 
+ #define ENUM_CHIP_TYPE_CASE(x) \
+-	case x: return(#x + 15);	/* Skip CVMX_CHIP_TYPE */
++	case x: return (&#x[15]);	/* Skip CVMX_CHIP_TYPE */
+ static inline const char *cvmx_chip_type_to_string(enum
+ 						   cvmx_chip_types_enum type)
+ {
+diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
+index 4916cccf378fd..7a623684d9b5e 100644
+--- a/arch/mips/lantiq/clk.c
++++ b/arch/mips/lantiq/clk.c
+@@ -164,6 +164,12 @@ struct clk *clk_get_parent(struct clk *clk)
+ }
+ EXPORT_SYMBOL(clk_get_parent);
+ 
++int clk_set_parent(struct clk *clk, struct clk *parent)
++{
++	return 0;
++}
++EXPORT_SYMBOL(clk_set_parent);
++
+ static inline u32 get_counter_resolution(void)
+ {
+ 	u32 res;
+diff --git a/arch/mips/net/bpf_jit_comp32.c b/arch/mips/net/bpf_jit_comp32.c
+index bd996ede12f8e..044b11b65bcac 100644
+--- a/arch/mips/net/bpf_jit_comp32.c
++++ b/arch/mips/net/bpf_jit_comp32.c
+@@ -1381,8 +1381,7 @@ void build_prologue(struct jit_context *ctx)
+ 	 * 16-byte area in the parent's stack frame. On a tail call, the
+ 	 * calling function jumps into the prologue after these instructions.
+ 	 */
+-	emit(ctx, ori, MIPS_R_T9, MIPS_R_ZERO,
+-	     min(MAX_TAIL_CALL_CNT + 1, 0xffff));
++	emit(ctx, ori, MIPS_R_T9, MIPS_R_ZERO, min(MAX_TAIL_CALL_CNT, 0xffff));
+ 	emit(ctx, sw, MIPS_R_T9, 0, MIPS_R_SP);
+ 
+ 	/*
+diff --git a/arch/mips/net/bpf_jit_comp64.c b/arch/mips/net/bpf_jit_comp64.c
+index 815ade7242278..6475828ffb36d 100644
+--- a/arch/mips/net/bpf_jit_comp64.c
++++ b/arch/mips/net/bpf_jit_comp64.c
+@@ -552,7 +552,7 @@ void build_prologue(struct jit_context *ctx)
+ 	 * On a tail call, the calling function jumps into the prologue
+ 	 * after this instruction.
+ 	 */
+-	emit(ctx, addiu, tc, MIPS_R_ZERO, min(MAX_TAIL_CALL_CNT + 1, 0xffff));
++	emit(ctx, ori, tc, MIPS_R_ZERO, min(MAX_TAIL_CALL_CNT, 0xffff));
+ 
+ 	/* === Entry-point for tail calls === */
+ 
+diff --git a/arch/openrisc/include/asm/syscalls.h b/arch/openrisc/include/asm/syscalls.h
+index 3a7eeae6f56a8..aa1c7e98722e3 100644
+--- a/arch/openrisc/include/asm/syscalls.h
++++ b/arch/openrisc/include/asm/syscalls.h
+@@ -22,9 +22,11 @@ asmlinkage long sys_or1k_atomic(unsigned long type, unsigned long *v1,
+ 
+ asmlinkage long __sys_clone(unsigned long clone_flags, unsigned long newsp,
+ 			void __user *parent_tid, void __user *child_tid, int tls);
++asmlinkage long __sys_clone3(struct clone_args __user *uargs, size_t size);
+ asmlinkage long __sys_fork(void);
+ 
+ #define sys_clone __sys_clone
++#define sys_clone3 __sys_clone3
+ #define sys_fork __sys_fork
+ 
+ #endif /* __ASM_OPENRISC_SYSCALLS_H */
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index 59c6d3aa7081e..dc5b45e9e72b5 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -1170,6 +1170,11 @@ ENTRY(__sys_clone)
+ 	l.j	_fork_save_extra_regs_and_call
+ 	 l.nop
+ 
++ENTRY(__sys_clone3)
++	l.movhi	r29,hi(sys_clone3)
++	l.j	_fork_save_extra_regs_and_call
++	 l.ori	r29,r29,lo(sys_clone3)
++
+ ENTRY(__sys_fork)
+ 	l.movhi	r29,hi(sys_fork)
+ 	l.ori	r29,r29,lo(sys_fork)
+diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h
+index a303ae9a77f41..16ee41e77174f 100644
+--- a/arch/parisc/include/asm/special_insns.h
++++ b/arch/parisc/include/asm/special_insns.h
+@@ -2,28 +2,32 @@
+ #ifndef __PARISC_SPECIAL_INSNS_H
+ #define __PARISC_SPECIAL_INSNS_H
+ 
+-#define lpa(va)	({			\
+-	unsigned long pa;		\
+-	__asm__ __volatile__(		\
+-		"copy %%r0,%0\n\t"	\
+-		"lpa %%r0(%1),%0"	\
+-		: "=r" (pa)		\
+-		: "r" (va)		\
+-		: "memory"		\
+-	);				\
+-	pa;				\
++#define lpa(va)	({					\
++	unsigned long pa;				\
++	__asm__ __volatile__(				\
++		"copy %%r0,%0\n"			\
++		"8:\tlpa %%r0(%1),%0\n"			\
++		"9:\n"					\
++		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b)	\
++		: "=&r" (pa)				\
++		: "r" (va)				\
++		: "memory"				\
++	);						\
++	pa;						\
+ })
+ 
+-#define lpa_user(va)	({		\
+-	unsigned long pa;		\
+-	__asm__ __volatile__(		\
+-		"copy %%r0,%0\n\t"	\
+-		"lpa %%r0(%%sr3,%1),%0"	\
+-		: "=r" (pa)		\
+-		: "r" (va)		\
+-		: "memory"		\
+-	);				\
+-	pa;				\
++#define lpa_user(va)	({				\
++	unsigned long pa;				\
++	__asm__ __volatile__(				\
++		"copy %%r0,%0\n"			\
++		"8:\tlpa %%r0(%%sr3,%1),%0\n"		\
++		"9:\n"					\
++		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b)	\
++		: "=&r" (pa)				\
++		: "r" (va)				\
++		: "memory"				\
++	);						\
++	pa;						\
+ })
+ 
+ #define mfctl(reg)	({		\
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 892b7fc8f3c45..eb41fece19104 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -785,7 +785,7 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
+ 	     * unless pagefault_disable() was called before.
+ 	     */
+ 
+-	    if (fault_space == 0 && !faulthandler_disabled())
++	    if (faulthandler_disabled() || fault_space == 0)
+ 	    {
+ 		/* Clean up and return if in exception table. */
+ 		if (fixup_exception(regs))
+diff --git a/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
+index c90702b04a530..48e5cd61599c6 100644
+--- a/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
++++ b/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
+@@ -79,6 +79,7 @@ fman0: fman@400000 {
+ 		#size-cells = <0>;
+ 		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+ 		reg = <0xfc000 0x1000>;
++		fsl,erratum-a009885;
+ 	};
+ 
+ 	xmdio0: mdio@fd000 {
+@@ -86,6 +87,7 @@ fman0: fman@400000 {
+ 		#size-cells = <0>;
+ 		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+ 		reg = <0xfd000 0x1000>;
++		fsl,erratum-a009885;
+ 	};
+ };
+ 
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index 6697c5e6682f1..bb549cb1c3e33 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -1022,7 +1022,6 @@ CONFIG_NFSD=m
+ CONFIG_NFSD_V3_ACL=y
+ CONFIG_NFSD_V4=y
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_CIFS_UPCALL=y
+ CONFIG_CIFS_XATTR=y
+ CONFIG_CIFS_POSIX=y
+diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
+index de7641adb899f..e64f2242abe15 100644
+--- a/arch/powerpc/configs/pseries_defconfig
++++ b/arch/powerpc/configs/pseries_defconfig
+@@ -189,7 +189,6 @@ CONFIG_HVCS=m
+ CONFIG_VIRTIO_CONSOLE=m
+ CONFIG_IBM_BSR=m
+ CONFIG_RAW_DRIVER=y
+-CONFIG_MAX_RAW_DEVS=1024
+ CONFIG_I2C_CHARDEV=y
+ CONFIG_FB=y
+ CONFIG_FIRMWARE_EDID=y
+diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
+index 21cc571ea9c2d..5c98a950eca0d 100644
+--- a/arch/powerpc/include/asm/hw_irq.h
++++ b/arch/powerpc/include/asm/hw_irq.h
+@@ -224,6 +224,42 @@ static inline bool arch_irqs_disabled(void)
+ 	return arch_irqs_disabled_flags(arch_local_save_flags());
+ }
+ 
++static inline void set_pmi_irq_pending(void)
++{
++	/*
++	 * Invoked from PMU callback functions to set PMI bit in the paca.
++	 * This has to be called with irq's disabled (via hard_irq_disable()).
++	 */
++	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
++		WARN_ON_ONCE(mfmsr() & MSR_EE);
++
++	get_paca()->irq_happened |= PACA_IRQ_PMI;
++}
++
++static inline void clear_pmi_irq_pending(void)
++{
++	/*
++	 * Invoked from PMU callback functions to clear the pending PMI bit
++	 * in the paca.
++	 */
++	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
++		WARN_ON_ONCE(mfmsr() & MSR_EE);
++
++	get_paca()->irq_happened &= ~PACA_IRQ_PMI;
++}
++
++static inline bool pmi_irq_pending(void)
++{
++	/*
++	 * Invoked from PMU callback functions to check if there is a pending
++	 * PMI bit in the paca.
++	 */
++	if (get_paca()->irq_happened & PACA_IRQ_PMI)
++		return true;
++
++	return false;
++}
++
+ #ifdef CONFIG_PPC_BOOK3S
+ /*
+  * To support disabling and enabling of irq with PMI, set of
+@@ -408,6 +444,10 @@ static inline void do_hard_irq_enable(void)
+ 	BUILD_BUG();
+ }
+ 
++static inline void clear_pmi_irq_pending(void) { }
++static inline void set_pmi_irq_pending(void) { }
++static inline bool pmi_irq_pending(void) { return false; }
++
+ static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs, unsigned long val)
+ {
+ }
+diff --git a/arch/powerpc/kernel/btext.c b/arch/powerpc/kernel/btext.c
+index 803c2a45b22ac..1cffb5e7c38d6 100644
+--- a/arch/powerpc/kernel/btext.c
++++ b/arch/powerpc/kernel/btext.c
+@@ -241,8 +241,10 @@ int __init btext_find_display(int allow_nonstdout)
+ 			rc = btext_initialize(np);
+ 			printk("result: %d\n", rc);
+ 		}
+-		if (rc == 0)
++		if (rc == 0) {
++			of_node_put(np);
+ 			break;
++		}
+ 	}
+ 	return rc;
+ }
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index b7ceb041743c9..60f5fc14aa235 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -1641,6 +1641,14 @@ int __init setup_fadump(void)
+ 	else if (fw_dump.reserve_dump_area_size)
+ 		fw_dump.ops->fadump_init_mem_struct(&fw_dump);
+ 
++	/*
++	 * In case of panic, fadump is triggered via ppc_panic_event()
++	 * panic notifier. Setting crash_kexec_post_notifiers to 'true'
++	 * lets panic() function take crash friendly path before panic
++	 * notifiers are invoked.
++	 */
++	crash_kexec_post_notifiers = true;
++
+ 	return 1;
+ }
+ subsys_initcall(setup_fadump);
+diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
+index 7d72ee5ab387c..e783860bea838 100644
+--- a/arch/powerpc/kernel/head_40x.S
++++ b/arch/powerpc/kernel/head_40x.S
+@@ -27,6 +27,7 @@
+ 
+ #include <linux/init.h>
+ #include <linux/pgtable.h>
++#include <linux/sizes.h>
+ #include <asm/processor.h>
+ #include <asm/page.h>
+ #include <asm/mmu.h>
+@@ -650,7 +651,7 @@ start_here:
+ 	b	.		/* prevent prefetch past rfi */
+ 
+ /* Set up the initial MMU state so we can do the first level of
+- * kernel initialization.  This maps the first 16 MBytes of memory 1:1
++ * kernel initialization.  This maps the first 32 MBytes of memory 1:1
+  * virtual to physical and more importantly sets the cache mode.
+  */
+ initial_mmu:
+@@ -687,6 +688,12 @@ initial_mmu:
+ 	tlbwe	r4,r0,TLB_DATA		/* Load the data portion of the entry */
+ 	tlbwe	r3,r0,TLB_TAG		/* Load the tag portion of the entry */
+ 
++	li	r0,62			/* TLB slot 62 */
++	addis	r4,r4,SZ_16M@h
++	addis	r3,r3,SZ_16M@h
++	tlbwe	r4,r0,TLB_DATA		/* Load the data portion of the entry */
++	tlbwe	r3,r0,TLB_TAG		/* Load the tag portion of the entry */
++
+ 	isync
+ 
+ 	/* Establish the exception vector base
+diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
+index 835b626cd4760..df048e331cbfe 100644
+--- a/arch/powerpc/kernel/interrupt.c
++++ b/arch/powerpc/kernel/interrupt.c
+@@ -148,7 +148,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
+ 	 */
+ 	if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
+ 			unlikely(MSR_TM_TRANSACTIONAL(regs->msr)))
+-		current_thread_info()->flags |= _TIF_RESTOREALL;
++		set_bits(_TIF_RESTOREALL, &current_thread_info()->flags);
+ 
+ 	/*
+ 	 * If the system call was made with a transaction active, doom it and
+diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
+index ec950b08a8dcc..4b1ff94e67eb4 100644
+--- a/arch/powerpc/kernel/interrupt_64.S
++++ b/arch/powerpc/kernel/interrupt_64.S
+@@ -30,21 +30,23 @@ COMPAT_SYS_CALL_TABLE:
+ 	.ifc \srr,srr
+ 	mfspr	r11,SPRN_SRR0
+ 	ld	r12,_NIP(r1)
++	clrrdi  r12,r12,2
+ 100:	tdne	r11,r12
+-	EMIT_BUG_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
++	EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
+ 	mfspr	r11,SPRN_SRR1
+ 	ld	r12,_MSR(r1)
+ 100:	tdne	r11,r12
+-	EMIT_BUG_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
++	EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
+ 	.else
+ 	mfspr	r11,SPRN_HSRR0
+ 	ld	r12,_NIP(r1)
++	clrrdi  r12,r12,2
+ 100:	tdne	r11,r12
+-	EMIT_BUG_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
++	EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
+ 	mfspr	r11,SPRN_HSRR1
+ 	ld	r12,_MSR(r1)
+ 100:	tdne	r11,r12
+-	EMIT_BUG_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
++	EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
+ 	.endif
+ #endif
+ .endm
+diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
+index ed04a3ba66fe8..40a583e9d3c70 100644
+--- a/arch/powerpc/kernel/module.c
++++ b/arch/powerpc/kernel/module.c
+@@ -90,16 +90,17 @@ int module_finalize(const Elf_Ehdr *hdr,
+ }
+ 
+ static __always_inline void *
+-__module_alloc(unsigned long size, unsigned long start, unsigned long end)
++__module_alloc(unsigned long size, unsigned long start, unsigned long end, bool nowarn)
+ {
+ 	pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC;
++	gfp_t gfp = GFP_KERNEL | (nowarn ? __GFP_NOWARN : 0);
+ 
+ 	/*
+ 	 * Don't do huge page allocations for modules yet until more testing
+ 	 * is done. STRICT_MODULE_RWX may require extra work to support this
+ 	 * too.
+ 	 */
+-	return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL, prot,
++	return __vmalloc_node_range(size, 1, start, end, gfp, prot,
+ 				    VM_FLUSH_RESET_PERMS | VM_NO_HUGE_VMAP,
+ 				    NUMA_NO_NODE, __builtin_return_address(0));
+ }
+@@ -114,13 +115,13 @@ void *module_alloc(unsigned long size)
+ 
+ 	/* First try within 32M limit from _etext to avoid branch trampolines */
+ 	if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit)
+-		ptr = __module_alloc(size, limit, MODULES_END);
++		ptr = __module_alloc(size, limit, MODULES_END, true);
+ 
+ 	if (!ptr)
+-		ptr = __module_alloc(size, MODULES_VADDR, MODULES_END);
++		ptr = __module_alloc(size, MODULES_VADDR, MODULES_END, false);
+ 
+ 	return ptr;
+ #else
+-	return __module_alloc(size, VMALLOC_START, VMALLOC_END);
++	return __module_alloc(size, VMALLOC_START, VMALLOC_END, false);
+ #endif
+ }
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index 18b04b08b9833..f845065c860e3 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -2991,7 +2991,7 @@ static void __init fixup_device_tree_efika_add_phy(void)
+ 
+ 	/* Check if the phy-handle property exists - bail if it does */
+ 	rv = prom_getprop(node, "phy-handle", prop, sizeof(prop));
+-	if (!rv)
++	if (rv <= 0)
+ 		return;
+ 
+ 	/*
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index c23ee842c4c33..c338f9d8ab37a 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -61,6 +61,7 @@
+ #include <asm/cpu_has_feature.h>
+ #include <asm/ftrace.h>
+ #include <asm/kup.h>
++#include <asm/fadump.h>
+ 
+ #ifdef DEBUG
+ #include <asm/udbg.h>
+@@ -620,6 +621,45 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *))
+ }
+ #endif
+ 
++#ifdef CONFIG_NMI_IPI
++static void crash_stop_this_cpu(struct pt_regs *regs)
++#else
++static void crash_stop_this_cpu(void *dummy)
++#endif
++{
++	/*
++	 * Just busy wait here and avoid marking CPU as offline to ensure
++	 * register data is captured appropriately.
++	 */
++	while (1)
++		cpu_relax();
++}
++
++void crash_smp_send_stop(void)
++{
++	static bool stopped = false;
++
++	/*
++	 * In case of fadump, register data for all CPUs is captured by f/w
++	 * on ibm,os-term rtas call. Skip IPI callbacks to other CPUs before
++	 * this rtas call to avoid tricky post processing of those CPUs'
++	 * backtraces.
++	 */
++	if (should_fadump_crash())
++		return;
++
++	if (stopped)
++		return;
++
++	stopped = true;
++
++#ifdef CONFIG_NMI_IPI
++	smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, crash_stop_this_cpu, 1000000);
++#else
++	smp_call_function(crash_stop_this_cpu, NULL, 0);
++#endif /* CONFIG_NMI_IPI */
++}
++
+ #ifdef CONFIG_NMI_IPI
+ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ {
+@@ -1635,10 +1675,12 @@ void start_secondary(void *unused)
+ 	BUG();
+ }
+ 
++#ifdef CONFIG_PROFILING
+ int setup_profiling_timer(unsigned int multiplier)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static void fixup_topology(void)
+ {
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index 3fa6d240bade2..ad94a2c6b7337 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -135,6 +135,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
+ {
+ 	cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
+ 	cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
++	/*
++	 * See wd_smp_clear_cpu_pending()
++	 */
++	smp_mb();
+ 	if (cpumask_empty(&wd_smp_cpus_pending)) {
+ 		wd_smp_last_reset_tb = tb;
+ 		cpumask_andnot(&wd_smp_cpus_pending,
+@@ -221,13 +225,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
+ 
+ 			cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
+ 			wd_smp_unlock(&flags);
++		} else {
++			/*
++			 * The last CPU to clear pending should have reset the
++			 * watchdog so we generally should not find it empty
++			 * here if our CPU was clear. However it could happen
++			 * due to a rare race with another CPU taking the
++			 * last CPU out of the mask concurrently.
++			 *
++			 * We can't add a warning for it. But just in case
++			 * there is a problem with the watchdog that is causing
++			 * the mask to not be reset, try to kick it along here.
++			 */
++			if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
++				goto none_pending;
+ 		}
+ 		return;
+ 	}
++
+ 	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
++
++	/*
++	 * Order the store to clear pending with the load(s) to check all
++	 * words in the pending mask to check they are all empty. This orders
++	 * with the same barrier on another CPU. This prevents two CPUs
++	 * clearing the last 2 pending bits, but neither seeing the other's
++	 * store when checking if the mask is empty, and missing an empty
++	 * mask, which ends with a false positive.
++	 */
++	smp_mb();
+ 	if (cpumask_empty(&wd_smp_cpus_pending)) {
+ 		unsigned long flags;
+ 
++none_pending:
++		/*
++		 * Double check under lock because more than one CPU could see
++		 * a clear mask with the lockless check after clearing their
++		 * pending bits.
++		 */
+ 		wd_smp_lock(&flags);
+ 		if (cpumask_empty(&wd_smp_cpus_pending)) {
+ 			wd_smp_last_reset_tb = tb;
+@@ -318,8 +353,12 @@ void arch_touch_nmi_watchdog(void)
+ {
+ 	unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
+ 	int cpu = smp_processor_id();
+-	u64 tb = get_tb();
++	u64 tb;
+ 
++	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
++		return;
++
++	tb = get_tb();
+ 	if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
+ 		per_cpu(wd_timer_tb, cpu) = tb;
+ 		wd_smp_clear_cpu_pending(cpu, tb);
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 7b74fc0a986b8..94da0d25eb125 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4861,8 +4861,12 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
+ 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
+ 
+ 	if (change == KVM_MR_CREATE) {
+-		slot->arch.rmap = vzalloc(array_size(npages,
+-					  sizeof(*slot->arch.rmap)));
++		unsigned long size = array_size(npages, sizeof(*slot->arch.rmap));
++
++		if ((size >> PAGE_SHIFT) > totalram_pages())
++			return -ENOMEM;
++
++		slot->arch.rmap = vzalloc(size);
+ 		if (!slot->arch.rmap)
+ 			return -ENOMEM;
+ 	}
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index ed8a2c9f56299..89295b52a97c3 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -582,7 +582,7 @@ long kvmhv_copy_tofrom_guest_nested(struct kvm_vcpu *vcpu)
+ 	if (eaddr & (0xFFFUL << 52))
+ 		return H_PARAMETER;
+ 
+-	buf = kzalloc(n, GFP_KERNEL);
++	buf = kzalloc(n, GFP_KERNEL | __GFP_NOWARN);
+ 	if (!buf)
+ 		return H_NO_MEM;
+ 
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 3a600bd7fbc6a..2a3d5fb8201c9 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -1100,7 +1100,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
+ 
+ int pud_clear_huge(pud_t *pud)
+ {
+-	if (pud_huge(*pud)) {
++	if (pud_is_leaf(*pud)) {
+ 		pud_clear(pud);
+ 		return 1;
+ 	}
+@@ -1147,7 +1147,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
+ 
+ int pmd_clear_huge(pmd_t *pmd)
+ {
+-	if (pmd_huge(*pmd)) {
++	if (pmd_is_leaf(*pmd)) {
+ 		pmd_clear(pmd);
+ 		return 1;
+ 	}
+diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
+index 202bd260a0095..35b287b0a8da4 100644
+--- a/arch/powerpc/mm/kasan/book3s_32.c
++++ b/arch/powerpc/mm/kasan/book3s_32.c
+@@ -19,7 +19,8 @@ int __init kasan_init_region(void *start, size_t size)
+ 	block = memblock_alloc(k_size, k_size_base);
+ 
+ 	if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
+-		int k_size_more = 1 << (ffs(k_size - k_size_base) - 1);
++		int shift = ffs(k_size - k_size_base);
++		int k_size_more = shift ? 1 << (shift - 1) : 0;
+ 
+ 		setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
+ 		if (k_size_more >= SZ_128K)
+diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
+index 78c8cf01db5f9..175aabf101e87 100644
+--- a/arch/powerpc/mm/pgtable_64.c
++++ b/arch/powerpc/mm/pgtable_64.c
+@@ -102,7 +102,8 @@ EXPORT_SYMBOL(__pte_frag_size_shift);
+ struct page *p4d_page(p4d_t p4d)
+ {
+ 	if (p4d_is_leaf(p4d)) {
+-		VM_WARN_ON(!p4d_huge(p4d));
++		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
++			VM_WARN_ON(!p4d_huge(p4d));
+ 		return pte_page(p4d_pte(p4d));
+ 	}
+ 	return virt_to_page(p4d_pgtable(p4d));
+@@ -112,7 +113,8 @@ struct page *p4d_page(p4d_t p4d)
+ struct page *pud_page(pud_t pud)
+ {
+ 	if (pud_is_leaf(pud)) {
+-		VM_WARN_ON(!pud_huge(pud));
++		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
++			VM_WARN_ON(!pud_huge(pud));
+ 		return pte_page(pud_pte(pud));
+ 	}
+ 	return virt_to_page(pud_pgtable(pud));
+@@ -125,7 +127,13 @@ struct page *pud_page(pud_t pud)
+ struct page *pmd_page(pmd_t pmd)
+ {
+ 	if (pmd_is_leaf(pmd)) {
+-		VM_WARN_ON(!(pmd_large(pmd) || pmd_huge(pmd)));
++		/*
++		 * vmalloc_to_page may be called on any vmap address (not only
++		 * vmalloc), and it uses pmd_page() etc., when huge vmap is
++		 * enabled so these checks can't be used.
++		 */
++		if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
++			VM_WARN_ON(!(pmd_large(pmd) || pmd_huge(pmd)));
+ 		return pte_page(pmd_pte(pmd));
+ 	}
+ 	return virt_to_page(pmd_page_vaddr(pmd));
+diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
+index 0da31d41d4131..8a4faa05f9e41 100644
+--- a/arch/powerpc/net/bpf_jit_comp32.c
++++ b/arch/powerpc/net/bpf_jit_comp32.c
+@@ -221,13 +221,13 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
+ 	PPC_BCC(COND_GE, out);
+ 
+ 	/*
+-	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
++	 * if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
+ 	 *   goto out;
+ 	 */
+ 	EMIT(PPC_RAW_CMPLWI(_R0, MAX_TAIL_CALL_CNT));
+ 	/* tail_call_cnt++; */
+ 	EMIT(PPC_RAW_ADDIC(_R0, _R0, 1));
+-	PPC_BCC(COND_GT, out);
++	PPC_BCC(COND_GE, out);
+ 
+ 	/* prog = array->ptrs[index]; */
+ 	EMIT(PPC_RAW_RLWINM(_R3, b2p_index, 2, 0, 29));
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 8b5157ccfebae..8571aafcc9e1e 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -228,12 +228,12 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
+ 	PPC_BCC(COND_GE, out);
+ 
+ 	/*
+-	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
++	 * if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
+ 	 *   goto out;
+ 	 */
+ 	PPC_BPF_LL(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
+ 	EMIT(PPC_RAW_CMPLWI(b2p[TMP_REG_1], MAX_TAIL_CALL_CNT));
+-	PPC_BCC(COND_GT, out);
++	PPC_BCC(COND_GE, out);
+ 
+ 	/*
+ 	 * tail_call_cnt++;
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 73e62e9b179bc..bef6b1abce702 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -857,6 +857,19 @@ static void write_pmc(int idx, unsigned long val)
+ 	}
+ }
+ 
++static int any_pmc_overflown(struct cpu_hw_events *cpuhw)
++{
++	int i, idx;
++
++	for (i = 0; i < cpuhw->n_events; i++) {
++		idx = cpuhw->event[i]->hw.idx;
++		if ((idx) && ((int)read_pmc(idx) < 0))
++			return idx;
++	}
++
++	return 0;
++}
++
+ /* Called from sysrq_handle_showregs() */
+ void perf_event_print_debug(void)
+ {
+@@ -1281,11 +1294,13 @@ static void power_pmu_disable(struct pmu *pmu)
+ 
+ 		/*
+ 		 * Set the 'freeze counters' bit, clear EBE/BHRBA/PMCC/PMAO/FC56
++		 * Also clear PMXE to disable PMI's getting triggered in some
++		 * corner cases during PMU disable.
+ 		 */
+ 		val  = mmcr0 = mfspr(SPRN_MMCR0);
+ 		val |= MMCR0_FC;
+ 		val &= ~(MMCR0_EBE | MMCR0_BHRBA | MMCR0_PMCC | MMCR0_PMAO |
+-			 MMCR0_FC56);
++			 MMCR0_PMXE | MMCR0_FC56);
+ 		/* Set mmcr0 PMCCEXT for p10 */
+ 		if (ppmu->flags & PPMU_ARCH_31)
+ 			val |= MMCR0_PMCCEXT;
+@@ -1299,6 +1314,23 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		mb();
+ 		isync();
+ 
++		/*
++		 * Some corner cases could clear the PMU counter overflow
++		 * while a masked PMI is pending. One such case is when
++		 * a PMI happens during interrupt replay and perf counter
++		 * values are cleared by PMU callbacks before replay.
++		 *
++		 * If any PMC corresponding to the active PMU events are
++		 * overflown, disable the interrupt by clearing the paca
++		 * bit for PMI since we are disabling the PMU now.
++		 * Otherwise provide a warning if there is PMI pending, but
++		 * no counter is found overflown.
++		 */
++		if (any_pmc_overflown(cpuhw))
++			clear_pmi_irq_pending();
++		else
++			WARN_ON(pmi_irq_pending());
++
+ 		val = mmcra = cpuhw->mmcr.mmcra;
+ 
+ 		/*
+@@ -1390,6 +1422,15 @@ static void power_pmu_enable(struct pmu *pmu)
+ 	 * (possibly updated for removal of events).
+ 	 */
+ 	if (!cpuhw->n_added) {
++		/*
++		 * If there is any active event with an overflown PMC
++		 * value, set back PACA_IRQ_PMI which would have been
++		 * cleared in power_pmu_disable().
++		 */
++		hard_irq_disable();
++		if (any_pmc_overflown(cpuhw))
++			set_pmi_irq_pending();
++
+ 		mtspr(SPRN_MMCRA, cpuhw->mmcr.mmcra & ~MMCRA_SAMPLE_ENABLE);
+ 		mtspr(SPRN_MMCR1, cpuhw->mmcr.mmcr1);
+ 		if (ppmu->flags & PPMU_ARCH_31)
+@@ -2337,6 +2378,14 @@ static void __perf_event_interrupt(struct pt_regs *regs)
+ 				break;
+ 			}
+ 		}
++
++		/*
++		 * Clear PACA_IRQ_PMI in case it was set by
++		 * set_pmi_irq_pending() when PMU was enabled
++		 * after accounting for interrupts.
++		 */
++		clear_pmi_irq_pending();
++
+ 		if (!active)
+ 			/* reset non active counters that have overflowed */
+ 			write_pmc(i + 1, 0);
+@@ -2356,6 +2405,13 @@ static void __perf_event_interrupt(struct pt_regs *regs)
+ 			}
+ 		}
+ 	}
++
++	/*
++	 * During system wide profling or while specific CPU is monitored for an
++	 * event, some corner cases could cause PMC to overflow in idle path. This
++	 * will trigger a PMI after waking up from idle. Since counter values are _not_
++	 * saved/restored in idle path, can lead to below "Can't find PMC" message.
++	 */
+ 	if (unlikely(!found) && !arch_irq_disabled_regs(regs))
+ 		printk_ratelimited(KERN_WARNING "Can't find PMC that caused IRQ\n");
+ 
+diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
+index fa08699aedeb8..d32f24de84798 100644
+--- a/arch/powerpc/platforms/cell/iommu.c
++++ b/arch/powerpc/platforms/cell/iommu.c
+@@ -977,6 +977,7 @@ static int __init cell_iommu_fixed_mapping_init(void)
+ 			if (hbase < dbase || (hend > (dbase + dsize))) {
+ 				pr_debug("iommu: hash window doesn't fit in"
+ 					 "real DMA window\n");
++				of_node_put(np);
+ 				return -1;
+ 			}
+ 		}
+diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
+index 5b9a7e9f144b3..dff8d5e7ab82b 100644
+--- a/arch/powerpc/platforms/cell/pervasive.c
++++ b/arch/powerpc/platforms/cell/pervasive.c
+@@ -78,6 +78,7 @@ static int cbe_system_reset_exception(struct pt_regs *regs)
+ 	switch (regs->msr & SRR1_WAKEMASK) {
+ 	case SRR1_WAKEDEC:
+ 		set_dec(1);
++		break;
+ 	case SRR1_WAKEEE:
+ 		/*
+ 		 * Handle these when interrupts get re-enabled and we take
+diff --git a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+index 15396333a90bd..a4b020e4b6af0 100644
+--- a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+@@ -214,6 +214,7 @@ void hlwd_pic_probe(void)
+ 			irq_set_chained_handler(cascade_virq,
+ 						hlwd_pic_irq_cascade);
+ 			hlwd_irq_host = host;
++			of_node_put(np);
+ 			break;
+ 		}
+ 	}
+diff --git a/arch/powerpc/platforms/powermac/low_i2c.c b/arch/powerpc/platforms/powermac/low_i2c.c
+index f77a59b5c2e1a..df89d916236d9 100644
+--- a/arch/powerpc/platforms/powermac/low_i2c.c
++++ b/arch/powerpc/platforms/powermac/low_i2c.c
+@@ -582,6 +582,7 @@ static void __init kw_i2c_add(struct pmac_i2c_host_kw *host,
+ 	bus->close = kw_i2c_close;
+ 	bus->xfer = kw_i2c_xfer;
+ 	mutex_init(&bus->mutex);
++	lockdep_register_key(&bus->lock_key);
+ 	lockdep_set_class(&bus->mutex, &bus->lock_key);
+ 	if (controller == busnode)
+ 		bus->flags = pmac_i2c_multibus;
+@@ -810,6 +811,7 @@ static void __init pmu_i2c_probe(void)
+ 		bus->hostdata = bus + 1;
+ 		bus->xfer = pmu_i2c_xfer;
+ 		mutex_init(&bus->mutex);
++		lockdep_register_key(&bus->lock_key);
+ 		lockdep_set_class(&bus->mutex, &bus->lock_key);
+ 		bus->flags = pmac_i2c_multibus;
+ 		list_add(&bus->link, &pmac_i2c_busses);
+@@ -933,6 +935,7 @@ static void __init smu_i2c_probe(void)
+ 		bus->hostdata = bus + 1;
+ 		bus->xfer = smu_i2c_xfer;
+ 		mutex_init(&bus->mutex);
++		lockdep_register_key(&bus->lock_key);
+ 		lockdep_set_class(&bus->mutex, &bus->lock_key);
+ 		bus->flags = 0;
+ 		list_add(&bus->link, &pmac_i2c_busses);
+diff --git a/arch/powerpc/platforms/powernv/opal-lpc.c b/arch/powerpc/platforms/powernv/opal-lpc.c
+index 1e5d51db40f84..5390c888db162 100644
+--- a/arch/powerpc/platforms/powernv/opal-lpc.c
++++ b/arch/powerpc/platforms/powernv/opal-lpc.c
+@@ -396,6 +396,7 @@ void __init opal_lpc_init(void)
+ 		if (!of_get_property(np, "primary", NULL))
+ 			continue;
+ 		opal_lpc_chip_id = of_get_ibm_chip_id(np);
++		of_node_put(np);
+ 		break;
+ 	}
+ 	if (opal_lpc_chip_id < 0)
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index f143b6f111ac0..1179632560b8d 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -653,6 +653,9 @@ static int xive_spapr_debug_show(struct seq_file *m, void *private)
+ 	struct xive_irq_bitmap *xibm;
+ 	char *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ 
++	if (!buf)
++		return -ENOMEM;
++
+ 	list_for_each_entry(xibm, &xive_irq_bitmaps, list) {
+ 		memset(buf, 0, PAGE_SIZE);
+ 		bitmap_print_to_pagebuf(true, buf, xibm->bitmap, xibm->count);
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 821252b65f890..7a68a4106e5a0 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -158,10 +158,9 @@ config PA_BITS
+ 
+ config PAGE_OFFSET
+ 	hex
+-	default 0xC0000000 if 32BIT && MAXPHYSMEM_1GB
++	default 0xC0000000 if 32BIT
+ 	default 0x80000000 if 64BIT && !MMU
+-	default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB
+-	default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB
++	default 0xffffffe000000000 if 64BIT
+ 
+ config KASAN_SHADOW_OFFSET
+ 	hex
+@@ -270,24 +269,6 @@ config MODULE_SECTIONS
+ 	bool
+ 	select HAVE_MOD_ARCH_SPECIFIC
+ 
+-choice
+-	prompt "Maximum Physical Memory"
+-	default MAXPHYSMEM_1GB if 32BIT
+-	default MAXPHYSMEM_2GB if 64BIT && CMODEL_MEDLOW
+-	default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY
+-
+-	config MAXPHYSMEM_1GB
+-		depends on 32BIT
+-		bool "1GiB"
+-	config MAXPHYSMEM_2GB
+-		depends on 64BIT && CMODEL_MEDLOW
+-		bool "2GiB"
+-	config MAXPHYSMEM_128GB
+-		depends on 64BIT && CMODEL_MEDANY
+-		bool "128GiB"
+-endchoice
+-
+-
+ config SMP
+ 	bool "Symmetric Multi-Processing"
+ 	help
+diff --git a/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi b/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi
+index c9f6d205d2ba1..794da883acb19 100644
+--- a/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi
++++ b/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi
+@@ -9,9 +9,6 @@
+ 	model = "Microchip PolarFire SoC";
+ 	compatible = "microchip,mpfs";
+ 
+-	chosen {
+-	};
+-
+ 	cpus {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
+index ef473e2f503b2..11de2ab9ed6e9 100644
+--- a/arch/riscv/configs/defconfig
++++ b/arch/riscv/configs/defconfig
+@@ -78,6 +78,7 @@ CONFIG_DRM=m
+ CONFIG_DRM_RADEON=m
+ CONFIG_DRM_NOUVEAU=m
+ CONFIG_DRM_VIRTIO_GPU=m
++CONFIG_FB=y
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+ CONFIG_USB=y
+ CONFIG_USB_XHCI_HCD=y
+diff --git a/arch/riscv/configs/nommu_k210_defconfig b/arch/riscv/configs/nommu_k210_defconfig
+index b16a2a12c82a8..3b9f83221f9c2 100644
+--- a/arch/riscv/configs/nommu_k210_defconfig
++++ b/arch/riscv/configs/nommu_k210_defconfig
+@@ -29,8 +29,6 @@ CONFIG_EMBEDDED=y
+ CONFIG_SLOB=y
+ # CONFIG_MMU is not set
+ CONFIG_SOC_CANAAN=y
+-CONFIG_SOC_CANAAN_K210_DTB_SOURCE="k210_generic"
+-CONFIG_MAXPHYSMEM_2GB=y
+ CONFIG_SMP=y
+ CONFIG_NR_CPUS=2
+ CONFIG_CMDLINE="earlycon console=ttySIF0"
+diff --git a/arch/riscv/configs/nommu_k210_sdcard_defconfig b/arch/riscv/configs/nommu_k210_sdcard_defconfig
+index 61f887f654199..d68b743d580f8 100644
+--- a/arch/riscv/configs/nommu_k210_sdcard_defconfig
++++ b/arch/riscv/configs/nommu_k210_sdcard_defconfig
+@@ -21,8 +21,6 @@ CONFIG_EMBEDDED=y
+ CONFIG_SLOB=y
+ # CONFIG_MMU is not set
+ CONFIG_SOC_CANAAN=y
+-CONFIG_SOC_CANAAN_K210_DTB_SOURCE="k210_generic"
+-CONFIG_MAXPHYSMEM_2GB=y
+ CONFIG_SMP=y
+ CONFIG_NR_CPUS=2
+ CONFIG_CMDLINE="earlycon console=ttySIF0 rootdelay=2 root=/dev/mmcblk0p1 ro"
+diff --git a/arch/riscv/configs/nommu_virt_defconfig b/arch/riscv/configs/nommu_virt_defconfig
+index e046a0babde43..f224be697785f 100644
+--- a/arch/riscv/configs/nommu_virt_defconfig
++++ b/arch/riscv/configs/nommu_virt_defconfig
+@@ -27,7 +27,6 @@ CONFIG_SLOB=y
+ # CONFIG_SLAB_MERGE_DEFAULT is not set
+ # CONFIG_MMU is not set
+ CONFIG_SOC_VIRT=y
+-CONFIG_MAXPHYSMEM_2GB=y
+ CONFIG_SMP=y
+ CONFIG_CMDLINE="root=/dev/vda rw earlycon=uart8250,mmio,0x10000000,115200n8 console=ttyS0"
+ CONFIG_CMDLINE_FORCE=y
+diff --git a/arch/riscv/configs/rv32_defconfig b/arch/riscv/configs/rv32_defconfig
+index 6e9f12ff968ac..05b6f17adbc1c 100644
+--- a/arch/riscv/configs/rv32_defconfig
++++ b/arch/riscv/configs/rv32_defconfig
+@@ -73,6 +73,7 @@ CONFIG_POWER_RESET=y
+ CONFIG_DRM=y
+ CONFIG_DRM_RADEON=y
+ CONFIG_DRM_VIRTIO_GPU=y
++CONFIG_FB=y
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+ CONFIG_USB=y
+ CONFIG_USB_XHCI_HCD=y
+diff --git a/arch/riscv/include/asm/smp.h b/arch/riscv/include/asm/smp.h
+index a7d2811f35365..62d0e6e61da83 100644
+--- a/arch/riscv/include/asm/smp.h
++++ b/arch/riscv/include/asm/smp.h
+@@ -43,7 +43,6 @@ void arch_send_call_function_ipi_mask(struct cpumask *mask);
+ void arch_send_call_function_single_ipi(int cpu);
+ 
+ int riscv_hartid_to_cpuid(int hartid);
+-void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out);
+ 
+ /* Set custom IPI operations */
+ void riscv_set_ipi_ops(const struct riscv_ipi_ops *ops);
+@@ -85,13 +84,6 @@ static inline unsigned long cpuid_to_hartid_map(int cpu)
+ 	return boot_cpu_hartid;
+ }
+ 
+-static inline void riscv_cpuid_to_hartid_mask(const struct cpumask *in,
+-					      struct cpumask *out)
+-{
+-	cpumask_clear(out);
+-	cpumask_set_cpu(boot_cpu_hartid, out);
+-}
+-
+ static inline void riscv_set_ipi_ops(const struct riscv_ipi_ops *ops)
+ {
+ }
+@@ -102,6 +94,8 @@ static inline void riscv_clear_ipi(void)
+ 
+ #endif /* CONFIG_SMP */
+ 
++void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out);
++
+ #if defined(CONFIG_HOTPLUG_CPU) && (CONFIG_SMP)
+ bool cpu_has_hotplug(unsigned int cpu);
+ #else
+diff --git a/arch/riscv/kernel/kexec_relocate.S b/arch/riscv/kernel/kexec_relocate.S
+index a80b52a74f58c..059c5e216ae75 100644
+--- a/arch/riscv/kernel/kexec_relocate.S
++++ b/arch/riscv/kernel/kexec_relocate.S
+@@ -159,25 +159,15 @@ SYM_CODE_START(riscv_kexec_norelocate)
+ 	 * s0: (const) Phys address to jump to
+ 	 * s1: (const) Phys address of the FDT image
+ 	 * s2: (const) The hartid of the current hart
+-	 * s3: (const) kernel_map.va_pa_offset, used when switching MMU off
+ 	 */
+ 	mv	s0, a1
+ 	mv	s1, a2
+ 	mv	s2, a3
+-	mv	s3, a4
+ 
+ 	/* Disable / cleanup interrupts */
+ 	csrw	CSR_SIE, zero
+ 	csrw	CSR_SIP, zero
+ 
+-	/* Switch to physical addressing */
+-	la	s4, 1f
+-	sub	s4, s4, s3
+-	csrw	CSR_STVEC, s4
+-	csrw	CSR_SATP, zero
+-
+-.align 2
+-1:
+ 	/* Pass the arguments to the next kernel  / Cleanup*/
+ 	mv	a0, s2
+ 	mv	a1, s1
+@@ -214,7 +204,15 @@ SYM_CODE_START(riscv_kexec_norelocate)
+ 	csrw	CSR_SCAUSE, zero
+ 	csrw	CSR_SSCRATCH, zero
+ 
+-	jalr	zero, a2, 0
++	/*
++	 * Switch to physical addressing
++	 * This will also trigger a jump to CSR_STVEC
++	 * which in this case is the address of the new
++	 * kernel.
++	 */
++	csrw	CSR_STVEC, a2
++	csrw	CSR_SATP, zero
++
+ SYM_CODE_END(riscv_kexec_norelocate)
+ 
+ .section ".rodata"
+diff --git a/arch/riscv/kernel/machine_kexec.c b/arch/riscv/kernel/machine_kexec.c
+index e6eca271a4d60..cbef0fc73afa8 100644
+--- a/arch/riscv/kernel/machine_kexec.c
++++ b/arch/riscv/kernel/machine_kexec.c
+@@ -169,7 +169,8 @@ machine_kexec(struct kimage *image)
+ 	struct kimage_arch *internal = &image->arch;
+ 	unsigned long jump_addr = (unsigned long) image->start;
+ 	unsigned long first_ind_entry = (unsigned long) &image->head;
+-	unsigned long this_hart_id = raw_smp_processor_id();
++	unsigned long this_cpu_id = smp_processor_id();
++	unsigned long this_hart_id = cpuid_to_hartid_map(this_cpu_id);
+ 	unsigned long fdt_addr = internal->fdt_addr;
+ 	void *control_code_buffer = page_address(image->control_code_page);
+ 	riscv_kexec_method kexec_method = NULL;
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index b42bfdc674823..63241abe84eb8 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -59,6 +59,16 @@ atomic_t hart_lottery __section(".sdata")
+ unsigned long boot_cpu_hartid;
+ static DEFINE_PER_CPU(struct cpu, cpu_devices);
+ 
++void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
++{
++	int cpu;
++
++	cpumask_clear(out);
++	for_each_cpu(cpu, in)
++		cpumask_set_cpu(cpuid_to_hartid_map(cpu), out);
++}
++EXPORT_SYMBOL_GPL(riscv_cpuid_to_hartid_mask);
++
+ /*
+  * Place kernel memory regions on the resource tree so that
+  * kexec-tools can retrieve them from /proc/iomem. While there
+diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
+index 2f6da845c9aeb..b5d30ea922925 100644
+--- a/arch/riscv/kernel/smp.c
++++ b/arch/riscv/kernel/smp.c
+@@ -59,16 +59,6 @@ int riscv_hartid_to_cpuid(int hartid)
+ 	return -ENOENT;
+ }
+ 
+-void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
+-{
+-	int cpu;
+-
+-	cpumask_clear(out);
+-	for_each_cpu(cpu, in)
+-		cpumask_set_cpu(cpuid_to_hartid_map(cpu), out);
+-}
+-EXPORT_SYMBOL_GPL(riscv_cpuid_to_hartid_mask);
+-
+ bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
+ {
+ 	return phys_id == cpuid_to_hartid_map(cpu);
+diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
+index 421ecf4e6360b..2e5ca43c8c49e 100644
+--- a/arch/riscv/kvm/main.c
++++ b/arch/riscv/kvm/main.c
+@@ -58,6 +58,14 @@ int kvm_arch_hardware_enable(void)
+ 
+ void kvm_arch_hardware_disable(void)
+ {
++	/*
++	 * After clearing the hideleg CSR, the host kernel will receive
++	 * spurious interrupts if hvip CSR has pending interrupts and the
++	 * corresponding enable bits in vsie CSR are asserted. To avoid it,
++	 * hvip CSR and vsie CSR must be cleared before clearing hideleg CSR.
++	 */
++	csr_write(CSR_VSIE, 0);
++	csr_write(CSR_HVIP, 0);
+ 	csr_write(CSR_HEDELEG, 0);
+ 	csr_write(CSR_HIDELEG, 0);
+ }
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 24b2b80446020..6fd7532aeab94 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -187,10 +187,10 @@ static void __init setup_bootmem(void)
+ 
+ 
+ 	phys_ram_end = memblock_end_of_DRAM();
+-#ifndef CONFIG_64BIT
+ #ifndef CONFIG_XIP_KERNEL
+ 	phys_ram_base = memblock_start_of_DRAM();
+ #endif
++#ifndef CONFIG_64BIT
+ 	/*
+ 	 * memblock allocator is not aware of the fact that last 4K bytes of
+ 	 * the addressable memory can not be mapped because of IS_ERR_VALUE
+@@ -812,13 +812,22 @@ static void __init reserve_crashkernel(void)
+ 	/*
+ 	 * Current riscv boot protocol requires 2MB alignment for
+ 	 * RV64 and 4MB alignment for RV32 (hugepage size)
++	 *
++	 * Try to alloc from 32bit addressable physical memory so that
++	 * swiotlb can work on the crash kernel.
+ 	 */
+ 	crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
+-					       search_start, search_end);
++					       search_start,
++					       min(search_end, (unsigned long) SZ_4G));
+ 	if (crash_base == 0) {
+-		pr_warn("crashkernel: couldn't allocate %lldKB\n",
+-			crash_size >> 10);
+-		return;
++		/* Try again without restricting region to 32bit addressable memory */
++		crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
++						search_start, search_end);
++		if (crash_base == 0) {
++			pr_warn("crashkernel: couldn't allocate %lldKB\n",
++				crash_size >> 10);
++			return;
++		}
+ 	}
+ 
+ 	pr_info("crashkernel: reserved 0x%016llx - 0x%016llx (%lld MB)\n",
+diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
+index e6497424cbf60..529a83b85c1c9 100644
+--- a/arch/riscv/net/bpf_jit_comp32.c
++++ b/arch/riscv/net/bpf_jit_comp32.c
+@@ -799,11 +799,10 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
+ 	emit_bcc(BPF_JGE, lo(idx_reg), RV_REG_T1, off, ctx);
+ 
+ 	/*
+-	 * temp_tcc = tcc - 1;
+-	 * if (tcc < 0)
++	 * if (--tcc < 0)
+ 	 *   goto out;
+ 	 */
+-	emit(rv_addi(RV_REG_T1, RV_REG_TCC, -1), ctx);
++	emit(rv_addi(RV_REG_TCC, RV_REG_TCC, -1), ctx);
+ 	off = ninsns_rvoff(tc_ninsn - (ctx->ninsns - start_insn));
+ 	emit_bcc(BPF_JSLT, RV_REG_TCC, RV_REG_ZERO, off, ctx);
+ 
+@@ -829,7 +828,6 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
+ 	if (is_12b_check(off, insn))
+ 		return -1;
+ 	emit(rv_lw(RV_REG_T0, off, RV_REG_T0), ctx);
+-	emit(rv_addi(RV_REG_TCC, RV_REG_T1, 0), ctx);
+ 	/* Epilogue jumps to *(t0 + 4). */
+ 	__build_epilogue(true, ctx);
+ 	return 0;
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index f2a779c7e225d..603630b6f3c5b 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -327,12 +327,12 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
+ 	off = ninsns_rvoff(tc_ninsn - (ctx->ninsns - start_insn));
+ 	emit_branch(BPF_JGE, RV_REG_A2, RV_REG_T1, off, ctx);
+ 
+-	/* if (TCC-- < 0)
++	/* if (--TCC < 0)
+ 	 *     goto out;
+ 	 */
+-	emit_addi(RV_REG_T1, tcc, -1, ctx);
++	emit_addi(RV_REG_TCC, tcc, -1, ctx);
+ 	off = ninsns_rvoff(tc_ninsn - (ctx->ninsns - start_insn));
+-	emit_branch(BPF_JSLT, tcc, RV_REG_ZERO, off, ctx);
++	emit_branch(BPF_JSLT, RV_REG_TCC, RV_REG_ZERO, off, ctx);
+ 
+ 	/* prog = array->ptrs[index];
+ 	 * if (!prog)
+@@ -352,7 +352,6 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
+ 	if (is_12b_check(off, insn))
+ 		return -1;
+ 	emit_ld(RV_REG_T3, off, RV_REG_T2, ctx);
+-	emit_mv(RV_REG_TCC, RV_REG_T1, ctx);
+ 	__build_epilogue(true, ctx);
+ 	return 0;
+ }
+diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
+index b01ba460b7cad..d52d85367bf73 100644
+--- a/arch/s390/kernel/module.c
++++ b/arch/s390/kernel/module.c
+@@ -37,14 +37,15 @@
+ 
+ void *module_alloc(unsigned long size)
+ {
++	gfp_t gfp_mask = GFP_KERNEL;
+ 	void *p;
+ 
+ 	if (PAGE_ALIGN(size) > MODULES_LEN)
+ 		return NULL;
+ 	p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR, MODULES_END,
+-				 GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
++				 gfp_mask, PAGE_KERNEL_EXEC, VM_DEFER_KMEMLEAK, NUMA_NO_NODE,
+ 				 __builtin_return_address(0));
+-	if (p && (kasan_module_alloc(p, size) < 0)) {
++	if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
+ 		vfree(p);
+ 		return NULL;
+ 	}
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index ef299aad40090..8fc9c79c899b5 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -3449,7 +3449,7 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
+ {
+ 	/* do not poll with more than halt_poll_max_steal percent of steal time */
+ 	if (S390_lowcore.avg_steal_timer * 100 / (TICK_USEC << 12) >=
+-	    halt_poll_max_steal) {
++	    READ_ONCE(halt_poll_max_steal)) {
+ 		vcpu->stat.halt_no_poll_steal++;
+ 		return true;
+ 	}
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index 781965f7210eb..91e478e09b54b 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -244,13 +244,15 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
+ 		/* Free 2K page table fragment of a 4K page */
+ 		bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
+ 		spin_lock_bh(&mm->context.lock);
+-		mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
++		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
+ 		mask >>= 24;
+ 		if (mask & 3)
+ 			list_add(&page->lru, &mm->context.pgtable_list);
+ 		else
+ 			list_del(&page->lru);
+ 		spin_unlock_bh(&mm->context.lock);
++		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
++		mask >>= 24;
+ 		if (mask != 0)
+ 			return;
+ 	} else {
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 233cc9bcd6527..9ff2bd83aad70 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -1369,7 +1369,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 				 jit->prg);
+ 
+ 		/*
+-		 * if (tail_call_cnt++ > MAX_TAIL_CALL_CNT)
++		 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
+ 		 *         goto out;
+ 		 */
+ 
+@@ -1381,9 +1381,9 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4_IMM(0xa7080000, REG_W0, 1);
+ 		/* laal %w1,%w0,off(%r15) */
+ 		EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W1, REG_W0, REG_15, off);
+-		/* clij %w1,MAX_TAIL_CALL_CNT,0x2,out */
++		/* clij %w1,MAX_TAIL_CALL_CNT-1,0x2,out */
+ 		patch_2_clij = jit->prg;
+-		EMIT6_PCREL_RIEC(0xec000000, 0x007f, REG_W1, MAX_TAIL_CALL_CNT,
++		EMIT6_PCREL_RIEC(0xec000000, 0x007f, REG_W1, MAX_TAIL_CALL_CNT - 1,
+ 				 2, jit->prg);
+ 
+ 		/*
+diff --git a/arch/sh/configs/titan_defconfig b/arch/sh/configs/titan_defconfig
+index ba887f1351be6..cd5c58916c65a 100644
+--- a/arch/sh/configs/titan_defconfig
++++ b/arch/sh/configs/titan_defconfig
+@@ -242,7 +242,6 @@ CONFIG_NFSD=y
+ CONFIG_NFSD_V3=y
+ CONFIG_SMB_FS=m
+ CONFIG_CIFS=m
+-CONFIG_CIFS_WEAK_PW_HASH=y
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_NLS_CODEPAGE_437=m
+ CONFIG_NLS_ASCII=m
+diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
+index 9a2f20cbd48b7..0bfe1c72a0c9e 100644
+--- a/arch/sparc/net/bpf_jit_comp_64.c
++++ b/arch/sparc/net/bpf_jit_comp_64.c
+@@ -867,7 +867,7 @@ static void emit_tail_call(struct jit_ctx *ctx)
+ 	emit(LD32 | IMMED | RS1(SP) | S13(off) | RD(tmp), ctx);
+ 	emit_cmpi(tmp, MAX_TAIL_CALL_CNT, ctx);
+ #define OFFSET2 13
+-	emit_branch(BGU, ctx->idx, ctx->idx + OFFSET2, ctx);
++	emit_branch(BGEU, ctx->idx, ctx->idx + OFFSET2, ctx);
+ 	emit_nop(ctx);
+ 
+ 	emit_alu_K(ADD, tmp, 1, ctx);
+diff --git a/arch/um/.gitignore b/arch/um/.gitignore
+index 6323e5571887e..d69ea5b562cee 100644
+--- a/arch/um/.gitignore
++++ b/arch/um/.gitignore
+@@ -2,3 +2,4 @@
+ kernel/config.c
+ kernel/config.tmp
+ kernel/vmlinux.lds
++kernel/capflags.c
+diff --git a/arch/um/drivers/virt-pci.c b/arch/um/drivers/virt-pci.c
+index c080666330234..0ab58016db22f 100644
+--- a/arch/um/drivers/virt-pci.c
++++ b/arch/um/drivers/virt-pci.c
+@@ -181,15 +181,15 @@ static unsigned long um_pci_cfgspace_read(void *priv, unsigned int offset,
+ 	/* buf->data is maximum size - we may only use parts of it */
+ 	struct um_pci_message_buffer *buf;
+ 	u8 *data;
+-	unsigned long ret = ~0ULL;
++	unsigned long ret = ULONG_MAX;
+ 
+ 	if (!dev)
+-		return ~0ULL;
++		return ULONG_MAX;
+ 
+ 	buf = get_cpu_var(um_pci_msg_bufs);
+ 	data = buf->data;
+ 
+-	memset(data, 0xff, sizeof(data));
++	memset(buf->data, 0xff, sizeof(buf->data));
+ 
+ 	switch (size) {
+ 	case 1:
+@@ -304,7 +304,7 @@ static unsigned long um_pci_bar_read(void *priv, unsigned int offset,
+ 	/* buf->data is maximum size - we may only use parts of it */
+ 	struct um_pci_message_buffer *buf;
+ 	u8 *data;
+-	unsigned long ret = ~0ULL;
++	unsigned long ret = ULONG_MAX;
+ 
+ 	buf = get_cpu_var(um_pci_msg_bufs);
+ 	data = buf->data;
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index d51e445df7976..7755cb4ff9fc6 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -1090,6 +1090,8 @@ static void virtio_uml_release_dev(struct device *d)
+ 			container_of(d, struct virtio_device, dev);
+ 	struct virtio_uml_device *vu_dev = to_virtio_uml_device(vdev);
+ 
++	time_travel_propagate_time();
++
+ 	/* might not have been opened due to not negotiating the feature */
+ 	if (vu_dev->req_fd >= 0) {
+ 		um_free_irq(vu_dev->irq, vu_dev);
+@@ -1136,6 +1138,8 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 	vu_dev->pdev = pdev;
+ 	vu_dev->req_fd = -1;
+ 
++	time_travel_propagate_time();
++
+ 	do {
+ 		rc = os_connect_socket(pdata->socket_path);
+ 	} while (rc == -EINTR);
+diff --git a/arch/um/include/asm/delay.h b/arch/um/include/asm/delay.h
+index 56fc2b8f2dd01..e79b2ab6f40c8 100644
+--- a/arch/um/include/asm/delay.h
++++ b/arch/um/include/asm/delay.h
+@@ -14,7 +14,7 @@ static inline void um_ndelay(unsigned long nsecs)
+ 	ndelay(nsecs);
+ }
+ #undef ndelay
+-#define ndelay um_ndelay
++#define ndelay(n) um_ndelay(n)
+ 
+ static inline void um_udelay(unsigned long usecs)
+ {
+@@ -26,5 +26,5 @@ static inline void um_udelay(unsigned long usecs)
+ 	udelay(usecs);
+ }
+ #undef udelay
+-#define udelay um_udelay
++#define udelay(n) um_udelay(n)
+ #endif /* __UM_DELAY_H */
+diff --git a/arch/um/include/asm/irqflags.h b/arch/um/include/asm/irqflags.h
+index dab5744e9253d..1e69ef5bc35e0 100644
+--- a/arch/um/include/asm/irqflags.h
++++ b/arch/um/include/asm/irqflags.h
+@@ -3,7 +3,7 @@
+ #define __UM_IRQFLAGS_H
+ 
+ extern int signals_enabled;
+-int set_signals(int enable);
++int um_set_signals(int enable);
+ void block_signals(void);
+ void unblock_signals(void);
+ 
+@@ -16,7 +16,7 @@ static inline unsigned long arch_local_save_flags(void)
+ #define arch_local_irq_restore arch_local_irq_restore
+ static inline void arch_local_irq_restore(unsigned long flags)
+ {
+-	set_signals(flags);
++	um_set_signals(flags);
+ }
+ 
+ #define arch_local_irq_enable arch_local_irq_enable
+diff --git a/arch/um/include/shared/longjmp.h b/arch/um/include/shared/longjmp.h
+index bdb2869b72b31..8863319039f3d 100644
+--- a/arch/um/include/shared/longjmp.h
++++ b/arch/um/include/shared/longjmp.h
+@@ -18,7 +18,7 @@ extern void longjmp(jmp_buf, int);
+ 	enable = *(volatile int *)&signals_enabled;	\
+ 	n = setjmp(*buf);				\
+ 	if(n != 0)					\
+-		set_signals_trace(enable);		\
++		um_set_signals_trace(enable);		\
+ 	n; })
+ 
+ #endif
+diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h
+index 96d400387c93e..03ffbdddcc480 100644
+--- a/arch/um/include/shared/os.h
++++ b/arch/um/include/shared/os.h
+@@ -238,8 +238,8 @@ extern void send_sigio_to_self(void);
+ extern int change_sig(int signal, int on);
+ extern void block_signals(void);
+ extern void unblock_signals(void);
+-extern int set_signals(int enable);
+-extern int set_signals_trace(int enable);
++extern int um_set_signals(int enable);
++extern int um_set_signals_trace(int enable);
+ extern int os_is_signal_stack(void);
+ extern void deliver_alarm(void);
+ extern void register_pm_wake_signal(void);
+diff --git a/arch/um/include/shared/registers.h b/arch/um/include/shared/registers.h
+index 0c50fa6e8a55b..fbb709a222839 100644
+--- a/arch/um/include/shared/registers.h
++++ b/arch/um/include/shared/registers.h
+@@ -16,8 +16,8 @@ extern int restore_fp_registers(int pid, unsigned long *fp_regs);
+ extern int save_fpx_registers(int pid, unsigned long *fp_regs);
+ extern int restore_fpx_registers(int pid, unsigned long *fp_regs);
+ extern int save_registers(int pid, struct uml_pt_regs *regs);
+-extern int restore_registers(int pid, struct uml_pt_regs *regs);
+-extern int init_registers(int pid);
++extern int restore_pid_registers(int pid, struct uml_pt_regs *regs);
++extern int init_pid_registers(int pid);
+ extern void get_safe_registers(unsigned long *regs, unsigned long *fp_regs);
+ extern unsigned long get_thread_reg(int reg, jmp_buf *buf);
+ extern int get_fp_registers(int pid, unsigned long *regs);
+diff --git a/arch/um/kernel/ksyms.c b/arch/um/kernel/ksyms.c
+index b1e5634398d09..3a85bde3e1734 100644
+--- a/arch/um/kernel/ksyms.c
++++ b/arch/um/kernel/ksyms.c
+@@ -6,7 +6,7 @@
+ #include <linux/module.h>
+ #include <os.h>
+ 
+-EXPORT_SYMBOL(set_signals);
++EXPORT_SYMBOL(um_set_signals);
+ EXPORT_SYMBOL(signals_enabled);
+ 
+ EXPORT_SYMBOL(os_stat_fd);
+diff --git a/arch/um/os-Linux/registers.c b/arch/um/os-Linux/registers.c
+index 2d9270508e156..b123955be7acc 100644
+--- a/arch/um/os-Linux/registers.c
++++ b/arch/um/os-Linux/registers.c
+@@ -21,7 +21,7 @@ int save_registers(int pid, struct uml_pt_regs *regs)
+ 	return 0;
+ }
+ 
+-int restore_registers(int pid, struct uml_pt_regs *regs)
++int restore_pid_registers(int pid, struct uml_pt_regs *regs)
+ {
+ 	int err;
+ 
+@@ -36,7 +36,7 @@ int restore_registers(int pid, struct uml_pt_regs *regs)
+ static unsigned long exec_regs[MAX_REG_NR];
+ static unsigned long exec_fp_regs[FP_SIZE];
+ 
+-int init_registers(int pid)
++int init_pid_registers(int pid)
+ {
+ 	int err;
+ 
+diff --git a/arch/um/os-Linux/sigio.c b/arch/um/os-Linux/sigio.c
+index 6597ea1986ffa..9e71794839e87 100644
+--- a/arch/um/os-Linux/sigio.c
++++ b/arch/um/os-Linux/sigio.c
+@@ -132,7 +132,7 @@ static void update_thread(void)
+ 	int n;
+ 	char c;
+ 
+-	flags = set_signals_trace(0);
++	flags = um_set_signals_trace(0);
+ 	CATCH_EINTR(n = write(sigio_private[0], &c, sizeof(c)));
+ 	if (n != sizeof(c)) {
+ 		printk(UM_KERN_ERR "update_thread : write failed, err = %d\n",
+@@ -147,7 +147,7 @@ static void update_thread(void)
+ 		goto fail;
+ 	}
+ 
+-	set_signals_trace(flags);
++	um_set_signals_trace(flags);
+ 	return;
+  fail:
+ 	/* Critical section start */
+@@ -161,7 +161,7 @@ static void update_thread(void)
+ 	close(write_sigio_fds[0]);
+ 	close(write_sigio_fds[1]);
+ 	/* Critical section end */
+-	set_signals_trace(flags);
++	um_set_signals_trace(flags);
+ }
+ 
+ int __add_sigio_fd(int fd)
+diff --git a/arch/um/os-Linux/signal.c b/arch/um/os-Linux/signal.c
+index 6cf098c23a394..24a403a70a020 100644
+--- a/arch/um/os-Linux/signal.c
++++ b/arch/um/os-Linux/signal.c
+@@ -94,7 +94,7 @@ void sig_handler(int sig, struct siginfo *si, mcontext_t *mc)
+ 
+ 	sig_handler_common(sig, si, mc);
+ 
+-	set_signals_trace(enabled);
++	um_set_signals_trace(enabled);
+ }
+ 
+ static void timer_real_alarm_handler(mcontext_t *mc)
+@@ -126,7 +126,7 @@ void timer_alarm_handler(int sig, struct siginfo *unused_si, mcontext_t *mc)
+ 
+ 	signals_active &= ~SIGALRM_MASK;
+ 
+-	set_signals_trace(enabled);
++	um_set_signals_trace(enabled);
+ }
+ 
+ void deliver_alarm(void) {
+@@ -348,7 +348,7 @@ void unblock_signals(void)
+ 	}
+ }
+ 
+-int set_signals(int enable)
++int um_set_signals(int enable)
+ {
+ 	int ret;
+ 	if (signals_enabled == enable)
+@@ -362,7 +362,7 @@ int set_signals(int enable)
+ 	return ret;
+ }
+ 
+-int set_signals_trace(int enable)
++int um_set_signals_trace(int enable)
+ {
+ 	int ret;
+ 	if (signals_enabled == enable)
+diff --git a/arch/um/os-Linux/start_up.c b/arch/um/os-Linux/start_up.c
+index 8a72c99994eb1..e3ee4db58b40d 100644
+--- a/arch/um/os-Linux/start_up.c
++++ b/arch/um/os-Linux/start_up.c
+@@ -368,7 +368,7 @@ void __init os_early_checks(void)
+ 	check_tmpexec();
+ 
+ 	pid = start_ptraced_child();
+-	if (init_registers(pid))
++	if (init_pid_registers(pid))
+ 		fatal("Failed to initialize default registers");
+ 	stop_ptraced_child(pid, 1, 1);
+ }
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 431bf7f846c3c..e118136460518 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -28,7 +28,11 @@ KCOV_INSTRUMENT		:= n
+ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+ 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
+ 
+-KBUILD_CFLAGS := -m$(BITS) -O2
++# CLANG_FLAGS must come before any cc-disable-warning or cc-option calls in
++# case of cross compiling, as it has the '--target=' flag, which is needed to
++# avoid errors with '-march=i386', and future flags may depend on the target to
++# be valid.
++KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
+ KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
+ KBUILD_CFLAGS += -Wundef
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+@@ -47,7 +51,6 @@ KBUILD_CFLAGS += -D__DISABLE_EXPORTS
+ # Disable relocation relaxation in case the link is not PIE.
+ KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
+ KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
+-KBUILD_CFLAGS += $(CLANG_FLAGS)
+ 
+ # sev.c indirectly inludes inat-table.h which is generated during
+ # compilation and stored in $(objtree). Add the directory to the includes so
+diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
+index e81885384f604..99398cbdae434 100644
+--- a/arch/x86/configs/i386_defconfig
++++ b/arch/x86/configs/i386_defconfig
+@@ -262,3 +262,4 @@ CONFIG_BLK_DEV_IO_TRACE=y
+ CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
+ CONFIG_EARLY_PRINTK_DBGP=y
+ CONFIG_DEBUG_BOOT_PARAMS=y
++CONFIG_KALLSYMS_ALL=y
+diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
+index e8a7a0af2bdaa..d7298b104a456 100644
+--- a/arch/x86/configs/x86_64_defconfig
++++ b/arch/x86/configs/x86_64_defconfig
+@@ -258,3 +258,4 @@ CONFIG_BLK_DEV_IO_TRACE=y
+ CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
+ CONFIG_EARLY_PRINTK_DBGP=y
+ CONFIG_DEBUG_BOOT_PARAMS=y
++CONFIG_KALLSYMS_ALL=y
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index e09f4672dd382..41901ba9d3a2c 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -1107,7 +1107,7 @@ static struct aead_alg aesni_aeads[] = { {
+ 		.cra_flags		= CRYPTO_ALG_INTERNAL,
+ 		.cra_blocksize		= 1,
+ 		.cra_ctxsize		= sizeof(struct aesni_rfc4106_gcm_ctx),
+-		.cra_alignmask		= AESNI_ALIGN - 1,
++		.cra_alignmask		= 0,
+ 		.cra_module		= THIS_MODULE,
+ 	},
+ }, {
+@@ -1124,7 +1124,7 @@ static struct aead_alg aesni_aeads[] = { {
+ 		.cra_flags		= CRYPTO_ALG_INTERNAL,
+ 		.cra_blocksize		= 1,
+ 		.cra_ctxsize		= sizeof(struct generic_gcmaes_ctx),
+-		.cra_alignmask		= AESNI_ALIGN - 1,
++		.cra_alignmask		= 0,
+ 		.cra_module		= THIS_MODULE,
+ 	},
+ } };
+diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
+index bd13736d0c054..0ad2378fe6ad7 100644
+--- a/arch/x86/hyperv/mmu.c
++++ b/arch/x86/hyperv/mmu.c
+@@ -68,15 +68,6 @@ static void hyperv_flush_tlb_multi(const struct cpumask *cpus,
+ 
+ 	local_irq_save(flags);
+ 
+-	/*
+-	 * Only check the mask _after_ interrupt has been disabled to avoid the
+-	 * mask changing under our feet.
+-	 */
+-	if (cpumask_empty(cpus)) {
+-		local_irq_restore(flags);
+-		return;
+-	}
+-
+ 	flush_pcpu = (struct hv_tlb_flush **)
+ 		     this_cpu_ptr(hyperv_pcpu_input_arg);
+ 
+@@ -115,7 +106,9 @@ static void hyperv_flush_tlb_multi(const struct cpumask *cpus,
+ 		 * must. We will also check all VP numbers when walking the
+ 		 * supplied CPU set to remain correct in all cases.
+ 		 */
+-		if (hv_cpu_number_to_vp_number(cpumask_last(cpus)) >= 64)
++		cpu = cpumask_last(cpus);
++
++		if (cpu < nr_cpumask_bits && hv_cpu_number_to_vp_number(cpu) >= 64)
+ 			goto do_ex_hypercall;
+ 
+ 		for_each_cpu(cpu, cpus) {
+@@ -131,6 +124,12 @@ static void hyperv_flush_tlb_multi(const struct cpumask *cpus,
+ 			__set_bit(vcpu, (unsigned long *)
+ 				  &flush->processor_mask);
+ 		}
++
++		/* nothing to flush if 'processor_mask' ends up being empty */
++		if (!flush->processor_mask) {
++			local_irq_restore(flags);
++			return;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
+index 5db5d083c8732..331474b150f16 100644
+--- a/arch/x86/include/asm/realmode.h
++++ b/arch/x86/include/asm/realmode.h
+@@ -89,6 +89,7 @@ static inline void set_real_mode_mem(phys_addr_t mem)
+ }
+ 
+ void reserve_real_mode(void);
++void load_trampoline_pgtable(void);
+ 
+ #endif /* __ASSEMBLY__ */
+ 
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index cc164777e6619..2f0b6be8eaabc 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -221,7 +221,7 @@ static inline void arch_set_max_freq_ratio(bool turbo_disabled)
+ }
+ #endif
+ 
+-#ifdef CONFIG_ACPI_CPPC_LIB
++#if defined(CONFIG_ACPI_CPPC_LIB) && defined(CONFIG_SMP)
+ void init_freq_invariance_cppc(void);
+ #define init_freq_invariance_cppc init_freq_invariance_cppc
+ #endif
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 33a68407def3f..8ab9e79abb2b4 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -314,11 +314,12 @@ do {									\
+ do {									\
+ 	__chk_user_ptr(ptr);						\
+ 	switch (size) {							\
+-	unsigned char x_u8__;						\
+-	case 1:								\
++	case 1:	{							\
++		unsigned char x_u8__;					\
+ 		__get_user_asm(x_u8__, ptr, "b", "=q", label);		\
+ 		(x) = x_u8__;						\
+ 		break;							\
++	}								\
+ 	case 2:								\
+ 		__get_user_asm(x, ptr, "w", "=r", label);		\
+ 		break;							\
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 6ed365337a3b1..69fd51a29278f 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -267,11 +267,17 @@ static void wait_for_panic(void)
+ 	panic("Panicing machine check CPU died");
+ }
+ 
+-static void mce_panic(const char *msg, struct mce *final, char *exp)
++static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
+ {
+-	int apei_err = 0;
+ 	struct llist_node *pending;
+ 	struct mce_evt_llist *l;
++	int apei_err = 0;
++
++	/*
++	 * Allow instrumentation around external facilities usage. Not that it
++	 * matters a whole lot since the machine is going to panic anyway.
++	 */
++	instrumentation_begin();
+ 
+ 	if (!fake_panic) {
+ 		/*
+@@ -286,7 +292,7 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
+ 	} else {
+ 		/* Don't log too much for fake panic */
+ 		if (atomic_inc_return(&mce_fake_panicked) > 1)
+-			return;
++			goto out;
+ 	}
+ 	pending = mce_gen_pool_prepare_records();
+ 	/* First print corrected ones that are still unlogged */
+@@ -324,6 +330,9 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
+ 		panic(msg);
+ 	} else
+ 		pr_emerg(HW_ERR "Fake kernel panic: %s\n", msg);
++
++out:
++	instrumentation_end();
+ }
+ 
+ /* Support code for software error injection */
+@@ -636,7 +645,7 @@ static struct notifier_block mce_default_nb = {
+ /*
+  * Read ADDR and MISC registers.
+  */
+-static void mce_read_aux(struct mce *m, int i)
++static noinstr void mce_read_aux(struct mce *m, int i)
+ {
+ 	if (m->status & MCI_STATUS_MISCV)
+ 		m->misc = mce_rdmsrl(mca_msr_reg(i, MCA_MISC));
+@@ -1054,10 +1063,13 @@ static int mce_start(int *no_way_out)
+  * Synchronize between CPUs after main scanning loop.
+  * This invokes the bulk of the Monarch processing.
+  */
+-static int mce_end(int order)
++static noinstr int mce_end(int order)
+ {
+-	int ret = -1;
+ 	u64 timeout = (u64)mca_cfg.monarch_timeout * NSEC_PER_USEC;
++	int ret = -1;
++
++	/* Allow instrumentation around external facilities. */
++	instrumentation_begin();
+ 
+ 	if (!timeout)
+ 		goto reset;
+@@ -1101,7 +1113,8 @@ static int mce_end(int order)
+ 		/*
+ 		 * Don't reset anything. That's done by the Monarch.
+ 		 */
+-		return 0;
++		ret = 0;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -1117,6 +1130,10 @@ reset:
+ 	 * Let others run again.
+ 	 */
+ 	atomic_set(&mce_executing, 0);
++
++out:
++	instrumentation_end();
++
+ 	return ret;
+ }
+ 
+@@ -1454,6 +1471,14 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 	if (worst != MCE_AR_SEVERITY && !kill_current_task)
+ 		goto out;
+ 
++	/*
++	 * Enable instrumentation around the external facilities like
++	 * task_work_add() (via queue_task_work()), fixup_exception() etc.
++	 * For now, that is. Fixing this properly would need a lot more involved
++	 * reorganization.
++	 */
++	instrumentation_begin();
++
+ 	/* Fault was in user mode and we need to take some action */
+ 	if ((m.cs & 3) == 3) {
+ 		/* If this triggers there is no way to recover. Die hard. */
+@@ -1482,6 +1507,9 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 		if (m.kflags & MCE_IN_KERNEL_COPYIN)
+ 			queue_task_work(&m, msg, kill_me_never);
+ 	}
++
++	instrumentation_end();
++
+ out:
+ 	mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
+ }
+diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
+index 0bfc14041bbb4..b63b548497c14 100644
+--- a/arch/x86/kernel/cpu/mce/inject.c
++++ b/arch/x86/kernel/cpu/mce/inject.c
+@@ -350,7 +350,7 @@ static ssize_t flags_write(struct file *filp, const char __user *ubuf,
+ 	char buf[MAX_FLAG_OPT_SIZE], *__buf;
+ 	int err;
+ 
+-	if (cnt > MAX_FLAG_OPT_SIZE)
++	if (!cnt || cnt > MAX_FLAG_OPT_SIZE)
+ 		return -EINVAL;
+ 
+ 	if (copy_from_user(&buf, ubuf, cnt))
+diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
+index bb019a594a2c9..09a8cad0c44fa 100644
+--- a/arch/x86/kernel/cpu/mce/severity.c
++++ b/arch/x86/kernel/cpu/mce/severity.c
+@@ -222,6 +222,9 @@ static bool is_copy_from_user(struct pt_regs *regs)
+ 	struct insn insn;
+ 	int ret;
+ 
++	if (!regs)
++		return false;
++
+ 	if (copy_from_kernel_nofault(insn_buf, (void *)regs->ip, MAX_INSN_SIZE))
+ 		return false;
+ 
+@@ -263,24 +266,36 @@ static bool is_copy_from_user(struct pt_regs *regs)
+  * distinguish an exception taken in user from from one
+  * taken in the kernel.
+  */
+-static int error_context(struct mce *m, struct pt_regs *regs)
++static noinstr int error_context(struct mce *m, struct pt_regs *regs)
+ {
++	int fixup_type;
++	bool copy_user;
++
+ 	if ((m->cs & 3) == 3)
+ 		return IN_USER;
++
+ 	if (!mc_recoverable(m->mcgstatus))
+ 		return IN_KERNEL;
+ 
+-	switch (ex_get_fixup_type(m->ip)) {
++	/* Allow instrumentation around external facilities usage. */
++	instrumentation_begin();
++	fixup_type = ex_get_fixup_type(m->ip);
++	copy_user  = is_copy_from_user(regs);
++	instrumentation_end();
++
++	switch (fixup_type) {
+ 	case EX_TYPE_UACCESS:
+ 	case EX_TYPE_COPY:
+-		if (!regs || !is_copy_from_user(regs))
++		if (!copy_user)
+ 			return IN_KERNEL;
+ 		m->kflags |= MCE_IN_KERNEL_COPYIN;
+ 		fallthrough;
++
+ 	case EX_TYPE_FAULT_MCE_SAFE:
+ 	case EX_TYPE_DEFAULT_MCE_SAFE:
+ 		m->kflags |= MCE_IN_KERNEL_RECOV;
+ 		return IN_KERNEL_RECOV;
++
+ 	default:
+ 		return IN_KERNEL;
+ 	}
+@@ -317,8 +332,8 @@ static int mce_severity_amd_smca(struct mce *m, enum context err_ctx)
+  * See AMD Error Scope Hierarchy table in a newer BKDG. For example
+  * 49125_15h_Models_30h-3Fh_BKDG.pdf, section "RAS Features"
+  */
+-static int mce_severity_amd(struct mce *m, struct pt_regs *regs, int tolerant,
+-			    char **msg, bool is_excp)
++static noinstr int mce_severity_amd(struct mce *m, struct pt_regs *regs, int tolerant,
++				    char **msg, bool is_excp)
+ {
+ 	enum context ctx = error_context(m, regs);
+ 
+@@ -370,8 +385,8 @@ static int mce_severity_amd(struct mce *m, struct pt_regs *regs, int tolerant,
+ 	return MCE_KEEP_SEVERITY;
+ }
+ 
+-static int mce_severity_intel(struct mce *m, struct pt_regs *regs,
+-			      int tolerant, char **msg, bool is_excp)
++static noinstr int mce_severity_intel(struct mce *m, struct pt_regs *regs,
++				      int tolerant, char **msg, bool is_excp)
+ {
+ 	enum exception excp = (is_excp ? EXCP_CONTEXT : NO_EXCP);
+ 	enum context ctx = error_context(m, regs);
+@@ -407,8 +422,8 @@ static int mce_severity_intel(struct mce *m, struct pt_regs *regs,
+ 	}
+ }
+ 
+-int mce_severity(struct mce *m, struct pt_regs *regs, int tolerant, char **msg,
+-		 bool is_excp)
++int noinstr mce_severity(struct mce *m, struct pt_regs *regs, int tolerant, char **msg,
++			 bool is_excp)
+ {
+ 	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+ 	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index 391a4e2b86049..8690fab95ae4b 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -515,6 +515,7 @@ static const struct intel_early_ops gen11_early_ops __initconst = {
+ 	.stolen_size = gen9_stolen_size,
+ };
+ 
++/* Intel integrated GPUs for which we need to reserve "stolen memory" */
+ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_I830_IDS(&i830_early_ops),
+ 	INTEL_I845G_IDS(&i845_early_ops),
+@@ -591,6 +592,13 @@ static void __init intel_graphics_quirks(int num, int slot, int func)
+ 	u16 device;
+ 	int i;
+ 
++	/*
++	 * Reserve "stolen memory" for an integrated GPU.  If we've already
++	 * found one, there's nothing to do for other (discrete) GPUs.
++	 */
++	if (resource_size(&intel_graphics_stolen_res))
++		return;
++
+ 	device = read_pci_config_16(num, slot, func, PCI_DEVICE_ID);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(intel_early_ids); i++) {
+@@ -703,7 +711,7 @@ static struct chipset early_qrk[] __initdata = {
+ 	{ PCI_VENDOR_ID_INTEL, 0x3406, PCI_CLASS_BRIDGE_HOST,
+ 	  PCI_BASE_CLASS_BRIDGE, 0, intel_remapping_check },
+ 	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, PCI_CLASS_DISPLAY_VGA, PCI_ANY_ID,
+-	  QFLAG_APPLY_ONCE, intel_graphics_quirks },
++	  0, intel_graphics_quirks },
+ 	/*
+ 	 * HPET on the current version of the Baytrail platform has accuracy
+ 	 * problems: it will halt in deep idle state - so we disable it.
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index 169fb6f4cd2ee..95fa745e310a5 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -67,6 +67,7 @@ static unsigned long int get_module_load_offset(void)
+ 
+ void *module_alloc(unsigned long size)
+ {
++	gfp_t gfp_mask = GFP_KERNEL;
+ 	void *p;
+ 
+ 	if (PAGE_ALIGN(size) > MODULES_LEN)
+@@ -74,10 +75,10 @@ void *module_alloc(unsigned long size)
+ 
+ 	p = __vmalloc_node_range(size, MODULE_ALIGN,
+ 				    MODULES_VADDR + get_module_load_offset(),
+-				    MODULES_END, GFP_KERNEL,
+-				    PAGE_KERNEL, 0, NUMA_NO_NODE,
++				    MODULES_END, gfp_mask,
++				    PAGE_KERNEL, VM_DEFER_KMEMLEAK, NUMA_NO_NODE,
+ 				    __builtin_return_address(0));
+-	if (p && (kasan_module_alloc(p, size) < 0)) {
++	if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
+ 		vfree(p);
+ 		return NULL;
+ 	}
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 0a40df66a40de..fa700b46588e0 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -113,17 +113,9 @@ void __noreturn machine_real_restart(unsigned int type)
+ 	spin_unlock(&rtc_lock);
+ 
+ 	/*
+-	 * Switch back to the initial page table.
++	 * Switch to the trampoline page table.
+ 	 */
+-#ifdef CONFIG_X86_32
+-	load_cr3(initial_page_table);
+-#else
+-	write_cr3(real_mode_header->trampoline_pgd);
+-
+-	/* Exiting long mode will fail if CR4.PCIDE is set. */
+-	if (boot_cpu_has(X86_FEATURE_PCID))
+-		cr4_clear_bits(X86_CR4_PCIDE);
+-#endif
++	load_trampoline_pgtable();
+ 
+ 	/* Jump to the identity-mapped low memory code */
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 07e9215e911d7..e5e597fc3c86f 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -99,6 +99,28 @@ static int kvm_check_cpuid(struct kvm_cpuid_entry2 *entries, int nent)
+ 	return 0;
+ }
+ 
++/* Check whether the supplied CPUID data is equal to what is already set for the vCPU. */
++static int kvm_cpuid_check_equal(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
++				 int nent)
++{
++	struct kvm_cpuid_entry2 *orig;
++	int i;
++
++	if (nent != vcpu->arch.cpuid_nent)
++		return -EINVAL;
++
++	for (i = 0; i < nent; i++) {
++		orig = &vcpu->arch.cpuid_entries[i];
++		if (e2[i].function != orig->function ||
++		    e2[i].index != orig->index ||
++		    e2[i].eax != orig->eax || e2[i].ebx != orig->ebx ||
++		    e2[i].ecx != orig->ecx || e2[i].edx != orig->edx)
++			return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static void kvm_update_kvm_cpuid_base(struct kvm_vcpu *vcpu)
+ {
+ 	u32 function;
+@@ -125,14 +147,21 @@ static void kvm_update_kvm_cpuid_base(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
+-static struct kvm_cpuid_entry2 *kvm_find_kvm_cpuid_features(struct kvm_vcpu *vcpu)
++static struct kvm_cpuid_entry2 *__kvm_find_kvm_cpuid_features(struct kvm_vcpu *vcpu,
++					      struct kvm_cpuid_entry2 *entries, int nent)
+ {
+ 	u32 base = vcpu->arch.kvm_cpuid_base;
+ 
+ 	if (!base)
+ 		return NULL;
+ 
+-	return kvm_find_cpuid_entry(vcpu, base | KVM_CPUID_FEATURES, 0);
++	return cpuid_entry2_find(entries, nent, base | KVM_CPUID_FEATURES, 0);
++}
++
++static struct kvm_cpuid_entry2 *kvm_find_kvm_cpuid_features(struct kvm_vcpu *vcpu)
++{
++	return __kvm_find_kvm_cpuid_features(vcpu, vcpu->arch.cpuid_entries,
++					     vcpu->arch.cpuid_nent);
+ }
+ 
+ void kvm_update_pv_runtime(struct kvm_vcpu *vcpu)
+@@ -147,11 +176,12 @@ void kvm_update_pv_runtime(struct kvm_vcpu *vcpu)
+ 		vcpu->arch.pv_cpuid.features = best->eax;
+ }
+ 
+-void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
++static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *entries,
++				       int nent)
+ {
+ 	struct kvm_cpuid_entry2 *best;
+ 
+-	best = kvm_find_cpuid_entry(vcpu, 1, 0);
++	best = cpuid_entry2_find(entries, nent, 1, 0);
+ 	if (best) {
+ 		/* Update OSXSAVE bit */
+ 		if (boot_cpu_has(X86_FEATURE_XSAVE))
+@@ -162,33 +192,38 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
+ 			   vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE);
+ 	}
+ 
+-	best = kvm_find_cpuid_entry(vcpu, 7, 0);
++	best = cpuid_entry2_find(entries, nent, 7, 0);
+ 	if (best && boot_cpu_has(X86_FEATURE_PKU) && best->function == 0x7)
+ 		cpuid_entry_change(best, X86_FEATURE_OSPKE,
+ 				   kvm_read_cr4_bits(vcpu, X86_CR4_PKE));
+ 
+-	best = kvm_find_cpuid_entry(vcpu, 0xD, 0);
++	best = cpuid_entry2_find(entries, nent, 0xD, 0);
+ 	if (best)
+ 		best->ebx = xstate_required_size(vcpu->arch.xcr0, false);
+ 
+-	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
++	best = cpuid_entry2_find(entries, nent, 0xD, 1);
+ 	if (best && (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
+ 		     cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
+ 		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+ 
+-	best = kvm_find_kvm_cpuid_features(vcpu);
++	best = __kvm_find_kvm_cpuid_features(vcpu, entries, nent);
+ 	if (kvm_hlt_in_guest(vcpu->kvm) && best &&
+ 		(best->eax & (1 << KVM_FEATURE_PV_UNHALT)))
+ 		best->eax &= ~(1 << KVM_FEATURE_PV_UNHALT);
+ 
+ 	if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT)) {
+-		best = kvm_find_cpuid_entry(vcpu, 0x1, 0);
++		best = cpuid_entry2_find(entries, nent, 0x1, 0);
+ 		if (best)
+ 			cpuid_entry_change(best, X86_FEATURE_MWAIT,
+ 					   vcpu->arch.ia32_misc_enable_msr &
+ 					   MSR_IA32_MISC_ENABLE_MWAIT);
+ 	}
+ }
++
++void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
++{
++	__kvm_update_cpuid_runtime(vcpu, vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent);
++}
+ EXPORT_SYMBOL_GPL(kvm_update_cpuid_runtime);
+ 
+ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+@@ -276,21 +311,36 @@ u64 kvm_vcpu_reserved_gpa_bits_raw(struct kvm_vcpu *vcpu)
+ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
+                         int nent)
+ {
+-    int r;
++	int r;
++
++	__kvm_update_cpuid_runtime(vcpu, e2, nent);
++
++	/*
++	 * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
++	 * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't
++	 * tracked in kvm_mmu_page_role.  As a result, KVM may miss guest page
++	 * faults due to reusing SPs/SPTEs. In practice no sane VMM mucks with
++	 * the core vCPU model on the fly. It would've been better to forbid any
++	 * KVM_SET_CPUID{,2} calls after KVM_RUN altogether but unfortunately
++	 * some VMMs (e.g. QEMU) reuse vCPU fds for CPU hotplug/unplug and do
++	 * KVM_SET_CPUID{,2} again. To support this legacy behavior, check
++	 * whether the supplied CPUID data is equal to what's already set.
++	 */
++	if (vcpu->arch.last_vmentry_cpu != -1)
++		return kvm_cpuid_check_equal(vcpu, e2, nent);
+ 
+-    r = kvm_check_cpuid(e2, nent);
+-    if (r)
+-        return r;
++	r = kvm_check_cpuid(e2, nent);
++	if (r)
++		return r;
+ 
+-    kvfree(vcpu->arch.cpuid_entries);
+-    vcpu->arch.cpuid_entries = e2;
+-    vcpu->arch.cpuid_nent = nent;
++	kvfree(vcpu->arch.cpuid_entries);
++	vcpu->arch.cpuid_entries = e2;
++	vcpu->arch.cpuid_nent = nent;
+ 
+-    kvm_update_kvm_cpuid_base(vcpu);
+-    kvm_update_cpuid_runtime(vcpu);
+-    kvm_vcpu_after_set_cpuid(vcpu);
++	kvm_update_kvm_cpuid_base(vcpu);
++	kvm_vcpu_after_set_cpuid(vcpu);
+ 
+-    return 0;
++	return 0;
+ }
+ 
+ /* when an old userspace process fills a new kernel module */
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 1beb4ca905609..095b5cb4e3c9b 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -1442,12 +1442,12 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
+ 		    !is_last_spte(iter.old_spte, iter.level))
+ 			continue;
+ 
+-		if (!is_writable_pte(iter.old_spte))
+-			break;
+-
+ 		new_spte = iter.old_spte &
+ 			~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
+ 
++		if (new_spte == iter.old_spte)
++			break;
++
+ 		tdp_mmu_set_spte(kvm, &iter, new_spte);
+ 		spte_set = true;
+ 	}
+diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
+index 1c94783b5a54c..46fb83d6a286e 100644
+--- a/arch/x86/kvm/vmx/posted_intr.c
++++ b/arch/x86/kvm/vmx/posted_intr.c
+@@ -15,7 +15,7 @@
+  * can find which vCPU should be waken up.
+  */
+ static DEFINE_PER_CPU(struct list_head, blocked_vcpu_on_cpu);
+-static DEFINE_PER_CPU(spinlock_t, blocked_vcpu_on_cpu_lock);
++static DEFINE_PER_CPU(raw_spinlock_t, blocked_vcpu_on_cpu_lock);
+ 
+ static inline struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
+ {
+@@ -51,7 +51,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
+ 
+ 	/* The full case.  */
+ 	do {
+-		old.control = new.control = pi_desc->control;
++		old.control = new.control = READ_ONCE(pi_desc->control);
+ 
+ 		dest = cpu_physical_id(cpu);
+ 
+@@ -104,7 +104,7 @@ static void __pi_post_block(struct kvm_vcpu *vcpu)
+ 	unsigned int dest;
+ 
+ 	do {
+-		old.control = new.control = pi_desc->control;
++		old.control = new.control = READ_ONCE(pi_desc->control);
+ 		WARN(old.nv != POSTED_INTR_WAKEUP_VECTOR,
+ 		     "Wakeup handler not enabled while the VCPU is blocked\n");
+ 
+@@ -121,9 +121,9 @@ static void __pi_post_block(struct kvm_vcpu *vcpu)
+ 			   new.control) != old.control);
+ 
+ 	if (!WARN_ON_ONCE(vcpu->pre_pcpu == -1)) {
+-		spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 		list_del(&vcpu->blocked_vcpu_list);
+-		spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 		vcpu->pre_pcpu = -1;
+ 	}
+ }
+@@ -147,22 +147,23 @@ int pi_pre_block(struct kvm_vcpu *vcpu)
+ 	struct pi_desc old, new;
+ 	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
+ 
+-	if (!vmx_can_use_vtd_pi(vcpu->kvm))
++	if (!vmx_can_use_vtd_pi(vcpu->kvm) ||
++	    vmx_interrupt_blocked(vcpu))
+ 		return 0;
+ 
+ 	WARN_ON(irqs_disabled());
+ 	local_irq_disable();
+ 	if (!WARN_ON_ONCE(vcpu->pre_pcpu != -1)) {
+ 		vcpu->pre_pcpu = vcpu->cpu;
+-		spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 		list_add_tail(&vcpu->blocked_vcpu_list,
+ 			      &per_cpu(blocked_vcpu_on_cpu,
+ 				       vcpu->pre_pcpu));
+-		spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
++		raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
+ 	}
+ 
+ 	do {
+-		old.control = new.control = pi_desc->control;
++		old.control = new.control = READ_ONCE(pi_desc->control);
+ 
+ 		WARN((pi_desc->sn == 1),
+ 		     "Warning: SN field of posted-interrupts "
+@@ -215,7 +216,7 @@ void pi_wakeup_handler(void)
+ 	struct kvm_vcpu *vcpu;
+ 	int cpu = smp_processor_id();
+ 
+-	spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
++	raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
+ 	list_for_each_entry(vcpu, &per_cpu(blocked_vcpu_on_cpu, cpu),
+ 			blocked_vcpu_list) {
+ 		struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
+@@ -223,13 +224,13 @@ void pi_wakeup_handler(void)
+ 		if (pi_test_on(pi_desc) == 1)
+ 			kvm_vcpu_kick(vcpu);
+ 	}
+-	spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
++	raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
+ }
+ 
+ void __init pi_init_cpu(int cpu)
+ {
+ 	INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
+-	spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
++	raw_spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
+ }
+ 
+ bool pi_has_pending_interrupt(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0b5c61bb24a17..4acf43111e9c2 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -830,6 +830,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
+ 
+ 	memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
+ 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
++	kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
+ 	vcpu->arch.pdptrs_from_userspace = false;
+ 
+ 	return 1;
+@@ -5148,17 +5149,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 		struct kvm_cpuid __user *cpuid_arg = argp;
+ 		struct kvm_cpuid cpuid;
+ 
+-		/*
+-		 * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
+-		 * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't
+-		 * tracked in kvm_mmu_page_role.  As a result, KVM may miss guest page
+-		 * faults due to reusing SPs/SPTEs.  In practice no sane VMM mucks with
+-		 * the core vCPU model on the fly, so fail.
+-		 */
+-		r = -EINVAL;
+-		if (vcpu->arch.last_vmentry_cpu != -1)
+-			goto out;
+-
+ 		r = -EFAULT;
+ 		if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
+ 			goto out;
+@@ -5169,14 +5159,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 		struct kvm_cpuid2 __user *cpuid_arg = argp;
+ 		struct kvm_cpuid2 cpuid;
+ 
+-		/*
+-		 * KVM_SET_CPUID{,2} after KVM_RUN is forbidded, see the comment in
+-		 * KVM_SET_CPUID case above.
+-		 */
+-		r = -EINVAL;
+-		if (vcpu->arch.last_vmentry_cpu != -1)
+-			goto out;
+-
+ 		r = -EFAULT;
+ 		if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
+ 			goto out;
+@@ -8133,7 +8115,12 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 	 * updating interruptibility state and injecting single-step #DBs.
+ 	 */
+ 	if (emulation_type & EMULTYPE_SKIP) {
+-		kvm_rip_write(vcpu, ctxt->_eip);
++		if (ctxt->mode != X86EMUL_MODE_PROT64)
++			ctxt->eip = (u32)ctxt->_eip;
++		else
++			ctxt->eip = ctxt->_eip;
++
++		kvm_rip_write(vcpu, ctxt->eip);
+ 		if (ctxt->eflags & X86_EFLAGS_RF)
+ 			kvm_set_rflags(vcpu, ctxt->eflags & ~X86_EFLAGS_RF);
+ 		return 1;
+@@ -8197,6 +8184,9 @@ restart:
+ 			writeback = false;
+ 		r = 0;
+ 		vcpu->arch.complete_userspace_io = complete_emulated_mmio;
++	} else if (vcpu->arch.complete_userspace_io) {
++		writeback = false;
++		r = 0;
+ 	} else if (r == EMULATION_RESTART)
+ 		goto restart;
+ 	else
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index bafe36e69227d..b87d98efd2240 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -412,7 +412,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
+  * ... bpf_tail_call(void *ctx, struct bpf_array *array, u64 index) ...
+  *   if (index >= array->map.max_entries)
+  *     goto out;
+- *   if (++tail_call_cnt > MAX_TAIL_CALL_CNT)
++ *   if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
+  *     goto out;
+  *   prog = array->ptrs[index];
+  *   if (prog == NULL)
+@@ -446,14 +446,14 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+ 	EMIT2(X86_JBE, offset);                   /* jbe out */
+ 
+ 	/*
+-	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
++	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
+ 	 *	goto out;
+ 	 */
+ 	EMIT2_off32(0x8B, 0x85, tcc_off);         /* mov eax, dword ptr [rbp - tcc_off] */
+ 	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);     /* cmp eax, MAX_TAIL_CALL_CNT */
+ 
+ 	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
+-	EMIT2(X86_JA, offset);                    /* ja out */
++	EMIT2(X86_JAE, offset);                   /* jae out */
+ 	EMIT3(0x83, 0xC0, 0x01);                  /* add eax, 1 */
+ 	EMIT2_off32(0x89, 0x85, tcc_off);         /* mov dword ptr [rbp - tcc_off], eax */
+ 
+@@ -504,14 +504,14 @@ static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
+ 	int offset;
+ 
+ 	/*
+-	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
++	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
+ 	 *	goto out;
+ 	 */
+ 	EMIT2_off32(0x8B, 0x85, tcc_off);             /* mov eax, dword ptr [rbp - tcc_off] */
+ 	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);         /* cmp eax, MAX_TAIL_CALL_CNT */
+ 
+ 	offset = ctx->tail_call_direct_label - (prog + 2 - start);
+-	EMIT2(X86_JA, offset);                        /* ja out */
++	EMIT2(X86_JAE, offset);                       /* jae out */
+ 	EMIT3(0x83, 0xC0, 0x01);                      /* add eax, 1 */
+ 	EMIT2_off32(0x89, 0x85, tcc_off);             /* mov dword ptr [rbp - tcc_off], eax */
+ 
+diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
+index da9b7cfa46329..429a89c5468b5 100644
+--- a/arch/x86/net/bpf_jit_comp32.c
++++ b/arch/x86/net/bpf_jit_comp32.c
+@@ -1323,7 +1323,7 @@ static void emit_bpf_tail_call(u8 **pprog, u8 *ip)
+ 	EMIT2(IA32_JBE, jmp_label(jmp_label1, 2));
+ 
+ 	/*
+-	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
++	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
+ 	 *     goto out;
+ 	 */
+ 	lo = (u32)MAX_TAIL_CALL_CNT;
+@@ -1337,7 +1337,7 @@ static void emit_bpf_tail_call(u8 **pprog, u8 *ip)
+ 	/* cmp ecx,lo */
+ 	EMIT3(0x83, add_1reg(0xF8, IA32_ECX), lo);
+ 
+-	/* ja out */
++	/* jae out */
+ 	EMIT2(IA32_JAE, jmp_label(jmp_label1, 2));
+ 
+ 	/* add eax,0x1 */
+diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
+index 38d24d2ab38b3..c5e29db02a469 100644
+--- a/arch/x86/realmode/init.c
++++ b/arch/x86/realmode/init.c
+@@ -17,6 +17,32 @@ u32 *trampoline_cr4_features;
+ /* Hold the pgd entry used on booting additional CPUs */
+ pgd_t trampoline_pgd_entry;
+ 
++void load_trampoline_pgtable(void)
++{
++#ifdef CONFIG_X86_32
++	load_cr3(initial_page_table);
++#else
++	/*
++	 * This function is called before exiting to real-mode and that will
++	 * fail with CR4.PCIDE still set.
++	 */
++	if (boot_cpu_has(X86_FEATURE_PCID))
++		cr4_clear_bits(X86_CR4_PCIDE);
++
++	write_cr3(real_mode_header->trampoline_pgd);
++#endif
++
++	/*
++	 * The CR3 write above will not flush global TLB entries.
++	 * Stale, global entries from previous page tables may still be
++	 * present.  Flush those stale entries.
++	 *
++	 * This ensures that memory accessed while running with
++	 * trampoline_pgd is *actually* mapped into trampoline_pgd.
++	 */
++	__flush_tlb_all();
++}
++
+ void __init reserve_real_mode(void)
+ {
+ 	phys_addr_t mem;
+diff --git a/arch/x86/um/syscalls_64.c b/arch/x86/um/syscalls_64.c
+index 58f51667e2e4b..8249685b40960 100644
+--- a/arch/x86/um/syscalls_64.c
++++ b/arch/x86/um/syscalls_64.c
+@@ -11,6 +11,7 @@
+ #include <linux/uaccess.h>
+ #include <asm/prctl.h> /* XXX This should get the constants from libc */
+ #include <os.h>
++#include <registers.h>
+ 
+ long arch_prctl(struct task_struct *task, int option,
+ 		unsigned long __user *arg2)
+@@ -35,7 +36,7 @@ long arch_prctl(struct task_struct *task, int option,
+ 	switch (option) {
+ 	case ARCH_SET_FS:
+ 	case ARCH_SET_GS:
+-		ret = restore_registers(pid, &current->thread.regs.regs);
++		ret = restore_pid_registers(pid, &current->thread.regs.regs);
+ 		if (ret)
+ 			return ret;
+ 		break;
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index fec18118dc309..30918b0e81c02 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -5991,48 +5991,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ 
+ 	spin_lock_irq(&bfqd->lock);
+ 	bfqq = bfq_init_rq(rq);
+-
+-	/*
+-	 * Reqs with at_head or passthrough flags set are to be put
+-	 * directly into dispatch list. Additional case for putting rq
+-	 * directly into the dispatch queue: the only active
+-	 * bfq_queues are bfqq and either its waker bfq_queue or one
+-	 * of its woken bfq_queues. The rationale behind this
+-	 * additional condition is as follows:
+-	 * - consider a bfq_queue, say Q1, detected as a waker of
+-	 *   another bfq_queue, say Q2
+-	 * - by definition of a waker, Q1 blocks the I/O of Q2, i.e.,
+-	 *   some I/O of Q1 needs to be completed for new I/O of Q2
+-	 *   to arrive.  A notable example of waker is journald
+-	 * - so, Q1 and Q2 are in any respect the queues of two
+-	 *   cooperating processes (or of two cooperating sets of
+-	 *   processes): the goal of Q1's I/O is doing what needs to
+-	 *   be done so that new Q2's I/O can finally be
+-	 *   issued. Therefore, if the service of Q1's I/O is delayed,
+-	 *   then Q2's I/O is delayed too.  Conversely, if Q2's I/O is
+-	 *   delayed, the goal of Q1's I/O is hindered.
+-	 * - as a consequence, if some I/O of Q1/Q2 arrives while
+-	 *   Q2/Q1 is the only queue in service, there is absolutely
+-	 *   no point in delaying the service of such an I/O. The
+-	 *   only possible result is a throughput loss
+-	 * - so, when the above condition holds, the best option is to
+-	 *   have the new I/O dispatched as soon as possible
+-	 * - the most effective and efficient way to attain the above
+-	 *   goal is to put the new I/O directly in the dispatch
+-	 *   list
+-	 * - as an additional restriction, Q1 and Q2 must be the only
+-	 *   busy queues for this commit to put the I/O of Q2/Q1 in
+-	 *   the dispatch list.  This is necessary, because, if also
+-	 *   other queues are waiting for service, then putting new
+-	 *   I/O directly in the dispatch list may evidently cause a
+-	 *   violation of service guarantees for the other queues
+-	 */
+-	if (!bfqq ||
+-	    (bfqq != bfqd->in_service_queue &&
+-	     bfqd->in_service_queue != NULL &&
+-	     bfq_tot_busy_queues(bfqd) == 1 + bfq_bfqq_busy(bfqq) &&
+-	     (bfqq->waker_bfqq == bfqd->in_service_queue ||
+-	      bfqd->in_service_queue->waker_bfqq == bfqq)) || at_head) {
++	if (!bfqq || at_head) {
+ 		if (at_head)
+ 			list_add(&rq->queuelist, &bfqd->dispatch);
+ 		else
+@@ -6059,7 +6018,6 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ 	 * merge).
+ 	 */
+ 	cmd_flags = rq->cmd_flags;
+-
+ 	spin_unlock_irq(&bfqd->lock);
+ 
+ 	bfq_update_insert_stats(q, bfqq, idle_timer_disabled,
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index 1fce6d16e6d3a..b5b4a26ef0925 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -235,8 +235,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 	 * avoiding use-after-free.
+ 	 */
+ 	WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE);
+-	if (fq->rq_status != BLK_STS_OK)
++	if (fq->rq_status != BLK_STS_OK) {
+ 		error = fq->rq_status;
++		fq->rq_status = BLK_STS_OK;
++	}
+ 
+ 	if (!q->elevator) {
+ 		flush_rq->tag = BLK_MQ_NO_TAG;
+diff --git a/block/blk-pm.c b/block/blk-pm.c
+index 17bd020268d42..2dad62cc15727 100644
+--- a/block/blk-pm.c
++++ b/block/blk-pm.c
+@@ -163,27 +163,19 @@ EXPORT_SYMBOL(blk_pre_runtime_resume);
+ /**
+  * blk_post_runtime_resume - Post runtime resume processing
+  * @q: the queue of the device
+- * @err: return value of the device's runtime_resume function
+  *
+  * Description:
+- *    Update the queue's runtime status according to the return value of the
+- *    device's runtime_resume function. If the resume was successful, call
+- *    blk_set_runtime_active() to do the real work of restarting the queue.
++ *    For historical reasons, this routine merely calls blk_set_runtime_active()
++ *    to do the real work of restarting the queue.  It does this regardless of
++ *    whether the device's runtime-resume succeeded; even if it failed the
++ *    driver or error handler will need to communicate with the device.
+  *
+  *    This function should be called near the end of the device's
+  *    runtime_resume callback.
+  */
+-void blk_post_runtime_resume(struct request_queue *q, int err)
++void blk_post_runtime_resume(struct request_queue *q)
+ {
+-	if (!q->dev)
+-		return;
+-	if (!err) {
+-		blk_set_runtime_active(q);
+-	} else {
+-		spin_lock_irq(&q->queue_lock);
+-		q->rpm_status = RPM_SUSPENDED;
+-		spin_unlock_irq(&q->queue_lock);
+-	}
++	blk_set_runtime_active(q);
+ }
+ EXPORT_SYMBOL(blk_post_runtime_resume);
+ 
+@@ -201,7 +193,7 @@ EXPORT_SYMBOL(blk_post_runtime_resume);
+  * runtime PM status and re-enable peeking requests from the queue. It
+  * should be called before first request is added to the queue.
+  *
+- * This function is also called by blk_post_runtime_resume() for successful
++ * This function is also called by blk_post_runtime_resume() for
+  * runtime resumes.  It does everything necessary to restart the queue.
+  */
+ void blk_set_runtime_active(struct request_queue *q)
+diff --git a/block/genhd.c b/block/genhd.c
+index 30362aeacac4b..5308e0920fa6f 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -425,6 +425,8 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
+ 				DISK_MAX_PARTS);
+ 			disk->minors = DISK_MAX_PARTS;
+ 		}
++		if (disk->first_minor + disk->minors > MINORMASK + 1)
++			return -EINVAL;
+ 	} else {
+ 		if (WARN_ON(disk->minors))
+ 			return -EINVAL;
+@@ -437,10 +439,6 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
+ 		disk->flags |= GENHD_FL_EXT_DEVT;
+ 	}
+ 
+-	ret = disk_alloc_events(disk);
+-	if (ret)
+-		goto out_free_ext_minor;
+-
+ 	/* delay uevents, until we scanned partition table */
+ 	dev_set_uevent_suppress(ddev, 1);
+ 
+@@ -451,7 +449,12 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
+ 		ddev->devt = MKDEV(disk->major, disk->first_minor);
+ 	ret = device_add(ddev);
+ 	if (ret)
+-		goto out_disk_release_events;
++		goto out_free_ext_minor;
++
++	ret = disk_alloc_events(disk);
++	if (ret)
++		goto out_device_del;
++
+ 	if (!sysfs_deprecated) {
+ 		ret = sysfs_create_link(block_depr, &ddev->kobj,
+ 					kobject_name(&ddev->kobj));
+@@ -539,8 +542,6 @@ out_del_block_link:
+ 		sysfs_remove_link(block_depr, dev_name(ddev));
+ out_device_del:
+ 	device_del(ddev);
+-out_disk_release_events:
+-	disk_release_events(disk);
+ out_free_ext_minor:
+ 	if (disk->major == BLOCK_EXT_MAJOR)
+ 		blk_free_ext_minor(disk->first_minor);
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 85d919bf60c78..3ed5eaf3446a2 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -865,7 +865,7 @@ SHOW_JIFFIES(deadline_write_expire_show, dd->fifo_expire[DD_WRITE]);
+ SHOW_JIFFIES(deadline_prio_aging_expire_show, dd->prio_aging_expire);
+ SHOW_INT(deadline_writes_starved_show, dd->writes_starved);
+ SHOW_INT(deadline_front_merges_show, dd->front_merges);
+-SHOW_INT(deadline_async_depth_show, dd->front_merges);
++SHOW_INT(deadline_async_depth_show, dd->async_depth);
+ SHOW_INT(deadline_fifo_batch_show, dd->fifo_batch);
+ #undef SHOW_INT
+ #undef SHOW_JIFFIES
+@@ -895,7 +895,7 @@ STORE_JIFFIES(deadline_write_expire_store, &dd->fifo_expire[DD_WRITE], 0, INT_MA
+ STORE_JIFFIES(deadline_prio_aging_expire_store, &dd->prio_aging_expire, 0, INT_MAX);
+ STORE_INT(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX);
+ STORE_INT(deadline_front_merges_store, &dd->front_merges, 0, 1);
+-STORE_INT(deadline_async_depth_store, &dd->front_merges, 1, INT_MAX);
++STORE_INT(deadline_async_depth_store, &dd->async_depth, 1, INT_MAX);
+ STORE_INT(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX);
+ #undef STORE_FUNCTION
+ #undef STORE_INT
+diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c
+index 4dc2261cdeefb..788d90749715a 100644
+--- a/crypto/jitterentropy.c
++++ b/crypto/jitterentropy.c
+@@ -265,7 +265,6 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
+ {
+ 	__u64 delta2 = jent_delta(ec->last_delta, current_delta);
+ 	__u64 delta3 = jent_delta(ec->last_delta2, delta2);
+-	unsigned int delta_masked = current_delta & JENT_APT_WORD_MASK;
+ 
+ 	ec->last_delta = current_delta;
+ 	ec->last_delta2 = delta2;
+@@ -274,7 +273,7 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
+ 	 * Insert the result of the comparison of two back-to-back time
+ 	 * deltas.
+ 	 */
+-	jent_apt_insert(ec, delta_masked);
++	jent_apt_insert(ec, current_delta);
+ 
+ 	if (!current_delta || !delta2 || !delta3) {
+ 		/* RCT with a stuck bit */
+diff --git a/drivers/acpi/acpica/exfield.c b/drivers/acpi/acpica/exfield.c
+index 06f3c9df1e22d..8618500f23b39 100644
+--- a/drivers/acpi/acpica/exfield.c
++++ b/drivers/acpi/acpica/exfield.c
+@@ -330,12 +330,7 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
+ 		       obj_desc->field.base_byte_offset,
+ 		       source_desc->buffer.pointer, data_length);
+ 
+-		if ((obj_desc->field.region_obj->region.address ==
+-		     PCC_MASTER_SUBSPACE
+-		     && MASTER_SUBSPACE_COMMAND(obj_desc->field.
+-						base_byte_offset))
+-		    || GENERIC_SUBSPACE_COMMAND(obj_desc->field.
+-						base_byte_offset)) {
++		if (MASTER_SUBSPACE_COMMAND(obj_desc->field.base_byte_offset)) {
+ 
+ 			/* Perform the write */
+ 
+diff --git a/drivers/acpi/acpica/exoparg1.c b/drivers/acpi/acpica/exoparg1.c
+index b639e930d6429..44b7c350ed5ca 100644
+--- a/drivers/acpi/acpica/exoparg1.c
++++ b/drivers/acpi/acpica/exoparg1.c
+@@ -1007,7 +1007,8 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
+ 						    (walk_state, return_desc,
+ 						     &temp_desc);
+ 						if (ACPI_FAILURE(status)) {
+-							goto cleanup;
++							return_ACPI_STATUS
++							    (status);
+ 						}
+ 
+ 						return_desc = temp_desc;
+diff --git a/drivers/acpi/acpica/hwesleep.c b/drivers/acpi/acpica/hwesleep.c
+index 808fdf54aeebf..7ee2939c08cd4 100644
+--- a/drivers/acpi/acpica/hwesleep.c
++++ b/drivers/acpi/acpica/hwesleep.c
+@@ -104,7 +104,9 @@ acpi_status acpi_hw_extended_sleep(u8 sleep_state)
+ 
+ 	/* Flush caches, as per ACPI specification */
+ 
+-	ACPI_FLUSH_CPU_CACHE();
++	if (sleep_state < ACPI_STATE_S4) {
++		ACPI_FLUSH_CPU_CACHE();
++	}
+ 
+ 	status = acpi_os_enter_sleep(sleep_state, sleep_control, 0);
+ 	if (status == AE_CTRL_TERMINATE) {
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index 34a3825f25d37..5efa3d8e483e0 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -110,7 +110,9 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ 
+ 	/* Flush caches, as per ACPI specification */
+ 
+-	ACPI_FLUSH_CPU_CACHE();
++	if (sleep_state < ACPI_STATE_S4) {
++		ACPI_FLUSH_CPU_CACHE();
++	}
+ 
+ 	status = acpi_os_enter_sleep(sleep_state, pm1a_control, pm1b_control);
+ 	if (status == AE_CTRL_TERMINATE) {
+diff --git a/drivers/acpi/acpica/hwxfsleep.c b/drivers/acpi/acpica/hwxfsleep.c
+index e4cde23a29061..ba77598ee43e8 100644
+--- a/drivers/acpi/acpica/hwxfsleep.c
++++ b/drivers/acpi/acpica/hwxfsleep.c
+@@ -162,8 +162,6 @@ acpi_status acpi_enter_sleep_state_s4bios(void)
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
+-	ACPI_FLUSH_CPU_CACHE();
+-
+ 	status = acpi_hw_write_port(acpi_gbl_FADT.smi_command,
+ 				    (u32)acpi_gbl_FADT.s4_bios_request, 8);
+ 	if (ACPI_FAILURE(status)) {
+diff --git a/drivers/acpi/acpica/utdelete.c b/drivers/acpi/acpica/utdelete.c
+index e5ba9795ec696..8d7736d2d2699 100644
+--- a/drivers/acpi/acpica/utdelete.c
++++ b/drivers/acpi/acpica/utdelete.c
+@@ -422,6 +422,7 @@ acpi_ut_update_ref_count(union acpi_operand_object *object, u32 action)
+ 			ACPI_WARNING((AE_INFO,
+ 				      "Obj %p, Reference Count is already zero, cannot decrement\n",
+ 				      object));
++			return;
+ 		}
+ 
+ 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_ALLOCATIONS,
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 8afa85d6eb6a7..ead0114f27c9f 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -53,6 +53,7 @@ static int battery_bix_broken_package;
+ static int battery_notification_delay_ms;
+ static int battery_ac_is_broken;
+ static int battery_check_pmic = 1;
++static int battery_quirk_notcharging;
+ static unsigned int cache_time = 1000;
+ module_param(cache_time, uint, 0644);
+ MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
+@@ -217,6 +218,8 @@ static int acpi_battery_get_property(struct power_supply *psy,
+ 			val->intval = POWER_SUPPLY_STATUS_CHARGING;
+ 		else if (acpi_battery_is_charged(battery))
+ 			val->intval = POWER_SUPPLY_STATUS_FULL;
++		else if (battery_quirk_notcharging)
++			val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ 		else
+ 			val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+ 		break;
+@@ -1111,6 +1114,12 @@ battery_do_not_check_pmic_quirk(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
++static int __init battery_quirk_not_charging(const struct dmi_system_id *d)
++{
++	battery_quirk_notcharging = 1;
++	return 0;
++}
++
+ static const struct dmi_system_id bat_dmi_table[] __initconst = {
+ 	{
+ 		/* NEC LZ750/LS */
+@@ -1155,6 +1164,19 @@ static const struct dmi_system_id bat_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
+ 		},
+ 	},
++	{
++		/*
++		 * On Lenovo ThinkPads the BIOS specification defines
++		 * a state when the bits for charging and discharging
++		 * are both set to 0. That state is "Not Charging".
++		 */
++		.callback = battery_quirk_not_charging,
++		.ident = "Lenovo ThinkPad",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index fa923a9292244..dd535b4b9a160 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -98,8 +98,8 @@ int acpi_bus_get_status(struct acpi_device *device)
+ 	acpi_status status;
+ 	unsigned long long sta;
+ 
+-	if (acpi_device_always_present(device)) {
+-		acpi_set_device_status(device, ACPI_STA_DEFAULT);
++	if (acpi_device_override_status(device, &sta)) {
++		acpi_set_device_status(device, sta);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index b62c87b8ce4a9..12a156d8283e6 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -411,7 +411,7 @@ bool acpi_cpc_valid(void)
+ 	struct cpc_desc *cpc_ptr;
+ 	int cpu;
+ 
+-	for_each_possible_cpu(cpu) {
++	for_each_present_cpu(cpu) {
+ 		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
+ 		if (!cpc_ptr)
+ 			return false;
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index a6366d3f0c786..b9c44e6c5e400 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -166,6 +166,7 @@ struct acpi_ec_query {
+ 	struct transaction transaction;
+ 	struct work_struct work;
+ 	struct acpi_ec_query_handler *handler;
++	struct acpi_ec *ec;
+ };
+ 
+ static int acpi_ec_query(struct acpi_ec *ec, u8 *data);
+@@ -452,6 +453,7 @@ static void acpi_ec_submit_query(struct acpi_ec *ec)
+ 		ec_dbg_evt("Command(%s) submitted/blocked",
+ 			   acpi_ec_cmd_string(ACPI_EC_COMMAND_QUERY));
+ 		ec->nr_pending_queries++;
++		ec->events_in_progress++;
+ 		queue_work(ec_wq, &ec->work);
+ 	}
+ }
+@@ -518,7 +520,7 @@ static void acpi_ec_enable_event(struct acpi_ec *ec)
+ #ifdef CONFIG_PM_SLEEP
+ static void __acpi_ec_flush_work(void)
+ {
+-	drain_workqueue(ec_wq); /* flush ec->work */
++	flush_workqueue(ec_wq); /* flush ec->work */
+ 	flush_workqueue(ec_query_wq); /* flush queries */
+ }
+ 
+@@ -1103,7 +1105,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
+ }
+ EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
+ 
+-static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
++static struct acpi_ec_query *acpi_ec_create_query(struct acpi_ec *ec, u8 *pval)
+ {
+ 	struct acpi_ec_query *q;
+ 	struct transaction *t;
+@@ -1111,11 +1113,13 @@ static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
+ 	q = kzalloc(sizeof (struct acpi_ec_query), GFP_KERNEL);
+ 	if (!q)
+ 		return NULL;
++
+ 	INIT_WORK(&q->work, acpi_ec_event_processor);
+ 	t = &q->transaction;
+ 	t->command = ACPI_EC_COMMAND_QUERY;
+ 	t->rdata = pval;
+ 	t->rlen = 1;
++	q->ec = ec;
+ 	return q;
+ }
+ 
+@@ -1132,13 +1136,21 @@ static void acpi_ec_event_processor(struct work_struct *work)
+ {
+ 	struct acpi_ec_query *q = container_of(work, struct acpi_ec_query, work);
+ 	struct acpi_ec_query_handler *handler = q->handler;
++	struct acpi_ec *ec = q->ec;
+ 
+ 	ec_dbg_evt("Query(0x%02x) started", handler->query_bit);
++
+ 	if (handler->func)
+ 		handler->func(handler->data);
+ 	else if (handler->handle)
+ 		acpi_evaluate_object(handler->handle, NULL, NULL, NULL);
++
+ 	ec_dbg_evt("Query(0x%02x) stopped", handler->query_bit);
++
++	spin_lock_irq(&ec->lock);
++	ec->queries_in_progress--;
++	spin_unlock_irq(&ec->lock);
++
+ 	acpi_ec_delete_query(q);
+ }
+ 
+@@ -1148,7 +1160,7 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
+ 	int result;
+ 	struct acpi_ec_query *q;
+ 
+-	q = acpi_ec_create_query(&value);
++	q = acpi_ec_create_query(ec, &value);
+ 	if (!q)
+ 		return -ENOMEM;
+ 
+@@ -1170,19 +1182,20 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
+ 	}
+ 
+ 	/*
+-	 * It is reported that _Qxx are evaluated in a parallel way on
+-	 * Windows:
++	 * It is reported that _Qxx are evaluated in a parallel way on Windows:
+ 	 * https://bugzilla.kernel.org/show_bug.cgi?id=94411
+ 	 *
+-	 * Put this log entry before schedule_work() in order to make
+-	 * it appearing before any other log entries occurred during the
+-	 * work queue execution.
++	 * Put this log entry before queue_work() to make it appear in the log
++	 * before any other messages emitted during workqueue handling.
+ 	 */
+ 	ec_dbg_evt("Query(0x%02x) scheduled", value);
+-	if (!queue_work(ec_query_wq, &q->work)) {
+-		ec_dbg_evt("Query(0x%02x) overlapped", value);
+-		result = -EBUSY;
+-	}
++
++	spin_lock_irq(&ec->lock);
++
++	ec->queries_in_progress++;
++	queue_work(ec_query_wq, &q->work);
++
++	spin_unlock_irq(&ec->lock);
+ 
+ err_exit:
+ 	if (result)
+@@ -1240,6 +1253,10 @@ static void acpi_ec_event_handler(struct work_struct *work)
+ 	ec_dbg_evt("Event stopped");
+ 
+ 	acpi_ec_check_event(ec);
++
++	spin_lock_irqsave(&ec->lock, flags);
++	ec->events_in_progress--;
++	spin_unlock_irqrestore(&ec->lock, flags);
+ }
+ 
+ static void acpi_ec_handle_interrupt(struct acpi_ec *ec)
+@@ -2021,6 +2038,7 @@ void acpi_ec_set_gpe_wake_mask(u8 action)
+ 
+ bool acpi_ec_dispatch_gpe(void)
+ {
++	bool work_in_progress;
+ 	u32 ret;
+ 
+ 	if (!first_ec)
+@@ -2041,8 +2059,19 @@ bool acpi_ec_dispatch_gpe(void)
+ 	if (ret == ACPI_INTERRUPT_HANDLED)
+ 		pm_pr_dbg("ACPI EC GPE dispatched\n");
+ 
+-	/* Flush the event and query workqueues. */
+-	acpi_ec_flush_work();
++	/* Drain EC work. */
++	do {
++		acpi_ec_flush_work();
++
++		pm_pr_dbg("ACPI EC work flushed\n");
++
++		spin_lock_irq(&first_ec->lock);
++
++		work_in_progress = first_ec->events_in_progress +
++			first_ec->queries_in_progress > 0;
++
++		spin_unlock_irq(&first_ec->lock);
++	} while (work_in_progress && !pm_wakeup_pending());
+ 
+ 	return false;
+ }
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index d91b560e88674..54b2be94d23dc 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -183,6 +183,8 @@ struct acpi_ec {
+ 	struct work_struct work;
+ 	unsigned long timestamp;
+ 	unsigned long nr_pending_queries;
++	unsigned int events_in_progress;
++	unsigned int queries_in_progress;
+ 	bool busy_polling;
+ 	unsigned int polling_guard;
+ };
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 2c80765670bc7..25d9f04f19959 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -1695,6 +1695,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
+ {
+ 	struct list_head resource_list;
+ 	bool is_serial_bus_slave = false;
++	static const struct acpi_device_id ignore_serial_bus_ids[] = {
+ 	/*
+ 	 * These devices have multiple I2cSerialBus resources and an i2c-client
+ 	 * must be instantiated for each, each with its own i2c_device_id.
+@@ -1703,11 +1704,18 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
+ 	 * drivers/platform/x86/i2c-multi-instantiate.c driver, which knows
+ 	 * which i2c_device_id to use for each resource.
+ 	 */
+-	static const struct acpi_device_id i2c_multi_instantiate_ids[] = {
+ 		{"BSG1160", },
+ 		{"BSG2150", },
+ 		{"INT33FE", },
+ 		{"INT3515", },
++	/*
++	 * HIDs of device with an UartSerialBusV2 resource for which userspace
++	 * expects a regular tty cdev to be created (instead of the in kernel
++	 * serdev) and which have a kernel driver which expects a platform_dev
++	 * such as the rfkill-gpio driver.
++	 */
++		{"BCM4752", },
++		{"LNV4752", },
+ 		{}
+ 	};
+ 
+@@ -1721,8 +1729,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
+ 	     fwnode_property_present(&device->fwnode, "baud")))
+ 		return true;
+ 
+-	/* Instantiate a pdev for the i2c-multi-instantiate drv to bind to */
+-	if (!acpi_match_device_ids(device, i2c_multi_instantiate_ids))
++	if (!acpi_match_device_ids(device, ignore_serial_bus_ids))
+ 		return false;
+ 
+ 	INIT_LIST_HEAD(&resource_list);
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index f22f23933063b..b3fb428461c6f 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -22,58 +22,71 @@
+  * Some BIOS-es (temporarily) hide specific APCI devices to work around Windows
+  * driver bugs. We use DMI matching to match known cases of this.
+  *
+- * We work around this by always reporting ACPI_STA_DEFAULT for these
+- * devices. Note this MUST only be done for devices where this is safe.
++ * Likewise sometimes some not-actually present devices are sometimes
++ * reported as present, which may cause issues.
+  *
+- * This forcing of devices to be present is limited to specific CPU (SoC)
+- * models both to avoid potentially causing trouble on other models and
+- * because some HIDs are re-used on different SoCs for completely
+- * different devices.
++ * We work around this by using the below quirk list to override the status
++ * reported by the _STA method with a fixed value (ACPI_STA_DEFAULT or 0).
++ * Note this MUST only be done for devices where this is safe.
++ *
++ * This status overriding is limited to specific CPU (SoC) models both to
++ * avoid potentially causing trouble on other models and because some HIDs
++ * are re-used on different SoCs for completely different devices.
+  */
+-struct always_present_id {
++struct override_status_id {
+ 	struct acpi_device_id hid[2];
+ 	struct x86_cpu_id cpu_ids[2];
+ 	struct dmi_system_id dmi_ids[2]; /* Optional */
+ 	const char *uid;
++	const char *path;
++	unsigned long long status;
+ };
+ 
+-#define X86_MATCH(model)	X86_MATCH_INTEL_FAM6_MODEL(model, NULL)
+-
+-#define ENTRY(hid, uid, cpu_models, dmi...) {				\
++#define ENTRY(status, hid, uid, path, cpu_model, dmi...) {		\
+ 	{ { hid, }, {} },						\
+-	{ cpu_models, {} },						\
++	{ X86_MATCH_INTEL_FAM6_MODEL(cpu_model, NULL), {} },		\
+ 	{ { .matches = dmi }, {} },					\
+ 	uid,								\
++	path,								\
++	status,								\
+ }
+ 
+-static const struct always_present_id always_present_ids[] = {
++#define PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \
++	ENTRY(ACPI_STA_DEFAULT, hid, uid, NULL, cpu_model, dmi)
++
++#define NOT_PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \
++	ENTRY(0, hid, uid, NULL, cpu_model, dmi)
++
++#define PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \
++	ENTRY(ACPI_STA_DEFAULT, "", NULL, path, cpu_model, dmi)
++
++#define NOT_PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \
++	ENTRY(0, "", NULL, path, cpu_model, dmi)
++
++static const struct override_status_id override_status_ids[] = {
+ 	/*
+ 	 * Bay / Cherry Trail PWM directly poked by GPU driver in win10,
+ 	 * but Linux uses a separate PWM driver, harmless if not used.
+ 	 */
+-	ENTRY("80860F09", "1", X86_MATCH(ATOM_SILVERMONT), {}),
+-	ENTRY("80862288", "1", X86_MATCH(ATOM_AIRMONT), {}),
++	PRESENT_ENTRY_HID("80860F09", "1", ATOM_SILVERMONT, {}),
++	PRESENT_ENTRY_HID("80862288", "1", ATOM_AIRMONT, {}),
+ 
+-	/* Lenovo Yoga Book uses PWM2 for keyboard backlight control */
+-	ENTRY("80862289", "2", X86_MATCH(ATOM_AIRMONT), {
+-			DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
+-		}),
+ 	/*
+ 	 * The INT0002 device is necessary to clear wakeup interrupt sources
+ 	 * on Cherry Trail devices, without it we get nobody cared IRQ msgs.
+ 	 */
+-	ENTRY("INT0002", "1", X86_MATCH(ATOM_AIRMONT), {}),
++	PRESENT_ENTRY_HID("INT0002", "1", ATOM_AIRMONT, {}),
+ 	/*
+ 	 * On the Dell Venue 11 Pro 7130 and 7139, the DSDT hides
+ 	 * the touchscreen ACPI device until a certain time
+ 	 * after _SB.PCI0.GFX0.LCD.LCD1._ON gets called has passed
+ 	 * *and* _STA has been called at least 3 times since.
+ 	 */
+-	ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), {
++	PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"),
+ 	      }),
+-	ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), {
++	PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, {
+ 		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7139"),
+ 	      }),
+@@ -81,54 +94,83 @@ static const struct always_present_id always_present_ids[] = {
+ 	/*
+ 	 * The GPD win BIOS dated 20170221 has disabled the accelerometer, the
+ 	 * drivers sometimes cause crashes under Windows and this is how the
+-	 * manufacturer has solved this :| Note that the the DMI data is less
+-	 * generic then it seems, a board_vendor of "AMI Corporation" is quite
+-	 * rare and a board_name of "Default String" also is rare.
++	 * manufacturer has solved this :|  The DMI match may not seem unique,
++	 * but it is. In the 67000+ DMI decode dumps from linux-hardware.org
++	 * only 116 have board_vendor set to "AMI Corporation" and of those 116
++	 * only the GPD win and pocket entries' board_name is "Default string".
+ 	 *
+ 	 * Unfortunately the GPD pocket also uses these strings and its BIOS
+ 	 * was copy-pasted from the GPD win, so it has a disabled KIOX000A
+ 	 * node which we should not enable, thus we also check the BIOS date.
+ 	 */
+-	ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
++	PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
+ 		DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		DMI_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
+ 		DMI_MATCH(DMI_BIOS_DATE, "02/21/2017")
+ 	      }),
+-	ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
++	PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
+ 		DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		DMI_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
+ 		DMI_MATCH(DMI_BIOS_DATE, "03/20/2017")
+ 	      }),
+-	ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
++	PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
+ 		DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		DMI_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
+ 		DMI_MATCH(DMI_BIOS_DATE, "05/25/2017")
+ 	      }),
++
++	/*
++	 * The GPD win/pocket have a PCI wifi card, but its DSDT has the SDIO
++	 * mmc controller enabled and that has a child-device which _PS3
++	 * method sets a GPIO causing the PCI wifi card to turn off.
++	 * See above remark about uniqueness of the DMI match.
++	 */
++	NOT_PRESENT_ENTRY_PATH("\\_SB_.PCI0.SDHB.BRC1", ATOM_AIRMONT, {
++		DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++		DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
++		DMI_EXACT_MATCH(DMI_BOARD_SERIAL, "Default string"),
++		DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
++	      }),
+ };
+ 
+-bool acpi_device_always_present(struct acpi_device *adev)
++bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status)
+ {
+ 	bool ret = false;
+ 	unsigned int i;
+ 
+-	for (i = 0; i < ARRAY_SIZE(always_present_ids); i++) {
+-		if (acpi_match_device_ids(adev, always_present_ids[i].hid))
++	for (i = 0; i < ARRAY_SIZE(override_status_ids); i++) {
++		if (!x86_match_cpu(override_status_ids[i].cpu_ids))
+ 			continue;
+ 
+-		if (!adev->pnp.unique_id ||
+-		    strcmp(adev->pnp.unique_id, always_present_ids[i].uid))
++		if (override_status_ids[i].dmi_ids[0].matches[0].slot &&
++		    !dmi_check_system(override_status_ids[i].dmi_ids))
+ 			continue;
+ 
+-		if (!x86_match_cpu(always_present_ids[i].cpu_ids))
+-			continue;
++		if (override_status_ids[i].path) {
++			struct acpi_buffer path = { ACPI_ALLOCATE_BUFFER, NULL };
++			bool match;
+ 
+-		if (always_present_ids[i].dmi_ids[0].matches[0].slot &&
+-		    !dmi_check_system(always_present_ids[i].dmi_ids))
+-			continue;
++			if (acpi_get_name(adev->handle, ACPI_FULL_PATHNAME, &path))
++				continue;
++
++			match = strcmp((char *)path.pointer, override_status_ids[i].path) == 0;
++			kfree(path.pointer);
++
++			if (!match)
++				continue;
++		} else {
++			if (acpi_match_device_ids(adev, override_status_ids[i].hid))
++				continue;
++
++			if (!adev->pnp.unique_id ||
++			    strcmp(adev->pnp.unique_id, override_status_ids[i].uid))
++				continue;
++		}
+ 
++		*status = override_status_ids[i].status;
+ 		ret = true;
+ 		break;
+ 	}
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index c75fb600740cc..99ae919255f4d 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1608,15 +1608,21 @@ static void binder_cleanup_transaction(struct binder_transaction *t,
+ /**
+  * binder_get_object() - gets object and checks for valid metadata
+  * @proc:	binder_proc owning the buffer
++ * @u:		sender's user pointer to base of buffer
+  * @buffer:	binder_buffer that we're parsing.
+  * @offset:	offset in the @buffer at which to validate an object.
+  * @object:	struct binder_object to read into
+  *
+- * Return:	If there's a valid metadata object at @offset in @buffer, the
++ * Copy the binder object at the given offset into @object. If @u is
++ * provided then the copy is from the sender's buffer. If not, then
++ * it is copied from the target's @buffer.
++ *
++ * Return:	If there's a valid metadata object at @offset, the
+  *		size of that object. Otherwise, it returns zero. The object
+  *		is read into the struct binder_object pointed to by @object.
+  */
+ static size_t binder_get_object(struct binder_proc *proc,
++				const void __user *u,
+ 				struct binder_buffer *buffer,
+ 				unsigned long offset,
+ 				struct binder_object *object)
+@@ -1626,10 +1632,16 @@ static size_t binder_get_object(struct binder_proc *proc,
+ 	size_t object_size = 0;
+ 
+ 	read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset);
+-	if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
+-	    binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
+-					  offset, read_size))
++	if (offset > buffer->data_size || read_size < sizeof(*hdr))
+ 		return 0;
++	if (u) {
++		if (copy_from_user(object, u + offset, read_size))
++			return 0;
++	} else {
++		if (binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
++						  offset, read_size))
++			return 0;
++	}
+ 
+ 	/* Ok, now see if we read a complete object. */
+ 	hdr = &object->hdr;
+@@ -1702,7 +1714,7 @@ static struct binder_buffer_object *binder_validate_ptr(
+ 					  b, buffer_offset,
+ 					  sizeof(object_offset)))
+ 		return NULL;
+-	object_size = binder_get_object(proc, b, object_offset, object);
++	object_size = binder_get_object(proc, NULL, b, object_offset, object);
+ 	if (!object_size || object->hdr.type != BINDER_TYPE_PTR)
+ 		return NULL;
+ 	if (object_offsetp)
+@@ -1767,7 +1779,8 @@ static bool binder_validate_fixup(struct binder_proc *proc,
+ 		unsigned long buffer_offset;
+ 		struct binder_object last_object;
+ 		struct binder_buffer_object *last_bbo;
+-		size_t object_size = binder_get_object(proc, b, last_obj_offset,
++		size_t object_size = binder_get_object(proc, NULL, b,
++						       last_obj_offset,
+ 						       &last_object);
+ 		if (object_size != sizeof(*last_bbo))
+ 			return false;
+@@ -1882,7 +1895,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 		if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
+ 						   buffer, buffer_offset,
+ 						   sizeof(object_offset)))
+-			object_size = binder_get_object(proc, buffer,
++			object_size = binder_get_object(proc, NULL, buffer,
+ 							object_offset, &object);
+ 		if (object_size == 0) {
+ 			pr_err("transaction release %d bad object at offset %lld, size %zd\n",
+@@ -2269,8 +2282,8 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
+ 		if (!ret)
+ 			ret = binder_translate_fd(fd, offset, t, thread,
+ 						  in_reply_to);
+-		if (ret < 0)
+-			return ret;
++		if (ret)
++			return ret > 0 ? -EINVAL : ret;
+ 	}
+ 	return 0;
+ }
+@@ -2455,6 +2468,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 	binder_size_t off_start_offset, off_end_offset;
+ 	binder_size_t off_min;
+ 	binder_size_t sg_buf_offset, sg_buf_end_offset;
++	binder_size_t user_offset = 0;
+ 	struct binder_proc *target_proc = NULL;
+ 	struct binder_thread *target_thread = NULL;
+ 	struct binder_node *target_node = NULL;
+@@ -2469,6 +2483,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 	int t_debug_id = atomic_inc_return(&binder_last_id);
+ 	char *secctx = NULL;
+ 	u32 secctx_sz = 0;
++	const void __user *user_buffer = (const void __user *)
++				(uintptr_t)tr->data.ptr.buffer;
+ 
+ 	e = binder_transaction_log_add(&binder_transaction_log);
+ 	e->debug_id = t_debug_id;
+@@ -2780,19 +2796,6 @@ static void binder_transaction(struct binder_proc *proc,
+ 	t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
+ 	trace_binder_transaction_alloc_buf(t->buffer);
+ 
+-	if (binder_alloc_copy_user_to_buffer(
+-				&target_proc->alloc,
+-				t->buffer, 0,
+-				(const void __user *)
+-					(uintptr_t)tr->data.ptr.buffer,
+-				tr->data_size)) {
+-		binder_user_error("%d:%d got transaction with invalid data ptr\n",
+-				proc->pid, thread->pid);
+-		return_error = BR_FAILED_REPLY;
+-		return_error_param = -EFAULT;
+-		return_error_line = __LINE__;
+-		goto err_copy_data_failed;
+-	}
+ 	if (binder_alloc_copy_user_to_buffer(
+ 				&target_proc->alloc,
+ 				t->buffer,
+@@ -2837,6 +2840,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		size_t object_size;
+ 		struct binder_object object;
+ 		binder_size_t object_offset;
++		binder_size_t copy_size;
+ 
+ 		if (binder_alloc_copy_from_buffer(&target_proc->alloc,
+ 						  &object_offset,
+@@ -2848,8 +2852,27 @@ static void binder_transaction(struct binder_proc *proc,
+ 			return_error_line = __LINE__;
+ 			goto err_bad_offset;
+ 		}
+-		object_size = binder_get_object(target_proc, t->buffer,
+-						object_offset, &object);
++
++		/*
++		 * Copy the source user buffer up to the next object
++		 * that will be processed.
++		 */
++		copy_size = object_offset - user_offset;
++		if (copy_size && (user_offset > object_offset ||
++				binder_alloc_copy_user_to_buffer(
++					&target_proc->alloc,
++					t->buffer, user_offset,
++					user_buffer + user_offset,
++					copy_size))) {
++			binder_user_error("%d:%d got transaction with invalid data ptr\n",
++					proc->pid, thread->pid);
++			return_error = BR_FAILED_REPLY;
++			return_error_param = -EFAULT;
++			return_error_line = __LINE__;
++			goto err_copy_data_failed;
++		}
++		object_size = binder_get_object(target_proc, user_buffer,
++				t->buffer, object_offset, &object);
+ 		if (object_size == 0 || object_offset < off_min) {
+ 			binder_user_error("%d:%d got transaction with invalid offset (%lld, min %lld max %lld) or object.\n",
+ 					  proc->pid, thread->pid,
+@@ -2861,6 +2884,11 @@ static void binder_transaction(struct binder_proc *proc,
+ 			return_error_line = __LINE__;
+ 			goto err_bad_offset;
+ 		}
++		/*
++		 * Set offset to the next buffer fragment to be
++		 * copied
++		 */
++		user_offset = object_offset + object_size;
+ 
+ 		hdr = &object.hdr;
+ 		off_min = object_offset + object_size;
+@@ -2956,9 +2984,14 @@ static void binder_transaction(struct binder_proc *proc,
+ 			}
+ 			ret = binder_translate_fd_array(fda, parent, t, thread,
+ 							in_reply_to);
+-			if (ret < 0) {
++			if (!ret)
++				ret = binder_alloc_copy_to_buffer(&target_proc->alloc,
++								  t->buffer,
++								  object_offset,
++								  fda, sizeof(*fda));
++			if (ret) {
+ 				return_error = BR_FAILED_REPLY;
+-				return_error_param = ret;
++				return_error_param = ret > 0 ? -EINVAL : ret;
+ 				return_error_line = __LINE__;
+ 				goto err_translate_failed;
+ 			}
+@@ -3028,6 +3061,19 @@ static void binder_transaction(struct binder_proc *proc,
+ 			goto err_bad_object_type;
+ 		}
+ 	}
++	/* Done processing objects, copy the rest of the buffer */
++	if (binder_alloc_copy_user_to_buffer(
++				&target_proc->alloc,
++				t->buffer, user_offset,
++				user_buffer + user_offset,
++				tr->data_size - user_offset)) {
++		binder_user_error("%d:%d got transaction with invalid data ptr\n",
++				proc->pid, thread->pid);
++		return_error = BR_FAILED_REPLY;
++		return_error_param = -EFAULT;
++		return_error_line = __LINE__;
++		goto err_copy_data_failed;
++	}
+ 	if (t->buffer->oneway_spam_suspect)
+ 		tcomplete->type = BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT;
+ 	else
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index fd034d7424472..b191bd17de891 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -485,8 +485,7 @@ static void device_link_release_fn(struct work_struct *work)
+ 	/* Ensure that all references to the link object have been dropped. */
+ 	device_link_synchronize_removal();
+ 
+-	while (refcount_dec_not_one(&link->rpm_active))
+-		pm_runtime_put(link->supplier);
++	pm_runtime_release_supplier(link, true);
+ 
+ 	put_device(link->consumer);
+ 	put_device(link->supplier);
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index d504cd4ab3cbf..38c2e1892a00e 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -305,19 +305,40 @@ static int rpm_get_suppliers(struct device *dev)
+ 	return 0;
+ }
+ 
++/**
++ * pm_runtime_release_supplier - Drop references to device link's supplier.
++ * @link: Target device link.
++ * @check_idle: Whether or not to check if the supplier device is idle.
++ *
++ * Drop all runtime PM references associated with @link to its supplier device
++ * and if @check_idle is set, check if that device is idle (and so it can be
++ * suspended).
++ */
++void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
++{
++	struct device *supplier = link->supplier;
++
++	/*
++	 * The additional power.usage_count check is a safety net in case
++	 * the rpm_active refcount becomes saturated, in which case
++	 * refcount_dec_not_one() would return true forever, but it is not
++	 * strictly necessary.
++	 */
++	while (refcount_dec_not_one(&link->rpm_active) &&
++	       atomic_read(&supplier->power.usage_count) > 0)
++		pm_runtime_put_noidle(supplier);
++
++	if (check_idle)
++		pm_request_idle(supplier);
++}
++
+ static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
+ {
+ 	struct device_link *link;
+ 
+ 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+-				device_links_read_lock_held()) {
+-
+-		while (refcount_dec_not_one(&link->rpm_active))
+-			pm_runtime_put_noidle(link->supplier);
+-
+-		if (try_to_suspend)
+-			pm_request_idle(link->supplier);
+-	}
++				device_links_read_lock_held())
++		pm_runtime_release_supplier(link, try_to_suspend);
+ }
+ 
+ static void rpm_put_suppliers(struct device *dev)
+@@ -1772,9 +1793,7 @@ void pm_runtime_drop_link(struct device_link *link)
+ 		return;
+ 
+ 	pm_runtime_drop_link_count(link->consumer);
+-
+-	while (refcount_dec_not_one(&link->rpm_active))
+-		pm_runtime_put(link->supplier);
++	pm_runtime_release_supplier(link, true);
+ }
+ 
+ static bool pm_runtime_need_not_resume(struct device *dev)
+diff --git a/drivers/base/property.c b/drivers/base/property.c
+index f1f35b48ab8b9..6df99e526ab0f 100644
+--- a/drivers/base/property.c
++++ b/drivers/base/property.c
+@@ -1206,8 +1206,10 @@ fwnode_graph_devcon_match(struct fwnode_handle *fwnode, const char *con_id,
+ 
+ 	fwnode_graph_for_each_endpoint(fwnode, ep) {
+ 		node = fwnode_graph_get_remote_port_parent(ep);
+-		if (!fwnode_device_is_available(node))
++		if (!fwnode_device_is_available(node)) {
++			fwnode_handle_put(node);
+ 			continue;
++		}
+ 
+ 		ret = match(node, con_id, data);
+ 		fwnode_handle_put(node);
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 21a0c2562ec06..f7811641ed5ae 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -647,6 +647,7 @@ int regmap_attach_dev(struct device *dev, struct regmap *map,
+ 	if (ret)
+ 		return ret;
+ 
++	regmap_debugfs_exit(map);
+ 	regmap_debugfs_init(map);
+ 
+ 	/* Add a devres resource for dev_get_regmap() */
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index 4debcea4fb12d..0a482212c7e8e 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -529,7 +529,7 @@ software_node_get_reference_args(const struct fwnode_handle *fwnode,
+ 		return -ENOENT;
+ 
+ 	if (nargs_prop) {
+-		error = property_entry_read_int_array(swnode->node->properties,
++		error = property_entry_read_int_array(ref->node->properties,
+ 						      nargs_prop, sizeof(u32),
+ 						      &nargs_prop_val, 1);
+ 		if (error)
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index c4267da716fe6..8026125037ae8 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -1015,7 +1015,7 @@ static DECLARE_DELAYED_WORK(fd_timer, fd_timer_workfn);
+ static void cancel_activity(void)
+ {
+ 	do_floppy = NULL;
+-	cancel_delayed_work_sync(&fd_timer);
++	cancel_delayed_work(&fd_timer);
+ 	cancel_work_sync(&floppy_work);
+ }
+ 
+@@ -3081,6 +3081,8 @@ static void raw_cmd_free(struct floppy_raw_cmd **ptr)
+ 	}
+ }
+ 
++#define MAX_LEN (1UL << MAX_ORDER << PAGE_SHIFT)
++
+ static int raw_cmd_copyin(int cmd, void __user *param,
+ 				 struct floppy_raw_cmd **rcmd)
+ {
+@@ -3108,7 +3110,7 @@ loop:
+ 	ptr->resultcode = 0;
+ 
+ 	if (ptr->flags & (FD_RAW_READ | FD_RAW_WRITE)) {
+-		if (ptr->length <= 0)
++		if (ptr->length <= 0 || ptr->length >= MAX_LEN)
+ 			return -EINVAL;
+ 		ptr->kernel_data = (char *)fd_dma_mem_alloc(ptr->length);
+ 		fallback_on_nodma_alloc(&ptr->kernel_data, ptr->length);
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 323af5c9c8026..e23aac1a83f73 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -340,9 +340,9 @@ static int nullb_update_nr_hw_queues(struct nullb_device *dev,
+ 		return 0;
+ 
+ 	/*
+-	 * Make sure at least one queue exists for each of submit and poll.
++	 * Make sure at least one submit queue exists.
+ 	 */
+-	if (!submit_queues || !poll_queues)
++	if (!submit_queues)
+ 		return -EINVAL;
+ 
+ 	/*
+@@ -1891,7 +1891,7 @@ static int null_init_tag_set(struct nullb *nullb, struct blk_mq_tag_set *set)
+ 	if (g_shared_tag_bitmap)
+ 		set->flags |= BLK_MQ_F_TAG_HCTX_SHARED;
+ 	set->driver_data = nullb;
+-	if (g_poll_queues)
++	if (poll_queues)
+ 		set->nr_maps = 3;
+ 	else
+ 		set->nr_maps = 1;
+@@ -1918,8 +1918,6 @@ static int null_validate_conf(struct nullb_device *dev)
+ 
+ 	if (dev->poll_queues > g_poll_queues)
+ 		dev->poll_queues = g_poll_queues;
+-	else if (dev->poll_queues == 0)
+-		dev->poll_queues = 1;
+ 	dev->prev_poll_queues = dev->poll_queues;
+ 
+ 	dev->queue_mode = min_t(unsigned int, dev->queue_mode, NULL_Q_MQ);
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index b11567b0fd9ab..851a0c9b8fae5 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2493,10 +2493,14 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 	case 0x12:      /* ThP */
+ 	case 0x13:      /* HrP */
+ 	case 0x14:      /* CcP */
+-		/* Some legacy bootloader devices from JfP supports both old
+-		 * and TLV based HCI_Intel_Read_Version command. But we don't
+-		 * want to use the TLV based setup routines for those legacy
+-		 * bootloader device.
++		/* Some legacy bootloader devices starting from JfP,
++		 * the operational firmware supports both old and TLV based
++		 * HCI_Intel_Read_Version command based on the command
++		 * parameter.
++		 *
++		 * For upgrading firmware case, the TLV based version cannot
++		 * be used because the firmware filename for legacy bootloader
++		 * is based on the old format.
+ 		 *
+ 		 * Also, it is not easy to convert TLV based version from the
+ 		 * legacy version format.
+@@ -2508,6 +2512,20 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		err = btintel_read_version(hdev, &ver);
+ 		if (err)
+ 			return err;
++
++		/* Apply the device specific HCI quirks
++		 *
++		 * All Legacy bootloader devices support WBS
++		 */
++		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
++
++		/* Valid LE States quirk for JfP/ThP familiy */
++		if (ver.hw_variant == 0x11 || ver.hw_variant == 0x12)
++			set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++
++		/* Setup MSFT Extension support */
++		btintel_set_msft_opcode(hdev, ver.hw_variant);
++
+ 		err = btintel_bootloader_setup(hdev, &ver);
+ 		break;
+ 	case 0x17:
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 9872ef18f9fea..1cbdeca1fdc4a 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -1042,6 +1042,8 @@ static int btmtksdio_runtime_suspend(struct device *dev)
+ 	if (!bdev)
+ 		return 0;
+ 
++	sdio_set_host_pm_flags(func, MMC_PM_KEEP_POWER);
++
+ 	sdio_claim_host(bdev->func);
+ 
+ 	sdio_writel(bdev->func, C_FW_OWN_REQ_SET, MTK_REG_CHLPCR, &err);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index c923c38658baa..ea72afb7abead 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2600,6 +2600,7 @@ static int btusb_mtk_setup_firmware_79xx(struct hci_dev *hdev, const char *fwnam
+ 				} else {
+ 					bt_dev_err(hdev, "Failed wmt patch dwnld status (%d)",
+ 						   status);
++					err = -EIO;
+ 					goto err_release_fw;
+ 				}
+ 			}
+@@ -2895,6 +2896,10 @@ static int btusb_mtk_setup(struct hci_dev *hdev)
+ 			"mediatek/BT_RAM_CODE_MT%04x_1_%x_hdr.bin",
+ 			 dev_id & 0xffff, (fw_version & 0xff) + 1);
+ 		err = btusb_mtk_setup_firmware_79xx(hdev, fw_bin_name);
++		if (err < 0) {
++			bt_dev_err(hdev, "Failed to set up firmware (%d)", err);
++			return err;
++		}
+ 
+ 		/* It's Device EndPoint Reset Option Register */
+ 		btusb_mtk_uhw_reg_write(data, MTK_EP_RST_OPT, MTK_EP_RST_IN_OUT_OPT);
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index ef54afa293574..7abf99f0ee399 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -1188,7 +1188,12 @@ static int bcm_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	dev->dev = &pdev->dev;
+-	dev->irq = platform_get_irq(pdev, 0);
++
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		return ret;
++
++	dev->irq = ret;
+ 
+ 	/* Initialize routing field to an unused value */
+ 	dev->pcm_int_params[0] = 0xff;
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index dd768a8ed7cbb..f6e91fb432a3b 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1928,6 +1928,9 @@ static int qca_power_off(struct hci_dev *hdev)
+ 	hu->hdev->hw_error = NULL;
+ 	hu->hdev->cmd_timeout = NULL;
+ 
++	del_timer_sync(&qca->wake_retrans_timer);
++	del_timer_sync(&qca->tx_idle_timer);
++
+ 	/* Stop sending shutdown command if soc crashes. */
+ 	if (soc_type != QCA_ROME
+ 		&& qca->memdump_state == QCA_MEMDUMP_IDLE) {
+@@ -2056,14 +2059,14 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 
+ 		qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
+ 					       GPIOD_OUT_LOW);
+-		if (!qcadev->bt_en && data->soc_type == QCA_WCN6750) {
++		if (IS_ERR_OR_NULL(qcadev->bt_en) && data->soc_type == QCA_WCN6750) {
+ 			dev_err(&serdev->dev, "failed to acquire BT_EN gpio\n");
+ 			power_ctrl_enabled = false;
+ 		}
+ 
+ 		qcadev->sw_ctrl = devm_gpiod_get_optional(&serdev->dev, "swctrl",
+ 					       GPIOD_IN);
+-		if (!qcadev->sw_ctrl && data->soc_type == QCA_WCN6750)
++		if (IS_ERR_OR_NULL(qcadev->sw_ctrl) && data->soc_type == QCA_WCN6750)
+ 			dev_warn(&serdev->dev, "failed to acquire SW_CTRL gpio\n");
+ 
+ 		qcadev->susclk = devm_clk_get_optional(&serdev->dev, NULL);
+@@ -2085,7 +2088,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 
+ 		qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
+ 					       GPIOD_OUT_LOW);
+-		if (!qcadev->bt_en) {
++		if (IS_ERR_OR_NULL(qcadev->bt_en)) {
+ 			dev_warn(&serdev->dev, "failed to acquire enable gpio\n");
+ 			power_ctrl_enabled = false;
+ 		}
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index b45db0db347c6..95af01bdd02a2 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -176,6 +176,8 @@ static ssize_t force_wakeup_write(struct file *file,
+ 	if (data->wakeup == enable)
+ 		return -EALREADY;
+ 
++	data->wakeup = enable;
++
+ 	return count;
+ }
+ 
+@@ -237,6 +239,8 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
+ 	if (opcode & 0x80)
+ 		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+ 
++	set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++
+ 	if (hci_register_dev(hdev) < 0) {
+ 		BT_ERR("Can't register HCI device");
+ 		hci_free_dev(hdev);
+diff --git a/drivers/bluetooth/virtio_bt.c b/drivers/bluetooth/virtio_bt.c
+index 57908ce4fae85..076e4942a3f0e 100644
+--- a/drivers/bluetooth/virtio_bt.c
++++ b/drivers/bluetooth/virtio_bt.c
+@@ -202,6 +202,9 @@ static void virtbt_rx_handle(struct virtio_bluetooth *vbt, struct sk_buff *skb)
+ 		hci_skb_pkt_type(skb) = pkt_type;
+ 		hci_recv_frame(vbt->hdev, skb);
+ 		break;
++	default:
++		kfree_skb(skb);
++		break;
+ 	}
+ }
+ 
+diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
+index 5aaca6d0f52b2..f1ec344175928 100644
+--- a/drivers/bus/mhi/core/init.c
++++ b/drivers/bus/mhi/core/init.c
+@@ -788,6 +788,7 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
+ 		mhi_chan->offload_ch = ch_cfg->offload_channel;
+ 		mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
+ 		mhi_chan->pre_alloc = ch_cfg->auto_queue;
++		mhi_chan->wake_capable = ch_cfg->wake_capable;
+ 
+ 		/*
+ 		 * If MHI host allocates buffers, then the channel direction
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index 547e6e769546a..bb9a2043f3a20 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -1053,7 +1053,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+ 	enum mhi_ee_type current_ee;
+ 	enum dev_st_transition next_state;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	u32 val;
++	u32 interval_us = 25000; /* poll register field every 25 milliseconds */
+ 	int ret;
+ 
+ 	dev_info(dev, "Requested to power ON\n");
+@@ -1070,10 +1070,6 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+ 	mutex_lock(&mhi_cntrl->pm_mutex);
+ 	mhi_cntrl->pm_state = MHI_PM_DISABLE;
+ 
+-	ret = mhi_init_irq_setup(mhi_cntrl);
+-	if (ret)
+-		goto error_setup_irq;
+-
+ 	/* Setup BHI INTVEC */
+ 	write_lock_irq(&mhi_cntrl->pm_lock);
+ 	mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+@@ -1087,7 +1083,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+ 		dev_err(dev, "%s is not a valid EE for power on\n",
+ 			TO_MHI_EXEC_STR(current_ee));
+ 		ret = -EIO;
+-		goto error_async_power_up;
++		goto error_exit;
+ 	}
+ 
+ 	state = mhi_get_mhi_state(mhi_cntrl);
+@@ -1096,20 +1092,12 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+ 
+ 	if (state == MHI_STATE_SYS_ERR) {
+ 		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+-		ret = wait_event_timeout(mhi_cntrl->state_event,
+-				MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
+-					mhi_read_reg_field(mhi_cntrl,
+-							   mhi_cntrl->regs,
+-							   MHICTRL,
+-							   MHICTRL_RESET_MASK,
+-							   MHICTRL_RESET_SHIFT,
+-							   &val) ||
+-					!val,
+-				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+-		if (!ret) {
+-			ret = -EIO;
++		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
++				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
++				 interval_us);
++		if (ret) {
+ 			dev_info(dev, "Failed to reset MHI due to syserr state\n");
+-			goto error_async_power_up;
++			goto error_exit;
+ 		}
+ 
+ 		/*
+@@ -1119,6 +1107,10 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+ 		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+ 	}
+ 
++	ret = mhi_init_irq_setup(mhi_cntrl);
++	if (ret)
++		goto error_exit;
++
+ 	/* Transition to next state */
+ 	next_state = MHI_IN_PBL(current_ee) ?
+ 		DEV_ST_TRANSITION_PBL : DEV_ST_TRANSITION_READY;
+@@ -1131,10 +1123,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+ 
+ 	return 0;
+ 
+-error_async_power_up:
+-	mhi_deinit_free_irq(mhi_cntrl);
+-
+-error_setup_irq:
++error_exit:
+ 	mhi_cntrl->pm_state = MHI_PM_DISABLE;
+ 	mutex_unlock(&mhi_cntrl->pm_mutex);
+ 
+diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
+index 4c577a7317091..b8b2811c7c0a9 100644
+--- a/drivers/bus/mhi/pci_generic.c
++++ b/drivers/bus/mhi/pci_generic.c
+@@ -1018,7 +1018,7 @@ static int __maybe_unused mhi_pci_freeze(struct device *dev)
+ 	 * context.
+ 	 */
+ 	if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {
+-		mhi_power_down(mhi_cntrl, false);
++		mhi_power_down(mhi_cntrl, true);
+ 		mhi_unprepare_after_power_down(mhi_cntrl);
+ 	}
+ 
+diff --git a/drivers/char/mwave/3780i.h b/drivers/char/mwave/3780i.h
+index 9ccb6b270b071..95164246afd1a 100644
+--- a/drivers/char/mwave/3780i.h
++++ b/drivers/char/mwave/3780i.h
+@@ -68,7 +68,7 @@ typedef struct {
+ 	unsigned char ClockControl:1;	/* RW: Clock control: 0=normal, 1=stop 3780i clocks */
+ 	unsigned char SoftReset:1;	/* RW: Soft reset 0=normal, 1=soft reset active */
+ 	unsigned char ConfigMode:1;	/* RW: Configuration mode, 0=normal, 1=config mode */
+-	unsigned char Reserved:5;	/* 0: Reserved */
++	unsigned short Reserved:13;	/* 0: Reserved */
+ } DSP_ISA_SLAVE_CONTROL;
+ 
+ 
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 7470ee24db2f9..a27ae3999ff32 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -912,12 +912,14 @@ static struct crng_state *select_crng(void)
+ 
+ /*
+  * crng_fast_load() can be called by code in the interrupt service
+- * path.  So we can't afford to dilly-dally.
++ * path.  So we can't afford to dilly-dally. Returns the number of
++ * bytes processed from cp.
+  */
+-static int crng_fast_load(const char *cp, size_t len)
++static size_t crng_fast_load(const char *cp, size_t len)
+ {
+ 	unsigned long flags;
+ 	char *p;
++	size_t ret = 0;
+ 
+ 	if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+ 		return 0;
+@@ -928,7 +930,7 @@ static int crng_fast_load(const char *cp, size_t len)
+ 	p = (unsigned char *) &primary_crng.state[4];
+ 	while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
+ 		p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
+-		cp++; crng_init_cnt++; len--;
++		cp++; crng_init_cnt++; len--; ret++;
+ 	}
+ 	spin_unlock_irqrestore(&primary_crng.lock, flags);
+ 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+@@ -936,7 +938,7 @@ static int crng_fast_load(const char *cp, size_t len)
+ 		crng_init = 1;
+ 		pr_notice("fast init done\n");
+ 	}
+-	return 1;
++	return ret;
+ }
+ 
+ /*
+@@ -1287,7 +1289,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
+ 	if (unlikely(crng_init == 0)) {
+ 		if ((fast_pool->count >= 64) &&
+ 		    crng_fast_load((char *) fast_pool->pool,
+-				   sizeof(fast_pool->pool))) {
++				   sizeof(fast_pool->pool)) > 0) {
+ 			fast_pool->count = 0;
+ 			fast_pool->last = now;
+ 		}
+@@ -2295,8 +2297,11 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
+ 	struct entropy_store *poolp = &input_pool;
+ 
+ 	if (unlikely(crng_init == 0)) {
+-		crng_fast_load(buffer, count);
+-		return;
++		size_t ret = crng_fast_load(buffer, count);
++		count -= ret;
++		buffer += ret;
++		if (!count || crng_init == 0)
++			return;
+ 	}
+ 
+ 	/* Suspend writing if we're above the trickle threshold.
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index ddaeceb7e1091..df37e7b6a10a5 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -474,13 +474,21 @@ static void tpm_del_char_device(struct tpm_chip *chip)
+ 
+ 	/* Make the driver uncallable. */
+ 	down_write(&chip->ops_sem);
+-	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+-		if (!tpm_chip_start(chip)) {
+-			tpm2_shutdown(chip, TPM2_SU_CLEAR);
+-			tpm_chip_stop(chip);
++
++	/*
++	 * Check if chip->ops is still valid: In case that the controller
++	 * drivers shutdown handler unregisters the controller in its
++	 * shutdown handler we are called twice and chip->ops to NULL.
++	 */
++	if (chip->ops) {
++		if (chip->flags & TPM_CHIP_FLAG_TPM2) {
++			if (!tpm_chip_start(chip)) {
++				tpm2_shutdown(chip, TPM2_SU_CLEAR);
++				tpm_chip_stop(chip);
++			}
+ 		}
++		chip->ops = NULL;
+ 	}
+-	chip->ops = NULL;
+ 	up_write(&chip->ops_sem);
+ }
+ 
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index b2659a4c40168..dc56b976d8162 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -950,9 +950,11 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
+ 	priv->phy_ops = phy_ops;
+ 
++	dev_set_drvdata(&chip->dev, priv);
++
+ 	rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
+ 	if (rc < 0)
+-		goto out_err;
++		return rc;
+ 
+ 	priv->manufacturer_id = vendor;
+ 
+@@ -962,8 +964,6 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		priv->timeout_max = TIS_TIMEOUT_MAX_ATML;
+ 	}
+ 
+-	dev_set_drvdata(&chip->dev, priv);
+-
+ 	if (is_bsw()) {
+ 		priv->ilb_base_addr = ioremap(INTEL_LEGACY_BLK_BASE_ADDR,
+ 					ILB_REMAP_SIZE);
+@@ -994,7 +994,15 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT |
+ 		   TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
+ 	intmask &= ~TPM_GLOBAL_INT_ENABLE;
++
++	rc = request_locality(chip, 0);
++	if (rc < 0) {
++		rc = -ENODEV;
++		goto out_err;
++	}
++
+ 	tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
++	release_locality(chip, 0);
+ 
+ 	rc = tpm_chip_start(chip);
+ 	if (rc)
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index a254512965eb8..3667b4d731e71 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -932,8 +932,7 @@ static int bcm2835_clock_is_on(struct clk_hw *hw)
+ 
+ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
+ 				    unsigned long rate,
+-				    unsigned long parent_rate,
+-				    bool round_up)
++				    unsigned long parent_rate)
+ {
+ 	struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
+ 	const struct bcm2835_clock_data *data = clock->data;
+@@ -945,10 +944,6 @@ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
+ 
+ 	rem = do_div(temp, rate);
+ 	div = temp;
+-
+-	/* Round up and mask off the unused bits */
+-	if (round_up && ((div & unused_frac_mask) != 0 || rem != 0))
+-		div += unused_frac_mask + 1;
+ 	div &= ~unused_frac_mask;
+ 
+ 	/* different clamping limits apply for a mash clock */
+@@ -1079,7 +1074,7 @@ static int bcm2835_clock_set_rate(struct clk_hw *hw,
+ 	struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
+ 	struct bcm2835_cprman *cprman = clock->cprman;
+ 	const struct bcm2835_clock_data *data = clock->data;
+-	u32 div = bcm2835_clock_choose_div(hw, rate, parent_rate, false);
++	u32 div = bcm2835_clock_choose_div(hw, rate, parent_rate);
+ 	u32 ctl;
+ 
+ 	spin_lock(&cprman->regs_lock);
+@@ -1130,7 +1125,7 @@ static unsigned long bcm2835_clock_choose_div_and_prate(struct clk_hw *hw,
+ 
+ 	if (!(BIT(parent_idx) & data->set_rate_parent)) {
+ 		*prate = clk_hw_get_rate(parent);
+-		*div = bcm2835_clock_choose_div(hw, rate, *prate, true);
++		*div = bcm2835_clock_choose_div(hw, rate, *prate);
+ 
+ 		*avgrate = bcm2835_clock_rate_from_divisor(clock, *prate, *div);
+ 
+@@ -1216,7 +1211,7 @@ static int bcm2835_clock_determine_rate(struct clk_hw *hw,
+ 		rate = bcm2835_clock_choose_div_and_prate(hw, i, req->rate,
+ 							  &div, &prate,
+ 							  &avgrate);
+-		if (rate > best_rate && rate <= req->rate) {
++		if (abs(req->rate - rate) < abs(req->rate - best_rate)) {
+ 			best_parent = parent;
+ 			best_prate = prate;
+ 			best_rate = rate;
+diff --git a/drivers/clk/clk-bm1880.c b/drivers/clk/clk-bm1880.c
+index e6d6599d310a1..fad78a22218e8 100644
+--- a/drivers/clk/clk-bm1880.c
++++ b/drivers/clk/clk-bm1880.c
+@@ -522,14 +522,6 @@ static struct clk_hw *bm1880_clk_register_pll(struct bm1880_pll_hw_clock *pll_cl
+ 	return hw;
+ }
+ 
+-static void bm1880_clk_unregister_pll(struct clk_hw *hw)
+-{
+-	struct bm1880_pll_hw_clock *pll_hw = to_bm1880_pll_clk(hw);
+-
+-	clk_hw_unregister(hw);
+-	kfree(pll_hw);
+-}
+-
+ static int bm1880_clk_register_plls(struct bm1880_pll_hw_clock *clks,
+ 				    int num_clks,
+ 				    struct bm1880_clock_data *data)
+@@ -555,7 +547,7 @@ static int bm1880_clk_register_plls(struct bm1880_pll_hw_clock *clks,
+ 
+ err_clk:
+ 	while (i--)
+-		bm1880_clk_unregister_pll(data->hw_data.hws[clks[i].pll.id]);
++		clk_hw_unregister(data->hw_data.hws[clks[i].pll.id]);
+ 
+ 	return PTR_ERR(hw);
+ }
+@@ -695,14 +687,6 @@ static struct clk_hw *bm1880_clk_register_div(struct bm1880_div_hw_clock *div_cl
+ 	return hw;
+ }
+ 
+-static void bm1880_clk_unregister_div(struct clk_hw *hw)
+-{
+-	struct bm1880_div_hw_clock *div_hw = to_bm1880_div_clk(hw);
+-
+-	clk_hw_unregister(hw);
+-	kfree(div_hw);
+-}
+-
+ static int bm1880_clk_register_divs(struct bm1880_div_hw_clock *clks,
+ 				    int num_clks,
+ 				    struct bm1880_clock_data *data)
+@@ -729,7 +713,7 @@ static int bm1880_clk_register_divs(struct bm1880_div_hw_clock *clks,
+ 
+ err_clk:
+ 	while (i--)
+-		bm1880_clk_unregister_div(data->hw_data.hws[clks[i].div.id]);
++		clk_hw_unregister(data->hw_data.hws[clks[i].div.id]);
+ 
+ 	return PTR_ERR(hw);
+ }
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index 57ae183982d8c..f7b41366666e5 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -1740,7 +1740,7 @@ static int si5341_probe(struct i2c_client *client,
+ 			clk_prepare(data->clk[i].hw.clk);
+ 	}
+ 
+-	err = of_clk_add_hw_provider(client->dev.of_node, of_clk_si5341_get,
++	err = devm_of_clk_add_hw_provider(&client->dev, of_clk_si5341_get,
+ 			data);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to add clk provider\n");
+diff --git a/drivers/clk/clk-stm32f4.c b/drivers/clk/clk-stm32f4.c
+index af46176ad0539..473dfe632cc57 100644
+--- a/drivers/clk/clk-stm32f4.c
++++ b/drivers/clk/clk-stm32f4.c
+@@ -129,7 +129,6 @@ static const struct stm32f4_gate_data stm32f429_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 20,	"spi5",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ };
+ 
+ static const struct stm32f4_gate_data stm32f469_gates[] __initconst = {
+@@ -211,7 +210,6 @@ static const struct stm32f4_gate_data stm32f469_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 20,	"spi5",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ };
+ 
+ static const struct stm32f4_gate_data stm32f746_gates[] __initconst = {
+@@ -286,7 +284,6 @@ static const struct stm32f4_gate_data stm32f746_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 23,	"sai2",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ };
+ 
+ static const struct stm32f4_gate_data stm32f769_gates[] __initconst = {
+@@ -364,7 +361,6 @@ static const struct stm32f4_gate_data stm32f769_gates[] __initconst = {
+ 	{ STM32F4_RCC_APB2ENR, 21,	"spi6",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 22,	"sai1",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 23,	"sai2",		"apb2_div" },
+-	{ STM32F4_RCC_APB2ENR, 26,	"ltdc",		"apb2_div" },
+ 	{ STM32F4_RCC_APB2ENR, 30,	"mdio",		"apb2_div" },
+ };
+ 
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 566ee2c78709e..21b17d6ced6d5 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3343,6 +3343,24 @@ static int __init clk_debug_init(void)
+ {
+ 	struct clk_core *core;
+ 
++#ifdef CLOCK_ALLOW_WRITE_DEBUGFS
++	pr_warn("\n");
++	pr_warn("********************************************************************\n");
++	pr_warn("**     NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE           **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("**  WRITEABLE clk DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("** This means that this kernel is built to expose clk operations  **\n");
++	pr_warn("** such as parent or rate setting, enabling, disabling, etc.      **\n");
++	pr_warn("** to userspace, which may compromise security on your system.    **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("** If you see this message and you are not debugging the          **\n");
++	pr_warn("** kernel, report this immediately to your vendor!                **\n");
++	pr_warn("**                                                                **\n");
++	pr_warn("**     NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE           **\n");
++	pr_warn("********************************************************************\n");
++#endif
++
+ 	rootdir = debugfs_create_dir("clk", NULL);
+ 
+ 	debugfs_create_file("clk_summary", 0444, rootdir, &all_lists,
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index c55577604e16a..021355a247081 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -277,9 +277,9 @@ static const char * const imx8mn_pdm_sels[] = {"osc_24m", "sys_pll2_100m", "audi
+ 
+ static const char * const imx8mn_dram_core_sels[] = {"dram_pll_out", "dram_alt_root", };
+ 
+-static const char * const imx8mn_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "osc_27m",
+-						 "sys_pll1_200m", "audio_pll2_out", "vpu_pll",
+-						 "sys_pll1_80m", };
++static const char * const imx8mn_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "dummy",
++						 "sys_pll1_200m", "audio_pll2_out", "sys_pll2_500m",
++						 "dummy", "sys_pll1_80m", };
+ static const char * const imx8mn_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll1_400m",
+ 						 "sys_pll2_166m", "sys_pll3_out", "audio_pll1_out",
+ 						 "video_pll1_out", "osc_32k", };
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index d6eed760327d0..608e0e8ca49a8 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -713,6 +713,35 @@ static struct clk_regmap gxbb_mpll_prediv = {
+ };
+ 
+ static struct clk_regmap gxbb_mpll0_div = {
++	.data = &(struct meson_clk_mpll_data){
++		.sdm = {
++			.reg_off = HHI_MPLL_CNTL7,
++			.shift   = 0,
++			.width   = 14,
++		},
++		.sdm_en = {
++			.reg_off = HHI_MPLL_CNTL,
++			.shift   = 25,
++			.width	 = 1,
++		},
++		.n2 = {
++			.reg_off = HHI_MPLL_CNTL7,
++			.shift   = 16,
++			.width   = 9,
++		},
++		.lock = &meson_clk_lock,
++	},
++	.hw.init = &(struct clk_init_data){
++		.name = "mpll0_div",
++		.ops = &meson_clk_mpll_ops,
++		.parent_hws = (const struct clk_hw *[]) {
++			&gxbb_mpll_prediv.hw
++		},
++		.num_parents = 1,
++	},
++};
++
++static struct clk_regmap gxl_mpll0_div = {
+ 	.data = &(struct meson_clk_mpll_data){
+ 		.sdm = {
+ 			.reg_off = HHI_MPLL_CNTL7,
+@@ -749,7 +778,16 @@ static struct clk_regmap gxbb_mpll0 = {
+ 	.hw.init = &(struct clk_init_data){
+ 		.name = "mpll0",
+ 		.ops = &clk_regmap_gate_ops,
+-		.parent_hws = (const struct clk_hw *[]) { &gxbb_mpll0_div.hw },
++		.parent_data = &(const struct clk_parent_data) {
++			/*
++			 * Note:
++			 * GXL and GXBB have different SDM_EN registers. We
++			 * fallback to the global naming string mechanism so
++			 * mpll0_div picks up the appropriate one.
++			 */
++			.name = "mpll0_div",
++			.index = -1,
++		},
+ 		.num_parents = 1,
+ 		.flags = CLK_SET_RATE_PARENT,
+ 	},
+@@ -3044,7 +3082,7 @@ static struct clk_hw_onecell_data gxl_hw_onecell_data = {
+ 		[CLKID_VAPB_1]		    = &gxbb_vapb_1.hw,
+ 		[CLKID_VAPB_SEL]	    = &gxbb_vapb_sel.hw,
+ 		[CLKID_VAPB]		    = &gxbb_vapb.hw,
+-		[CLKID_MPLL0_DIV]	    = &gxbb_mpll0_div.hw,
++		[CLKID_MPLL0_DIV]	    = &gxl_mpll0_div.hw,
+ 		[CLKID_MPLL1_DIV]	    = &gxbb_mpll1_div.hw,
+ 		[CLKID_MPLL2_DIV]	    = &gxbb_mpll2_div.hw,
+ 		[CLKID_MPLL_PREDIV]	    = &gxbb_mpll_prediv.hw,
+@@ -3439,7 +3477,7 @@ static struct clk_regmap *const gxl_clk_regmaps[] = {
+ 	&gxbb_mpll0,
+ 	&gxbb_mpll1,
+ 	&gxbb_mpll2,
+-	&gxbb_mpll0_div,
++	&gxl_mpll0_div,
+ 	&gxbb_mpll1_div,
+ 	&gxbb_mpll2_div,
+ 	&gxbb_cts_amclk_div,
+diff --git a/drivers/clk/qcom/gcc-sc7280.c b/drivers/clk/qcom/gcc-sc7280.c
+index 8fb6bd69f240e..423627d49719c 100644
+--- a/drivers/clk/qcom/gcc-sc7280.c
++++ b/drivers/clk/qcom/gcc-sc7280.c
+@@ -2917,7 +2917,7 @@ static struct clk_branch gcc_cfg_noc_lpass_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_cfg_noc_lpass_clk",
+-			.ops = &clk_branch2_ops,
++			.ops = &clk_branch2_aon_ops,
+ 		},
+ 	},
+ };
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 4021f6cabda4b..aafd1879ff562 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -850,10 +850,16 @@ static void rzg2l_cpg_detach_dev(struct generic_pm_domain *unused, struct device
+ 		pm_clk_destroy(dev);
+ }
+ 
++static void rzg2l_cpg_genpd_remove(void *data)
++{
++	pm_genpd_remove(data);
++}
++
+ static int __init rzg2l_cpg_add_clk_domain(struct device *dev)
+ {
+ 	struct device_node *np = dev->of_node;
+ 	struct generic_pm_domain *genpd;
++	int ret;
+ 
+ 	genpd = devm_kzalloc(dev, sizeof(*genpd), GFP_KERNEL);
+ 	if (!genpd)
+@@ -864,10 +870,15 @@ static int __init rzg2l_cpg_add_clk_domain(struct device *dev)
+ 		       GENPD_FLAG_ACTIVE_WAKEUP;
+ 	genpd->attach_dev = rzg2l_cpg_attach_dev;
+ 	genpd->detach_dev = rzg2l_cpg_detach_dev;
+-	pm_genpd_init(genpd, &pm_domain_always_on_gov, false);
++	ret = pm_genpd_init(genpd, &pm_domain_always_on_gov, false);
++	if (ret)
++		return ret;
+ 
+-	of_genpd_add_provider_simple(np, genpd);
+-	return 0;
++	ret = devm_add_action_or_reset(dev, rzg2l_cpg_genpd_remove, genpd);
++	if (ret)
++		return ret;
++
++	return of_genpd_add_provider_simple(np, genpd);
+ }
+ 
+ static int __init rzg2l_cpg_probe(struct platform_device *pdev)
+diff --git a/drivers/clk/samsung/clk-exynos850.c b/drivers/clk/samsung/clk-exynos850.c
+index 2294989e244c5..79cce8ba88831 100644
+--- a/drivers/clk/samsung/clk-exynos850.c
++++ b/drivers/clk/samsung/clk-exynos850.c
+@@ -60,6 +60,43 @@ static void __init exynos850_init_clocks(struct device_node *np,
+ 	iounmap(reg_base);
+ }
+ 
++/**
++ * exynos850_register_cmu - Register specified Exynos850 CMU domain
++ * @dev:	Device object; may be NULL if this function is not being
++ *		called from platform driver probe function
++ * @np:		CMU device tree node
++ * @cmu:	CMU data
++ *
++ * Register specified CMU domain, which includes next steps:
++ *
++ * 1. Enable parent clock of @cmu CMU
++ * 2. Set initial registers configuration for @cmu CMU clocks
++ * 3. Register @cmu CMU clocks using Samsung clock framework API
++ */
++static void __init exynos850_register_cmu(struct device *dev,
++		struct device_node *np, const struct samsung_cmu_info *cmu)
++{
++	/* Keep CMU parent clock running (needed for CMU registers access) */
++	if (cmu->clk_name) {
++		struct clk *parent_clk;
++
++		if (dev)
++			parent_clk = clk_get(dev, cmu->clk_name);
++		else
++			parent_clk = of_clk_get_by_name(np, cmu->clk_name);
++
++		if (IS_ERR(parent_clk)) {
++			pr_err("%s: could not find bus clock %s; err = %ld\n",
++			       __func__, cmu->clk_name, PTR_ERR(parent_clk));
++		} else {
++			clk_prepare_enable(parent_clk);
++		}
++	}
++
++	exynos850_init_clocks(np, cmu->clk_regs, cmu->nr_clk_regs);
++	samsung_cmu_register_one(np, cmu);
++}
++
+ /* ---- CMU_TOP ------------------------------------------------------------- */
+ 
+ /* Register Offset definitions for CMU_TOP (0x120e0000) */
+@@ -347,10 +384,10 @@ static const struct samsung_cmu_info top_cmu_info __initconst = {
+ 
+ static void __init exynos850_cmu_top_init(struct device_node *np)
+ {
+-	exynos850_init_clocks(np, top_clk_regs, ARRAY_SIZE(top_clk_regs));
+-	samsung_cmu_register_one(np, &top_cmu_info);
++	exynos850_register_cmu(NULL, np, &top_cmu_info);
+ }
+ 
++/* Register CMU_TOP early, as it's a dependency for other early domains */
+ CLK_OF_DECLARE(exynos850_cmu_top, "samsung,exynos850-cmu-top",
+ 	       exynos850_cmu_top_init);
+ 
+@@ -615,6 +652,15 @@ static const struct samsung_cmu_info peri_cmu_info __initconst = {
+ 	.clk_name		= "dout_peri_bus",
+ };
+ 
++static void __init exynos850_cmu_peri_init(struct device_node *np)
++{
++	exynos850_register_cmu(NULL, np, &peri_cmu_info);
++}
++
++/* Register CMU_PERI early, as it's needed for MCT timer */
++CLK_OF_DECLARE(exynos850_cmu_peri, "samsung,exynos850-cmu-peri",
++	       exynos850_cmu_peri_init);
++
+ /* ---- CMU_CORE ------------------------------------------------------------ */
+ 
+ /* Register Offset definitions for CMU_CORE (0x12000000) */
+@@ -779,24 +825,9 @@ static int __init exynos850_cmu_probe(struct platform_device *pdev)
+ {
+ 	const struct samsung_cmu_info *info;
+ 	struct device *dev = &pdev->dev;
+-	struct device_node *np = dev->of_node;
+ 
+ 	info = of_device_get_match_data(dev);
+-	exynos850_init_clocks(np, info->clk_regs, info->nr_clk_regs);
+-	samsung_cmu_register_one(np, info);
+-
+-	/* Keep bus clock running, so it's possible to access CMU registers */
+-	if (info->clk_name) {
+-		struct clk *bus_clk;
+-
+-		bus_clk = clk_get(dev, info->clk_name);
+-		if (IS_ERR(bus_clk)) {
+-			pr_err("%s: could not find bus clock %s; err = %ld\n",
+-			       __func__, info->clk_name, PTR_ERR(bus_clk));
+-		} else {
+-			clk_prepare_enable(bus_clk);
+-		}
+-	}
++	exynos850_register_cmu(dev, dev->of_node, info);
+ 
+ 	return 0;
+ }
+@@ -806,9 +837,6 @@ static const struct of_device_id exynos850_cmu_of_match[] = {
+ 	{
+ 		.compatible = "samsung,exynos850-cmu-hsi",
+ 		.data = &hsi_cmu_info,
+-	}, {
+-		.compatible = "samsung,exynos850-cmu-peri",
+-		.data = &peri_cmu_info,
+ 	}, {
+ 		.compatible = "samsung,exynos850-cmu-core",
+ 		.data = &core_cmu_info,
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index 1cbd60aaed697..a97027db0446d 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -14,6 +14,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/isa.h>
+ #include <linux/kernel.h>
++#include <linux/list.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+ #include <linux/types.h>
+@@ -44,7 +45,6 @@ MODULE_PARM_DESC(irq, "ACCES 104-QUAD-8 interrupt line numbers");
+  * @ab_enable:		array of A and B inputs enable configurations
+  * @preset_enable:	array of set_to_preset_on_index attribute configurations
+  * @irq_trigger:	array of current IRQ trigger function configurations
+- * @next_irq_trigger:	array of next IRQ trigger function configurations
+  * @synchronous_mode:	array of index function synchronous mode configurations
+  * @index_polarity:	array of index function polarity configurations
+  * @cable_fault_enable:	differential encoder cable status enable configurations
+@@ -61,7 +61,6 @@ struct quad8 {
+ 	unsigned int ab_enable[QUAD8_NUM_COUNTERS];
+ 	unsigned int preset_enable[QUAD8_NUM_COUNTERS];
+ 	unsigned int irq_trigger[QUAD8_NUM_COUNTERS];
+-	unsigned int next_irq_trigger[QUAD8_NUM_COUNTERS];
+ 	unsigned int synchronous_mode[QUAD8_NUM_COUNTERS];
+ 	unsigned int index_polarity[QUAD8_NUM_COUNTERS];
+ 	unsigned int cable_fault_enable;
+@@ -390,7 +389,6 @@ static int quad8_action_read(struct counter_device *counter,
+ }
+ 
+ enum {
+-	QUAD8_EVENT_NONE = -1,
+ 	QUAD8_EVENT_CARRY = 0,
+ 	QUAD8_EVENT_COMPARE = 1,
+ 	QUAD8_EVENT_CARRY_BORROW = 2,
+@@ -402,34 +400,49 @@ static int quad8_events_configure(struct counter_device *counter)
+ 	struct quad8 *const priv = counter->priv;
+ 	unsigned long irq_enabled = 0;
+ 	unsigned long irqflags;
+-	size_t channel;
++	struct counter_event_node *event_node;
++	unsigned int next_irq_trigger;
+ 	unsigned long ior_cfg;
+ 	unsigned long base_offset;
+ 
+ 	spin_lock_irqsave(&priv->lock, irqflags);
+ 
+-	/* Enable interrupts for the requested channels, disable for the rest */
+-	for (channel = 0; channel < QUAD8_NUM_COUNTERS; channel++) {
+-		if (priv->next_irq_trigger[channel] == QUAD8_EVENT_NONE)
+-			continue;
++	list_for_each_entry(event_node, &counter->events_list, l) {
++		switch (event_node->event) {
++		case COUNTER_EVENT_OVERFLOW:
++			next_irq_trigger = QUAD8_EVENT_CARRY;
++			break;
++		case COUNTER_EVENT_THRESHOLD:
++			next_irq_trigger = QUAD8_EVENT_COMPARE;
++			break;
++		case COUNTER_EVENT_OVERFLOW_UNDERFLOW:
++			next_irq_trigger = QUAD8_EVENT_CARRY_BORROW;
++			break;
++		case COUNTER_EVENT_INDEX:
++			next_irq_trigger = QUAD8_EVENT_INDEX;
++			break;
++		default:
++			/* should never reach this path */
++			spin_unlock_irqrestore(&priv->lock, irqflags);
++			return -EINVAL;
++		}
+ 
+-		if (priv->irq_trigger[channel] != priv->next_irq_trigger[channel]) {
+-			/* Save new IRQ function configuration */
+-			priv->irq_trigger[channel] = priv->next_irq_trigger[channel];
++		/* Skip configuration if it is the same as previously set */
++		if (priv->irq_trigger[event_node->channel] == next_irq_trigger)
++			continue;
+ 
+-			/* Load configuration to I/O Control Register */
+-			ior_cfg = priv->ab_enable[channel] |
+-				  priv->preset_enable[channel] << 1 |
+-				  priv->irq_trigger[channel] << 3;
+-			base_offset = priv->base + 2 * channel + 1;
+-			outb(QUAD8_CTR_IOR | ior_cfg, base_offset);
+-		}
++		/* Save new IRQ function configuration */
++		priv->irq_trigger[event_node->channel] = next_irq_trigger;
+ 
+-		/* Reset next IRQ trigger function configuration */
+-		priv->next_irq_trigger[channel] = QUAD8_EVENT_NONE;
++		/* Load configuration to I/O Control Register */
++		ior_cfg = priv->ab_enable[event_node->channel] |
++			  priv->preset_enable[event_node->channel] << 1 |
++			  priv->irq_trigger[event_node->channel] << 3;
++		base_offset = priv->base + 2 * event_node->channel + 1;
++		outb(QUAD8_CTR_IOR | ior_cfg, base_offset);
+ 
+ 		/* Enable IRQ line */
+-		irq_enabled |= BIT(channel);
++		irq_enabled |= BIT(event_node->channel);
+ 	}
+ 
+ 	outb(irq_enabled, priv->base + QUAD8_REG_INDEX_INTERRUPT);
+@@ -442,35 +455,20 @@ static int quad8_events_configure(struct counter_device *counter)
+ static int quad8_watch_validate(struct counter_device *counter,
+ 				const struct counter_watch *watch)
+ {
+-	struct quad8 *const priv = counter->priv;
++	struct counter_event_node *event_node;
+ 
+ 	if (watch->channel > QUAD8_NUM_COUNTERS - 1)
+ 		return -EINVAL;
+ 
+ 	switch (watch->event) {
+ 	case COUNTER_EVENT_OVERFLOW:
+-		if (priv->next_irq_trigger[watch->channel] == QUAD8_EVENT_NONE)
+-			priv->next_irq_trigger[watch->channel] = QUAD8_EVENT_CARRY;
+-		else if (priv->next_irq_trigger[watch->channel] != QUAD8_EVENT_CARRY)
+-			return -EINVAL;
+-		return 0;
+ 	case COUNTER_EVENT_THRESHOLD:
+-		if (priv->next_irq_trigger[watch->channel] == QUAD8_EVENT_NONE)
+-			priv->next_irq_trigger[watch->channel] = QUAD8_EVENT_COMPARE;
+-		else if (priv->next_irq_trigger[watch->channel] != QUAD8_EVENT_COMPARE)
+-			return -EINVAL;
+-		return 0;
+ 	case COUNTER_EVENT_OVERFLOW_UNDERFLOW:
+-		if (priv->next_irq_trigger[watch->channel] == QUAD8_EVENT_NONE)
+-			priv->next_irq_trigger[watch->channel] = QUAD8_EVENT_CARRY_BORROW;
+-		else if (priv->next_irq_trigger[watch->channel] != QUAD8_EVENT_CARRY_BORROW)
+-			return -EINVAL;
+-		return 0;
+ 	case COUNTER_EVENT_INDEX:
+-		if (priv->next_irq_trigger[watch->channel] == QUAD8_EVENT_NONE)
+-			priv->next_irq_trigger[watch->channel] = QUAD8_EVENT_INDEX;
+-		else if (priv->next_irq_trigger[watch->channel] != QUAD8_EVENT_INDEX)
+-			return -EINVAL;
++		list_for_each_entry(event_node, &counter->next_events_list, l)
++			if (watch->channel == event_node->channel &&
++				watch->event != event_node->event)
++				return -EINVAL;
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+@@ -1183,8 +1181,6 @@ static int quad8_probe(struct device *dev, unsigned int id)
+ 		outb(QUAD8_CTR_IOR, base_offset + 1);
+ 		/* Disable index function; negative index polarity */
+ 		outb(QUAD8_CTR_IDR, base_offset + 1);
+-		/* Initialize next IRQ trigger function configuration */
+-		priv->next_irq_trigger[i] = QUAD8_EVENT_NONE;
+ 	}
+ 	/* Disable Differential Encoder Cable Status for all channels */
+ 	outb(0xFF, base[id] + QUAD8_DIFF_ENCODER_CABLE_STATUS);
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 096c3848fa415..76ffdaf8c8b5e 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1403,7 +1403,7 @@ static int cpufreq_online(unsigned int cpu)
+ 
+ 		ret = freq_qos_add_request(&policy->constraints,
+ 					   policy->min_freq_req, FREQ_QOS_MIN,
+-					   policy->min);
++					   FREQ_QOS_MIN_DEFAULT_VALUE);
+ 		if (ret < 0) {
+ 			/*
+ 			 * So we don't call freq_qos_remove_request() for an
+@@ -1423,7 +1423,7 @@ static int cpufreq_online(unsigned int cpu)
+ 
+ 		ret = freq_qos_add_request(&policy->constraints,
+ 					   policy->max_freq_req, FREQ_QOS_MAX,
+-					   policy->max);
++					   FREQ_QOS_MAX_DEFAULT_VALUE);
+ 		if (ret < 0) {
+ 			policy->max_freq_req = NULL;
+ 			goto out_destroy_policy;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index dec2a5649ac1a..676edc580fd4f 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1124,19 +1124,22 @@ static void intel_pstate_update_policies(void)
+ 		cpufreq_update_policy(cpu);
+ }
+ 
++static void __intel_pstate_update_max_freq(struct cpudata *cpudata,
++					   struct cpufreq_policy *policy)
++{
++	policy->cpuinfo.max_freq = global.turbo_disabled_mf ?
++			cpudata->pstate.max_freq : cpudata->pstate.turbo_freq;
++	refresh_frequency_limits(policy);
++}
++
+ static void intel_pstate_update_max_freq(unsigned int cpu)
+ {
+ 	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpu);
+-	struct cpudata *cpudata;
+ 
+ 	if (!policy)
+ 		return;
+ 
+-	cpudata = all_cpu_data[cpu];
+-	policy->cpuinfo.max_freq = global.turbo_disabled_mf ?
+-			cpudata->pstate.max_freq : cpudata->pstate.turbo_freq;
+-
+-	refresh_frequency_limits(policy);
++	__intel_pstate_update_max_freq(all_cpu_data[cpu], policy);
+ 
+ 	cpufreq_cpu_release(policy);
+ }
+@@ -1584,8 +1587,15 @@ static void intel_pstate_notify_work(struct work_struct *work)
+ {
+ 	struct cpudata *cpudata =
+ 		container_of(to_delayed_work(work), struct cpudata, hwp_notify_work);
++	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpudata->cpu);
++
++	if (policy) {
++		intel_pstate_get_hwp_cap(cpudata);
++		__intel_pstate_update_max_freq(cpudata, policy);
++
++		cpufreq_cpu_release(policy);
++	}
+ 
+-	cpufreq_update_policy(cpudata->cpu);
+ 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
+ }
+ 
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index a2be0df7e1747..35d93361fda1a 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -304,7 +304,8 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
+ 	if (capacity > max_capacity)
+ 		capacity = max_capacity;
+ 
+-	arch_set_thermal_pressure(policy->cpus, max_capacity - capacity);
++	arch_set_thermal_pressure(policy->related_cpus,
++				  max_capacity - capacity);
+ 
+ 	/*
+ 	 * In the unlikely case policy is unregistered do not enable
+@@ -342,9 +343,9 @@ static irqreturn_t qcom_lmh_dcvs_handle_irq(int irq, void *data)
+ 
+ 	/* Disable interrupt and enable polling */
+ 	disable_irq_nosync(c_data->throttle_irq);
+-	qcom_lmh_dcvs_notify(c_data);
++	schedule_delayed_work(&c_data->throttle_work, 0);
+ 
+-	return 0;
++	return IRQ_HANDLED;
+ }
+ 
+ static const struct qcom_cpufreq_soc_data qcom_soc_data = {
+diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
+index 9391ccc03382d..fe05584031914 100644
+--- a/drivers/crypto/atmel-aes.c
++++ b/drivers/crypto/atmel-aes.c
+@@ -960,6 +960,7 @@ static int atmel_aes_handle_queue(struct atmel_aes_dev *dd,
+ 	ctx = crypto_tfm_ctx(areq->tfm);
+ 
+ 	dd->areq = areq;
++	dd->ctx = ctx;
+ 	start_async = (areq != new_areq);
+ 	dd->is_async = start_async;
+ 
+@@ -1274,7 +1275,6 @@ static int atmel_aes_init_tfm(struct crypto_skcipher *tfm)
+ 
+ 	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx));
+ 	ctx->base.dd = dd;
+-	ctx->base.dd->ctx = &ctx->base;
+ 	ctx->base.start = atmel_aes_start;
+ 
+ 	return 0;
+@@ -1291,7 +1291,6 @@ static int atmel_aes_ctr_init_tfm(struct crypto_skcipher *tfm)
+ 
+ 	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx));
+ 	ctx->base.dd = dd;
+-	ctx->base.dd->ctx = &ctx->base;
+ 	ctx->base.start = atmel_aes_ctr_start;
+ 
+ 	return 0;
+@@ -1783,7 +1782,6 @@ static int atmel_aes_gcm_init(struct crypto_aead *tfm)
+ 
+ 	crypto_aead_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx));
+ 	ctx->base.dd = dd;
+-	ctx->base.dd->ctx = &ctx->base;
+ 	ctx->base.start = atmel_aes_gcm_start;
+ 
+ 	return 0;
+@@ -1927,7 +1925,6 @@ static int atmel_aes_xts_init_tfm(struct crypto_skcipher *tfm)
+ 	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx) +
+ 				    crypto_skcipher_reqsize(ctx->fallback_tfm));
+ 	ctx->base.dd = dd;
+-	ctx->base.dd->ctx = &ctx->base;
+ 	ctx->base.start = atmel_aes_xts_start;
+ 
+ 	return 0;
+@@ -2154,7 +2151,6 @@ static int atmel_aes_authenc_init_tfm(struct crypto_aead *tfm,
+ 	crypto_aead_set_reqsize(tfm, (sizeof(struct atmel_aes_authenc_reqctx) +
+ 				      auth_reqsize));
+ 	ctx->base.dd = dd;
+-	ctx->base.dd->ctx = &ctx->base;
+ 	ctx->base.start = atmel_aes_authenc_start;
+ 
+ 	return 0;
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 8697ae53b0633..d3d8bb0a69900 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1533,6 +1533,9 @@ static int aead_do_one_req(struct crypto_engine *engine, void *areq)
+ 
+ 	ret = caam_jr_enqueue(ctx->jrdev, desc, aead_crypt_done, req);
+ 
++	if (ret == -ENOSPC && engine->retry_support)
++		return ret;
++
+ 	if (ret != -EINPROGRESS) {
+ 		aead_unmap(ctx->jrdev, rctx->edesc, req);
+ 		kfree(rctx->edesc);
+@@ -1762,6 +1765,9 @@ static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
+ 
+ 	ret = caam_jr_enqueue(ctx->jrdev, desc, skcipher_crypt_done, req);
+ 
++	if (ret == -ENOSPC && engine->retry_support)
++		return ret;
++
+ 	if (ret != -EINPROGRESS) {
+ 		skcipher_unmap(ctx->jrdev, rctx->edesc, req);
+ 		kfree(rctx->edesc);
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index 8b8ed77d8715d..6753f0e6e55d1 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -5470,7 +5470,7 @@ int dpaa2_caam_enqueue(struct device *dev, struct caam_request *req)
+ 	dpaa2_fd_set_len(&fd, dpaa2_fl_get_len(&req->fd_flt[1]));
+ 	dpaa2_fd_set_flc(&fd, req->flc_dma);
+ 
+-	ppriv = this_cpu_ptr(priv->ppriv);
++	ppriv = raw_cpu_ptr(priv->ppriv);
+ 	for (i = 0; i < (priv->dpseci_attr.num_tx_queues << 1); i++) {
+ 		err = dpaa2_io_service_enqueue_fq(ppriv->dpio, ppriv->req_fqid,
+ 						  &fd);
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index e8a6d8bc43b5d..36ef738e4a181 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -765,6 +765,9 @@ static int ahash_do_one_req(struct crypto_engine *engine, void *areq)
+ 
+ 	ret = caam_jr_enqueue(jrdev, desc, state->ahash_op_done, req);
+ 
++	if (ret == -ENOSPC && engine->retry_support)
++		return ret;
++
+ 	if (ret != -EINPROGRESS) {
+ 		ahash_unmap(jrdev, state->edesc, req, 0);
+ 		kfree(state->edesc);
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index bf6275ffc4aad..8867275767101 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -380,6 +380,9 @@ static int akcipher_do_one_req(struct crypto_engine *engine, void *areq)
+ 
+ 	ret = caam_jr_enqueue(jrdev, desc, req_ctx->akcipher_op_done, req);
+ 
++	if (ret == -ENOSPC && engine->retry_support)
++		return ret;
++
+ 	if (ret != -EINPROGRESS) {
+ 		rsa_pub_unmap(jrdev, req_ctx->edesc, req);
+ 		rsa_io_unmap(jrdev, req_ctx->edesc, req);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index e09925d86bf36..581a1b13d5c3d 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -241,7 +241,7 @@ static int __sev_platform_init_locked(int *error)
+ 	struct psp_device *psp = psp_master;
+ 	struct sev_data_init data;
+ 	struct sev_device *sev;
+-	int rc = 0;
++	int psp_ret, rc = 0;
+ 
+ 	if (!psp || !psp->sev_data)
+ 		return -ENODEV;
+@@ -266,7 +266,21 @@ static int __sev_platform_init_locked(int *error)
+ 		data.tmr_len = SEV_ES_TMR_SIZE;
+ 	}
+ 
+-	rc = __sev_do_cmd_locked(SEV_CMD_INIT, &data, error);
++	rc = __sev_do_cmd_locked(SEV_CMD_INIT, &data, &psp_ret);
++	if (rc && psp_ret == SEV_RET_SECURE_DATA_INVALID) {
++		/*
++		 * Initialization command returned an integrity check failure
++		 * status code, meaning that firmware load and validation of SEV
++		 * related persistent data has failed. Retrying the
++		 * initialization function should succeed by replacing the state
++		 * with a reset state.
++		 */
++		dev_dbg(sev->dev, "SEV: retrying INIT command");
++		rc = __sev_do_cmd_locked(SEV_CMD_INIT, &data, &psp_ret);
++	}
++	if (error)
++		*error = psp_ret;
++
+ 	if (rc)
+ 		return rc;
+ 
+@@ -1091,18 +1105,6 @@ void sev_pci_init(void)
+ 
+ 	/* Initialize the platform */
+ 	rc = sev_platform_init(&error);
+-	if (rc && (error == SEV_RET_SECURE_DATA_INVALID)) {
+-		/*
+-		 * INIT command returned an integrity check failure
+-		 * status code, meaning that firmware load and
+-		 * validation of SEV related persistent data has
+-		 * failed and persistent state has been erased.
+-		 * Retrying INIT command here should succeed.
+-		 */
+-		dev_dbg(sev->dev, "SEV: retrying INIT command");
+-		rc = sev_platform_init(&error);
+-	}
+-
+ 	if (rc) {
+ 		dev_err(sev->dev, "SEV: failed to INIT error %#x\n", error);
+ 		return;
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+index a032c192ef1d6..7ba7641723a0b 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+@@ -1865,7 +1865,7 @@ static int hpre_curve25519_src_init(struct hpre_asym_request *hpre_req,
+ 	 */
+ 	if (memcmp(ptr, p, ctx->key_sz) == 0) {
+ 		dev_err(dev, "gx is p!\n");
+-		return -EINVAL;
++		goto err;
+ 	} else if (memcmp(ptr, p, ctx->key_sz) > 0) {
+ 		hpre_curve25519_src_modulo_p(ptr);
+ 	}
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index 52d6cca6262e2..1dc6a27ba0e0d 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -3399,6 +3399,7 @@ void hisi_qm_uninit(struct hisi_qm *qm)
+ 		dma_free_coherent(dev, qm->qdma.size,
+ 				  qm->qdma.va, qm->qdma.dma);
+ 	}
++	up_write(&qm->qps_lock);
+ 
+ 	qm_irq_unregister(qm);
+ 	hisi_qm_pci_uninit(qm);
+@@ -3406,8 +3407,6 @@ void hisi_qm_uninit(struct hisi_qm *qm)
+ 		uacce_remove(qm->uacce);
+ 		qm->uacce = NULL;
+ 	}
+-
+-	up_write(&qm->qps_lock);
+ }
+ EXPORT_SYMBOL_GPL(hisi_qm_uninit);
+ 
+@@ -6038,7 +6037,7 @@ int hisi_qm_resume(struct device *dev)
+ 	if (ret)
+ 		pci_err(pdev, "failed to start qm(%d)\n", ret);
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(hisi_qm_resume);
+ 
+diff --git a/drivers/crypto/keembay/keembay-ocs-ecc.c b/drivers/crypto/keembay/keembay-ocs-ecc.c
+index 679e6ae295e0b..5d0785d3f1b55 100644
+--- a/drivers/crypto/keembay/keembay-ocs-ecc.c
++++ b/drivers/crypto/keembay/keembay-ocs-ecc.c
+@@ -930,6 +930,7 @@ static int kmb_ocs_ecc_probe(struct platform_device *pdev)
+ 	ecc_dev->engine = crypto_engine_alloc_init(dev, 1);
+ 	if (!ecc_dev->engine) {
+ 		dev_err(dev, "Could not allocate crypto engine\n");
++		rc = -ENOMEM;
+ 		goto list_del;
+ 	}
+ 
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+index 146a55ac4b9b0..be1ad55a208f6 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+@@ -494,12 +494,11 @@ static ssize_t kvf_limits_store(struct device *dev,
+ {
+ 	struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev);
+ 	int lfs_num;
++	int ret;
+ 
+-	if (kstrtoint(buf, 0, &lfs_num)) {
+-		dev_err(dev, "lfs count %d must be in range [1 - %d]\n",
+-			lfs_num, num_online_cpus());
+-		return -EINVAL;
+-	}
++	ret = kstrtoint(buf, 0, &lfs_num);
++	if (ret)
++		return ret;
+ 	if (lfs_num < 1 || lfs_num > num_online_cpus()) {
+ 		dev_err(dev, "lfs count %d must be in range [1 - %d]\n",
+ 			lfs_num, num_online_cpus());
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+index dff34b3ec09e1..7c1b92aaab398 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+@@ -29,7 +29,8 @@ static struct otx2_cpt_bitmap get_cores_bmap(struct device *dev,
+ 	bool found = false;
+ 	int i;
+ 
+-	if (eng_grp->g->engs_num > OTX2_CPT_MAX_ENGINES) {
++	if (eng_grp->g->engs_num < 0 ||
++	    eng_grp->g->engs_num > OTX2_CPT_MAX_ENGINES) {
+ 		dev_err(dev, "unsupported number of engines %d on octeontx2\n",
+ 			eng_grp->g->engs_num);
+ 		return bmap;
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 9b968ac4ee7b6..a196bb8b17010 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -1302,7 +1302,7 @@ static int omap_aes_suspend(struct device *dev)
+ 
+ static int omap_aes_resume(struct device *dev)
+ {
+-	pm_runtime_resume_and_get(dev);
++	pm_runtime_get_sync(dev);
+ 	return 0;
+ }
+ #endif
+diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+index 59860bdaedb69..99ee17c3d06bf 100644
+--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+@@ -107,6 +107,12 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 		val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
+ 	} while ((val & int_bit) && (count++ < ADF_PFVF_MSG_ACK_MAX_RETRY));
+ 
++	if (val & int_bit) {
++		dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
++		val &= ~int_bit;
++		ret = -EIO;
++	}
++
+ 	if (val != msg) {
+ 		dev_dbg(&GET_DEV(accel_dev),
+ 			"Collision - PFVF CSR overwritten by remote function\n");
+@@ -114,12 +120,6 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 		goto out;
+ 	}
+ 
+-	if (val & int_bit) {
+-		dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
+-		val &= ~int_bit;
+-		ret = -EIO;
+-	}
+-
+ 	/* Finished with the PFVF CSR; relinquish it and leave msg in CSR */
+ 	ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, val & ~local_in_use_mask);
+ out:
+diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
+index 290e2446a2f35..97a530171f07a 100644
+--- a/drivers/crypto/qce/aead.c
++++ b/drivers/crypto/qce/aead.c
+@@ -802,8 +802,8 @@ static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_devi
+ 
+ 	ret = crypto_register_aead(alg);
+ 	if (ret) {
+-		kfree(tmpl);
+ 		dev_err(qce->dev, "%s registration failed\n", alg->base.cra_name);
++		kfree(tmpl);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
+index 8e6fcf2c21cc0..59159f5e64e52 100644
+--- a/drivers/crypto/qce/sha.c
++++ b/drivers/crypto/qce/sha.c
+@@ -498,8 +498,8 @@ static int qce_ahash_register_one(const struct qce_ahash_def *def,
+ 
+ 	ret = crypto_register_ahash(alg);
+ 	if (ret) {
+-		kfree(tmpl);
+ 		dev_err(qce->dev, "%s registration failed\n", base->cra_name);
++		kfree(tmpl);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
+index 8ff10928f581d..3d27cd5210ef5 100644
+--- a/drivers/crypto/qce/skcipher.c
++++ b/drivers/crypto/qce/skcipher.c
+@@ -484,8 +484,8 @@ static int qce_skcipher_register_one(const struct qce_skcipher_def *def,
+ 
+ 	ret = crypto_register_skcipher(alg);
+ 	if (ret) {
+-		kfree(tmpl);
+ 		dev_err(qce->dev, "%s registration failed\n", alg->base.cra_name);
++		kfree(tmpl);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index 75867c0b00172..be1bf39a317de 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -279,7 +279,7 @@ static struct shash_alg algs[] = {
+ 		.digestsize     = CHKSUM_DIGEST_SIZE,
+ 		.base           = {
+ 			.cra_name               = "crc32",
+-			.cra_driver_name        = DRIVER_NAME,
++			.cra_driver_name        = "stm32-crc32-crc32",
+ 			.cra_priority           = 200,
+ 			.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+ 			.cra_blocksize          = CHKSUM_BLOCK_SIZE,
+@@ -301,7 +301,7 @@ static struct shash_alg algs[] = {
+ 		.digestsize     = CHKSUM_DIGEST_SIZE,
+ 		.base           = {
+ 			.cra_name               = "crc32c",
+-			.cra_driver_name        = DRIVER_NAME,
++			.cra_driver_name        = "stm32-crc32-crc32c",
+ 			.cra_priority           = 200,
+ 			.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
+ 			.cra_blocksize          = CHKSUM_BLOCK_SIZE,
+diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
+index 7389a0536ff02..81eb136b6c11d 100644
+--- a/drivers/crypto/stm32/stm32-cryp.c
++++ b/drivers/crypto/stm32/stm32-cryp.c
+@@ -37,7 +37,6 @@
+ /* Mode mask = bits [15..0] */
+ #define FLG_MODE_MASK           GENMASK(15, 0)
+ /* Bit [31..16] status  */
+-#define FLG_CCM_PADDED_WA       BIT(16)
+ 
+ /* Registers */
+ #define CRYP_CR                 0x00000000
+@@ -105,8 +104,6 @@
+ /* Misc */
+ #define AES_BLOCK_32            (AES_BLOCK_SIZE / sizeof(u32))
+ #define GCM_CTR_INIT            2
+-#define _walked_in              (cryp->in_walk.offset - cryp->in_sg->offset)
+-#define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
+ #define CRYP_AUTOSUSPEND_DELAY	50
+ 
+ struct stm32_cryp_caps {
+@@ -144,26 +141,16 @@ struct stm32_cryp {
+ 	size_t                  authsize;
+ 	size_t                  hw_blocksize;
+ 
+-	size_t                  total_in;
+-	size_t                  total_in_save;
+-	size_t                  total_out;
+-	size_t                  total_out_save;
++	size_t                  payload_in;
++	size_t                  header_in;
++	size_t                  payload_out;
+ 
+-	struct scatterlist      *in_sg;
+ 	struct scatterlist      *out_sg;
+-	struct scatterlist      *out_sg_save;
+-
+-	struct scatterlist      in_sgl;
+-	struct scatterlist      out_sgl;
+-	bool                    sgs_copied;
+-
+-	int                     in_sg_len;
+-	int                     out_sg_len;
+ 
+ 	struct scatter_walk     in_walk;
+ 	struct scatter_walk     out_walk;
+ 
+-	u32                     last_ctr[4];
++	__be32                  last_ctr[4];
+ 	u32                     gcm_ctr;
+ };
+ 
+@@ -262,6 +249,7 @@ static inline int stm32_cryp_wait_output(struct stm32_cryp *cryp)
+ }
+ 
+ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp);
++static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err);
+ 
+ static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
+ {
+@@ -283,103 +271,6 @@ static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
+ 	return cryp;
+ }
+ 
+-static int stm32_cryp_check_aligned(struct scatterlist *sg, size_t total,
+-				    size_t align)
+-{
+-	int len = 0;
+-
+-	if (!total)
+-		return 0;
+-
+-	if (!IS_ALIGNED(total, align))
+-		return -EINVAL;
+-
+-	while (sg) {
+-		if (!IS_ALIGNED(sg->offset, sizeof(u32)))
+-			return -EINVAL;
+-
+-		if (!IS_ALIGNED(sg->length, align))
+-			return -EINVAL;
+-
+-		len += sg->length;
+-		sg = sg_next(sg);
+-	}
+-
+-	if (len != total)
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-static int stm32_cryp_check_io_aligned(struct stm32_cryp *cryp)
+-{
+-	int ret;
+-
+-	ret = stm32_cryp_check_aligned(cryp->in_sg, cryp->total_in,
+-				       cryp->hw_blocksize);
+-	if (ret)
+-		return ret;
+-
+-	ret = stm32_cryp_check_aligned(cryp->out_sg, cryp->total_out,
+-				       cryp->hw_blocksize);
+-
+-	return ret;
+-}
+-
+-static void sg_copy_buf(void *buf, struct scatterlist *sg,
+-			unsigned int start, unsigned int nbytes, int out)
+-{
+-	struct scatter_walk walk;
+-
+-	if (!nbytes)
+-		return;
+-
+-	scatterwalk_start(&walk, sg);
+-	scatterwalk_advance(&walk, start);
+-	scatterwalk_copychunks(buf, &walk, nbytes, out);
+-	scatterwalk_done(&walk, out, 0);
+-}
+-
+-static int stm32_cryp_copy_sgs(struct stm32_cryp *cryp)
+-{
+-	void *buf_in, *buf_out;
+-	int pages, total_in, total_out;
+-
+-	if (!stm32_cryp_check_io_aligned(cryp)) {
+-		cryp->sgs_copied = 0;
+-		return 0;
+-	}
+-
+-	total_in = ALIGN(cryp->total_in, cryp->hw_blocksize);
+-	pages = total_in ? get_order(total_in) : 1;
+-	buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages);
+-
+-	total_out = ALIGN(cryp->total_out, cryp->hw_blocksize);
+-	pages = total_out ? get_order(total_out) : 1;
+-	buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages);
+-
+-	if (!buf_in || !buf_out) {
+-		dev_err(cryp->dev, "Can't allocate pages when unaligned\n");
+-		cryp->sgs_copied = 0;
+-		return -EFAULT;
+-	}
+-
+-	sg_copy_buf(buf_in, cryp->in_sg, 0, cryp->total_in, 0);
+-
+-	sg_init_one(&cryp->in_sgl, buf_in, total_in);
+-	cryp->in_sg = &cryp->in_sgl;
+-	cryp->in_sg_len = 1;
+-
+-	sg_init_one(&cryp->out_sgl, buf_out, total_out);
+-	cryp->out_sg_save = cryp->out_sg;
+-	cryp->out_sg = &cryp->out_sgl;
+-	cryp->out_sg_len = 1;
+-
+-	cryp->sgs_copied = 1;
+-
+-	return 0;
+-}
+-
+ static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, __be32 *iv)
+ {
+ 	if (!iv)
+@@ -481,16 +372,99 @@ static int stm32_cryp_gcm_init(struct stm32_cryp *cryp, u32 cfg)
+ 
+ 	/* Wait for end of processing */
+ 	ret = stm32_cryp_wait_enable(cryp);
+-	if (ret)
++	if (ret) {
+ 		dev_err(cryp->dev, "Timeout (gcm init)\n");
++		return ret;
++	}
+ 
+-	return ret;
++	/* Prepare next phase */
++	if (cryp->areq->assoclen) {
++		cfg |= CR_PH_HEADER;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++	} else if (stm32_cryp_get_input_text_len(cryp)) {
++		cfg |= CR_PH_PAYLOAD;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++	}
++
++	return 0;
++}
++
++static void stm32_crypt_gcmccm_end_header(struct stm32_cryp *cryp)
++{
++	u32 cfg;
++	int err;
++
++	/* Check if whole header written */
++	if (!cryp->header_in) {
++		/* Wait for completion */
++		err = stm32_cryp_wait_busy(cryp);
++		if (err) {
++			dev_err(cryp->dev, "Timeout (gcm/ccm header)\n");
++			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
++			stm32_cryp_finish_req(cryp, err);
++			return;
++		}
++
++		if (stm32_cryp_get_input_text_len(cryp)) {
++			/* Phase 3 : payload */
++			cfg = stm32_cryp_read(cryp, CRYP_CR);
++			cfg &= ~CR_CRYPEN;
++			stm32_cryp_write(cryp, CRYP_CR, cfg);
++
++			cfg &= ~CR_PH_MASK;
++			cfg |= CR_PH_PAYLOAD | CR_CRYPEN;
++			stm32_cryp_write(cryp, CRYP_CR, cfg);
++		} else {
++			/*
++			 * Phase 4 : tag.
++			 * Nothing to read, nothing to write: the caller has
++			 * to end the request
++			 */
++		}
++	}
++}
++
++static void stm32_cryp_write_ccm_first_header(struct stm32_cryp *cryp)
++{
++	unsigned int i;
++	size_t written;
++	size_t len;
++	u32 alen = cryp->areq->assoclen;
++	u32 block[AES_BLOCK_32] = {0};
++	u8 *b8 = (u8 *)block;
++
++	if (alen <= 65280) {
++		/* Write first u32 of B1 */
++		b8[0] = (alen >> 8) & 0xFF;
++		b8[1] = alen & 0xFF;
++		len = 2;
++	} else {
++		/* Build the two first u32 of B1 */
++		b8[0] = 0xFF;
++		b8[1] = 0xFE;
++		b8[2] = (alen & 0xFF000000) >> 24;
++		b8[3] = (alen & 0x00FF0000) >> 16;
++		b8[4] = (alen & 0x0000FF00) >> 8;
++		b8[5] = alen & 0x000000FF;
++		len = 6;
++	}
++
++	written = min_t(size_t, AES_BLOCK_SIZE - len, alen);
++
++	scatterwalk_copychunks((char *)block + len, &cryp->in_walk, written, 0);
++	for (i = 0; i < AES_BLOCK_32; i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
++
++	cryp->header_in -= written;
++
++	stm32_crypt_gcmccm_end_header(cryp);
+ }
+ 
+ static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
+ {
+ 	int ret;
+-	u8 iv[AES_BLOCK_SIZE], b0[AES_BLOCK_SIZE];
++	u32 iv_32[AES_BLOCK_32], b0_32[AES_BLOCK_32];
++	u8 *iv = (u8 *)iv_32, *b0 = (u8 *)b0_32;
+ 	__be32 *bd;
+ 	u32 *d;
+ 	unsigned int i, textlen;
+@@ -531,10 +505,24 @@ static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
+ 
+ 	/* Wait for end of processing */
+ 	ret = stm32_cryp_wait_enable(cryp);
+-	if (ret)
++	if (ret) {
+ 		dev_err(cryp->dev, "Timeout (ccm init)\n");
++		return ret;
++	}
+ 
+-	return ret;
++	/* Prepare next phase */
++	if (cryp->areq->assoclen) {
++		cfg |= CR_PH_HEADER | CR_CRYPEN;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++
++		/* Write first (special) block (may move to next phase [payload]) */
++		stm32_cryp_write_ccm_first_header(cryp);
++	} else if (stm32_cryp_get_input_text_len(cryp)) {
++		cfg |= CR_PH_PAYLOAD;
++		stm32_cryp_write(cryp, CRYP_CR, cfg);
++	}
++
++	return 0;
+ }
+ 
+ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+@@ -542,7 +530,7 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+ 	int ret;
+ 	u32 cfg, hw_mode;
+ 
+-	pm_runtime_resume_and_get(cryp->dev);
++	pm_runtime_get_sync(cryp->dev);
+ 
+ 	/* Disable interrupt */
+ 	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+@@ -605,16 +593,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+ 		if (ret)
+ 			return ret;
+ 
+-		/* Phase 2 : header (authenticated data) */
+-		if (cryp->areq->assoclen) {
+-			cfg |= CR_PH_HEADER;
+-		} else if (stm32_cryp_get_input_text_len(cryp)) {
+-			cfg |= CR_PH_PAYLOAD;
+-			stm32_cryp_write(cryp, CRYP_CR, cfg);
+-		} else {
+-			cfg |= CR_PH_INIT;
+-		}
+-
+ 		break;
+ 
+ 	case CR_DES_CBC:
+@@ -633,8 +611,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+ 
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+-	cryp->flags &= ~FLG_CCM_PADDED_WA;
+-
+ 	return 0;
+ }
+ 
+@@ -644,28 +620,9 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err)
+ 		/* Phase 4 : output tag */
+ 		err = stm32_cryp_read_auth_tag(cryp);
+ 
+-	if (!err && (!(is_gcm(cryp) || is_ccm(cryp))))
++	if (!err && (!(is_gcm(cryp) || is_ccm(cryp) || is_ecb(cryp))))
+ 		stm32_cryp_get_iv(cryp);
+ 
+-	if (cryp->sgs_copied) {
+-		void *buf_in, *buf_out;
+-		int pages, len;
+-
+-		buf_in = sg_virt(&cryp->in_sgl);
+-		buf_out = sg_virt(&cryp->out_sgl);
+-
+-		sg_copy_buf(buf_out, cryp->out_sg_save, 0,
+-			    cryp->total_out_save, 1);
+-
+-		len = ALIGN(cryp->total_in_save, cryp->hw_blocksize);
+-		pages = len ? get_order(len) : 1;
+-		free_pages((unsigned long)buf_in, pages);
+-
+-		len = ALIGN(cryp->total_out_save, cryp->hw_blocksize);
+-		pages = len ? get_order(len) : 1;
+-		free_pages((unsigned long)buf_out, pages);
+-	}
+-
+ 	pm_runtime_mark_last_busy(cryp->dev);
+ 	pm_runtime_put_autosuspend(cryp->dev);
+ 
+@@ -674,8 +631,6 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err)
+ 	else
+ 		crypto_finalize_skcipher_request(cryp->engine, cryp->req,
+ 						   err);
+-
+-	memset(cryp->ctx->key, 0, cryp->ctx->keylen);
+ }
+ 
+ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
+@@ -801,7 +756,20 @@ static int stm32_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ static int stm32_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+ 					  unsigned int authsize)
+ {
+-	return authsize == AES_BLOCK_SIZE ? 0 : -EINVAL;
++	switch (authsize) {
++	case 4:
++	case 8:
++	case 12:
++	case 13:
++	case 14:
++	case 15:
++	case 16:
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
+ }
+ 
+ static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
+@@ -825,31 +793,61 @@ static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
+ 
+ static int stm32_cryp_aes_ecb_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_ecb_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB);
+ }
+ 
+ static int stm32_cryp_aes_cbc_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_cbc_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % AES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC);
+ }
+ 
+ static int stm32_cryp_aes_ctr_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_ctr_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR);
+ }
+ 
+@@ -863,53 +861,122 @@ static int stm32_cryp_aes_gcm_decrypt(struct aead_request *req)
+ 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM);
+ }
+ 
++static inline int crypto_ccm_check_iv(const u8 *iv)
++{
++	/* 2 <= L <= 8, so 1 <= L' <= 7. */
++	if (iv[0] < 1 || iv[0] > 7)
++		return -EINVAL;
++
++	return 0;
++}
++
+ static int stm32_cryp_aes_ccm_encrypt(struct aead_request *req)
+ {
++	int err;
++
++	err = crypto_ccm_check_iv(req->iv);
++	if (err)
++		return err;
++
+ 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req)
+ {
++	int err;
++
++	err = crypto_ccm_check_iv(req->iv);
++	if (err)
++		return err;
++
+ 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM);
+ }
+ 
+ static int stm32_cryp_des_ecb_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_des_ecb_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB);
+ }
+ 
+ static int stm32_cryp_des_cbc_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_des_cbc_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC);
+ }
+ 
+ static int stm32_cryp_tdes_ecb_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_tdes_ecb_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB);
+ }
+ 
+ static int stm32_cryp_tdes_cbc_encrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT);
+ }
+ 
+ static int stm32_cryp_tdes_cbc_decrypt(struct skcipher_request *req)
+ {
++	if (req->cryptlen % DES_BLOCK_SIZE)
++		return -EINVAL;
++
++	if (req->cryptlen == 0)
++		return 0;
++
+ 	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC);
+ }
+ 
+@@ -919,6 +986,7 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req,
+ 	struct stm32_cryp_ctx *ctx;
+ 	struct stm32_cryp *cryp;
+ 	struct stm32_cryp_reqctx *rctx;
++	struct scatterlist *in_sg;
+ 	int ret;
+ 
+ 	if (!req && !areq)
+@@ -944,76 +1012,55 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req,
+ 	if (req) {
+ 		cryp->req = req;
+ 		cryp->areq = NULL;
+-		cryp->total_in = req->cryptlen;
+-		cryp->total_out = cryp->total_in;
++		cryp->header_in = 0;
++		cryp->payload_in = req->cryptlen;
++		cryp->payload_out = req->cryptlen;
++		cryp->authsize = 0;
+ 	} else {
+ 		/*
+ 		 * Length of input and output data:
+ 		 * Encryption case:
+-		 *  INPUT  =   AssocData  ||   PlainText
++		 *  INPUT  = AssocData   ||     PlainText
+ 		 *          <- assoclen ->  <- cryptlen ->
+-		 *          <------- total_in ----------->
+ 		 *
+-		 *  OUTPUT =   AssocData  ||  CipherText  ||   AuthTag
+-		 *          <- assoclen ->  <- cryptlen ->  <- authsize ->
+-		 *          <---------------- total_out ----------------->
++		 *  OUTPUT = AssocData    ||   CipherText   ||      AuthTag
++		 *          <- assoclen ->  <-- cryptlen -->  <- authsize ->
+ 		 *
+ 		 * Decryption case:
+-		 *  INPUT  =   AssocData  ||  CipherText  ||  AuthTag
+-		 *          <- assoclen ->  <--------- cryptlen --------->
+-		 *                                          <- authsize ->
+-		 *          <---------------- total_in ------------------>
++		 *  INPUT  =  AssocData     ||   CipherText   ||       AuthTag
++		 *          <- assoclen --->  <---------- cryptlen ---------->
+ 		 *
+-		 *  OUTPUT =   AssocData  ||   PlainText
+-		 *          <- assoclen ->  <- crypten - authsize ->
+-		 *          <---------- total_out ----------------->
++		 *  OUTPUT = AssocData    ||               PlainText
++		 *          <- assoclen ->  <- cryptlen - authsize ->
+ 		 */
+ 		cryp->areq = areq;
+ 		cryp->req = NULL;
+ 		cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq));
+-		cryp->total_in = areq->assoclen + areq->cryptlen;
+-		if (is_encrypt(cryp))
+-			/* Append auth tag to output */
+-			cryp->total_out = cryp->total_in + cryp->authsize;
+-		else
+-			/* No auth tag in output */
+-			cryp->total_out = cryp->total_in - cryp->authsize;
++		if (is_encrypt(cryp)) {
++			cryp->payload_in = areq->cryptlen;
++			cryp->header_in = areq->assoclen;
++			cryp->payload_out = areq->cryptlen;
++		} else {
++			cryp->payload_in = areq->cryptlen - cryp->authsize;
++			cryp->header_in = areq->assoclen;
++			cryp->payload_out = cryp->payload_in;
++		}
+ 	}
+ 
+-	cryp->total_in_save = cryp->total_in;
+-	cryp->total_out_save = cryp->total_out;
++	in_sg = req ? req->src : areq->src;
++	scatterwalk_start(&cryp->in_walk, in_sg);
+ 
+-	cryp->in_sg = req ? req->src : areq->src;
+ 	cryp->out_sg = req ? req->dst : areq->dst;
+-	cryp->out_sg_save = cryp->out_sg;
+-
+-	cryp->in_sg_len = sg_nents_for_len(cryp->in_sg, cryp->total_in);
+-	if (cryp->in_sg_len < 0) {
+-		dev_err(cryp->dev, "Cannot get in_sg_len\n");
+-		ret = cryp->in_sg_len;
+-		return ret;
+-	}
+-
+-	cryp->out_sg_len = sg_nents_for_len(cryp->out_sg, cryp->total_out);
+-	if (cryp->out_sg_len < 0) {
+-		dev_err(cryp->dev, "Cannot get out_sg_len\n");
+-		ret = cryp->out_sg_len;
+-		return ret;
+-	}
+-
+-	ret = stm32_cryp_copy_sgs(cryp);
+-	if (ret)
+-		return ret;
+-
+-	scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+ 	scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+ 
+ 	if (is_gcm(cryp) || is_ccm(cryp)) {
+ 		/* In output, jump after assoc data */
+-		scatterwalk_advance(&cryp->out_walk, cryp->areq->assoclen);
+-		cryp->total_out -= cryp->areq->assoclen;
++		scatterwalk_copychunks(NULL, &cryp->out_walk, cryp->areq->assoclen, 2);
+ 	}
+ 
++	if (is_ctr(cryp))
++		memset(cryp->last_ctr, 0, sizeof(cryp->last_ctr));
++
+ 	ret = stm32_cryp_hw_init(cryp);
+ 	return ret;
+ }
+@@ -1061,8 +1108,7 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq)
+ 	if (!cryp)
+ 		return -ENODEV;
+ 
+-	if (unlikely(!cryp->areq->assoclen &&
+-		     !stm32_cryp_get_input_text_len(cryp))) {
++	if (unlikely(!cryp->payload_in && !cryp->header_in)) {
+ 		/* No input data to process: get tag and finish */
+ 		stm32_cryp_finish_req(cryp, 0);
+ 		return 0;
+@@ -1071,43 +1117,10 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq)
+ 	return stm32_cryp_cpu_start(cryp);
+ }
+ 
+-static u32 *stm32_cryp_next_out(struct stm32_cryp *cryp, u32 *dst,
+-				unsigned int n)
+-{
+-	scatterwalk_advance(&cryp->out_walk, n);
+-
+-	if (unlikely(cryp->out_sg->length == _walked_out)) {
+-		cryp->out_sg = sg_next(cryp->out_sg);
+-		if (cryp->out_sg) {
+-			scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+-			return (sg_virt(cryp->out_sg) + _walked_out);
+-		}
+-	}
+-
+-	return (u32 *)((u8 *)dst + n);
+-}
+-
+-static u32 *stm32_cryp_next_in(struct stm32_cryp *cryp, u32 *src,
+-			       unsigned int n)
+-{
+-	scatterwalk_advance(&cryp->in_walk, n);
+-
+-	if (unlikely(cryp->in_sg->length == _walked_in)) {
+-		cryp->in_sg = sg_next(cryp->in_sg);
+-		if (cryp->in_sg) {
+-			scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+-			return (sg_virt(cryp->in_sg) + _walked_in);
+-		}
+-	}
+-
+-	return (u32 *)((u8 *)src + n);
+-}
+-
+ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ {
+-	u32 cfg, size_bit, *dst, d32;
+-	u8 *d8;
+-	unsigned int i, j;
++	u32 cfg, size_bit;
++	unsigned int i;
+ 	int ret = 0;
+ 
+ 	/* Update Config */
+@@ -1130,7 +1143,7 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ 		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+ 
+ 		size_bit = is_encrypt(cryp) ? cryp->areq->cryptlen :
+-				cryp->areq->cryptlen - AES_BLOCK_SIZE;
++				cryp->areq->cryptlen - cryp->authsize;
+ 		size_bit *= 8;
+ 		if (cryp->caps->swap_final)
+ 			size_bit = (__force u32)cpu_to_be32(size_bit);
+@@ -1139,11 +1152,9 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ 		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+ 	} else {
+ 		/* CCM: write CTR0 */
+-		u8 iv[AES_BLOCK_SIZE];
+-		u32 *iv32 = (u32 *)iv;
+-		__be32 *biv;
+-
+-		biv = (void *)iv;
++		u32 iv32[AES_BLOCK_32];
++		u8 *iv = (u8 *)iv32;
++		__be32 *biv = (__be32 *)iv32;
+ 
+ 		memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+ 		memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+@@ -1165,39 +1176,18 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+ 	}
+ 
+ 	if (is_encrypt(cryp)) {
++		u32 out_tag[AES_BLOCK_32];
++
+ 		/* Get and write tag */
+-		dst = sg_virt(cryp->out_sg) + _walked_out;
++		for (i = 0; i < AES_BLOCK_32; i++)
++			out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+ 
+-		for (i = 0; i < AES_BLOCK_32; i++) {
+-			if (cryp->total_out >= sizeof(u32)) {
+-				/* Read a full u32 */
+-				*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+-
+-				dst = stm32_cryp_next_out(cryp, dst,
+-							  sizeof(u32));
+-				cryp->total_out -= sizeof(u32);
+-			} else if (!cryp->total_out) {
+-				/* Empty fifo out (data from input padding) */
+-				stm32_cryp_read(cryp, CRYP_DOUT);
+-			} else {
+-				/* Read less than an u32 */
+-				d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+-				d8 = (u8 *)&d32;
+-
+-				for (j = 0; j < cryp->total_out; j++) {
+-					*((u8 *)dst) = *(d8++);
+-					dst = stm32_cryp_next_out(cryp, dst, 1);
+-				}
+-				cryp->total_out = 0;
+-			}
+-		}
++		scatterwalk_copychunks(out_tag, &cryp->out_walk, cryp->authsize, 1);
+ 	} else {
+ 		/* Get and check tag */
+ 		u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32];
+ 
+-		scatterwalk_map_and_copy(in_tag, cryp->in_sg,
+-					 cryp->total_in_save - cryp->authsize,
+-					 cryp->authsize, 0);
++		scatterwalk_copychunks(in_tag, &cryp->in_walk, cryp->authsize, 0);
+ 
+ 		for (i = 0; i < AES_BLOCK_32; i++)
+ 			out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+@@ -1217,115 +1207,59 @@ static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp)
+ {
+ 	u32 cr;
+ 
+-	if (unlikely(cryp->last_ctr[3] == 0xFFFFFFFF)) {
+-		cryp->last_ctr[3] = 0;
+-		cryp->last_ctr[2]++;
+-		if (!cryp->last_ctr[2]) {
+-			cryp->last_ctr[1]++;
+-			if (!cryp->last_ctr[1])
+-				cryp->last_ctr[0]++;
+-		}
++	if (unlikely(cryp->last_ctr[3] == cpu_to_be32(0xFFFFFFFF))) {
++		/*
++		 * In this case we need to increment the CTR counter manually,
++		 * as the HW doesn't handle the u32 carry.
++		 */
++		crypto_inc((u8 *)cryp->last_ctr, sizeof(cryp->last_ctr));
+ 
+ 		cr = stm32_cryp_read(cryp, CRYP_CR);
+ 		stm32_cryp_write(cryp, CRYP_CR, cr & ~CR_CRYPEN);
+ 
+-		stm32_cryp_hw_write_iv(cryp, (__be32 *)cryp->last_ctr);
++		stm32_cryp_hw_write_iv(cryp, cryp->last_ctr);
+ 
+ 		stm32_cryp_write(cryp, CRYP_CR, cr);
+ 	}
+ 
+-	cryp->last_ctr[0] = stm32_cryp_read(cryp, CRYP_IV0LR);
+-	cryp->last_ctr[1] = stm32_cryp_read(cryp, CRYP_IV0RR);
+-	cryp->last_ctr[2] = stm32_cryp_read(cryp, CRYP_IV1LR);
+-	cryp->last_ctr[3] = stm32_cryp_read(cryp, CRYP_IV1RR);
++	/* The IV registers are BE */
++	cryp->last_ctr[0] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0LR));
++	cryp->last_ctr[1] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0RR));
++	cryp->last_ctr[2] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1LR));
++	cryp->last_ctr[3] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1RR));
+ }
+ 
+-static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
++static void stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
+ {
+-	unsigned int i, j;
+-	u32 d32, *dst;
+-	u8 *d8;
+-	size_t tag_size;
+-
+-	/* Do no read tag now (if any) */
+-	if (is_encrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+-		tag_size = cryp->authsize;
+-	else
+-		tag_size = 0;
+-
+-	dst = sg_virt(cryp->out_sg) + _walked_out;
++	unsigned int i;
++	u32 block[AES_BLOCK_32];
+ 
+-	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+-		if (likely(cryp->total_out - tag_size >= sizeof(u32))) {
+-			/* Read a full u32 */
+-			*dst = stm32_cryp_read(cryp, CRYP_DOUT);
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+ 
+-			dst = stm32_cryp_next_out(cryp, dst, sizeof(u32));
+-			cryp->total_out -= sizeof(u32);
+-		} else if (cryp->total_out == tag_size) {
+-			/* Empty fifo out (data from input padding) */
+-			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+-		} else {
+-			/* Read less than an u32 */
+-			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+-			d8 = (u8 *)&d32;
+-
+-			for (j = 0; j < cryp->total_out - tag_size; j++) {
+-				*((u8 *)dst) = *(d8++);
+-				dst = stm32_cryp_next_out(cryp, dst, 1);
+-			}
+-			cryp->total_out = tag_size;
+-		}
+-	}
+-
+-	return !(cryp->total_out - tag_size) || !cryp->total_in;
++	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
++							     cryp->payload_out), 1);
++	cryp->payload_out -= min_t(size_t, cryp->hw_blocksize,
++				   cryp->payload_out);
+ }
+ 
+ static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp)
+ {
+-	unsigned int i, j;
+-	u32 *src;
+-	u8 d8[4];
+-	size_t tag_size;
+-
+-	/* Do no write tag (if any) */
+-	if (is_decrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+-		tag_size = cryp->authsize;
+-	else
+-		tag_size = 0;
+-
+-	src = sg_virt(cryp->in_sg) + _walked_in;
++	unsigned int i;
++	u32 block[AES_BLOCK_32] = {0};
+ 
+-	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+-		if (likely(cryp->total_in - tag_size >= sizeof(u32))) {
+-			/* Write a full u32 */
+-			stm32_cryp_write(cryp, CRYP_DIN, *src);
++	scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize,
++							    cryp->payload_in), 0);
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 
+-			src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+-			cryp->total_in -= sizeof(u32);
+-		} else if (cryp->total_in == tag_size) {
+-			/* Write padding data */
+-			stm32_cryp_write(cryp, CRYP_DIN, 0);
+-		} else {
+-			/* Write less than an u32 */
+-			memset(d8, 0, sizeof(u32));
+-			for (j = 0; j < cryp->total_in - tag_size; j++) {
+-				d8[j] = *((u8 *)src);
+-				src = stm32_cryp_next_in(cryp, src, 1);
+-			}
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			cryp->total_in = tag_size;
+-		}
+-	}
++	cryp->payload_in -= min_t(size_t, cryp->hw_blocksize, cryp->payload_in);
+ }
+ 
+ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ {
+ 	int err;
+-	u32 cfg, tmp[AES_BLOCK_32];
+-	size_t total_in_ori = cryp->total_in;
+-	struct scatterlist *out_sg_ori = cryp->out_sg;
++	u32 cfg, block[AES_BLOCK_32] = {0};
+ 	unsigned int i;
+ 
+ 	/* 'Special workaround' procedure described in the datasheet */
+@@ -1350,18 +1284,25 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ 
+ 	/* b) pad and write the last block */
+ 	stm32_cryp_irq_write_block(cryp);
+-	cryp->total_in = total_in_ori;
++	/* Wait for end of processing */
+ 	err = stm32_cryp_wait_output(cryp);
+ 	if (err) {
+-		dev_err(cryp->dev, "Timeout (write gcm header)\n");
++		dev_err(cryp->dev, "Timeout (write gcm last data)\n");
+ 		return stm32_cryp_finish_req(cryp, err);
+ 	}
+ 
+ 	/* c) get and store encrypted data */
+-	stm32_cryp_irq_read_data(cryp);
+-	scatterwalk_map_and_copy(tmp, out_sg_ori,
+-				 cryp->total_in_save - total_in_ori,
+-				 total_in_ori, 0);
++	/*
++	 * Same code as stm32_cryp_irq_read_data(), but we want to store
++	 * block value
++	 */
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
++
++	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
++							     cryp->payload_out), 1);
++	cryp->payload_out -= min_t(size_t, cryp->hw_blocksize,
++				   cryp->payload_out);
+ 
+ 	/* d) change mode back to AES GCM */
+ 	cfg &= ~CR_ALGO_MASK;
+@@ -1374,19 +1315,13 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+ 	/* f) write padded data */
+-	for (i = 0; i < AES_BLOCK_32; i++) {
+-		if (cryp->total_in)
+-			stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+-		else
+-			stm32_cryp_write(cryp, CRYP_DIN, 0);
+-
+-		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+-	}
++	for (i = 0; i < AES_BLOCK_32; i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 
+ 	/* g) Empty fifo out */
+ 	err = stm32_cryp_wait_output(cryp);
+ 	if (err) {
+-		dev_err(cryp->dev, "Timeout (write gcm header)\n");
++		dev_err(cryp->dev, "Timeout (write gcm padded data)\n");
+ 		return stm32_cryp_finish_req(cryp, err);
+ 	}
+ 
+@@ -1399,16 +1334,14 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+ 
+ static void stm32_cryp_irq_set_npblb(struct stm32_cryp *cryp)
+ {
+-	u32 cfg, payload_bytes;
++	u32 cfg;
+ 
+ 	/* disable ip, set NPBLB and reneable ip */
+ 	cfg = stm32_cryp_read(cryp, CRYP_CR);
+ 	cfg &= ~CR_CRYPEN;
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+-	payload_bytes = is_decrypt(cryp) ? cryp->total_in - cryp->authsize :
+-					   cryp->total_in;
+-	cfg |= (cryp->hw_blocksize - payload_bytes) << CR_NBPBL_SHIFT;
++	cfg |= (cryp->hw_blocksize - cryp->payload_in) << CR_NBPBL_SHIFT;
+ 	cfg |= CR_CRYPEN;
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ }
+@@ -1417,13 +1350,11 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ {
+ 	int err = 0;
+ 	u32 cfg, iv1tmp;
+-	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32];
+-	size_t last_total_out, total_in_ori = cryp->total_in;
+-	struct scatterlist *out_sg_ori = cryp->out_sg;
++	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32];
++	u32 block[AES_BLOCK_32] = {0};
+ 	unsigned int i;
+ 
+ 	/* 'Special workaround' procedure described in the datasheet */
+-	cryp->flags |= FLG_CCM_PADDED_WA;
+ 
+ 	/* a) disable ip */
+ 	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+@@ -1453,7 +1384,7 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 
+ 	/* b) pad and write the last block */
+ 	stm32_cryp_irq_write_block(cryp);
+-	cryp->total_in = total_in_ori;
++	/* Wait for end of processing */
+ 	err = stm32_cryp_wait_output(cryp);
+ 	if (err) {
+ 		dev_err(cryp->dev, "Timeout (wite ccm padded data)\n");
+@@ -1461,13 +1392,16 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 	}
+ 
+ 	/* c) get and store decrypted data */
+-	last_total_out = cryp->total_out;
+-	stm32_cryp_irq_read_data(cryp);
++	/*
++	 * Same code as stm32_cryp_irq_read_data(), but we want to store
++	 * the block value
++	 */
++	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
++		block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+ 
+-	memset(tmp, 0, sizeof(tmp));
+-	scatterwalk_map_and_copy(tmp, out_sg_ori,
+-				 cryp->total_out_save - last_total_out,
+-				 last_total_out, 0);
++	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
++							     cryp->payload_out), 1);
++	cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out);
+ 
+ 	/* d) Load again CRYP_CSGCMCCMxR */
+ 	for (i = 0; i < ARRAY_SIZE(cstmp2); i++)
+@@ -1484,10 +1418,10 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 	stm32_cryp_write(cryp, CRYP_CR, cfg);
+ 
+ 	/* g) XOR and write padded data */
+-	for (i = 0; i < ARRAY_SIZE(tmp); i++) {
+-		tmp[i] ^= cstmp1[i];
+-		tmp[i] ^= cstmp2[i];
+-		stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
++	for (i = 0; i < ARRAY_SIZE(block); i++) {
++		block[i] ^= cstmp1[i];
++		block[i] ^= cstmp2[i];
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 	}
+ 
+ 	/* h) wait for completion */
+@@ -1501,30 +1435,34 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+ 
+ static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
+ {
+-	if (unlikely(!cryp->total_in)) {
++	if (unlikely(!cryp->payload_in)) {
+ 		dev_warn(cryp->dev, "No more data to process\n");
+ 		return;
+ 	}
+ 
+-	if (unlikely(cryp->total_in < AES_BLOCK_SIZE &&
++	if (unlikely(cryp->payload_in < AES_BLOCK_SIZE &&
+ 		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+ 		     is_encrypt(cryp))) {
+ 		/* Padding for AES GCM encryption */
+-		if (cryp->caps->padding_wa)
++		if (cryp->caps->padding_wa) {
+ 			/* Special case 1 */
+-			return stm32_cryp_irq_write_gcm_padded_data(cryp);
++			stm32_cryp_irq_write_gcm_padded_data(cryp);
++			return;
++		}
+ 
+ 		/* Setting padding bytes (NBBLB) */
+ 		stm32_cryp_irq_set_npblb(cryp);
+ 	}
+ 
+-	if (unlikely((cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) &&
++	if (unlikely((cryp->payload_in < AES_BLOCK_SIZE) &&
+ 		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_CCM) &&
+ 		     is_decrypt(cryp))) {
+ 		/* Padding for AES CCM decryption */
+-		if (cryp->caps->padding_wa)
++		if (cryp->caps->padding_wa) {
+ 			/* Special case 2 */
+-			return stm32_cryp_irq_write_ccm_padded_data(cryp);
++			stm32_cryp_irq_write_ccm_padded_data(cryp);
++			return;
++		}
+ 
+ 		/* Setting padding bytes (NBBLB) */
+ 		stm32_cryp_irq_set_npblb(cryp);
+@@ -1536,192 +1474,60 @@ static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
+ 	stm32_cryp_irq_write_block(cryp);
+ }
+ 
+-static void stm32_cryp_irq_write_gcm_header(struct stm32_cryp *cryp)
++static void stm32_cryp_irq_write_gcmccm_header(struct stm32_cryp *cryp)
+ {
+-	int err;
+-	unsigned int i, j;
+-	u32 cfg, *src;
+-
+-	src = sg_virt(cryp->in_sg) + _walked_in;
+-
+-	for (i = 0; i < AES_BLOCK_32; i++) {
+-		stm32_cryp_write(cryp, CRYP_DIN, *src);
+-
+-		src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+-		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+-
+-		/* Check if whole header written */
+-		if ((cryp->total_in_save - cryp->total_in) ==
+-				cryp->areq->assoclen) {
+-			/* Write padding if needed */
+-			for (j = i + 1; j < AES_BLOCK_32; j++)
+-				stm32_cryp_write(cryp, CRYP_DIN, 0);
+-
+-			/* Wait for completion */
+-			err = stm32_cryp_wait_busy(cryp);
+-			if (err) {
+-				dev_err(cryp->dev, "Timeout (gcm header)\n");
+-				return stm32_cryp_finish_req(cryp, err);
+-			}
+-
+-			if (stm32_cryp_get_input_text_len(cryp)) {
+-				/* Phase 3 : payload */
+-				cfg = stm32_cryp_read(cryp, CRYP_CR);
+-				cfg &= ~CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-
+-				cfg &= ~CR_PH_MASK;
+-				cfg |= CR_PH_PAYLOAD;
+-				cfg |= CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-			} else {
+-				/* Phase 4 : tag */
+-				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+-				stm32_cryp_finish_req(cryp, 0);
+-			}
+-
+-			break;
+-		}
+-
+-		if (!cryp->total_in)
+-			break;
+-	}
+-}
++	unsigned int i;
++	u32 block[AES_BLOCK_32] = {0};
++	size_t written;
+ 
+-static void stm32_cryp_irq_write_ccm_header(struct stm32_cryp *cryp)
+-{
+-	int err;
+-	unsigned int i = 0, j, k;
+-	u32 alen, cfg, *src;
+-	u8 d8[4];
+-
+-	src = sg_virt(cryp->in_sg) + _walked_in;
+-	alen = cryp->areq->assoclen;
+-
+-	if (!_walked_in) {
+-		if (cryp->areq->assoclen <= 65280) {
+-			/* Write first u32 of B1 */
+-			d8[0] = (alen >> 8) & 0xFF;
+-			d8[1] = alen & 0xFF;
+-			d8[2] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-			d8[3] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			i++;
+-
+-			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+-		} else {
+-			/* Build the two first u32 of B1 */
+-			d8[0] = 0xFF;
+-			d8[1] = 0xFE;
+-			d8[2] = alen & 0xFF000000;
+-			d8[3] = alen & 0x00FF0000;
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			i++;
+-
+-			d8[0] = alen & 0x0000FF00;
+-			d8[1] = alen & 0x000000FF;
+-			d8[2] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-			d8[3] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-
+-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-			i++;
+-
+-			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+-		}
+-	}
++	written = min_t(size_t, AES_BLOCK_SIZE, cryp->header_in);
+ 
+-	/* Write next u32 */
+-	for (; i < AES_BLOCK_32; i++) {
+-		/* Build an u32 */
+-		memset(d8, 0, sizeof(u32));
+-		for (k = 0; k < sizeof(u32); k++) {
+-			d8[k] = *((u8 *)src);
+-			src = stm32_cryp_next_in(cryp, src, 1);
+-
+-			cryp->total_in -= min_t(size_t, 1, cryp->total_in);
+-			if ((cryp->total_in_save - cryp->total_in) == alen)
+-				break;
+-		}
++	scatterwalk_copychunks(block, &cryp->in_walk, written, 0);
++	for (i = 0; i < AES_BLOCK_32; i++)
++		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
+ 
+-		stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+-
+-		if ((cryp->total_in_save - cryp->total_in) == alen) {
+-			/* Write padding if needed */
+-			for (j = i + 1; j < AES_BLOCK_32; j++)
+-				stm32_cryp_write(cryp, CRYP_DIN, 0);
+-
+-			/* Wait for completion */
+-			err = stm32_cryp_wait_busy(cryp);
+-			if (err) {
+-				dev_err(cryp->dev, "Timeout (ccm header)\n");
+-				return stm32_cryp_finish_req(cryp, err);
+-			}
+-
+-			if (stm32_cryp_get_input_text_len(cryp)) {
+-				/* Phase 3 : payload */
+-				cfg = stm32_cryp_read(cryp, CRYP_CR);
+-				cfg &= ~CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-
+-				cfg &= ~CR_PH_MASK;
+-				cfg |= CR_PH_PAYLOAD;
+-				cfg |= CR_CRYPEN;
+-				stm32_cryp_write(cryp, CRYP_CR, cfg);
+-			} else {
+-				/* Phase 4 : tag */
+-				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+-				stm32_cryp_finish_req(cryp, 0);
+-			}
++	cryp->header_in -= written;
+ 
+-			break;
+-		}
+-	}
++	stm32_crypt_gcmccm_end_header(cryp);
+ }
+ 
+ static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
+ {
+ 	struct stm32_cryp *cryp = arg;
+ 	u32 ph;
++	u32 it_mask = stm32_cryp_read(cryp, CRYP_IMSCR);
+ 
+ 	if (cryp->irq_status & MISR_OUT)
+ 		/* Output FIFO IRQ: read data */
+-		if (unlikely(stm32_cryp_irq_read_data(cryp))) {
+-			/* All bytes processed, finish */
+-			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+-			stm32_cryp_finish_req(cryp, 0);
+-			return IRQ_HANDLED;
+-		}
++		stm32_cryp_irq_read_data(cryp);
+ 
+ 	if (cryp->irq_status & MISR_IN) {
+-		if (is_gcm(cryp)) {
++		if (is_gcm(cryp) || is_ccm(cryp)) {
+ 			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+ 			if (unlikely(ph == CR_PH_HEADER))
+ 				/* Write Header */
+-				stm32_cryp_irq_write_gcm_header(cryp);
+-			else
+-				/* Input FIFO IRQ: write data */
+-				stm32_cryp_irq_write_data(cryp);
+-			cryp->gcm_ctr++;
+-		} else if (is_ccm(cryp)) {
+-			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+-			if (unlikely(ph == CR_PH_HEADER))
+-				/* Write Header */
+-				stm32_cryp_irq_write_ccm_header(cryp);
++				stm32_cryp_irq_write_gcmccm_header(cryp);
+ 			else
+ 				/* Input FIFO IRQ: write data */
+ 				stm32_cryp_irq_write_data(cryp);
++			if (is_gcm(cryp))
++				cryp->gcm_ctr++;
+ 		} else {
+ 			/* Input FIFO IRQ: write data */
+ 			stm32_cryp_irq_write_data(cryp);
+ 		}
+ 	}
+ 
++	/* Mask useless interrupts */
++	if (!cryp->payload_in && !cryp->header_in)
++		it_mask &= ~IMSCR_IN;
++	if (!cryp->payload_out)
++		it_mask &= ~IMSCR_OUT;
++	stm32_cryp_write(cryp, CRYP_IMSCR, it_mask);
++
++	if (!cryp->payload_in && !cryp->header_in && !cryp->payload_out)
++		stm32_cryp_finish_req(cryp, 0);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -1742,7 +1548,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= AES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1759,7 +1565,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= AES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1777,7 +1583,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= 1,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1795,7 +1601,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1812,7 +1618,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1830,7 +1636,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1847,7 +1653,7 @@ static struct skcipher_alg crypto_algs[] = {
+ 	.base.cra_flags		= CRYPTO_ALG_ASYNC,
+ 	.base.cra_blocksize	= DES_BLOCK_SIZE,
+ 	.base.cra_ctxsize	= sizeof(struct stm32_cryp_ctx),
+-	.base.cra_alignmask	= 0xf,
++	.base.cra_alignmask	= 0,
+ 	.base.cra_module	= THIS_MODULE,
+ 
+ 	.init			= stm32_cryp_init_tfm,
+@@ -1877,7 +1683,7 @@ static struct aead_alg aead_algs[] = {
+ 		.cra_flags		= CRYPTO_ALG_ASYNC,
+ 		.cra_blocksize		= 1,
+ 		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+-		.cra_alignmask		= 0xf,
++		.cra_alignmask		= 0,
+ 		.cra_module		= THIS_MODULE,
+ 	},
+ },
+@@ -1897,7 +1703,7 @@ static struct aead_alg aead_algs[] = {
+ 		.cra_flags		= CRYPTO_ALG_ASYNC,
+ 		.cra_blocksize		= 1,
+ 		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+-		.cra_alignmask		= 0xf,
++		.cra_alignmask		= 0,
+ 		.cra_module		= THIS_MODULE,
+ 	},
+ },
+@@ -2025,8 +1831,6 @@ err_engine1:
+ 	list_del(&cryp->list);
+ 	spin_unlock(&cryp_list.lock);
+ 
+-	pm_runtime_disable(dev);
+-	pm_runtime_put_noidle(dev);
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_put_noidle(dev);
+ 
+diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
+index 389de9e3302d5..d33006d43f761 100644
+--- a/drivers/crypto/stm32/stm32-hash.c
++++ b/drivers/crypto/stm32/stm32-hash.c
+@@ -813,7 +813,7 @@ static void stm32_hash_finish_req(struct ahash_request *req, int err)
+ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
+ 			      struct stm32_hash_request_ctx *rctx)
+ {
+-	pm_runtime_resume_and_get(hdev->dev);
++	pm_runtime_get_sync(hdev->dev);
+ 
+ 	if (!(HASH_FLAGS_INIT & hdev->flags)) {
+ 		stm32_hash_write(hdev, HASH_CR, HASH_CR_INIT);
+@@ -962,7 +962,7 @@ static int stm32_hash_export(struct ahash_request *req, void *out)
+ 	u32 *preg;
+ 	unsigned int i;
+ 
+-	pm_runtime_resume_and_get(hdev->dev);
++	pm_runtime_get_sync(hdev->dev);
+ 
+ 	while ((stm32_hash_read(hdev, HASH_SR) & HASH_SR_BUSY))
+ 		cpu_relax();
+@@ -1000,7 +1000,7 @@ static int stm32_hash_import(struct ahash_request *req, const void *in)
+ 
+ 	preg = rctx->hw_context;
+ 
+-	pm_runtime_resume_and_get(hdev->dev);
++	pm_runtime_get_sync(hdev->dev);
+ 
+ 	stm32_hash_write(hdev, HASH_IMR, *preg++);
+ 	stm32_hash_write(hdev, HASH_STR, *preg++);
+diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
+index ebd061d039508..46ce58376580b 100644
+--- a/drivers/cxl/core/bus.c
++++ b/drivers/cxl/core/bus.c
+@@ -485,9 +485,7 @@ out_unlock:
+ 
+ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
+ {
+-	struct cxl_decoder *cxld, cxld_const_init = {
+-		.nr_targets = nr_targets,
+-	};
++	struct cxl_decoder *cxld;
+ 	struct device *dev;
+ 	int rc = 0;
+ 
+@@ -497,13 +495,13 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
+ 	cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL);
+ 	if (!cxld)
+ 		return ERR_PTR(-ENOMEM);
+-	memcpy(cxld, &cxld_const_init, sizeof(cxld_const_init));
+ 
+ 	rc = ida_alloc(&port->decoder_ida, GFP_KERNEL);
+ 	if (rc < 0)
+ 		goto err;
+ 
+ 	cxld->id = rc;
++	cxld->nr_targets = nr_targets;
+ 	dev = &cxld->dev;
+ 	device_initialize(dev);
+ 	device_set_pm_not_required(dev);
+diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c
+index 5032f4c1c69d7..ab461bfdfbece 100644
+--- a/drivers/cxl/core/pmem.c
++++ b/drivers/cxl/core/pmem.c
+@@ -51,10 +51,16 @@ struct cxl_nvdimm_bridge *to_cxl_nvdimm_bridge(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(to_cxl_nvdimm_bridge);
+ 
+-__mock int match_nvdimm_bridge(struct device *dev, const void *data)
++bool is_cxl_nvdimm_bridge(struct device *dev)
+ {
+ 	return dev->type == &cxl_nvdimm_bridge_type;
+ }
++EXPORT_SYMBOL_NS_GPL(is_cxl_nvdimm_bridge, CXL);
++
++__mock int match_nvdimm_bridge(struct device *dev, const void *data)
++{
++	return is_cxl_nvdimm_bridge(dev);
++}
+ 
+ struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_nvdimm *cxl_nvd)
+ {
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index 3af704e9b448e..a5a0be3f088be 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -191,11 +191,18 @@ struct cxl_decoder {
+ 	int interleave_granularity;
+ 	enum cxl_decoder_type target_type;
+ 	unsigned long flags;
+-	const int nr_targets;
++	int nr_targets;
+ 	struct cxl_dport *target[];
+ };
+ 
+ 
++/**
++ * enum cxl_nvdimm_brige_state - state machine for managing bus rescans
++ * @CXL_NVB_NEW: Set at bridge create and after cxl_pmem_wq is destroyed
++ * @CXL_NVB_DEAD: Set at brige unregistration to preclude async probing
++ * @CXL_NVB_ONLINE: Target state after successful ->probe()
++ * @CXL_NVB_OFFLINE: Target state after ->remove() or failed ->probe()
++ */
+ enum cxl_nvdimm_brige_state {
+ 	CXL_NVB_NEW,
+ 	CXL_NVB_DEAD,
+@@ -308,6 +315,7 @@ struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
+ 						     struct cxl_port *port);
+ struct cxl_nvdimm *to_cxl_nvdimm(struct device *dev);
+ bool is_cxl_nvdimm(struct device *dev);
++bool is_cxl_nvdimm_bridge(struct device *dev);
+ int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd);
+ struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_nvdimm *cxl_nvd);
+ 
+diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
+index ceb2115981e56..306f032b47988 100644
+--- a/drivers/cxl/pmem.c
++++ b/drivers/cxl/pmem.c
+@@ -266,14 +266,24 @@ static void cxl_nvb_update_state(struct work_struct *work)
+ 	put_device(&cxl_nvb->dev);
+ }
+ 
++static void cxl_nvdimm_bridge_state_work(struct cxl_nvdimm_bridge *cxl_nvb)
++{
++	/*
++	 * Take a reference that the workqueue will drop if new work
++	 * gets queued.
++	 */
++	get_device(&cxl_nvb->dev);
++	if (!queue_work(cxl_pmem_wq, &cxl_nvb->state_work))
++		put_device(&cxl_nvb->dev);
++}
++
+ static void cxl_nvdimm_bridge_remove(struct device *dev)
+ {
+ 	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);
+ 
+ 	if (cxl_nvb->state == CXL_NVB_ONLINE)
+ 		cxl_nvb->state = CXL_NVB_OFFLINE;
+-	if (queue_work(cxl_pmem_wq, &cxl_nvb->state_work))
+-		get_device(&cxl_nvb->dev);
++	cxl_nvdimm_bridge_state_work(cxl_nvb);
+ }
+ 
+ static int cxl_nvdimm_bridge_probe(struct device *dev)
+@@ -294,8 +304,7 @@ static int cxl_nvdimm_bridge_probe(struct device *dev)
+ 	}
+ 
+ 	cxl_nvb->state = CXL_NVB_ONLINE;
+-	if (queue_work(cxl_pmem_wq, &cxl_nvb->state_work))
+-		get_device(&cxl_nvb->dev);
++	cxl_nvdimm_bridge_state_work(cxl_nvb);
+ 
+ 	return 0;
+ }
+@@ -307,6 +316,31 @@ static struct cxl_driver cxl_nvdimm_bridge_driver = {
+ 	.id = CXL_DEVICE_NVDIMM_BRIDGE,
+ };
+ 
++/*
++ * Return all bridges to the CXL_NVB_NEW state to invalidate any
++ * ->state_work referring to the now destroyed cxl_pmem_wq.
++ */
++static int cxl_nvdimm_bridge_reset(struct device *dev, void *data)
++{
++	struct cxl_nvdimm_bridge *cxl_nvb;
++
++	if (!is_cxl_nvdimm_bridge(dev))
++		return 0;
++
++	cxl_nvb = to_cxl_nvdimm_bridge(dev);
++	device_lock(dev);
++	cxl_nvb->state = CXL_NVB_NEW;
++	device_unlock(dev);
++
++	return 0;
++}
++
++static void destroy_cxl_pmem_wq(void)
++{
++	destroy_workqueue(cxl_pmem_wq);
++	bus_for_each_dev(&cxl_bus_type, NULL, NULL, cxl_nvdimm_bridge_reset);
++}
++
+ static __init int cxl_pmem_init(void)
+ {
+ 	int rc;
+@@ -332,7 +366,7 @@ static __init int cxl_pmem_init(void)
+ err_nvdimm:
+ 	cxl_driver_unregister(&cxl_nvdimm_bridge_driver);
+ err_bridge:
+-	destroy_workqueue(cxl_pmem_wq);
++	destroy_cxl_pmem_wq();
+ 	return rc;
+ }
+ 
+@@ -340,7 +374,7 @@ static __exit void cxl_pmem_exit(void)
+ {
+ 	cxl_driver_unregister(&cxl_nvdimm_driver);
+ 	cxl_driver_unregister(&cxl_nvdimm_bridge_driver);
+-	destroy_workqueue(cxl_pmem_wq);
++	destroy_cxl_pmem_wq();
+ }
+ 
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index b882cf8106ea3..e20d0cef10a18 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -63,7 +63,7 @@ static int dax_host_hash(const char *host)
+ 	return hashlen_hash(hashlen_string("DAX", host)) % DAX_HASH_SIZE;
+ }
+ 
+-#ifdef CONFIG_BLOCK
++#if defined(CONFIG_BLOCK) && defined(CONFIG_FS_DAX)
+ #include <linux/blkdev.h>
+ 
+ int bdev_dax_pgoff(struct block_device *bdev, sector_t sector, size_t size,
+@@ -80,7 +80,6 @@ int bdev_dax_pgoff(struct block_device *bdev, sector_t sector, size_t size,
+ }
+ EXPORT_SYMBOL(bdev_dax_pgoff);
+ 
+-#if IS_ENABLED(CONFIG_FS_DAX)
+ /**
+  * dax_get_by_host() - temporary lookup mechanism for filesystem-dax
+  * @host: alternate name for the device registered by a dax driver
+@@ -219,8 +218,7 @@ bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(dax_supported);
+-#endif /* CONFIG_FS_DAX */
+-#endif /* CONFIG_BLOCK */
++#endif /* CONFIG_BLOCK && CONFIG_FS_DAX */
+ 
+ enum dax_device_flags {
+ 	/* !alive + rcu grace period == no new operations / mappings */
+diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
+index d3fbd950be944..3e07f961e2f3d 100644
+--- a/drivers/dma-buf/dma-fence-array.c
++++ b/drivers/dma-buf/dma-fence-array.c
+@@ -104,7 +104,11 @@ static bool dma_fence_array_signaled(struct dma_fence *fence)
+ {
+ 	struct dma_fence_array *array = to_dma_fence_array(fence);
+ 
+-	return atomic_read(&array->num_pending) <= 0;
++	if (atomic_read(&array->num_pending) > 0)
++		return false;
++
++	dma_fence_array_clear_pending_error(array);
++	return true;
+ }
+ 
+ static void dma_fence_array_release(struct dma_fence *fence)
+diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
+index 0c05b79870f96..83f02bd51dda6 100644
+--- a/drivers/dma-buf/heaps/cma_heap.c
++++ b/drivers/dma-buf/heaps/cma_heap.c
+@@ -124,10 +124,11 @@ static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ 	struct cma_heap_buffer *buffer = dmabuf->priv;
+ 	struct dma_heap_attachment *a;
+ 
++	mutex_lock(&buffer->lock);
++
+ 	if (buffer->vmap_cnt)
+ 		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+ 
+-	mutex_lock(&buffer->lock);
+ 	list_for_each_entry(a, &buffer->attachments, list) {
+ 		if (!a->mapped)
+ 			continue;
+@@ -144,10 +145,11 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ 	struct cma_heap_buffer *buffer = dmabuf->priv;
+ 	struct dma_heap_attachment *a;
+ 
++	mutex_lock(&buffer->lock);
++
+ 	if (buffer->vmap_cnt)
+ 		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+ 
+-	mutex_lock(&buffer->lock);
+ 	list_for_each_entry(a, &buffer->attachments, list) {
+ 		if (!a->mapped)
+ 			continue;
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 275a76f188ae7..3d138c0c96deb 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -99,6 +99,7 @@
+ #define		AT_XDMAC_CNDC_NDE		(0x1 << 0)		/* Channel x Next Descriptor Enable */
+ #define		AT_XDMAC_CNDC_NDSUP		(0x1 << 1)		/* Channel x Next Descriptor Source Update */
+ #define		AT_XDMAC_CNDC_NDDUP		(0x1 << 2)		/* Channel x Next Descriptor Destination Update */
++#define		AT_XDMAC_CNDC_NDVIEW_MASK	GENMASK(28, 27)
+ #define		AT_XDMAC_CNDC_NDVIEW_NDV0	(0x0 << 3)		/* Channel x Next Descriptor View 0 */
+ #define		AT_XDMAC_CNDC_NDVIEW_NDV1	(0x1 << 3)		/* Channel x Next Descriptor View 1 */
+ #define		AT_XDMAC_CNDC_NDVIEW_NDV2	(0x2 << 3)		/* Channel x Next Descriptor View 2 */
+@@ -252,15 +253,15 @@ struct at_xdmac {
+ 
+ /* Linked List Descriptor */
+ struct at_xdmac_lld {
+-	dma_addr_t	mbr_nda;	/* Next Descriptor Member */
+-	u32		mbr_ubc;	/* Microblock Control Member */
+-	dma_addr_t	mbr_sa;		/* Source Address Member */
+-	dma_addr_t	mbr_da;		/* Destination Address Member */
+-	u32		mbr_cfg;	/* Configuration Register */
+-	u32		mbr_bc;		/* Block Control Register */
+-	u32		mbr_ds;		/* Data Stride Register */
+-	u32		mbr_sus;	/* Source Microblock Stride Register */
+-	u32		mbr_dus;	/* Destination Microblock Stride Register */
++	u32 mbr_nda;	/* Next Descriptor Member */
++	u32 mbr_ubc;	/* Microblock Control Member */
++	u32 mbr_sa;	/* Source Address Member */
++	u32 mbr_da;	/* Destination Address Member */
++	u32 mbr_cfg;	/* Configuration Register */
++	u32 mbr_bc;	/* Block Control Register */
++	u32 mbr_ds;	/* Data Stride Register */
++	u32 mbr_sus;	/* Source Microblock Stride Register */
++	u32 mbr_dus;	/* Destination Microblock Stride Register */
+ };
+ 
+ /* 64-bit alignment needed to update CNDA and CUBC registers in an atomic way. */
+@@ -385,9 +386,6 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
+ 
+ 	dev_vdbg(chan2dev(&atchan->chan), "%s: desc 0x%p\n", __func__, first);
+ 
+-	if (at_xdmac_chan_is_enabled(atchan))
+-		return;
+-
+ 	/* Set transfer as active to not try to start it again. */
+ 	first->active_xfer = true;
+ 
+@@ -405,7 +403,8 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
+ 	 */
+ 	if (at_xdmac_chan_is_cyclic(atchan))
+ 		reg = AT_XDMAC_CNDC_NDVIEW_NDV1;
+-	else if (first->lld.mbr_ubc & AT_XDMAC_MBR_UBC_NDV3)
++	else if ((first->lld.mbr_ubc &
++		  AT_XDMAC_CNDC_NDVIEW_MASK) == AT_XDMAC_MBR_UBC_NDV3)
+ 		reg = AT_XDMAC_CNDC_NDVIEW_NDV3;
+ 	else
+ 		reg = AT_XDMAC_CNDC_NDVIEW_NDV2;
+@@ -476,13 +475,12 @@ static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	spin_lock_irqsave(&atchan->lock, irqflags);
+ 	cookie = dma_cookie_assign(tx);
+ 
++	list_add_tail(&desc->xfer_node, &atchan->xfers_list);
++	spin_unlock_irqrestore(&atchan->lock, irqflags);
++
+ 	dev_vdbg(chan2dev(tx->chan), "%s: atchan 0x%p, add desc 0x%p to xfers_list\n",
+ 		 __func__, atchan, desc);
+-	list_add_tail(&desc->xfer_node, &atchan->xfers_list);
+-	if (list_is_singular(&atchan->xfers_list))
+-		at_xdmac_start_xfer(atchan, desc);
+ 
+-	spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 	return cookie;
+ }
+ 
+@@ -1623,14 +1621,17 @@ static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
+ 	struct at_xdmac_desc		*desc;
+ 	struct dma_async_tx_descriptor	*txd;
+ 
+-	if (!list_empty(&atchan->xfers_list)) {
+-		desc = list_first_entry(&atchan->xfers_list,
+-					struct at_xdmac_desc, xfer_node);
+-		txd = &desc->tx_dma_desc;
+-
+-		if (txd->flags & DMA_PREP_INTERRUPT)
+-			dmaengine_desc_get_callback_invoke(txd, NULL);
++	spin_lock_irq(&atchan->lock);
++	if (list_empty(&atchan->xfers_list)) {
++		spin_unlock_irq(&atchan->lock);
++		return;
+ 	}
++	desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc,
++				xfer_node);
++	spin_unlock_irq(&atchan->lock);
++	txd = &desc->tx_dma_desc;
++	if (txd->flags & DMA_PREP_INTERRUPT)
++		dmaengine_desc_get_callback_invoke(txd, NULL);
+ }
+ 
+ static void at_xdmac_handle_error(struct at_xdmac_chan *atchan)
+@@ -1784,11 +1785,9 @@ static void at_xdmac_issue_pending(struct dma_chan *chan)
+ 
+ 	dev_dbg(chan2dev(&atchan->chan), "%s\n", __func__);
+ 
+-	if (!at_xdmac_chan_is_cyclic(atchan)) {
+-		spin_lock_irqsave(&atchan->lock, flags);
+-		at_xdmac_advance_work(atchan);
+-		spin_unlock_irqrestore(&atchan->lock, flags);
+-	}
++	spin_lock_irqsave(&atchan->lock, flags);
++	at_xdmac_advance_work(atchan);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return;
+ }
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index fab412349f7fe..cd855097bfdba 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -382,8 +382,6 @@ static void idxd_wq_disable_cleanup(struct idxd_wq *wq)
+ 	lockdep_assert_held(&wq->wq_lock);
+ 	memset(wq->wqcfg, 0, idxd->wqcfg_size);
+ 	wq->type = IDXD_WQT_NONE;
+-	wq->size = 0;
+-	wq->group = NULL;
+ 	wq->threshold = 0;
+ 	wq->priority = 0;
+ 	wq->ats_dis = 0;
+@@ -392,6 +390,15 @@ static void idxd_wq_disable_cleanup(struct idxd_wq *wq)
+ 	memset(wq->name, 0, WQ_NAME_SIZE);
+ }
+ 
++static void idxd_wq_device_reset_cleanup(struct idxd_wq *wq)
++{
++	lockdep_assert_held(&wq->wq_lock);
++
++	idxd_wq_disable_cleanup(wq);
++	wq->size = 0;
++	wq->group = NULL;
++}
++
+ static void idxd_wq_ref_release(struct percpu_ref *ref)
+ {
+ 	struct idxd_wq *wq = container_of(ref, struct idxd_wq, wq_active);
+@@ -699,6 +706,7 @@ static void idxd_device_wqs_clear_state(struct idxd_device *idxd)
+ 
+ 		if (wq->state == IDXD_WQ_ENABLED) {
+ 			idxd_wq_disable_cleanup(wq);
++			idxd_wq_device_reset_cleanup(wq);
+ 			wq->state = IDXD_WQ_DISABLED;
+ 		}
+ 	}
+diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
+index a23563cd118b7..5a53d7fcef018 100644
+--- a/drivers/dma/mmp_pdma.c
++++ b/drivers/dma/mmp_pdma.c
+@@ -727,12 +727,6 @@ static int mmp_pdma_config_write(struct dma_chan *dchan,
+ 
+ 	chan->dir = direction;
+ 	chan->dev_addr = addr;
+-	/* FIXME: drivers should be ported over to use the filter
+-	 * function. Once that's done, the following two lines can
+-	 * be removed.
+-	 */
+-	if (cfg->slave_id)
+-		chan->drcmr = cfg->slave_id;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
+index 52d04641e3611..6078cc81892e4 100644
+--- a/drivers/dma/pxa_dma.c
++++ b/drivers/dma/pxa_dma.c
+@@ -909,13 +909,6 @@ static void pxad_get_config(struct pxad_chan *chan,
+ 		*dcmd |= PXA_DCMD_BURST16;
+ 	else if (maxburst == 32)
+ 		*dcmd |= PXA_DCMD_BURST32;
+-
+-	/* FIXME: drivers should be ported over to use the filter
+-	 * function. Once that's done, the following two lines can
+-	 * be removed.
+-	 */
+-	if (chan->cfg.slave_id)
+-		chan->drcmr = chan->cfg.slave_id;
+ }
+ 
+ static struct dma_async_tx_descriptor *
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index d30a4a28d3bfd..b61a241c9fcd5 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -184,7 +184,7 @@
+ #define STM32_MDMA_CTBR(x)		(0x68 + 0x40 * (x))
+ #define STM32_MDMA_CTBR_DBUS		BIT(17)
+ #define STM32_MDMA_CTBR_SBUS		BIT(16)
+-#define STM32_MDMA_CTBR_TSEL_MASK	GENMASK(7, 0)
++#define STM32_MDMA_CTBR_TSEL_MASK	GENMASK(5, 0)
+ #define STM32_MDMA_CTBR_TSEL(n)		STM32_MDMA_SET(n, \
+ 						      STM32_MDMA_CTBR_TSEL_MASK)
+ 
+diff --git a/drivers/dma/uniphier-xdmac.c b/drivers/dma/uniphier-xdmac.c
+index d6b8a202474f4..290836b7e1be2 100644
+--- a/drivers/dma/uniphier-xdmac.c
++++ b/drivers/dma/uniphier-xdmac.c
+@@ -131,8 +131,9 @@ uniphier_xdmac_next_desc(struct uniphier_xdmac_chan *xc)
+ static void uniphier_xdmac_chan_start(struct uniphier_xdmac_chan *xc,
+ 				      struct uniphier_xdmac_desc *xd)
+ {
+-	u32 src_mode, src_addr, src_width;
+-	u32 dst_mode, dst_addr, dst_width;
++	u32 src_mode, src_width;
++	u32 dst_mode, dst_width;
++	dma_addr_t src_addr, dst_addr;
+ 	u32 val, its, tnum;
+ 	enum dma_slave_buswidth buswidth;
+ 
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index 7d08627e738b3..a5486d86fdd2f 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -1352,8 +1352,7 @@ static int mc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	if (of_device_is_compatible(pdev->dev.of_node,
+-				    "xlnx,zynqmp-ddrc-2.40a"))
++	if (priv->p_data->quirks & DDR_ECC_INTR_SUPPORT)
+ 		setup_address_map(priv);
+ #endif
+ 
+diff --git a/drivers/firmware/efi/efi-init.c b/drivers/firmware/efi/efi-init.c
+index b19ce1a83f91a..b2c829e95bd14 100644
+--- a/drivers/firmware/efi/efi-init.c
++++ b/drivers/firmware/efi/efi-init.c
+@@ -235,6 +235,11 @@ void __init efi_init(void)
+ 	}
+ 
+ 	reserve_regions();
++	/*
++	 * For memblock manipulation, the cap should come after the memblock_add().
++	 * And now, memblock is fully populated, it is time to do capping.
++	 */
++	early_init_dt_check_for_usable_mem_range();
+ 	efi_esrt_init();
+ 	efi_mokvar_table_init();
+ 
+diff --git a/drivers/firmware/google/Kconfig b/drivers/firmware/google/Kconfig
+index 97968aece54f8..931544c9f63d4 100644
+--- a/drivers/firmware/google/Kconfig
++++ b/drivers/firmware/google/Kconfig
+@@ -3,9 +3,9 @@ menuconfig GOOGLE_FIRMWARE
+ 	bool "Google Firmware Drivers"
+ 	default n
+ 	help
+-	  These firmware drivers are used by Google's servers.  They are
+-	  only useful if you are working directly on one of their
+-	  proprietary servers.  If in doubt, say "N".
++	  These firmware drivers are used by Google servers,
++	  Chromebooks and other devices using coreboot firmware.
++	  If in doubt, say "N".
+ 
+ if GOOGLE_FIRMWARE
+ 
+diff --git a/drivers/firmware/sysfb_simplefb.c b/drivers/firmware/sysfb_simplefb.c
+index b86761904949c..303a491e520d1 100644
+--- a/drivers/firmware/sysfb_simplefb.c
++++ b/drivers/firmware/sysfb_simplefb.c
+@@ -113,12 +113,16 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
+ 	sysfb_apply_efi_quirks(pd);
+ 
+ 	ret = platform_device_add_resources(pd, &res, 1);
+-	if (ret)
++	if (ret) {
++		platform_device_put(pd);
+ 		return ret;
++	}
+ 
+ 	ret = platform_device_add_data(pd, mode, sizeof(*mode));
+-	if (ret)
++	if (ret) {
++		platform_device_put(pd);
+ 		return ret;
++	}
+ 
+ 	return platform_device_add(pd);
+ }
+diff --git a/drivers/gpio/gpio-aspeed-sgpio.c b/drivers/gpio/gpio-aspeed-sgpio.c
+index b3a9b8488f11d..454cefbeecf0e 100644
+--- a/drivers/gpio/gpio-aspeed-sgpio.c
++++ b/drivers/gpio/gpio-aspeed-sgpio.c
+@@ -31,7 +31,7 @@ struct aspeed_sgpio {
+ 	struct gpio_chip chip;
+ 	struct irq_chip intc;
+ 	struct clk *pclk;
+-	spinlock_t lock;
++	raw_spinlock_t lock;
+ 	void __iomem *base;
+ 	int irq;
+ };
+@@ -173,12 +173,12 @@ static int aspeed_sgpio_get(struct gpio_chip *gc, unsigned int offset)
+ 	enum aspeed_sgpio_reg reg;
+ 	int rc = 0;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	reg = aspeed_sgpio_is_input(offset) ? reg_val : reg_rdata;
+ 	rc = !!(ioread32(bank_reg(gpio, bank, reg)) & GPIO_BIT(offset));
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return rc;
+ }
+@@ -215,11 +215,11 @@ static void aspeed_sgpio_set(struct gpio_chip *gc, unsigned int offset, int val)
+ 	struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	sgpio_set_value(gc, offset, val);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static int aspeed_sgpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+@@ -236,9 +236,9 @@ static int aspeed_sgpio_dir_out(struct gpio_chip *gc, unsigned int offset, int v
+ 	/* No special action is required for setting the direction; we'll
+ 	 * error-out in sgpio_set_value if this isn't an output GPIO */
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	rc = sgpio_set_value(gc, offset, val);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return rc;
+ }
+@@ -277,11 +277,11 @@ static void aspeed_sgpio_irq_ack(struct irq_data *d)
+ 
+ 	status_addr = bank_reg(gpio, bank, reg_irq_status);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	iowrite32(bit, status_addr);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static void aspeed_sgpio_irq_set_mask(struct irq_data *d, bool set)
+@@ -296,7 +296,7 @@ static void aspeed_sgpio_irq_set_mask(struct irq_data *d, bool set)
+ 	irqd_to_aspeed_sgpio_data(d, &gpio, &bank, &bit, &offset);
+ 	addr = bank_reg(gpio, bank, reg_irq_enable);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	reg = ioread32(addr);
+ 	if (set)
+@@ -306,7 +306,7 @@ static void aspeed_sgpio_irq_set_mask(struct irq_data *d, bool set)
+ 
+ 	iowrite32(reg, addr);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static void aspeed_sgpio_irq_mask(struct irq_data *d)
+@@ -355,7 +355,7 @@ static int aspeed_sgpio_set_type(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	addr = bank_reg(gpio, bank, reg_irq_type0);
+ 	reg = ioread32(addr);
+@@ -372,7 +372,7 @@ static int aspeed_sgpio_set_type(struct irq_data *d, unsigned int type)
+ 	reg = (reg & ~bit) | type2;
+ 	iowrite32(reg, addr);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	irq_set_handler_locked(d, handler);
+ 
+@@ -467,7 +467,7 @@ static int aspeed_sgpio_reset_tolerance(struct gpio_chip *chip,
+ 
+ 	reg = bank_reg(gpio, to_bank(offset), reg_tolerance);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	val = readl(reg);
+ 
+@@ -478,7 +478,7 @@ static int aspeed_sgpio_reset_tolerance(struct gpio_chip *chip,
+ 
+ 	writel(val, reg);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -575,7 +575,7 @@ static int __init aspeed_sgpio_probe(struct platform_device *pdev)
+ 	iowrite32(FIELD_PREP(ASPEED_SGPIO_CLK_DIV_MASK, sgpio_clk_div) | gpio_cnt_regval |
+ 		  ASPEED_SGPIO_ENABLE, gpio->base + ASPEED_SGPIO_CTRL);
+ 
+-	spin_lock_init(&gpio->lock);
++	raw_spin_lock_init(&gpio->lock);
+ 
+ 	gpio->chip.parent = &pdev->dev;
+ 	gpio->chip.ngpio = nr_gpios * 2;
+diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c
+index 3c8f20c57695f..318a7d95a1a8b 100644
+--- a/drivers/gpio/gpio-aspeed.c
++++ b/drivers/gpio/gpio-aspeed.c
+@@ -53,7 +53,7 @@ struct aspeed_gpio_config {
+ struct aspeed_gpio {
+ 	struct gpio_chip chip;
+ 	struct irq_chip irqc;
+-	spinlock_t lock;
++	raw_spinlock_t lock;
+ 	void __iomem *base;
+ 	int irq;
+ 	const struct aspeed_gpio_config *config;
+@@ -413,14 +413,14 @@ static void aspeed_gpio_set(struct gpio_chip *gc, unsigned int offset,
+ 	unsigned long flags;
+ 	bool copro;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	__aspeed_gpio_set(gc, offset, val);
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+@@ -435,7 +435,7 @@ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+ 	if (!have_input(gpio, offset))
+ 		return -ENOTSUPP;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	reg = ioread32(addr);
+ 	reg &= ~GPIO_BIT(offset);
+@@ -445,7 +445,7 @@ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -463,7 +463,7 @@ static int aspeed_gpio_dir_out(struct gpio_chip *gc,
+ 	if (!have_output(gpio, offset))
+ 		return -ENOTSUPP;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	reg = ioread32(addr);
+ 	reg |= GPIO_BIT(offset);
+@@ -474,7 +474,7 @@ static int aspeed_gpio_dir_out(struct gpio_chip *gc,
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -492,11 +492,11 @@ static int aspeed_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+ 	if (!have_output(gpio, offset))
+ 		return GPIO_LINE_DIRECTION_IN;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	val = ioread32(bank_reg(gpio, bank, reg_dir)) & GPIO_BIT(offset);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return val ? GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN;
+ }
+@@ -539,14 +539,14 @@ static void aspeed_gpio_irq_ack(struct irq_data *d)
+ 
+ 	status_addr = bank_reg(gpio, bank, reg_irq_status);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	iowrite32(bit, status_addr);
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
+@@ -565,7 +565,7 @@ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
+ 
+ 	addr = bank_reg(gpio, bank, reg_irq_enable);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	reg = ioread32(addr);
+@@ -577,7 +577,7 @@ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ }
+ 
+ static void aspeed_gpio_irq_mask(struct irq_data *d)
+@@ -629,7 +629,7 @@ static int aspeed_gpio_set_type(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	addr = bank_reg(gpio, bank, reg_irq_type0);
+@@ -649,7 +649,7 @@ static int aspeed_gpio_set_type(struct irq_data *d, unsigned int type)
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	irq_set_handler_locked(d, handler);
+ 
+@@ -716,7 +716,7 @@ static int aspeed_gpio_reset_tolerance(struct gpio_chip *chip,
+ 
+ 	treg = bank_reg(gpio, to_bank(offset), reg_tolerance);
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 	copro = aspeed_gpio_copro_request(gpio, offset);
+ 
+ 	val = readl(treg);
+@@ -730,7 +730,7 @@ static int aspeed_gpio_reset_tolerance(struct gpio_chip *chip,
+ 
+ 	if (copro)
+ 		aspeed_gpio_copro_release(gpio, offset);
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -856,7 +856,7 @@ static int enable_debounce(struct gpio_chip *chip, unsigned int offset,
+ 		return rc;
+ 	}
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	if (timer_allocation_registered(gpio, offset)) {
+ 		rc = unregister_allocated_timer(gpio, offset);
+@@ -916,7 +916,7 @@ static int enable_debounce(struct gpio_chip *chip, unsigned int offset,
+ 	configure_timer(gpio, offset, i);
+ 
+ out:
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return rc;
+ }
+@@ -927,13 +927,13 @@ static int disable_debounce(struct gpio_chip *chip, unsigned int offset)
+ 	unsigned long flags;
+ 	int rc;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	rc = unregister_allocated_timer(gpio, offset);
+ 	if (!rc)
+ 		configure_timer(gpio, offset, 0);
+ 
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 
+ 	return rc;
+ }
+@@ -1015,7 +1015,7 @@ int aspeed_gpio_copro_grab_gpio(struct gpio_desc *desc,
+ 		return -EINVAL;
+ 	bindex = offset >> 3;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	/* Sanity check, this shouldn't happen */
+ 	if (gpio->cf_copro_bankmap[bindex] == 0xff) {
+@@ -1036,7 +1036,7 @@ int aspeed_gpio_copro_grab_gpio(struct gpio_desc *desc,
+ 	if (bit)
+ 		*bit = GPIO_OFFSET(offset);
+  bail:
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(aspeed_gpio_copro_grab_gpio);
+@@ -1060,7 +1060,7 @@ int aspeed_gpio_copro_release_gpio(struct gpio_desc *desc)
+ 		return -EINVAL;
+ 	bindex = offset >> 3;
+ 
+-	spin_lock_irqsave(&gpio->lock, flags);
++	raw_spin_lock_irqsave(&gpio->lock, flags);
+ 
+ 	/* Sanity check, this shouldn't happen */
+ 	if (gpio->cf_copro_bankmap[bindex] == 0) {
+@@ -1074,7 +1074,7 @@ int aspeed_gpio_copro_release_gpio(struct gpio_desc *desc)
+ 		aspeed_gpio_change_cmd_source(gpio, bank, bindex,
+ 					      GPIO_CMDSRC_ARM);
+  bail:
+-	spin_unlock_irqrestore(&gpio->lock, flags);
++	raw_spin_unlock_irqrestore(&gpio->lock, flags);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(aspeed_gpio_copro_release_gpio);
+@@ -1148,7 +1148,7 @@ static int __init aspeed_gpio_probe(struct platform_device *pdev)
+ 	if (IS_ERR(gpio->base))
+ 		return PTR_ERR(gpio->base);
+ 
+-	spin_lock_init(&gpio->lock);
++	raw_spin_lock_init(&gpio->lock);
+ 
+ 	gpio_id = of_match_node(aspeed_gpio_of_table, pdev->dev.of_node);
+ 	if (!gpio_id)
+diff --git a/drivers/gpio/gpio-idt3243x.c b/drivers/gpio/gpio-idt3243x.c
+index 50003ad2e5898..08493b05be2da 100644
+--- a/drivers/gpio/gpio-idt3243x.c
++++ b/drivers/gpio/gpio-idt3243x.c
+@@ -164,8 +164,8 @@ static int idt_gpio_probe(struct platform_device *pdev)
+ 			return PTR_ERR(ctrl->pic);
+ 
+ 		parent_irq = platform_get_irq(pdev, 0);
+-		if (!parent_irq)
+-			return -EINVAL;
++		if (parent_irq < 0)
++			return parent_irq;
+ 
+ 		girq = &ctrl->gc.irq;
+ 		girq->chip = &idt_gpio_irqchip;
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 70d6ae20b1da5..01634c8d27b38 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -388,8 +388,8 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	mpc8xxx_gc->irqn = platform_get_irq(pdev, 0);
+-	if (!mpc8xxx_gc->irqn)
+-		return 0;
++	if (mpc8xxx_gc->irqn < 0)
++		return mpc8xxx_gc->irqn;
+ 
+ 	mpc8xxx_gc->irq = irq_domain_create_linear(fwnode,
+ 						   MPC8XXX_GPIO_PINS,
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 985e8589c58ba..feb8157d2d672 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -1056,10 +1056,17 @@ int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int ind
+ 			irq_flags = acpi_dev_get_irq_type(info.triggering,
+ 							  info.polarity);
+ 
+-			/* Set type if specified and different than the current one */
+-			if (irq_flags != IRQ_TYPE_NONE &&
+-			    irq_flags != irq_get_trigger_type(irq))
+-				irq_set_irq_type(irq, irq_flags);
++			/*
++			 * If the IRQ is not already in use then set type
++			 * if specified and different than the current one.
++			 */
++			if (can_request_irq(irq, irq_flags)) {
++				if (irq_flags != IRQ_TYPE_NONE &&
++				    irq_flags != irq_get_trigger_type(irq))
++					irq_set_irq_type(irq, irq_flags);
++			} else {
++				dev_dbg(&adev->dev, "IRQ %d already in use\n", irq);
++			}
+ 
+ 			return irq;
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index 0de66f59adb8a..df1f9b88a53f9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -387,6 +387,9 @@ amdgpu_connector_lcd_native_mode(struct drm_encoder *encoder)
+ 	    native_mode->vdisplay != 0 &&
+ 	    native_mode->clock != 0) {
+ 		mode = drm_mode_duplicate(dev, native_mode);
++		if (!mode)
++			return NULL;
++
+ 		mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ 		drm_mode_set_name(mode);
+ 
+@@ -401,6 +404,9 @@ amdgpu_connector_lcd_native_mode(struct drm_encoder *encoder)
+ 		 * simpler.
+ 		 */
+ 		mode = drm_cvt_mode(dev, native_mode->hdisplay, native_mode->vdisplay, 60, true, false, false);
++		if (!mode)
++			return NULL;
++
+ 		mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ 		DRM_DEBUG_KMS("Adding cvt approximation of native panel mode %s\n", mode->name);
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 99370bdd8c5b4..fab8faf345604 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1928,7 +1928,7 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ 			return -ENODEV;
+ 	}
+ 
+-	if (flags == 0) {
++	if (flags == CHIP_IP_DISCOVERY) {
+ 		DRM_INFO("Unsupported asic.  Remove me when IP discovery init is in place.\n");
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+index cc2e0c9cfe0a1..4f3c62adccbde 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+@@ -333,7 +333,6 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
+ 	if (!amdgpu_device_has_dc_support(adev)) {
+ 		if (!adev->enable_virtual_display)
+ 			/* Disable vblank IRQs aggressively for power-saving */
+-			/* XXX: can this be enabled for DC? */
+ 			adev_to_drm(adev)->vblank_disable_immediate = true;
+ 
+ 		r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index c641f84649d6b..d011ae7e50a54 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -2017,12 +2017,16 @@ static int psp_hw_start(struct psp_context *psp)
+ 		return ret;
+ 	}
+ 
++	if (amdgpu_sriov_vf(adev) && amdgpu_in_reset(adev))
++		goto skip_pin_bo;
++
+ 	ret = psp_tmr_init(psp);
+ 	if (ret) {
+ 		DRM_ERROR("PSP tmr init failed!\n");
+ 		return ret;
+ 	}
+ 
++skip_pin_bo:
+ 	/*
+ 	 * For ASICs with DF Cstate management centralized
+ 	 * to PMFW, TMR setup should be performed after PMFW
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 08133de21fdd6..26b7a4a0b44b7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -867,9 +867,9 @@ static int amdgpu_ras_enable_all_features(struct amdgpu_device *adev,
+ /* feature ctl end */
+ 
+ 
+-void amdgpu_ras_mca_query_error_status(struct amdgpu_device *adev,
+-				       struct ras_common_if *ras_block,
+-				       struct ras_err_data  *err_data)
++static void amdgpu_ras_mca_query_error_status(struct amdgpu_device *adev,
++					      struct ras_common_if *ras_block,
++					      struct ras_err_data  *err_data)
+ {
+ 	switch (ras_block->sub_block_index) {
+ 	case AMDGPU_RAS_MCA_BLOCK__MP0:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+index ac9a8cd21c4b6..7d58bf410be05 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+@@ -142,15 +142,16 @@ static void amdgpu_vkms_crtc_atomic_disable(struct drm_crtc *crtc,
+ static void amdgpu_vkms_crtc_atomic_flush(struct drm_crtc *crtc,
+ 					  struct drm_atomic_state *state)
+ {
++	unsigned long flags;
+ 	if (crtc->state->event) {
+-		spin_lock(&crtc->dev->event_lock);
++		spin_lock_irqsave(&crtc->dev->event_lock, flags);
+ 
+ 		if (drm_crtc_vblank_get(crtc) != 0)
+ 			drm_crtc_send_vblank_event(crtc, crtc->state->event);
+ 		else
+ 			drm_crtc_arm_vblank_event(crtc, crtc->state->event);
+ 
+-		spin_unlock(&crtc->dev->event_lock);
++		spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+ 
+ 		crtc->state->event = NULL;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
+index 54f28c075f214..f10ce740a29cc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/cik.c
++++ b/drivers/gpu/drm/amd/amdgpu/cik.c
+@@ -1428,6 +1428,10 @@ static int cik_asic_reset(struct amdgpu_device *adev)
+ {
+ 	int r;
+ 
++	/* APUs don't have full asic reset */
++	if (adev->flags & AMD_IS_APU)
++		return 0;
++
+ 	if (cik_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
+ 		dev_info(adev->dev, "BACO reset\n");
+ 		r = amdgpu_dpm_baco_reset(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index 3ec5ff5a6dbe6..61ec6145bbb16 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -992,10 +992,14 @@ static int gmc_v10_0_gart_enable(struct amdgpu_device *adev)
+ 		return -EINVAL;
+ 	}
+ 
++	if (amdgpu_sriov_vf(adev) && amdgpu_in_reset(adev))
++		goto skip_pin_bo;
++
+ 	r = amdgpu_gart_table_vram_pin(adev);
+ 	if (r)
+ 		return r;
+ 
++skip_pin_bo:
+ 	r = adev->gfxhub.funcs->gart_enable(adev);
+ 	if (r)
+ 		return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+index 492ebed2915be..63b890f1e8afb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+@@ -515,10 +515,10 @@ static void gmc_v8_0_mc_program(struct amdgpu_device *adev)
+ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
+ {
+ 	int r;
++	u32 tmp;
+ 
+ 	adev->gmc.vram_width = amdgpu_atombios_get_vram_width(adev);
+ 	if (!adev->gmc.vram_width) {
+-		u32 tmp;
+ 		int chansize, numchan;
+ 
+ 		/* Get VRAM informations */
+@@ -562,8 +562,15 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
+ 		adev->gmc.vram_width = numchan * chansize;
+ 	}
+ 	/* size in MB on si */
+-	adev->gmc.mc_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
+-	adev->gmc.real_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
++	tmp = RREG32(mmCONFIG_MEMSIZE);
++	/* some boards may have garbage in the upper 16 bits */
++	if (tmp & 0xffff0000) {
++		DRM_INFO("Probable bad vram size: 0x%08x\n", tmp);
++		if (tmp & 0xffff)
++			tmp &= 0xffff;
++	}
++	adev->gmc.mc_vram_size = tmp * 1024ULL * 1024ULL;
++	adev->gmc.real_vram_size = adev->gmc.mc_vram_size;
+ 
+ 	if (!(adev->flags & AMD_IS_APU)) {
+ 		r = amdgpu_device_resize_fb_bar(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index d84523cf5f759..6866b40b6f04a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -72,6 +72,9 @@
+ #define mmDCHUBBUB_SDPIF_MMIO_CNTRL_0                                                                  0x049d
+ #define mmDCHUBBUB_SDPIF_MMIO_CNTRL_0_BASE_IDX                                                         2
+ 
++#define mmHUBP0_DCSURF_PRI_VIEWPORT_DIMENSION_DCN2                                                          0x05ea
++#define mmHUBP0_DCSURF_PRI_VIEWPORT_DIMENSION_DCN2_BASE_IDX                                                 2
++
+ 
+ static const char *gfxhub_client_ids[] = {
+ 	"CB",
+@@ -1105,6 +1108,8 @@ static unsigned gmc_v9_0_get_vbios_fb_size(struct amdgpu_device *adev)
+ 	u32 d1vga_control = RREG32_SOC15(DCE, 0, mmD1VGA_CONTROL);
+ 	unsigned size;
+ 
++	/* TODO move to DC so GMC doesn't need to hard-code DCN registers */
++
+ 	if (REG_GET_FIELD(d1vga_control, D1VGA_CONTROL, D1VGA_MODE_ENABLE)) {
+ 		size = AMDGPU_VBIOS_VGA_ALLOCATION;
+ 	} else {
+@@ -1113,7 +1118,6 @@ static unsigned gmc_v9_0_get_vbios_fb_size(struct amdgpu_device *adev)
+ 		switch (adev->ip_versions[DCE_HWIP][0]) {
+ 		case IP_VERSION(1, 0, 0):
+ 		case IP_VERSION(1, 0, 1):
+-		case IP_VERSION(2, 1, 0):
+ 			viewport = RREG32_SOC15(DCE, 0, mmHUBP0_DCSURF_PRI_VIEWPORT_DIMENSION);
+ 			size = (REG_GET_FIELD(viewport,
+ 					      HUBP0_DCSURF_PRI_VIEWPORT_DIMENSION, PRI_VIEWPORT_HEIGHT) *
+@@ -1121,6 +1125,14 @@ static unsigned gmc_v9_0_get_vbios_fb_size(struct amdgpu_device *adev)
+ 					      HUBP0_DCSURF_PRI_VIEWPORT_DIMENSION, PRI_VIEWPORT_WIDTH) *
+ 				4);
+ 			break;
++		case IP_VERSION(2, 1, 0):
++			viewport = RREG32_SOC15(DCE, 0, mmHUBP0_DCSURF_PRI_VIEWPORT_DIMENSION_DCN2);
++			size = (REG_GET_FIELD(viewport,
++					      HUBP0_DCSURF_PRI_VIEWPORT_DIMENSION, PRI_VIEWPORT_HEIGHT) *
++				REG_GET_FIELD(viewport,
++					      HUBP0_DCSURF_PRI_VIEWPORT_DIMENSION, PRI_VIEWPORT_WIDTH) *
++				4);
++			break;
+ 		default:
+ 			viewport = RREG32_SOC15(DCE, 0, mmSCL0_VIEWPORT_SIZE);
+ 			size = (REG_GET_FIELD(viewport, SCL0_VIEWPORT_SIZE, VIEWPORT_HEIGHT) *
+@@ -1714,10 +1726,14 @@ static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
+ 		return -EINVAL;
+ 	}
+ 
++	if (amdgpu_sriov_vf(adev) && amdgpu_in_reset(adev))
++		goto skip_pin_bo;
++
+ 	r = amdgpu_gart_table_vram_pin(adev);
+ 	if (r)
+ 		return r;
+ 
++skip_pin_bo:
+ 	r = adev->gfxhub.funcs->gart_enable(adev);
+ 	if (r)
+ 		return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index fe9a7cc8d9eb0..6645ebbd2696c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -956,6 +956,10 @@ static int vi_asic_reset(struct amdgpu_device *adev)
+ {
+ 	int r;
+ 
++	/* APUs don't have full asic reset */
++	if (adev->flags & AMD_IS_APU)
++		return 0;
++
+ 	if (vi_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
+ 		dev_info(adev->dev, "BACO reset\n");
+ 		r = amdgpu_dpm_baco_reset(adev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index cfedfb1e8596c..c33d689f29e8e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1060,6 +1060,9 @@ static int kfd_parse_subtype_iolink(struct crat_subtype_iolink *iolink,
+ 			return -ENODEV;
+ 		/* same everything but the other direction */
+ 		props2 = kmemdup(props, sizeof(*props2), GFP_KERNEL);
++		if (!props2)
++			return -ENOMEM;
++
+ 		props2->node_from = id_to;
+ 		props2->node_to = id_from;
+ 		props2->kobj = NULL;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 3cb4681c5f539..c0b8f4ff80b8a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -943,7 +943,7 @@ svm_range_split(struct svm_range *prange, uint64_t start, uint64_t last,
+ }
+ 
+ static int
+-svm_range_split_tail(struct svm_range *prange, struct svm_range *new,
++svm_range_split_tail(struct svm_range *prange,
+ 		     uint64_t new_last, struct list_head *insert_list)
+ {
+ 	struct svm_range *tail;
+@@ -955,7 +955,7 @@ svm_range_split_tail(struct svm_range *prange, struct svm_range *new,
+ }
+ 
+ static int
+-svm_range_split_head(struct svm_range *prange, struct svm_range *new,
++svm_range_split_head(struct svm_range *prange,
+ 		     uint64_t new_start, struct list_head *insert_list)
+ {
+ 	struct svm_range *head;
+@@ -1764,49 +1764,54 @@ static struct svm_range *svm_range_clone(struct svm_range *old)
+ }
+ 
+ /**
+- * svm_range_handle_overlap - split overlap ranges
+- * @svms: svm range list header
+- * @new: range added with this attributes
+- * @start: range added start address, in pages
+- * @last: range last address, in pages
+- * @update_list: output, the ranges attributes are updated. For set_attr, this
+- *               will do validation and map to GPUs. For unmap, this will be
+- *               removed and unmap from GPUs
+- * @insert_list: output, the ranges will be inserted into svms, attributes are
+- *               not changes. For set_attr, this will add into svms.
+- * @remove_list:output, the ranges will be removed from svms
+- * @left: the remaining range after overlap, For set_attr, this will be added
+- *        as new range.
++ * svm_range_add - add svm range and handle overlap
++ * @p: the range add to this process svms
++ * @start: page size aligned
++ * @size: page size aligned
++ * @nattr: number of attributes
++ * @attrs: array of attributes
++ * @update_list: output, the ranges need validate and update GPU mapping
++ * @insert_list: output, the ranges need insert to svms
++ * @remove_list: output, the ranges are replaced and need remove from svms
+  *
+- * Total have 5 overlap cases.
++ * Check if the virtual address range has overlap with any existing ranges,
++ * split partly overlapping ranges and add new ranges in the gaps. All changes
++ * should be applied to the range_list and interval tree transactionally. If
++ * any range split or allocation fails, the entire update fails. Therefore any
++ * existing overlapping svm_ranges are cloned and the original svm_ranges left
++ * unchanged.
+  *
+- * This function handles overlap of an address interval with existing
+- * struct svm_ranges for applying new attributes. This may require
+- * splitting existing struct svm_ranges. All changes should be applied to
+- * the range_list and interval tree transactionally. If any split operation
+- * fails, the entire update fails. Therefore the existing overlapping
+- * svm_ranges are cloned and the original svm_ranges left unchanged. If the
+- * transaction succeeds, the modified clones are added and the originals
+- * freed. Otherwise the clones are removed and the old svm_ranges remain.
++ * If the transaction succeeds, the caller can update and insert clones and
++ * new ranges, then free the originals.
+  *
+- * Context: The caller must hold svms->lock
++ * Otherwise the caller can free the clones and new ranges, while the old
++ * svm_ranges remain unchanged.
++ *
++ * Context: Process context, caller must hold svms->lock
++ *
++ * Return:
++ * 0 - OK, otherwise error code
+  */
+ static int
+-svm_range_handle_overlap(struct svm_range_list *svms, struct svm_range *new,
+-			 unsigned long start, unsigned long last,
+-			 struct list_head *update_list,
+-			 struct list_head *insert_list,
+-			 struct list_head *remove_list,
+-			 unsigned long *left)
++svm_range_add(struct kfd_process *p, uint64_t start, uint64_t size,
++	      uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs,
++	      struct list_head *update_list, struct list_head *insert_list,
++	      struct list_head *remove_list)
+ {
++	unsigned long last = start + size - 1UL;
++	struct svm_range_list *svms = &p->svms;
+ 	struct interval_tree_node *node;
++	struct svm_range new = {0};
+ 	struct svm_range *prange;
+ 	struct svm_range *tmp;
+ 	int r = 0;
+ 
++	pr_debug("svms 0x%p [0x%llx 0x%lx]\n", &p->svms, start, last);
++
+ 	INIT_LIST_HEAD(update_list);
+ 	INIT_LIST_HEAD(insert_list);
+ 	INIT_LIST_HEAD(remove_list);
++	svm_range_apply_attrs(p, &new, nattr, attrs);
+ 
+ 	node = interval_tree_iter_first(&svms->objects, start, last);
+ 	while (node) {
+@@ -1834,14 +1839,14 @@ svm_range_handle_overlap(struct svm_range_list *svms, struct svm_range *new,
+ 
+ 			if (node->start < start) {
+ 				pr_debug("change old range start\n");
+-				r = svm_range_split_head(prange, new, start,
++				r = svm_range_split_head(prange, start,
+ 							 insert_list);
+ 				if (r)
+ 					goto out;
+ 			}
+ 			if (node->last > last) {
+ 				pr_debug("change old range last\n");
+-				r = svm_range_split_tail(prange, new, last,
++				r = svm_range_split_tail(prange, last,
+ 							 insert_list);
+ 				if (r)
+ 					goto out;
+@@ -1853,7 +1858,7 @@ svm_range_handle_overlap(struct svm_range_list *svms, struct svm_range *new,
+ 			prange = old;
+ 		}
+ 
+-		if (!svm_range_is_same_attrs(prange, new))
++		if (!svm_range_is_same_attrs(prange, &new))
+ 			list_add(&prange->update_list, update_list);
+ 
+ 		/* insert a new node if needed */
+@@ -1873,8 +1878,16 @@ svm_range_handle_overlap(struct svm_range_list *svms, struct svm_range *new,
+ 		start = next_start;
+ 	}
+ 
+-	if (left && start <= last)
+-		*left = last - start + 1;
++	/* add a final range at the end if needed */
++	if (start <= last) {
++		prange = svm_range_new(svms, start, last);
++		if (!prange) {
++			r = -ENOMEM;
++			goto out;
++		}
++		list_add(&prange->insert_list, insert_list);
++		list_add(&prange->update_list, update_list);
++	}
+ 
+ out:
+ 	if (r)
+@@ -2894,59 +2907,6 @@ svm_range_is_valid(struct kfd_process *p, uint64_t start, uint64_t size)
+ 				  NULL);
+ }
+ 
+-/**
+- * svm_range_add - add svm range and handle overlap
+- * @p: the range add to this process svms
+- * @start: page size aligned
+- * @size: page size aligned
+- * @nattr: number of attributes
+- * @attrs: array of attributes
+- * @update_list: output, the ranges need validate and update GPU mapping
+- * @insert_list: output, the ranges need insert to svms
+- * @remove_list: output, the ranges are replaced and need remove from svms
+- *
+- * Check if the virtual address range has overlap with the registered ranges,
+- * split the overlapped range, copy and adjust pages address and vram nodes in
+- * old and new ranges.
+- *
+- * Context: Process context, caller must hold svms->lock
+- *
+- * Return:
+- * 0 - OK, otherwise error code
+- */
+-static int
+-svm_range_add(struct kfd_process *p, uint64_t start, uint64_t size,
+-	      uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs,
+-	      struct list_head *update_list, struct list_head *insert_list,
+-	      struct list_head *remove_list)
+-{
+-	uint64_t last = start + size - 1UL;
+-	struct svm_range_list *svms;
+-	struct svm_range new = {0};
+-	struct svm_range *prange;
+-	unsigned long left = 0;
+-	int r = 0;
+-
+-	pr_debug("svms 0x%p [0x%llx 0x%llx]\n", &p->svms, start, last);
+-
+-	svm_range_apply_attrs(p, &new, nattr, attrs);
+-
+-	svms = &p->svms;
+-
+-	r = svm_range_handle_overlap(svms, &new, start, last, update_list,
+-				     insert_list, remove_list, &left);
+-	if (r)
+-		return r;
+-
+-	if (left) {
+-		prange = svm_range_new(svms, last - left + 1, last);
+-		list_add(&prange->insert_list, insert_list);
+-		list_add(&prange->update_list, update_list);
+-	}
+-
+-	return 0;
+-}
+-
+ /**
+  * svm_range_best_prefetch_location - decide the best prefetch location
+  * @prange: svm range structure
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 05f7ffd6a28da..efcb25ef1809a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -656,7 +656,7 @@ void dmub_hpd_callback(struct amdgpu_device *adev, struct dmub_notification *not
+ 	struct drm_connector_list_iter iter;
+ 	struct dc_link *link;
+ 	uint8_t link_index = 0;
+-	struct drm_device *dev = adev->dm.ddev;
++	struct drm_device *dev;
+ 
+ 	if (adev == NULL)
+ 		return;
+@@ -673,6 +673,7 @@ void dmub_hpd_callback(struct amdgpu_device *adev, struct dmub_notification *not
+ 
+ 	link_index = notify->link_index;
+ 	link = adev->dm.dc->links[link_index];
++	dev = adev->dm.ddev;
+ 
+ 	drm_connector_list_iter_begin(dev, &iter);
+ 	drm_for_each_connector_iter(connector, &iter) {
+@@ -4281,6 +4282,14 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 
+ 	}
+ 
++	/*
++	 * Disable vblank IRQs aggressively for power-saving.
++	 *
++	 * TODO: Fix vblank control helpers to delay PSR entry to allow this when PSR
++	 * is also supported.
++	 */
++	adev_to_drm(adev)->vblank_disable_immediate = !psr_feature_enabled;
++
+ 	/* Software is initialized. Now we can register interrupt handlers. */
+ 	switch (adev->asic_type) {
+ #if defined(CONFIG_DRM_AMD_DC_SI)
+@@ -10659,6 +10668,24 @@ static int dm_update_plane_state(struct dc *dc,
+ 	return ret;
+ }
+ 
++static void dm_get_oriented_plane_size(struct drm_plane_state *plane_state,
++				       int *src_w, int *src_h)
++{
++	switch (plane_state->rotation & DRM_MODE_ROTATE_MASK) {
++	case DRM_MODE_ROTATE_90:
++	case DRM_MODE_ROTATE_270:
++		*src_w = plane_state->src_h >> 16;
++		*src_h = plane_state->src_w >> 16;
++		break;
++	case DRM_MODE_ROTATE_0:
++	case DRM_MODE_ROTATE_180:
++	default:
++		*src_w = plane_state->src_w >> 16;
++		*src_h = plane_state->src_h >> 16;
++		break;
++	}
++}
++
+ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ 				struct drm_crtc *crtc,
+ 				struct drm_crtc_state *new_crtc_state)
+@@ -10667,6 +10694,8 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ 	struct drm_plane_state *new_cursor_state, *new_underlying_state;
+ 	int i;
+ 	int cursor_scale_w, cursor_scale_h, underlying_scale_w, underlying_scale_h;
++	int cursor_src_w, cursor_src_h;
++	int underlying_src_w, underlying_src_h;
+ 
+ 	/* On DCE and DCN there is no dedicated hardware cursor plane. We get a
+ 	 * cursor per pipe but it's going to inherit the scaling and
+@@ -10678,10 +10707,9 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ 		return 0;
+ 	}
+ 
+-	cursor_scale_w = new_cursor_state->crtc_w * 1000 /
+-			 (new_cursor_state->src_w >> 16);
+-	cursor_scale_h = new_cursor_state->crtc_h * 1000 /
+-			 (new_cursor_state->src_h >> 16);
++	dm_get_oriented_plane_size(new_cursor_state, &cursor_src_w, &cursor_src_h);
++	cursor_scale_w = new_cursor_state->crtc_w * 1000 / cursor_src_w;
++	cursor_scale_h = new_cursor_state->crtc_h * 1000 / cursor_src_h;
+ 
+ 	for_each_new_plane_in_state_reverse(state, underlying, new_underlying_state, i) {
+ 		/* Narrow down to non-cursor planes on the same CRTC as the cursor */
+@@ -10692,10 +10720,10 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ 		if (!new_underlying_state->fb)
+ 			continue;
+ 
+-		underlying_scale_w = new_underlying_state->crtc_w * 1000 /
+-				     (new_underlying_state->src_w >> 16);
+-		underlying_scale_h = new_underlying_state->crtc_h * 1000 /
+-				     (new_underlying_state->src_h >> 16);
++		dm_get_oriented_plane_size(new_underlying_state,
++					   &underlying_src_w, &underlying_src_h);
++		underlying_scale_w = new_underlying_state->crtc_w * 1000 / underlying_src_w;
++		underlying_scale_h = new_underlying_state->crtc_h * 1000 / underlying_src_h;
+ 
+ 		if (cursor_scale_w != underlying_scale_w ||
+ 		    cursor_scale_h != underlying_scale_h) {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 9d43ecb1f692d..f4e829ec8e108 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -2909,10 +2909,13 @@ static int crc_win_update_set(void *data, u64 val)
+ 	struct amdgpu_device *adev = drm_to_adev(new_crtc->dev);
+ 	struct crc_rd_work *crc_rd_wrk = adev->dm.crc_rd_wrk;
+ 
++	if (!crc_rd_wrk)
++		return 0;
++
+ 	if (val) {
+ 		spin_lock_irq(&adev_to_drm(adev)->event_lock);
+ 		spin_lock_irq(&crc_rd_wrk->crc_rd_work_lock);
+-		if (crc_rd_wrk && crc_rd_wrk->crtc) {
++		if (crc_rd_wrk->crtc) {
+ 			old_crtc = crc_rd_wrk->crtc;
+ 			old_acrtc = to_amdgpu_crtc(old_crtc);
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+index 26f96ee324729..9200c8ce02ba9 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+@@ -308,8 +308,7 @@ void dc_destroy_clk_mgr(struct clk_mgr *clk_mgr_base)
+ 	case FAMILY_NV:
+ 		if (ASICREV_IS_SIENNA_CICHLID_P(clk_mgr_base->ctx->asic_id.hw_internal_rev)) {
+ 			dcn3_clk_mgr_destroy(clk_mgr);
+-		}
+-		if (ASICREV_IS_DIMGREY_CAVEFISH_P(clk_mgr_base->ctx->asic_id.hw_internal_rev)) {
++		} else if (ASICREV_IS_DIMGREY_CAVEFISH_P(clk_mgr_base->ctx->asic_id.hw_internal_rev)) {
+ 			dcn3_clk_mgr_destroy(clk_mgr);
+ 		}
+ 		if (ASICREV_IS_BEIGE_GOBY_P(clk_mgr_base->ctx->asic_id.hw_internal_rev)) {
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+index 2108bff49d4eb..146e6d6708990 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+@@ -38,7 +38,6 @@
+ #include "clk/clk_11_0_0_offset.h"
+ #include "clk/clk_11_0_0_sh_mask.h"
+ 
+-#include "irq/dcn20/irq_service_dcn20.h"
+ 
+ #undef FN
+ #define FN(reg_name, field_name) \
+@@ -223,8 +222,6 @@ void dcn2_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	bool force_reset = false;
+ 	bool p_state_change_support;
+ 	int total_plane_count;
+-	int irq_src;
+-	uint32_t hpd_state;
+ 
+ 	if (dc->work_arounds.skip_clock_update)
+ 		return;
+@@ -242,13 +239,7 @@ void dcn2_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	if (dc->res_pool->pp_smu)
+ 		pp_smu = &dc->res_pool->pp_smu->nv_funcs;
+ 
+-	for (irq_src = DC_IRQ_SOURCE_HPD1; irq_src <= DC_IRQ_SOURCE_HPD6; irq_src++) {
+-		hpd_state = dc_get_hpd_state_dcn20(dc->res_pool->irqs, irq_src);
+-		if (hpd_state)
+-			break;
+-	}
+-
+-	if (display_count == 0 && !hpd_state)
++	if (display_count == 0)
+ 		enter_display_off = true;
+ 
+ 	if (enter_display_off == safe_to_lower) {
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index ac2d4c4f04e48..d3c8db65ff454 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -42,7 +42,6 @@
+ #include "clk/clk_10_0_2_sh_mask.h"
+ #include "renoir_ip_offset.h"
+ 
+-#include "irq/dcn21/irq_service_dcn21.h"
+ 
+ /* Constants */
+ 
+@@ -130,11 +129,9 @@ void rn_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk;
+ 	struct dc *dc = clk_mgr_base->ctx->dc;
+ 	int display_count;
+-	int irq_src;
+ 	bool update_dppclk = false;
+ 	bool update_dispclk = false;
+ 	bool dpp_clock_lowered = false;
+-	uint32_t hpd_state;
+ 
+ 	struct dmcu *dmcu = clk_mgr_base->ctx->dc->res_pool->dmcu;
+ 
+@@ -151,14 +148,8 @@ void rn_update_clocks(struct clk_mgr *clk_mgr_base,
+ 
+ 			display_count = rn_get_active_display_cnt_wa(dc, context);
+ 
+-			for (irq_src = DC_IRQ_SOURCE_HPD1; irq_src <= DC_IRQ_SOURCE_HPD5; irq_src++) {
+-				hpd_state = dc_get_hpd_state_dcn21(dc->res_pool->irqs, irq_src);
+-				if (hpd_state)
+-					break;
+-			}
+-
+ 			/* if we can go lower, go lower */
+-			if (display_count == 0 && !hpd_state) {
++			if (display_count == 0) {
+ 				rn_vbios_smu_set_dcn_low_power_state(clk_mgr, DCN_PWR_STATE_LOW_POWER);
+ 				/* update power state */
+ 				clk_mgr_base->clks.pwr_state = DCN_PWR_STATE_LOW_POWER;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 0ded4decee05f..f0fbd8ad56229 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2870,7 +2870,8 @@ static void commit_planes_for_stream(struct dc *dc,
+ #endif
+ 
+ 	if ((update_type != UPDATE_TYPE_FAST) && stream->update_flags.bits.dsc_changed)
+-		if (top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
++		if (top_pipe_to_program &&
++			top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
+ 			if (should_use_dmub_lock(stream->link)) {
+ 				union dmub_hw_lock_flags hw_locks = { 0 };
+ 				struct dmub_hw_lock_inst_flags inst_flags = { 0 };
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index c0bdc23702c83..f640990ae2304 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -1844,6 +1844,8 @@ static void enable_stream_features(struct pipe_ctx *pipe_ctx)
+ 		union down_spread_ctrl old_downspread;
+ 		union down_spread_ctrl new_downspread;
+ 
++		memset(&old_downspread, 0, sizeof(old_downspread));
++
+ 		core_link_read_dpcd(link, DP_DOWNSPREAD_CTRL,
+ 				&old_downspread.raw, sizeof(old_downspread));
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c
+index cfd09b3f705e9..fe22530242d2e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c
+@@ -134,11 +134,12 @@ void dcn201_update_plane_addr(const struct dc *dc, struct pipe_ctx *pipe_ctx)
+ 	PHYSICAL_ADDRESS_LOC addr;
+ 	struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+ 	struct dce_hwseq *hws = dc->hwseq;
+-	struct dc_plane_address uma = plane_state->address;
++	struct dc_plane_address uma;
+ 
+ 	if (plane_state == NULL)
+ 		return;
+ 
++	uma = plane_state->address;
+ 	addr_patched = patch_address_for_sbs_tb_stereo(pipe_ctx, &addr);
+ 
+ 	plane_address_in_gpu_space_to_uma(hws, &uma);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index 27afbe6ec0fee..f969ff65f802b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -493,7 +493,8 @@ static const struct dcn31_apg_mask apg_mask = {
+ 	SE_DCN3_REG_LIST(id)\
+ }
+ 
+-static const struct dcn10_stream_enc_registers stream_enc_regs[] = {
++/* Some encoders won't be initialized here - but they're logical, not physical. */
++static const struct dcn10_stream_enc_registers stream_enc_regs[ENGINE_ID_COUNT] = {
+ 	stream_enc_regs(0),
+ 	stream_enc_regs(1),
+ 	stream_enc_regs(2),
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
+index 9ccafe007b23a..c4b067d018956 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
+@@ -132,31 +132,6 @@ enum dc_irq_source to_dal_irq_source_dcn20(
+ 	}
+ }
+ 
+-uint32_t dc_get_hpd_state_dcn20(struct irq_service *irq_service, enum dc_irq_source source)
+-{
+-	const struct irq_source_info *info;
+-	uint32_t addr;
+-	uint32_t value;
+-	uint32_t current_status;
+-
+-	info = find_irq_source_info(irq_service, source);
+-	if (!info)
+-		return 0;
+-
+-	addr = info->status_reg;
+-	if (!addr)
+-		return 0;
+-
+-	value = dm_read_reg(irq_service->ctx, addr);
+-	current_status =
+-		get_reg_field_value(
+-			value,
+-			HPD0_DC_HPD_INT_STATUS,
+-			DC_HPD_SENSE);
+-
+-	return current_status;
+-}
+-
+ static bool hpd_ack(
+ 	struct irq_service *irq_service,
+ 	const struct irq_source_info *info)
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.h b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.h
+index 4d69ab24ca257..aee4b37999f19 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.h
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.h
+@@ -31,6 +31,4 @@
+ struct irq_service *dal_irq_service_dcn20_create(
+ 	struct irq_service_init_data *init_data);
+ 
+-uint32_t dc_get_hpd_state_dcn20(struct irq_service *irq_service, enum dc_irq_source source);
+-
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+index 78940cb20e10f..ed54e1c819bed 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+@@ -135,31 +135,6 @@ enum dc_irq_source to_dal_irq_source_dcn21(
+ 	return DC_IRQ_SOURCE_INVALID;
+ }
+ 
+-uint32_t dc_get_hpd_state_dcn21(struct irq_service *irq_service, enum dc_irq_source source)
+-{
+-	const struct irq_source_info *info;
+-	uint32_t addr;
+-	uint32_t value;
+-	uint32_t current_status;
+-
+-	info = find_irq_source_info(irq_service, source);
+-	if (!info)
+-		return 0;
+-
+-	addr = info->status_reg;
+-	if (!addr)
+-		return 0;
+-
+-	value = dm_read_reg(irq_service->ctx, addr);
+-	current_status =
+-		get_reg_field_value(
+-			value,
+-			HPD0_DC_HPD_INT_STATUS,
+-			DC_HPD_SENSE);
+-
+-	return current_status;
+-}
+-
+ static bool hpd_ack(
+ 	struct irq_service *irq_service,
+ 	const struct irq_source_info *info)
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.h b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.h
+index 616470e323803..da2bd0e93d7ad 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.h
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.h
+@@ -31,6 +31,4 @@
+ struct irq_service *dal_irq_service_dcn21_create(
+ 	struct irq_service_init_data *init_data);
+ 
+-uint32_t dc_get_hpd_state_dcn21(struct irq_service *irq_service, enum dc_irq_source source);
+-
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/irq_service.c b/drivers/gpu/drm/amd/display/dc/irq/irq_service.c
+index 4db1133e4466b..a2a4fbeb83f86 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/irq_service.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/irq_service.c
+@@ -79,7 +79,7 @@ void dal_irq_service_destroy(struct irq_service **irq_service)
+ 	*irq_service = NULL;
+ }
+ 
+-const struct irq_source_info *find_irq_source_info(
++static const struct irq_source_info *find_irq_source_info(
+ 	struct irq_service *irq_service,
+ 	enum dc_irq_source source)
+ {
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/irq_service.h b/drivers/gpu/drm/amd/display/dc/irq/irq_service.h
+index e60b824800932..dbfcb096eedd6 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/irq_service.h
++++ b/drivers/gpu/drm/amd/display/dc/irq/irq_service.h
+@@ -69,10 +69,6 @@ struct irq_service {
+ 	const struct irq_service_funcs *funcs;
+ };
+ 
+-const struct irq_source_info *find_irq_source_info(
+-	struct irq_service *irq_service,
+-	enum dc_irq_source source);
+-
+ void dal_irq_service_construct(
+ 	struct irq_service *irq_service,
+ 	struct irq_service_init_data *init_data);
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 41472ed992530..f8370d54100e8 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2123,6 +2123,12 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
+ 		}
+ 	}
+ 
++	/* setting should not be allowed from VF */
++	if (amdgpu_sriov_vf(adev)) {
++		dev_attr->attr.mode &= ~S_IWUGO;
++		dev_attr->store = NULL;
++	}
++
+ #undef DEVICE_ATTR_IS
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+index cab6c8b92efd4..6a4f20fccf841 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+@@ -998,11 +998,21 @@ int analogix_dp_send_psr_spd(struct analogix_dp_device *dp,
+ 	if (!blocking)
+ 		return 0;
+ 
++	/*
++	 * db[1]!=0: entering PSR, wait for fully active remote frame buffer.
++	 * db[1]==0: exiting PSR, wait for either
++	 *  (a) ACTIVE_RESYNC - the sink "must display the
++	 *      incoming active frames from the Source device with no visible
++	 *      glitches and/or artifacts", even though timings may still be
++	 *      re-synchronizing; or
++	 *  (b) INACTIVE - the transition is fully complete.
++	 */
+ 	ret = readx_poll_timeout(analogix_dp_get_psr_status, dp, psr_status,
+ 		psr_status >= 0 &&
+ 		((vsc->db[1] && psr_status == DP_PSR_SINK_ACTIVE_RFB) ||
+-		(!vsc->db[1] && psr_status == DP_PSR_SINK_INACTIVE)), 1500,
+-		DP_TIMEOUT_PSR_LOOP_MS * 1000);
++		(!vsc->db[1] && (psr_status == DP_PSR_SINK_ACTIVE_RESYNC ||
++				 psr_status == DP_PSR_SINK_INACTIVE))),
++		1500, DP_TIMEOUT_PSR_LOOP_MS * 1000);
+ 	if (ret) {
+ 		dev_warn(dp->dev, "Failed to apply PSR %d\n", ret);
+ 		return ret;
+diff --git a/drivers/gpu/drm/bridge/display-connector.c b/drivers/gpu/drm/bridge/display-connector.c
+index 05eb759da6fc6..847a0dce7f1d3 100644
+--- a/drivers/gpu/drm/bridge/display-connector.c
++++ b/drivers/gpu/drm/bridge/display-connector.c
+@@ -107,7 +107,7 @@ static int display_connector_probe(struct platform_device *pdev)
+ {
+ 	struct display_connector *conn;
+ 	unsigned int type;
+-	const char *label;
++	const char *label = NULL;
+ 	int ret;
+ 
+ 	conn = devm_kzalloc(&pdev->dev, sizeof(*conn), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+index d2808c4a6fb1c..cce98bf2a4e73 100644
+--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+@@ -306,19 +306,10 @@ out:
+ 	mutex_unlock(&ge_b850v3_lvds_dev_mutex);
+ }
+ 
+-static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
+-				       const struct i2c_device_id *id)
++static int ge_b850v3_register(void)
+ {
++	struct i2c_client *stdp4028_i2c = ge_b850v3_lvds_ptr->stdp4028_i2c;
+ 	struct device *dev = &stdp4028_i2c->dev;
+-	int ret;
+-
+-	ret = ge_b850v3_lvds_init(dev);
+-
+-	if (ret)
+-		return ret;
+-
+-	ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
+-	i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
+ 
+ 	/* drm bridge initialization */
+ 	ge_b850v3_lvds_ptr->bridge.funcs = &ge_b850v3_lvds_funcs;
+@@ -343,6 +334,27 @@ static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
+ 			"ge-b850v3-lvds-dp", ge_b850v3_lvds_ptr);
+ }
+ 
++static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
++				       const struct i2c_device_id *id)
++{
++	struct device *dev = &stdp4028_i2c->dev;
++	int ret;
++
++	ret = ge_b850v3_lvds_init(dev);
++
++	if (ret)
++		return ret;
++
++	ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
++	i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
++
++	/* Only register after both bridges are probed */
++	if (!ge_b850v3_lvds_ptr->stdp2690_i2c)
++		return 0;
++
++	return ge_b850v3_register();
++}
++
+ static int stdp4028_ge_b850v3_fw_remove(struct i2c_client *stdp4028_i2c)
+ {
+ 	ge_b850v3_lvds_remove();
+@@ -386,7 +398,11 @@ static int stdp2690_ge_b850v3_fw_probe(struct i2c_client *stdp2690_i2c,
+ 	ge_b850v3_lvds_ptr->stdp2690_i2c = stdp2690_i2c;
+ 	i2c_set_clientdata(stdp2690_i2c, ge_b850v3_lvds_ptr);
+ 
+-	return 0;
++	/* Only register after both bridges are probed */
++	if (!ge_b850v3_lvds_ptr->stdp4028_i2c)
++		return 0;
++
++	return ge_b850v3_register();
+ }
+ 
+ static int stdp2690_ge_b850v3_fw_remove(struct i2c_client *stdp2690_i2c)
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
+index d0db1acf11d73..7d2ed0ed2fe26 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
+@@ -320,13 +320,17 @@ static int dw_hdmi_open(struct snd_pcm_substream *substream)
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct snd_dw_hdmi *dw = substream->private_data;
+ 	void __iomem *base = dw->data.base;
++	u8 *eld;
+ 	int ret;
+ 
+ 	runtime->hw = dw_hdmi_hw;
+ 
+-	ret = snd_pcm_hw_constraint_eld(runtime, dw->data.eld);
+-	if (ret < 0)
+-		return ret;
++	eld = dw->data.get_eld(dw->data.hdmi);
++	if (eld) {
++		ret = snd_pcm_hw_constraint_eld(runtime, eld);
++		if (ret < 0)
++			return ret;
++	}
+ 
+ 	ret = snd_pcm_limit_hw_rates(runtime);
+ 	if (ret < 0)
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
+index cb07dc0da5a70..f72d27208ebef 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
+@@ -9,15 +9,15 @@ struct dw_hdmi_audio_data {
+ 	void __iomem *base;
+ 	int irq;
+ 	struct dw_hdmi *hdmi;
+-	u8 *eld;
++	u8 *(*get_eld)(struct dw_hdmi *hdmi);
+ };
+ 
+ struct dw_hdmi_i2s_audio_data {
+ 	struct dw_hdmi *hdmi;
+-	u8 *eld;
+ 
+ 	void (*write)(struct dw_hdmi *hdmi, u8 val, int offset);
+ 	u8 (*read)(struct dw_hdmi *hdmi, int offset);
++	u8 *(*get_eld)(struct dw_hdmi *hdmi);
+ };
+ 
+ #endif
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
+index feb04f127b550..f50b47ac11a82 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
+@@ -135,8 +135,15 @@ static int dw_hdmi_i2s_get_eld(struct device *dev, void *data, uint8_t *buf,
+ 			       size_t len)
+ {
+ 	struct dw_hdmi_i2s_audio_data *audio = data;
++	u8 *eld;
++
++	eld = audio->get_eld(audio->hdmi);
++	if (eld)
++		memcpy(buf, eld, min_t(size_t, MAX_ELD_BYTES, len));
++	else
++		/* Pass an empty ELD if connector not available */
++		memset(buf, 0, len);
+ 
+-	memcpy(buf, audio->eld, min_t(size_t, MAX_ELD_BYTES, len));
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index f08d0fded61f7..e1211a5b334ba 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -757,6 +757,14 @@ static void hdmi_enable_audio_clk(struct dw_hdmi *hdmi, bool enable)
+ 	hdmi_writeb(hdmi, hdmi->mc_clkdis, HDMI_MC_CLKDIS);
+ }
+ 
++static u8 *hdmi_audio_get_eld(struct dw_hdmi *hdmi)
++{
++	if (!hdmi->curr_conn)
++		return NULL;
++
++	return hdmi->curr_conn->eld;
++}
++
+ static void dw_hdmi_ahb_audio_enable(struct dw_hdmi *hdmi)
+ {
+ 	hdmi_set_cts_n(hdmi, hdmi->audio_cts, hdmi->audio_n);
+@@ -3431,7 +3439,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
+ 		audio.base = hdmi->regs;
+ 		audio.irq = irq;
+ 		audio.hdmi = hdmi;
+-		audio.eld = hdmi->connector.eld;
++		audio.get_eld = hdmi_audio_get_eld;
+ 		hdmi->enable_audio = dw_hdmi_ahb_audio_enable;
+ 		hdmi->disable_audio = dw_hdmi_ahb_audio_disable;
+ 
+@@ -3444,7 +3452,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
+ 		struct dw_hdmi_i2s_audio_data audio;
+ 
+ 		audio.hdmi	= hdmi;
+-		audio.eld	= hdmi->connector.eld;
++		audio.get_eld	= hdmi_audio_get_eld;
+ 		audio.write	= hdmi_writeb;
+ 		audio.read	= hdmi_readb;
+ 		hdmi->enable_audio = dw_hdmi_i2s_audio_enable;
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+index ba1160ec6d6e8..07917681782d2 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -297,7 +297,6 @@ static void sn65dsi83_detach(struct drm_bridge *bridge)
+ 
+ 	mipi_dsi_detach(ctx->dsi);
+ 	mipi_dsi_device_unregister(ctx->dsi);
+-	drm_bridge_remove(&ctx->bridge);
+ 	ctx->dsi = NULL;
+ }
+ 
+@@ -711,6 +710,7 @@ static int sn65dsi83_remove(struct i2c_client *client)
+ {
+ 	struct sn65dsi83 *ctx = i2c_get_clientdata(client);
+ 
++	drm_bridge_remove(&ctx->bridge);
+ 	of_node_put(ctx->host_node);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 6154bed0af5bf..83d06c16d4d74 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -188,6 +188,7 @@ static const struct regmap_config ti_sn65dsi86_regmap_config = {
+ 	.val_bits = 8,
+ 	.volatile_table = &ti_sn_bridge_volatile_table,
+ 	.cache_type = REGCACHE_NONE,
++	.max_register = 0xFF,
+ };
+ 
+ static void ti_sn65dsi86_write_u16(struct ti_sn65dsi86 *pdata,
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 2c0c6ec928200..ff2bc9a118011 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1001,7 +1001,7 @@ crtc_needs_disable(struct drm_crtc_state *old_state,
+ 	 * it's in self refresh mode and needs to be fully disabled.
+ 	 */
+ 	return old_state->active ||
+-	       (old_state->self_refresh_active && !new_state->enable) ||
++	       (old_state->self_refresh_active && !new_state->active) ||
+ 	       new_state->self_refresh_active;
+ }
+ 
+diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
+index 4d0d1e8e51fa7..db7db839e42d1 100644
+--- a/drivers/gpu/drm/drm_dp_helper.c
++++ b/drivers/gpu/drm/drm_dp_helper.c
+@@ -3246,27 +3246,13 @@ int drm_edp_backlight_enable(struct drm_dp_aux *aux, const struct drm_edp_backli
+ 			     const u16 level)
+ {
+ 	int ret;
+-	u8 dpcd_buf, new_dpcd_buf;
++	u8 dpcd_buf = DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD;
+ 
+-	ret = drm_dp_dpcd_readb(aux, DP_EDP_BACKLIGHT_MODE_SET_REGISTER, &dpcd_buf);
+-	if (ret != 1) {
+-		drm_dbg_kms(aux->drm_dev,
+-			    "%s: Failed to read backlight mode: %d\n", aux->name, ret);
+-		return ret < 0 ? ret : -EIO;
+-	}
+-
+-	new_dpcd_buf = dpcd_buf;
+-
+-	if ((dpcd_buf & DP_EDP_BACKLIGHT_CONTROL_MODE_MASK) != DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) {
+-		new_dpcd_buf &= ~DP_EDP_BACKLIGHT_CONTROL_MODE_MASK;
+-		new_dpcd_buf |= DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD;
+-
+-		if (bl->pwmgen_bit_count) {
+-			ret = drm_dp_dpcd_writeb(aux, DP_EDP_PWMGEN_BIT_COUNT, bl->pwmgen_bit_count);
+-			if (ret != 1)
+-				drm_dbg_kms(aux->drm_dev, "%s: Failed to write aux pwmgen bit count: %d\n",
+-					    aux->name, ret);
+-		}
++	if (bl->pwmgen_bit_count) {
++		ret = drm_dp_dpcd_writeb(aux, DP_EDP_PWMGEN_BIT_COUNT, bl->pwmgen_bit_count);
++		if (ret != 1)
++			drm_dbg_kms(aux->drm_dev, "%s: Failed to write aux pwmgen bit count: %d\n",
++				    aux->name, ret);
+ 	}
+ 
+ 	if (bl->pwm_freq_pre_divider) {
+@@ -3276,16 +3262,14 @@ int drm_edp_backlight_enable(struct drm_dp_aux *aux, const struct drm_edp_backli
+ 				    "%s: Failed to write aux backlight frequency: %d\n",
+ 				    aux->name, ret);
+ 		else
+-			new_dpcd_buf |= DP_EDP_BACKLIGHT_FREQ_AUX_SET_ENABLE;
++			dpcd_buf |= DP_EDP_BACKLIGHT_FREQ_AUX_SET_ENABLE;
+ 	}
+ 
+-	if (new_dpcd_buf != dpcd_buf) {
+-		ret = drm_dp_dpcd_writeb(aux, DP_EDP_BACKLIGHT_MODE_SET_REGISTER, new_dpcd_buf);
+-		if (ret != 1) {
+-			drm_dbg_kms(aux->drm_dev, "%s: Failed to write aux backlight mode: %d\n",
+-				    aux->name, ret);
+-			return ret < 0 ? ret : -EIO;
+-		}
++	ret = drm_dp_dpcd_writeb(aux, DP_EDP_BACKLIGHT_MODE_SET_REGISTER, dpcd_buf);
++	if (ret != 1) {
++		drm_dbg_kms(aux->drm_dev, "%s: Failed to write aux backlight mode: %d\n",
++			    aux->name, ret);
++		return ret < 0 ? ret : -EIO;
+ 	}
+ 
+ 	ret = drm_edp_backlight_set_level(aux, bl, level);
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 7a5097467ba5c..b3a1636d1b984 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -581,6 +581,7 @@ static int drm_dev_init(struct drm_device *dev,
+ 			const struct drm_driver *driver,
+ 			struct device *parent)
+ {
++	struct inode *inode;
+ 	int ret;
+ 
+ 	if (!drm_core_init_complete) {
+@@ -617,13 +618,15 @@ static int drm_dev_init(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	dev->anon_inode = drm_fs_inode_new();
+-	if (IS_ERR(dev->anon_inode)) {
+-		ret = PTR_ERR(dev->anon_inode);
++	inode = drm_fs_inode_new();
++	if (IS_ERR(inode)) {
++		ret = PTR_ERR(inode);
+ 		DRM_ERROR("Cannot allocate anonymous inode: %d\n", ret);
+ 		goto err;
+ 	}
+ 
++	dev->anon_inode = inode;
++
+ 	if (drm_core_check_feature(dev, DRIVER_RENDER)) {
+ 		ret = drm_minor_alloc(dev, DRM_MINOR_RENDER);
+ 		if (ret)
+diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
+index 9d05674550a4f..1e7e8cd64cb58 100644
+--- a/drivers/gpu/drm/drm_gem_cma_helper.c
++++ b/drivers/gpu/drm/drm_gem_cma_helper.c
+@@ -62,18 +62,21 @@ __drm_gem_cma_create(struct drm_device *drm, size_t size, bool private)
+ 	struct drm_gem_object *gem_obj;
+ 	int ret = 0;
+ 
+-	if (drm->driver->gem_create_object)
++	if (drm->driver->gem_create_object) {
+ 		gem_obj = drm->driver->gem_create_object(drm, size);
+-	else
+-		gem_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
+-	if (!gem_obj)
+-		return ERR_PTR(-ENOMEM);
++		if (IS_ERR(gem_obj))
++			return ERR_CAST(gem_obj);
++		cma_obj = to_drm_gem_cma_obj(gem_obj);
++	} else {
++		cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
++		if (!cma_obj)
++			return ERR_PTR(-ENOMEM);
++		gem_obj = &cma_obj->base;
++	}
+ 
+ 	if (!gem_obj->funcs)
+ 		gem_obj->funcs = &drm_gem_cma_default_funcs;
+ 
+-	cma_obj = container_of(gem_obj, struct drm_gem_cma_object, base);
+-
+ 	if (private) {
+ 		drm_gem_private_object_init(drm, gem_obj, size);
+ 
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index bca0de92802ef..fe157bf278347 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -51,14 +51,17 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
+ 
+ 	size = PAGE_ALIGN(size);
+ 
+-	if (dev->driver->gem_create_object)
++	if (dev->driver->gem_create_object) {
+ 		obj = dev->driver->gem_create_object(dev, size);
+-	else
+-		obj = kzalloc(sizeof(*shmem), GFP_KERNEL);
+-	if (!obj)
+-		return ERR_PTR(-ENOMEM);
+-
+-	shmem = to_drm_gem_shmem_obj(obj);
++		if (IS_ERR(obj))
++			return ERR_CAST(obj);
++		shmem = to_drm_gem_shmem_obj(obj);
++	} else {
++		shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
++		if (!shmem)
++			return ERR_PTR(-ENOMEM);
++		obj = &shmem->base;
++	}
+ 
+ 	if (!obj->funcs)
+ 		obj->funcs = &drm_gem_shmem_funcs;
+diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
+index bfa386b981346..3f00192215d11 100644
+--- a/drivers/gpu/drm/drm_gem_vram_helper.c
++++ b/drivers/gpu/drm/drm_gem_vram_helper.c
+@@ -197,8 +197,8 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
+ 
+ 	if (dev->driver->gem_create_object) {
+ 		gem = dev->driver->gem_create_object(dev, size);
+-		if (!gem)
+-			return ERR_PTR(-ENOMEM);
++		if (IS_ERR(gem))
++			return ERR_CAST(gem);
+ 		gbo = drm_gem_vram_of_gem(gem);
+ 	} else {
+ 		gbo = kzalloc(sizeof(*gbo), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index a9359878f4ed6..042bb80383c93 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -262,6 +262,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
++	}, {	/* Lenovo Yoga Book X90F / X91F / X91L */
++		.matches = {
++		  /* Non exact match to match all versions */
++		  DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
++		},
++		.driver_data = (void *)&lcd1200x1920_rightside_up,
+ 	}, {	/* OneGX1 Pro */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SYSTEM_MANUFACTURER"),
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+index 486259e154aff..225fa5879ebd9 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+@@ -469,6 +469,12 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 		return -EINVAL;
+ 	}
+ 
++	if (args->stream_size > SZ_64K || args->nr_relocs > SZ_64K ||
++	    args->nr_bos > SZ_64K || args->nr_pmrs > 128) {
++		DRM_ERROR("submit arguments out of size limits\n");
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * Copy the command submission and bo array to kernel space in
+ 	 * one go, and do this outside of any locks.
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+index 1c75c8ed5bcea..85eddd492774d 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+@@ -130,6 +130,7 @@ struct etnaviv_gpu {
+ 
+ 	/* hang detection */
+ 	u32 hangcheck_dma_addr;
++	u32 hangcheck_fence;
+ 
+ 	void __iomem *mmio;
+ 	int irq;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+index 180bb633d5c53..58f593b278c15 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+@@ -107,8 +107,10 @@ static enum drm_gpu_sched_stat etnaviv_sched_timedout_job(struct drm_sched_job
+ 	 */
+ 	dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);
+ 	change = dma_addr - gpu->hangcheck_dma_addr;
+-	if (change < 0 || change > 16) {
++	if (gpu->completed_fence != gpu->hangcheck_fence ||
++	    change < 0 || change > 16) {
+ 		gpu->hangcheck_dma_addr = dma_addr;
++		gpu->hangcheck_fence = gpu->completed_fence;
+ 		goto out_no_timeout;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
+index 78cd8f77b49d7..34aab40c6e1f9 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
+@@ -477,14 +477,14 @@ static const struct intel_ddi_buf_trans icl_combo_phy_trans_hdmi = {
+ static const union intel_ddi_buf_trans_entry _ehl_combo_phy_trans_dp[] = {
+ 							/* NT mV Trans mV db    */
+ 	{ .icl = { 0xA, 0x33, 0x3F, 0x00, 0x00 } },	/* 350   350      0.0   */
+-	{ .icl = { 0xA, 0x47, 0x36, 0x00, 0x09 } },	/* 350   500      3.1   */
+-	{ .icl = { 0xC, 0x64, 0x34, 0x00, 0x0B } },	/* 350   700      6.0   */
+-	{ .icl = { 0x6, 0x7F, 0x30, 0x00, 0x0F } },	/* 350   900      8.2   */
++	{ .icl = { 0xA, 0x47, 0x38, 0x00, 0x07 } },	/* 350   500      3.1   */
++	{ .icl = { 0xC, 0x64, 0x33, 0x00, 0x0C } },	/* 350   700      6.0   */
++	{ .icl = { 0x6, 0x7F, 0x2F, 0x00, 0x10 } },	/* 350   900      8.2   */
+ 	{ .icl = { 0xA, 0x46, 0x3F, 0x00, 0x00 } },	/* 500   500      0.0   */
+-	{ .icl = { 0xC, 0x64, 0x38, 0x00, 0x07 } },	/* 500   700      2.9   */
++	{ .icl = { 0xC, 0x64, 0x37, 0x00, 0x08 } },	/* 500   700      2.9   */
+ 	{ .icl = { 0x6, 0x7F, 0x32, 0x00, 0x0D } },	/* 500   900      5.1   */
+ 	{ .icl = { 0xC, 0x61, 0x3F, 0x00, 0x00 } },	/* 650   700      0.6   */
+-	{ .icl = { 0x6, 0x7F, 0x38, 0x00, 0x07 } },	/* 600   900      3.5   */
++	{ .icl = { 0x6, 0x7F, 0x37, 0x00, 0x08 } },	/* 600   900      3.5   */
+ 	{ .icl = { 0x6, 0x7F, 0x3F, 0x00, 0x00 } },	/* 900   900      0.0   */
+ };
+ 
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+index 8eb1c3a6fc9cd..1d3f40abd0258 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+@@ -160,7 +160,6 @@ retry:
+ /* Immediately discard the backing storage */
+ void i915_gem_object_truncate(struct drm_i915_gem_object *obj)
+ {
+-	drm_gem_free_mmap_offset(&obj->base);
+ 	if (obj->ops->truncate)
+ 		obj->ops->truncate(obj);
+ }
+diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c b/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c
+index 49508f31dcb73..d2980370d9297 100644
+--- a/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c
++++ b/drivers/gpu/drm/i915/pxp/intel_pxp_tee.c
+@@ -103,9 +103,12 @@ static int i915_pxp_tee_component_bind(struct device *i915_kdev,
+ static void i915_pxp_tee_component_unbind(struct device *i915_kdev,
+ 					  struct device *tee_kdev, void *data)
+ {
++	struct drm_i915_private *i915 = kdev_to_i915(i915_kdev);
+ 	struct intel_pxp *pxp = i915_dev_to_pxp(i915_kdev);
++	intel_wakeref_t wakeref;
+ 
+-	intel_pxp_fini_hw(pxp);
++	with_intel_runtime_pm_if_in_use(&i915->runtime_pm, wakeref)
++		intel_pxp_fini_hw(pxp);
+ 
+ 	mutex_lock(&pxp->tee_mutex);
+ 	pxp->pxp_component = NULL;
+diff --git a/drivers/gpu/drm/lima/lima_device.c b/drivers/gpu/drm/lima/lima_device.c
+index f74f8048af8f2..02cef0cea6572 100644
+--- a/drivers/gpu/drm/lima/lima_device.c
++++ b/drivers/gpu/drm/lima/lima_device.c
+@@ -358,6 +358,7 @@ int lima_device_init(struct lima_device *ldev)
+ 	int err, i;
+ 
+ 	dma_set_coherent_mask(ldev->dev, DMA_BIT_MASK(32));
++	dma_set_max_seg_size(ldev->dev, UINT_MAX);
+ 
+ 	err = lima_clk_init(ldev);
+ 	if (err)
+diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
+index 640acc060467c..54823bd701a4b 100644
+--- a/drivers/gpu/drm/lima/lima_gem.c
++++ b/drivers/gpu/drm/lima/lima_gem.c
+@@ -221,7 +221,7 @@ struct drm_gem_object *lima_gem_create_object(struct drm_device *dev, size_t siz
+ 
+ 	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+ 	if (!bo)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	mutex_init(&bo->lock);
+ 	INIT_LIST_HEAD(&bo->va);
+diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
+index 39197b4beea78..1eae5a9645f41 100644
+--- a/drivers/gpu/drm/msm/Kconfig
++++ b/drivers/gpu/drm/msm/Kconfig
+@@ -65,6 +65,7 @@ config DRM_MSM_HDMI_HDCP
+ config DRM_MSM_DP
+ 	bool "Enable DisplayPort support in MSM DRM driver"
+ 	depends on DRM_MSM
++	select RATIONAL
+ 	default y
+ 	help
+ 	  Compile in support for DP driver in MSM DRM driver. DP external
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index a15b264282809..d25d18a0084d9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -73,8 +73,8 @@ static int _dpu_danger_signal_status(struct seq_file *s,
+ 					&status);
+ 	} else {
+ 		seq_puts(s, "\nSafe signal status:\n");
+-		if (kms->hw_mdp->ops.get_danger_status)
+-			kms->hw_mdp->ops.get_danger_status(kms->hw_mdp,
++		if (kms->hw_mdp->ops.get_safe_status)
++			kms->hw_mdp->ops.get_safe_status(kms->hw_mdp,
+ 					&status);
+ 	}
+ 	pm_runtime_put_sync(&kms->pdev->dev);
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index 75ae3008b68f4..fc280cc434943 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -215,9 +215,13 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
+ 		goto fail;
+ 	}
+ 
+-	if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
+-		ret = -EINVAL;
+-		goto fail;
++	if (msm_dsi_is_bonded_dsi(msm_dsi) &&
++	    !msm_dsi_is_master_dsi(msm_dsi)) {
++		/*
++		 * Do not return an error here,
++		 * just skip creating encoder/connector for the slave-DSI.
++		 */
++		return 0;
+ 	}
+ 
+ 	msm_dsi->encoder = encoder;
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
+index 569c8ff062ba4..a63666e59d19e 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.h
++++ b/drivers/gpu/drm/msm/dsi/dsi.h
+@@ -82,7 +82,6 @@ int msm_dsi_manager_cmd_xfer(int id, const struct mipi_dsi_msg *msg);
+ bool msm_dsi_manager_cmd_xfer_trigger(int id, u32 dma_base, u32 len);
+ int msm_dsi_manager_register(struct msm_dsi *msm_dsi);
+ void msm_dsi_manager_unregister(struct msm_dsi *msm_dsi);
+-bool msm_dsi_manager_validate_current_config(u8 id);
+ void msm_dsi_manager_tpg_enable(void);
+ 
+ /* msm dsi */
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+index 20c4d650fd80c..e58ec5c1a4c37 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+@@ -649,23 +649,6 @@ fail:
+ 	return ERR_PTR(ret);
+ }
+ 
+-bool msm_dsi_manager_validate_current_config(u8 id)
+-{
+-	bool is_bonded_dsi = IS_BONDED_DSI();
+-
+-	/*
+-	 * For bonded DSI, we only have one drm panel. For this
+-	 * use case, we register only one bridge/connector.
+-	 * Skip bridge/connector initialisation if it is
+-	 * slave-DSI for bonded DSI configuration.
+-	 */
+-	if (is_bonded_dsi && !IS_MASTER_DSI_LINK(id)) {
+-		DBG("Skip bridge registration for slave DSI->id: %d\n", id);
+-		return false;
+-	}
+-	return true;
+-}
+-
+ /* initialize bridge */
+ struct drm_bridge *msm_dsi_manager_bridge_init(u8 id)
+ {
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index 282628d6b72c0..6cfa984dee6ae 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -881,7 +881,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 	 * to the underlying fence.
+ 	 */
+ 	submit->fence_id = idr_alloc_cyclic(&queue->fence_idr,
+-			submit->user_fence, 0, INT_MAX, GFP_KERNEL);
++			submit->user_fence, 1, INT_MAX, GFP_KERNEL);
+ 	if (submit->fence_id < 0) {
+ 		ret = submit->fence_id = 0;
+ 		submit->fence_id = 0;
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/disp.c b/drivers/gpu/drm/nouveau/dispnv04/disp.c
+index 7739f46470d3e..99fee4d8cd318 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/disp.c
+@@ -205,7 +205,7 @@ nv04_display_destroy(struct drm_device *dev)
+ 	nvif_notify_dtor(&disp->flip);
+ 
+ 	nouveau_display(dev)->priv = NULL;
+-	kfree(disp);
++	vfree(disp);
+ 
+ 	nvif_object_unmap(&drm->client.device.object);
+ }
+@@ -223,7 +223,7 @@ nv04_display_create(struct drm_device *dev)
+ 	struct nv04_display *disp;
+ 	int i, ret;
+ 
+-	disp = kzalloc(sizeof(*disp), GFP_KERNEL);
++	disp = vzalloc(sizeof(*disp));
+ 	if (!disp)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+index 24382875fb4f3..455e95a89259f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+@@ -94,20 +94,13 @@ nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend)
+ 	return 0;
+ }
+ 
+-static int
++static void
+ nvkm_pmu_reset(struct nvkm_pmu *pmu)
+ {
+ 	struct nvkm_device *device = pmu->subdev.device;
+ 
+ 	if (!pmu->func->enabled(pmu))
+-		return 0;
+-
+-	/* Inhibit interrupts, and wait for idle. */
+-	nvkm_wr32(device, 0x10a014, 0x0000ffff);
+-	nvkm_msec(device, 2000,
+-		if (!nvkm_rd32(device, 0x10a04c))
+-			break;
+-	);
++		return;
+ 
+ 	/* Reset. */
+ 	if (pmu->func->reset)
+@@ -118,25 +111,37 @@ nvkm_pmu_reset(struct nvkm_pmu *pmu)
+ 		if (!(nvkm_rd32(device, 0x10a10c) & 0x00000006))
+ 			break;
+ 	);
+-
+-	return 0;
+ }
+ 
+ static int
+ nvkm_pmu_preinit(struct nvkm_subdev *subdev)
+ {
+ 	struct nvkm_pmu *pmu = nvkm_pmu(subdev);
+-	return nvkm_pmu_reset(pmu);
++	nvkm_pmu_reset(pmu);
++	return 0;
+ }
+ 
+ static int
+ nvkm_pmu_init(struct nvkm_subdev *subdev)
+ {
+ 	struct nvkm_pmu *pmu = nvkm_pmu(subdev);
+-	int ret = nvkm_pmu_reset(pmu);
+-	if (ret == 0 && pmu->func->init)
+-		ret = pmu->func->init(pmu);
+-	return ret;
++	struct nvkm_device *device = pmu->subdev.device;
++
++	if (!pmu->func->init)
++		return 0;
++
++	if (pmu->func->enabled(pmu)) {
++		/* Inhibit interrupts, and wait for idle. */
++		nvkm_wr32(device, 0x10a014, 0x0000ffff);
++		nvkm_msec(device, 2000,
++			if (!nvkm_rd32(device, 0x10a04c))
++				break;
++		);
++
++		nvkm_pmu_reset(pmu);
++	}
++
++	return pmu->func->init(pmu);
+ }
+ 
+ static void *
+diff --git a/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c b/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
+index 581661b506f81..f9c1f7bc8218c 100644
+--- a/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
++++ b/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
+@@ -227,7 +227,13 @@ static int feiyang_dsi_probe(struct mipi_dsi_device *dsi)
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->lanes = 4;
+ 
+-	return mipi_dsi_attach(dsi);
++	ret = mipi_dsi_attach(dsi);
++	if (ret < 0) {
++		drm_panel_remove(&ctx->panel);
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int feiyang_dsi_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-innolux-p079zca.c b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
+index aea3162253914..f194b62e290ca 100644
+--- a/drivers/gpu/drm/panel/panel-innolux-p079zca.c
++++ b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
+@@ -484,6 +484,7 @@ static void innolux_panel_del(struct innolux_panel *innolux)
+ static int innolux_panel_probe(struct mipi_dsi_device *dsi)
+ {
+ 	const struct panel_desc *desc;
++	struct innolux_panel *innolux;
+ 	int err;
+ 
+ 	desc = of_device_get_match_data(&dsi->dev);
+@@ -495,7 +496,14 @@ static int innolux_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (err < 0)
+ 		return err;
+ 
+-	return mipi_dsi_attach(dsi);
++	err = mipi_dsi_attach(dsi);
++	if (err < 0) {
++		innolux = mipi_dsi_get_drvdata(dsi);
++		innolux_panel_del(innolux);
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ static int innolux_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c b/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
+index 733010b5e4f53..3c86ad262d5e0 100644
+--- a/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
++++ b/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
+@@ -473,7 +473,13 @@ static int jdi_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return mipi_dsi_attach(dsi);
++	ret = mipi_dsi_attach(dsi);
++	if (ret < 0) {
++		jdi_panel_del(jdi);
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int jdi_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
+index 86e4213e8bb13..daccb1fd5fdad 100644
+--- a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
++++ b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
+@@ -406,7 +406,13 @@ static int kingdisplay_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (err < 0)
+ 		return err;
+ 
+-	return mipi_dsi_attach(dsi);
++	err = mipi_dsi_attach(dsi);
++	if (err < 0) {
++		kingdisplay_panel_del(kingdisplay);
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ static int kingdisplay_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt36672a.c b/drivers/gpu/drm/panel/panel-novatek-nt36672a.c
+index 533cd3934b8b7..839b263fb3c0f 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt36672a.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt36672a.c
+@@ -656,7 +656,13 @@ static int nt36672a_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (err < 0)
+ 		return err;
+ 
+-	return mipi_dsi_attach(dsi);
++	err = mipi_dsi_attach(dsi);
++	if (err < 0) {
++		drm_panel_remove(&pinfo->base);
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ static int nt36672a_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c b/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
+index 3c20beeb17819..3991f5d950af4 100644
+--- a/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
++++ b/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
+@@ -241,7 +241,13 @@ static int wuxga_nt_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return mipi_dsi_attach(dsi);
++	ret = mipi_dsi_attach(dsi);
++	if (ret < 0) {
++		wuxga_nt_panel_del(wuxga_nt);
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int wuxga_nt_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c b/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c
+index a3782830ae3c4..1fb579a574d9f 100644
+--- a/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c
++++ b/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c
+@@ -199,7 +199,13 @@ static int rb070d30_panel_dsi_probe(struct mipi_dsi_device *dsi)
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->lanes = 4;
+ 
+-	return mipi_dsi_attach(dsi);
++	ret = mipi_dsi_attach(dsi);
++	if (ret < 0) {
++		drm_panel_remove(&ctx->panel);
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int rb070d30_panel_dsi_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e88a0-ams452ef01.c b/drivers/gpu/drm/panel/panel-samsung-s6e88a0-ams452ef01.c
+index ea63799ff2a1e..29fde3823212b 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e88a0-ams452ef01.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e88a0-ams452ef01.c
+@@ -247,6 +247,7 @@ static int s6e88a0_ams452ef01_probe(struct mipi_dsi_device *dsi)
+ 	ret = mipi_dsi_attach(dsi);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to attach to DSI host: %d\n", ret);
++		drm_panel_remove(&ctx->panel);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panel/panel-samsung-sofef00.c b/drivers/gpu/drm/panel/panel-samsung-sofef00.c
+index 8cb1853574bb8..6d107e14fcc55 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-sofef00.c
++++ b/drivers/gpu/drm/panel/panel-samsung-sofef00.c
+@@ -302,6 +302,7 @@ static int sofef00_panel_probe(struct mipi_dsi_device *dsi)
+ 	ret = mipi_dsi_attach(dsi);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to attach to DSI host: %d\n", ret);
++		drm_panel_remove(&ctx->panel);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+index b937e24dac8e0..25829a0a8e801 100644
+--- a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
++++ b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+@@ -296,7 +296,13 @@ static int sharp_nt_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return mipi_dsi_attach(dsi);
++	ret = mipi_dsi_attach(dsi);
++	if (ret < 0) {
++		sharp_nt_panel_del(sharp_nt);
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int sharp_nt_panel_remove(struct mipi_dsi_device *dsi)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 23377481f4e31..39ac031548954 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -221,7 +221,7 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
+ 
+ 	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+ 	if (!obj)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	INIT_LIST_HEAD(&obj->mappings.list);
+ 	mutex_init(&obj->mappings.lock);
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 482fb0ae6cb5d..0e14907f2043e 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -648,6 +648,8 @@ void radeon_driver_lastclose_kms(struct drm_device *dev)
+ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
+ {
+ 	struct radeon_device *rdev = dev->dev_private;
++	struct radeon_fpriv *fpriv;
++	struct radeon_vm *vm;
+ 	int r;
+ 
+ 	file_priv->driver_priv = NULL;
+@@ -660,48 +662,52 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
+ 
+ 	/* new gpu have virtual address space support */
+ 	if (rdev->family >= CHIP_CAYMAN) {
+-		struct radeon_fpriv *fpriv;
+-		struct radeon_vm *vm;
+ 
+ 		fpriv = kzalloc(sizeof(*fpriv), GFP_KERNEL);
+ 		if (unlikely(!fpriv)) {
+ 			r = -ENOMEM;
+-			goto out_suspend;
++			goto err_suspend;
+ 		}
+ 
+ 		if (rdev->accel_working) {
+ 			vm = &fpriv->vm;
+ 			r = radeon_vm_init(rdev, vm);
+-			if (r) {
+-				kfree(fpriv);
+-				goto out_suspend;
+-			}
++			if (r)
++				goto err_fpriv;
+ 
+ 			r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
+-			if (r) {
+-				radeon_vm_fini(rdev, vm);
+-				kfree(fpriv);
+-				goto out_suspend;
+-			}
++			if (r)
++				goto err_vm_fini;
+ 
+ 			/* map the ib pool buffer read only into
+ 			 * virtual address space */
+ 			vm->ib_bo_va = radeon_vm_bo_add(rdev, vm,
+ 							rdev->ring_tmp_bo.bo);
++			if (!vm->ib_bo_va) {
++				r = -ENOMEM;
++				goto err_vm_fini;
++			}
++
+ 			r = radeon_vm_bo_set_addr(rdev, vm->ib_bo_va,
+ 						  RADEON_VA_IB_OFFSET,
+ 						  RADEON_VM_PAGE_READABLE |
+ 						  RADEON_VM_PAGE_SNOOPED);
+-			if (r) {
+-				radeon_vm_fini(rdev, vm);
+-				kfree(fpriv);
+-				goto out_suspend;
+-			}
++			if (r)
++				goto err_vm_fini;
+ 		}
+ 		file_priv->driver_priv = fpriv;
+ 	}
+ 
+-out_suspend:
++	pm_runtime_mark_last_busy(dev->dev);
++	pm_runtime_put_autosuspend(dev->dev);
++	return 0;
++
++err_vm_fini:
++	radeon_vm_fini(rdev, vm);
++err_fpriv:
++	kfree(fpriv);
++
++err_suspend:
+ 	pm_runtime_mark_last_busy(dev->dev);
+ 	pm_runtime_put_autosuspend(dev->dev);
+ 	return r;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+index 5672830ca184d..f361a604337f6 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+@@ -215,6 +215,7 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ 	const struct drm_display_mode *mode = &rcrtc->crtc.state->adjusted_mode;
+ 	struct rcar_du_device *rcdu = rcrtc->dev;
+ 	unsigned long mode_clock = mode->clock * 1000;
++	unsigned int hdse_offset;
+ 	u32 dsmr;
+ 	u32 escr;
+ 
+@@ -261,12 +262,13 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ 		rcar_du_group_write(rcrtc->group, DPLLCR, dpllcr);
+ 
+ 		escr = ESCR_DCLKSEL_DCLKIN | div;
+-	} else if (rcdu->info->lvds_clk_mask & BIT(rcrtc->index)) {
++	} else if (rcdu->info->lvds_clk_mask & BIT(rcrtc->index) ||
++		   rcdu->info->dsi_clk_mask & BIT(rcrtc->index)) {
+ 		/*
+-		 * Use the LVDS PLL output as the dot clock when outputting to
+-		 * the LVDS encoder on an SoC that supports this clock routing
+-		 * option. We use the clock directly in that case, without any
+-		 * additional divider.
++		 * Use the external LVDS or DSI PLL output as the dot clock when
++		 * outputting to the LVDS or DSI encoder on an SoC that supports
++		 * this clock routing option. We use the clock directly in that
++		 * case, without any additional divider.
+ 		 */
+ 		escr = ESCR_DCLKSEL_DCLKIN;
+ 	} else {
+@@ -298,10 +300,15 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ 	     | DSMR_DIPM_DISP | DSMR_CSPM;
+ 	rcar_du_crtc_write(rcrtc, DSMR, dsmr);
+ 
++	hdse_offset = 19;
++	if (rcrtc->group->cmms_mask & BIT(rcrtc->index % 2))
++		hdse_offset += 25;
++
+ 	/* Display timings */
+-	rcar_du_crtc_write(rcrtc, HDSR, mode->htotal - mode->hsync_start - 19);
++	rcar_du_crtc_write(rcrtc, HDSR, mode->htotal - mode->hsync_start -
++					hdse_offset);
+ 	rcar_du_crtc_write(rcrtc, HDER, mode->htotal - mode->hsync_start +
+-					mode->hdisplay - 19);
++					mode->hdisplay - hdse_offset);
+ 	rcar_du_crtc_write(rcrtc, HSWR, mode->hsync_end -
+ 					mode->hsync_start - 1);
+ 	rcar_du_crtc_write(rcrtc, HCR,  mode->htotal - 1);
+@@ -836,6 +843,7 @@ rcar_du_crtc_mode_valid(struct drm_crtc *crtc,
+ 	struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+ 	struct rcar_du_device *rcdu = rcrtc->dev;
+ 	bool interlaced = mode->flags & DRM_MODE_FLAG_INTERLACE;
++	unsigned int min_sync_porch;
+ 	unsigned int vbp;
+ 
+ 	if (interlaced && !rcar_du_has(rcdu, RCAR_DU_FEATURE_INTERLACED))
+@@ -843,9 +851,14 @@ rcar_du_crtc_mode_valid(struct drm_crtc *crtc,
+ 
+ 	/*
+ 	 * The hardware requires a minimum combined horizontal sync and back
+-	 * porch of 20 pixels and a minimum vertical back porch of 3 lines.
++	 * porch of 20 pixels (when CMM isn't used) or 45 pixels (when CMM is
++	 * used), and a minimum vertical back porch of 3 lines.
+ 	 */
+-	if (mode->htotal - mode->hsync_start < 20)
++	min_sync_porch = 20;
++	if (rcrtc->group->cmms_mask & BIT(rcrtc->index % 2))
++		min_sync_porch += 25;
++
++	if (mode->htotal - mode->hsync_start < min_sync_porch)
+ 		return MODE_HBLANK_NARROW;
+ 
+ 	vbp = (mode->vtotal - mode->vsync_end) / (interlaced ? 2 : 1);
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+index 5612a9e7a9056..5a8131ef81d5a 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+@@ -544,10 +544,12 @@ const char *rcar_du_output_name(enum rcar_du_output output)
+ 	static const char * const names[] = {
+ 		[RCAR_DU_OUTPUT_DPAD0] = "DPAD0",
+ 		[RCAR_DU_OUTPUT_DPAD1] = "DPAD1",
+-		[RCAR_DU_OUTPUT_LVDS0] = "LVDS0",
+-		[RCAR_DU_OUTPUT_LVDS1] = "LVDS1",
++		[RCAR_DU_OUTPUT_DSI0] = "DSI0",
++		[RCAR_DU_OUTPUT_DSI1] = "DSI1",
+ 		[RCAR_DU_OUTPUT_HDMI0] = "HDMI0",
+ 		[RCAR_DU_OUTPUT_HDMI1] = "HDMI1",
++		[RCAR_DU_OUTPUT_LVDS0] = "LVDS0",
++		[RCAR_DU_OUTPUT_LVDS1] = "LVDS1",
+ 		[RCAR_DU_OUTPUT_TCON] = "TCON",
+ 	};
+ 
+diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+index a9acbcc420d07..4ed7a68681978 100644
+--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+@@ -267,6 +267,8 @@ struct dw_mipi_dsi_rockchip {
+ 	struct dw_mipi_dsi *dmd;
+ 	const struct rockchip_dw_dsi_chip_data *cdata;
+ 	struct dw_mipi_dsi_plat_data pdata;
++
++	bool dsi_bound;
+ };
+ 
+ struct dphy_pll_parameter_map {
+@@ -772,10 +774,6 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
+ 	if (mux < 0)
+ 		return;
+ 
+-	pm_runtime_get_sync(dsi->dev);
+-	if (dsi->slave)
+-		pm_runtime_get_sync(dsi->slave->dev);
+-
+ 	/*
+ 	 * For the RK3399, the clk of grf must be enabled before writing grf
+ 	 * register. And for RK3288 or other soc, this grf_clk must be NULL,
+@@ -794,20 +792,10 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
+ 	clk_disable_unprepare(dsi->grf_clk);
+ }
+ 
+-static void dw_mipi_dsi_encoder_disable(struct drm_encoder *encoder)
+-{
+-	struct dw_mipi_dsi_rockchip *dsi = to_dsi(encoder);
+-
+-	if (dsi->slave)
+-		pm_runtime_put(dsi->slave->dev);
+-	pm_runtime_put(dsi->dev);
+-}
+-
+ static const struct drm_encoder_helper_funcs
+ dw_mipi_dsi_encoder_helper_funcs = {
+ 	.atomic_check = dw_mipi_dsi_encoder_atomic_check,
+ 	.enable = dw_mipi_dsi_encoder_enable,
+-	.disable = dw_mipi_dsi_encoder_disable,
+ };
+ 
+ static int rockchip_dsi_drm_create_encoder(struct dw_mipi_dsi_rockchip *dsi,
+@@ -937,10 +925,14 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 		put_device(second);
+ 	}
+ 
++	pm_runtime_get_sync(dsi->dev);
++	if (dsi->slave)
++		pm_runtime_get_sync(dsi->slave->dev);
++
+ 	ret = clk_prepare_enable(dsi->pllref_clk);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to enable pllref_clk: %d\n", ret);
+-		return ret;
++		goto out_pm_runtime;
+ 	}
+ 
+ 	/*
+@@ -952,7 +944,7 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 	ret = clk_prepare_enable(dsi->grf_clk);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
+-		return ret;
++		goto out_pll_clk;
+ 	}
+ 
+ 	dw_mipi_dsi_rockchip_config(dsi);
+@@ -964,16 +956,27 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 	ret = rockchip_dsi_drm_create_encoder(dsi, drm_dev);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to create drm encoder\n");
+-		return ret;
++		goto out_pll_clk;
+ 	}
+ 
+ 	ret = dw_mipi_dsi_bind(dsi->dmd, &dsi->encoder);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to bind: %d\n", ret);
+-		return ret;
++		goto out_pll_clk;
+ 	}
+ 
++	dsi->dsi_bound = true;
++
+ 	return 0;
++
++out_pll_clk:
++	clk_disable_unprepare(dsi->pllref_clk);
++out_pm_runtime:
++	pm_runtime_put(dsi->dev);
++	if (dsi->slave)
++		pm_runtime_put(dsi->slave->dev);
++
++	return ret;
+ }
+ 
+ static void dw_mipi_dsi_rockchip_unbind(struct device *dev,
+@@ -985,9 +988,15 @@ static void dw_mipi_dsi_rockchip_unbind(struct device *dev,
+ 	if (dsi->is_slave)
+ 		return;
+ 
++	dsi->dsi_bound = false;
++
+ 	dw_mipi_dsi_unbind(dsi->dmd);
+ 
+ 	clk_disable_unprepare(dsi->pllref_clk);
++
++	pm_runtime_put(dsi->dev);
++	if (dsi->slave)
++		pm_runtime_put(dsi->slave->dev);
+ }
+ 
+ static const struct component_ops dw_mipi_dsi_rockchip_ops = {
+@@ -1275,6 +1284,36 @@ static const struct phy_ops dw_mipi_dsi_dphy_ops = {
+ 	.exit		= dw_mipi_dsi_dphy_exit,
+ };
+ 
++static int __maybe_unused dw_mipi_dsi_rockchip_resume(struct device *dev)
++{
++	struct dw_mipi_dsi_rockchip *dsi = dev_get_drvdata(dev);
++	int ret;
++
++	/*
++	 * Re-configure DSI state, if we were previously initialized. We need
++	 * to do this before rockchip_drm_drv tries to re-enable() any panels.
++	 */
++	if (dsi->dsi_bound) {
++		ret = clk_prepare_enable(dsi->grf_clk);
++		if (ret) {
++			DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
++			return ret;
++		}
++
++		dw_mipi_dsi_rockchip_config(dsi);
++		if (dsi->slave)
++			dw_mipi_dsi_rockchip_config(dsi->slave);
++
++		clk_disable_unprepare(dsi->grf_clk);
++	}
++
++	return 0;
++}
++
++static const struct dev_pm_ops dw_mipi_dsi_rockchip_pm_ops = {
++	SET_LATE_SYSTEM_SLEEP_PM_OPS(NULL, dw_mipi_dsi_rockchip_resume)
++};
++
+ static int dw_mipi_dsi_rockchip_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -1396,14 +1435,10 @@ static int dw_mipi_dsi_rockchip_probe(struct platform_device *pdev)
+ 		if (ret != -EPROBE_DEFER)
+ 			DRM_DEV_ERROR(dev,
+ 				      "Failed to probe dw_mipi_dsi: %d\n", ret);
+-		goto err_clkdisable;
++		return ret;
+ 	}
+ 
+ 	return 0;
+-
+-err_clkdisable:
+-	clk_disable_unprepare(dsi->pllref_clk);
+-	return ret;
+ }
+ 
+ static int dw_mipi_dsi_rockchip_remove(struct platform_device *pdev)
+@@ -1592,6 +1627,7 @@ struct platform_driver dw_mipi_dsi_rockchip_driver = {
+ 	.remove		= dw_mipi_dsi_rockchip_remove,
+ 	.driver		= {
+ 		.of_match_table = dw_mipi_dsi_rockchip_dt_ids,
++		.pm	= &dw_mipi_dsi_rockchip_pm_ops,
+ 		.name	= "dw-mipi-dsi-rockchip",
+ 	},
+ };
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index 27e1573af96e2..191c56064f196 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -190,6 +190,16 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
+ }
+ EXPORT_SYMBOL(drm_sched_entity_flush);
+ 
++static void drm_sched_entity_kill_jobs_irq_work(struct irq_work *wrk)
++{
++	struct drm_sched_job *job = container_of(wrk, typeof(*job), work);
++
++	drm_sched_fence_finished(job->s_fence);
++	WARN_ON(job->s_fence->parent);
++	job->sched->ops->free_job(job);
++}
++
++
+ /* Signal the scheduler finished fence when the entity in question is killed. */
+ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
+ 					  struct dma_fence_cb *cb)
+@@ -197,9 +207,8 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
+ 	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
+ 						 finish_cb);
+ 
+-	drm_sched_fence_finished(job->s_fence);
+-	WARN_ON(job->s_fence->parent);
+-	job->sched->ops->free_job(job);
++	init_irq_work(&job->work, drm_sched_entity_kill_jobs_irq_work);
++	irq_work_queue(&job->work);
+ }
+ 
+ static struct dma_fence *
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index b64d93da651d2..5e2b0175df36f 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -658,8 +658,10 @@ int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ 		return -EPROBE_DEFER;
+ 
+ 	phy = platform_get_drvdata(pdev);
+-	if (!phy)
++	if (!phy) {
++		put_device(&pdev->dev);
+ 		return -EPROBE_DEFER;
++	}
+ 
+ 	hdmi->phy = phy;
+ 
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index 8d37d6b00562a..611cd8dad46ed 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -21,6 +21,10 @@
+ #include <drm/drm_prime.h>
+ #include <drm/drm_vblank.h>
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++#include <asm/dma-iommu.h>
++#endif
++
+ #include "dc.h"
+ #include "drm.h"
+ #include "gem.h"
+@@ -936,6 +940,17 @@ int host1x_client_iommu_attach(struct host1x_client *client)
+ 	struct iommu_group *group = NULL;
+ 	int err;
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++	if (client->dev->archdata.mapping) {
++		struct dma_iommu_mapping *mapping =
++				to_dma_iommu_mapping(client->dev);
++		arm_iommu_detach_device(client->dev);
++		arm_iommu_release_mapping(mapping);
++
++		domain = iommu_get_domain_for_dev(client->dev);
++	}
++#endif
++
+ 	/*
+ 	 * If the host1x client is already attached to an IOMMU domain that is
+ 	 * not the shared IOMMU domain, don't try to attach it to a different
+diff --git a/drivers/gpu/drm/tegra/gr2d.c b/drivers/gpu/drm/tegra/gr2d.c
+index de288cba39055..ba3722f1b8651 100644
+--- a/drivers/gpu/drm/tegra/gr2d.c
++++ b/drivers/gpu/drm/tegra/gr2d.c
+@@ -4,9 +4,11 @@
+  */
+ 
+ #include <linux/clk.h>
++#include <linux/delay.h>
+ #include <linux/iommu.h>
+ #include <linux/module.h>
+ #include <linux/of_device.h>
++#include <linux/reset.h>
+ 
+ #include "drm.h"
+ #include "gem.h"
+@@ -19,6 +21,7 @@ struct gr2d_soc {
+ struct gr2d {
+ 	struct tegra_drm_client client;
+ 	struct host1x_channel *channel;
++	struct reset_control *rst;
+ 	struct clk *clk;
+ 
+ 	const struct gr2d_soc *soc;
+@@ -208,6 +211,12 @@ static int gr2d_probe(struct platform_device *pdev)
+ 	if (!syncpts)
+ 		return -ENOMEM;
+ 
++	gr2d->rst = devm_reset_control_get(dev, NULL);
++	if (IS_ERR(gr2d->rst)) {
++		dev_err(dev, "cannot get reset\n");
++		return PTR_ERR(gr2d->rst);
++	}
++
+ 	gr2d->clk = devm_clk_get(dev, NULL);
+ 	if (IS_ERR(gr2d->clk)) {
+ 		dev_err(dev, "cannot get clock\n");
+@@ -220,6 +229,14 @@ static int gr2d_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
++	usleep_range(2000, 4000);
++
++	err = reset_control_deassert(gr2d->rst);
++	if (err < 0) {
++		dev_err(dev, "failed to deassert reset: %d\n", err);
++		goto disable_clk;
++	}
++
+ 	INIT_LIST_HEAD(&gr2d->client.base.list);
+ 	gr2d->client.base.ops = &gr2d_client_ops;
+ 	gr2d->client.base.dev = dev;
+@@ -234,8 +251,7 @@ static int gr2d_probe(struct platform_device *pdev)
+ 	err = host1x_client_register(&gr2d->client.base);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to register host1x client: %d\n", err);
+-		clk_disable_unprepare(gr2d->clk);
+-		return err;
++		goto assert_rst;
+ 	}
+ 
+ 	/* initialize address register map */
+@@ -245,6 +261,13 @@ static int gr2d_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, gr2d);
+ 
+ 	return 0;
++
++assert_rst:
++	(void)reset_control_assert(gr2d->rst);
++disable_clk:
++	clk_disable_unprepare(gr2d->clk);
++
++	return err;
+ }
+ 
+ static int gr2d_remove(struct platform_device *pdev)
+@@ -259,6 +282,12 @@ static int gr2d_remove(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
++	err = reset_control_assert(gr2d->rst);
++	if (err < 0)
++		dev_err(&pdev->dev, "failed to assert reset: %d\n", err);
++
++	usleep_range(2000, 4000);
++
+ 	clk_disable_unprepare(gr2d->clk);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/tegra/submit.c b/drivers/gpu/drm/tegra/submit.c
+index 776f825df52fa..aba9d0c9d9031 100644
+--- a/drivers/gpu/drm/tegra/submit.c
++++ b/drivers/gpu/drm/tegra/submit.c
+@@ -475,8 +475,10 @@ static void release_job(struct host1x_job *job)
+ 	kfree(job_data->used_mappings);
+ 	kfree(job_data);
+ 
+-	if (pm_runtime_enabled(client->base.dev))
++	if (pm_runtime_enabled(client->base.dev)) {
++		pm_runtime_mark_last_busy(client->base.dev);
+ 		pm_runtime_put_autosuspend(client->base.dev);
++	}
+ }
+ 
+ int tegra_drm_ioctl_channel_submit(struct drm_device *drm, void *data,
+diff --git a/drivers/gpu/drm/tegra/vic.c b/drivers/gpu/drm/tegra/vic.c
+index c02010ff2b7f2..da4af53719917 100644
+--- a/drivers/gpu/drm/tegra/vic.c
++++ b/drivers/gpu/drm/tegra/vic.c
+@@ -5,6 +5,7 @@
+ 
+ #include <linux/clk.h>
+ #include <linux/delay.h>
++#include <linux/dma-mapping.h>
+ #include <linux/host1x.h>
+ #include <linux/iommu.h>
+ #include <linux/module.h>
+@@ -232,10 +233,8 @@ static int vic_load_firmware(struct vic *vic)
+ 
+ 	if (!client->group) {
+ 		virt = dma_alloc_coherent(vic->dev, size, &iova, GFP_KERNEL);
+-
+-		err = dma_mapping_error(vic->dev, iova);
+-		if (err < 0)
+-			return err;
++		if (!virt)
++			return -ENOMEM;
+ 	} else {
+ 		virt = tegra_drm_alloc(tegra, size, &iova);
+ 	}
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 047adc42d9a0d..26740c9fb5e1f 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -727,6 +727,8 @@ int ttm_mem_evict_first(struct ttm_device *bdev,
+ 	ret = ttm_bo_evict(bo, ctx);
+ 	if (locked)
+ 		ttm_bo_unreserve(bo);
++	else
++		ttm_bo_move_to_lru_tail_unlocked(bo);
+ 
+ 	ttm_bo_put(bo);
+ 	return ret;
+diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
+index 6a8731ab9d7d0..9a1a92782524c 100644
+--- a/drivers/gpu/drm/v3d/v3d_bo.c
++++ b/drivers/gpu/drm/v3d/v3d_bo.c
+@@ -70,11 +70,11 @@ struct drm_gem_object *v3d_create_object(struct drm_device *dev, size_t size)
+ 	struct drm_gem_object *obj;
+ 
+ 	if (size == 0)
+-		return NULL;
++		return ERR_PTR(-EINVAL);
+ 
+ 	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+ 	if (!bo)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 	obj = &bo->base.base;
+ 
+ 	obj->funcs = &v3d_gem_funcs;
+diff --git a/drivers/gpu/drm/vboxvideo/vbox_main.c b/drivers/gpu/drm/vboxvideo/vbox_main.c
+index f28779715ccda..c9e8b3a63c621 100644
+--- a/drivers/gpu/drm/vboxvideo/vbox_main.c
++++ b/drivers/gpu/drm/vboxvideo/vbox_main.c
+@@ -127,8 +127,8 @@ int vbox_hw_init(struct vbox_private *vbox)
+ 	/* Create guest-heap mem-pool use 2^4 = 16 byte chunks */
+ 	vbox->guest_pool = devm_gen_pool_create(vbox->ddev.dev, 4, -1,
+ 						"vboxvideo-accel");
+-	if (!vbox->guest_pool)
+-		return -ENOMEM;
++	if (IS_ERR(vbox->guest_pool))
++		return PTR_ERR(vbox->guest_pool);
+ 
+ 	ret = gen_pool_add_virt(vbox->guest_pool,
+ 				(unsigned long)vbox->guest_heap,
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 18f5009ce90e3..e3ed52d96f423 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -32,6 +32,7 @@
+ #include <linux/clk.h>
+ #include <linux/component.h>
+ #include <linux/of_device.h>
++#include <linux/pm_runtime.h>
+ 
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+@@ -42,6 +43,7 @@
+ #include <drm/drm_vblank.h>
+ 
+ #include "vc4_drv.h"
++#include "vc4_hdmi.h"
+ #include "vc4_regs.h"
+ 
+ #define HVS_FIFO_LATENCY_PIX	6
+@@ -496,8 +498,10 @@ int vc4_crtc_disable_at_boot(struct drm_crtc *crtc)
+ 	enum vc4_encoder_type encoder_type;
+ 	const struct vc4_pv_data *pv_data;
+ 	struct drm_encoder *encoder;
++	struct vc4_hdmi *vc4_hdmi;
+ 	unsigned encoder_sel;
+ 	int channel;
++	int ret;
+ 
+ 	if (!(of_device_is_compatible(vc4_crtc->pdev->dev.of_node,
+ 				      "brcm,bcm2711-pixelvalve2") ||
+@@ -525,7 +529,20 @@ int vc4_crtc_disable_at_boot(struct drm_crtc *crtc)
+ 	if (WARN_ON(!encoder))
+ 		return 0;
+ 
+-	return vc4_crtc_disable(crtc, encoder, NULL, channel);
++	vc4_hdmi = encoder_to_vc4_hdmi(encoder);
++	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++	if (ret)
++		return ret;
++
++	ret = vc4_crtc_disable(crtc, encoder, NULL, channel);
++	if (ret)
++		return ret;
++
++	ret = pm_runtime_put(&vc4_hdmi->pdev->dev);
++	if (ret)
++		return ret;
++
++	return 0;
+ }
+ 
+ static void vc4_crtc_atomic_disable(struct drm_crtc *crtc,
+@@ -691,14 +708,14 @@ static void vc4_crtc_handle_page_flip(struct vc4_crtc *vc4_crtc)
+ 	struct drm_crtc *crtc = &vc4_crtc->base;
+ 	struct drm_device *dev = crtc->dev;
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+-	struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc->state);
+-	u32 chan = vc4_state->assigned_channel;
++	u32 chan = vc4_crtc->current_hvs_channel;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&dev->event_lock, flags);
++	spin_lock(&vc4_crtc->irq_lock);
+ 	if (vc4_crtc->event &&
+-	    (vc4_state->mm.start == HVS_READ(SCALER_DISPLACTX(chan)) ||
+-	     vc4_state->feed_txp)) {
++	    (vc4_crtc->current_dlist == HVS_READ(SCALER_DISPLACTX(chan)) ||
++	     vc4_crtc->feeds_txp)) {
+ 		drm_crtc_send_vblank_event(crtc, vc4_crtc->event);
+ 		vc4_crtc->event = NULL;
+ 		drm_crtc_vblank_put(crtc);
+@@ -711,6 +728,7 @@ static void vc4_crtc_handle_page_flip(struct vc4_crtc *vc4_crtc)
+ 		 */
+ 		vc4_hvs_unmask_underrun(dev, chan);
+ 	}
++	spin_unlock(&vc4_crtc->irq_lock);
+ 	spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
+ 
+@@ -876,7 +894,6 @@ struct drm_crtc_state *vc4_crtc_duplicate_state(struct drm_crtc *crtc)
+ 		return NULL;
+ 
+ 	old_vc4_state = to_vc4_crtc_state(crtc->state);
+-	vc4_state->feed_txp = old_vc4_state->feed_txp;
+ 	vc4_state->margins = old_vc4_state->margins;
+ 	vc4_state->assigned_channel = old_vc4_state->assigned_channel;
+ 
+@@ -937,6 +954,7 @@ static const struct drm_crtc_funcs vc4_crtc_funcs = {
+ static const struct drm_crtc_helper_funcs vc4_crtc_helper_funcs = {
+ 	.mode_valid = vc4_crtc_mode_valid,
+ 	.atomic_check = vc4_crtc_atomic_check,
++	.atomic_begin = vc4_hvs_atomic_begin,
+ 	.atomic_flush = vc4_hvs_atomic_flush,
+ 	.atomic_enable = vc4_crtc_atomic_enable,
+ 	.atomic_disable = vc4_crtc_atomic_disable,
+@@ -1111,6 +1129,7 @@ int vc4_crtc_init(struct drm_device *drm, struct vc4_crtc *vc4_crtc,
+ 		return PTR_ERR(primary_plane);
+ 	}
+ 
++	spin_lock_init(&vc4_crtc->irq_lock);
+ 	drm_crtc_init_with_planes(drm, crtc, primary_plane, NULL,
+ 				  crtc_funcs, NULL);
+ 	drm_crtc_helper_add(crtc, crtc_helper_funcs);
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index ef73e0aaf7261..4b550ebd9572d 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -495,6 +495,33 @@ struct vc4_crtc {
+ 	struct drm_pending_vblank_event *event;
+ 
+ 	struct debugfs_regset32 regset;
++
++	/**
++	 * @feeds_txp: True if the CRTC feeds our writeback controller.
++	 */
++	bool feeds_txp;
++
++	/**
++	 * @irq_lock: Spinlock protecting the resources shared between
++	 * the atomic code and our vblank handler.
++	 */
++	spinlock_t irq_lock;
++
++	/**
++	 * @current_dlist: Start offset of the display list currently
++	 * set in the HVS for that CRTC. Protected by @irq_lock, and
++	 * copied in vc4_hvs_update_dlist() for the CRTC interrupt
++	 * handler to have access to that value.
++	 */
++	unsigned int current_dlist;
++
++	/**
++	 * @current_hvs_channel: HVS channel currently assigned to the
++	 * CRTC. Protected by @irq_lock, and copied in
++	 * vc4_hvs_atomic_begin() for the CRTC interrupt handler to have
++	 * access to that value.
++	 */
++	unsigned int current_hvs_channel;
+ };
+ 
+ static inline struct vc4_crtc *
+@@ -521,7 +548,6 @@ struct vc4_crtc_state {
+ 	struct drm_crtc_state base;
+ 	/* Dlist area for this CRTC configuration. */
+ 	struct drm_mm_node mm;
+-	bool feed_txp;
+ 	bool txp_armed;
+ 	unsigned int assigned_channel;
+ 
+@@ -908,6 +934,7 @@ extern struct platform_driver vc4_hvs_driver;
+ void vc4_hvs_stop_channel(struct drm_device *dev, unsigned int output);
+ int vc4_hvs_get_fifo_from_output(struct drm_device *dev, unsigned int output);
+ int vc4_hvs_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state);
++void vc4_hvs_atomic_begin(struct drm_crtc *crtc, struct drm_atomic_state *state);
+ void vc4_hvs_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state);
+ void vc4_hvs_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *state);
+ void vc4_hvs_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *state);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index b284623e28634..8465914892fad 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -94,6 +94,7 @@
+ # define VC4_HD_M_SW_RST			BIT(2)
+ # define VC4_HD_M_ENABLE			BIT(0)
+ 
++#define HSM_MIN_CLOCK_FREQ	120000000
+ #define CEC_CLOCK_FREQ 40000
+ 
+ #define HDMI_14_MAX_TMDS_CLK   (340 * 1000 * 1000)
+@@ -161,12 +162,16 @@ static void vc4_hdmi_cec_update_clk_div(struct vc4_hdmi *vc4_hdmi)
+ static void vc4_hdmi_cec_update_clk_div(struct vc4_hdmi *vc4_hdmi) {}
+ #endif
+ 
++static void vc4_hdmi_enable_scrambling(struct drm_encoder *encoder);
++
+ static enum drm_connector_status
+ vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ {
+ 	struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector);
+ 	bool connected = false;
+ 
++	WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
++
+ 	if (vc4_hdmi->hpd_gpio &&
+ 	    gpiod_get_value_cansleep(vc4_hdmi->hpd_gpio)) {
+ 		connected = true;
+@@ -187,10 +192,13 @@ vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ 			}
+ 		}
+ 
++		vc4_hdmi_enable_scrambling(&vc4_hdmi->encoder.base.base);
++		pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 		return connector_status_connected;
+ 	}
+ 
+ 	cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
++	pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 	return connector_status_disconnected;
+ }
+ 
+@@ -627,7 +635,6 @@ static void vc4_hdmi_encoder_post_crtc_powerdown(struct drm_encoder *encoder,
+ 		vc4_hdmi->variant->phy_disable(vc4_hdmi);
+ 
+ 	clk_disable_unprepare(vc4_hdmi->pixel_bvb_clock);
+-	clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ 	clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 
+ 	ret = pm_runtime_put(&vc4_hdmi->pdev->dev);
+@@ -893,28 +900,10 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 		conn_state_to_vc4_hdmi_conn_state(conn_state);
+ 	struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode;
+ 	struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
+-	unsigned long bvb_rate, pixel_rate, hsm_rate;
++	unsigned long pixel_rate = vc4_conn_state->pixel_rate;
++	unsigned long bvb_rate, hsm_rate;
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+-	if (ret < 0) {
+-		DRM_ERROR("Failed to retain power domain: %d\n", ret);
+-		return;
+-	}
+-
+-	pixel_rate = vc4_conn_state->pixel_rate;
+-	ret = clk_set_rate(vc4_hdmi->pixel_clock, pixel_rate);
+-	if (ret) {
+-		DRM_ERROR("Failed to set pixel clock rate: %d\n", ret);
+-		return;
+-	}
+-
+-	ret = clk_prepare_enable(vc4_hdmi->pixel_clock);
+-	if (ret) {
+-		DRM_ERROR("Failed to turn on pixel clock: %d\n", ret);
+-		return;
+-	}
+-
+ 	/*
+ 	 * As stated in RPi's vc4 firmware "HDMI state machine (HSM) clock must
+ 	 * be faster than pixel clock, infinitesimally faster, tested in
+@@ -938,13 +927,25 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 		return;
+ 	}
+ 
+-	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
+-	if (ret) {
+-		DRM_ERROR("Failed to turn on HSM clock: %d\n", ret);
+-		clk_disable_unprepare(vc4_hdmi->pixel_clock);
++	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++	if (ret < 0) {
++		DRM_ERROR("Failed to retain power domain: %d\n", ret);
+ 		return;
+ 	}
+ 
++	ret = clk_set_rate(vc4_hdmi->pixel_clock, pixel_rate);
++	if (ret) {
++		DRM_ERROR("Failed to set pixel clock rate: %d\n", ret);
++		goto err_put_runtime_pm;
++	}
++
++	ret = clk_prepare_enable(vc4_hdmi->pixel_clock);
++	if (ret) {
++		DRM_ERROR("Failed to turn on pixel clock: %d\n", ret);
++		goto err_put_runtime_pm;
++	}
++
++
+ 	vc4_hdmi_cec_update_clk_div(vc4_hdmi);
+ 
+ 	if (pixel_rate > 297000000)
+@@ -957,17 +958,13 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 	ret = clk_set_min_rate(vc4_hdmi->pixel_bvb_clock, bvb_rate);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to set pixel bvb clock rate: %d\n", ret);
+-		clk_disable_unprepare(vc4_hdmi->hsm_clock);
+-		clk_disable_unprepare(vc4_hdmi->pixel_clock);
+-		return;
++		goto err_disable_pixel_clock;
+ 	}
+ 
+ 	ret = clk_prepare_enable(vc4_hdmi->pixel_bvb_clock);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to turn on pixel bvb clock: %d\n", ret);
+-		clk_disable_unprepare(vc4_hdmi->hsm_clock);
+-		clk_disable_unprepare(vc4_hdmi->pixel_clock);
+-		return;
++		goto err_disable_pixel_clock;
+ 	}
+ 
+ 	if (vc4_hdmi->variant->phy_init)
+@@ -980,6 +977,15 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 
+ 	if (vc4_hdmi->variant->set_timings)
+ 		vc4_hdmi->variant->set_timings(vc4_hdmi, conn_state, mode);
++
++	return;
++
++err_disable_pixel_clock:
++	clk_disable_unprepare(vc4_hdmi->pixel_clock);
++err_put_runtime_pm:
++	pm_runtime_put(&vc4_hdmi->pdev->dev);
++
++	return;
+ }
+ 
+ static void vc4_hdmi_encoder_pre_crtc_enable(struct drm_encoder *encoder,
+@@ -1730,8 +1736,14 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	struct vc4_hdmi *vc4_hdmi = cec_get_drvdata(adap);
+ 	/* clock period in microseconds */
+ 	const u32 usecs = 1000000 / CEC_CLOCK_FREQ;
+-	u32 val = HDMI_READ(HDMI_CEC_CNTRL_5);
++	u32 val;
++	int ret;
++
++	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++	if (ret)
++		return ret;
+ 
++	val = HDMI_READ(HDMI_CEC_CNTRL_5);
+ 	val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
+ 		 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
+ 		 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
+@@ -1877,6 +1889,8 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ 	if (ret < 0)
+ 		goto err_remove_handlers;
+ 
++	pm_runtime_put(&vc4_hdmi->pdev->dev);
++
+ 	return 0;
+ 
+ err_remove_handlers:
+@@ -2099,6 +2113,27 @@ static int vc5_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ 	return 0;
+ }
+ 
++static int __maybe_unused vc4_hdmi_runtime_suspend(struct device *dev)
++{
++	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
++
++	clk_disable_unprepare(vc4_hdmi->hsm_clock);
++
++	return 0;
++}
++
++static int vc4_hdmi_runtime_resume(struct device *dev)
++{
++	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
++	int ret;
++
++	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
+ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ {
+ 	const struct vc4_hdmi_variant *variant = of_device_get_match_data(dev);
+@@ -2162,6 +2197,31 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 			vc4_hdmi->disable_4kp60 = true;
+ 	}
+ 
++	/*
++	 * If we boot without any cable connected to the HDMI connector,
++	 * the firmware will skip the HSM initialization and leave it
++	 * with a rate of 0, resulting in a bus lockup when we're
++	 * accessing the registers even if it's enabled.
++	 *
++	 * Let's put a sensible default at runtime_resume so that we
++	 * don't end up in this situation.
++	 */
++	ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ);
++	if (ret)
++		goto err_put_ddc;
++
++	/*
++	 * We need to have the device powered up at this point to call
++	 * our reset hook and for the CEC init.
++	 */
++	ret = vc4_hdmi_runtime_resume(dev);
++	if (ret)
++		goto err_put_ddc;
++
++	pm_runtime_get_noresume(dev);
++	pm_runtime_set_active(dev);
++	pm_runtime_enable(dev);
++
+ 	if (vc4_hdmi->variant->reset)
+ 		vc4_hdmi->variant->reset(vc4_hdmi);
+ 
+@@ -2173,8 +2233,6 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 		clk_prepare_enable(vc4_hdmi->pixel_bvb_clock);
+ 	}
+ 
+-	pm_runtime_enable(dev);
+-
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+ 	drm_encoder_helper_add(encoder, &vc4_hdmi_encoder_helper_funcs);
+ 
+@@ -2198,6 +2256,8 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 			     vc4_hdmi_debugfs_regs,
+ 			     vc4_hdmi);
+ 
++	pm_runtime_put_sync(dev);
++
+ 	return 0;
+ 
+ err_free_cec:
+@@ -2208,6 +2268,7 @@ err_destroy_conn:
+ 	vc4_hdmi_connector_destroy(&vc4_hdmi->connector);
+ err_destroy_encoder:
+ 	drm_encoder_cleanup(encoder);
++	pm_runtime_put_sync(dev);
+ 	pm_runtime_disable(dev);
+ err_put_ddc:
+ 	put_device(&vc4_hdmi->ddc->dev);
+@@ -2353,11 +2414,18 @@ static const struct of_device_id vc4_hdmi_dt_match[] = {
+ 	{}
+ };
+ 
++static const struct dev_pm_ops vc4_hdmi_pm_ops = {
++	SET_RUNTIME_PM_OPS(vc4_hdmi_runtime_suspend,
++			   vc4_hdmi_runtime_resume,
++			   NULL)
++};
++
+ struct platform_driver vc4_hdmi_driver = {
+ 	.probe = vc4_hdmi_dev_probe,
+ 	.remove = vc4_hdmi_dev_remove,
+ 	.driver = {
+ 		.name = "vc4_hdmi",
+ 		.of_match_table = vc4_hdmi_dt_match,
++		.pm = &vc4_hdmi_pm_ops,
+ 	},
+ };
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index c239045e05d6f..604933e20e6a2 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -365,17 +365,16 @@ static void vc4_hvs_update_dlist(struct drm_crtc *crtc)
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
+ 	struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc->state);
++	unsigned long flags;
+ 
+ 	if (crtc->state->event) {
+-		unsigned long flags;
+-
+ 		crtc->state->event->pipe = drm_crtc_index(crtc);
+ 
+ 		WARN_ON(drm_crtc_vblank_get(crtc) != 0);
+ 
+ 		spin_lock_irqsave(&dev->event_lock, flags);
+ 
+-		if (!vc4_state->feed_txp || vc4_state->txp_armed) {
++		if (!vc4_crtc->feeds_txp || vc4_state->txp_armed) {
+ 			vc4_crtc->event = crtc->state->event;
+ 			crtc->state->event = NULL;
+ 		}
+@@ -388,6 +387,22 @@ static void vc4_hvs_update_dlist(struct drm_crtc *crtc)
+ 		HVS_WRITE(SCALER_DISPLISTX(vc4_state->assigned_channel),
+ 			  vc4_state->mm.start);
+ 	}
++
++	spin_lock_irqsave(&vc4_crtc->irq_lock, flags);
++	vc4_crtc->current_dlist = vc4_state->mm.start;
++	spin_unlock_irqrestore(&vc4_crtc->irq_lock, flags);
++}
++
++void vc4_hvs_atomic_begin(struct drm_crtc *crtc,
++			  struct drm_atomic_state *state)
++{
++	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
++	struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc->state);
++	unsigned long flags;
++
++	spin_lock_irqsave(&vc4_crtc->irq_lock, flags);
++	vc4_crtc->current_hvs_channel = vc4_state->assigned_channel;
++	spin_unlock_irqrestore(&vc4_crtc->irq_lock, flags);
+ }
+ 
+ void vc4_hvs_atomic_enable(struct drm_crtc *crtc,
+@@ -395,10 +410,9 @@ void vc4_hvs_atomic_enable(struct drm_crtc *crtc,
+ {
+ 	struct drm_device *dev = crtc->dev;
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+-	struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
+-	struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(new_crtc_state);
+ 	struct drm_display_mode *mode = &crtc->state->adjusted_mode;
+-	bool oneshot = vc4_state->feed_txp;
++	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
++	bool oneshot = vc4_crtc->feeds_txp;
+ 
+ 	vc4_hvs_update_dlist(crtc);
+ 	vc4_hvs_init_channel(vc4, crtc, mode, oneshot);
+diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
+index b61792d2aa657..6030d4a821555 100644
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -233,6 +233,7 @@ static void vc4_hvs_pv_muxing_commit(struct vc4_dev *vc4,
+ 	unsigned int i;
+ 
+ 	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
++		struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
+ 		struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc_state);
+ 		u32 dispctrl;
+ 		u32 dsp3_mux;
+@@ -253,7 +254,7 @@ static void vc4_hvs_pv_muxing_commit(struct vc4_dev *vc4,
+ 		 * TXP IP, and we need to disable the FIFO2 -> pixelvalve1
+ 		 * route.
+ 		 */
+-		if (vc4_state->feed_txp)
++		if (vc4_crtc->feeds_txp)
+ 			dsp3_mux = VC4_SET_FIELD(3, SCALER_DISPCTRL_DSP3_MUX);
+ 		else
+ 			dsp3_mux = VC4_SET_FIELD(2, SCALER_DISPCTRL_DSP3_MUX);
+diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c
+index 2fc7f4b5fa098..9809ca3e29451 100644
+--- a/drivers/gpu/drm/vc4/vc4_txp.c
++++ b/drivers/gpu/drm/vc4/vc4_txp.c
+@@ -391,7 +391,6 @@ static int vc4_txp_atomic_check(struct drm_crtc *crtc,
+ {
+ 	struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state,
+ 									  crtc);
+-	struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc_state);
+ 	int ret;
+ 
+ 	ret = vc4_hvs_atomic_check(crtc, state);
+@@ -399,7 +398,6 @@ static int vc4_txp_atomic_check(struct drm_crtc *crtc,
+ 		return ret;
+ 
+ 	crtc_state->no_vblank = true;
+-	vc4_state->feed_txp = true;
+ 
+ 	return 0;
+ }
+@@ -437,6 +435,7 @@ static void vc4_txp_atomic_disable(struct drm_crtc *crtc,
+ 
+ static const struct drm_crtc_helper_funcs vc4_txp_crtc_helper_funcs = {
+ 	.atomic_check	= vc4_txp_atomic_check,
++	.atomic_begin	= vc4_hvs_atomic_begin,
+ 	.atomic_flush	= vc4_hvs_atomic_flush,
+ 	.atomic_enable	= vc4_txp_atomic_enable,
+ 	.atomic_disable	= vc4_txp_atomic_disable,
+@@ -482,6 +481,7 @@ static int vc4_txp_bind(struct device *dev, struct device *master, void *data)
+ 
+ 	vc4_crtc->pdev = pdev;
+ 	vc4_crtc->data = &vc4_txp_crtc_data;
++	vc4_crtc->feeds_txp = true;
+ 
+ 	txp->pdev = pdev;
+ 
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index a87eafa89e9f4..c5e3e54577377 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -97,7 +97,7 @@ static struct drm_gem_object *vgem_gem_create_object(struct drm_device *dev, siz
+ 
+ 	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+ 	if (!obj)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	/*
+ 	 * vgem doesn't have any begin/end cpu access ioctls, therefore must use
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index 3607646d32295..c708bab555c6b 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -774,7 +774,7 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
+ 				goto out_unlock;
+ 			}
+ 
+-			if ((vgdev->capset_id_mask & (1 << value)) == 0) {
++			if ((vgdev->capset_id_mask & (1ULL << value)) == 0) {
+ 				ret = -EINVAL;
+ 				goto out_unlock;
+ 			}
+@@ -819,7 +819,7 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
+ 	if (vfpriv->ring_idx_mask) {
+ 		valid_ring_mask = 0;
+ 		for (i = 0; i < vfpriv->num_rings; i++)
+-			valid_ring_mask |= 1 << i;
++			valid_ring_mask |= 1ULL << i;
+ 
+ 		if (~valid_ring_mask & vfpriv->ring_idx_mask) {
+ 			ret = -EINVAL;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index f648b0e24447b..4749c9303de05 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -140,7 +140,7 @@ struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+ 
+ 	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
+ 	if (!shmem)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	dshmem = &shmem->base.base;
+ 	dshmem->base.funcs = &virtio_gpu_shmem_funcs;
+diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile
+index bc323f7d40321..18edc7ca5b454 100644
+--- a/drivers/gpu/drm/vmwgfx/Makefile
++++ b/drivers/gpu/drm/vmwgfx/Makefile
+@@ -9,9 +9,8 @@ vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \
+ 	    vmwgfx_cotable.o vmwgfx_so.o vmwgfx_binding.o vmwgfx_msg.o \
+ 	    vmwgfx_simple_resource.o vmwgfx_va.o vmwgfx_blit.o \
+ 	    vmwgfx_validation.o vmwgfx_page_dirty.o vmwgfx_streamoutput.o \
+-            vmwgfx_devcaps.o ttm_object.o ttm_memory.o
++	    vmwgfx_devcaps.o ttm_object.o ttm_memory.o vmwgfx_system_manager.o
+ 
+ vmwgfx-$(CONFIG_DRM_FBDEV_EMULATION) += vmwgfx_fb.o
+-vmwgfx-$(CONFIG_TRANSPARENT_HUGEPAGE) += vmwgfx_thp.o
+ 
+ obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o
+diff --git a/drivers/gpu/drm/vmwgfx/ttm_memory.c b/drivers/gpu/drm/vmwgfx/ttm_memory.c
+index 7f7fe35fc21df..326d2d177c8bb 100644
+--- a/drivers/gpu/drm/vmwgfx/ttm_memory.c
++++ b/drivers/gpu/drm/vmwgfx/ttm_memory.c
+@@ -34,7 +34,6 @@
+ #include <linux/mm.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+-#include <linux/swap.h>
+ 
+ #include <drm/drm_device.h>
+ #include <drm/drm_file.h>
+@@ -173,69 +172,7 @@ static struct kobj_type ttm_mem_zone_kobj_type = {
+ 	.sysfs_ops = &ttm_mem_zone_ops,
+ 	.default_attrs = ttm_mem_zone_attrs,
+ };
+-
+-static struct attribute ttm_mem_global_lower_mem_limit = {
+-	.name = "lower_mem_limit",
+-	.mode = S_IRUGO | S_IWUSR
+-};
+-
+-static ssize_t ttm_mem_global_show(struct kobject *kobj,
+-				 struct attribute *attr,
+-				 char *buffer)
+-{
+-	struct ttm_mem_global *glob =
+-		container_of(kobj, struct ttm_mem_global, kobj);
+-	uint64_t val = 0;
+-
+-	spin_lock(&glob->lock);
+-	val = glob->lower_mem_limit;
+-	spin_unlock(&glob->lock);
+-	/* convert from number of pages to KB */
+-	val <<= (PAGE_SHIFT - 10);
+-	return snprintf(buffer, PAGE_SIZE, "%llu\n",
+-			(unsigned long long) val);
+-}
+-
+-static ssize_t ttm_mem_global_store(struct kobject *kobj,
+-				  struct attribute *attr,
+-				  const char *buffer,
+-				  size_t size)
+-{
+-	int chars;
+-	uint64_t val64;
+-	unsigned long val;
+-	struct ttm_mem_global *glob =
+-		container_of(kobj, struct ttm_mem_global, kobj);
+-
+-	chars = sscanf(buffer, "%lu", &val);
+-	if (chars == 0)
+-		return size;
+-
+-	val64 = val;
+-	/* convert from KB to number of pages */
+-	val64 >>= (PAGE_SHIFT - 10);
+-
+-	spin_lock(&glob->lock);
+-	glob->lower_mem_limit = val64;
+-	spin_unlock(&glob->lock);
+-
+-	return size;
+-}
+-
+-static struct attribute *ttm_mem_global_attrs[] = {
+-	&ttm_mem_global_lower_mem_limit,
+-	NULL
+-};
+-
+-static const struct sysfs_ops ttm_mem_global_ops = {
+-	.show = &ttm_mem_global_show,
+-	.store = &ttm_mem_global_store,
+-};
+-
+-static struct kobj_type ttm_mem_glob_kobj_type = {
+-	.sysfs_ops = &ttm_mem_global_ops,
+-	.default_attrs = ttm_mem_global_attrs,
+-};
++static struct kobj_type ttm_mem_glob_kobj_type = {0};
+ 
+ static bool ttm_zones_above_swap_target(struct ttm_mem_global *glob,
+ 					bool from_wq, uint64_t extra)
+@@ -435,11 +372,6 @@ int ttm_mem_global_init(struct ttm_mem_global *glob, struct device *dev)
+ 
+ 	si_meminfo(&si);
+ 
+-	spin_lock(&glob->lock);
+-	/* set it as 0 by default to keep original behavior of OOM */
+-	glob->lower_mem_limit = 0;
+-	spin_unlock(&glob->lock);
+-
+ 	ret = ttm_mem_init_kernel_zone(glob, &si);
+ 	if (unlikely(ret != 0))
+ 		goto out_no_zone;
+@@ -526,35 +458,6 @@ void ttm_mem_global_free(struct ttm_mem_global *glob,
+ }
+ EXPORT_SYMBOL(ttm_mem_global_free);
+ 
+-/*
+- * check if the available mem is under lower memory limit
+- *
+- * a. if no swap disk at all or free swap space is under swap_mem_limit
+- * but available system mem is bigger than sys_mem_limit, allow TTM
+- * allocation;
+- *
+- * b. if the available system mem is less than sys_mem_limit but free
+- * swap disk is bigger than swap_mem_limit, allow TTM allocation.
+- */
+-bool
+-ttm_check_under_lowerlimit(struct ttm_mem_global *glob,
+-			uint64_t num_pages,
+-			struct ttm_operation_ctx *ctx)
+-{
+-	int64_t available;
+-
+-	/* We allow over commit during suspend */
+-	if (ctx->force_alloc)
+-		return false;
+-
+-	available = get_nr_swap_pages() + si_mem_available();
+-	available -= num_pages;
+-	if (available < glob->lower_mem_limit)
+-		return true;
+-
+-	return false;
+-}
+-
+ static int ttm_mem_global_reserve(struct ttm_mem_global *glob,
+ 				  struct ttm_mem_zone *single_zone,
+ 				  uint64_t amount, bool reserve)
+diff --git a/drivers/gpu/drm/vmwgfx/ttm_memory.h b/drivers/gpu/drm/vmwgfx/ttm_memory.h
+index c50dba7744854..7b0d617ebcb1e 100644
+--- a/drivers/gpu/drm/vmwgfx/ttm_memory.h
++++ b/drivers/gpu/drm/vmwgfx/ttm_memory.h
+@@ -50,8 +50,6 @@
+  * @work: The workqueue callback for the shrink queue.
+  * @lock: Lock to protect the @shrink - and the memory accounting members,
+  * that is, essentially the whole structure with some exceptions.
+- * @lower_mem_limit: include lower limit of swap space and lower limit of
+- * system memory.
+  * @zones: Array of pointers to accounting zones.
+  * @num_zones: Number of populated entries in the @zones array.
+  * @zone_kernel: Pointer to the kernel zone.
+@@ -69,7 +67,6 @@ extern struct ttm_mem_global {
+ 	struct workqueue_struct *swap_queue;
+ 	struct work_struct work;
+ 	spinlock_t lock;
+-	uint64_t lower_mem_limit;
+ 	struct ttm_mem_zone *zones[TTM_MEM_MAX_ZONES];
+ 	unsigned int num_zones;
+ 	struct ttm_mem_zone *zone_kernel;
+@@ -91,6 +88,5 @@ int ttm_mem_global_alloc_page(struct ttm_mem_global *glob,
+ void ttm_mem_global_free_page(struct ttm_mem_global *glob,
+ 			      struct page *page, uint64_t size);
+ size_t ttm_round_pot(size_t size);
+-bool ttm_check_under_lowerlimit(struct ttm_mem_global *glob, uint64_t num_pages,
+-				struct ttm_operation_ctx *ctx);
++
+ #endif
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
+index 67db472d3493c..a3bfbb6c3e14a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
+@@ -145,6 +145,13 @@ struct vmw_fifo_state *vmw_fifo_create(struct vmw_private *dev_priv)
+ 		 (unsigned int) max,
+ 		 (unsigned int) min,
+ 		 (unsigned int) fifo->capabilities);
++
++	if (unlikely(min >= max)) {
++		drm_warn(&dev_priv->drm,
++			 "FIFO memory is not usable. Driver failed to initialize.");
++		return ERR_PTR(-ENXIO);
++	}
++
+ 	return fifo;
+ }
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index bfd71c86faa58..ab246cff209a1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -707,23 +707,15 @@ static int vmw_dma_masks(struct vmw_private *dev_priv)
+ static int vmw_vram_manager_init(struct vmw_private *dev_priv)
+ {
+ 	int ret;
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-	ret = vmw_thp_init(dev_priv);
+-#else
+ 	ret = ttm_range_man_init(&dev_priv->bdev, TTM_PL_VRAM, false,
+ 				 dev_priv->vram_size >> PAGE_SHIFT);
+-#endif
+ 	ttm_resource_manager_set_used(ttm_manager_type(&dev_priv->bdev, TTM_PL_VRAM), false);
+ 	return ret;
+ }
+ 
+ static void vmw_vram_manager_fini(struct vmw_private *dev_priv)
+ {
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-	vmw_thp_fini(dev_priv);
+-#else
+ 	ttm_range_man_fini(&dev_priv->bdev, TTM_PL_VRAM);
+-#endif
+ }
+ 
+ static int vmw_setup_pci_resources(struct vmw_private *dev,
+@@ -1071,6 +1063,12 @@ static int vmw_driver_load(struct vmw_private *dev_priv, u32 pci_id)
+ 				 "3D will be disabled.\n");
+ 			dev_priv->has_mob = false;
+ 		}
++		if (vmw_sys_man_init(dev_priv) != 0) {
++			drm_info(&dev_priv->drm,
++				 "No MOB page table memory available. "
++				 "3D will be disabled.\n");
++			dev_priv->has_mob = false;
++		}
+ 	}
+ 
+ 	if (dev_priv->has_mob && (dev_priv->capabilities & SVGA_CAP_DX)) {
+@@ -1121,8 +1119,10 @@ out_no_fifo:
+ 	vmw_overlay_close(dev_priv);
+ 	vmw_kms_close(dev_priv);
+ out_no_kms:
+-	if (dev_priv->has_mob)
++	if (dev_priv->has_mob) {
+ 		vmw_gmrid_man_fini(dev_priv, VMW_PL_MOB);
++		vmw_sys_man_fini(dev_priv);
++	}
+ 	if (dev_priv->has_gmr)
+ 		vmw_gmrid_man_fini(dev_priv, VMW_PL_GMR);
+ 	vmw_devcaps_destroy(dev_priv);
+@@ -1172,8 +1172,10 @@ static void vmw_driver_unload(struct drm_device *dev)
+ 		vmw_gmrid_man_fini(dev_priv, VMW_PL_GMR);
+ 
+ 	vmw_release_device_early(dev_priv);
+-	if (dev_priv->has_mob)
++	if (dev_priv->has_mob) {
+ 		vmw_gmrid_man_fini(dev_priv, VMW_PL_MOB);
++		vmw_sys_man_fini(dev_priv);
++	}
+ 	vmw_devcaps_destroy(dev_priv);
+ 	vmw_vram_manager_fini(dev_priv);
+ 	ttm_device_fini(&dev_priv->bdev);
+@@ -1617,34 +1619,40 @@ static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &driver);
+ 	if (ret)
+-		return ret;
++		goto out_error;
+ 
+ 	ret = pcim_enable_device(pdev);
+ 	if (ret)
+-		return ret;
++		goto out_error;
+ 
+ 	vmw = devm_drm_dev_alloc(&pdev->dev, &driver,
+ 				 struct vmw_private, drm);
+-	if (IS_ERR(vmw))
+-		return PTR_ERR(vmw);
++	if (IS_ERR(vmw)) {
++		ret = PTR_ERR(vmw);
++		goto out_error;
++	}
+ 
+ 	pci_set_drvdata(pdev, &vmw->drm);
+ 
+ 	ret = ttm_mem_global_init(&ttm_mem_glob, &pdev->dev);
+ 	if (ret)
+-		return ret;
++		goto out_error;
+ 
+ 	ret = vmw_driver_load(vmw, ent->device);
+ 	if (ret)
+-		return ret;
++		goto out_release;
+ 
+ 	ret = drm_dev_register(&vmw->drm, 0);
+-	if (ret) {
+-		vmw_driver_unload(&vmw->drm);
+-		return ret;
+-	}
++	if (ret)
++		goto out_unload;
+ 
+ 	return 0;
++out_unload:
++	vmw_driver_unload(&vmw->drm);
++out_release:
++	ttm_mem_global_release(&ttm_mem_glob);
++out_error:
++	return ret;
+ }
+ 
+ static int __init vmwgfx_init(void)
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 858aff99a3fe5..2a7cec4cb8a89 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -59,11 +59,8 @@
+ #define VMWGFX_DRIVER_MINOR 19
+ #define VMWGFX_DRIVER_PATCHLEVEL 0
+ #define VMWGFX_FIFO_STATIC_SIZE (1024*1024)
+-#define VMWGFX_MAX_RELOCATIONS 2048
+-#define VMWGFX_MAX_VALIDATIONS 2048
+ #define VMWGFX_MAX_DISPLAYS 16
+ #define VMWGFX_CMD_BOUNCE_INIT_SIZE 32768
+-#define VMWGFX_ENABLE_SCREEN_TARGET_OTABLE 1
+ 
+ #define VMWGFX_PCI_ID_SVGA2              0x0405
+ #define VMWGFX_PCI_ID_SVGA3              0x0406
+@@ -82,8 +79,9 @@
+ 			VMWGFX_NUM_GB_SURFACE +\
+ 			VMWGFX_NUM_GB_SCREEN_TARGET)
+ 
+-#define VMW_PL_GMR (TTM_PL_PRIV + 0)
+-#define VMW_PL_MOB (TTM_PL_PRIV + 1)
++#define VMW_PL_GMR      (TTM_PL_PRIV + 0)
++#define VMW_PL_MOB      (TTM_PL_PRIV + 1)
++#define VMW_PL_SYSTEM   (TTM_PL_PRIV + 2)
+ 
+ #define VMW_RES_CONTEXT ttm_driver_type0
+ #define VMW_RES_SURFACE ttm_driver_type1
+@@ -1039,7 +1037,6 @@ extern struct ttm_placement vmw_vram_placement;
+ extern struct ttm_placement vmw_vram_sys_placement;
+ extern struct ttm_placement vmw_vram_gmr_placement;
+ extern struct ttm_placement vmw_sys_placement;
+-extern struct ttm_placement vmw_evictable_placement;
+ extern struct ttm_placement vmw_srf_placement;
+ extern struct ttm_placement vmw_mob_placement;
+ extern struct ttm_placement vmw_nonfixed_placement;
+@@ -1251,6 +1248,12 @@ int vmw_overlay_num_free_overlays(struct vmw_private *dev_priv);
+ int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type);
+ void vmw_gmrid_man_fini(struct vmw_private *dev_priv, int type);
+ 
++/**
++ * System memory manager
++ */
++int vmw_sys_man_init(struct vmw_private *dev_priv);
++void vmw_sys_man_fini(struct vmw_private *dev_priv);
++
+ /**
+  * Prime - vmwgfx_prime.c
+  */
+@@ -1551,11 +1554,6 @@ void vmw_bo_dirty_unmap(struct vmw_buffer_object *vbo,
+ vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf);
+ vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf);
+ 
+-/* Transparent hugepage support - vmwgfx_thp.c */
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-extern int vmw_thp_init(struct vmw_private *dev_priv);
+-void vmw_thp_fini(struct vmw_private *dev_priv);
+-#endif
+ 
+ /**
+  * VMW_DEBUG_KMS - Debug output for kernel mode-setting
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+index f9394207dd3cc..632e587519722 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright 2012-2015 VMware, Inc., Palo Alto, CA., USA
++ * Copyright 2012-2021 VMware, Inc., Palo Alto, CA., USA
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -29,12 +29,6 @@
+ 
+ #include "vmwgfx_drv.h"
+ 
+-/*
+- * If we set up the screen target otable, screen objects stop working.
+- */
+-
+-#define VMW_OTABLE_SETUP_SUB ((VMWGFX_ENABLE_SCREEN_TARGET_OTABLE ? 0 : 1))
+-
+ #ifdef CONFIG_64BIT
+ #define VMW_PPN_SIZE 8
+ #define VMW_MOBFMT_PTDEPTH_0 SVGA3D_MOBFMT_PT64_0
+@@ -75,7 +69,7 @@ static const struct vmw_otable pre_dx_tables[] = {
+ 	{VMWGFX_NUM_GB_CONTEXT * sizeof(SVGAOTableContextEntry), NULL, true},
+ 	{VMWGFX_NUM_GB_SHADER * sizeof(SVGAOTableShaderEntry), NULL, true},
+ 	{VMWGFX_NUM_GB_SCREEN_TARGET * sizeof(SVGAOTableScreenTargetEntry),
+-	 NULL, VMWGFX_ENABLE_SCREEN_TARGET_OTABLE}
++	 NULL, true}
+ };
+ 
+ static const struct vmw_otable dx_tables[] = {
+@@ -84,7 +78,7 @@ static const struct vmw_otable dx_tables[] = {
+ 	{VMWGFX_NUM_GB_CONTEXT * sizeof(SVGAOTableContextEntry), NULL, true},
+ 	{VMWGFX_NUM_GB_SHADER * sizeof(SVGAOTableShaderEntry), NULL, true},
+ 	{VMWGFX_NUM_GB_SCREEN_TARGET * sizeof(SVGAOTableScreenTargetEntry),
+-	 NULL, VMWGFX_ENABLE_SCREEN_TARGET_OTABLE},
++	 NULL, true},
+ 	{VMWGFX_NUM_DXCONTEXT * sizeof(SVGAOTableDXContextEntry), NULL, true},
+ };
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+index d85310b2608dd..f5e90d0e2d0f8 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+@@ -1872,8 +1872,8 @@ int vmw_kms_stdu_init_display(struct vmw_private *dev_priv)
+ 	int i, ret;
+ 
+ 
+-	/* Do nothing if Screen Target support is turned off */
+-	if (!VMWGFX_ENABLE_SCREEN_TARGET_OTABLE || !dev_priv->has_mob)
++	/* Do nothing if there's no support for MOBs */
++	if (!dev_priv->has_mob)
+ 		return -ENOSYS;
+ 
+ 	if (!(dev_priv->capabilities & SVGA_CAP_GBOBJECTS))
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_system_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_system_manager.c
+new file mode 100644
+index 0000000000000..b0005b03a6174
+--- /dev/null
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_system_manager.c
+@@ -0,0 +1,90 @@
++/* SPDX-License-Identifier: GPL-2.0 OR MIT */
++/*
++ * Copyright 2021 VMware, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person
++ * obtaining a copy of this software and associated documentation
++ * files (the "Software"), to deal in the Software without
++ * restriction, including without limitation the rights to use, copy,
++ * modify, merge, publish, distribute, sublicense, and/or sell copies
++ * of the Software, and to permit persons to whom the Software is
++ * furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be
++ * included in all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
++ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
++ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
++ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
++ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
++ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
++ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
++ * SOFTWARE.
++ *
++ */
++
++#include "vmwgfx_drv.h"
++
++#include <drm/ttm/ttm_bo_driver.h>
++#include <drm/ttm/ttm_device.h>
++#include <drm/ttm/ttm_placement.h>
++#include <drm/ttm/ttm_resource.h>
++#include <linux/slab.h>
++
++
++static int vmw_sys_man_alloc(struct ttm_resource_manager *man,
++			     struct ttm_buffer_object *bo,
++			     const struct ttm_place *place,
++			     struct ttm_resource **res)
++{
++	*res = kzalloc(sizeof(**res), GFP_KERNEL);
++	if (!*res)
++		return -ENOMEM;
++
++	ttm_resource_init(bo, place, *res);
++	return 0;
++}
++
++static void vmw_sys_man_free(struct ttm_resource_manager *man,
++			     struct ttm_resource *res)
++{
++	kfree(res);
++}
++
++static const struct ttm_resource_manager_func vmw_sys_manager_func = {
++	.alloc = vmw_sys_man_alloc,
++	.free = vmw_sys_man_free,
++};
++
++int vmw_sys_man_init(struct vmw_private *dev_priv)
++{
++	struct ttm_device *bdev = &dev_priv->bdev;
++	struct ttm_resource_manager *man =
++			kzalloc(sizeof(*man), GFP_KERNEL);
++
++	if (!man)
++		return -ENOMEM;
++
++	man->use_tt = true;
++	man->func = &vmw_sys_manager_func;
++
++	ttm_resource_manager_init(man, 0);
++	ttm_set_driver_manager(bdev, VMW_PL_SYSTEM, man);
++	ttm_resource_manager_set_used(man, true);
++	return 0;
++}
++
++void vmw_sys_man_fini(struct vmw_private *dev_priv)
++{
++	struct ttm_resource_manager *man = ttm_manager_type(&dev_priv->bdev,
++							    VMW_PL_SYSTEM);
++
++	ttm_resource_manager_evict_all(&dev_priv->bdev, man);
++
++	ttm_resource_manager_set_used(man, false);
++	ttm_resource_manager_cleanup(man);
++
++	ttm_set_driver_manager(&dev_priv->bdev, VMW_PL_SYSTEM, NULL);
++	kfree(man);
++}
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c b/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
+deleted file mode 100644
+index 2a3d3468e4e0a..0000000000000
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
++++ /dev/null
+@@ -1,184 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0 OR MIT
+-/*
+- * Huge page-table-entry support for IO memory.
+- *
+- * Copyright (C) 2007-2019 Vmware, Inc. All rights reservedd.
+- */
+-#include "vmwgfx_drv.h"
+-#include <drm/ttm/ttm_bo_driver.h>
+-#include <drm/ttm/ttm_placement.h>
+-#include <drm/ttm/ttm_range_manager.h>
+-
+-/**
+- * struct vmw_thp_manager - Range manager implementing huge page alignment
+- *
+- * @manager: TTM resource manager.
+- * @mm: The underlying range manager. Protected by @lock.
+- * @lock: Manager lock.
+- */
+-struct vmw_thp_manager {
+-	struct ttm_resource_manager manager;
+-	struct drm_mm mm;
+-	spinlock_t lock;
+-};
+-
+-static struct vmw_thp_manager *to_thp_manager(struct ttm_resource_manager *man)
+-{
+-	return container_of(man, struct vmw_thp_manager, manager);
+-}
+-
+-static const struct ttm_resource_manager_func vmw_thp_func;
+-
+-static int vmw_thp_insert_aligned(struct ttm_buffer_object *bo,
+-				  struct drm_mm *mm, struct drm_mm_node *node,
+-				  unsigned long align_pages,
+-				  const struct ttm_place *place,
+-				  struct ttm_resource *mem,
+-				  unsigned long lpfn,
+-				  enum drm_mm_insert_mode mode)
+-{
+-	if (align_pages >= bo->page_alignment &&
+-	    (!bo->page_alignment || align_pages % bo->page_alignment == 0)) {
+-		return drm_mm_insert_node_in_range(mm, node,
+-						   mem->num_pages,
+-						   align_pages, 0,
+-						   place->fpfn, lpfn, mode);
+-	}
+-
+-	return -ENOSPC;
+-}
+-
+-static int vmw_thp_get_node(struct ttm_resource_manager *man,
+-			    struct ttm_buffer_object *bo,
+-			    const struct ttm_place *place,
+-			    struct ttm_resource **res)
+-{
+-	struct vmw_thp_manager *rman = to_thp_manager(man);
+-	struct drm_mm *mm = &rman->mm;
+-	struct ttm_range_mgr_node *node;
+-	unsigned long align_pages;
+-	unsigned long lpfn;
+-	enum drm_mm_insert_mode mode = DRM_MM_INSERT_BEST;
+-	int ret;
+-
+-	node = kzalloc(struct_size(node, mm_nodes, 1), GFP_KERNEL);
+-	if (!node)
+-		return -ENOMEM;
+-
+-	ttm_resource_init(bo, place, &node->base);
+-
+-	lpfn = place->lpfn;
+-	if (!lpfn)
+-		lpfn = man->size;
+-
+-	mode = DRM_MM_INSERT_BEST;
+-	if (place->flags & TTM_PL_FLAG_TOPDOWN)
+-		mode = DRM_MM_INSERT_HIGH;
+-
+-	spin_lock(&rman->lock);
+-	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)) {
+-		align_pages = (HPAGE_PUD_SIZE >> PAGE_SHIFT);
+-		if (node->base.num_pages >= align_pages) {
+-			ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
+-						     align_pages, place,
+-						     &node->base, lpfn, mode);
+-			if (!ret)
+-				goto found_unlock;
+-		}
+-	}
+-
+-	align_pages = (HPAGE_PMD_SIZE >> PAGE_SHIFT);
+-	if (node->base.num_pages >= align_pages) {
+-		ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
+-					     align_pages, place, &node->base,
+-					     lpfn, mode);
+-		if (!ret)
+-			goto found_unlock;
+-	}
+-
+-	ret = drm_mm_insert_node_in_range(mm, &node->mm_nodes[0],
+-					  node->base.num_pages,
+-					  bo->page_alignment, 0,
+-					  place->fpfn, lpfn, mode);
+-found_unlock:
+-	spin_unlock(&rman->lock);
+-
+-	if (unlikely(ret)) {
+-		kfree(node);
+-	} else {
+-		node->base.start = node->mm_nodes[0].start;
+-		*res = &node->base;
+-	}
+-
+-	return ret;
+-}
+-
+-static void vmw_thp_put_node(struct ttm_resource_manager *man,
+-			     struct ttm_resource *res)
+-{
+-	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
+-	struct vmw_thp_manager *rman = to_thp_manager(man);
+-
+-	spin_lock(&rman->lock);
+-	drm_mm_remove_node(&node->mm_nodes[0]);
+-	spin_unlock(&rman->lock);
+-
+-	kfree(node);
+-}
+-
+-int vmw_thp_init(struct vmw_private *dev_priv)
+-{
+-	struct vmw_thp_manager *rman;
+-
+-	rman = kzalloc(sizeof(*rman), GFP_KERNEL);
+-	if (!rman)
+-		return -ENOMEM;
+-
+-	ttm_resource_manager_init(&rman->manager,
+-				  dev_priv->vram_size >> PAGE_SHIFT);
+-
+-	rman->manager.func = &vmw_thp_func;
+-	drm_mm_init(&rman->mm, 0, rman->manager.size);
+-	spin_lock_init(&rman->lock);
+-
+-	ttm_set_driver_manager(&dev_priv->bdev, TTM_PL_VRAM, &rman->manager);
+-	ttm_resource_manager_set_used(&rman->manager, true);
+-	return 0;
+-}
+-
+-void vmw_thp_fini(struct vmw_private *dev_priv)
+-{
+-	struct ttm_resource_manager *man = ttm_manager_type(&dev_priv->bdev, TTM_PL_VRAM);
+-	struct vmw_thp_manager *rman = to_thp_manager(man);
+-	struct drm_mm *mm = &rman->mm;
+-	int ret;
+-
+-	ttm_resource_manager_set_used(man, false);
+-
+-	ret = ttm_resource_manager_evict_all(&dev_priv->bdev, man);
+-	if (ret)
+-		return;
+-	spin_lock(&rman->lock);
+-	drm_mm_clean(mm);
+-	drm_mm_takedown(mm);
+-	spin_unlock(&rman->lock);
+-	ttm_resource_manager_cleanup(man);
+-	ttm_set_driver_manager(&dev_priv->bdev, TTM_PL_VRAM, NULL);
+-	kfree(rman);
+-}
+-
+-static void vmw_thp_debug(struct ttm_resource_manager *man,
+-			  struct drm_printer *printer)
+-{
+-	struct vmw_thp_manager *rman = to_thp_manager(man);
+-
+-	spin_lock(&rman->lock);
+-	drm_mm_print(&rman->mm, printer);
+-	spin_unlock(&rman->lock);
+-}
+-
+-static const struct ttm_resource_manager_func vmw_thp_func = {
+-	.alloc = vmw_thp_get_node,
+-	.free = vmw_thp_put_node,
+-	.debug = vmw_thp_debug
+-};
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+index e899a936a42a0..b15228e7dbeb8 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+@@ -92,6 +92,13 @@ static const struct ttm_place gmr_vram_placement_flags[] = {
+ 	}
+ };
+ 
++static const struct ttm_place vmw_sys_placement_flags = {
++	.fpfn = 0,
++	.lpfn = 0,
++	.mem_type = VMW_PL_SYSTEM,
++	.flags = 0
++};
++
+ struct ttm_placement vmw_vram_gmr_placement = {
+ 	.num_placement = 2,
+ 	.placement = vram_gmr_placement_flags,
+@@ -113,28 +120,11 @@ struct ttm_placement vmw_sys_placement = {
+ 	.busy_placement = &sys_placement_flags
+ };
+ 
+-static const struct ttm_place evictable_placement_flags[] = {
+-	{
+-		.fpfn = 0,
+-		.lpfn = 0,
+-		.mem_type = TTM_PL_SYSTEM,
+-		.flags = 0
+-	}, {
+-		.fpfn = 0,
+-		.lpfn = 0,
+-		.mem_type = TTM_PL_VRAM,
+-		.flags = 0
+-	}, {
+-		.fpfn = 0,
+-		.lpfn = 0,
+-		.mem_type = VMW_PL_GMR,
+-		.flags = 0
+-	}, {
+-		.fpfn = 0,
+-		.lpfn = 0,
+-		.mem_type = VMW_PL_MOB,
+-		.flags = 0
+-	}
++struct ttm_placement vmw_pt_sys_placement = {
++	.num_placement = 1,
++	.placement = &vmw_sys_placement_flags,
++	.num_busy_placement = 1,
++	.busy_placement = &vmw_sys_placement_flags
+ };
+ 
+ static const struct ttm_place nonfixed_placement_flags[] = {
+@@ -156,13 +146,6 @@ static const struct ttm_place nonfixed_placement_flags[] = {
+ 	}
+ };
+ 
+-struct ttm_placement vmw_evictable_placement = {
+-	.num_placement = 4,
+-	.placement = evictable_placement_flags,
+-	.num_busy_placement = 1,
+-	.busy_placement = &sys_placement_flags
+-};
+-
+ struct ttm_placement vmw_srf_placement = {
+ 	.num_placement = 1,
+ 	.num_busy_placement = 2,
+@@ -484,6 +467,9 @@ static int vmw_ttm_bind(struct ttm_device *bdev,
+ 				    &vmw_be->vsgt, ttm->num_pages,
+ 				    vmw_be->gmr_id);
+ 		break;
++	case VMW_PL_SYSTEM:
++		/* Nothing to be done for a system bind */
++		break;
+ 	default:
+ 		BUG();
+ 	}
+@@ -507,6 +493,8 @@ static void vmw_ttm_unbind(struct ttm_device *bdev,
+ 	case VMW_PL_MOB:
+ 		vmw_mob_unbind(vmw_be->dev_priv, vmw_be->mob);
+ 		break;
++	case VMW_PL_SYSTEM:
++		break;
+ 	default:
+ 		BUG();
+ 	}
+@@ -624,6 +612,7 @@ static int vmw_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *
+ 
+ 	switch (mem->mem_type) {
+ 	case TTM_PL_SYSTEM:
++	case VMW_PL_SYSTEM:
+ 	case VMW_PL_GMR:
+ 	case VMW_PL_MOB:
+ 		return 0;
+@@ -670,6 +659,11 @@ static void vmw_swap_notify(struct ttm_buffer_object *bo)
+ 	(void) ttm_bo_wait(bo, false, false);
+ }
+ 
++static bool vmw_memtype_is_system(uint32_t mem_type)
++{
++	return mem_type == TTM_PL_SYSTEM || mem_type == VMW_PL_SYSTEM;
++}
++
+ static int vmw_move(struct ttm_buffer_object *bo,
+ 		    bool evict,
+ 		    struct ttm_operation_ctx *ctx,
+@@ -680,7 +674,7 @@ static int vmw_move(struct ttm_buffer_object *bo,
+ 	struct ttm_resource_manager *new_man = ttm_manager_type(bo->bdev, new_mem->mem_type);
+ 	int ret;
+ 
+-	if (new_man->use_tt && new_mem->mem_type != TTM_PL_SYSTEM) {
++	if (new_man->use_tt && !vmw_memtype_is_system(new_mem->mem_type)) {
+ 		ret = vmw_ttm_bind(bo->bdev, bo->ttm, new_mem);
+ 		if (ret)
+ 			return ret;
+@@ -689,7 +683,7 @@ static int vmw_move(struct ttm_buffer_object *bo,
+ 	vmw_move_notify(bo, bo->resource, new_mem);
+ 
+ 	if (old_man->use_tt && new_man->use_tt) {
+-		if (bo->resource->mem_type == TTM_PL_SYSTEM) {
++		if (vmw_memtype_is_system(bo->resource->mem_type)) {
+ 			ttm_bo_move_null(bo, new_mem);
+ 			return 0;
+ 		}
+@@ -736,7 +730,7 @@ int vmw_bo_create_and_populate(struct vmw_private *dev_priv,
+ 	int ret;
+ 
+ 	ret = vmw_bo_create_kernel(dev_priv, bo_size,
+-				   &vmw_sys_placement,
++				   &vmw_pt_sys_placement,
+ 				   &bo);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+diff --git a/drivers/gpu/host1x/Kconfig b/drivers/gpu/host1x/Kconfig
+index 6dab94adf25e5..6815b4db17c1b 100644
+--- a/drivers/gpu/host1x/Kconfig
++++ b/drivers/gpu/host1x/Kconfig
+@@ -2,6 +2,7 @@
+ config TEGRA_HOST1X
+ 	tristate "NVIDIA Tegra host1x driver"
+ 	depends on ARCH_TEGRA || (ARM && COMPILE_TEST)
++	select DMA_SHARED_BUFFER
+ 	select IOMMU_IOVA
+ 	help
+ 	  Driver for the NVIDIA Tegra host1x hardware.
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index fbb6447b8659e..3872e4cd26989 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -18,6 +18,10 @@
+ #include <trace/events/host1x.h>
+ #undef CREATE_TRACE_POINTS
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++#include <asm/dma-iommu.h>
++#endif
++
+ #include "bus.h"
+ #include "channel.h"
+ #include "debug.h"
+@@ -238,6 +242,17 @@ static struct iommu_domain *host1x_iommu_attach(struct host1x *host)
+ 	struct iommu_domain *domain = iommu_get_domain_for_dev(host->dev);
+ 	int err;
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++	if (host->dev->archdata.mapping) {
++		struct dma_iommu_mapping *mapping =
++				to_dma_iommu_mapping(host->dev);
++		arm_iommu_detach_device(host->dev);
++		arm_iommu_release_mapping(mapping);
++
++		domain = iommu_get_domain_for_dev(host->dev);
++	}
++#endif
++
+ 	/*
+ 	 * We may not always want to enable IOMMU support (for example if the
+ 	 * host1x firewall is already enabled and we don't support addressing
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 2c9c5faa74a97..a4ca5ed00e5f5 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -428,7 +428,7 @@ static int apple_input_configured(struct hid_device *hdev,
+ 
+ 	if ((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) {
+ 		hid_info(hdev, "Fn key not found (Apple Wireless Keyboard clone?), disabling Fn key handling\n");
+-		asc->quirks = 0;
++		asc->quirks &= ~APPLE_HAS_FN;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 19da07777d628..a5a5a64c7abc4 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -398,6 +398,7 @@
+ #define USB_DEVICE_ID_HP_X2		0x074d
+ #define USB_DEVICE_ID_HP_X2_10_COVER	0x0755
+ #define I2C_DEVICE_ID_HP_ENVY_X360_15	0x2d05
++#define I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100	0x29CF
+ #define I2C_DEVICE_ID_HP_SPECTRE_X360_15	0x2817
+ #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN	0x2544
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 03f994541981c..87fee137ff65e 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -329,6 +329,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ 	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15),
+ 	  HID_BATTERY_QUIRK_IGNORE },
++	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100),
++	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
+ 	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),
+@@ -1333,6 +1335,12 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
+ 
+ 	input = field->hidinput->input;
+ 
++	if (usage->type == EV_ABS &&
++	    (((*quirks & HID_QUIRK_X_INVERT) && usage->code == ABS_X) ||
++	     ((*quirks & HID_QUIRK_Y_INVERT) && usage->code == ABS_Y))) {
++		value = field->logical_maximum - value;
++	}
++
+ 	if (usage->hat_min < usage->hat_max || usage->hat_dir) {
+ 		int hat_dir = usage->hat_dir;
+ 		if (!hat_dir)
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index d7687ce706144..b8b08f0a8c541 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -57,6 +57,8 @@ MODULE_PARM_DESC(report_undeciphered, "Report undeciphered multi-touch state fie
+ #define MOUSE_REPORT_ID    0x29
+ #define MOUSE2_REPORT_ID   0x12
+ #define DOUBLE_REPORT_ID   0xf7
++#define USB_BATTERY_TIMEOUT_MS 60000
++
+ /* These definitions are not precise, but they're close enough.  (Bits
+  * 0x03 seem to indicate the aspect ratio of the touch, bits 0x70 seem
+  * to be some kind of bit mask -- 0x20 may be a near-field reading,
+@@ -140,6 +142,7 @@ struct magicmouse_sc {
+ 
+ 	struct hid_device *hdev;
+ 	struct delayed_work work;
++	struct timer_list battery_timer;
+ };
+ 
+ static int magicmouse_firm_touch(struct magicmouse_sc *msc)
+@@ -738,6 +741,44 @@ static void magicmouse_enable_mt_work(struct work_struct *work)
+ 		hid_err(msc->hdev, "unable to request touch data (%d)\n", ret);
+ }
+ 
++static int magicmouse_fetch_battery(struct hid_device *hdev)
++{
++#ifdef CONFIG_HID_BATTERY_STRENGTH
++	struct hid_report_enum *report_enum;
++	struct hid_report *report;
++
++	if (!hdev->battery || hdev->vendor != USB_VENDOR_ID_APPLE ||
++	    (hdev->product != USB_DEVICE_ID_APPLE_MAGICMOUSE2 &&
++	     hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2))
++		return -1;
++
++	report_enum = &hdev->report_enum[hdev->battery_report_type];
++	report = report_enum->report_id_hash[hdev->battery_report_id];
++
++	if (!report || report->maxfield < 1)
++		return -1;
++
++	if (hdev->battery_capacity == hdev->battery_max)
++		return -1;
++
++	hid_hw_request(hdev, report, HID_REQ_GET_REPORT);
++	return 0;
++#else
++	return -1;
++#endif
++}
++
++static void magicmouse_battery_timer_tick(struct timer_list *t)
++{
++	struct magicmouse_sc *msc = from_timer(msc, t, battery_timer);
++	struct hid_device *hdev = msc->hdev;
++
++	if (magicmouse_fetch_battery(hdev) == 0) {
++		mod_timer(&msc->battery_timer,
++			  jiffies + msecs_to_jiffies(USB_BATTERY_TIMEOUT_MS));
++	}
++}
++
+ static int magicmouse_probe(struct hid_device *hdev,
+ 	const struct hid_device_id *id)
+ {
+@@ -745,11 +786,6 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 	struct hid_report *report;
+ 	int ret;
+ 
+-	if (id->vendor == USB_VENDOR_ID_APPLE &&
+-	    id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
+-	    hdev->type != HID_TYPE_USBMOUSE)
+-		return -ENODEV;
+-
+ 	msc = devm_kzalloc(&hdev->dev, sizeof(*msc), GFP_KERNEL);
+ 	if (msc == NULL) {
+ 		hid_err(hdev, "can't alloc magicmouse descriptor\n");
+@@ -775,6 +811,16 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 		return ret;
+ 	}
+ 
++	timer_setup(&msc->battery_timer, magicmouse_battery_timer_tick, 0);
++	mod_timer(&msc->battery_timer,
++		  jiffies + msecs_to_jiffies(USB_BATTERY_TIMEOUT_MS));
++	magicmouse_fetch_battery(hdev);
++
++	if (id->vendor == USB_VENDOR_ID_APPLE &&
++	    (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
++	     (id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 && hdev->type != HID_TYPE_USBMOUSE)))
++		return 0;
++
+ 	if (!msc->input) {
+ 		hid_err(hdev, "magicmouse input not registered\n");
+ 		ret = -ENOMEM;
+@@ -827,6 +873,7 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 
+ 	return 0;
+ err_stop_hw:
++	del_timer_sync(&msc->battery_timer);
+ 	hid_hw_stop(hdev);
+ 	return ret;
+ }
+@@ -835,17 +882,52 @@ static void magicmouse_remove(struct hid_device *hdev)
+ {
+ 	struct magicmouse_sc *msc = hid_get_drvdata(hdev);
+ 
+-	if (msc)
++	if (msc) {
+ 		cancel_delayed_work_sync(&msc->work);
++		del_timer_sync(&msc->battery_timer);
++	}
+ 
+ 	hid_hw_stop(hdev);
+ }
+ 
++static __u8 *magicmouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
++				     unsigned int *rsize)
++{
++	/*
++	 * Change the usage from:
++	 *   0x06, 0x00, 0xff, // Usage Page (Vendor Defined Page 1)  0
++	 *   0x09, 0x0b,       // Usage (Vendor Usage 0x0b)           3
++	 * To:
++	 *   0x05, 0x01,       // Usage Page (Generic Desktop)        0
++	 *   0x09, 0x02,       // Usage (Mouse)                       2
++	 */
++	if (hdev->vendor == USB_VENDOR_ID_APPLE &&
++	    (hdev->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
++	     hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) &&
++	    *rsize == 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) {
++		hid_info(hdev,
++			 "fixing up magicmouse battery report descriptor\n");
++		*rsize = *rsize - 1;
++		rdesc = kmemdup(rdesc + 1, *rsize, GFP_KERNEL);
++		if (!rdesc)
++			return NULL;
++
++		rdesc[0] = 0x05;
++		rdesc[1] = 0x01;
++		rdesc[2] = 0x09;
++		rdesc[3] = 0x02;
++	}
++
++	return rdesc;
++}
++
+ static const struct hid_device_id magic_mice[] = {
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
+ 		USB_DEVICE_ID_APPLE_MAGICMOUSE), .driver_data = 0 },
+ 	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE,
+ 		USB_DEVICE_ID_APPLE_MAGICMOUSE2), .driver_data = 0 },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE,
++		USB_DEVICE_ID_APPLE_MAGICMOUSE2), .driver_data = 0 },
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
+ 		USB_DEVICE_ID_APPLE_MAGICTRACKPAD), .driver_data = 0 },
+ 	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE,
+@@ -861,6 +943,7 @@ static struct hid_driver magicmouse_driver = {
+ 	.id_table = magic_mice,
+ 	.probe = magicmouse_probe,
+ 	.remove = magicmouse_remove,
++	.report_fixup = magicmouse_report_fixup,
+ 	.raw_event = magicmouse_raw_event,
+ 	.event = magicmouse_event,
+ 	.input_mapping = magicmouse_input_mapping,
+diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c
+index adff1bd68d9f8..3e70f969fb849 100644
+--- a/drivers/hid/hid-uclogic-params.c
++++ b/drivers/hid/hid-uclogic-params.c
+@@ -66,7 +66,7 @@ static int uclogic_params_get_str_desc(__u8 **pbuf, struct hid_device *hdev,
+ 					__u8 idx, size_t len)
+ {
+ 	int rc;
+-	struct usb_device *udev = hid_to_usb_dev(hdev);
++	struct usb_device *udev;
+ 	__u8 *buf = NULL;
+ 
+ 	/* Check arguments */
+@@ -75,6 +75,8 @@ static int uclogic_params_get_str_desc(__u8 **pbuf, struct hid_device *hdev,
+ 		goto cleanup;
+ 	}
+ 
++	udev = hid_to_usb_dev(hdev);
++
+ 	buf = kmalloc(len, GFP_KERNEL);
+ 	if (buf == NULL) {
+ 		rc = -ENOMEM;
+@@ -450,7 +452,7 @@ static int uclogic_params_frame_init_v1_buttonpad(
+ {
+ 	int rc;
+ 	bool found = false;
+-	struct usb_device *usb_dev = hid_to_usb_dev(hdev);
++	struct usb_device *usb_dev;
+ 	char *str_buf = NULL;
+ 	const size_t str_len = 16;
+ 
+@@ -460,6 +462,8 @@ static int uclogic_params_frame_init_v1_buttonpad(
+ 		goto cleanup;
+ 	}
+ 
++	usb_dev = hid_to_usb_dev(hdev);
++
+ 	/*
+ 	 * Enable generic button mode
+ 	 */
+@@ -707,9 +711,9 @@ static int uclogic_params_huion_init(struct uclogic_params *params,
+ 				     struct hid_device *hdev)
+ {
+ 	int rc;
+-	struct usb_device *udev = hid_to_usb_dev(hdev);
+-	struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
+-	__u8 bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++	struct usb_device *udev;
++	struct usb_interface *iface;
++	__u8 bInterfaceNumber;
+ 	bool found;
+ 	/* The resulting parameters (noop) */
+ 	struct uclogic_params p = {0, };
+@@ -723,6 +727,10 @@ static int uclogic_params_huion_init(struct uclogic_params *params,
+ 		goto cleanup;
+ 	}
+ 
++	udev = hid_to_usb_dev(hdev);
++	iface = to_usb_interface(hdev->dev.parent);
++	bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++
+ 	/* If it's not a pen interface */
+ 	if (bInterfaceNumber != 0) {
+ 		/* TODO: Consider marking the interface invalid */
+@@ -834,10 +842,10 @@ int uclogic_params_init(struct uclogic_params *params,
+ 			struct hid_device *hdev)
+ {
+ 	int rc;
+-	struct usb_device *udev = hid_to_usb_dev(hdev);
+-	__u8  bNumInterfaces = udev->config->desc.bNumInterfaces;
+-	struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
+-	__u8 bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++	struct usb_device *udev;
++	__u8  bNumInterfaces;
++	struct usb_interface *iface;
++	__u8 bInterfaceNumber;
+ 	bool found;
+ 	/* The resulting parameters (noop) */
+ 	struct uclogic_params p = {0, };
+@@ -848,6 +856,11 @@ int uclogic_params_init(struct uclogic_params *params,
+ 		goto cleanup;
+ 	}
+ 
++	udev = hid_to_usb_dev(hdev);
++	bNumInterfaces = udev->config->desc.bNumInterfaces;
++	iface = to_usb_interface(hdev->dev.parent);
++	bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
++
+ 	/*
+ 	 * Set replacement report descriptor if the original matches the
+ 	 * specified size. Otherwise keep interface unchanged.
+diff --git a/drivers/hid/hid-vivaldi.c b/drivers/hid/hid-vivaldi.c
+index 72957a9f71170..576518e704ee6 100644
+--- a/drivers/hid/hid-vivaldi.c
++++ b/drivers/hid/hid-vivaldi.c
+@@ -74,10 +74,11 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
+ 				    struct hid_usage *usage)
+ {
+ 	struct vivaldi_data *drvdata = hid_get_drvdata(hdev);
++	struct hid_report *report = field->report;
+ 	int fn_key;
+ 	int ret;
+ 	u32 report_len;
+-	u8 *buf;
++	u8 *report_data, *buf;
+ 
+ 	if (field->logical != HID_USAGE_FN_ROW_PHYSMAP ||
+ 	    (usage->hid & HID_USAGE_PAGE) != HID_UP_ORDINAL)
+@@ -89,12 +90,24 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
+ 	if (fn_key > drvdata->max_function_row_key)
+ 		drvdata->max_function_row_key = fn_key;
+ 
+-	buf = hid_alloc_report_buf(field->report, GFP_KERNEL);
+-	if (!buf)
++	report_data = buf = hid_alloc_report_buf(report, GFP_KERNEL);
++	if (!report_data)
+ 		return;
+ 
+-	report_len = hid_report_len(field->report);
+-	ret = hid_hw_raw_request(hdev, field->report->id, buf,
++	report_len = hid_report_len(report);
++	if (!report->id) {
++		/*
++		 * hid_hw_raw_request() will stuff report ID (which will be 0)
++		 * into the first byte of the buffer even for unnumbered
++		 * reports, so we need to account for this to avoid getting
++		 * -EOVERFLOW in return.
++		 * Note that hid_alloc_report_buf() adds 7 bytes to the size
++		 * so we can safely say that we have space for an extra byte.
++		 */
++		report_len++;
++	}
++
++	ret = hid_hw_raw_request(hdev, report->id, report_data,
+ 				 report_len, HID_FEATURE_REPORT,
+ 				 HID_REQ_GET_REPORT);
+ 	if (ret < 0) {
+@@ -103,7 +116,16 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
+ 		goto out;
+ 	}
+ 
+-	ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, buf,
++	if (!report->id) {
++		/*
++		 * Undo the damage from hid_hw_raw_request() for unnumbered
++		 * reports.
++		 */
++		report_data++;
++		report_len--;
++	}
++
++	ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, report_data,
+ 				   report_len, 0);
+ 	if (ret) {
+ 		dev_warn(&hdev->dev, "failed to report feature %d\n",
+diff --git a/drivers/hid/i2c-hid/i2c-hid-acpi.c b/drivers/hid/i2c-hid/i2c-hid-acpi.c
+index a6f0257a26de3..b96ae15e0ad91 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-acpi.c
++++ b/drivers/hid/i2c-hid/i2c-hid-acpi.c
+@@ -111,7 +111,7 @@ static int i2c_hid_acpi_probe(struct i2c_client *client)
+ 	}
+ 
+ 	return i2c_hid_core_probe(client, &ihid_acpi->ops,
+-				  hid_descriptor_address);
++				  hid_descriptor_address, 0);
+ }
+ 
+ static const struct acpi_device_id i2c_hid_acpi_match[] = {
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 517141138b007..4804d71e5293a 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -912,7 +912,7 @@ static void i2c_hid_core_shutdown_tail(struct i2c_hid *ihid)
+ }
+ 
+ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
+-		       u16 hid_descriptor_address)
++		       u16 hid_descriptor_address, u32 quirks)
+ {
+ 	int ret;
+ 	struct i2c_hid *ihid;
+@@ -1009,6 +1009,8 @@ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
+ 		goto err_mem_free;
+ 	}
+ 
++	hid->quirks |= quirks;
++
+ 	return 0;
+ 
+ err_mem_free:
+diff --git a/drivers/hid/i2c-hid/i2c-hid-of-goodix.c b/drivers/hid/i2c-hid/i2c-hid-of-goodix.c
+index 52674149a2750..b4dad66fa954d 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-of-goodix.c
++++ b/drivers/hid/i2c-hid/i2c-hid-of-goodix.c
+@@ -150,7 +150,7 @@ static int i2c_hid_of_goodix_probe(struct i2c_client *client,
+ 		goodix_i2c_hid_deassert_reset(ihid_goodix, true);
+ 	mutex_unlock(&ihid_goodix->regulator_mutex);
+ 
+-	return i2c_hid_core_probe(client, &ihid_goodix->ops, 0x0001);
++	return i2c_hid_core_probe(client, &ihid_goodix->ops, 0x0001, 0);
+ }
+ 
+ static const struct goodix_i2c_hid_timing_data goodix_gt7375p_timing_data = {
+diff --git a/drivers/hid/i2c-hid/i2c-hid-of.c b/drivers/hid/i2c-hid/i2c-hid-of.c
+index 4bf7cea926379..97a27a803f58d 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-of.c
++++ b/drivers/hid/i2c-hid/i2c-hid-of.c
+@@ -21,6 +21,7 @@
+ 
+ #include <linux/delay.h>
+ #include <linux/device.h>
++#include <linux/hid.h>
+ #include <linux/i2c.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+@@ -71,6 +72,7 @@ static int i2c_hid_of_probe(struct i2c_client *client,
+ 	struct device *dev = &client->dev;
+ 	struct i2c_hid_of *ihid_of;
+ 	u16 hid_descriptor_address;
++	u32 quirks = 0;
+ 	int ret;
+ 	u32 val;
+ 
+@@ -105,8 +107,14 @@ static int i2c_hid_of_probe(struct i2c_client *client,
+ 	if (ret)
+ 		return ret;
+ 
++	if (device_property_read_bool(dev, "touchscreen-inverted-x"))
++		quirks |= HID_QUIRK_X_INVERT;
++
++	if (device_property_read_bool(dev, "touchscreen-inverted-y"))
++		quirks |= HID_QUIRK_Y_INVERT;
++
+ 	return i2c_hid_core_probe(client, &ihid_of->ops,
+-				  hid_descriptor_address);
++				  hid_descriptor_address, quirks);
+ }
+ 
+ static const struct of_device_id i2c_hid_of_match[] = {
+diff --git a/drivers/hid/i2c-hid/i2c-hid.h b/drivers/hid/i2c-hid/i2c-hid.h
+index 05a7827d211af..236cc062d5ef8 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.h
++++ b/drivers/hid/i2c-hid/i2c-hid.h
+@@ -32,7 +32,7 @@ struct i2chid_ops {
+ };
+ 
+ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
+-		       u16 hid_descriptor_address);
++		       u16 hid_descriptor_address, u32 quirks);
+ int i2c_hid_core_remove(struct i2c_client *client);
+ 
+ void i2c_hid_core_shutdown(struct i2c_client *client);
+diff --git a/drivers/hid/uhid.c b/drivers/hid/uhid.c
+index 8fe3efcb83271..fc06d8bb42e0f 100644
+--- a/drivers/hid/uhid.c
++++ b/drivers/hid/uhid.c
+@@ -28,11 +28,22 @@
+ 
+ struct uhid_device {
+ 	struct mutex devlock;
++
++	/* This flag tracks whether the HID device is usable for commands from
++	 * userspace. The flag is already set before hid_add_device(), which
++	 * runs in workqueue context, to allow hid_add_device() to communicate
++	 * with userspace.
++	 * However, if hid_add_device() fails, the flag is cleared without
++	 * holding devlock.
++	 * We guarantee that if @running changes from true to false while you're
++	 * holding @devlock, it's still fine to access @hid.
++	 */
+ 	bool running;
+ 
+ 	__u8 *rd_data;
+ 	uint rd_size;
+ 
++	/* When this is NULL, userspace may use UHID_CREATE/UHID_CREATE2. */
+ 	struct hid_device *hid;
+ 	struct uhid_event input_buf;
+ 
+@@ -63,9 +74,18 @@ static void uhid_device_add_worker(struct work_struct *work)
+ 	if (ret) {
+ 		hid_err(uhid->hid, "Cannot register HID device: error %d\n", ret);
+ 
+-		hid_destroy_device(uhid->hid);
+-		uhid->hid = NULL;
++		/* We used to call hid_destroy_device() here, but that's really
++		 * messy to get right because we have to coordinate with
++		 * concurrent writes from userspace that might be in the middle
++		 * of using uhid->hid.
++		 * Just leave uhid->hid as-is for now, and clean it up when
++		 * userspace tries to close or reinitialize the uhid instance.
++		 *
++		 * However, we do have to clear the ->running flag and do a
++		 * wakeup to make sure userspace knows that the device is gone.
++		 */
+ 		uhid->running = false;
++		wake_up_interruptible(&uhid->report_wait);
+ 	}
+ }
+ 
+@@ -474,7 +494,7 @@ static int uhid_dev_create2(struct uhid_device *uhid,
+ 	void *rd_data;
+ 	int ret;
+ 
+-	if (uhid->running)
++	if (uhid->hid)
+ 		return -EALREADY;
+ 
+ 	rd_size = ev->u.create2.rd_size;
+@@ -556,7 +576,7 @@ static int uhid_dev_create(struct uhid_device *uhid,
+ 
+ static int uhid_dev_destroy(struct uhid_device *uhid)
+ {
+-	if (!uhid->running)
++	if (!uhid->hid)
+ 		return -EINVAL;
+ 
+ 	uhid->running = false;
+@@ -565,6 +585,7 @@ static int uhid_dev_destroy(struct uhid_device *uhid)
+ 	cancel_work_sync(&uhid->worker);
+ 
+ 	hid_destroy_device(uhid->hid);
++	uhid->hid = NULL;
+ 	kfree(uhid->rd_data);
+ 
+ 	return 0;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 2a4cc39962e76..a7176fc0635dd 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2588,6 +2588,24 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
+ 	}
+ }
+ 
++static bool wacom_wac_slot_is_active(struct input_dev *dev, int key)
++{
++	struct input_mt *mt = dev->mt;
++	struct input_mt_slot *s;
++
++	if (!mt)
++		return false;
++
++	for (s = mt->slots; s != mt->slots + mt->num_slots; s++) {
++		if (s->key == key &&
++			input_mt_get_value(s, ABS_MT_TRACKING_ID) >= 0) {
++			return true;
++		}
++	}
++
++	return false;
++}
++
+ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 		struct hid_field *field, struct hid_usage *usage, __s32 value)
+ {
+@@ -2638,9 +2656,14 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 	}
+ 
+ 	if (usage->usage_index + 1 == field->report_count) {
+-		if (equivalent_usage == wacom_wac->hid_data.last_slot_field &&
+-		    wacom_wac->hid_data.confidence)
+-			wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
++		if (equivalent_usage == wacom_wac->hid_data.last_slot_field) {
++			bool touch_removed = wacom_wac_slot_is_active(wacom_wac->touch_input,
++				wacom_wac->hid_data.id) && !wacom_wac->hid_data.tipswitch;
++
++			if (wacom_wac->hid_data.confidence || touch_removed) {
++				wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
++			}
++		}
+ 	}
+ }
+ 
+@@ -2659,6 +2682,10 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
+ 
+ 	hid_data->confidence = true;
+ 
++	hid_data->cc_report = 0;
++	hid_data->cc_index = -1;
++	hid_data->cc_value_index = -1;
++
+ 	for (i = 0; i < report->maxfield; i++) {
+ 		struct hid_field *field = report->field[i];
+ 		int j;
+@@ -2692,11 +2719,14 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
+ 	    hid_data->cc_index >= 0) {
+ 		struct hid_field *field = report->field[hid_data->cc_index];
+ 		int value = field->value[hid_data->cc_value_index];
+-		if (value)
++		if (value) {
+ 			hid_data->num_expected = value;
++			hid_data->num_received = 0;
++		}
+ 	}
+ 	else {
+ 		hid_data->num_expected = wacom_wac->features.touch_max;
++		hid_data->num_received = 0;
+ 	}
+ }
+ 
+@@ -2724,6 +2754,7 @@ static void wacom_wac_finger_report(struct hid_device *hdev,
+ 
+ 	input_sync(input);
+ 	wacom_wac->hid_data.num_received = 0;
++	wacom_wac->hid_data.num_expected = 0;
+ 
+ 	/* keep touch state for pen event */
+ 	wacom_wac->shared->touch_down = wacom_wac_finger_count_touches(wacom_wac);
+diff --git a/drivers/hsi/hsi_core.c b/drivers/hsi/hsi_core.c
+index ec90713564e32..884066109699c 100644
+--- a/drivers/hsi/hsi_core.c
++++ b/drivers/hsi/hsi_core.c
+@@ -102,6 +102,7 @@ struct hsi_client *hsi_new_client(struct hsi_port *port,
+ 	if (device_register(&cl->device) < 0) {
+ 		pr_err("hsi: failed to register client: %s\n", info->name);
+ 		put_device(&cl->device);
++		goto err;
+ 	}
+ 
+ 	return cl;
+diff --git a/drivers/hwmon/mr75203.c b/drivers/hwmon/mr75203.c
+index 868243dba1ee0..1ba1e31459690 100644
+--- a/drivers/hwmon/mr75203.c
++++ b/drivers/hwmon/mr75203.c
+@@ -93,7 +93,7 @@
+ #define VM_CH_REQ	BIT(21)
+ 
+ #define IP_TMR			0x05
+-#define POWER_DELAY_CYCLE_256	0x80
++#define POWER_DELAY_CYCLE_256	0x100
+ #define POWER_DELAY_CYCLE_64	0x40
+ 
+ #define PVT_POLL_DELAY_US	20
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index 0f409a4c2da0d..5b45941bcbddc 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -39,10 +39,10 @@ enum dw_pci_ctl_id_t {
+ };
+ 
+ struct dw_scl_sda_cfg {
+-	u32 ss_hcnt;
+-	u32 fs_hcnt;
+-	u32 ss_lcnt;
+-	u32 fs_lcnt;
++	u16 ss_hcnt;
++	u16 fs_hcnt;
++	u16 ss_lcnt;
++	u16 fs_lcnt;
+ 	u32 sda_hold;
+ };
+ 
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 41446f9cc52da..c87ea470eba98 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -775,6 +775,11 @@ static int i801_block_transaction(struct i801_priv *priv, union i2c_smbus_data *
+ 	int result = 0;
+ 	unsigned char hostc;
+ 
++	if (read_write == I2C_SMBUS_READ && command == I2C_SMBUS_BLOCK_DATA)
++		data->block[0] = I2C_SMBUS_BLOCK_MAX;
++	else if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
++		return -EPROTO;
++
+ 	if (command == I2C_SMBUS_I2C_BLOCK_DATA) {
+ 		if (read_write == I2C_SMBUS_WRITE) {
+ 			/* set I2C_EN bit in configuration register */
+@@ -788,16 +793,6 @@ static int i801_block_transaction(struct i801_priv *priv, union i2c_smbus_data *
+ 		}
+ 	}
+ 
+-	if (read_write == I2C_SMBUS_WRITE
+-	 || command == I2C_SMBUS_I2C_BLOCK_DATA) {
+-		if (data->block[0] < 1)
+-			data->block[0] = 1;
+-		if (data->block[0] > I2C_SMBUS_BLOCK_MAX)
+-			data->block[0] = I2C_SMBUS_BLOCK_MAX;
+-	} else {
+-		data->block[0] = 32;	/* max for SMBus block reads */
+-	}
+-
+ 	/* Experience has shown that the block buffer can only be used for
+ 	   SMBus (not I2C) block transactions, even though the datasheet
+ 	   doesn't mention this limitation. */
+diff --git a/drivers/i2c/busses/i2c-mpc.c b/drivers/i2c/busses/i2c-mpc.c
+index db26cc36e13fe..6c698c10d3cdb 100644
+--- a/drivers/i2c/busses/i2c-mpc.c
++++ b/drivers/i2c/busses/i2c-mpc.c
+@@ -119,23 +119,30 @@ static inline void writeccr(struct mpc_i2c *i2c, u32 x)
+ /* Sometimes 9th clock pulse isn't generated, and slave doesn't release
+  * the bus, because it wants to send ACK.
+  * Following sequence of enabling/disabling and sending start/stop generates
+- * the 9 pulses, so it's all OK.
++ * the 9 pulses, each with a START then ending with STOP, so it's all OK.
+  */
+ static void mpc_i2c_fixup(struct mpc_i2c *i2c)
+ {
+ 	int k;
+-	u32 delay_val = 1000000 / i2c->real_clk + 1;
+-
+-	if (delay_val < 2)
+-		delay_val = 2;
++	unsigned long flags;
+ 
+ 	for (k = 9; k; k--) {
+ 		writeccr(i2c, 0);
+-		writeccr(i2c, CCR_MSTA | CCR_MTX | CCR_MEN);
++		writeb(0, i2c->base + MPC_I2C_SR); /* clear any status bits */
++		writeccr(i2c, CCR_MEN | CCR_MSTA); /* START */
++		readb(i2c->base + MPC_I2C_DR); /* init xfer */
++		udelay(15); /* let it hit the bus */
++		local_irq_save(flags); /* should not be delayed further */
++		writeccr(i2c, CCR_MEN | CCR_MSTA | CCR_RSTA); /* delay SDA */
+ 		readb(i2c->base + MPC_I2C_DR);
+-		writeccr(i2c, CCR_MEN);
+-		udelay(delay_val << 1);
++		if (k != 1)
++			udelay(5);
++		local_irq_restore(flags);
+ 	}
++	writeccr(i2c, CCR_MEN); /* Initiate STOP */
++	readb(i2c->base + MPC_I2C_DR);
++	udelay(15); /* Let STOP propagate */
++	writeccr(i2c, 0);
+ }
+ 
+ static int i2c_mpc_wait_sr(struct mpc_i2c *i2c, int mask)
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index c3b4c677b4429..dfe18dcd008d4 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -343,7 +343,8 @@ struct bus_type i3c_bus_type = {
+ static enum i3c_addr_slot_status
+ i3c_bus_get_addr_slot_status(struct i3c_bus *bus, u16 addr)
+ {
+-	int status, bitpos = addr * 2;
++	unsigned long status;
++	int bitpos = addr * 2;
+ 
+ 	if (addr > I2C_MAX_ADDR)
+ 		return I3C_ADDR_SLOT_RSVD;
+diff --git a/drivers/i3c/master/dw-i3c-master.c b/drivers/i3c/master/dw-i3c-master.c
+index 03a368da51b95..51a8608203de7 100644
+--- a/drivers/i3c/master/dw-i3c-master.c
++++ b/drivers/i3c/master/dw-i3c-master.c
+@@ -793,6 +793,10 @@ static int dw_i3c_master_daa(struct i3c_master_controller *m)
+ 		return -ENOMEM;
+ 
+ 	pos = dw_i3c_master_get_free_pos(master);
++	if (pos < 0) {
++		dw_i3c_master_free_xfer(xfer);
++		return pos;
++	}
+ 	cmd = &xfer->cmds[0];
+ 	cmd->cmd_hi = 0x1;
+ 	cmd->cmd_lo = COMMAND_PORT_DEV_COUNT(master->maxdevs - pos) |
+diff --git a/drivers/i3c/master/mipi-i3c-hci/dat_v1.c b/drivers/i3c/master/mipi-i3c-hci/dat_v1.c
+index 783e551a2c85a..97bb49ff5b53b 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/dat_v1.c
++++ b/drivers/i3c/master/mipi-i3c-hci/dat_v1.c
+@@ -160,9 +160,7 @@ static int hci_dat_v1_get_index(struct i3c_hci *hci, u8 dev_addr)
+ 	unsigned int dat_idx;
+ 	u32 dat_w0;
+ 
+-	for (dat_idx = find_first_bit(hci->DAT_data, hci->DAT_entries);
+-	     dat_idx < hci->DAT_entries;
+-	     dat_idx = find_next_bit(hci->DAT_data, hci->DAT_entries, dat_idx)) {
++	for_each_set_bit(dat_idx, hci->DAT_data, hci->DAT_entries) {
+ 		dat_w0 = dat_w0_read(dat_idx);
+ 		if (FIELD_GET(DAT_0_DYNAMIC_ADDRESS, dat_w0) == dev_addr)
+ 			return dat_idx;
+diff --git a/drivers/iio/adc/ti-adc081c.c b/drivers/iio/adc/ti-adc081c.c
+index 16fc608db36a5..bd48b073e7200 100644
+--- a/drivers/iio/adc/ti-adc081c.c
++++ b/drivers/iio/adc/ti-adc081c.c
+@@ -19,6 +19,7 @@
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
++#include <linux/property.h>
+ 
+ #include <linux/iio/iio.h>
+ #include <linux/iio/buffer.h>
+@@ -156,13 +157,16 @@ static int adc081c_probe(struct i2c_client *client,
+ {
+ 	struct iio_dev *iio;
+ 	struct adc081c *adc;
+-	struct adcxx1c_model *model;
++	const struct adcxx1c_model *model;
+ 	int err;
+ 
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_WORD_DATA))
+ 		return -EOPNOTSUPP;
+ 
+-	model = &adcxx1c_models[id->driver_data];
++	if (dev_fwnode(&client->dev))
++		model = device_get_match_data(&client->dev);
++	else
++		model = &adcxx1c_models[id->driver_data];
+ 
+ 	iio = devm_iio_device_alloc(&client->dev, sizeof(*adc));
+ 	if (!iio)
+@@ -210,10 +214,17 @@ static const struct i2c_device_id adc081c_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, adc081c_id);
+ 
++static const struct acpi_device_id adc081c_acpi_match[] = {
++	/* Used on some AAEON boards */
++	{ "ADC081C", (kernel_ulong_t)&adcxx1c_models[ADC081C] },
++	{ }
++};
++MODULE_DEVICE_TABLE(acpi, adc081c_acpi_match);
++
+ static const struct of_device_id adc081c_of_match[] = {
+-	{ .compatible = "ti,adc081c" },
+-	{ .compatible = "ti,adc101c" },
+-	{ .compatible = "ti,adc121c" },
++	{ .compatible = "ti,adc081c", .data = &adcxx1c_models[ADC081C] },
++	{ .compatible = "ti,adc101c", .data = &adcxx1c_models[ADC101C] },
++	{ .compatible = "ti,adc121c", .data = &adcxx1c_models[ADC121C] },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, adc081c_of_match);
+@@ -222,6 +233,7 @@ static struct i2c_driver adc081c_driver = {
+ 	.driver = {
+ 		.name = "adc081c",
+ 		.of_match_table = adc081c_of_match,
++		.acpi_match_table = adc081c_acpi_match,
+ 	},
+ 	.probe = adc081c_probe,
+ 	.id_table = adc081c_id,
+diff --git a/drivers/iio/chemical/sunrise_co2.c b/drivers/iio/chemical/sunrise_co2.c
+index 233bd0f379c93..8440dc0c77cfe 100644
+--- a/drivers/iio/chemical/sunrise_co2.c
++++ b/drivers/iio/chemical/sunrise_co2.c
+@@ -407,24 +407,24 @@ static int sunrise_read_raw(struct iio_dev *iio_dev,
+ 			mutex_lock(&sunrise->lock);
+ 			ret = sunrise_read_word(sunrise, SUNRISE_CO2_FILTERED_COMP_REG,
+ 						&value);
+-			*val = value;
+ 			mutex_unlock(&sunrise->lock);
+ 
+ 			if (ret)
+ 				return ret;
+ 
++			*val = value;
+ 			return IIO_VAL_INT;
+ 
+ 		case IIO_TEMP:
+ 			mutex_lock(&sunrise->lock);
+ 			ret = sunrise_read_word(sunrise, SUNRISE_CHIP_TEMPERATURE_REG,
+ 						&value);
+-			*val = value;
+ 			mutex_unlock(&sunrise->lock);
+ 
+ 			if (ret)
+ 				return ret;
+ 
++			*val = value;
+ 			return IIO_VAL_INT;
+ 
+ 		default:
+diff --git a/drivers/iio/industrialio-trigger.c b/drivers/iio/industrialio-trigger.c
+index 93990ff1dfe39..f504ed351b3e2 100644
+--- a/drivers/iio/industrialio-trigger.c
++++ b/drivers/iio/industrialio-trigger.c
+@@ -162,6 +162,39 @@ static struct iio_trigger *iio_trigger_acquire_by_name(const char *name)
+ 	return trig;
+ }
+ 
++static void iio_reenable_work_fn(struct work_struct *work)
++{
++	struct iio_trigger *trig = container_of(work, struct iio_trigger,
++						reenable_work);
++
++	/*
++	 * This 'might' occur after the trigger state is set to disabled -
++	 * in that case the driver should skip reenabling.
++	 */
++	trig->ops->reenable(trig);
++}
++
++/*
++ * In general, reenable callbacks may need to sleep and this path is
++ * not performance sensitive, so just queue up a work item
++ * to reneable the trigger for us.
++ *
++ * Races that can cause this:
++ * 1) A handler occurs entirely in interrupt context, so the counter's
++ *    final decrement happens in this interrupt.
++ * 2) The trigger has been removed, but one last interrupt gets through.
++ *
++ * For (1) we must call reenable, but not in atomic context.
++ * For (2) it should be safe to call reenable, if drivers never blindly
++ * reenable after state is off.
++ */
++static void iio_trigger_notify_done_atomic(struct iio_trigger *trig)
++{
++	if (atomic_dec_and_test(&trig->use_count) && trig->ops &&
++	    trig->ops->reenable)
++		schedule_work(&trig->reenable_work);
++}
++
+ void iio_trigger_poll(struct iio_trigger *trig)
+ {
+ 	int i;
+@@ -173,7 +206,7 @@ void iio_trigger_poll(struct iio_trigger *trig)
+ 			if (trig->subirqs[i].enabled)
+ 				generic_handle_irq(trig->subirq_base + i);
+ 			else
+-				iio_trigger_notify_done(trig);
++				iio_trigger_notify_done_atomic(trig);
+ 		}
+ 	}
+ }
+@@ -535,6 +568,7 @@ struct iio_trigger *viio_trigger_alloc(struct device *parent,
+ 	trig->dev.type = &iio_trig_type;
+ 	trig->dev.bus = &iio_bus_type;
+ 	device_initialize(&trig->dev);
++	INIT_WORK(&trig->reenable_work, iio_reenable_work_fn);
+ 
+ 	mutex_init(&trig->pool_lock);
+ 	trig->subirq_base = irq_alloc_descs(-1, 0,
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 835ac54d4a24c..a3834ef691910 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -766,6 +766,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 	unsigned int p;
+ 	u16 pkey, index;
+ 	enum ib_port_state port_state;
++	int ret;
+ 	int i;
+ 
+ 	cma_dev = NULL;
+@@ -784,9 +785,14 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 
+ 			if (ib_get_cached_port_state(cur_dev->device, p, &port_state))
+ 				continue;
+-			for (i = 0; !rdma_query_gid(cur_dev->device,
+-						    p, i, &gid);
+-			     i++) {
++
++			for (i = 0; i < cur_dev->device->port_data[p].immutable.gid_tbl_len;
++			     ++i) {
++				ret = rdma_query_gid(cur_dev->device, p, i,
++						     &gid);
++				if (ret)
++					continue;
++
+ 				if (!memcmp(&gid, dgid, sizeof(gid))) {
+ 					cma_dev = cur_dev;
+ 					sgid = gid;
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 22a4adda7981d..a311df07b1bdb 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -2461,7 +2461,8 @@ int ib_find_gid(struct ib_device *device, union ib_gid *gid,
+ 		     ++i) {
+ 			ret = rdma_query_gid(device, port, i, &tmp_gid);
+ 			if (ret)
+-				return ret;
++				continue;
++
+ 			if (!memcmp(&tmp_gid, gid, sizeof *gid)) {
+ 				*port_num = port;
+ 				if (index)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 3de854727460e..19a0778d38a2d 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -618,8 +618,6 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
+ 	if (!cmdq->cmdq_bitmap)
+ 		goto fail;
+ 
+-	cmdq->bmap_size = bmap_size;
+-
+ 	/* Allocate one extra to hold the QP1 entries */
+ 	rcfw->qp_tbl_size = qp_tbl_sz + 1;
+ 	rcfw->qp_tbl = kcalloc(rcfw->qp_tbl_size, sizeof(struct bnxt_qplib_qp_node),
+@@ -667,8 +665,8 @@ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
+ 	iounmap(cmdq->cmdq_mbox.reg.bar_reg);
+ 	iounmap(creq->creq_db.reg.bar_reg);
+ 
+-	indx = find_first_bit(cmdq->cmdq_bitmap, cmdq->bmap_size);
+-	if (indx != cmdq->bmap_size)
++	indx = find_first_bit(cmdq->cmdq_bitmap, rcfw->cmdq_depth);
++	if (indx != rcfw->cmdq_depth)
+ 		dev_err(&rcfw->pdev->dev,
+ 			"disabling RCFW with pending cmd-bit %lx\n", indx);
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index 82faa4e4cda84..0a3d8e7da3d42 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -152,7 +152,6 @@ struct bnxt_qplib_cmdq_ctx {
+ 	wait_queue_head_t		waitq;
+ 	unsigned long			flags;
+ 	unsigned long			*cmdq_bitmap;
+-	u32				bmap_size;
+ 	u32				seq_num;
+ };
+ 
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index d20b4ef2c853d..ffbd9a89981e7 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -2460,6 +2460,7 @@ int c4iw_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 	memset(attr, 0, sizeof(*attr));
+ 	memset(init_attr, 0, sizeof(*init_attr));
+ 	attr->qp_state = to_ib_qp_state(qhp->attr.state);
++	attr->cur_qp_state = to_ib_qp_state(qhp->attr.state);
+ 	init_attr->cap.max_send_wr = qhp->attr.sq_num_entries;
+ 	init_attr->cap.max_recv_wr = qhp->attr.rq_num_entries;
+ 	init_attr->cap.max_send_sge = qhp->attr.sq_max_sges;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 4194b626f3c65..a906c6078b722 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -270,6 +270,9 @@ static enum rdma_link_layer hns_roce_get_link_layer(struct ib_device *device,
+ static int hns_roce_query_pkey(struct ib_device *ib_dev, u32 port, u16 index,
+ 			       u16 *pkey)
+ {
++	if (index > 0)
++		return -EINVAL;
++
+ 	*pkey = PKEY_ID;
+ 
+ 	return 0;
+@@ -439,7 +442,7 @@ static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
+ 	prot = vma->vm_page_prot;
+ 
+ 	if (entry->mmap_type != HNS_ROCE_MMAP_TYPE_TPTR)
+-		prot = pgprot_noncached(prot);
++		prot = pgprot_device(prot);
+ 
+ 	ret = rdma_user_mmap_io(uctx, vma, pfn, rdma_entry->npages * PAGE_SIZE,
+ 				prot, rdma_entry);
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 9100009f0a23d..a53476653b0d9 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -1931,6 +1931,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ 	/* db offset was calculated in copy_qp_uresp, now set in the user q */
+ 	if (qedr_qp_has_sq(qp)) {
+ 		qp->usq.db_addr = ctx->dpi_addr + uresp.sq_db_offset;
++		qp->sq.max_wr = attrs->cap.max_send_wr;
+ 		rc = qedr_db_recovery_add(dev, qp->usq.db_addr,
+ 					  &qp->usq.db_rec_data->db_data,
+ 					  DB_REC_WIDTH_32B,
+@@ -1941,6 +1942,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ 
+ 	if (qedr_qp_has_rq(qp)) {
+ 		qp->urq.db_addr = ctx->dpi_addr + uresp.rq_db_offset;
++		qp->rq.max_wr = attrs->cap.max_recv_wr;
+ 		rc = qedr_db_recovery_add(dev, qp->urq.db_addr,
+ 					  &qp->urq.db_rec_data->db_data,
+ 					  DB_REC_WIDTH_32B,
+diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
+index 3ef5a10a6efd8..47ebaac8f4754 100644
+--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
++++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
+@@ -117,7 +117,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
+ 		}
+ 	},
+ 	[IB_OPCODE_RC_SEND_MIDDLE]		= {
+-		.name	= "IB_OPCODE_RC_SEND_MIDDLE]",
++		.name	= "IB_OPCODE_RC_SEND_MIDDLE",
+ 		.mask	= RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_SEND_MASK
+ 				| RXE_MIDDLE_MASK,
+ 		.length = RXE_BTH_BYTES,
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 15c0077dd27eb..e39709dee179d 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -867,7 +867,7 @@ static struct rtrs_clt_sess *get_next_path_min_latency(struct path_it *it)
+ 	struct rtrs_clt_sess *min_path = NULL;
+ 	struct rtrs_clt *clt = it->clt;
+ 	struct rtrs_clt_sess *sess;
+-	ktime_t min_latency = INT_MAX;
++	ktime_t min_latency = KTIME_MAX;
+ 	ktime_t latency;
+ 
+ 	list_for_each_entry_rcu(sess, &clt->paths_list, s.entry) {
+diff --git a/drivers/input/touchscreen/ti_am335x_tsc.c b/drivers/input/touchscreen/ti_am335x_tsc.c
+index 83e685557a197..cfc943423241f 100644
+--- a/drivers/input/touchscreen/ti_am335x_tsc.c
++++ b/drivers/input/touchscreen/ti_am335x_tsc.c
+@@ -131,7 +131,8 @@ static void titsc_step_config(struct titsc *ts_dev)
+ 	u32 stepenable;
+ 
+ 	config = STEPCONFIG_MODE_HWSYNC |
+-			STEPCONFIG_AVG_16 | ts_dev->bit_xp;
++			STEPCONFIG_AVG_16 | ts_dev->bit_xp |
++			STEPCONFIG_INM_ADCREFM;
+ 	switch (ts_dev->wires) {
+ 	case 4:
+ 		config |= STEPCONFIG_INP(ts_dev->inp_yp) | ts_dev->bit_xn;
+@@ -195,7 +196,10 @@ static void titsc_step_config(struct titsc *ts_dev)
+ 			STEPCONFIG_OPENDLY);
+ 
+ 	end_step++;
+-	config |= STEPCONFIG_INP(ts_dev->inp_yn);
++	config = STEPCONFIG_MODE_HWSYNC |
++			STEPCONFIG_AVG_16 | ts_dev->bit_yp |
++			ts_dev->bit_xn | STEPCONFIG_INM_ADCREFM |
++			STEPCONFIG_INP(ts_dev->inp_yn);
+ 	titsc_writel(ts_dev, REG_STEPCONFIG(end_step), config);
+ 	titsc_writel(ts_dev, REG_STEPDELAY(end_step),
+ 			STEPCONFIG_OPENDLY);
+diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
+index ef7999a08c8bf..8114295a83129 100644
+--- a/drivers/interconnect/qcom/icc-rpm.c
++++ b/drivers/interconnect/qcom/icc-rpm.c
+@@ -239,6 +239,7 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+ 	rate = max(sum_bw, max_peak_bw);
+ 
+ 	do_div(rate, qn->buswidth);
++	rate = min_t(u64, rate, LONG_MAX);
+ 
+ 	if (qn->rate == rate)
+ 		return 0;
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 867535eb0ce97..ffc89c4fb1205 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -645,8 +645,6 @@ struct amd_iommu {
+ 	/* DebugFS Info */
+ 	struct dentry *debugfs;
+ #endif
+-	/* IRQ notifier for IntCapXT interrupt */
+-	struct irq_affinity_notify intcapxt_notify;
+ };
+ 
+ static inline struct amd_iommu *dev_to_amd_iommu(struct device *dev)
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 1eacd43cb4368..b94822fc2c9f7 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -806,16 +806,27 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
+ {
+ #ifdef CONFIG_IRQ_REMAP
+ 	u32 status, i;
++	u64 entry;
+ 
+ 	if (!iommu->ga_log)
+ 		return -EINVAL;
+ 
+-	status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
+-
+ 	/* Check if already running */
+-	if (status & (MMIO_STATUS_GALOG_RUN_MASK))
++	status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
++	if (WARN_ON(status & (MMIO_STATUS_GALOG_RUN_MASK)))
+ 		return 0;
+ 
++	entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
++	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
++		    &entry, sizeof(entry));
++	entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
++		 (BIT_ULL(52)-1)) & ~7ULL;
++	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
++		    &entry, sizeof(entry));
++	writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
++	writel(0x00, iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
++
++
+ 	iommu_feature_enable(iommu, CONTROL_GAINT_EN);
+ 	iommu_feature_enable(iommu, CONTROL_GALOG_EN);
+ 
+@@ -825,7 +836,7 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
+ 			break;
+ 	}
+ 
+-	if (i >= LOOP_TIMEOUT)
++	if (WARN_ON(i >= LOOP_TIMEOUT))
+ 		return -EINVAL;
+ #endif /* CONFIG_IRQ_REMAP */
+ 	return 0;
+@@ -834,8 +845,6 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
+ static int iommu_init_ga_log(struct amd_iommu *iommu)
+ {
+ #ifdef CONFIG_IRQ_REMAP
+-	u64 entry;
+-
+ 	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))
+ 		return 0;
+ 
+@@ -849,16 +858,6 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
+ 	if (!iommu->ga_log_tail)
+ 		goto err_out;
+ 
+-	entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
+-	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
+-		    &entry, sizeof(entry));
+-	entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
+-		 (BIT_ULL(52)-1)) & ~7ULL;
+-	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
+-		    &entry, sizeof(entry));
+-	writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
+-	writel(0x00, iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
+-
+ 	return 0;
+ err_out:
+ 	free_ga_log(iommu);
+@@ -2016,48 +2015,18 @@ union intcapxt {
+ 	};
+ } __attribute__ ((packed));
+ 
+-/*
+- * There isn't really any need to mask/unmask at the irqchip level because
+- * the 64-bit INTCAPXT registers can be updated atomically without tearing
+- * when the affinity is being updated.
+- */
+-static void intcapxt_unmask_irq(struct irq_data *data)
+-{
+-}
+-
+-static void intcapxt_mask_irq(struct irq_data *data)
+-{
+-}
+ 
+ static struct irq_chip intcapxt_controller;
+ 
+ static int intcapxt_irqdomain_activate(struct irq_domain *domain,
+ 				       struct irq_data *irqd, bool reserve)
+ {
+-	struct amd_iommu *iommu = irqd->chip_data;
+-	struct irq_cfg *cfg = irqd_cfg(irqd);
+-	union intcapxt xt;
+-
+-	xt.capxt = 0ULL;
+-	xt.dest_mode_logical = apic->dest_mode_logical;
+-	xt.vector = cfg->vector;
+-	xt.destid_0_23 = cfg->dest_apicid & GENMASK(23, 0);
+-	xt.destid_24_31 = cfg->dest_apicid >> 24;
+-
+-	/**
+-	 * Current IOMMU implemtation uses the same IRQ for all
+-	 * 3 IOMMU interrupts.
+-	 */
+-	writeq(xt.capxt, iommu->mmio_base + MMIO_INTCAPXT_EVT_OFFSET);
+-	writeq(xt.capxt, iommu->mmio_base + MMIO_INTCAPXT_PPR_OFFSET);
+-	writeq(xt.capxt, iommu->mmio_base + MMIO_INTCAPXT_GALOG_OFFSET);
+ 	return 0;
+ }
+ 
+ static void intcapxt_irqdomain_deactivate(struct irq_domain *domain,
+ 					  struct irq_data *irqd)
+ {
+-	intcapxt_mask_irq(irqd);
+ }
+ 
+ 
+@@ -2091,6 +2060,38 @@ static void intcapxt_irqdomain_free(struct irq_domain *domain, unsigned int virq
+ 	irq_domain_free_irqs_top(domain, virq, nr_irqs);
+ }
+ 
++
++static void intcapxt_unmask_irq(struct irq_data *irqd)
++{
++	struct amd_iommu *iommu = irqd->chip_data;
++	struct irq_cfg *cfg = irqd_cfg(irqd);
++	union intcapxt xt;
++
++	xt.capxt = 0ULL;
++	xt.dest_mode_logical = apic->dest_mode_logical;
++	xt.vector = cfg->vector;
++	xt.destid_0_23 = cfg->dest_apicid & GENMASK(23, 0);
++	xt.destid_24_31 = cfg->dest_apicid >> 24;
++
++	/**
++	 * Current IOMMU implementation uses the same IRQ for all
++	 * 3 IOMMU interrupts.
++	 */
++	writeq(xt.capxt, iommu->mmio_base + MMIO_INTCAPXT_EVT_OFFSET);
++	writeq(xt.capxt, iommu->mmio_base + MMIO_INTCAPXT_PPR_OFFSET);
++	writeq(xt.capxt, iommu->mmio_base + MMIO_INTCAPXT_GALOG_OFFSET);
++}
++
++static void intcapxt_mask_irq(struct irq_data *irqd)
++{
++	struct amd_iommu *iommu = irqd->chip_data;
++
++	writeq(0, iommu->mmio_base + MMIO_INTCAPXT_EVT_OFFSET);
++	writeq(0, iommu->mmio_base + MMIO_INTCAPXT_PPR_OFFSET);
++	writeq(0, iommu->mmio_base + MMIO_INTCAPXT_GALOG_OFFSET);
++}
++
++
+ static int intcapxt_set_affinity(struct irq_data *irqd,
+ 				 const struct cpumask *mask, bool force)
+ {
+@@ -2100,8 +2101,12 @@ static int intcapxt_set_affinity(struct irq_data *irqd,
+ 	ret = parent->chip->irq_set_affinity(parent, mask, force);
+ 	if (ret < 0 || ret == IRQ_SET_MASK_OK_DONE)
+ 		return ret;
++	return 0;
++}
+ 
+-	return intcapxt_irqdomain_activate(irqd->domain, irqd, false);
++static int intcapxt_set_wake(struct irq_data *irqd, unsigned int on)
++{
++	return on ? -EOPNOTSUPP : 0;
+ }
+ 
+ static struct irq_chip intcapxt_controller = {
+@@ -2111,7 +2116,8 @@ static struct irq_chip intcapxt_controller = {
+ 	.irq_ack		= irq_chip_ack_parent,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_set_affinity       = intcapxt_set_affinity,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.irq_set_wake		= intcapxt_set_wake,
++	.flags			= IRQCHIP_MASK_ON_SUSPEND,
+ };
+ 
+ static const struct irq_domain_ops intcapxt_domain_ops = {
+@@ -2173,7 +2179,6 @@ static int iommu_setup_intcapxt(struct amd_iommu *iommu)
+ 		return ret;
+ 	}
+ 
+-	iommu_feature_enable(iommu, CONTROL_INTCAPXT_EN);
+ 	return 0;
+ }
+ 
+@@ -2196,6 +2201,10 @@ static int iommu_init_irq(struct amd_iommu *iommu)
+ 
+ 	iommu->int_enabled = true;
+ enable_faults:
++
++	if (amd_iommu_xt_mode == IRQ_REMAP_X2APIC_MODE)
++		iommu_feature_enable(iommu, CONTROL_INTCAPXT_EN);
++
+ 	iommu_feature_enable(iommu, CONTROL_EVT_INT_EN);
+ 
+ 	if (iommu->ppr_log != NULL)
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index ca736b065dd0b..40c91dd368a4d 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -51,7 +51,7 @@ static void qcom_adreno_smmu_get_fault_info(const void *cookie,
+ 	info->fsynr1 = arm_smmu_cb_read(smmu, cfg->cbndx, ARM_SMMU_CB_FSYNR1);
+ 	info->far = arm_smmu_cb_readq(smmu, cfg->cbndx, ARM_SMMU_CB_FAR);
+ 	info->cbfrsynra = arm_smmu_gr1_read(smmu, ARM_SMMU_GR1_CBFRSYNRA(cfg->cbndx));
+-	info->ttbr0 = arm_smmu_cb_read(smmu, cfg->cbndx, ARM_SMMU_CB_TTBR0);
++	info->ttbr0 = arm_smmu_cb_readq(smmu, cfg->cbndx, ARM_SMMU_CB_TTBR0);
+ 	info->contextidr = arm_smmu_cb_read(smmu, cfg->cbndx, ARM_SMMU_CB_CONTEXTIDR);
+ }
+ 
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index bfb6acb651e5f..be066c1503d37 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -246,13 +246,17 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 			__GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
+ 	else if (lvl == 2)
+ 		table = kmem_cache_zalloc(data->l2_tables, gfp);
++
++	if (!table)
++		return NULL;
++
+ 	phys = virt_to_phys(table);
+ 	if (phys != (arm_v7s_iopte)phys) {
+ 		/* Doesn't fit in PTE */
+ 		dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
+ 		goto out_free;
+ 	}
+-	if (table && !cfg->coherent_walk) {
++	if (!cfg->coherent_walk) {
+ 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, dma))
+ 			goto out_free;
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index dd9e47189d0d9..94ff319ae8acc 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -315,11 +315,12 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
+ static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table,
+ 					     arm_lpae_iopte *ptep,
+ 					     arm_lpae_iopte curr,
+-					     struct io_pgtable_cfg *cfg)
++					     struct arm_lpae_io_pgtable *data)
+ {
+ 	arm_lpae_iopte old, new;
++	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+ 
+-	new = __pa(table) | ARM_LPAE_PTE_TYPE_TABLE;
++	new = paddr_to_iopte(__pa(table), data) | ARM_LPAE_PTE_TYPE_TABLE;
+ 	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+ 		new |= ARM_LPAE_PTE_NSTABLE;
+ 
+@@ -380,7 +381,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
+ 		if (!cptep)
+ 			return -ENOMEM;
+ 
+-		pte = arm_lpae_install_table(cptep, ptep, 0, cfg);
++		pte = arm_lpae_install_table(cptep, ptep, 0, data);
+ 		if (pte)
+ 			__arm_lpae_free_pages(cptep, tblsz, cfg);
+ 	} else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) {
+@@ -592,7 +593,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
+ 		__arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]);
+ 	}
+ 
+-	pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg);
++	pte = arm_lpae_install_table(tablep, ptep, blk_pte, data);
+ 	if (pte != blk_pte) {
+ 		__arm_lpae_free_pages(tablep, tablesz, cfg);
+ 		/*
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index dd7863e453a5f..8b86406b71627 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -288,11 +288,11 @@ int iommu_probe_device(struct device *dev)
+ 	 */
+ 	mutex_lock(&group->mutex);
+ 	iommu_alloc_default_domain(group, dev);
+-	mutex_unlock(&group->mutex);
+ 
+ 	if (group->default_domain) {
+ 		ret = __iommu_attach_device(group->default_domain, dev);
+ 		if (ret) {
++			mutex_unlock(&group->mutex);
+ 			iommu_group_put(group);
+ 			goto err_release;
+ 		}
+@@ -300,6 +300,7 @@ int iommu_probe_device(struct device *dev)
+ 
+ 	iommu_create_device_direct_mappings(group, dev);
+ 
++	mutex_unlock(&group->mutex);
+ 	iommu_group_put(group);
+ 
+ 	if (ops->probe_finalize)
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index 9e8bc802ac053..920fcc27c9a1e 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -83,8 +83,7 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
+ 	if (!has_iova_flush_queue(iovad))
+ 		return;
+ 
+-	if (timer_pending(&iovad->fq_timer))
+-		del_timer(&iovad->fq_timer);
++	del_timer_sync(&iovad->fq_timer);
+ 
+ 	fq_destroy_all_entries(iovad);
+ 
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index daec3309b014d..86397522e7864 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -920,6 +920,22 @@ static int __gic_update_rdist_properties(struct redist_region *region,
+ {
+ 	u64 typer = gic_read_typer(ptr + GICR_TYPER);
+ 
++	/* Boot-time cleanup */
++	if ((typer & GICR_TYPER_VLPIS) && (typer & GICR_TYPER_RVPEID)) {
++		u64 val;
++
++		/* Deactivate any present vPE */
++		val = gicr_read_vpendbaser(ptr + SZ_128K + GICR_VPENDBASER);
++		if (val & GICR_VPENDBASER_Valid)
++			gicr_write_vpendbaser(GICR_VPENDBASER_PendingLast,
++					      ptr + SZ_128K + GICR_VPENDBASER);
++
++		/* Mark the VPE table as invalid */
++		val = gicr_read_vpropbaser(ptr + SZ_128K + GICR_VPROPBASER);
++		val &= ~GICR_VPROPBASER_4_1_VALID;
++		gicr_write_vpropbaser(val, ptr + SZ_128K + GICR_VPROPBASER);
++	}
++
+ 	gic_data.rdists.has_vlpis &= !!(typer & GICR_TYPER_VLPIS);
+ 
+ 	/* RVPEID implies some form of DirectLPI, no matter what the doc says... :-/ */
+diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
+index d1657c46ee2f8..9fdfc1b9a1a0c 100644
+--- a/drivers/leds/leds-lp55xx-common.c
++++ b/drivers/leds/leds-lp55xx-common.c
+@@ -439,6 +439,8 @@ int lp55xx_init_device(struct lp55xx_chip *chip)
+ 		return -EINVAL;
+ 
+ 	if (pdata->enable_gpiod) {
++		gpiod_direction_output(pdata->enable_gpiod, 0);
++
+ 		gpiod_set_consumer_name(pdata->enable_gpiod, "LP55xx enable");
+ 		gpiod_set_value(pdata->enable_gpiod, 0);
+ 		usleep_range(1000, 2000); /* Keep enable down at least 1ms */
+@@ -694,7 +696,7 @@ struct lp55xx_platform_data *lp55xx_of_populate_pdata(struct device *dev,
+ 	of_property_read_u8(np, "clock-mode", &pdata->clock_mode);
+ 
+ 	pdata->enable_gpiod = devm_gpiod_get_optional(dev, "enable",
+-						      GPIOD_OUT_LOW);
++						      GPIOD_ASIS);
+ 	if (IS_ERR(pdata->enable_gpiod))
+ 		return ERR_CAST(pdata->enable_gpiod);
+ 
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index ffe36a6bef9e0..544de2db64531 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -563,8 +563,8 @@ static int imx_mu_probe(struct platform_device *pdev)
+ 		size = sizeof(struct imx_sc_rpc_msg_max);
+ 
+ 	priv->msg = devm_kzalloc(dev, size, GFP_KERNEL);
+-	if (IS_ERR(priv->msg))
+-		return PTR_ERR(priv->msg);
++	if (!priv->msg)
++		return -ENOMEM;
+ 
+ 	priv->clk = devm_clk_get(dev, NULL);
+ 	if (IS_ERR(priv->clk)) {
+diff --git a/drivers/mailbox/mailbox-mpfs.c b/drivers/mailbox/mailbox-mpfs.c
+index 0d6e2231a2c75..4e34854d12389 100644
+--- a/drivers/mailbox/mailbox-mpfs.c
++++ b/drivers/mailbox/mailbox-mpfs.c
+@@ -232,7 +232,7 @@ static int mpfs_mbox_probe(struct platform_device *pdev)
+ }
+ 
+ static const struct of_device_id mpfs_mbox_of_match[] = {
+-	{.compatible = "microchip,polarfire-soc-mailbox", },
++	{.compatible = "microchip,mpfs-mailbox", },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, mpfs_mbox_of_match);
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index a8845b162dbfa..9aae13e9e050e 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -658,7 +658,7 @@ static const struct gce_plat gce_plat_v5 = {
+ 	.thread_nr = 24,
+ 	.shift = 3,
+ 	.control_by_sw = true,
+-	.gce_num = 2
++	.gce_num = 1
+ };
+ 
+ static const struct gce_plat gce_plat_v6 = {
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index 887a3704c12ec..ed18936b8ce68 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -241,9 +241,11 @@ static irqreturn_t pcc_mbox_irq(int irq, void *p)
+ 	if (ret)
+ 		return IRQ_NONE;
+ 
+-	val &= pchan->cmd_complete.status_mask;
+-	if (!val)
+-		return IRQ_NONE;
++	if (val) { /* Ensure GAS exists and value is non-zero */
++		val &= pchan->cmd_complete.status_mask;
++		if (!val)
++			return IRQ_NONE;
++	}
+ 
+ 	ret = pcc_chan_reg_read(&pchan->error, &val);
+ 	if (ret)
+@@ -289,7 +291,7 @@ pcc_mbox_request_channel(struct mbox_client *cl, int subspace_id)
+ 	pchan = chan_info + subspace_id;
+ 	chan = pchan->chan.mchan;
+ 	if (IS_ERR(chan) || chan->cl) {
+-		dev_err(dev, "Channel not found for idx: %d\n", subspace_id);
++		pr_err("Channel not found for idx: %d\n", subspace_id);
+ 		return ERR_PTR(-EBUSY);
+ 	}
+ 	dev = chan->mbox->dev;
+diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
+index 66ba16713f696..0a260c35aeeed 100644
+--- a/drivers/md/dm-linear.c
++++ b/drivers/md/dm-linear.c
+@@ -162,7 +162,7 @@ static int linear_iterate_devices(struct dm_target *ti,
+ 	return fn(ti, lc->dev, lc->start, ti->len, data);
+ }
+ 
+-#if IS_ENABLED(CONFIG_DAX_DRIVER)
++#if IS_ENABLED(CONFIG_FS_DAX)
+ static long linear_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
+ 		long nr_pages, void **kaddr, pfn_t *pfn)
+ {
+diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c
+index 0b3ef977ceeba..3155875d4e5b0 100644
+--- a/drivers/md/dm-log-writes.c
++++ b/drivers/md/dm-log-writes.c
+@@ -901,7 +901,7 @@ static void log_writes_io_hints(struct dm_target *ti, struct queue_limits *limit
+ 	limits->io_min = limits->physical_block_size;
+ }
+ 
+-#if IS_ENABLED(CONFIG_DAX_DRIVER)
++#if IS_ENABLED(CONFIG_FS_DAX)
+ static int log_dax(struct log_writes_c *lc, sector_t sector, size_t bytes,
+ 		   struct iov_iter *i)
+ {
+diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
+index 6660b6b53d5bf..f084607220293 100644
+--- a/drivers/md/dm-stripe.c
++++ b/drivers/md/dm-stripe.c
+@@ -300,7 +300,7 @@ static int stripe_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_REMAPPED;
+ }
+ 
+-#if IS_ENABLED(CONFIG_DAX_DRIVER)
++#if IS_ENABLED(CONFIG_FS_DAX)
+ static long stripe_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
+ 		long nr_pages, void **kaddr, pfn_t *pfn)
+ {
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 4b8991cde223d..4f31591d2d25e 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -38,7 +38,7 @@
+ #define BITMAP_GRANULARITY	PAGE_SIZE
+ #endif
+ 
+-#if IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API) && IS_ENABLED(CONFIG_DAX_DRIVER)
++#if IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API) && IS_ENABLED(CONFIG_FS_DAX)
+ #define DM_WRITECACHE_HAS_PMEM
+ #endif
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 662742a310cbb..b93fcc91176e5 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1783,11 +1783,13 @@ static struct mapped_device *alloc_dev(int minor)
+ 	md->disk->private_data = md;
+ 	sprintf(md->disk->disk_name, "dm-%d", minor);
+ 
+-	if (IS_ENABLED(CONFIG_DAX_DRIVER)) {
++	if (IS_ENABLED(CONFIG_FS_DAX)) {
+ 		md->dax_dev = alloc_dax(md, md->disk->disk_name,
+ 					&dm_dax_ops, 0);
+-		if (IS_ERR(md->dax_dev))
++		if (IS_ERR(md->dax_dev)) {
++			md->dax_dev = NULL;
+ 			goto bad;
++		}
+ 	}
+ 
+ 	format_dev_t(md->name, MKDEV(_major, minor));
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 41d6e2383517b..db969cf04dec1 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -5875,13 +5875,6 @@ int md_run(struct mddev *mddev)
+ 		if (err)
+ 			goto exit_bio_set;
+ 	}
+-	if (mddev->level != 1 && mddev->level != 10 &&
+-	    !bioset_initialized(&mddev->io_acct_set)) {
+-		err = bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
+-				  offsetof(struct md_io_acct, bio_clone), 0);
+-		if (err)
+-			goto exit_sync_set;
+-	}
+ 
+ 	spin_lock(&pers_lock);
+ 	pers = find_pers(mddev->level, mddev->clevel);
+@@ -6058,9 +6051,6 @@ bitmap_abort:
+ 	module_put(pers->owner);
+ 	md_bitmap_destroy(mddev);
+ abort:
+-	if (mddev->level != 1 && mddev->level != 10)
+-		bioset_exit(&mddev->io_acct_set);
+-exit_sync_set:
+ 	bioset_exit(&mddev->sync_set);
+ exit_bio_set:
+ 	bioset_exit(&mddev->bio_set);
+@@ -8594,6 +8584,23 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+ }
+ EXPORT_SYMBOL_GPL(md_submit_discard_bio);
+ 
++int acct_bioset_init(struct mddev *mddev)
++{
++	int err = 0;
++
++	if (!bioset_initialized(&mddev->io_acct_set))
++		err = bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
++			offsetof(struct md_io_acct, bio_clone), 0);
++	return err;
++}
++EXPORT_SYMBOL_GPL(acct_bioset_init);
++
++void acct_bioset_exit(struct mddev *mddev)
++{
++	bioset_exit(&mddev->io_acct_set);
++}
++EXPORT_SYMBOL_GPL(acct_bioset_exit);
++
+ static void md_end_io_acct(struct bio *bio)
+ {
+ 	struct md_io_acct *md_io_acct = bio->bi_private;
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index 53ea7a6961de2..f1bf3625ef4c9 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -721,6 +721,8 @@ extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
+ extern void md_finish_reshape(struct mddev *mddev);
+ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+ 			struct bio *bio, sector_t start, sector_t size);
++int acct_bioset_init(struct mddev *mddev);
++void acct_bioset_exit(struct mddev *mddev);
+ void md_account_bio(struct mddev *mddev, struct bio **bio);
+ 
+ extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
+diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
+index 0703ca7a7d9a4..5ce64e93aae74 100644
+--- a/drivers/md/persistent-data/dm-btree.c
++++ b/drivers/md/persistent-data/dm-btree.c
+@@ -81,14 +81,16 @@ void inc_children(struct dm_transaction_manager *tm, struct btree_node *n,
+ }
+ 
+ static int insert_at(size_t value_size, struct btree_node *node, unsigned index,
+-		      uint64_t key, void *value)
+-		      __dm_written_to_disk(value)
++		     uint64_t key, void *value)
++	__dm_written_to_disk(value)
+ {
+ 	uint32_t nr_entries = le32_to_cpu(node->header.nr_entries);
++	uint32_t max_entries = le32_to_cpu(node->header.max_entries);
+ 	__le64 key_le = cpu_to_le64(key);
+ 
+ 	if (index > nr_entries ||
+-	    index >= le32_to_cpu(node->header.max_entries)) {
++	    index >= max_entries ||
++	    nr_entries >= max_entries) {
+ 		DMERR("too many entries in btree node for insert");
+ 		__dm_unbless_for_disk(value);
+ 		return -ENOMEM;
+diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
+index 4a6a2a9b4eb49..bfbfa750e0160 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.c
++++ b/drivers/md/persistent-data/dm-space-map-common.c
+@@ -283,6 +283,11 @@ int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result)
+ 	struct disk_index_entry ie_disk;
+ 	struct dm_block *blk;
+ 
++	if (b >= ll->nr_blocks) {
++		DMERR_LIMIT("metadata block out of bounds");
++		return -EINVAL;
++	}
++
+ 	b = do_div(index, ll->entries_per_block);
+ 	r = ll->load_ie(ll, index, &ie_disk);
+ 	if (r < 0)
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 62c8b6adac70e..b59a77b31b90d 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -356,7 +356,21 @@ static sector_t raid0_size(struct mddev *mddev, sector_t sectors, int raid_disks
+ 	return array_sectors;
+ }
+ 
+-static void raid0_free(struct mddev *mddev, void *priv);
++static void free_conf(struct mddev *mddev, struct r0conf *conf)
++{
++	kfree(conf->strip_zone);
++	kfree(conf->devlist);
++	kfree(conf);
++	mddev->private = NULL;
++}
++
++static void raid0_free(struct mddev *mddev, void *priv)
++{
++	struct r0conf *conf = priv;
++
++	free_conf(mddev, conf);
++	acct_bioset_exit(mddev);
++}
+ 
+ static int raid0_run(struct mddev *mddev)
+ {
+@@ -370,11 +384,16 @@ static int raid0_run(struct mddev *mddev)
+ 	if (md_check_no_bitmap(mddev))
+ 		return -EINVAL;
+ 
++	if (acct_bioset_init(mddev)) {
++		pr_err("md/raid0:%s: alloc acct bioset failed.\n", mdname(mddev));
++		return -ENOMEM;
++	}
++
+ 	/* if private is not null, we are here after takeover */
+ 	if (mddev->private == NULL) {
+ 		ret = create_strip_zones(mddev, &conf);
+ 		if (ret < 0)
+-			return ret;
++			goto exit_acct_set;
+ 		mddev->private = conf;
+ 	}
+ 	conf = mddev->private;
+@@ -413,17 +432,16 @@ static int raid0_run(struct mddev *mddev)
+ 	dump_zones(mddev);
+ 
+ 	ret = md_integrity_register(mddev);
++	if (ret)
++		goto free;
+ 
+ 	return ret;
+-}
+ 
+-static void raid0_free(struct mddev *mddev, void *priv)
+-{
+-	struct r0conf *conf = priv;
+-
+-	kfree(conf->strip_zone);
+-	kfree(conf->devlist);
+-	kfree(conf);
++free:
++	free_conf(mddev, conf);
++exit_acct_set:
++	acct_bioset_exit(mddev);
++	return ret;
+ }
+ 
+ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 9c1a5877cf9f6..d7c16b1d21da8 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7446,12 +7446,19 @@ static int raid5_run(struct mddev *mddev)
+ 	struct md_rdev *rdev;
+ 	struct md_rdev *journal_dev = NULL;
+ 	sector_t reshape_offset = 0;
+-	int i;
++	int i, ret = 0;
+ 	long long min_offset_diff = 0;
+ 	int first = 1;
+ 
+-	if (mddev_init_writes_pending(mddev) < 0)
++	if (acct_bioset_init(mddev)) {
++		pr_err("md/raid456:%s: alloc acct bioset failed.\n", mdname(mddev));
+ 		return -ENOMEM;
++	}
++
++	if (mddev_init_writes_pending(mddev) < 0) {
++		ret = -ENOMEM;
++		goto exit_acct_set;
++	}
+ 
+ 	if (mddev->recovery_cp != MaxSector)
+ 		pr_notice("md/raid:%s: not clean -- starting background reconstruction\n",
+@@ -7482,7 +7489,8 @@ static int raid5_run(struct mddev *mddev)
+ 	    (mddev->bitmap_info.offset || mddev->bitmap_info.file)) {
+ 		pr_notice("md/raid:%s: array cannot have both journal and bitmap\n",
+ 			  mdname(mddev));
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit_acct_set;
+ 	}
+ 
+ 	if (mddev->reshape_position != MaxSector) {
+@@ -7507,13 +7515,15 @@ static int raid5_run(struct mddev *mddev)
+ 		if (journal_dev) {
+ 			pr_warn("md/raid:%s: don't support reshape with journal - aborting.\n",
+ 				mdname(mddev));
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto exit_acct_set;
+ 		}
+ 
+ 		if (mddev->new_level != mddev->level) {
+ 			pr_warn("md/raid:%s: unsupported reshape required - aborting.\n",
+ 				mdname(mddev));
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto exit_acct_set;
+ 		}
+ 		old_disks = mddev->raid_disks - mddev->delta_disks;
+ 		/* reshape_position must be on a new-stripe boundary, and one
+@@ -7529,7 +7539,8 @@ static int raid5_run(struct mddev *mddev)
+ 		if (sector_div(here_new, chunk_sectors * new_data_disks)) {
+ 			pr_warn("md/raid:%s: reshape_position not on a stripe boundary\n",
+ 				mdname(mddev));
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto exit_acct_set;
+ 		}
+ 		reshape_offset = here_new * chunk_sectors;
+ 		/* here_new is the stripe we will write to */
+@@ -7551,7 +7562,8 @@ static int raid5_run(struct mddev *mddev)
+ 			else if (mddev->ro == 0) {
+ 				pr_warn("md/raid:%s: in-place reshape must be started in read-only mode - aborting\n",
+ 					mdname(mddev));
+-				return -EINVAL;
++				ret = -EINVAL;
++				goto exit_acct_set;
+ 			}
+ 		} else if (mddev->reshape_backwards
+ 		    ? (here_new * chunk_sectors + min_offset_diff <=
+@@ -7561,7 +7573,8 @@ static int raid5_run(struct mddev *mddev)
+ 			/* Reading from the same stripe as writing to - bad */
+ 			pr_warn("md/raid:%s: reshape_position too early for auto-recovery - aborting.\n",
+ 				mdname(mddev));
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto exit_acct_set;
+ 		}
+ 		pr_debug("md/raid:%s: reshape will continue\n", mdname(mddev));
+ 		/* OK, we should be able to continue; */
+@@ -7585,8 +7598,10 @@ static int raid5_run(struct mddev *mddev)
+ 	else
+ 		conf = mddev->private;
+ 
+-	if (IS_ERR(conf))
+-		return PTR_ERR(conf);
++	if (IS_ERR(conf)) {
++		ret = PTR_ERR(conf);
++		goto exit_acct_set;
++	}
+ 
+ 	if (test_bit(MD_HAS_JOURNAL, &mddev->flags)) {
+ 		if (!journal_dev) {
+@@ -7783,7 +7798,10 @@ abort:
+ 	free_conf(conf);
+ 	mddev->private = NULL;
+ 	pr_warn("md/raid:%s: failed to run raid set.\n", mdname(mddev));
+-	return -EIO;
++	ret = -EIO;
++exit_acct_set:
++	acct_bioset_exit(mddev);
++	return ret;
+ }
+ 
+ static void raid5_free(struct mddev *mddev, void *priv)
+@@ -7791,6 +7809,7 @@ static void raid5_free(struct mddev *mddev, void *priv)
+ 	struct r5conf *conf = priv;
+ 
+ 	free_conf(conf);
++	acct_bioset_exit(mddev);
+ 	mddev->to_remove = &raid5_attrs_group;
+ }
+ 
+diff --git a/drivers/media/Kconfig b/drivers/media/Kconfig
+index b07812657cee6..f3f24c63536b6 100644
+--- a/drivers/media/Kconfig
++++ b/drivers/media/Kconfig
+@@ -141,10 +141,10 @@ config MEDIA_TEST_SUPPORT
+ 	prompt "Test drivers" if MEDIA_SUPPORT_FILTER
+ 	default y if !MEDIA_SUPPORT_FILTER
+ 	help
+-	  Those drivers should not be used on production Kernels, but
+-	  can be useful on debug ones. It enables several dummy drivers
+-	  that simulate a real hardware. Very useful to test userspace
+-	  applications and to validate if the subsystem core is doesn't
++	  These drivers should not be used on production kernels, but
++	  can be useful on debug ones. This option enables several dummy drivers
++	  that simulate real hardware. Very useful to test userspace
++	  applications and to validate if the subsystem core doesn't
+ 	  have regressions.
+ 
+ 	  Say Y if you want to use some virtual test driver.
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index cd9cb354dc2c7..1f599e300e42e 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -161,10 +161,10 @@ static void cec_queue_event(struct cec_adapter *adap,
+ 	u64 ts = ktime_get_ns();
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list)
+ 		cec_queue_event_fh(fh, ev, ts);
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ 
+ /* Notify userspace that the CEC pin changed state at the given time. */
+@@ -178,11 +178,12 @@ void cec_queue_pin_cec_event(struct cec_adapter *adap, bool is_high,
+ 	};
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
+-	list_for_each_entry(fh, &adap->devnode.fhs, list)
++	mutex_lock(&adap->devnode.lock_fhs);
++	list_for_each_entry(fh, &adap->devnode.fhs, list) {
+ 		if (fh->mode_follower == CEC_MODE_MONITOR_PIN)
+ 			cec_queue_event_fh(fh, &ev, ktime_to_ns(ts));
+-	mutex_unlock(&adap->devnode.lock);
++	}
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ EXPORT_SYMBOL_GPL(cec_queue_pin_cec_event);
+ 
+@@ -195,10 +196,10 @@ void cec_queue_pin_hpd_event(struct cec_adapter *adap, bool is_high, ktime_t ts)
+ 	};
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list)
+ 		cec_queue_event_fh(fh, &ev, ktime_to_ns(ts));
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ EXPORT_SYMBOL_GPL(cec_queue_pin_hpd_event);
+ 
+@@ -211,10 +212,10 @@ void cec_queue_pin_5v_event(struct cec_adapter *adap, bool is_high, ktime_t ts)
+ 	};
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list)
+ 		cec_queue_event_fh(fh, &ev, ktime_to_ns(ts));
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ EXPORT_SYMBOL_GPL(cec_queue_pin_5v_event);
+ 
+@@ -286,12 +287,12 @@ static void cec_queue_msg_monitor(struct cec_adapter *adap,
+ 	u32 monitor_mode = valid_la ? CEC_MODE_MONITOR :
+ 				      CEC_MODE_MONITOR_ALL;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list) {
+ 		if (fh->mode_follower >= monitor_mode)
+ 			cec_queue_msg_fh(fh, msg);
+ 	}
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ 
+ /*
+@@ -302,12 +303,12 @@ static void cec_queue_msg_followers(struct cec_adapter *adap,
+ {
+ 	struct cec_fh *fh;
+ 
+-	mutex_lock(&adap->devnode.lock);
++	mutex_lock(&adap->devnode.lock_fhs);
+ 	list_for_each_entry(fh, &adap->devnode.fhs, list) {
+ 		if (fh->mode_follower == CEC_MODE_FOLLOWER)
+ 			cec_queue_msg_fh(fh, msg);
+ 	}
+-	mutex_unlock(&adap->devnode.lock);
++	mutex_unlock(&adap->devnode.lock_fhs);
+ }
+ 
+ /* Notify userspace of an adapter state change. */
+@@ -1573,6 +1574,7 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
+ 		/* Disabling monitor all mode should always succeed */
+ 		if (adap->monitor_all_cnt)
+ 			WARN_ON(call_op(adap, adap_monitor_all_enable, false));
++		/* serialize adap_enable */
+ 		mutex_lock(&adap->devnode.lock);
+ 		if (adap->needs_hpd || list_empty(&adap->devnode.fhs)) {
+ 			WARN_ON(adap->ops->adap_enable(adap, false));
+@@ -1584,14 +1586,16 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
+ 			return;
+ 	}
+ 
++	/* serialize adap_enable */
+ 	mutex_lock(&adap->devnode.lock);
+ 	adap->last_initiator = 0xff;
+ 	adap->transmit_in_progress = false;
+ 
+-	if ((adap->needs_hpd || list_empty(&adap->devnode.fhs)) &&
+-	    adap->ops->adap_enable(adap, true)) {
+-		mutex_unlock(&adap->devnode.lock);
+-		return;
++	if (adap->needs_hpd || list_empty(&adap->devnode.fhs)) {
++		if (adap->ops->adap_enable(adap, true)) {
++			mutex_unlock(&adap->devnode.lock);
++			return;
++		}
+ 	}
+ 
+ 	if (adap->monitor_all_cnt &&
+diff --git a/drivers/media/cec/core/cec-api.c b/drivers/media/cec/core/cec-api.c
+index 769e6b4cddce3..52c30e4e20055 100644
+--- a/drivers/media/cec/core/cec-api.c
++++ b/drivers/media/cec/core/cec-api.c
+@@ -586,6 +586,7 @@ static int cec_open(struct inode *inode, struct file *filp)
+ 		return err;
+ 	}
+ 
++	/* serialize adap_enable */
+ 	mutex_lock(&devnode->lock);
+ 	if (list_empty(&devnode->fhs) &&
+ 	    !adap->needs_hpd &&
+@@ -624,7 +625,9 @@ static int cec_open(struct inode *inode, struct file *filp)
+ 	}
+ #endif
+ 
++	mutex_lock(&devnode->lock_fhs);
+ 	list_add(&fh->list, &devnode->fhs);
++	mutex_unlock(&devnode->lock_fhs);
+ 	mutex_unlock(&devnode->lock);
+ 
+ 	return 0;
+@@ -653,8 +656,11 @@ static int cec_release(struct inode *inode, struct file *filp)
+ 		cec_monitor_all_cnt_dec(adap);
+ 	mutex_unlock(&adap->lock);
+ 
++	/* serialize adap_enable */
+ 	mutex_lock(&devnode->lock);
++	mutex_lock(&devnode->lock_fhs);
+ 	list_del(&fh->list);
++	mutex_unlock(&devnode->lock_fhs);
+ 	if (cec_is_registered(adap) && list_empty(&devnode->fhs) &&
+ 	    !adap->needs_hpd && adap->phys_addr == CEC_PHYS_ADDR_INVALID) {
+ 		WARN_ON(adap->ops->adap_enable(adap, false));
+diff --git a/drivers/media/cec/core/cec-core.c b/drivers/media/cec/core/cec-core.c
+index 551689d371a71..ec67065d52021 100644
+--- a/drivers/media/cec/core/cec-core.c
++++ b/drivers/media/cec/core/cec-core.c
+@@ -169,8 +169,10 @@ static void cec_devnode_unregister(struct cec_adapter *adap)
+ 	devnode->registered = false;
+ 	devnode->unregistered = true;
+ 
++	mutex_lock(&devnode->lock_fhs);
+ 	list_for_each_entry(fh, &devnode->fhs, list)
+ 		wake_up_interruptible(&fh->wait);
++	mutex_unlock(&devnode->lock_fhs);
+ 
+ 	mutex_unlock(&devnode->lock);
+ 
+@@ -272,6 +274,7 @@ struct cec_adapter *cec_allocate_adapter(const struct cec_adap_ops *ops,
+ 
+ 	/* adap->devnode initialization */
+ 	INIT_LIST_HEAD(&adap->devnode.fhs);
++	mutex_init(&adap->devnode.lock_fhs);
+ 	mutex_init(&adap->devnode.lock);
+ 
+ 	adap->kthread = kthread_run(cec_thread_func, adap, "cec-%s", name);
+diff --git a/drivers/media/cec/core/cec-pin.c b/drivers/media/cec/core/cec-pin.c
+index a60b6f03a6a1a..178edc85dc927 100644
+--- a/drivers/media/cec/core/cec-pin.c
++++ b/drivers/media/cec/core/cec-pin.c
+@@ -1033,6 +1033,7 @@ static int cec_pin_thread_func(void *_adap)
+ {
+ 	struct cec_adapter *adap = _adap;
+ 	struct cec_pin *pin = adap->pin;
++	bool irq_enabled = false;
+ 
+ 	for (;;) {
+ 		wait_event_interruptible(pin->kthread_waitq,
+@@ -1060,6 +1061,7 @@ static int cec_pin_thread_func(void *_adap)
+ 				ns_to_ktime(pin->work_rx_msg.rx_ts));
+ 			msg->len = 0;
+ 		}
++
+ 		if (pin->work_tx_status) {
+ 			unsigned int tx_status = pin->work_tx_status;
+ 
+@@ -1083,27 +1085,39 @@ static int cec_pin_thread_func(void *_adap)
+ 		switch (atomic_xchg(&pin->work_irq_change,
+ 				    CEC_PIN_IRQ_UNCHANGED)) {
+ 		case CEC_PIN_IRQ_DISABLE:
+-			pin->ops->disable_irq(adap);
++			if (irq_enabled) {
++				pin->ops->disable_irq(adap);
++				irq_enabled = false;
++			}
+ 			cec_pin_high(pin);
+ 			cec_pin_to_idle(pin);
+ 			hrtimer_start(&pin->timer, ns_to_ktime(0),
+ 				      HRTIMER_MODE_REL);
+ 			break;
+ 		case CEC_PIN_IRQ_ENABLE:
++			if (irq_enabled)
++				break;
+ 			pin->enable_irq_failed = !pin->ops->enable_irq(adap);
+ 			if (pin->enable_irq_failed) {
+ 				cec_pin_to_idle(pin);
+ 				hrtimer_start(&pin->timer, ns_to_ktime(0),
+ 					      HRTIMER_MODE_REL);
++			} else {
++				irq_enabled = true;
+ 			}
+ 			break;
+ 		default:
+ 			break;
+ 		}
+-
+ 		if (kthread_should_stop())
+ 			break;
+ 	}
++	if (pin->ops->disable_irq && irq_enabled)
++		pin->ops->disable_irq(adap);
++	hrtimer_cancel(&pin->timer);
++	cec_pin_read(pin);
++	cec_pin_to_idle(pin);
++	pin->state = CEC_ST_OFF;
+ 	return 0;
+ }
+ 
+@@ -1130,13 +1144,7 @@ static int cec_pin_adap_enable(struct cec_adapter *adap, bool enable)
+ 		hrtimer_start(&pin->timer, ns_to_ktime(0),
+ 			      HRTIMER_MODE_REL);
+ 	} else {
+-		if (pin->ops->disable_irq)
+-			pin->ops->disable_irq(adap);
+-		hrtimer_cancel(&pin->timer);
+ 		kthread_stop(pin->kthread);
+-		cec_pin_read(pin);
+-		cec_pin_to_idle(pin);
+-		pin->state = CEC_ST_OFF;
+ 	}
+ 	return 0;
+ }
+@@ -1157,11 +1165,8 @@ void cec_pin_start_timer(struct cec_pin *pin)
+ 	if (pin->state != CEC_ST_RX_IRQ)
+ 		return;
+ 
+-	atomic_set(&pin->work_irq_change, CEC_PIN_IRQ_UNCHANGED);
+-	pin->ops->disable_irq(pin->adap);
+-	cec_pin_high(pin);
+-	cec_pin_to_idle(pin);
+-	hrtimer_start(&pin->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
++	atomic_set(&pin->work_irq_change, CEC_PIN_IRQ_DISABLE);
++	wake_up_interruptible(&pin->kthread_waitq);
+ }
+ 
+ static int cec_pin_adap_transmit(struct cec_adapter *adap, u8 attempts,
+diff --git a/drivers/media/common/saa7146/saa7146_fops.c b/drivers/media/common/saa7146/saa7146_fops.c
+index baf5772c52a96..be32159777142 100644
+--- a/drivers/media/common/saa7146/saa7146_fops.c
++++ b/drivers/media/common/saa7146/saa7146_fops.c
+@@ -521,7 +521,7 @@ int saa7146_vv_init(struct saa7146_dev* dev, struct saa7146_ext_vv *ext_vv)
+ 		ERR("out of memory. aborting.\n");
+ 		kfree(vv);
+ 		v4l2_ctrl_handler_free(hdl);
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 
+ 	saa7146_video_uops.init(dev,vv);
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+index 556e42ba66e55..7c4096e621738 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+@@ -257,7 +257,7 @@ static void *vb2_dc_alloc(struct vb2_buffer *vb,
+ 		ret = vb2_dc_alloc_coherent(buf);
+ 
+ 	if (ret) {
+-		dev_err(dev, "dma alloc of size %ld failed\n", size);
++		dev_err(dev, "dma alloc of size %lu failed\n", size);
+ 		kfree(buf);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+@@ -298,9 +298,9 @@ static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
+ 
+ 	vma->vm_ops->open(vma);
+ 
+-	pr_debug("%s: mapped dma addr 0x%08lx at 0x%08lx, size %ld\n",
+-		__func__, (unsigned long)buf->dma_addr, vma->vm_start,
+-		buf->size);
++	pr_debug("%s: mapped dma addr 0x%08lx at 0x%08lx, size %lu\n",
++		 __func__, (unsigned long)buf->dma_addr, vma->vm_start,
++		 buf->size);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
+index 5d5a48475a54f..01f288fa37e0e 100644
+--- a/drivers/media/dvb-core/dmxdev.c
++++ b/drivers/media/dvb-core/dmxdev.c
+@@ -1413,7 +1413,7 @@ static const struct dvb_device dvbdev_dvr = {
+ };
+ int dvb_dmxdev_init(struct dmxdev *dmxdev, struct dvb_adapter *dvb_adapter)
+ {
+-	int i;
++	int i, ret;
+ 
+ 	if (dmxdev->demux->open(dmxdev->demux) < 0)
+ 		return -EUSERS;
+@@ -1432,14 +1432,26 @@ int dvb_dmxdev_init(struct dmxdev *dmxdev, struct dvb_adapter *dvb_adapter)
+ 					    DMXDEV_STATE_FREE);
+ 	}
+ 
+-	dvb_register_device(dvb_adapter, &dmxdev->dvbdev, &dvbdev_demux, dmxdev,
++	ret = dvb_register_device(dvb_adapter, &dmxdev->dvbdev, &dvbdev_demux, dmxdev,
+ 			    DVB_DEVICE_DEMUX, dmxdev->filternum);
+-	dvb_register_device(dvb_adapter, &dmxdev->dvr_dvbdev, &dvbdev_dvr,
++	if (ret < 0)
++		goto err_register_dvbdev;
++
++	ret = dvb_register_device(dvb_adapter, &dmxdev->dvr_dvbdev, &dvbdev_dvr,
+ 			    dmxdev, DVB_DEVICE_DVR, dmxdev->filternum);
++	if (ret < 0)
++		goto err_register_dvr_dvbdev;
+ 
+ 	dvb_ringbuffer_init(&dmxdev->dvr_buffer, NULL, 8192);
+ 
+ 	return 0;
++
++err_register_dvr_dvbdev:
++	dvb_unregister_device(dmxdev->dvbdev);
++err_register_dvbdev:
++	vfree(dmxdev->filter);
++	dmxdev->filter = NULL;
++	return ret;
+ }
+ 
+ EXPORT_SYMBOL(dvb_dmxdev_init);
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index bb02354a48b81..d67f2dd997d06 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -4473,8 +4473,10 @@ static struct dvb_frontend *dib8000_init(struct i2c_adapter *i2c_adap, u8 i2c_ad
+ 
+ 	state->timf_default = cfg->pll->timf;
+ 
+-	if (dib8000_identify(&state->i2c) == 0)
++	if (dib8000_identify(&state->i2c) == 0) {
++		kfree(fe);
+ 		goto error;
++	}
+ 
+ 	dibx000_init_i2c_master(&state->i2c_master, DIB8000, state->i2c.adap, state->i2c.addr);
+ 
+diff --git a/drivers/media/i2c/imx274.c b/drivers/media/i2c/imx274.c
+index 0dce92872176d..4d9b64c61f603 100644
+--- a/drivers/media/i2c/imx274.c
++++ b/drivers/media/i2c/imx274.c
+@@ -1367,6 +1367,10 @@ static int imx274_s_frame_interval(struct v4l2_subdev *sd,
+ 	int min, max, def;
+ 	int ret;
+ 
++	ret = pm_runtime_resume_and_get(&imx274->client->dev);
++	if (ret < 0)
++		return ret;
++
+ 	mutex_lock(&imx274->lock);
+ 	ret = imx274_set_frame_interval(imx274, fi->interval);
+ 
+@@ -1398,6 +1402,7 @@ static int imx274_s_frame_interval(struct v4l2_subdev *sd,
+ 
+ unlock:
+ 	mutex_unlock(&imx274->lock);
++	pm_runtime_put(&imx274->client->dev);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/i2c/ov8865.c b/drivers/media/i2c/ov8865.c
+index ce50f3ea87b8e..92f6c3a940cfb 100644
+--- a/drivers/media/i2c/ov8865.c
++++ b/drivers/media/i2c/ov8865.c
+@@ -2330,27 +2330,27 @@ static int ov8865_sensor_power(struct ov8865_sensor *sensor, bool on)
+ 		if (ret) {
+ 			dev_err(sensor->dev,
+ 				"failed to enable DOVDD regulator\n");
+-			goto disable;
++			return ret;
+ 		}
+ 
+ 		ret = regulator_enable(sensor->avdd);
+ 		if (ret) {
+ 			dev_err(sensor->dev,
+ 				"failed to enable AVDD regulator\n");
+-			goto disable;
++			goto disable_dovdd;
+ 		}
+ 
+ 		ret = regulator_enable(sensor->dvdd);
+ 		if (ret) {
+ 			dev_err(sensor->dev,
+ 				"failed to enable DVDD regulator\n");
+-			goto disable;
++			goto disable_avdd;
+ 		}
+ 
+ 		ret = clk_prepare_enable(sensor->extclk);
+ 		if (ret) {
+ 			dev_err(sensor->dev, "failed to enable EXTCLK clock\n");
+-			goto disable;
++			goto disable_dvdd;
+ 		}
+ 
+ 		gpiod_set_value_cansleep(sensor->reset, 0);
+@@ -2359,14 +2359,16 @@ static int ov8865_sensor_power(struct ov8865_sensor *sensor, bool on)
+ 		/* Time to enter streaming mode according to power timings. */
+ 		usleep_range(10000, 12000);
+ 	} else {
+-disable:
+ 		gpiod_set_value_cansleep(sensor->powerdown, 1);
+ 		gpiod_set_value_cansleep(sensor->reset, 1);
+ 
+ 		clk_disable_unprepare(sensor->extclk);
+ 
++disable_dvdd:
+ 		regulator_disable(sensor->dvdd);
++disable_avdd:
+ 		regulator_disable(sensor->avdd);
++disable_dovdd:
+ 		regulator_disable(sensor->dovdd);
+ 	}
+ 
+@@ -2891,14 +2893,16 @@ static int ov8865_probe(struct i2c_client *client)
+ 	if (ret)
+ 		goto error_mutex;
+ 
++	mutex_lock(&sensor->mutex);
+ 	ret = ov8865_state_init(sensor);
++	mutex_unlock(&sensor->mutex);
+ 	if (ret)
+ 		goto error_ctrls;
+ 
+ 	/* Runtime PM */
+ 
+-	pm_runtime_enable(sensor->dev);
+ 	pm_runtime_set_suspended(sensor->dev);
++	pm_runtime_enable(sensor->dev);
+ 
+ 	/* V4L2 subdev register */
+ 
+diff --git a/drivers/media/pci/b2c2/flexcop-pci.c b/drivers/media/pci/b2c2/flexcop-pci.c
+index 6a4c7cb0ad0f9..486c8ec0fa60d 100644
+--- a/drivers/media/pci/b2c2/flexcop-pci.c
++++ b/drivers/media/pci/b2c2/flexcop-pci.c
+@@ -185,6 +185,8 @@ static irqreturn_t flexcop_pci_isr(int irq, void *dev_id)
+ 		dma_addr_t cur_addr =
+ 			fc->read_ibi_reg(fc,dma1_008).dma_0x8.dma_cur_addr << 2;
+ 		u32 cur_pos = cur_addr - fc_pci->dma[0].dma_addr0;
++		if (cur_pos > fc_pci->dma[0].size * 2)
++			goto error;
+ 
+ 		deb_irq("%u irq: %08x cur_addr: %llx: cur_pos: %08x, last_cur_pos: %08x ",
+ 				jiffies_to_usecs(jiffies - fc_pci->last_irq),
+@@ -225,6 +227,7 @@ static irqreturn_t flexcop_pci_isr(int irq, void *dev_id)
+ 		ret = IRQ_NONE;
+ 	}
+ 
++error:
+ 	spin_unlock_irqrestore(&fc_pci->irq_lock, flags);
+ 	return ret;
+ }
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.c b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+index 67c467d3c81f9..0b586b4e537ef 100644
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.c
++++ b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+@@ -238,8 +238,10 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ 			goto err_put_adev;
+ 
+ 		status = acpi_get_physical_device_location(adev->handle, &sensor->pld);
+-		if (ACPI_FAILURE(status))
++		if (ACPI_FAILURE(status)) {
++			ret = -ENODEV;
+ 			goto err_put_adev;
++		}
+ 
+ 		if (sensor->ssdb.lanes > CIO2_MAX_LANES) {
+ 			dev_err(&adev->dev,
+diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c
+index 2214c74bbbf15..3947701cd6c7e 100644
+--- a/drivers/media/pci/saa7146/hexium_gemini.c
++++ b/drivers/media/pci/saa7146/hexium_gemini.c
+@@ -284,7 +284,12 @@ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_d
+ 	hexium_set_input(hexium, 0);
+ 	hexium->cur_input = 0;
+ 
+-	saa7146_vv_init(dev, &vv_data);
++	ret = saa7146_vv_init(dev, &vv_data);
++	if (ret) {
++		i2c_del_adapter(&hexium->i2c_adapter);
++		kfree(hexium);
++		return ret;
++	}
+ 
+ 	vv_data.vid_ops.vidioc_enum_input = vidioc_enum_input;
+ 	vv_data.vid_ops.vidioc_g_input = vidioc_g_input;
+diff --git a/drivers/media/pci/saa7146/hexium_orion.c b/drivers/media/pci/saa7146/hexium_orion.c
+index 39d14c179d229..2eb4bee16b71f 100644
+--- a/drivers/media/pci/saa7146/hexium_orion.c
++++ b/drivers/media/pci/saa7146/hexium_orion.c
+@@ -355,10 +355,16 @@ static struct saa7146_ext_vv vv_data;
+ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_data *info)
+ {
+ 	struct hexium *hexium = (struct hexium *) dev->ext_priv;
++	int ret;
+ 
+ 	DEB_EE("\n");
+ 
+-	saa7146_vv_init(dev, &vv_data);
++	ret = saa7146_vv_init(dev, &vv_data);
++	if (ret) {
++		pr_err("Error in saa7146_vv_init()\n");
++		return ret;
++	}
++
+ 	vv_data.vid_ops.vidioc_enum_input = vidioc_enum_input;
+ 	vv_data.vid_ops.vidioc_g_input = vidioc_g_input;
+ 	vv_data.vid_ops.vidioc_s_input = vidioc_s_input;
+diff --git a/drivers/media/pci/saa7146/mxb.c b/drivers/media/pci/saa7146/mxb.c
+index 73fc901ecf3db..bf0b9b0914cd5 100644
+--- a/drivers/media/pci/saa7146/mxb.c
++++ b/drivers/media/pci/saa7146/mxb.c
+@@ -683,10 +683,16 @@ static struct saa7146_ext_vv vv_data;
+ static int mxb_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_data *info)
+ {
+ 	struct mxb *mxb;
++	int ret;
+ 
+ 	DEB_EE("dev:%p\n", dev);
+ 
+-	saa7146_vv_init(dev, &vv_data);
++	ret = saa7146_vv_init(dev, &vv_data);
++	if (ret) {
++		ERR("Error in saa7146_vv_init()");
++		return ret;
++	}
++
+ 	if (mxb_probe(dev)) {
+ 		saa7146_vv_release(dev);
+ 		return -1;
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index cad3f97515aef..7a24daf7165a4 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -539,6 +539,10 @@ static void aspeed_video_enable_mode_detect(struct aspeed_video *video)
+ 	aspeed_video_update(video, VE_INTERRUPT_CTRL, 0,
+ 			    VE_INTERRUPT_MODE_DETECT);
+ 
++	/* Disable mode detect in order to re-trigger */
++	aspeed_video_update(video, VE_SEQ_CTRL,
++			    VE_SEQ_CTRL_TRIG_MODE_DET, 0);
++
+ 	/* Trigger mode detect */
+ 	aspeed_video_update(video, VE_SEQ_CTRL, 0, VE_SEQ_CTRL_TRIG_MODE_DET);
+ }
+@@ -591,6 +595,8 @@ static void aspeed_video_irq_res_change(struct aspeed_video *video, ulong delay)
+ 	set_bit(VIDEO_RES_CHANGE, &video->flags);
+ 	clear_bit(VIDEO_FRAME_INPRG, &video->flags);
+ 
++	video->v4l2_input_status = V4L2_IN_ST_NO_SIGNAL;
++
+ 	aspeed_video_off(video);
+ 	aspeed_video_bufs_done(video, VB2_BUF_STATE_ERROR);
+ 
+@@ -824,10 +830,6 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 			return;
+ 		}
+ 
+-		/* Disable mode detect in order to re-trigger */
+-		aspeed_video_update(video, VE_SEQ_CTRL,
+-				    VE_SEQ_CTRL_TRIG_MODE_DET, 0);
+-
+ 		aspeed_video_check_and_set_polarity(video);
+ 
+ 		aspeed_video_enable_mode_detect(video);
+@@ -1375,7 +1377,6 @@ static void aspeed_video_resolution_work(struct work_struct *work)
+ 	struct delayed_work *dwork = to_delayed_work(work);
+ 	struct aspeed_video *video = container_of(dwork, struct aspeed_video,
+ 						  res_work);
+-	u32 input_status = video->v4l2_input_status;
+ 
+ 	aspeed_video_on(video);
+ 
+@@ -1388,8 +1389,7 @@ static void aspeed_video_resolution_work(struct work_struct *work)
+ 	aspeed_video_get_resolution(video);
+ 
+ 	if (video->detected_timings.width != video->active_timings.width ||
+-	    video->detected_timings.height != video->active_timings.height ||
+-	    input_status != video->v4l2_input_status) {
++	    video->detected_timings.height != video->active_timings.height) {
+ 		static const struct v4l2_event ev = {
+ 			.type = V4L2_EVENT_SOURCE_CHANGE,
+ 			.u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION,
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index 0e312b0842d7f..9a2640a9c75c6 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -1537,11 +1537,13 @@ static void coda_pic_run_work(struct work_struct *work)
+ 
+ 	if (!wait_for_completion_timeout(&ctx->completion,
+ 					 msecs_to_jiffies(1000))) {
+-		dev_err(dev->dev, "CODA PIC_RUN timeout\n");
++		if (ctx->use_bit) {
++			dev_err(dev->dev, "CODA PIC_RUN timeout\n");
+ 
+-		ctx->hold = true;
++			ctx->hold = true;
+ 
+-		coda_hw_reset(ctx);
++			coda_hw_reset(ctx);
++		}
+ 
+ 		if (ctx->ops->run_timeout)
+ 			ctx->ops->run_timeout(ctx);
+diff --git a/drivers/media/platform/coda/coda-jpeg.c b/drivers/media/platform/coda/coda-jpeg.c
+index b11cfbe166dd3..a72f4655e5ad5 100644
+--- a/drivers/media/platform/coda/coda-jpeg.c
++++ b/drivers/media/platform/coda/coda-jpeg.c
+@@ -1127,7 +1127,8 @@ static int coda9_jpeg_prepare_encode(struct coda_ctx *ctx)
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_BT_PTR);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_WD_PTR);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_BBSR);
+-	coda_write(dev, 0, CODA9_REG_JPEG_BBC_STRM_CTRL);
++	coda_write(dev, BIT(31) | ((end_addr - start_addr - header_len) / 256),
++		   CODA9_REG_JPEG_BBC_STRM_CTRL);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_CTRL);
+ 	coda_write(dev, 0, CODA9_REG_JPEG_GBU_FF_RPTR);
+ 	coda_write(dev, 127, CODA9_REG_JPEG_GBU_BBER);
+@@ -1257,6 +1258,23 @@ static void coda9_jpeg_finish_encode(struct coda_ctx *ctx)
+ 	coda_hw_reset(ctx);
+ }
+ 
++static void coda9_jpeg_encode_timeout(struct coda_ctx *ctx)
++{
++	struct coda_dev *dev = ctx->dev;
++	u32 end_addr, wr_ptr;
++
++	/* Handle missing BBC overflow interrupt via timeout */
++	end_addr = coda_read(dev, CODA9_REG_JPEG_BBC_END_ADDR);
++	wr_ptr = coda_read(dev, CODA9_REG_JPEG_BBC_WR_PTR);
++	if (wr_ptr >= end_addr - 256) {
++		v4l2_err(&dev->v4l2_dev, "JPEG too large for capture buffer\n");
++		coda9_jpeg_finish_encode(ctx);
++		return;
++	}
++
++	coda_hw_reset(ctx);
++}
++
+ static void coda9_jpeg_release(struct coda_ctx *ctx)
+ {
+ 	int i;
+@@ -1276,6 +1294,7 @@ const struct coda_context_ops coda9_jpeg_encode_ops = {
+ 	.start_streaming = coda9_jpeg_start_encoding,
+ 	.prepare_run = coda9_jpeg_prepare_encode,
+ 	.finish_run = coda9_jpeg_finish_encode,
++	.run_timeout = coda9_jpeg_encode_timeout,
+ 	.release = coda9_jpeg_release,
+ };
+ 
+diff --git a/drivers/media/platform/coda/imx-vdoa.c b/drivers/media/platform/coda/imx-vdoa.c
+index 6996d4571e363..00643f37b3e6f 100644
+--- a/drivers/media/platform/coda/imx-vdoa.c
++++ b/drivers/media/platform/coda/imx-vdoa.c
+@@ -287,7 +287,11 @@ static int vdoa_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int ret;
+ 
+-	dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
++	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
++	if (ret) {
++		dev_err(&pdev->dev, "DMA enable failed\n");
++		return ret;
++	}
+ 
+ 	vdoa = devm_kzalloc(&pdev->dev, sizeof(*vdoa), GFP_KERNEL);
+ 	if (!vdoa)
+diff --git a/drivers/media/platform/imx-pxp.c b/drivers/media/platform/imx-pxp.c
+index 723b096fedd10..b7174778db539 100644
+--- a/drivers/media/platform/imx-pxp.c
++++ b/drivers/media/platform/imx-pxp.c
+@@ -1659,6 +1659,8 @@ static int pxp_probe(struct platform_device *pdev)
+ 	if (irq < 0)
+ 		return irq;
+ 
++	spin_lock_init(&dev->irqlock);
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, pxp_irq_handler,
+ 			IRQF_ONESHOT, dev_name(&pdev->dev), dev);
+ 	if (ret < 0) {
+@@ -1676,8 +1678,6 @@ static int pxp_probe(struct platform_device *pdev)
+ 		goto err_clk;
+ 	}
+ 
+-	spin_lock_init(&dev->irqlock);
+-
+ 	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+ 	if (ret)
+ 		goto err_clk;
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+index e6e6a8203eebf..8277c44209b5b 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+@@ -358,6 +358,8 @@ err_media_reg:
+ 	if (dev->vdec_pdata->uses_stateless_api)
+ 		v4l2_m2m_unregister_media_controller(dev->m2m_dev_dec);
+ err_reg_cont:
++	if (dev->vdec_pdata->uses_stateless_api)
++		media_device_cleanup(&dev->mdev_dec);
+ 	destroy_workqueue(dev->decode_workqueue);
+ err_event_workq:
+ 	v4l2_m2m_release(dev->m2m_dev_dec);
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
+index eed67394cf463..f898226fc53e5 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
+@@ -214,11 +214,11 @@ static int fops_vcodec_release(struct file *file)
+ 	mtk_v4l2_debug(1, "[%d] encoder", ctx->id);
+ 	mutex_lock(&dev->dev_mutex);
+ 
++	v4l2_m2m_ctx_release(ctx->m2m_ctx);
+ 	mtk_vcodec_enc_release(ctx);
+ 	v4l2_fh_del(&ctx->fh);
+ 	v4l2_fh_exit(&ctx->fh);
+ 	v4l2_ctrl_handler_free(&ctx->ctrl_hdl);
+-	v4l2_m2m_ctx_release(ctx->m2m_ctx);
+ 
+ 	list_del_init(&ctx->list);
+ 	kfree(ctx);
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index f5fa81896012d..877eca1258037 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -350,11 +350,11 @@ static int venus_probe(struct platform_device *pdev)
+ 
+ 	ret = venus_firmware_init(core);
+ 	if (ret)
+-		goto err_runtime_disable;
++		goto err_of_depopulate;
+ 
+ 	ret = venus_boot(core);
+ 	if (ret)
+-		goto err_runtime_disable;
++		goto err_firmware_deinit;
+ 
+ 	ret = hfi_core_resume(core, true);
+ 	if (ret)
+@@ -386,6 +386,10 @@ err_dev_unregister:
+ 	v4l2_device_unregister(&core->v4l2_dev);
+ err_venus_shutdown:
+ 	venus_shutdown(core);
++err_firmware_deinit:
++	venus_firmware_deinit(core);
++err_of_depopulate:
++	of_platform_depopulate(dev);
+ err_runtime_disable:
+ 	pm_runtime_put_noidle(dev);
+ 	pm_runtime_set_suspended(dev);
+@@ -473,7 +477,8 @@ static __maybe_unused int venus_runtime_suspend(struct device *dev)
+ err_video_path:
+ 	icc_set_bw(core->cpucfg_path, kbps_to_icc(1000), 0);
+ err_cpucfg_path:
+-	pm_ops->core_power(core, POWER_ON);
++	if (pm_ops->core_power)
++		pm_ops->core_power(core, POWER_ON);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index cedc664ba755f..cb48c5ff3dee2 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -163,14 +163,12 @@ static u32 load_per_type(struct venus_core *core, u32 session_type)
+ 	struct venus_inst *inst = NULL;
+ 	u32 mbs_per_sec = 0;
+ 
+-	mutex_lock(&core->lock);
+ 	list_for_each_entry(inst, &core->instances, list) {
+ 		if (inst->session_type != session_type)
+ 			continue;
+ 
+ 		mbs_per_sec += load_per_instance(inst);
+ 	}
+-	mutex_unlock(&core->lock);
+ 
+ 	return mbs_per_sec;
+ }
+@@ -219,14 +217,12 @@ static int load_scale_bw(struct venus_core *core)
+ 	struct venus_inst *inst = NULL;
+ 	u32 mbs_per_sec, avg, peak, total_avg = 0, total_peak = 0;
+ 
+-	mutex_lock(&core->lock);
+ 	list_for_each_entry(inst, &core->instances, list) {
+ 		mbs_per_sec = load_per_instance(inst);
+ 		mbs_to_bw(inst, mbs_per_sec, &avg, &peak);
+ 		total_avg += avg;
+ 		total_peak += peak;
+ 	}
+-	mutex_unlock(&core->lock);
+ 
+ 	/*
+ 	 * keep minimum bandwidth vote for "video-mem" path,
+@@ -253,8 +249,9 @@ static int load_scale_v1(struct venus_inst *inst)
+ 	struct device *dev = core->dev;
+ 	u32 mbs_per_sec;
+ 	unsigned int i;
+-	int ret;
++	int ret = 0;
+ 
++	mutex_lock(&core->lock);
+ 	mbs_per_sec = load_per_type(core, VIDC_SESSION_TYPE_ENC) +
+ 		      load_per_type(core, VIDC_SESSION_TYPE_DEC);
+ 
+@@ -279,17 +276,19 @@ set_freq:
+ 	if (ret) {
+ 		dev_err(dev, "failed to set clock rate %lu (%d)\n",
+ 			freq, ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	ret = load_scale_bw(core);
+ 	if (ret) {
+ 		dev_err(dev, "failed to set bandwidth (%d)\n",
+ 			ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+-	return 0;
++exit:
++	mutex_unlock(&core->lock);
++	return ret;
+ }
+ 
+ static int core_get_v1(struct venus_core *core)
+@@ -587,8 +586,8 @@ min_loaded_core(struct venus_inst *inst, u32 *min_coreid, u32 *min_load, bool lo
+ 		if (inst->session_type == VIDC_SESSION_TYPE_DEC)
+ 			vpp_freq = inst_pos->clk_data.vpp_freq;
+ 		else if (inst->session_type == VIDC_SESSION_TYPE_ENC)
+-			vpp_freq = low_power ? inst_pos->clk_data.vpp_freq :
+-				inst_pos->clk_data.low_power_freq;
++			vpp_freq = low_power ? inst_pos->clk_data.low_power_freq :
++				inst_pos->clk_data.vpp_freq;
+ 		else
+ 			continue;
+ 
+@@ -1116,13 +1115,13 @@ static int load_scale_v4(struct venus_inst *inst)
+ 	struct device *dev = core->dev;
+ 	unsigned long freq = 0, freq_core1 = 0, freq_core2 = 0;
+ 	unsigned long filled_len = 0;
+-	int i, ret;
++	int i, ret = 0;
+ 
+ 	for (i = 0; i < inst->num_input_bufs; i++)
+ 		filled_len = max(filled_len, inst->payloads[i]);
+ 
+ 	if (inst->session_type == VIDC_SESSION_TYPE_DEC && !filled_len)
+-		return 0;
++		return ret;
+ 
+ 	freq = calculate_inst_freq(inst, filled_len);
+ 	inst->clk_data.freq = freq;
+@@ -1138,7 +1137,6 @@ static int load_scale_v4(struct venus_inst *inst)
+ 			freq_core2 += inst->clk_data.freq;
+ 		}
+ 	}
+-	mutex_unlock(&core->lock);
+ 
+ 	freq = max(freq_core1, freq_core2);
+ 
+@@ -1163,17 +1161,19 @@ set_freq:
+ 	if (ret) {
+ 		dev_err(dev, "failed to set clock rate %lu (%d)\n",
+ 			freq, ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	ret = load_scale_bw(core);
+ 	if (ret) {
+ 		dev_err(dev, "failed to set bandwidth (%d)\n",
+ 			ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+-	return 0;
++exit:
++	mutex_unlock(&core->lock);
++	return ret;
+ }
+ 
+ static const struct venus_pm_ops pm_ops_v4 = {
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index 11848d0c4a55c..c9b2700e12550 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -542,16 +542,23 @@ static int rcsi2_wait_phy_start(struct rcar_csi2 *priv,
+ static int rcsi2_set_phypll(struct rcar_csi2 *priv, unsigned int mbps)
+ {
+ 	const struct rcsi2_mbps_reg *hsfreq;
++	const struct rcsi2_mbps_reg *hsfreq_prev = NULL;
+ 
+-	for (hsfreq = priv->info->hsfreqrange; hsfreq->mbps != 0; hsfreq++)
++	for (hsfreq = priv->info->hsfreqrange; hsfreq->mbps != 0; hsfreq++) {
+ 		if (hsfreq->mbps >= mbps)
+ 			break;
++		hsfreq_prev = hsfreq;
++	}
+ 
+ 	if (!hsfreq->mbps) {
+ 		dev_err(priv->dev, "Unsupported PHY speed (%u Mbps)", mbps);
+ 		return -ERANGE;
+ 	}
+ 
++	if (hsfreq_prev &&
++	    ((mbps - hsfreq_prev->mbps) <= (hsfreq->mbps - mbps)))
++		hsfreq = hsfreq_prev;
++
+ 	rcsi2_write(priv, PHYPLL_REG, PHYPLL_HSFREQRANGE(hsfreq->reg));
+ 
+ 	return 0;
+@@ -1097,10 +1104,17 @@ static int rcsi2_phtw_write_mbps(struct rcar_csi2 *priv, unsigned int mbps,
+ 				 const struct rcsi2_mbps_reg *values, u16 code)
+ {
+ 	const struct rcsi2_mbps_reg *value;
++	const struct rcsi2_mbps_reg *prev_value = NULL;
+ 
+-	for (value = values; value->mbps; value++)
++	for (value = values; value->mbps; value++) {
+ 		if (value->mbps >= mbps)
+ 			break;
++		prev_value = value;
++	}
++
++	if (prev_value &&
++	    ((mbps - prev_value->mbps) <= (value->mbps - mbps)))
++		value = prev_value;
+ 
+ 	if (!value->mbps) {
+ 		dev_err(priv->dev, "Unsupported PHY speed (%u Mbps)", mbps);
+diff --git a/drivers/media/platform/rcar-vin/rcar-v4l2.c b/drivers/media/platform/rcar-vin/rcar-v4l2.c
+index a5bfa76fdac6e..2e60b9fce03b0 100644
+--- a/drivers/media/platform/rcar-vin/rcar-v4l2.c
++++ b/drivers/media/platform/rcar-vin/rcar-v4l2.c
+@@ -179,20 +179,27 @@ static void rvin_format_align(struct rvin_dev *vin, struct v4l2_pix_format *pix)
+ 		break;
+ 	}
+ 
+-	/* HW limit width to a multiple of 32 (2^5) for NV12/16 else 2 (2^1) */
++	/* Hardware limits width alignment based on format. */
+ 	switch (pix->pixelformat) {
++	/* Multiple of 32 (2^5) for NV12/16. */
+ 	case V4L2_PIX_FMT_NV12:
+ 	case V4L2_PIX_FMT_NV16:
+ 		walign = 5;
+ 		break;
+-	default:
++	/* Multiple of 2 (2^1) for YUV. */
++	case V4L2_PIX_FMT_YUYV:
++	case V4L2_PIX_FMT_UYVY:
+ 		walign = 1;
+ 		break;
++	/* No multiple for RGB. */
++	default:
++		walign = 0;
++		break;
+ 	}
+ 
+ 	/* Limit to VIN capabilities */
+-	v4l_bound_align_image(&pix->width, 2, vin->info->max_width, walign,
+-			      &pix->height, 4, vin->info->max_height, 2, 0);
++	v4l_bound_align_image(&pix->width, 5, vin->info->max_width, walign,
++			      &pix->height, 2, vin->info->max_height, 0, 0);
+ 
+ 	pix->bytesperline = rvin_format_bytesperline(vin, pix);
+ 	pix->sizeimage = rvin_format_sizeimage(pix);
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+index 50b166c49a03a..3f5cfa7eb9372 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+@@ -462,7 +462,7 @@ static void rkisp1_debug_init(struct rkisp1_device *rkisp1)
+ {
+ 	struct rkisp1_debug *debug = &rkisp1->debug;
+ 
+-	debug->debugfs_dir = debugfs_create_dir(RKISP1_DRIVER_NAME, NULL);
++	debug->debugfs_dir = debugfs_create_dir(dev_name(rkisp1->dev), NULL);
+ 	debugfs_create_ulong("data_loss", 0444, debug->debugfs_dir,
+ 			     &debug->data_loss);
+ 	debugfs_create_ulong("outform_size_err", 0444,  debug->debugfs_dir,
+diff --git a/drivers/media/radio/si470x/radio-si470x-i2c.c b/drivers/media/radio/si470x/radio-si470x-i2c.c
+index a972c0705ac79..76d39e2e87706 100644
+--- a/drivers/media/radio/si470x/radio-si470x-i2c.c
++++ b/drivers/media/radio/si470x/radio-si470x-i2c.c
+@@ -368,7 +368,7 @@ static int si470x_i2c_probe(struct i2c_client *client)
+ 	if (radio->hdl.error) {
+ 		retval = radio->hdl.error;
+ 		dev_err(&client->dev, "couldn't register control\n");
+-		goto err_dev;
++		goto err_all;
+ 	}
+ 
+ 	/* video device initialization */
+@@ -463,7 +463,6 @@ static int si470x_i2c_probe(struct i2c_client *client)
+ 	return 0;
+ err_all:
+ 	v4l2_ctrl_handler_free(&radio->hdl);
+-err_dev:
+ 	v4l2_device_unregister(&radio->v4l2_dev);
+ err_initial:
+ 	return retval;
+diff --git a/drivers/media/rc/igorplugusb.c b/drivers/media/rc/igorplugusb.c
+index effaa5751d6c9..3e9988ee785f0 100644
+--- a/drivers/media/rc/igorplugusb.c
++++ b/drivers/media/rc/igorplugusb.c
+@@ -64,9 +64,11 @@ static void igorplugusb_irdata(struct igorplugusb *ir, unsigned len)
+ 	if (start >= len) {
+ 		dev_err(ir->dev, "receive overflow invalid: %u", overflow);
+ 	} else {
+-		if (overflow > 0)
++		if (overflow > 0) {
+ 			dev_warn(ir->dev, "receive overflow, at least %u lost",
+ 								overflow);
++			ir_raw_event_reset(ir->rc);
++		}
+ 
+ 		do {
+ 			rawir.duration = ir->buf_in[i] * 85;
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index d09bee82c04c5..2dc810f5a73f7 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1430,7 +1430,7 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ 	 */
+ 	ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
+ 			      USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
+-			      data, USB_CTRL_MSG_SZ, HZ * 3);
++			      data, USB_CTRL_MSG_SZ, 3000);
+ 	dev_dbg(dev, "set address - ret = %d", ret);
+ 	dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
+ 						data[0], data[1]);
+@@ -1438,20 +1438,20 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ 	/* set feature: bit rate 38400 bps */
+ 	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+ 			      USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
+-			      0xc04e, 0x0000, NULL, 0, HZ * 3);
++			      0xc04e, 0x0000, NULL, 0, 3000);
+ 
+ 	dev_dbg(dev, "set feature - ret = %d", ret);
+ 
+ 	/* bRequest 4: set char length to 8 bits */
+ 	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+ 			      4, USB_TYPE_VENDOR,
+-			      0x0808, 0x0000, NULL, 0, HZ * 3);
++			      0x0808, 0x0000, NULL, 0, 3000);
+ 	dev_dbg(dev, "set char length - retB = %d", ret);
+ 
+ 	/* bRequest 2: set handshaking to use DTR/DSR */
+ 	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
+ 			      2, USB_TYPE_VENDOR,
+-			      0x0000, 0x0100, NULL, 0, HZ * 3);
++			      0x0000, 0x0100, NULL, 0, 3000);
+ 	dev_dbg(dev, "set handshake  - retC = %d", ret);
+ 
+ 	/* device resume */
+diff --git a/drivers/media/rc/redrat3.c b/drivers/media/rc/redrat3.c
+index ac85464864b9d..cb22316b3f002 100644
+--- a/drivers/media/rc/redrat3.c
++++ b/drivers/media/rc/redrat3.c
+@@ -404,7 +404,7 @@ static int redrat3_send_cmd(int cmd, struct redrat3_dev *rr3)
+ 	udev = rr3->udev;
+ 	res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), cmd,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			      0x0000, 0x0000, data, sizeof(u8), HZ * 10);
++			      0x0000, 0x0000, data, sizeof(u8), 10000);
+ 
+ 	if (res < 0) {
+ 		dev_err(rr3->dev, "%s: Error sending rr3 cmd res %d, data %d",
+@@ -480,7 +480,7 @@ static u32 redrat3_get_timeout(struct redrat3_dev *rr3)
+ 	pipe = usb_rcvctrlpipe(rr3->udev, 0);
+ 	ret = usb_control_msg(rr3->udev, pipe, RR3_GET_IR_PARAM,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			      RR3_IR_IO_SIG_TIMEOUT, 0, tmp, len, HZ * 5);
++			      RR3_IR_IO_SIG_TIMEOUT, 0, tmp, len, 5000);
+ 	if (ret != len)
+ 		dev_warn(rr3->dev, "Failed to read timeout from hardware\n");
+ 	else {
+@@ -510,7 +510,7 @@ static int redrat3_set_timeout(struct rc_dev *rc_dev, unsigned int timeoutus)
+ 	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), RR3_SET_IR_PARAM,
+ 		     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 		     RR3_IR_IO_SIG_TIMEOUT, 0, timeout, sizeof(*timeout),
+-		     HZ * 25);
++		     25000);
+ 	dev_dbg(dev, "set ir parm timeout %d ret 0x%02x\n",
+ 						be32_to_cpu(*timeout), ret);
+ 
+@@ -542,32 +542,32 @@ static void redrat3_reset(struct redrat3_dev *rr3)
+ 	*val = 0x01;
+ 	rc = usb_control_msg(udev, rxpipe, RR3_RESET,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			     RR3_CPUCS_REG_ADDR, 0, val, len, HZ * 25);
++			     RR3_CPUCS_REG_ADDR, 0, val, len, 25000);
+ 	dev_dbg(dev, "reset returned 0x%02x\n", rc);
+ 
+ 	*val = length_fuzz;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_LENGTH_FUZZ, 0, val, len, HZ * 25);
++			     RR3_IR_IO_LENGTH_FUZZ, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm len fuzz %d rc 0x%02x\n", *val, rc);
+ 
+ 	*val = (65536 - (minimum_pause * 2000)) / 256;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_MIN_PAUSE, 0, val, len, HZ * 25);
++			     RR3_IR_IO_MIN_PAUSE, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm min pause %d rc 0x%02x\n", *val, rc);
+ 
+ 	*val = periods_measure_carrier;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_PERIODS_MF, 0, val, len, HZ * 25);
++			     RR3_IR_IO_PERIODS_MF, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm periods measure carrier %d rc 0x%02x", *val,
+ 									rc);
+ 
+ 	*val = RR3_DRIVER_MAXLENS;
+ 	rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-			     RR3_IR_IO_MAX_LENGTHS, 0, val, len, HZ * 25);
++			     RR3_IR_IO_MAX_LENGTHS, 0, val, len, 25000);
+ 	dev_dbg(dev, "set ir parm max lens %d rc 0x%02x\n", *val, rc);
+ 
+ 	kfree(val);
+@@ -585,7 +585,7 @@ static void redrat3_get_firmware_rev(struct redrat3_dev *rr3)
+ 	rc = usb_control_msg(rr3->udev, usb_rcvctrlpipe(rr3->udev, 0),
+ 			     RR3_FW_VERSION,
+ 			     USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			     0, 0, buffer, RR3_FW_VERSION_LEN, HZ * 5);
++			     0, 0, buffer, RR3_FW_VERSION_LEN, 5000);
+ 
+ 	if (rc >= 0)
+ 		dev_info(rr3->dev, "Firmware rev: %s", buffer);
+@@ -825,14 +825,14 @@ static int redrat3_transmit_ir(struct rc_dev *rcdev, unsigned *txbuf,
+ 
+ 	pipe = usb_sndbulkpipe(rr3->udev, rr3->ep_out->bEndpointAddress);
+ 	ret = usb_bulk_msg(rr3->udev, pipe, irdata,
+-			    sendbuf_len, &ret_len, 10 * HZ);
++			    sendbuf_len, &ret_len, 10000);
+ 	dev_dbg(dev, "sent %d bytes, (ret %d)\n", ret_len, ret);
+ 
+ 	/* now tell the hardware to transmit what we sent it */
+ 	pipe = usb_rcvctrlpipe(rr3->udev, 0);
+ 	ret = usb_control_msg(rr3->udev, pipe, RR3_TX_SEND_SIGNAL,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-			      0, 0, irdata, 2, HZ * 10);
++			      0, 0, irdata, 2, 10000);
+ 
+ 	if (ret < 0)
+ 		dev_err(dev, "Error: control msg send failed, rc %d\n", ret);
+diff --git a/drivers/media/tuners/msi001.c b/drivers/media/tuners/msi001.c
+index 78e6fd600d8ef..44247049a3190 100644
+--- a/drivers/media/tuners/msi001.c
++++ b/drivers/media/tuners/msi001.c
+@@ -442,6 +442,13 @@ static int msi001_probe(struct spi_device *spi)
+ 			V4L2_CID_RF_TUNER_BANDWIDTH_AUTO, 0, 1, 1, 1);
+ 	dev->bandwidth = v4l2_ctrl_new_std(&dev->hdl, &msi001_ctrl_ops,
+ 			V4L2_CID_RF_TUNER_BANDWIDTH, 200000, 8000000, 1, 200000);
++	if (dev->hdl.error) {
++		ret = dev->hdl.error;
++		dev_err(&spi->dev, "Could not initialize controls\n");
++		/* control init failed, free handler */
++		goto err_ctrl_handler_free;
++	}
++
+ 	v4l2_ctrl_auto_cluster(2, &dev->bandwidth_auto, 0, false);
+ 	dev->lna_gain = v4l2_ctrl_new_std(&dev->hdl, &msi001_ctrl_ops,
+ 			V4L2_CID_RF_TUNER_LNA_GAIN, 0, 1, 1, 1);
+diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c
+index fefb2625f6558..75ddf7ed1faff 100644
+--- a/drivers/media/tuners/si2157.c
++++ b/drivers/media/tuners/si2157.c
+@@ -90,7 +90,7 @@ static int si2157_init(struct dvb_frontend *fe)
+ 	dev_dbg(&client->dev, "\n");
+ 
+ 	/* Try to get Xtal trim property, to verify tuner still running */
+-	memcpy(cmd.args, "\x15\x00\x04\x02", 4);
++	memcpy(cmd.args, "\x15\x00\x02\x04", 4);
+ 	cmd.wlen = 4;
+ 	cmd.rlen = 4;
+ 	ret = si2157_cmd_execute(client, &cmd);
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index 5d38171b7638c..bfeb92d93de39 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -87,7 +87,7 @@ static int flexcop_usb_readwrite_dw(struct flexcop_device *fc, u16 wRegOffsPCI,
+ 			0,
+ 			fc_usb->data,
+ 			sizeof(u32),
+-			B2C2_WAIT_FOR_OPERATION_RDW * HZ);
++			B2C2_WAIT_FOR_OPERATION_RDW);
+ 
+ 	if (ret != sizeof(u32)) {
+ 		err("error while %s dword from %d (%d).", read ? "reading" :
+@@ -155,7 +155,7 @@ static int flexcop_usb_v8_memory_req(struct flexcop_usb *fc_usb,
+ 			wIndex,
+ 			fc_usb->data,
+ 			buflen,
+-			nWaitTime * HZ);
++			nWaitTime);
+ 	if (ret != buflen)
+ 		ret = -EIO;
+ 
+@@ -248,13 +248,13 @@ static int flexcop_usb_i2c_req(struct flexcop_i2c_adapter *i2c,
+ 		/* DKT 020208 - add this to support special case of DiSEqC */
+ 	case USB_FUNC_I2C_CHECKWRITE:
+ 		pipe = B2C2_USB_CTRL_PIPE_OUT;
+-		nWaitTime = 2;
++		nWaitTime = 2000;
+ 		request_type |= USB_DIR_OUT;
+ 		break;
+ 	case USB_FUNC_I2C_READ:
+ 	case USB_FUNC_I2C_REPEATREAD:
+ 		pipe = B2C2_USB_CTRL_PIPE_IN;
+-		nWaitTime = 2;
++		nWaitTime = 2000;
+ 		request_type |= USB_DIR_IN;
+ 		break;
+ 	default:
+@@ -281,7 +281,7 @@ static int flexcop_usb_i2c_req(struct flexcop_i2c_adapter *i2c,
+ 			wIndex,
+ 			fc_usb->data,
+ 			buflen,
+-			nWaitTime * HZ);
++			nWaitTime);
+ 
+ 	if (ret != buflen)
+ 		ret = -EIO;
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.h b/drivers/media/usb/b2c2/flexcop-usb.h
+index 2f230bf72252b..c7cca1a5ee59d 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.h
++++ b/drivers/media/usb/b2c2/flexcop-usb.h
+@@ -91,13 +91,13 @@ typedef enum {
+ 	UTILITY_SRAM_TESTVERIFY     = 0x16,
+ } flexcop_usb_utility_function_t;
+ 
+-#define B2C2_WAIT_FOR_OPERATION_RW (1*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_RDW (3*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_WDW (1*HZ)
++#define B2C2_WAIT_FOR_OPERATION_RW 1000
++#define B2C2_WAIT_FOR_OPERATION_RDW 3000
++#define B2C2_WAIT_FOR_OPERATION_WDW 1000
+ 
+-#define B2C2_WAIT_FOR_OPERATION_V8READ (3*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_V8WRITE (3*HZ)
+-#define B2C2_WAIT_FOR_OPERATION_V8FLASH (3*HZ)
++#define B2C2_WAIT_FOR_OPERATION_V8READ 3000
++#define B2C2_WAIT_FOR_OPERATION_V8WRITE 3000
++#define B2C2_WAIT_FOR_OPERATION_V8FLASH 3000
+ 
+ typedef enum {
+ 	V8_MEMORY_PAGE_DVB_CI = 0x20,
+diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c
+index 76aac06f9fb8e..cba03b2864738 100644
+--- a/drivers/media/usb/cpia2/cpia2_usb.c
++++ b/drivers/media/usb/cpia2/cpia2_usb.c
+@@ -550,7 +550,7 @@ static int write_packet(struct usb_device *udev,
+ 			       0,	/* index */
+ 			       buf,	/* buffer */
+ 			       size,
+-			       HZ);
++			       1000);
+ 
+ 	kfree(buf);
+ 	return ret;
+@@ -582,7 +582,7 @@ static int read_packet(struct usb_device *udev,
+ 			       0,	/* index */
+ 			       buf,	/* buffer */
+ 			       size,
+-			       HZ);
++			       1000);
+ 
+ 	if (ret >= 0)
+ 		memcpy(registers, buf, size);
+diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
+index 70219b3e85666..7ea8f68b0f458 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_core.c
++++ b/drivers/media/usb/dvb-usb/dib0700_core.c
+@@ -618,8 +618,6 @@ int dib0700_streaming_ctrl(struct dvb_usb_adapter *adap, int onoff)
+ 		deb_info("the endpoint number (%i) is not correct, use the adapter id instead", adap->fe_adap[0].stream.props.endpoint);
+ 		if (onoff)
+ 			st->channel_state |=	1 << (adap->id);
+-		else
+-			st->channel_state |=	1 << ~(adap->id);
+ 	} else {
+ 		if (onoff)
+ 			st->channel_state |=	1 << (adap->fe_adap[0].stream.props.endpoint-2);
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index f0e686b05dc63..ca75ebdc10b37 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -2150,46 +2150,153 @@ static struct dvb_usb_device_properties s6x0_properties = {
+ 	}
+ };
+ 
+-static const struct dvb_usb_device_description d1100 = {
+-	"Prof 1100 USB ",
+-	{&dw2102_table[PROF_1100], NULL},
+-	{NULL},
+-};
++static struct dvb_usb_device_properties p1100_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.firmware = P1100_FIRMWARE,
++	.no_reconnect = 1,
+ 
+-static const struct dvb_usb_device_description d660 = {
+-	"TeVii S660 USB",
+-	{&dw2102_table[TEVII_S660], NULL},
+-	{NULL},
+-};
++	.i2c_algo = &s6x0_i2c_algo,
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_TBS_NEC,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_NEC,
++		.rc_query = prof_rc_query,
++	},
+ 
+-static const struct dvb_usb_device_description d480_1 = {
+-	"TeVii S480.1 USB",
+-	{&dw2102_table[TEVII_S480_1], NULL},
+-	{NULL},
++	.generic_bulk_ctrl_endpoint = 0x81,
++	.num_adapters = 1,
++	.download_firmware = dw2102_load_firmware,
++	.read_mac_address = s6x0_read_mac_address,
++	.adapter = {
++		{
++			.num_frontends = 1,
++			.fe = {{
++				.frontend_attach = stv0288_frontend_attach,
++				.stream = {
++					.type = USB_BULK,
++					.count = 8,
++					.endpoint = 0x82,
++					.u = {
++						.bulk = {
++							.buffersize = 4096,
++						}
++					}
++				},
++			} },
++		}
++	},
++	.num_device_descs = 1,
++	.devices = {
++		{"Prof 1100 USB ",
++			{&dw2102_table[PROF_1100], NULL},
++			{NULL},
++		},
++	}
+ };
+ 
+-static const struct dvb_usb_device_description d480_2 = {
+-	"TeVii S480.2 USB",
+-	{&dw2102_table[TEVII_S480_2], NULL},
+-	{NULL},
+-};
++static struct dvb_usb_device_properties s660_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.firmware = S660_FIRMWARE,
++	.no_reconnect = 1,
+ 
+-static const struct dvb_usb_device_description d7500 = {
+-	"Prof 7500 USB DVB-S2",
+-	{&dw2102_table[PROF_7500], NULL},
+-	{NULL},
+-};
++	.i2c_algo = &s6x0_i2c_algo,
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_TEVII_NEC,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_NEC,
++		.rc_query = dw2102_rc_query,
++	},
+ 
+-static const struct dvb_usb_device_description d421 = {
+-	"TeVii S421 PCI",
+-	{&dw2102_table[TEVII_S421], NULL},
+-	{NULL},
++	.generic_bulk_ctrl_endpoint = 0x81,
++	.num_adapters = 1,
++	.download_firmware = dw2102_load_firmware,
++	.read_mac_address = s6x0_read_mac_address,
++	.adapter = {
++		{
++			.num_frontends = 1,
++			.fe = {{
++				.frontend_attach = ds3000_frontend_attach,
++				.stream = {
++					.type = USB_BULK,
++					.count = 8,
++					.endpoint = 0x82,
++					.u = {
++						.bulk = {
++							.buffersize = 4096,
++						}
++					}
++				},
++			} },
++		}
++	},
++	.num_device_descs = 3,
++	.devices = {
++		{"TeVii S660 USB",
++			{&dw2102_table[TEVII_S660], NULL},
++			{NULL},
++		},
++		{"TeVii S480.1 USB",
++			{&dw2102_table[TEVII_S480_1], NULL},
++			{NULL},
++		},
++		{"TeVii S480.2 USB",
++			{&dw2102_table[TEVII_S480_2], NULL},
++			{NULL},
++		},
++	}
+ };
+ 
+-static const struct dvb_usb_device_description d632 = {
+-	"TeVii S632 USB",
+-	{&dw2102_table[TEVII_S632], NULL},
+-	{NULL},
++static struct dvb_usb_device_properties p7500_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.firmware = P7500_FIRMWARE,
++	.no_reconnect = 1,
++
++	.i2c_algo = &s6x0_i2c_algo,
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_TBS_NEC,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_NEC,
++		.rc_query = prof_rc_query,
++	},
++
++	.generic_bulk_ctrl_endpoint = 0x81,
++	.num_adapters = 1,
++	.download_firmware = dw2102_load_firmware,
++	.read_mac_address = s6x0_read_mac_address,
++	.adapter = {
++		{
++			.num_frontends = 1,
++			.fe = {{
++				.frontend_attach = prof_7500_frontend_attach,
++				.stream = {
++					.type = USB_BULK,
++					.count = 8,
++					.endpoint = 0x82,
++					.u = {
++						.bulk = {
++							.buffersize = 4096,
++						}
++					}
++				},
++			} },
++		}
++	},
++	.num_device_descs = 1,
++	.devices = {
++		{"Prof 7500 USB DVB-S2",
++			{&dw2102_table[PROF_7500], NULL},
++			{NULL},
++		},
++	}
+ };
+ 
+ static struct dvb_usb_device_properties su3000_properties = {
+@@ -2273,6 +2380,59 @@ static struct dvb_usb_device_properties su3000_properties = {
+ 	}
+ };
+ 
++static struct dvb_usb_device_properties s421_properties = {
++	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
++	.usb_ctrl = DEVICE_SPECIFIC,
++	.size_of_priv = sizeof(struct dw2102_state),
++	.power_ctrl = su3000_power_ctrl,
++	.num_adapters = 1,
++	.identify_state	= su3000_identify_state,
++	.i2c_algo = &su3000_i2c_algo,
++
++	.rc.core = {
++		.rc_interval = 150,
++		.rc_codes = RC_MAP_SU3000,
++		.module_name = "dw2102",
++		.allowed_protos   = RC_PROTO_BIT_RC5,
++		.rc_query = su3000_rc_query,
++	},
++
++	.read_mac_address = su3000_read_mac_address,
++
++	.generic_bulk_ctrl_endpoint = 0x01,
++
++	.adapter = {
++		{
++		.num_frontends = 1,
++		.fe = {{
++			.streaming_ctrl   = su3000_streaming_ctrl,
++			.frontend_attach  = m88rs2000_frontend_attach,
++			.stream = {
++				.type = USB_BULK,
++				.count = 8,
++				.endpoint = 0x82,
++				.u = {
++					.bulk = {
++						.buffersize = 4096,
++					}
++				}
++			}
++		} },
++		}
++	},
++	.num_device_descs = 2,
++	.devices = {
++		{ "TeVii S421 PCI",
++			{ &dw2102_table[TEVII_S421], NULL },
++			{ NULL },
++		},
++		{ "TeVii S632 USB",
++			{ &dw2102_table[TEVII_S632], NULL },
++			{ NULL },
++		},
++	}
++};
++
+ static struct dvb_usb_device_properties t220_properties = {
+ 	.caps = DVB_USB_IS_AN_I2C_ADAPTER,
+ 	.usb_ctrl = DEVICE_SPECIFIC,
+@@ -2390,101 +2550,33 @@ static struct dvb_usb_device_properties tt_s2_4600_properties = {
+ static int dw2102_probe(struct usb_interface *intf,
+ 		const struct usb_device_id *id)
+ {
+-	int retval = -ENOMEM;
+-	struct dvb_usb_device_properties *p1100;
+-	struct dvb_usb_device_properties *s660;
+-	struct dvb_usb_device_properties *p7500;
+-	struct dvb_usb_device_properties *s421;
+-
+-	p1100 = kmemdup(&s6x0_properties,
+-			sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!p1100)
+-		goto err0;
+-
+-	/* copy default structure */
+-	/* fill only different fields */
+-	p1100->firmware = P1100_FIRMWARE;
+-	p1100->devices[0] = d1100;
+-	p1100->rc.core.rc_query = prof_rc_query;
+-	p1100->rc.core.rc_codes = RC_MAP_TBS_NEC;
+-	p1100->adapter->fe[0].frontend_attach = stv0288_frontend_attach;
+-
+-	s660 = kmemdup(&s6x0_properties,
+-		       sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!s660)
+-		goto err1;
+-
+-	s660->firmware = S660_FIRMWARE;
+-	s660->num_device_descs = 3;
+-	s660->devices[0] = d660;
+-	s660->devices[1] = d480_1;
+-	s660->devices[2] = d480_2;
+-	s660->adapter->fe[0].frontend_attach = ds3000_frontend_attach;
+-
+-	p7500 = kmemdup(&s6x0_properties,
+-			sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!p7500)
+-		goto err2;
+-
+-	p7500->firmware = P7500_FIRMWARE;
+-	p7500->devices[0] = d7500;
+-	p7500->rc.core.rc_query = prof_rc_query;
+-	p7500->rc.core.rc_codes = RC_MAP_TBS_NEC;
+-	p7500->adapter->fe[0].frontend_attach = prof_7500_frontend_attach;
+-
+-
+-	s421 = kmemdup(&su3000_properties,
+-		       sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+-	if (!s421)
+-		goto err3;
+-
+-	s421->num_device_descs = 2;
+-	s421->devices[0] = d421;
+-	s421->devices[1] = d632;
+-	s421->adapter->fe[0].frontend_attach = m88rs2000_frontend_attach;
+-
+-	if (0 == dvb_usb_device_init(intf, &dw2102_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &dw2104_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &dw3101_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &s6x0_properties,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, p1100,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, s660,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, p7500,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, s421,
+-			THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &su3000_properties,
+-			 THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &t220_properties,
+-			 THIS_MODULE, NULL, adapter_nr) ||
+-	    0 == dvb_usb_device_init(intf, &tt_s2_4600_properties,
+-			 THIS_MODULE, NULL, adapter_nr)) {
+-
+-		/* clean up copied properties */
+-		kfree(s421);
+-		kfree(p7500);
+-		kfree(s660);
+-		kfree(p1100);
++	if (!(dvb_usb_device_init(intf, &dw2102_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &dw2104_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &dw3101_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &s6x0_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &p1100_properties,
++			          THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &s660_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &p7500_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &s421_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &su3000_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &t220_properties,
++				  THIS_MODULE, NULL, adapter_nr) &&
++	      dvb_usb_device_init(intf, &tt_s2_4600_properties,
++				  THIS_MODULE, NULL, adapter_nr))) {
+ 
+ 		return 0;
+ 	}
+ 
+-	retval = -ENODEV;
+-	kfree(s421);
+-err3:
+-	kfree(p7500);
+-err2:
+-	kfree(s660);
+-err1:
+-	kfree(p1100);
+-err0:
+-	return retval;
++	return -ENODEV;
+ }
+ 
+ static void dw2102_disconnect(struct usb_interface *intf)
+diff --git a/drivers/media/usb/dvb-usb/m920x.c b/drivers/media/usb/dvb-usb/m920x.c
+index 4bb5b82599a79..691e05833db19 100644
+--- a/drivers/media/usb/dvb-usb/m920x.c
++++ b/drivers/media/usb/dvb-usb/m920x.c
+@@ -274,6 +274,13 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 			/* Should check for ack here, if we knew how. */
+ 		}
+ 		if (msg[i].flags & I2C_M_RD) {
++			char *read = kmalloc(1, GFP_KERNEL);
++			if (!read) {
++				ret = -ENOMEM;
++				kfree(read);
++				goto unlock;
++			}
++
+ 			for (j = 0; j < msg[i].len; j++) {
+ 				/* Last byte of transaction?
+ 				 * Send STOP, otherwise send ACK. */
+@@ -281,9 +288,12 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 
+ 				if ((ret = m920x_read(d->udev, M9206_I2C, 0x0,
+ 						      0x20 | stop,
+-						      &msg[i].buf[j], 1)) != 0)
++						      read, 1)) != 0)
+ 					goto unlock;
++				msg[i].buf[j] = read[0];
+ 			}
++
++			kfree(read);
+ 		} else {
+ 			for (j = 0; j < msg[i].len; j++) {
+ 				/* Last byte of transaction? Then send STOP. */
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index b207f34af5c6f..b451ce3cb169a 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3630,8 +3630,10 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 
+ 	if (dev->is_audio_only) {
+ 		retval = em28xx_audio_setup(dev);
+-		if (retval)
+-			return -ENODEV;
++		if (retval) {
++			retval = -ENODEV;
++			goto err_deinit_media;
++		}
+ 		em28xx_init_extension(dev);
+ 
+ 		return 0;
+@@ -3650,7 +3652,7 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 		dev_err(&dev->intf->dev,
+ 			"%s: em28xx_i2c_register bus 0 - error [%d]!\n",
+ 		       __func__, retval);
+-		return retval;
++		goto err_deinit_media;
+ 	}
+ 
+ 	/* register i2c bus 1 */
+@@ -3666,9 +3668,7 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 				"%s: em28xx_i2c_register bus 1 - error [%d]!\n",
+ 				__func__, retval);
+ 
+-			em28xx_i2c_unregister(dev, 0);
+-
+-			return retval;
++			goto err_unreg_i2c;
+ 		}
+ 	}
+ 
+@@ -3676,6 +3676,12 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
+ 	em28xx_card_setup(dev);
+ 
+ 	return 0;
++
++err_unreg_i2c:
++	em28xx_i2c_unregister(dev, 0);
++err_deinit_media:
++	em28xx_unregister_media_device(dev);
++	return retval;
+ }
+ 
+ static int em28xx_duplicate_dev(struct em28xx *dev)
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index acc0bf7dbe2b1..c837cc528a335 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -89,7 +89,7 @@ int em28xx_read_reg_req_len(struct em28xx *dev, u8 req, u16 reg,
+ 	mutex_lock(&dev->ctrl_urb_lock);
+ 	ret = usb_control_msg(udev, pipe, req,
+ 			      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			      0x0000, reg, dev->urb_buf, len, HZ);
++			      0x0000, reg, dev->urb_buf, len, 1000);
+ 	if (ret < 0) {
+ 		em28xx_regdbg("(pipe 0x%08x): IN:  %02x %02x %02x %02x %02x %02x %02x %02x  failed with error %i\n",
+ 			      pipe,
+@@ -158,7 +158,7 @@ int em28xx_write_regs_req(struct em28xx *dev, u8 req, u16 reg, char *buf,
+ 	memcpy(dev->urb_buf, buf, len);
+ 	ret = usb_control_msg(udev, pipe, req,
+ 			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			      0x0000, reg, dev->urb_buf, len, HZ);
++			      0x0000, reg, dev->urb_buf, len, 1000);
+ 	mutex_unlock(&dev->ctrl_urb_lock);
+ 
+ 	if (ret < 0) {
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index d38dee1792e41..3915d551d59e7 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -1467,7 +1467,7 @@ static int pvr2_upload_firmware1(struct pvr2_hdw *hdw)
+ 	for (address = 0; address < fwsize; address += 0x800) {
+ 		memcpy(fw_ptr, fw_entry->data + address, 0x800);
+ 		ret += usb_control_msg(hdw->usb_dev, pipe, 0xa0, 0x40, address,
+-				       0, fw_ptr, 0x800, HZ);
++				       0, fw_ptr, 0x800, 1000);
+ 	}
+ 
+ 	trace_firmware("Upload done, releasing device's CPU");
+@@ -1605,7 +1605,7 @@ int pvr2_upload_firmware2(struct pvr2_hdw *hdw)
+ 			((u32 *)fw_ptr)[icnt] = swab32(((u32 *)fw_ptr)[icnt]);
+ 
+ 		ret |= usb_bulk_msg(hdw->usb_dev, pipe, fw_ptr,bcnt,
+-				    &actual_length, HZ);
++				    &actual_length, 1000);
+ 		ret |= (actual_length != bcnt);
+ 		if (ret) break;
+ 		fw_done += bcnt;
+@@ -3438,7 +3438,7 @@ void pvr2_hdw_cpufw_set_enabled(struct pvr2_hdw *hdw,
+ 						      0xa0,0xc0,
+ 						      address,0,
+ 						      hdw->fw_buffer+address,
+-						      0x800,HZ);
++						      0x800,1000);
+ 				if (ret < 0) break;
+ 			}
+ 
+@@ -3977,7 +3977,7 @@ void pvr2_hdw_cpureset_assert(struct pvr2_hdw *hdw,int val)
+ 	/* Write the CPUCS register on the 8051.  The lsb of the register
+ 	   is the reset bit; a 1 asserts reset while a 0 clears it. */
+ 	pipe = usb_sndctrlpipe(hdw->usb_dev, 0);
+-	ret = usb_control_msg(hdw->usb_dev,pipe,0xa0,0x40,0xe600,0,da,1,HZ);
++	ret = usb_control_msg(hdw->usb_dev,pipe,0xa0,0x40,0xe600,0,da,1,1000);
+ 	if (ret < 0) {
+ 		pvr2_trace(PVR2_TRACE_ERROR_LEGS,
+ 			   "cpureset_assert(%d) error=%d",val,ret);
+diff --git a/drivers/media/usb/s2255/s2255drv.c b/drivers/media/usb/s2255/s2255drv.c
+index 3b0e4ed75d99c..acf18e2251a52 100644
+--- a/drivers/media/usb/s2255/s2255drv.c
++++ b/drivers/media/usb/s2255/s2255drv.c
+@@ -1882,7 +1882,7 @@ static long s2255_vendor_req(struct s2255_dev *dev, unsigned char Request,
+ 				    USB_TYPE_VENDOR | USB_RECIP_DEVICE |
+ 				    USB_DIR_IN,
+ 				    Value, Index, buf,
+-				    TransferBufferLength, HZ * 5);
++				    TransferBufferLength, USB_CTRL_SET_TIMEOUT);
+ 
+ 		if (r >= 0)
+ 			memcpy(TransferBuffer, buf, TransferBufferLength);
+@@ -1891,7 +1891,7 @@ static long s2255_vendor_req(struct s2255_dev *dev, unsigned char Request,
+ 		r = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0),
+ 				    Request, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				    Value, Index, buf,
+-				    TransferBufferLength, HZ * 5);
++				    TransferBufferLength, USB_CTRL_SET_TIMEOUT);
+ 	}
+ 	kfree(buf);
+ 	return r;
+diff --git a/drivers/media/usb/stk1160/stk1160-core.c b/drivers/media/usb/stk1160/stk1160-core.c
+index b4f8bc5db1389..4e1698f788187 100644
+--- a/drivers/media/usb/stk1160/stk1160-core.c
++++ b/drivers/media/usb/stk1160/stk1160-core.c
+@@ -65,7 +65,7 @@ int stk1160_read_reg(struct stk1160 *dev, u16 reg, u8 *value)
+ 		return -ENOMEM;
+ 	ret = usb_control_msg(dev->udev, pipe, 0x00,
+ 			USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			0x00, reg, buf, sizeof(u8), HZ);
++			0x00, reg, buf, sizeof(u8), 1000);
+ 	if (ret < 0) {
+ 		stk1160_err("read failed on reg 0x%x (%d)\n",
+ 			reg, ret);
+@@ -85,7 +85,7 @@ int stk1160_write_reg(struct stk1160 *dev, u16 reg, u16 value)
+ 
+ 	ret =  usb_control_msg(dev->udev, pipe, 0x01,
+ 			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			value, reg, NULL, 0, HZ);
++			value, reg, NULL, 0, 1000);
+ 	if (ret < 0) {
+ 		stk1160_err("write failed on reg 0x%x (%d)\n",
+ 			reg, ret);
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 30bfe9069a1fb..b4f6edf968bc0 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1638,8 +1638,8 @@ static int uvc_ctrl_find_ctrl_idx(struct uvc_entity *entity,
+ 				  struct v4l2_ext_controls *ctrls,
+ 				  struct uvc_control *uvc_control)
+ {
+-	struct uvc_control_mapping *mapping;
+-	struct uvc_control *ctrl_found;
++	struct uvc_control_mapping *mapping = NULL;
++	struct uvc_control *ctrl_found = NULL;
+ 	unsigned int i;
+ 
+ 	if (!entity)
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index f4e4aff8ddf77..711556d13d03a 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -44,8 +44,10 @@ static int uvc_ioctl_ctrl_map(struct uvc_video_chain *chain,
+ 	if (v4l2_ctrl_get_name(map->id) == NULL) {
+ 		map->name = kmemdup(xmap->name, sizeof(xmap->name),
+ 				    GFP_KERNEL);
+-		if (!map->name)
+-			return -ENOMEM;
++		if (!map->name) {
++			ret = -ENOMEM;
++			goto free_map;
++		}
+ 	}
+ 	memcpy(map->entity, xmap->entity, sizeof(map->entity));
+ 	map->selector = xmap->selector;
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index 2e5366143b814..143230b3275b3 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -189,7 +189,7 @@
+ /* Maximum status buffer size in bytes of interrupt URB. */
+ #define UVC_MAX_STATUS_SIZE	16
+ 
+-#define UVC_CTRL_CONTROL_TIMEOUT	500
++#define UVC_CTRL_CONTROL_TIMEOUT	5000
+ #define UVC_CTRL_STREAMING_TIMEOUT	5000
+ 
+ /* Maximum allowed number of control mappings per device */
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 31d0109ce5a8f..69b74d0e8a90a 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -2090,6 +2090,7 @@ static int v4l_prepare_buf(const struct v4l2_ioctl_ops *ops,
+ static int v4l_g_parm(const struct v4l2_ioctl_ops *ops,
+ 				struct file *file, void *fh, void *arg)
+ {
++	struct video_device *vfd = video_devdata(file);
+ 	struct v4l2_streamparm *p = arg;
+ 	v4l2_std_id std;
+ 	int ret = check_fmt(file, p->type);
+@@ -2101,7 +2102,8 @@ static int v4l_g_parm(const struct v4l2_ioctl_ops *ops,
+ 	if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ 	    p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ 		return -EINVAL;
+-	p->parm.capture.readbuffers = 2;
++	if (vfd->device_caps & V4L2_CAP_READWRITE)
++		p->parm.capture.readbuffers = 2;
+ 	ret = ops->vidioc_g_std(file, fh, &std);
+ 	if (ret == 0)
+ 		v4l2_video_std_frame_period(std, &p->parm.capture.timeperframe);
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index 7435baad00075..ff8bcbccac637 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -243,7 +243,7 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
+ 	rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(rpc->dirmap))
+-		rpc->dirmap = NULL;
++		return PTR_ERR(rpc->dirmap);
+ 	rpc->size = resource_size(res);
+ 
+ 	rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+diff --git a/drivers/mfd/atmel-flexcom.c b/drivers/mfd/atmel-flexcom.c
+index d2f5c073fdf31..559eb4d352b68 100644
+--- a/drivers/mfd/atmel-flexcom.c
++++ b/drivers/mfd/atmel-flexcom.c
+@@ -87,8 +87,7 @@ static const struct of_device_id atmel_flexcom_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, atmel_flexcom_of_match);
+ 
+-#ifdef CONFIG_PM_SLEEP
+-static int atmel_flexcom_resume(struct device *dev)
++static int __maybe_unused atmel_flexcom_resume_noirq(struct device *dev)
+ {
+ 	struct atmel_flexcom *ddata = dev_get_drvdata(dev);
+ 	int err;
+@@ -105,16 +104,16 @@ static int atmel_flexcom_resume(struct device *dev)
+ 
+ 	return 0;
+ }
+-#endif
+ 
+-static SIMPLE_DEV_PM_OPS(atmel_flexcom_pm_ops, NULL,
+-			 atmel_flexcom_resume);
++static const struct dev_pm_ops atmel_flexcom_pm_ops = {
++	.resume_noirq = atmel_flexcom_resume_noirq,
++};
+ 
+ static struct platform_driver atmel_flexcom_driver = {
+ 	.probe	= atmel_flexcom_probe,
+ 	.driver	= {
+ 		.name		= "atmel_flexcom",
+-		.pm		= &atmel_flexcom_pm_ops,
++		.pm		= pm_ptr(&atmel_flexcom_pm_ops),
+ 		.of_match_table	= atmel_flexcom_of_match,
+ 	},
+ };
+diff --git a/drivers/mfd/intel_soc_pmic_core.c b/drivers/mfd/intel_soc_pmic_core.c
+index ddd64f9e3341e..47cb7f00dfcfc 100644
+--- a/drivers/mfd/intel_soc_pmic_core.c
++++ b/drivers/mfd/intel_soc_pmic_core.c
+@@ -14,15 +14,12 @@
+ #include <linux/module.h>
+ #include <linux/mfd/core.h>
+ #include <linux/mfd/intel_soc_pmic.h>
++#include <linux/platform_data/x86/soc.h>
+ #include <linux/pwm.h>
+ #include <linux/regmap.h>
+ 
+ #include "intel_soc_pmic_core.h"
+ 
+-/* Crystal Cove PMIC shares same ACPI ID between different platforms */
+-#define BYT_CRC_HRV		2
+-#define CHT_CRC_HRV		3
+-
+ /* PWM consumed by the Intel GFX */
+ static struct pwm_lookup crc_pwm_lookup[] = {
+ 	PWM_LOOKUP("crystal_cove_pwm", 0, "0000:00:02.0", "pwm_pmic_backlight", 0, PWM_POLARITY_NORMAL),
+@@ -34,31 +31,12 @@ static int intel_soc_pmic_i2c_probe(struct i2c_client *i2c,
+ 	struct device *dev = &i2c->dev;
+ 	struct intel_soc_pmic_config *config;
+ 	struct intel_soc_pmic *pmic;
+-	unsigned long long hrv;
+-	acpi_status status;
+ 	int ret;
+ 
+-	/*
+-	 * There are 2 different Crystal Cove PMICs a Bay Trail and Cherry
+-	 * Trail version, use _HRV to differentiate between the 2.
+-	 */
+-	status = acpi_evaluate_integer(ACPI_HANDLE(dev), "_HRV", NULL, &hrv);
+-	if (ACPI_FAILURE(status)) {
+-		dev_err(dev, "Failed to get PMIC hardware revision\n");
+-		return -ENODEV;
+-	}
+-
+-	switch (hrv) {
+-	case BYT_CRC_HRV:
++	if (soc_intel_is_byt())
+ 		config = &intel_soc_pmic_config_byt_crc;
+-		break;
+-	case CHT_CRC_HRV:
++	else
+ 		config = &intel_soc_pmic_config_cht_crc;
+-		break;
+-	default:
+-		dev_warn(dev, "Unknown hardware rev %llu, assuming BYT\n", hrv);
+-		config = &intel_soc_pmic_config_byt_crc;
+-	}
+ 
+ 	pmic = devm_kzalloc(dev, sizeof(*pmic), GFP_KERNEL);
+ 	if (!pmic)
+diff --git a/drivers/mfd/tps65910.c b/drivers/mfd/tps65910.c
+index 6e105cca27d47..67e2707af4bce 100644
+--- a/drivers/mfd/tps65910.c
++++ b/drivers/mfd/tps65910.c
+@@ -436,15 +436,6 @@ static void tps65910_power_off(void)
+ 
+ 	tps65910 = dev_get_drvdata(&tps65910_i2c_client->dev);
+ 
+-	/*
+-	 * The PWR_OFF bit needs to be set separately, before transitioning
+-	 * to the OFF state. It enables the "sequential" power-off mode on
+-	 * TPS65911, it's a NO-OP on TPS65910.
+-	 */
+-	if (regmap_set_bits(tps65910->regmap, TPS65910_DEVCTRL,
+-			    DEVCTRL_PWR_OFF_MASK) < 0)
+-		return;
+-
+ 	regmap_update_bits(tps65910->regmap, TPS65910_DEVCTRL,
+ 			   DEVCTRL_DEV_OFF_MASK | DEVCTRL_DEV_ON_MASK,
+ 			   DEVCTRL_DEV_OFF_MASK);
+@@ -504,6 +495,19 @@ static int tps65910_i2c_probe(struct i2c_client *i2c,
+ 	tps65910_sleepinit(tps65910, pmic_plat_data);
+ 
+ 	if (pmic_plat_data->pm_off && !pm_power_off) {
++		/*
++		 * The PWR_OFF bit needs to be set separately, before
++		 * transitioning to the OFF state. It enables the "sequential"
++		 * power-off mode on TPS65911, it's a NO-OP on TPS65910.
++		 */
++		ret = regmap_set_bits(tps65910->regmap, TPS65910_DEVCTRL,
++				      DEVCTRL_PWR_OFF_MASK);
++		if (ret) {
++			dev_err(&i2c->dev, "failed to set power-off mode: %d\n",
++				ret);
++			return ret;
++		}
++
+ 		tps65910_i2c_client = i2c;
+ 		pm_power_off = tps65910_power_off;
+ 	}
+diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c
+index b38978a3b3ffa..9193b812bc07e 100644
+--- a/drivers/misc/eeprom/at25.c
++++ b/drivers/misc/eeprom/at25.c
+@@ -17,8 +17,6 @@
+ #include <linux/spi/spi.h>
+ #include <linux/spi/eeprom.h>
+ #include <linux/property.h>
+-#include <linux/of.h>
+-#include <linux/of_device.h>
+ #include <linux/math.h>
+ 
+ /*
+@@ -380,13 +378,14 @@ static int at25_probe(struct spi_device *spi)
+ 	int			sr;
+ 	u8 id[FM25_ID_LEN];
+ 	u8 sernum[FM25_SN_LEN];
++	bool is_fram;
+ 	int i;
+-	const struct of_device_id *match;
+-	bool is_fram = 0;
+ 
+-	match = of_match_device(of_match_ptr(at25_of_match), &spi->dev);
+-	if (match && !strcmp(match->compatible, "cypress,fm25"))
+-		is_fram = 1;
++	err = device_property_match_string(&spi->dev, "compatible", "cypress,fm25");
++	if (err >= 0)
++		is_fram = true;
++	else
++		is_fram = false;
+ 
+ 	at25 = devm_kzalloc(&spi->dev, sizeof(struct at25_data), GFP_KERNEL);
+ 	if (!at25)
+diff --git a/drivers/misc/habanalabs/common/command_submission.c b/drivers/misc/habanalabs/common/command_submission.c
+index 4c8000fd246cd..9451e4bae05df 100644
+--- a/drivers/misc/habanalabs/common/command_submission.c
++++ b/drivers/misc/habanalabs/common/command_submission.c
+@@ -2765,8 +2765,23 @@ static int hl_cs_wait_ioctl(struct hl_fpriv *hpriv, void *data)
+ 	return 0;
+ }
+ 
++static inline unsigned long hl_usecs64_to_jiffies(const u64 usecs)
++{
++	if (usecs <= U32_MAX)
++		return usecs_to_jiffies(usecs);
++
++	/*
++	 * If the value in nanoseconds does not fit in 64 bits, use the
++	 * largest 64-bit value instead.
++	 */
++	if (usecs >= ((u64)(U64_MAX / NSEC_PER_USEC)))
++		return nsecs_to_jiffies(U64_MAX);
++
++	return nsecs_to_jiffies(usecs * NSEC_PER_USEC);
++}
++
+ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
+-				u32 timeout_us, u64 user_address,
++				u64 timeout_us, u64 user_address,
+ 				u64 target_value, u16 interrupt_offset,
+ 				enum hl_cs_wait_status *status,
+ 				u64 *timestamp)
+@@ -2778,10 +2793,7 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
+ 	long completion_rc;
+ 	int rc = 0;
+ 
+-	if (timeout_us == U32_MAX)
+-		timeout = timeout_us;
+-	else
+-		timeout = usecs_to_jiffies(timeout_us);
++	timeout = hl_usecs64_to_jiffies(timeout_us);
+ 
+ 	hl_ctx_get(hdev, ctx);
+ 
+diff --git a/drivers/misc/habanalabs/common/firmware_if.c b/drivers/misc/habanalabs/common/firmware_if.c
+index 4e68fb9d2a6bd..67a0be4573710 100644
+--- a/drivers/misc/habanalabs/common/firmware_if.c
++++ b/drivers/misc/habanalabs/common/firmware_if.c
+@@ -1703,6 +1703,9 @@ static int hl_fw_dynamic_validate_descriptor(struct hl_device *hdev,
+ 		return rc;
+ 	}
+ 
++	/* Here we can mark the descriptor as valid, as its content has been validated */
++	fw_loader->dynamic_loader.fw_desc_valid = true;
++
+ 	return 0;
+ }
+ 
+@@ -1759,7 +1762,13 @@ static int hl_fw_dynamic_read_and_validate_descriptor(struct hl_device *hdev,
+ 		return rc;
+ 	}
+ 
+-	/* extract address copy the descriptor from */
++	/*
++	 * extract address to copy the descriptor from
++	 * In addition, as the descriptor value is going to be overridden
++	 * by new data, we mark it as invalid here.
++	 * It will be marked as valid again once validated.
++	 */
++	fw_loader->dynamic_loader.fw_desc_valid = false;
+ 	src = hdev->pcie_bar[region->bar_id] + region->offset_in_bar +
+ 							response->ram_offset;
+ 	memcpy_fromio(fw_desc, src, sizeof(struct lkd_fw_comms_desc));
+@@ -2247,6 +2256,9 @@ static int hl_fw_dynamic_init_cpu(struct hl_device *hdev,
+ 	dev_info(hdev->dev,
+ 		"Loading firmware to device, may take some time...\n");
+ 
++	/* initialize FW descriptor as invalid */
++	fw_loader->dynamic_loader.fw_desc_valid = false;
++
+ 	/*
+ 	 * In this stage, "cpu_dyn_regs" contains only LKD's hard coded values!
+ 	 * It will be updated from FW after hl_fw_dynamic_request_descriptor().
+@@ -2333,7 +2345,8 @@ static int hl_fw_dynamic_init_cpu(struct hl_device *hdev,
+ 	return 0;
+ 
+ protocol_err:
+-	fw_read_errors(hdev, le32_to_cpu(dyn_regs->cpu_boot_err0),
++	if (fw_loader->dynamic_loader.fw_desc_valid)
++		fw_read_errors(hdev, le32_to_cpu(dyn_regs->cpu_boot_err0),
+ 				le32_to_cpu(dyn_regs->cpu_boot_err1),
+ 				le32_to_cpu(dyn_regs->cpu_boot_dev_sts0),
+ 				le32_to_cpu(dyn_regs->cpu_boot_dev_sts1));
+diff --git a/drivers/misc/habanalabs/common/habanalabs.h b/drivers/misc/habanalabs/common/habanalabs.h
+index a2002cbf794b5..ba0965667b182 100644
+--- a/drivers/misc/habanalabs/common/habanalabs.h
++++ b/drivers/misc/habanalabs/common/habanalabs.h
+@@ -1010,6 +1010,7 @@ struct fw_response {
+  * @image_region: region to copy the FW image to
+  * @fw_image_size: size of FW image to load
+  * @wait_for_bl_timeout: timeout for waiting for boot loader to respond
++ * @fw_desc_valid: true if FW descriptor has been validated and hence the data can be used
+  */
+ struct dynamic_fw_load_mgr {
+ 	struct fw_response response;
+@@ -1017,6 +1018,7 @@ struct dynamic_fw_load_mgr {
+ 	struct pci_mem_region *image_region;
+ 	size_t fw_image_size;
+ 	u32 wait_for_bl_timeout;
++	bool fw_desc_valid;
+ };
+ 
+ /**
+diff --git a/drivers/misc/lattice-ecp3-config.c b/drivers/misc/lattice-ecp3-config.c
+index 0f54730c7ed56..98828030b5a4d 100644
+--- a/drivers/misc/lattice-ecp3-config.c
++++ b/drivers/misc/lattice-ecp3-config.c
+@@ -76,12 +76,12 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 
+ 	if (fw == NULL) {
+ 		dev_err(&spi->dev, "Cannot load firmware, aborting\n");
+-		return;
++		goto out;
+ 	}
+ 
+ 	if (fw->size == 0) {
+ 		dev_err(&spi->dev, "Error: Firmware size is 0!\n");
+-		return;
++		goto out;
+ 	}
+ 
+ 	/* Fill dummy data (24 stuffing bits for commands) */
+@@ -103,7 +103,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 		dev_err(&spi->dev,
+ 			"Error: No supported FPGA detected (JEDEC_ID=%08x)!\n",
+ 			jedec_id);
+-		return;
++		goto out;
+ 	}
+ 
+ 	dev_info(&spi->dev, "FPGA %s detected\n", ecp3_dev[i].name);
+@@ -116,7 +116,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 	buffer = kzalloc(fw->size + 8, GFP_KERNEL);
+ 	if (!buffer) {
+ 		dev_err(&spi->dev, "Error: Can't allocate memory!\n");
+-		return;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -155,7 +155,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 			"Error: Timeout waiting for FPGA to clear (status=%08x)!\n",
+ 			status);
+ 		kfree(buffer);
+-		return;
++		goto out;
+ 	}
+ 
+ 	dev_info(&spi->dev, "Configuring the FPGA...\n");
+@@ -181,7 +181,7 @@ static void firmware_load(const struct firmware *fw, void *context)
+ 	release_firmware(fw);
+ 
+ 	kfree(buffer);
+-
++out:
+ 	complete(&data->fw_loaded);
+ }
+ 
+diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
+index aa12097668d33..e2984ce51fe4d 100644
+--- a/drivers/misc/lkdtm/Makefile
++++ b/drivers/misc/lkdtm/Makefile
+@@ -20,7 +20,7 @@ CFLAGS_REMOVE_rodata.o		+= $(CC_FLAGS_LTO)
+ 
+ OBJCOPYFLAGS :=
+ OBJCOPYFLAGS_rodata_objcopy.o	:= \
+-			--rename-section .noinstr.text=.rodata,alloc,readonly,load
++			--rename-section .noinstr.text=.rodata,alloc,readonly,load,contents
+ targets += rodata.o rodata_objcopy.o
+ $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE
+ 	$(call if_changed,objcopy)
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index be41843df75bc..cebcca6d6d3ef 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -672,10 +672,14 @@ static void mei_hbm_cl_dma_map_res(struct mei_device *dev,
+ 	if (!cl)
+ 		return;
+ 
+-	dev_dbg(dev->dev, "cl dma map result = %d\n", res->status);
+-	cl->status = res->status;
+-	if (!cl->status)
++	if (res->status) {
++		dev_err(dev->dev, "cl dma map failed %d\n", res->status);
++		cl->status = -EFAULT;
++	} else {
++		dev_dbg(dev->dev, "cl dma map succeeded\n");
+ 		cl->dma_mapped = 1;
++		cl->status = 0;
++	}
+ 	wake_up(&cl->wait);
+ }
+ 
+@@ -698,10 +702,14 @@ static void mei_hbm_cl_dma_unmap_res(struct mei_device *dev,
+ 	if (!cl)
+ 		return;
+ 
+-	dev_dbg(dev->dev, "cl dma unmap result = %d\n", res->status);
+-	cl->status = res->status;
+-	if (!cl->status)
++	if (res->status) {
++		dev_err(dev->dev, "cl dma unmap failed %d\n", res->status);
++		cl->status = -EFAULT;
++	} else {
++		dev_dbg(dev->dev, "cl dma unmap succeeded\n");
+ 		cl->dma_mapped = 0;
++		cl->status = 0;
++	}
+ 	wake_up(&cl->wait);
+ }
+ 
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 68edf7a615be5..5447c47157aa5 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -708,6 +708,8 @@ try_again:
+ 	if (host->ops->init_card)
+ 		host->ops->init_card(host, card);
+ 
++	card->ocr = ocr_card;
++
+ 	/*
+ 	 * If the host and card support UHS-I mode request the card
+ 	 * to switch to 1.8V signaling level.  No 1.8v signalling if
+@@ -820,7 +822,7 @@ try_again:
+ 			goto mismatch;
+ 		}
+ 	}
+-	card->ocr = ocr_card;
++
+ 	mmc_fixup_device(card, sdio_fixup_methods);
+ 
+ 	if (card->type == MMC_TYPE_SD_COMBO) {
+diff --git a/drivers/mmc/host/meson-mx-sdhc-mmc.c b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+index 8fdd0bbbfa21f..28aa78aa08f3f 100644
+--- a/drivers/mmc/host/meson-mx-sdhc-mmc.c
++++ b/drivers/mmc/host/meson-mx-sdhc-mmc.c
+@@ -854,6 +854,11 @@ static int meson_mx_sdhc_probe(struct platform_device *pdev)
+ 		goto err_disable_pclk;
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto err_disable_pclk;
++	}
++
+ 	ret = devm_request_threaded_irq(dev, irq, meson_mx_sdhc_irq,
+ 					meson_mx_sdhc_irq_thread, IRQF_ONESHOT,
+ 					NULL, host);
+diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
+index d4a48916bfb67..3a19a05ef55a7 100644
+--- a/drivers/mmc/host/meson-mx-sdio.c
++++ b/drivers/mmc/host/meson-mx-sdio.c
+@@ -662,6 +662,11 @@ static int meson_mx_mmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto error_free_mmc;
++	}
++
+ 	ret = devm_request_threaded_irq(host->controller_dev, irq,
+ 					meson_mx_mmc_irq,
+ 					meson_mx_mmc_irq_thread, IRQF_ONESHOT,
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 632775217d35c..d5a9c269d4926 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -636,12 +636,11 @@ static void msdc_reset_hw(struct msdc_host *host)
+ 	u32 val;
+ 
+ 	sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_RST);
+-	while (readl(host->base + MSDC_CFG) & MSDC_CFG_RST)
+-		cpu_relax();
++	readl_poll_timeout(host->base + MSDC_CFG, val, !(val & MSDC_CFG_RST), 0, 0);
+ 
+ 	sdr_set_bits(host->base + MSDC_FIFOCS, MSDC_FIFOCS_CLR);
+-	while (readl(host->base + MSDC_FIFOCS) & MSDC_FIFOCS_CLR)
+-		cpu_relax();
++	readl_poll_timeout(host->base + MSDC_FIFOCS, val,
++			   !(val & MSDC_FIFOCS_CLR), 0, 0);
+ 
+ 	val = readl(host->base + MSDC_INT);
+ 	writel(val, host->base + MSDC_INT);
+@@ -814,8 +813,9 @@ static void msdc_gate_clock(struct msdc_host *host)
+ 	clk_disable_unprepare(host->h_clk);
+ }
+ 
+-static void msdc_ungate_clock(struct msdc_host *host)
++static int msdc_ungate_clock(struct msdc_host *host)
+ {
++	u32 val;
+ 	int ret;
+ 
+ 	clk_prepare_enable(host->h_clk);
+@@ -825,11 +825,11 @@ static void msdc_ungate_clock(struct msdc_host *host)
+ 	ret = clk_bulk_prepare_enable(MSDC_NR_CLOCKS, host->bulk_clks);
+ 	if (ret) {
+ 		dev_err(host->dev, "Cannot enable pclk/axi/ahb clock gates\n");
+-		return;
++		return ret;
+ 	}
+ 
+-	while (!(readl(host->base + MSDC_CFG) & MSDC_CFG_CKSTB))
+-		cpu_relax();
++	return readl_poll_timeout(host->base + MSDC_CFG, val,
++				  (val & MSDC_CFG_CKSTB), 1, 20000);
+ }
+ 
+ static void msdc_set_mclk(struct msdc_host *host, unsigned char timing, u32 hz)
+@@ -840,6 +840,7 @@ static void msdc_set_mclk(struct msdc_host *host, unsigned char timing, u32 hz)
+ 	u32 div;
+ 	u32 sclk;
+ 	u32 tune_reg = host->dev_comp->pad_tune_reg;
++	u32 val;
+ 
+ 	if (!hz) {
+ 		dev_dbg(host->dev, "set mclk to 0\n");
+@@ -920,8 +921,7 @@ static void msdc_set_mclk(struct msdc_host *host, unsigned char timing, u32 hz)
+ 	else
+ 		clk_prepare_enable(clk_get_parent(host->src_clk));
+ 
+-	while (!(readl(host->base + MSDC_CFG) & MSDC_CFG_CKSTB))
+-		cpu_relax();
++	readl_poll_timeout(host->base + MSDC_CFG, val, (val & MSDC_CFG_CKSTB), 0, 0);
+ 	sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_CKPDN);
+ 	mmc->actual_clock = sclk;
+ 	host->mclk = hz;
+@@ -1231,13 +1231,13 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
+ static inline bool msdc_cmd_is_ready(struct msdc_host *host,
+ 		struct mmc_request *mrq, struct mmc_command *cmd)
+ {
+-	/* The max busy time we can endure is 20ms */
+-	unsigned long tmo = jiffies + msecs_to_jiffies(20);
++	u32 val;
++	int ret;
+ 
+-	while ((readl(host->base + SDC_STS) & SDC_STS_CMDBUSY) &&
+-			time_before(jiffies, tmo))
+-		cpu_relax();
+-	if (readl(host->base + SDC_STS) & SDC_STS_CMDBUSY) {
++	/* The max busy time we can endure is 20ms */
++	ret = readl_poll_timeout_atomic(host->base + SDC_STS, val,
++					!(val & SDC_STS_CMDBUSY), 1, 20000);
++	if (ret) {
+ 		dev_err(host->dev, "CMD bus busy detected\n");
+ 		host->error |= REQ_CMD_BUSY;
+ 		msdc_cmd_done(host, MSDC_INT_CMDTMO, mrq, cmd);
+@@ -1245,12 +1245,10 @@ static inline bool msdc_cmd_is_ready(struct msdc_host *host,
+ 	}
+ 
+ 	if (mmc_resp_type(cmd) == MMC_RSP_R1B || cmd->data) {
+-		tmo = jiffies + msecs_to_jiffies(20);
+ 		/* R1B or with data, should check SDCBUSY */
+-		while ((readl(host->base + SDC_STS) & SDC_STS_SDCBUSY) &&
+-				time_before(jiffies, tmo))
+-			cpu_relax();
+-		if (readl(host->base + SDC_STS) & SDC_STS_SDCBUSY) {
++		ret = readl_poll_timeout_atomic(host->base + SDC_STS, val,
++						!(val & SDC_STS_SDCBUSY), 1, 20000);
++		if (ret) {
+ 			dev_err(host->dev, "Controller busy detected\n");
+ 			host->error |= REQ_CMD_BUSY;
+ 			msdc_cmd_done(host, MSDC_INT_CMDTMO, mrq, cmd);
+@@ -1376,6 +1374,8 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
+ 	    (MSDC_INT_XFER_COMPL | MSDC_INT_DATCRCERR | MSDC_INT_DATTMO
+ 	     | MSDC_INT_DMA_BDCSERR | MSDC_INT_DMA_GPDCSERR
+ 	     | MSDC_INT_DMA_PROTECT);
++	u32 val;
++	int ret;
+ 
+ 	spin_lock_irqsave(&host->lock, flags);
+ 	done = !host->data;
+@@ -1392,8 +1392,14 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
+ 				readl(host->base + MSDC_DMA_CFG));
+ 		sdr_set_field(host->base + MSDC_DMA_CTRL, MSDC_DMA_CTRL_STOP,
+ 				1);
+-		while (readl(host->base + MSDC_DMA_CFG) & MSDC_DMA_CFG_STS)
+-			cpu_relax();
++
++		ret = readl_poll_timeout_atomic(host->base + MSDC_DMA_CFG, val,
++						!(val & MSDC_DMA_CFG_STS), 1, 20000);
++		if (ret) {
++			dev_dbg(host->dev, "DMA stop timed out\n");
++			return false;
++		}
++
+ 		sdr_clr_bits(host->base + MSDC_INTEN, data_ints_mask);
+ 		dev_dbg(host->dev, "DMA stop\n");
+ 
+@@ -2674,7 +2680,11 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 	spin_lock_init(&host->lock);
+ 
+ 	platform_set_drvdata(pdev, mmc);
+-	msdc_ungate_clock(host);
++	ret = msdc_ungate_clock(host);
++	if (ret) {
++		dev_err(&pdev->dev, "Cannot ungate clocks!\n");
++		goto release_mem;
++	}
+ 	msdc_init_hw(host);
+ 
+ 	if (mmc->caps2 & MMC_CAP2_CQE) {
+@@ -2833,8 +2843,12 @@ static int __maybe_unused msdc_runtime_resume(struct device *dev)
+ {
+ 	struct mmc_host *mmc = dev_get_drvdata(dev);
+ 	struct msdc_host *host = mmc_priv(mmc);
++	int ret;
++
++	ret = msdc_ungate_clock(host);
++	if (ret)
++		return ret;
+ 
+-	msdc_ungate_clock(host);
+ 	msdc_restore_reg(host);
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index 9dafcbf969d96..fca30add563e9 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -1499,41 +1499,6 @@ static void omap_hsmmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	omap_hsmmc_set_bus_mode(host);
+ }
+ 
+-static void omap_hsmmc_init_card(struct mmc_host *mmc, struct mmc_card *card)
+-{
+-	struct omap_hsmmc_host *host = mmc_priv(mmc);
+-
+-	if (card->type == MMC_TYPE_SDIO || card->type == MMC_TYPE_SD_COMBO) {
+-		struct device_node *np = mmc_dev(mmc)->of_node;
+-
+-		/*
+-		 * REVISIT: should be moved to sdio core and made more
+-		 * general e.g. by expanding the DT bindings of child nodes
+-		 * to provide a mechanism to provide this information:
+-		 * Documentation/devicetree/bindings/mmc/mmc-card.yaml
+-		 */
+-
+-		np = of_get_compatible_child(np, "ti,wl1251");
+-		if (np) {
+-			/*
+-			 * We have TI wl1251 attached to MMC3. Pass this
+-			 * information to the SDIO core because it can't be
+-			 * probed by normal methods.
+-			 */
+-
+-			dev_info(host->dev, "found wl1251\n");
+-			card->quirks |= MMC_QUIRK_NONSTD_SDIO;
+-			card->cccr.wide_bus = 1;
+-			card->cis.vendor = 0x104c;
+-			card->cis.device = 0x9066;
+-			card->cis.blksize = 512;
+-			card->cis.max_dtr = 24000000;
+-			card->ocr = 0x80;
+-			of_node_put(np);
+-		}
+-	}
+-}
+-
+ static void omap_hsmmc_enable_sdio_irq(struct mmc_host *mmc, int enable)
+ {
+ 	struct omap_hsmmc_host *host = mmc_priv(mmc);
+@@ -1660,7 +1625,6 @@ static struct mmc_host_ops omap_hsmmc_ops = {
+ 	.set_ios = omap_hsmmc_set_ios,
+ 	.get_cd = mmc_gpio_get_cd,
+ 	.get_ro = mmc_gpio_get_ro,
+-	.init_card = omap_hsmmc_init_card,
+ 	.enable_sdio_irq = omap_hsmmc_enable_sdio_irq,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
+index 4fd99c1e82ba3..ad50f16658fe2 100644
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -12,6 +12,7 @@
+ #include <linux/pci.h>
+ #include <linux/mmc/mmc.h>
+ #include <linux/delay.h>
++#include <linux/of.h>
+ #include "sdhci.h"
+ #include "sdhci-pci.h"
+ #include "cqhci.h"
+@@ -116,6 +117,8 @@
+ #define PCI_GLI_9755_PECONF   0x44
+ #define   PCI_GLI_9755_LFCLK    GENMASK(14, 12)
+ #define   PCI_GLI_9755_DMACLK   BIT(29)
++#define   PCI_GLI_9755_INVERT_CD  BIT(30)
++#define   PCI_GLI_9755_INVERT_WP  BIT(31)
+ 
+ #define PCI_GLI_9755_CFG2          0x48
+ #define   PCI_GLI_9755_CFG2_L1DLY    GENMASK(28, 24)
+@@ -570,6 +573,14 @@ static void gl9755_hw_setting(struct sdhci_pci_slot *slot)
+ 	gl9755_wt_on(pdev);
+ 
+ 	pci_read_config_dword(pdev, PCI_GLI_9755_PECONF, &value);
++	/*
++	 * Apple ARM64 platforms using these chips may have
++	 * inverted CD/WP detection.
++	 */
++	if (of_property_read_bool(pdev->dev.of_node, "cd-inverted"))
++		value |= PCI_GLI_9755_INVERT_CD;
++	if (of_property_read_bool(pdev->dev.of_node, "wp-inverted"))
++		value |= PCI_GLI_9755_INVERT_WP;
+ 	value &= ~PCI_GLI_9755_LFCLK;
+ 	value &= ~PCI_GLI_9755_DMACLK;
+ 	pci_write_config_dword(pdev, PCI_GLI_9755_PECONF, value);
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index e2affa52ef469..a5850d83908be 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -960,14 +960,8 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	case MMC_POWER_OFF:
+ 		tmio_mmc_power_off(host);
+ 		/* For R-Car Gen2+, we need to reset SDHI specific SCC */
+-		if (host->pdata->flags & TMIO_MMC_MIN_RCAR2) {
+-			host->reset(host);
+-
+-			if (host->native_hotplug)
+-				tmio_mmc_enable_mmc_irqs(host,
+-						TMIO_STAT_CARD_REMOVE |
+-						TMIO_STAT_CARD_INSERT);
+-		}
++		if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
++			tmio_mmc_reset(host);
+ 
+ 		host->set_clock(host, 0);
+ 		break;
+@@ -1175,6 +1169,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host)
+ 	if (mmc_can_gpio_cd(mmc))
+ 		_host->ops.get_cd = mmc_gpio_get_cd;
+ 
++	/* must be set before tmio_mmc_reset() */
+ 	_host->native_hotplug = !(mmc_can_gpio_cd(mmc) ||
+ 				  mmc->caps & MMC_CAP_NEEDS_POLL ||
+ 				  !mmc_card_is_removable(mmc));
+@@ -1295,10 +1290,6 @@ int tmio_mmc_host_runtime_resume(struct device *dev)
+ 	if (host->clk_cache)
+ 		host->set_clock(host, host->clk_cache);
+ 
+-	if (host->native_hotplug)
+-		tmio_mmc_enable_mmc_irqs(host,
+-				TMIO_STAT_CARD_REMOVE | TMIO_STAT_CARD_INSERT);
+-
+ 	tmio_mmc_enable_dma(host, true);
+ 
+ 	return 0;
+diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
+index ecb050ba95cdf..dc164c18f8429 100644
+--- a/drivers/mtd/hyperbus/rpc-if.c
++++ b/drivers/mtd/hyperbus/rpc-if.c
+@@ -124,7 +124,9 @@ static int rpcif_hb_probe(struct platform_device *pdev)
+ 	if (!hyperbus)
+ 		return -ENOMEM;
+ 
+-	rpcif_sw_init(&hyperbus->rpc, pdev->dev.parent);
++	error = rpcif_sw_init(&hyperbus->rpc, pdev->dev.parent);
++	if (error)
++		return error;
+ 
+ 	platform_set_drvdata(pdev, hyperbus);
+ 
+@@ -150,9 +152,9 @@ static int rpcif_hb_remove(struct platform_device *pdev)
+ {
+ 	struct rpcif_hyperbus *hyperbus = platform_get_drvdata(pdev);
+ 	int error = hyperbus_unregister_device(&hyperbus->hbdev);
+-	struct rpcif *rpc = dev_get_drvdata(pdev->dev.parent);
+ 
+-	rpcif_disable_rpm(rpc);
++	rpcif_disable_rpm(&hyperbus->rpc);
++
+ 	return error;
+ }
+ 
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index 9186268d361b4..fc0bed14bfb10 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -825,8 +825,7 @@ static struct nvmem_device *mtd_otp_nvmem_register(struct mtd_info *mtd,
+ 
+ 	/* OTP nvmem will be registered on the physical device */
+ 	config.dev = mtd->dev.parent;
+-	/* just reuse the compatible as name */
+-	config.name = compatible;
++	config.name = kasprintf(GFP_KERNEL, "%s-%s", dev_name(&mtd->dev), compatible);
+ 	config.id = NVMEM_DEVID_NONE;
+ 	config.owner = THIS_MODULE;
+ 	config.type = NVMEM_TYPE_OTP;
+@@ -842,6 +841,7 @@ static struct nvmem_device *mtd_otp_nvmem_register(struct mtd_info *mtd,
+ 		nvmem = NULL;
+ 
+ 	of_node_put(np);
++	kfree(config.name);
+ 
+ 	return nvmem;
+ }
+diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
+index 04af12b66110c..357661b62c94d 100644
+--- a/drivers/mtd/mtdpart.c
++++ b/drivers/mtd/mtdpart.c
+@@ -312,7 +312,7 @@ static int __mtd_del_partition(struct mtd_info *mtd)
+ 	if (err)
+ 		return err;
+ 
+-	list_del(&child->part.node);
++	list_del(&mtd->part.node);
+ 	free_partition(mtd);
+ 
+ 	return 0;
+diff --git a/drivers/mtd/nand/raw/davinci_nand.c b/drivers/mtd/nand/raw/davinci_nand.c
+index 118da9944e3bc..45fec8c192aba 100644
+--- a/drivers/mtd/nand/raw/davinci_nand.c
++++ b/drivers/mtd/nand/raw/davinci_nand.c
+@@ -371,77 +371,6 @@ correct:
+ 	return corrected;
+ }
+ 
+-/**
+- * nand_read_page_hwecc_oob_first - hw ecc, read oob first
+- * @chip: nand chip info structure
+- * @buf: buffer to store read data
+- * @oob_required: caller requires OOB data read to chip->oob_poi
+- * @page: page number to read
+- *
+- * Hardware ECC for large page chips, require OOB to be read first. For this
+- * ECC mode, the write_page method is re-used from ECC_HW. These methods
+- * read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME support with
+- * multiple ECC steps, follows the "infix ECC" scheme and reads/writes ECC from
+- * the data area, by overwriting the NAND manufacturer bad block markings.
+- */
+-static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
+-						  uint8_t *buf,
+-						  int oob_required, int page)
+-{
+-	struct mtd_info *mtd = nand_to_mtd(chip);
+-	int i, eccsize = chip->ecc.size, ret;
+-	int eccbytes = chip->ecc.bytes;
+-	int eccsteps = chip->ecc.steps;
+-	uint8_t *p = buf;
+-	uint8_t *ecc_code = chip->ecc.code_buf;
+-	uint8_t *ecc_calc = chip->ecc.calc_buf;
+-	unsigned int max_bitflips = 0;
+-
+-	/* Read the OOB area first */
+-	ret = nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
+-	if (ret)
+-		return ret;
+-
+-	ret = nand_read_page_op(chip, page, 0, NULL, 0);
+-	if (ret)
+-		return ret;
+-
+-	ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
+-					 chip->ecc.total);
+-	if (ret)
+-		return ret;
+-
+-	for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
+-		int stat;
+-
+-		chip->ecc.hwctl(chip, NAND_ECC_READ);
+-
+-		ret = nand_read_data_op(chip, p, eccsize, false, false);
+-		if (ret)
+-			return ret;
+-
+-		chip->ecc.calculate(chip, p, &ecc_calc[i]);
+-
+-		stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL);
+-		if (stat == -EBADMSG &&
+-		    (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) {
+-			/* check for empty pages with bitflips */
+-			stat = nand_check_erased_ecc_chunk(p, eccsize,
+-							   &ecc_code[i],
+-							   eccbytes, NULL, 0,
+-							   chip->ecc.strength);
+-		}
+-
+-		if (stat < 0) {
+-			mtd->ecc_stats.failed++;
+-		} else {
+-			mtd->ecc_stats.corrected += stat;
+-			max_bitflips = max_t(unsigned int, max_bitflips, stat);
+-		}
+-	}
+-	return max_bitflips;
+-}
+-
+ /*----------------------------------------------------------------------*/
+ 
+ /* An ECC layout for using 4-bit ECC with small-page flash, storing
+@@ -651,7 +580,7 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
+ 			} else if (chunks == 4 || chunks == 8) {
+ 				mtd_set_ooblayout(mtd,
+ 						  nand_get_large_page_ooblayout());
+-				chip->ecc.read_page = nand_davinci_read_page_hwecc_oob_first;
++				chip->ecc.read_page = nand_read_page_hwecc_oob_first;
+ 			} else {
+ 				return -EIO;
+ 			}
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 10cc71829dcb6..65bcd1c548d2e 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -713,14 +713,32 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 			      (use_half_period ? BM_GPMI_CTRL1_HALF_PERIOD : 0);
+ }
+ 
+-static void gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
++static int gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
+ {
+ 	struct gpmi_nfc_hardware_timing *hw = &this->hw;
+ 	struct resources *r = &this->resources;
+ 	void __iomem *gpmi_regs = r->gpmi_regs;
+ 	unsigned int dll_wait_time_us;
++	int ret;
++
++	/* Clock dividers do NOT guarantee a clean clock signal on their
++	 * outputs while the divide factor is changed on i.MX6Q/UL/SX. On
++	 * i.MX7/8, all clock dividers do provide this guarantee.
++	 */
++	if (GPMI_IS_MX6Q(this) || GPMI_IS_MX6SX(this))
++		clk_disable_unprepare(r->clock[0]);
+ 
+-	clk_set_rate(r->clock[0], hw->clk_rate);
++	ret = clk_set_rate(r->clock[0], hw->clk_rate);
++	if (ret) {
++		dev_err(this->dev, "cannot set clock rate to %lu Hz: %d\n", hw->clk_rate, ret);
++		return ret;
++	}
++
++	if (GPMI_IS_MX6Q(this) || GPMI_IS_MX6SX(this)) {
++		ret = clk_prepare_enable(r->clock[0]);
++		if (ret)
++			return ret;
++	}
+ 
+ 	writel(hw->timing0, gpmi_regs + HW_GPMI_TIMING0);
+ 	writel(hw->timing1, gpmi_regs + HW_GPMI_TIMING1);
+@@ -739,6 +757,8 @@ static void gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
+ 
+ 	/* Wait for the DLL to settle. */
+ 	udelay(dll_wait_time_us);
++
++	return 0;
+ }
+ 
+ static int gpmi_setup_interface(struct nand_chip *chip, int chipnr,
+@@ -1032,15 +1052,6 @@ static int gpmi_get_clks(struct gpmi_nand_data *this)
+ 		r->clock[i] = clk;
+ 	}
+ 
+-	if (GPMI_IS_MX6(this))
+-		/*
+-		 * Set the default value for the gpmi clock.
+-		 *
+-		 * If you want to use the ONFI nand which is in the
+-		 * Synchronous Mode, you should change the clock as you need.
+-		 */
+-		clk_set_rate(r->clock[0], 22000000);
+-
+ 	return 0;
+ 
+ err_clock:
+@@ -2278,7 +2289,9 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 	 */
+ 	if (this->hw.must_apply_timings) {
+ 		this->hw.must_apply_timings = false;
+-		gpmi_nfc_apply_timings(this);
++		ret = gpmi_nfc_apply_timings(this);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	dev_dbg(this->dev, "%s: %d instructions\n", __func__, op->ninstrs);
+diff --git a/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c b/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
+index 0e9d426fe4f2b..b18861bdcdc88 100644
+--- a/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
++++ b/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
+@@ -32,6 +32,7 @@ struct jz_soc_info {
+ 	unsigned long addr_offset;
+ 	unsigned long cmd_offset;
+ 	const struct mtd_ooblayout_ops *oob_layout;
++	bool oob_first;
+ };
+ 
+ struct ingenic_nand_cs {
+@@ -240,6 +241,9 @@ static int ingenic_nand_attach_chip(struct nand_chip *chip)
+ 	if (chip->bbt_options & NAND_BBT_USE_FLASH)
+ 		chip->bbt_options |= NAND_BBT_NO_OOB;
+ 
++	if (nfc->soc_info->oob_first)
++		chip->ecc.read_page = nand_read_page_hwecc_oob_first;
++
+ 	/* For legacy reasons we use a different layout on the qi,lb60 board. */
+ 	if (of_machine_is_compatible("qi,lb60"))
+ 		mtd_set_ooblayout(mtd, &qi_lb60_ooblayout_ops);
+@@ -534,6 +538,7 @@ static const struct jz_soc_info jz4740_soc_info = {
+ 	.data_offset = 0x00000000,
+ 	.cmd_offset = 0x00008000,
+ 	.addr_offset = 0x00010000,
++	.oob_first = true,
+ };
+ 
+ static const struct jz_soc_info jz4725b_soc_info = {
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index a130320de4128..d5a2110eb38ed 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -3160,6 +3160,73 @@ static int nand_read_page_hwecc(struct nand_chip *chip, uint8_t *buf,
+ 	return max_bitflips;
+ }
+ 
++/**
++ * nand_read_page_hwecc_oob_first - Hardware ECC page read with ECC
++ *                                  data read from OOB area
++ * @chip: nand chip info structure
++ * @buf: buffer to store read data
++ * @oob_required: caller requires OOB data read to chip->oob_poi
++ * @page: page number to read
++ *
++ * Hardware ECC for large page chips, which requires the ECC data to be
++ * extracted from the OOB before the actual data is read.
++ */
++int nand_read_page_hwecc_oob_first(struct nand_chip *chip, uint8_t *buf,
++				   int oob_required, int page)
++{
++	struct mtd_info *mtd = nand_to_mtd(chip);
++	int i, eccsize = chip->ecc.size, ret;
++	int eccbytes = chip->ecc.bytes;
++	int eccsteps = chip->ecc.steps;
++	uint8_t *p = buf;
++	uint8_t *ecc_code = chip->ecc.code_buf;
++	unsigned int max_bitflips = 0;
++
++	/* Read the OOB area first */
++	ret = nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
++	if (ret)
++		return ret;
++
++	/* Move read cursor to start of page */
++	ret = nand_change_read_column_op(chip, 0, NULL, 0, false);
++	if (ret)
++		return ret;
++
++	ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
++					 chip->ecc.total);
++	if (ret)
++		return ret;
++
++	for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
++		int stat;
++
++		chip->ecc.hwctl(chip, NAND_ECC_READ);
++
++		ret = nand_read_data_op(chip, p, eccsize, false, false);
++		if (ret)
++			return ret;
++
++		stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL);
++		if (stat == -EBADMSG &&
++		    (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) {
++			/* check for empty pages with bitflips */
++			stat = nand_check_erased_ecc_chunk(p, eccsize,
++							   &ecc_code[i],
++							   eccbytes, NULL, 0,
++							   chip->ecc.strength);
++		}
++
++		if (stat < 0) {
++			mtd->ecc_stats.failed++;
++		} else {
++			mtd->ecc_stats.corrected += stat;
++			max_bitflips = max_t(unsigned int, max_bitflips, stat);
++		}
++	}
++	return max_bitflips;
++}
++EXPORT_SYMBOL_GPL(nand_read_page_hwecc_oob_first);
++
+ /**
+  * nand_read_page_syndrome - [REPLACEABLE] hardware ECC syndrome based page read
+  * @chip: nand chip info structure
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index cc08bd707378f..fa66dfed002d2 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -1952,6 +1952,7 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
+ 	struct spi_nor *nor = mtd_to_spi_nor(mtd);
+ 	size_t page_offset, page_remain, i;
+ 	ssize_t ret;
++	u32 page_size = nor->params->page_size;
+ 
+ 	dev_dbg(nor->dev, "to 0x%08x, len %zd\n", (u32)to, len);
+ 
+@@ -1968,16 +1969,15 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
+ 		 * calculated with an AND operation. On the other cases we
+ 		 * need to do a modulus operation (more expensive).
+ 		 */
+-		if (is_power_of_2(nor->page_size)) {
+-			page_offset = addr & (nor->page_size - 1);
++		if (is_power_of_2(page_size)) {
++			page_offset = addr & (page_size - 1);
+ 		} else {
+ 			uint64_t aux = addr;
+ 
+-			page_offset = do_div(aux, nor->page_size);
++			page_offset = do_div(aux, page_size);
+ 		}
+ 		/* the size of data remaining on the first page */
+-		page_remain = min_t(size_t,
+-				    nor->page_size - page_offset, len - i);
++		page_remain = min_t(size_t, page_size - page_offset, len - i);
+ 
+ 		addr = spi_nor_convert_addr(nor, addr);
+ 
+@@ -3094,7 +3094,7 @@ int spi_nor_scan(struct spi_nor *nor, const char *name,
+ 	 * We need the bounce buffer early to read/write registers when going
+ 	 * through the spi-mem layer (buffers have to be DMA-able).
+ 	 * For spi-mem drivers, we'll reallocate a new buffer if
+-	 * nor->page_size turns out to be greater than PAGE_SIZE (which
++	 * nor->params->page_size turns out to be greater than PAGE_SIZE (which
+ 	 * shouldn't happen before long since NOR pages are usually less
+ 	 * than 1KB) after spi_nor_scan() returns.
+ 	 */
+@@ -3171,8 +3171,7 @@ int spi_nor_scan(struct spi_nor *nor, const char *name,
+ 		mtd->flags |= MTD_NO_ERASE;
+ 
+ 	mtd->dev.parent = dev;
+-	nor->page_size = nor->params->page_size;
+-	mtd->writebufsize = nor->page_size;
++	mtd->writebufsize = nor->params->page_size;
+ 
+ 	if (of_property_read_bool(np, "broken-flash-reset"))
+ 		nor->flags |= SNOR_F_BROKEN_RESET;
+@@ -3341,8 +3340,8 @@ static int spi_nor_probe(struct spi_mem *spimem)
+ 	 * and add this logic so that if anyone ever adds support for such
+ 	 * a NOR we don't end up with buffer overflows.
+ 	 */
+-	if (nor->page_size > PAGE_SIZE) {
+-		nor->bouncebuf_size = nor->page_size;
++	if (nor->params->page_size > PAGE_SIZE) {
++		nor->bouncebuf_size = nor->params->page_size;
+ 		devm_kfree(nor->dev, nor->bouncebuf);
+ 		nor->bouncebuf = devm_kmalloc(nor->dev,
+ 					      nor->bouncebuf_size,
+diff --git a/drivers/mtd/spi-nor/xilinx.c b/drivers/mtd/spi-nor/xilinx.c
+index 1138bdbf41998..0ac1d681e84d5 100644
+--- a/drivers/mtd/spi-nor/xilinx.c
++++ b/drivers/mtd/spi-nor/xilinx.c
+@@ -28,11 +28,12 @@ static const struct flash_info xilinx_parts[] = {
+  */
+ static u32 s3an_convert_addr(struct spi_nor *nor, u32 addr)
+ {
++	u32 page_size = nor->params->page_size;
+ 	u32 offset, page;
+ 
+-	offset = addr % nor->page_size;
+-	page = addr / nor->page_size;
+-	page <<= (nor->page_size > 512) ? 10 : 9;
++	offset = addr % page_size;
++	page = addr / page_size;
++	page <<= (page_size > 512) ? 10 : 9;
+ 
+ 	return page | offset;
+ }
+@@ -40,6 +41,7 @@ static u32 s3an_convert_addr(struct spi_nor *nor, u32 addr)
+ static int xilinx_nor_setup(struct spi_nor *nor,
+ 			    const struct spi_nor_hwcaps *hwcaps)
+ {
++	u32 page_size;
+ 	int ret;
+ 
+ 	ret = spi_nor_xread_sr(nor, nor->bouncebuf);
+@@ -64,10 +66,12 @@ static int xilinx_nor_setup(struct spi_nor *nor,
+ 	 */
+ 	if (nor->bouncebuf[0] & XSR_PAGESIZE) {
+ 		/* Flash in Power of 2 mode */
+-		nor->page_size = (nor->page_size == 264) ? 256 : 512;
+-		nor->mtd.writebufsize = nor->page_size;
+-		nor->mtd.size = 8 * nor->page_size * nor->info->n_sectors;
+-		nor->mtd.erasesize = 8 * nor->page_size;
++		page_size = (nor->params->page_size == 264) ? 256 : 512;
++		nor->params->page_size = page_size;
++		nor->mtd.writebufsize = page_size;
++		nor->params->size = 8 * page_size * nor->info->n_sectors;
++		nor->mtd.size = nor->params->size;
++		nor->mtd.erasesize = 8 * page_size;
+ 	} else {
+ 		/* Flash in Default addressing mode */
+ 		nor->params->convert_addr = s3an_convert_addr;
+diff --git a/drivers/net/amt.c b/drivers/net/amt.c
+index b732ee9a50ef9..d3a9dda6c7286 100644
+--- a/drivers/net/amt.c
++++ b/drivers/net/amt.c
+@@ -1106,7 +1106,7 @@ static bool amt_send_membership_query(struct amt_dev *amt,
+ 	rt = ip_route_output_key(amt->net, &fl4);
+ 	if (IS_ERR(rt)) {
+ 		netdev_dbg(amt->dev, "no route to %pI4\n", &tunnel->ip4);
+-		return -1;
++		return true;
+ 	}
+ 
+ 	amtmq		= skb_push(skb, sizeof(*amtmq));
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index ff8da720a33a7..1db5c7a172a71 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1096,9 +1096,6 @@ static bool bond_should_notify_peers(struct bonding *bond)
+ 	slave = rcu_dereference(bond->curr_active_slave);
+ 	rcu_read_unlock();
+ 
+-	netdev_dbg(bond->dev, "bond_should_notify_peers: slave %s\n",
+-		   slave ? slave->dev->name : "NULL");
+-
+ 	if (!slave || !bond->send_peer_notif ||
+ 	    bond->send_peer_notif %
+ 	    max(1, bond->params.peer_notif_delay) != 0 ||
+@@ -1106,6 +1103,9 @@ static bool bond_should_notify_peers(struct bonding *bond)
+ 	    test_bit(__LINK_STATE_LINKWATCH_PENDING, &slave->dev->state))
+ 		return false;
+ 
++	netdev_dbg(bond->dev, "bond_should_notify_peers: slave %s\n",
++		   slave ? slave->dev->name : "NULL");
++
+ 	return true;
+ }
+ 
+@@ -3872,8 +3872,8 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
+ 	    skb->l4_hash)
+ 		return skb->hash;
+ 
+-	return __bond_xmit_hash(bond, skb, skb->head, skb->protocol,
+-				skb->mac_header, skb->network_header,
++	return __bond_xmit_hash(bond, skb, skb->data, skb->protocol,
++				skb_mac_offset(skb), skb_network_offset(skb),
+ 				skb_headlen(skb));
+ }
+ 
+@@ -4843,25 +4843,39 @@ static netdev_tx_t bond_xmit_broadcast(struct sk_buff *skb,
+ 	struct bonding *bond = netdev_priv(bond_dev);
+ 	struct slave *slave = NULL;
+ 	struct list_head *iter;
++	bool xmit_suc = false;
++	bool skb_used = false;
+ 
+ 	bond_for_each_slave_rcu(bond, slave, iter) {
+-		if (bond_is_last_slave(bond, slave))
+-			break;
+-		if (bond_slave_is_up(slave) && slave->link == BOND_LINK_UP) {
+-			struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
++		struct sk_buff *skb2;
++
++		if (!(bond_slave_is_up(slave) && slave->link == BOND_LINK_UP))
++			continue;
+ 
++		if (bond_is_last_slave(bond, slave)) {
++			skb2 = skb;
++			skb_used = true;
++		} else {
++			skb2 = skb_clone(skb, GFP_ATOMIC);
+ 			if (!skb2) {
+ 				net_err_ratelimited("%s: Error: %s: skb_clone() failed\n",
+ 						    bond_dev->name, __func__);
+ 				continue;
+ 			}
+-			bond_dev_queue_xmit(bond, skb2, slave->dev);
+ 		}
++
++		if (bond_dev_queue_xmit(bond, skb2, slave->dev) == NETDEV_TX_OK)
++			xmit_suc = true;
+ 	}
+-	if (slave && bond_slave_is_up(slave) && slave->link == BOND_LINK_UP)
+-		return bond_dev_queue_xmit(bond, skb, slave->dev);
+ 
+-	return bond_tx_drop(bond_dev, skb);
++	if (!skb_used)
++		dev_kfree_skb_any(skb);
++
++	if (xmit_suc)
++		return NETDEV_TX_OK;
++
++	atomic_long_inc(&bond_dev->tx_dropped);
++	return NET_XMIT_DROP;
+ }
+ 
+ /*------------------------- Device initialization ---------------------------*/
+diff --git a/drivers/net/can/at91_can.c b/drivers/net/can/at91_can.c
+index 3aea32c9b108f..3cd872cf9be66 100644
+--- a/drivers/net/can/at91_can.c
++++ b/drivers/net/can/at91_can.c
+@@ -553,8 +553,6 @@ static void at91_rx_overflow_err(struct net_device *dev)
+ 	cf->can_id |= CAN_ERR_CRTL;
+ 	cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_receive_skb(skb);
+ }
+ 
+@@ -779,8 +777,6 @@ static int at91_poll_err(struct net_device *dev, int quota, u32 reg_sr)
+ 
+ 	at91_poll_err_frame(dev, cf, reg_sr);
+ 
+-	dev->stats.rx_packets++;
+-	dev->stats.rx_bytes += cf->len;
+ 	netif_receive_skb(skb);
+ 
+ 	return 1;
+@@ -1037,8 +1033,6 @@ static void at91_irq_err(struct net_device *dev)
+ 
+ 	at91_irq_err_state(dev, cf, new_state);
+ 
+-	dev->stats.rx_packets++;
+-	dev->stats.rx_bytes += cf->len;
+ 	netif_rx(skb);
+ 
+ 	priv->can.state = new_state;
+diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
+index 52671d1ea17d5..670754a129846 100644
+--- a/drivers/net/can/c_can/c_can_main.c
++++ b/drivers/net/can/c_can/c_can_main.c
+@@ -920,7 +920,6 @@ static int c_can_handle_state_change(struct net_device *dev,
+ 	unsigned int reg_err_counter;
+ 	unsigned int rx_err_passive;
+ 	struct c_can_priv *priv = netdev_priv(dev);
+-	struct net_device_stats *stats = &dev->stats;
+ 	struct can_frame *cf;
+ 	struct sk_buff *skb;
+ 	struct can_berr_counter bec;
+@@ -996,8 +995,6 @@ static int c_can_handle_state_change(struct net_device *dev,
+ 		break;
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_receive_skb(skb);
+ 
+ 	return 1;
+@@ -1064,8 +1061,6 @@ static int c_can_handle_bus_err(struct net_device *dev,
+ 		break;
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_receive_skb(skb);
+ 	return 1;
+ }
+diff --git a/drivers/net/can/cc770/cc770.c b/drivers/net/can/cc770/cc770.c
+index f8a130f594e2e..a5fd8ccedec21 100644
+--- a/drivers/net/can/cc770/cc770.c
++++ b/drivers/net/can/cc770/cc770.c
+@@ -499,7 +499,6 @@ static void cc770_rx(struct net_device *dev, unsigned int mo, u8 ctrl1)
+ static int cc770_err(struct net_device *dev, u8 status)
+ {
+ 	struct cc770_priv *priv = netdev_priv(dev);
+-	struct net_device_stats *stats = &dev->stats;
+ 	struct can_frame *cf;
+ 	struct sk_buff *skb;
+ 	u8 lec;
+@@ -571,8 +570,6 @@ static int cc770_err(struct net_device *dev, u8 status)
+ 	}
+ 
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
+index e3d840b81357d..4845ae6456e19 100644
+--- a/drivers/net/can/dev/dev.c
++++ b/drivers/net/can/dev/dev.c
+@@ -136,7 +136,6 @@ EXPORT_SYMBOL_GPL(can_change_state);
+ static void can_restart(struct net_device *dev)
+ {
+ 	struct can_priv *priv = netdev_priv(dev);
+-	struct net_device_stats *stats = &dev->stats;
+ 	struct sk_buff *skb;
+ 	struct can_frame *cf;
+ 	int err;
+@@ -155,9 +154,6 @@ static void can_restart(struct net_device *dev)
+ 
+ 	cf->can_id |= CAN_ERR_RESTARTED;
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+-
+ 	netif_rx_ni(skb);
+ 
+ restart:
+diff --git a/drivers/net/can/dev/rx-offload.c b/drivers/net/can/dev/rx-offload.c
+index 37b0cc65237b7..7dbf46b9ca5dd 100644
+--- a/drivers/net/can/dev/rx-offload.c
++++ b/drivers/net/can/dev/rx-offload.c
+@@ -54,8 +54,10 @@ static int can_rx_offload_napi_poll(struct napi_struct *napi, int quota)
+ 		struct can_frame *cf = (struct can_frame *)skb->data;
+ 
+ 		work_done++;
+-		stats->rx_packets++;
+-		stats->rx_bytes += cf->len;
++		if (!(cf->can_id & CAN_ERR_FLAG)) {
++			stats->rx_packets++;
++			stats->rx_bytes += cf->len;
++		}
+ 		netif_receive_skb(skb);
+ 	}
+ 
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 12b60ad95b02a..57cebc4156616 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -173,9 +173,9 @@
+ 
+ /* FLEXCAN interrupt flag register (IFLAG) bits */
+ /* Errata ERR005829 step7: Reserve first valid MB */
+-#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO		8
+-#define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP	0
+-#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST	(FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1)
++#define FLEXCAN_TX_MB_RESERVED_RX_FIFO	8
++#define FLEXCAN_TX_MB_RESERVED_RX_MAILBOX	0
++#define FLEXCAN_RX_MB_RX_MAILBOX_FIRST	(FLEXCAN_TX_MB_RESERVED_RX_MAILBOX + 1)
+ #define FLEXCAN_IFLAG_MB(x)		BIT_ULL(x)
+ #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW	BIT(7)
+ #define FLEXCAN_IFLAG_RX_FIFO_WARN	BIT(6)
+@@ -234,8 +234,8 @@
+ #define FLEXCAN_QUIRK_ENABLE_EACEN_RRS  BIT(3)
+ /* Disable non-correctable errors interrupt and freeze mode */
+ #define FLEXCAN_QUIRK_DISABLE_MECR BIT(4)
+-/* Use timestamp based offloading */
+-#define FLEXCAN_QUIRK_USE_OFF_TIMESTAMP BIT(5)
++/* Use mailboxes (not FIFO) for RX path */
++#define FLEXCAN_QUIRK_USE_RX_MAILBOX BIT(5)
+ /* No interrupt for error passive */
+ #define FLEXCAN_QUIRK_BROKEN_PERR_STATE BIT(6)
+ /* default to BE register access */
+@@ -252,6 +252,12 @@
+ #define FLEXCAN_QUIRK_NR_IRQ_3 BIT(12)
+ /* Setup 16 mailboxes */
+ #define FLEXCAN_QUIRK_NR_MB_16 BIT(13)
++/* Device supports RX via mailboxes */
++#define FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX BIT(14)
++/* Device supports RTR reception via mailboxes */
++#define FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR BIT(15)
++/* Device supports RX via FIFO */
++#define FLEXCAN_QUIRK_SUPPPORT_RX_FIFO BIT(16)
+ 
+ /* Structure of the message buffer */
+ struct flexcan_mb {
+@@ -369,7 +375,7 @@ struct flexcan_priv {
+ 
+ 	struct clk *clk_ipg;
+ 	struct clk *clk_per;
+-	const struct flexcan_devtype_data *devtype_data;
++	struct flexcan_devtype_data devtype_data;
+ 	struct regulator *reg_xceiver;
+ 	struct flexcan_stop_mode stm;
+ 
+@@ -386,59 +392,78 @@ struct flexcan_priv {
+ 
+ static const struct flexcan_devtype_data fsl_mcf5441x_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_BROKEN_PERR_STATE |
+-		FLEXCAN_QUIRK_NR_IRQ_3 | FLEXCAN_QUIRK_NR_MB_16,
++		FLEXCAN_QUIRK_NR_IRQ_3 | FLEXCAN_QUIRK_NR_MB_16 |
++		FLEXCAN_QUIRK_SUPPPORT_RX_FIFO,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_p1010_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_BROKEN_WERR_STATE |
+ 		FLEXCAN_QUIRK_BROKEN_PERR_STATE |
+-		FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN,
++		FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_FIFO,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_imx25_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_BROKEN_WERR_STATE |
+-		FLEXCAN_QUIRK_BROKEN_PERR_STATE,
++		FLEXCAN_QUIRK_BROKEN_PERR_STATE |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_FIFO,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_imx28_devtype_data = {
+-	.quirks = FLEXCAN_QUIRK_BROKEN_PERR_STATE,
++	.quirks = FLEXCAN_QUIRK_BROKEN_PERR_STATE |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_FIFO,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_imx6q_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
+-		FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
+-		FLEXCAN_QUIRK_SETUP_STOP_MODE_GPR,
++		FLEXCAN_QUIRK_USE_RX_MAILBOX | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
++		FLEXCAN_QUIRK_SETUP_STOP_MODE_GPR |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_imx8qm_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
+-		FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
+-		FLEXCAN_QUIRK_SUPPORT_FD | FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW,
++		FLEXCAN_QUIRK_USE_RX_MAILBOX | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
++		FLEXCAN_QUIRK_SUPPORT_FD | FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR,
+ };
+ 
+ static struct flexcan_devtype_data fsl_imx8mp_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
+-		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP |
++		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_USE_RX_MAILBOX |
+ 		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_SETUP_STOP_MODE_GPR |
+-		FLEXCAN_QUIRK_SUPPORT_FD | FLEXCAN_QUIRK_SUPPORT_ECC,
++		FLEXCAN_QUIRK_SUPPORT_FD | FLEXCAN_QUIRK_SUPPORT_ECC |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_vf610_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
+-		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP |
+-		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_SUPPORT_ECC,
++		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_USE_RX_MAILBOX |
++		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_SUPPORT_ECC |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_ls1021a_r2_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
+-		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP,
++		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_USE_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR,
+ };
+ 
+ static const struct flexcan_devtype_data fsl_lx2160a_r1_devtype_data = {
+ 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
+ 		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
+-		FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_SUPPORT_FD |
+-		FLEXCAN_QUIRK_SUPPORT_ECC,
++		FLEXCAN_QUIRK_USE_RX_MAILBOX | FLEXCAN_QUIRK_SUPPORT_FD |
++		FLEXCAN_QUIRK_SUPPORT_ECC |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR,
+ };
+ 
+ static const struct can_bittiming_const flexcan_bittiming_const = {
+@@ -600,7 +625,7 @@ static inline int flexcan_enter_stop_mode(struct flexcan_priv *priv)
+ 	priv->write(reg_mcr, &regs->mcr);
+ 
+ 	/* enable stop request */
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW) {
+ 		ret = flexcan_stop_mode_enable_scfw(priv, true);
+ 		if (ret < 0)
+ 			return ret;
+@@ -619,7 +644,7 @@ static inline int flexcan_exit_stop_mode(struct flexcan_priv *priv)
+ 	int ret;
+ 
+ 	/* remove stop request */
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW) {
+ 		ret = flexcan_stop_mode_enable_scfw(priv, false);
+ 		if (ret < 0)
+ 			return ret;
+@@ -1022,7 +1047,7 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
+ 
+ 	mb = flexcan_get_mb(priv, n);
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX) {
+ 		u32 code;
+ 
+ 		do {
+@@ -1087,7 +1112,7 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
+ 	}
+ 
+  mark_as_read:
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX)
+ 		flexcan_write64(priv, FLEXCAN_IFLAG_MB(n), &regs->iflag1);
+ 	else
+ 		priv->write(FLEXCAN_IFLAG_RX_FIFO_AVAILABLE, &regs->iflag1);
+@@ -1113,7 +1138,7 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
+ 	enum can_state last_state = priv->can.state;
+ 
+ 	/* reception interrupt */
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX) {
+ 		u64 reg_iflag_rx;
+ 		int ret;
+ 
+@@ -1173,7 +1198,7 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
+ 
+ 	/* state change interrupt or broken error state quirk fix is enabled */
+ 	if ((reg_esr & FLEXCAN_ESR_ERR_STATE) ||
+-	    (priv->devtype_data->quirks & (FLEXCAN_QUIRK_BROKEN_WERR_STATE |
++	    (priv->devtype_data.quirks & (FLEXCAN_QUIRK_BROKEN_WERR_STATE |
+ 					   FLEXCAN_QUIRK_BROKEN_PERR_STATE)))
+ 		flexcan_irq_state(dev, reg_esr);
+ 
+@@ -1195,11 +1220,11 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
+ 	 * (1): enabled if FLEXCAN_QUIRK_BROKEN_WERR_STATE is enabled
+ 	 */
+ 	if ((last_state != priv->can.state) &&
+-	    (priv->devtype_data->quirks & FLEXCAN_QUIRK_BROKEN_PERR_STATE) &&
++	    (priv->devtype_data.quirks & FLEXCAN_QUIRK_BROKEN_PERR_STATE) &&
+ 	    !(priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)) {
+ 		switch (priv->can.state) {
+ 		case CAN_STATE_ERROR_ACTIVE:
+-			if (priv->devtype_data->quirks &
++			if (priv->devtype_data.quirks &
+ 			    FLEXCAN_QUIRK_BROKEN_WERR_STATE)
+ 				flexcan_error_irq_enable(priv);
+ 			else
+@@ -1423,26 +1448,26 @@ static int flexcan_rx_offload_setup(struct net_device *dev)
+ 	else
+ 		priv->mb_size = sizeof(struct flexcan_mb) + CAN_MAX_DLEN;
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_NR_MB_16)
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_MB_16)
+ 		priv->mb_count = 16;
+ 	else
+ 		priv->mb_count = (sizeof(priv->regs->mb[0]) / priv->mb_size) +
+ 				 (sizeof(priv->regs->mb[1]) / priv->mb_size);
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX)
+ 		priv->tx_mb_reserved =
+-			flexcan_get_mb(priv, FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP);
++			flexcan_get_mb(priv, FLEXCAN_TX_MB_RESERVED_RX_MAILBOX);
+ 	else
+ 		priv->tx_mb_reserved =
+-			flexcan_get_mb(priv, FLEXCAN_TX_MB_RESERVED_OFF_FIFO);
++			flexcan_get_mb(priv, FLEXCAN_TX_MB_RESERVED_RX_FIFO);
+ 	priv->tx_mb_idx = priv->mb_count - 1;
+ 	priv->tx_mb = flexcan_get_mb(priv, priv->tx_mb_idx);
+ 	priv->tx_mask = FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
+ 
+ 	priv->offload.mailbox_read = flexcan_mailbox_read;
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
+-		priv->offload.mb_first = FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST;
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX) {
++		priv->offload.mb_first = FLEXCAN_RX_MB_RX_MAILBOX_FIRST;
+ 		priv->offload.mb_last = priv->mb_count - 2;
+ 
+ 		priv->rx_mask = GENMASK_ULL(priv->offload.mb_last,
+@@ -1506,7 +1531,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ 	if (err)
+ 		goto out_chip_disable;
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_SUPPORT_ECC)
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SUPPORT_ECC)
+ 		flexcan_ram_init(dev);
+ 
+ 	flexcan_set_bittiming(dev);
+@@ -1532,10 +1557,10 @@ static int flexcan_chip_start(struct net_device *dev)
+ 	/* MCR
+ 	 *
+ 	 * FIFO:
+-	 * - disable for timestamp mode
++	 * - disable for mailbox mode
+ 	 * - enable for FIFO mode
+ 	 */
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX)
+ 		reg_mcr &= ~FLEXCAN_MCR_FEN;
+ 	else
+ 		reg_mcr |= FLEXCAN_MCR_FEN;
+@@ -1586,7 +1611,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ 	 * on most Flexcan cores, too. Otherwise we don't get
+ 	 * any error warning or passive interrupts.
+ 	 */
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_BROKEN_WERR_STATE ||
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_BROKEN_WERR_STATE ||
+ 	    priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+ 		reg_ctrl |= FLEXCAN_CTRL_ERR_MSK;
+ 	else
+@@ -1599,7 +1624,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ 	netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl);
+ 	priv->write(reg_ctrl, &regs->ctrl);
+ 
+-	if ((priv->devtype_data->quirks & FLEXCAN_QUIRK_ENABLE_EACEN_RRS)) {
++	if ((priv->devtype_data.quirks & FLEXCAN_QUIRK_ENABLE_EACEN_RRS)) {
+ 		reg_ctrl2 = priv->read(&regs->ctrl2);
+ 		reg_ctrl2 |= FLEXCAN_CTRL2_EACEN | FLEXCAN_CTRL2_RRS;
+ 		priv->write(reg_ctrl2, &regs->ctrl2);
+@@ -1631,7 +1656,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ 		priv->write(reg_fdctrl, &regs->fdctrl);
+ 	}
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX) {
+ 		for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) {
+ 			mb = flexcan_get_mb(priv, i);
+ 			priv->write(FLEXCAN_MB_CODE_RX_EMPTY,
+@@ -1639,7 +1664,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ 		}
+ 	} else {
+ 		/* clear and invalidate unused mailboxes first */
+-		for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i < priv->mb_count; i++) {
++		for (i = FLEXCAN_TX_MB_RESERVED_RX_FIFO; i < priv->mb_count; i++) {
+ 			mb = flexcan_get_mb(priv, i);
+ 			priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
+ 				    &mb->can_ctrl);
+@@ -1659,7 +1684,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ 	priv->write(0x0, &regs->rx14mask);
+ 	priv->write(0x0, &regs->rx15mask);
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_DISABLE_RXFG)
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_DISABLE_RXFG)
+ 		priv->write(0x0, &regs->rxfgmask);
+ 
+ 	/* clear acceptance filters */
+@@ -1673,7 +1698,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ 	 * This also works around errata e5295 which generates false
+ 	 * positive memory errors and put the device in freeze mode.
+ 	 */
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_DISABLE_MECR) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_DISABLE_MECR) {
+ 		/* Follow the protocol as described in "Detection
+ 		 * and Correction of Memory Errors" to write to
+ 		 * MECR register (step 1 - 5)
+@@ -1799,7 +1824,7 @@ static int flexcan_open(struct net_device *dev)
+ 	if (err)
+ 		goto out_can_rx_offload_disable;
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
+ 		err = request_irq(priv->irq_boff,
+ 				  flexcan_irq, IRQF_SHARED, dev->name, dev);
+ 		if (err)
+@@ -1845,7 +1870,7 @@ static int flexcan_close(struct net_device *dev)
+ 	netif_stop_queue(dev);
+ 	flexcan_chip_interrupts_disable(dev);
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
+ 		free_irq(priv->irq_err, dev);
+ 		free_irq(priv->irq_boff, dev);
+ 	}
+@@ -2051,9 +2076,9 @@ static int flexcan_setup_stop_mode(struct platform_device *pdev)
+ 
+ 	priv = netdev_priv(dev);
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW)
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_SCFW)
+ 		ret = flexcan_setup_stop_mode_scfw(pdev);
+-	else if (priv->devtype_data->quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_GPR)
++	else if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE_GPR)
+ 		ret = flexcan_setup_stop_mode_gpr(pdev);
+ 	else
+ 		/* return 0 directly if doesn't support stop mode feature */
+@@ -2164,8 +2189,25 @@ static int flexcan_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	if ((devtype_data->quirks & FLEXCAN_QUIRK_SUPPORT_FD) &&
+-	    !(devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)) {
+-		dev_err(&pdev->dev, "CAN-FD mode doesn't work with FIFO mode!\n");
++	    !((devtype_data->quirks &
++	       (FLEXCAN_QUIRK_USE_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++		FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR |
++		FLEXCAN_QUIRK_SUPPPORT_RX_FIFO)) ==
++	      (FLEXCAN_QUIRK_USE_RX_MAILBOX |
++	       FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++	       FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR))) {
++		dev_err(&pdev->dev, "CAN-FD mode doesn't work in RX-FIFO mode!\n");
++		return -EINVAL;
++	}
++
++	if ((devtype_data->quirks &
++	     (FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
++	      FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR)) ==
++	    FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX_RTR) {
++		dev_err(&pdev->dev,
++			"Quirks (0x%08x) inconsistent: RX_MAILBOX_RX supported but not RX_MAILBOX\n",
++			devtype_data->quirks);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -2181,9 +2223,10 @@ static int flexcan_probe(struct platform_device *pdev)
+ 	dev->flags |= IFF_ECHO;
+ 
+ 	priv = netdev_priv(dev);
++	priv->devtype_data = *devtype_data;
+ 
+ 	if (of_property_read_bool(pdev->dev.of_node, "big-endian") ||
+-	    devtype_data->quirks & FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN) {
++	    priv->devtype_data.quirks & FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN) {
+ 		priv->read = flexcan_read_be;
+ 		priv->write = flexcan_write_be;
+ 	} else {
+@@ -2202,10 +2245,9 @@ static int flexcan_probe(struct platform_device *pdev)
+ 	priv->clk_ipg = clk_ipg;
+ 	priv->clk_per = clk_per;
+ 	priv->clk_src = clk_src;
+-	priv->devtype_data = devtype_data;
+ 	priv->reg_xceiver = reg_xceiver;
+ 
+-	if (devtype_data->quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
+ 		priv->irq_boff = platform_get_irq(pdev, 1);
+ 		if (priv->irq_boff <= 0) {
+ 			err = -ENODEV;
+@@ -2218,7 +2260,7 @@ static int flexcan_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_SUPPORT_FD) {
++	if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SUPPORT_FD) {
+ 		priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD |
+ 			CAN_CTRLMODE_FD_NON_ISO;
+ 		priv->can.bittiming_const = &flexcan_fd_bittiming_const;
+diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
+index 5bb957a26bc69..e8318e984bf2f 100644
+--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
++++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
+@@ -430,8 +430,6 @@ static int ifi_canfd_handle_lec_err(struct net_device *ndev)
+ 	       priv->base + IFI_CANFD_INTERRUPT);
+ 	writel(IFI_CANFD_ERROR_CTR_ER_ENABLE, priv->base + IFI_CANFD_ERROR_CTR);
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_receive_skb(skb);
+ 
+ 	return 1;
+@@ -456,7 +454,6 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
+ 					 enum can_state new_state)
+ {
+ 	struct ifi_canfd_priv *priv = netdev_priv(ndev);
+-	struct net_device_stats *stats = &ndev->stats;
+ 	struct can_frame *cf;
+ 	struct sk_buff *skb;
+ 	struct can_berr_counter bec;
+@@ -522,8 +519,6 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
+ 		break;
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_receive_skb(skb);
+ 
+ 	return 1;
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index eb74cdf26b88c..ab672c92ab078 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -1310,9 +1310,6 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
+ 	cf->data[6] = bec.txerr;
+ 	cf->data[7] = bec.rxerr;
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+-
+ 	netif_rx(skb);
+ 	return 0;
+ }
+@@ -1510,8 +1507,6 @@ static void kvaser_pciefd_handle_nack_packet(struct kvaser_pciefd_can *can,
+ 
+ 	if (skb) {
+ 		cf->can_id |= CAN_ERR_BUSERROR;
+-		stats->rx_bytes += cf->len;
+-		stats->rx_packets++;
+ 		netif_rx(skb);
+ 	} else {
+ 		stats->rx_dropped++;
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index c2a8421e7845c..30d94cb43113d 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -647,9 +647,6 @@ static int m_can_handle_lec_err(struct net_device *dev,
+ 		break;
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+-
+ 	if (cdev->is_peripheral)
+ 		timestamp = m_can_get_timestamp(cdev);
+ 
+@@ -706,7 +703,6 @@ static int m_can_handle_state_change(struct net_device *dev,
+ 				     enum can_state new_state)
+ {
+ 	struct m_can_classdev *cdev = netdev_priv(dev);
+-	struct net_device_stats *stats = &dev->stats;
+ 	struct can_frame *cf;
+ 	struct sk_buff *skb;
+ 	struct can_berr_counter bec;
+@@ -771,9 +767,6 @@ static int m_can_handle_state_change(struct net_device *dev,
+ 		break;
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+-
+ 	if (cdev->is_peripheral)
+ 		timestamp = m_can_get_timestamp(cdev);
+ 
+diff --git a/drivers/net/can/mscan/mscan.c b/drivers/net/can/mscan/mscan.c
+index fa32e418eb296..9e1cce0260da6 100644
+--- a/drivers/net/can/mscan/mscan.c
++++ b/drivers/net/can/mscan/mscan.c
+@@ -401,13 +401,14 @@ static int mscan_rx_poll(struct napi_struct *napi, int quota)
+ 			continue;
+ 		}
+ 
+-		if (canrflg & MSCAN_RXF)
++		if (canrflg & MSCAN_RXF) {
+ 			mscan_get_rx_frame(dev, frame);
+-		else if (canrflg & MSCAN_ERR_IF)
++			stats->rx_packets++;
++			stats->rx_bytes += frame->len;
++		} else if (canrflg & MSCAN_ERR_IF) {
+ 			mscan_get_err_frame(dev, frame, canrflg);
++		}
+ 
+-		stats->rx_packets++;
+-		stats->rx_bytes += frame->len;
+ 		work_done++;
+ 		netif_receive_skb(skb);
+ 	}
+diff --git a/drivers/net/can/pch_can.c b/drivers/net/can/pch_can.c
+index 964c8a09226a9..6b45840db1f9b 100644
+--- a/drivers/net/can/pch_can.c
++++ b/drivers/net/can/pch_can.c
+@@ -561,9 +561,6 @@ static void pch_can_error(struct net_device *ndev, u32 status)
+ 
+ 	priv->can.state = state;
+ 	netif_receive_skb(skb);
+-
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ }
+ 
+ static irqreturn_t pch_can_interrupt(int irq, void *dev_id)
+diff --git a/drivers/net/can/peak_canfd/peak_canfd.c b/drivers/net/can/peak_canfd/peak_canfd.c
+index d08718e98e110..d5b8bc6d29804 100644
+--- a/drivers/net/can/peak_canfd/peak_canfd.c
++++ b/drivers/net/can/peak_canfd/peak_canfd.c
+@@ -409,8 +409,6 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
+ 		return -ENOMEM;
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	pucan_netif_rx(skb, msg->ts_low, msg->ts_high);
+ 
+ 	return 0;
+@@ -438,8 +436,6 @@ static int pucan_handle_cache_critical(struct peak_canfd_priv *priv)
+ 	cf->data[6] = priv->bec.txerr;
+ 	cf->data[7] = priv->bec.rxerr;
+ 
+-	stats->rx_bytes += cf->len;
+-	stats->rx_packets++;
+ 	netif_rx(skb);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 8999ec9455ec2..f408ed9a6ccd1 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -223,7 +223,6 @@ static void tx_failure_cleanup(struct net_device *ndev)
+ static void rcar_can_error(struct net_device *ndev)
+ {
+ 	struct rcar_can_priv *priv = netdev_priv(ndev);
+-	struct net_device_stats *stats = &ndev->stats;
+ 	struct can_frame *cf;
+ 	struct sk_buff *skb;
+ 	u8 eifr, txerr = 0, rxerr = 0;
+@@ -362,11 +361,8 @@ static void rcar_can_error(struct net_device *ndev)
+ 		}
+ 	}
+ 
+-	if (skb) {
+-		stats->rx_packets++;
+-		stats->rx_bytes += cf->len;
++	if (skb)
+ 		netif_rx(skb);
+-	}
+ }
+ 
+ static void rcar_can_tx_done(struct net_device *ndev)
+diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
+index ff9d0f5ae0dd2..137eea4c7bad8 100644
+--- a/drivers/net/can/rcar/rcar_canfd.c
++++ b/drivers/net/can/rcar/rcar_canfd.c
+@@ -1033,8 +1033,6 @@ static void rcar_canfd_error(struct net_device *ndev, u32 cerfl,
+ 	/* Clear channel error interrupts that are handled */
+ 	rcar_canfd_write(priv->base, RCANFD_CERFL(ch),
+ 			 RCANFD_CERFL_ERR(~cerfl));
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ }
+ 
+@@ -1174,8 +1172,6 @@ static void rcar_canfd_state_change(struct net_device *ndev,
+ 		rx_state = txerr <= rxerr ? state : 0;
+ 
+ 		can_change_state(ndev, cf, tx_state, rx_state);
+-		stats->rx_packets++;
+-		stats->rx_bytes += cf->len;
+ 		netif_rx(skb);
+ 	}
+ }
+@@ -1640,8 +1636,7 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
+ 	ndev = alloc_candev(sizeof(*priv), RCANFD_FIFO_DEPTH);
+ 	if (!ndev) {
+ 		dev_err(&pdev->dev, "alloc_candev() failed\n");
+-		err = -ENOMEM;
+-		goto fail;
++		return -ENOMEM;
+ 	}
+ 	priv = netdev_priv(ndev);
+ 
+@@ -1735,8 +1730,8 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
+ 
+ fail_candev:
+ 	netif_napi_del(&priv->napi);
+-	free_candev(ndev);
+ fail:
++	free_candev(ndev);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
+index 3fad546467461..a65546ca94610 100644
+--- a/drivers/net/can/sja1000/sja1000.c
++++ b/drivers/net/can/sja1000/sja1000.c
+@@ -487,8 +487,6 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ 			can_bus_off(dev);
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/softing/softing_cs.c b/drivers/net/can/softing/softing_cs.c
+index 2e93ee7923739..e5c939b63fa65 100644
+--- a/drivers/net/can/softing/softing_cs.c
++++ b/drivers/net/can/softing/softing_cs.c
+@@ -293,7 +293,7 @@ static int softingcs_probe(struct pcmcia_device *pcmcia)
+ 	return 0;
+ 
+ platform_failed:
+-	kfree(dev);
++	platform_device_put(pdev);
+ mem_failed:
+ pcmcia_bad:
+ pcmcia_failed:
+diff --git a/drivers/net/can/softing/softing_fw.c b/drivers/net/can/softing/softing_fw.c
+index 7e15368779931..32286f861a195 100644
+--- a/drivers/net/can/softing/softing_fw.c
++++ b/drivers/net/can/softing/softing_fw.c
+@@ -565,18 +565,19 @@ int softing_startstop(struct net_device *dev, int up)
+ 		if (ret < 0)
+ 			goto failed;
+ 	}
+-	/* enable_error_frame */
+-	/*
++
++	/* enable_error_frame
++	 *
+ 	 * Error reporting is switched off at the moment since
+ 	 * the receiving of them is not yet 100% verified
+ 	 * This should be enabled sooner or later
+-	 *
+-	if (error_reporting) {
++	 */
++	if (0 && error_reporting) {
+ 		ret = softing_fct_cmd(card, 51, "enable_error_frame");
+ 		if (ret < 0)
+ 			goto failed;
+ 	}
+-	*/
++
+ 	/* initialize interface */
+ 	iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ 	iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 4]);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index e16dc482f3270..9a4791d88683c 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -1336,7 +1336,7 @@ mcp251xfd_tef_obj_read(const struct mcp251xfd_priv *priv,
+ 	     len > tx_ring->obj_num ||
+ 	     offset + len > tx_ring->obj_num)) {
+ 		netdev_err(priv->ndev,
+-			   "Trying to read to many TEF objects (max=%d, offset=%d, len=%d).\n",
++			   "Trying to read too many TEF objects (max=%d, offset=%d, len=%d).\n",
+ 			   tx_ring->obj_num, offset, len);
+ 		return -ERANGE;
+ 	}
+@@ -2625,7 +2625,7 @@ static int mcp251xfd_register_chip_detect(struct mcp251xfd_priv *priv)
+ 	if (!mcp251xfd_is_251X(priv) &&
+ 	    priv->devtype_data.model != devtype_data->model) {
+ 		netdev_info(ndev,
+-			    "Detected %s, but firmware specifies a %s. Fixing up.",
++			    "Detected %s, but firmware specifies a %s. Fixing up.\n",
+ 			    __mcp251xfd_get_model_str(devtype_data->model),
+ 			    mcp251xfd_get_model_str(priv));
+ 	}
+@@ -2662,7 +2662,7 @@ static int mcp251xfd_register_check_rx_int(struct mcp251xfd_priv *priv)
+ 		return 0;
+ 
+ 	netdev_info(priv->ndev,
+-		    "RX_INT active after softreset, disabling RX_INT support.");
++		    "RX_INT active after softreset, disabling RX_INT support.\n");
+ 	devm_gpiod_put(&priv->spi->dev, priv->rx_int);
+ 	priv->rx_int = NULL;
+ 
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 54aa7c25c4de1..599174098883d 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -622,13 +622,10 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ 			can_bus_off(dev);
+ 	}
+ 
+-	if (likely(skb)) {
+-		stats->rx_packets++;
+-		stats->rx_bytes += cf->len;
++	if (likely(skb))
+ 		netif_rx(skb);
+-	} else {
++	else
+ 		return -ENOMEM;
+-	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 2b5302e724353..7cf65936d02e5 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -397,8 +397,6 @@ static void ems_usb_rx_err(struct ems_usb *dev, struct ems_cpc_msg *msg)
+ 		stats->rx_errors++;
+ 	}
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ }
+ 
+diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
+index c6068a251fbed..5f6915a27b3d9 100644
+--- a/drivers/net/can/usb/esd_usb2.c
++++ b/drivers/net/can/usb/esd_usb2.c
+@@ -293,8 +293,6 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
+ 		priv->bec.txerr = txerr;
+ 		priv->bec.rxerr = rxerr;
+ 
+-		stats->rx_packets++;
+-		stats->rx_bytes += cf->len;
+ 		netif_rx(skb);
+ 	}
+ }
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
+index 24627ab146261..fb07c33ba0c3c 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
+@@ -849,13 +849,6 @@ int es58x_rx_err_msg(struct net_device *netdev, enum es58x_err error,
+ 		break;
+ 	}
+ 
+-	/* driver/net/can/dev.c:can_restart() takes in account error
+-	 * messages in the RX stats. Doing the same here for
+-	 * consistency.
+-	 */
+-	netdev->stats.rx_packets++;
+-	netdev->stats.rx_bytes += CAN_ERR_DLC;
+-
+ 	if (cf) {
+ 		if (cf->data[1])
+ 			cf->can_id |= CAN_ERR_CRTL;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 0cc0fc866a2a9..3e682ef43f8ef 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -279,8 +279,6 @@ int kvaser_usb_can_rx_over_error(struct net_device *netdev)
+ 	cf->can_id |= CAN_ERR_CRTL;
+ 	cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index dcee8dc828ecc..3398da323126c 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -869,7 +869,6 @@ static void kvaser_usb_hydra_update_state(struct kvaser_usb_net_priv *priv,
+ 	struct net_device *netdev = priv->netdev;
+ 	struct can_frame *cf;
+ 	struct sk_buff *skb;
+-	struct net_device_stats *stats;
+ 	enum can_state new_state, old_state;
+ 
+ 	old_state = priv->can.state;
+@@ -919,9 +918,6 @@ static void kvaser_usb_hydra_update_state(struct kvaser_usb_net_priv *priv,
+ 	cf->data[6] = bec->txerr;
+ 	cf->data[7] = bec->rxerr;
+ 
+-	stats = &netdev->stats;
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ }
+ 
+@@ -1074,8 +1070,6 @@ kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
+ 	cf->data[6] = bec.txerr;
+ 	cf->data[7] = bec.rxerr;
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ 
+ 	priv->bec.txerr = bec.txerr;
+@@ -1109,8 +1103,6 @@ static void kvaser_usb_hydra_one_shot_fail(struct kvaser_usb_net_priv *priv,
+ 	}
+ 
+ 	stats->tx_errors++;
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ }
+ 
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index f7af1bf5ab46d..5434b7386a51e 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -641,8 +641,6 @@ static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
+ 		if (skb) {
+ 			cf->can_id |= CAN_ERR_RESTARTED;
+ 
+-			stats->rx_packets++;
+-			stats->rx_bytes += cf->len;
+ 			netif_rx(skb);
+ 		} else {
+ 			netdev_err(priv->netdev,
+@@ -843,8 +841,6 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 	cf->data[6] = es->txerr;
+ 	cf->data[7] = es->rxerr;
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ }
+ 
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb.c b/drivers/net/can/usb/peak_usb/pcan_usb.c
+index 8762187527669..21b06a7385959 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb.c
+@@ -520,8 +520,6 @@ static int pcan_usb_decode_error(struct pcan_usb_msg_context *mc, u8 n,
+ 				     &hwts->hwtstamp);
+ 	}
+ 
+-	mc->netdev->stats.rx_packets++;
+-	mc->netdev->stats.rx_bytes += cf->len;
+ 	netif_rx(skb);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+index 6bd12549f1014..185f5a98d2177 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+@@ -577,9 +577,6 @@ static int pcan_usb_fd_decode_status(struct pcan_usb_fd_if *usb_if,
+ 	if (!skb)
+ 		return -ENOMEM;
+ 
+-	netdev->stats.rx_packets++;
+-	netdev->stats.rx_bytes += cf->len;
+-
+ 	peak_usb_netif_rx_64(skb, le32_to_cpu(sm->ts_low),
+ 			     le32_to_cpu(sm->ts_high));
+ 
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
+index 858ab22708fcd..f6d19879bf404 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
+@@ -660,8 +660,6 @@ static int pcan_usb_pro_handle_error(struct pcan_usb_pro_interface *usb_if,
+ 
+ 	hwts = skb_hwtstamps(skb);
+ 	peak_usb_get_ts_time(&usb_if->time_ref, le32_to_cpu(er->ts32), &hwts->hwtstamp);
+-	netdev->stats.rx_packets++;
+-	netdev->stats.rx_bytes += can_frame->len;
+ 	netif_rx(skb);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/usb/ucan.c b/drivers/net/can/usb/ucan.c
+index 1679cbe45ded2..d582c39fc8d0e 100644
+--- a/drivers/net/can/usb/ucan.c
++++ b/drivers/net/can/usb/ucan.c
+@@ -621,8 +621,10 @@ static void ucan_rx_can_msg(struct ucan_priv *up, struct ucan_message_in *m)
+ 		memcpy(cf->data, m->msg.can_msg.data, cf->len);
+ 
+ 	/* don't count error frames as real packets */
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
++	if (!(cf->can_id & CAN_ERR_FLAG)) {
++		stats->rx_packets++;
++		stats->rx_bytes += cf->len;
++	}
+ 
+ 	/* pass it to Linux */
+ 	netif_rx(skb);
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index d1b83bd1b3cb9..040324362b260 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -449,8 +449,6 @@ static void usb_8dev_rx_err_msg(struct usb_8dev_priv *priv,
+ 	priv->bec.txerr = txerr;
+ 	priv->bec.rxerr = rxerr;
+ 
+-	stats->rx_packets++;
+-	stats->rx_bytes += cf->len;
+ 	netif_rx(skb);
+ }
+ 
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index e2b15d29d15eb..e0525541b32a7 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -965,13 +965,8 @@ static void xcan_update_error_state_after_rxtx(struct net_device *ndev)
+ 
+ 		xcan_set_error_state(ndev, new_state, skb ? cf : NULL);
+ 
+-		if (skb) {
+-			struct net_device_stats *stats = &ndev->stats;
+-
+-			stats->rx_packets++;
+-			stats->rx_bytes += cf->len;
++		if (skb)
+ 			netif_rx(skb);
+-		}
+ 	}
+ }
+ 
+@@ -1095,8 +1090,6 @@ static void xcan_err_interrupt(struct net_device *ndev, u32 isr)
+ 		if (skb) {
+ 			skb_cf->can_id |= cf.can_id;
+ 			memcpy(skb_cf->data, cf.data, CAN_ERR_DLC);
+-			stats->rx_packets++;
+-			stats->rx_bytes += CAN_ERR_DLC;
+ 			netif_rx(skb);
+ 		}
+ 	}
+@@ -1761,7 +1754,12 @@ static int xcan_probe(struct platform_device *pdev)
+ 	spin_lock_init(&priv->tx_lock);
+ 
+ 	/* Get IRQ for the device */
+-	ndev->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto err_free;
++
++	ndev->irq = ret;
++
+ 	ndev->flags |= IFF_ECHO;	/* We support local echo */
+ 
+ 	platform_set_drvdata(pdev, ndev);
+diff --git a/drivers/net/dsa/hirschmann/hellcreek.c b/drivers/net/dsa/hirschmann/hellcreek.c
+index 4e0b53d94b525..b2bab460d2e98 100644
+--- a/drivers/net/dsa/hirschmann/hellcreek.c
++++ b/drivers/net/dsa/hirschmann/hellcreek.c
+@@ -710,8 +710,9 @@ static int __hellcreek_fdb_add(struct hellcreek *hellcreek,
+ 	u16 meta = 0;
+ 
+ 	dev_dbg(hellcreek->dev, "Add static FDB entry: MAC=%pM, MASK=0x%02x, "
+-		"OBT=%d, REPRIO_EN=%d, PRIO=%d\n", entry->mac, entry->portmask,
+-		entry->is_obt, entry->reprio_en, entry->reprio_tc);
++		"OBT=%d, PASS_BLOCKED=%d, REPRIO_EN=%d, PRIO=%d\n", entry->mac,
++		entry->portmask, entry->is_obt, entry->pass_blocked,
++		entry->reprio_en, entry->reprio_tc);
+ 
+ 	/* Add mac address */
+ 	hellcreek_write(hellcreek, entry->mac[1] | (entry->mac[0] << 8), HR_FDBWDH);
+@@ -722,6 +723,8 @@ static int __hellcreek_fdb_add(struct hellcreek *hellcreek,
+ 	meta |= entry->portmask << HR_FDBWRM0_PORTMASK_SHIFT;
+ 	if (entry->is_obt)
+ 		meta |= HR_FDBWRM0_OBT;
++	if (entry->pass_blocked)
++		meta |= HR_FDBWRM0_PASS_BLOCKED;
+ 	if (entry->reprio_en) {
+ 		meta |= HR_FDBWRM0_REPRIO_EN;
+ 		meta |= entry->reprio_tc << HR_FDBWRM0_REPRIO_TC_SHIFT;
+@@ -1049,7 +1052,7 @@ static void hellcreek_setup_tc_identity_mapping(struct hellcreek *hellcreek)
+ 
+ static int hellcreek_setup_fdb(struct hellcreek *hellcreek)
+ {
+-	static struct hellcreek_fdb_entry ptp = {
++	static struct hellcreek_fdb_entry l2_ptp = {
+ 		/* MAC: 01-1B-19-00-00-00 */
+ 		.mac	      = { 0x01, 0x1b, 0x19, 0x00, 0x00, 0x00 },
+ 		.portmask     = 0x03,	/* Management ports */
+@@ -1060,24 +1063,94 @@ static int hellcreek_setup_fdb(struct hellcreek *hellcreek)
+ 		.reprio_tc    = 6,	/* TC: 6 as per IEEE 802.1AS */
+ 		.reprio_en    = 1,
+ 	};
+-	static struct hellcreek_fdb_entry p2p = {
++	static struct hellcreek_fdb_entry udp4_ptp = {
++		/* MAC: 01-00-5E-00-01-81 */
++		.mac	      = { 0x01, 0x00, 0x5e, 0x00, 0x01, 0x81 },
++		.portmask     = 0x03,	/* Management ports */
++		.age	      = 0,
++		.is_obt	      = 0,
++		.pass_blocked = 0,
++		.is_static    = 1,
++		.reprio_tc    = 6,
++		.reprio_en    = 1,
++	};
++	static struct hellcreek_fdb_entry udp6_ptp = {
++		/* MAC: 33-33-00-00-01-81 */
++		.mac	      = { 0x33, 0x33, 0x00, 0x00, 0x01, 0x81 },
++		.portmask     = 0x03,	/* Management ports */
++		.age	      = 0,
++		.is_obt	      = 0,
++		.pass_blocked = 0,
++		.is_static    = 1,
++		.reprio_tc    = 6,
++		.reprio_en    = 1,
++	};
++	static struct hellcreek_fdb_entry l2_p2p = {
+ 		/* MAC: 01-80-C2-00-00-0E */
+ 		.mac	      = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x0e },
+ 		.portmask     = 0x03,	/* Management ports */
+ 		.age	      = 0,
+ 		.is_obt	      = 0,
+-		.pass_blocked = 0,
++		.pass_blocked = 1,
+ 		.is_static    = 1,
+ 		.reprio_tc    = 6,	/* TC: 6 as per IEEE 802.1AS */
+ 		.reprio_en    = 1,
+ 	};
++	static struct hellcreek_fdb_entry udp4_p2p = {
++		/* MAC: 01-00-5E-00-00-6B */
++		.mac	      = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x6b },
++		.portmask     = 0x03,	/* Management ports */
++		.age	      = 0,
++		.is_obt	      = 0,
++		.pass_blocked = 1,
++		.is_static    = 1,
++		.reprio_tc    = 6,
++		.reprio_en    = 1,
++	};
++	static struct hellcreek_fdb_entry udp6_p2p = {
++		/* MAC: 33-33-00-00-00-6B */
++		.mac	      = { 0x33, 0x33, 0x00, 0x00, 0x00, 0x6b },
++		.portmask     = 0x03,	/* Management ports */
++		.age	      = 0,
++		.is_obt	      = 0,
++		.pass_blocked = 1,
++		.is_static    = 1,
++		.reprio_tc    = 6,
++		.reprio_en    = 1,
++	};
++	static struct hellcreek_fdb_entry stp = {
++		/* MAC: 01-80-C2-00-00-00 */
++		.mac	      = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x00 },
++		.portmask     = 0x03,	/* Management ports */
++		.age	      = 0,
++		.is_obt	      = 0,
++		.pass_blocked = 1,
++		.is_static    = 1,
++		.reprio_tc    = 6,
++		.reprio_en    = 1,
++	};
+ 	int ret;
+ 
+ 	mutex_lock(&hellcreek->reg_lock);
+-	ret = __hellcreek_fdb_add(hellcreek, &ptp);
++	ret = __hellcreek_fdb_add(hellcreek, &l2_ptp);
++	if (ret)
++		goto out;
++	ret = __hellcreek_fdb_add(hellcreek, &udp4_ptp);
++	if (ret)
++		goto out;
++	ret = __hellcreek_fdb_add(hellcreek, &udp6_ptp);
++	if (ret)
++		goto out;
++	ret = __hellcreek_fdb_add(hellcreek, &l2_p2p);
++	if (ret)
++		goto out;
++	ret = __hellcreek_fdb_add(hellcreek, &udp4_p2p);
++	if (ret)
++		goto out;
++	ret = __hellcreek_fdb_add(hellcreek, &udp6_p2p);
+ 	if (ret)
+ 		goto out;
+-	ret = __hellcreek_fdb_add(hellcreek, &p2p);
++	ret = __hellcreek_fdb_add(hellcreek, &stp);
+ out:
+ 	mutex_unlock(&hellcreek->reg_lock);
+ 
+diff --git a/drivers/net/dsa/rtl8365mb.c b/drivers/net/dsa/rtl8365mb.c
+index 078ca4cd71605..48c0e3e466001 100644
+--- a/drivers/net/dsa/rtl8365mb.c
++++ b/drivers/net/dsa/rtl8365mb.c
+@@ -767,7 +767,8 @@ static int rtl8365mb_ext_config_rgmii(struct realtek_smi *smi, int port,
+ 	 *     0 = no delay, 1 = 2 ns delay
+ 	 *   RX delay:
+ 	 *     0 = no delay, 7 = maximum delay
+-	 *     No units are specified, but there are a total of 8 steps.
++	 *     Each step is approximately 0.3 ns, so the maximum delay is about
++	 *     2.1 ns.
+ 	 *
+ 	 * The vendor driver also states that this must be configured *before*
+ 	 * forcing the external interface into a particular mode, which is done
+@@ -778,10 +779,6 @@ static int rtl8365mb_ext_config_rgmii(struct realtek_smi *smi, int port,
+ 	 * specified. We ignore the detail of the RGMII interface mode
+ 	 * (RGMII_{RXID, TXID, etc.}), as this is considered to be a PHY-only
+ 	 * property.
+-	 *
+-	 * For the RX delay, we assume that a register value of 4 corresponds to
+-	 * 2 ns. But this is just an educated guess, so ignore all other values
+-	 * to avoid too much confusion.
+ 	 */
+ 	if (!of_property_read_u32(dn, "tx-internal-delay-ps", &val)) {
+ 		val = val / 1000; /* convert to ns */
+@@ -794,13 +791,13 @@ static int rtl8365mb_ext_config_rgmii(struct realtek_smi *smi, int port,
+ 	}
+ 
+ 	if (!of_property_read_u32(dn, "rx-internal-delay-ps", &val)) {
+-		val = val / 1000; /* convert to ns */
++		val = DIV_ROUND_CLOSEST(val, 300); /* convert to 0.3 ns step */
+ 
+-		if (val == 0 || val == 2)
+-			rx_delay = val * 2;
++		if (val <= 7)
++			rx_delay = val;
+ 		else
+ 			dev_warn(smi->dev,
+-				 "EXT port RX delay must be 0 to 2 ns\n");
++				 "EXT port RX delay must be 0 to 2.1 ns\n");
+ 	}
+ 
+ 	ret = regmap_update_bits(
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c04ea83188e22..7eaf74e5b2929 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -8008,6 +8008,12 @@ static int bnxt_hwrm_ver_get(struct bnxt *bp)
+ 	bp->hwrm_cmd_timeout = le16_to_cpu(resp->def_req_timeout);
+ 	if (!bp->hwrm_cmd_timeout)
+ 		bp->hwrm_cmd_timeout = DFLT_HWRM_CMD_TIMEOUT;
++	bp->hwrm_cmd_max_timeout = le16_to_cpu(resp->max_req_timeout) * 1000;
++	if (!bp->hwrm_cmd_max_timeout)
++		bp->hwrm_cmd_max_timeout = HWRM_CMD_MAX_TIMEOUT;
++	else if (bp->hwrm_cmd_max_timeout > HWRM_CMD_MAX_TIMEOUT)
++		netdev_warn(bp->dev, "Device requests max timeout of %d seconds, may trigger hung task watchdog\n",
++			    bp->hwrm_cmd_max_timeout / 1000);
+ 
+ 	if (resp->hwrm_intf_maj_8b >= 1) {
+ 		bp->hwrm_max_req_len = le16_to_cpu(resp->max_req_win_len);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 4c9507d82fd0d..6bacd5fae6ba5 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1985,7 +1985,8 @@ struct bnxt {
+ 
+ 	u16			hwrm_max_req_len;
+ 	u16			hwrm_max_ext_req_len;
+-	int			hwrm_cmd_timeout;
++	unsigned int		hwrm_cmd_timeout;
++	unsigned int		hwrm_cmd_max_timeout;
+ 	struct mutex		hwrm_cmd_lock;	/* serialize hwrm messages */
+ 	struct hwrm_ver_get_output	ver_resp;
+ #define FW_VER_STR_LEN		32
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
+index d3cb2f21946da..c067898820360 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
+@@ -32,7 +32,7 @@ static int bnxt_hwrm_dbg_dma_data(struct bnxt *bp, void *msg,
+ 		return -ENOMEM;
+ 	}
+ 
+-	hwrm_req_timeout(bp, msg, HWRM_COREDUMP_TIMEOUT);
++	hwrm_req_timeout(bp, msg, bp->hwrm_cmd_max_timeout);
+ 	cmn_resp = hwrm_req_hold(bp, msg);
+ 	resp = cmn_resp;
+ 
+@@ -125,7 +125,7 @@ static int bnxt_hwrm_dbg_coredump_initiate(struct bnxt *bp, u16 component_id,
+ 	if (rc)
+ 		return rc;
+ 
+-	hwrm_req_timeout(bp, req, HWRM_COREDUMP_TIMEOUT);
++	hwrm_req_timeout(bp, req, bp->hwrm_cmd_max_timeout);
+ 	req->component_id = cpu_to_le16(component_id);
+ 	req->segment_id = cpu_to_le16(segment_id);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 8188d55722e4b..7307df49c1313 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -31,9 +31,6 @@
+ #include "bnxt_nvm_defs.h"	/* NVRAM content constant and structure defs */
+ #include "bnxt_fw_hdr.h"	/* Firmware hdr constant and structure defs */
+ #include "bnxt_coredump.h"
+-#define FLASH_NVRAM_TIMEOUT	((HWRM_CMD_TIMEOUT) * 100)
+-#define FLASH_PACKAGE_TIMEOUT	((HWRM_CMD_TIMEOUT) * 200)
+-#define INSTALL_PACKAGE_TIMEOUT	((HWRM_CMD_TIMEOUT) * 200)
+ 
+ static u32 bnxt_get_msglevel(struct net_device *dev)
+ {
+@@ -2169,7 +2166,7 @@ static int bnxt_flash_nvram(struct net_device *dev, u16 dir_type,
+ 		req->host_src_addr = cpu_to_le64(dma_handle);
+ 	}
+ 
+-	hwrm_req_timeout(bp, req, FLASH_NVRAM_TIMEOUT);
++	hwrm_req_timeout(bp, req, bp->hwrm_cmd_max_timeout);
+ 	req->dir_type = cpu_to_le16(dir_type);
+ 	req->dir_ordinal = cpu_to_le16(dir_ordinal);
+ 	req->dir_ext = cpu_to_le16(dir_ext);
+@@ -2515,8 +2512,8 @@ int bnxt_flash_package_from_fw_obj(struct net_device *dev, const struct firmware
+ 		return rc;
+ 	}
+ 
+-	hwrm_req_timeout(bp, modify, FLASH_PACKAGE_TIMEOUT);
+-	hwrm_req_timeout(bp, install, INSTALL_PACKAGE_TIMEOUT);
++	hwrm_req_timeout(bp, modify, bp->hwrm_cmd_max_timeout);
++	hwrm_req_timeout(bp, install, bp->hwrm_cmd_max_timeout);
+ 
+ 	hwrm_req_hold(bp, modify);
+ 	modify->host_src_addr = cpu_to_le64(dma_handle);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
+index bb7327b82d0b2..8171f4912fa01 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
+@@ -496,7 +496,7 @@ static int __hwrm_send(struct bnxt *bp, struct bnxt_hwrm_ctx *ctx)
+ 	}
+ 
+ 	/* Limit timeout to an upper limit */
+-	timeout = min_t(uint, ctx->timeout, HWRM_CMD_MAX_TIMEOUT);
++	timeout = min(ctx->timeout, bp->hwrm_cmd_max_timeout ?: HWRM_CMD_MAX_TIMEOUT);
+ 	/* convert timeout to usec */
+ 	timeout *= 1000;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
+index 4d17f0d5363bb..9a9fc4e8041b6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
+@@ -58,11 +58,10 @@ void hwrm_update_token(struct bnxt *bp, u16 seq, enum bnxt_hwrm_wait_state s);
+ 
+ #define BNXT_HWRM_MAX_REQ_LEN		(bp->hwrm_max_req_len)
+ #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
+-#define HWRM_CMD_MAX_TIMEOUT		40000
++#define HWRM_CMD_MAX_TIMEOUT		40000U
+ #define SHORT_HWRM_CMD_TIMEOUT		20
+ #define HWRM_CMD_TIMEOUT		(bp->hwrm_cmd_timeout)
+ #define HWRM_RESET_TIMEOUT		((HWRM_CMD_TIMEOUT) * 4)
+-#define HWRM_COREDUMP_TIMEOUT		((HWRM_CMD_TIMEOUT) * 12)
+ #define BNXT_HWRM_TARGET		0xffff
+ #define BNXT_HWRM_NO_CMPL_RING		-1
+ #define BNXT_HWRM_REQ_MAX_SIZE		128
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 226f4403cfed3..87f1056e29ff2 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -4020,10 +4020,12 @@ static int bcmgenet_probe(struct platform_device *pdev)
+ 
+ 	/* Request the WOL interrupt and advertise suspend if available */
+ 	priv->wol_irq_disabled = true;
+-	err = devm_request_irq(&pdev->dev, priv->wol_irq, bcmgenet_wol_isr, 0,
+-			       dev->name, priv);
+-	if (!err)
+-		device_set_wakeup_capable(&pdev->dev, 1);
++	if (priv->wol_irq > 0) {
++		err = devm_request_irq(&pdev->dev, priv->wol_irq,
++				       bcmgenet_wol_isr, 0, dev->name, priv);
++		if (!err)
++			device_set_wakeup_capable(&pdev->dev, 1);
++	}
+ 
+ 	/* Set the needed headroom to account for any possible
+ 	 * features enabling/disabling at runtime
+diff --git a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
+index d04a6c1634452..da8d10475a08e 100644
+--- a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
++++ b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
+@@ -32,6 +32,7 @@
+ 
+ #include <linux/tcp.h>
+ #include <linux/ipv6.h>
++#include <net/inet_ecn.h>
+ #include <net/route.h>
+ #include <net/ip6_route.h>
+ 
+@@ -99,7 +100,7 @@ cxgb_find_route(struct cxgb4_lld_info *lldi,
+ 
+ 	rt = ip_route_output_ports(&init_net, &fl4, NULL, peer_ip, local_ip,
+ 				   peer_port, local_port, IPPROTO_TCP,
+-				   tos, 0);
++				   tos & ~INET_ECN_MASK, 0);
+ 	if (IS_ERR(rt))
+ 		return NULL;
+ 	n = dst_neigh_lookup(&rt->dst, &peer_ip);
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 941f175fb911e..0ff40a9b06cec 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -305,21 +305,21 @@ static void gmac_speed_set(struct net_device *netdev)
+ 	switch (phydev->speed) {
+ 	case 1000:
+ 		status.bits.speed = GMAC_SPEED_1000;
+-		if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
++		if (phy_interface_mode_is_rgmii(phydev->interface))
+ 			status.bits.mii_rmii = GMAC_PHY_RGMII_1000;
+ 		netdev_dbg(netdev, "connect %s to RGMII @ 1Gbit\n",
+ 			   phydev_name(phydev));
+ 		break;
+ 	case 100:
+ 		status.bits.speed = GMAC_SPEED_100;
+-		if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
++		if (phy_interface_mode_is_rgmii(phydev->interface))
+ 			status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
+ 		netdev_dbg(netdev, "connect %s to RGMII @ 100 Mbit\n",
+ 			   phydev_name(phydev));
+ 		break;
+ 	case 10:
+ 		status.bits.speed = GMAC_SPEED_10;
+-		if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
++		if (phy_interface_mode_is_rgmii(phydev->interface))
+ 			status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
+ 		netdev_dbg(netdev, "connect %s to RGMII @ 10 Mbit\n",
+ 			   phydev_name(phydev));
+@@ -389,6 +389,9 @@ static int gmac_setup_phy(struct net_device *netdev)
+ 		status.bits.mii_rmii = GMAC_PHY_GMII;
+ 		break;
+ 	case PHY_INTERFACE_MODE_RGMII:
++	case PHY_INTERFACE_MODE_RGMII_ID:
++	case PHY_INTERFACE_MODE_RGMII_TXID:
++	case PHY_INTERFACE_MODE_RGMII_RXID:
+ 		netdev_dbg(netdev,
+ 			   "RGMII: set GMAC0 and GMAC1 to MII/RGMII mode\n");
+ 		status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
+diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
+index d9fc5c456bf3e..39ae965cd4f64 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.c
++++ b/drivers/net/ethernet/freescale/fman/mac.c
+@@ -94,14 +94,17 @@ static void mac_exception(void *handle, enum fman_mac_exceptions ex)
+ 		__func__, ex);
+ }
+ 
+-static void set_fman_mac_params(struct mac_device *mac_dev,
+-				struct fman_mac_params *params)
++static int set_fman_mac_params(struct mac_device *mac_dev,
++			       struct fman_mac_params *params)
+ {
+ 	struct mac_priv_s *priv = mac_dev->priv;
+ 
+ 	params->base_addr = (typeof(params->base_addr))
+ 		devm_ioremap(priv->dev, mac_dev->res->start,
+ 			     resource_size(mac_dev->res));
++	if (!params->base_addr)
++		return -ENOMEM;
++
+ 	memcpy(&params->addr, mac_dev->addr, sizeof(mac_dev->addr));
+ 	params->max_speed	= priv->max_speed;
+ 	params->phy_if		= mac_dev->phy_if;
+@@ -112,6 +115,8 @@ static void set_fman_mac_params(struct mac_device *mac_dev,
+ 	params->event_cb	= mac_exception;
+ 	params->dev_id		= mac_dev;
+ 	params->internal_phy_node = priv->internal_phy_node;
++
++	return 0;
+ }
+ 
+ static int tgec_initialization(struct mac_device *mac_dev)
+@@ -123,7 +128,9 @@ static int tgec_initialization(struct mac_device *mac_dev)
+ 
+ 	priv = mac_dev->priv;
+ 
+-	set_fman_mac_params(mac_dev, &params);
++	err = set_fman_mac_params(mac_dev, &params);
++	if (err)
++		goto _return;
+ 
+ 	mac_dev->fman_mac = tgec_config(&params);
+ 	if (!mac_dev->fman_mac) {
+@@ -169,7 +176,9 @@ static int dtsec_initialization(struct mac_device *mac_dev)
+ 
+ 	priv = mac_dev->priv;
+ 
+-	set_fman_mac_params(mac_dev, &params);
++	err = set_fman_mac_params(mac_dev, &params);
++	if (err)
++		goto _return;
+ 
+ 	mac_dev->fman_mac = dtsec_config(&params);
+ 	if (!mac_dev->fman_mac) {
+@@ -218,7 +227,9 @@ static int memac_initialization(struct mac_device *mac_dev)
+ 
+ 	priv = mac_dev->priv;
+ 
+-	set_fman_mac_params(mac_dev, &params);
++	err = set_fman_mac_params(mac_dev, &params);
++	if (err)
++		goto _return;
+ 
+ 	if (priv->max_speed == SPEED_10000)
+ 		params.phy_if = PHY_INTERFACE_MODE_XGMII;
+diff --git a/drivers/net/ethernet/freescale/xgmac_mdio.c b/drivers/net/ethernet/freescale/xgmac_mdio.c
+index 5b8b9bcf41a25..266e562bd67ae 100644
+--- a/drivers/net/ethernet/freescale/xgmac_mdio.c
++++ b/drivers/net/ethernet/freescale/xgmac_mdio.c
+@@ -51,6 +51,7 @@ struct tgec_mdio_controller {
+ struct mdio_fsl_priv {
+ 	struct	tgec_mdio_controller __iomem *mdio_base;
+ 	bool	is_little_endian;
++	bool	has_a009885;
+ 	bool	has_a011043;
+ };
+ 
+@@ -186,10 +187,10 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+ {
+ 	struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
+ 	struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
++	unsigned long flags;
+ 	uint16_t dev_addr;
+ 	uint32_t mdio_stat;
+ 	uint32_t mdio_ctl;
+-	uint16_t value;
+ 	int ret;
+ 	bool endian = priv->is_little_endian;
+ 
+@@ -221,12 +222,18 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+ 			return ret;
+ 	}
+ 
++	if (priv->has_a009885)
++		/* Once the operation completes, i.e. MDIO_STAT_BSY clears, we
++		 * must read back the data register within 16 MDC cycles.
++		 */
++		local_irq_save(flags);
++
+ 	/* Initiate the read */
+ 	xgmac_write32(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl, endian);
+ 
+ 	ret = xgmac_wait_until_done(&bus->dev, regs, endian);
+ 	if (ret)
+-		return ret;
++		goto irq_restore;
+ 
+ 	/* Return all Fs if nothing was there */
+ 	if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) &&
+@@ -234,13 +241,17 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
+ 		dev_dbg(&bus->dev,
+ 			"Error while reading PHY%d reg at %d.%hhu\n",
+ 			phy_id, dev_addr, regnum);
+-		return 0xffff;
++		ret = 0xffff;
++	} else {
++		ret = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
++		dev_dbg(&bus->dev, "read %04x\n", ret);
+ 	}
+ 
+-	value = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
+-	dev_dbg(&bus->dev, "read %04x\n", value);
++irq_restore:
++	if (priv->has_a009885)
++		local_irq_restore(flags);
+ 
+-	return value;
++	return ret;
+ }
+ 
+ static int xgmac_mdio_probe(struct platform_device *pdev)
+@@ -287,6 +298,8 @@ static int xgmac_mdio_probe(struct platform_device *pdev)
+ 	priv->is_little_endian = device_property_read_bool(&pdev->dev,
+ 							   "little-endian");
+ 
++	priv->has_a009885 = device_property_read_bool(&pdev->dev,
++						      "fsl,erratum-a009885");
+ 	priv->has_a011043 = device_property_read_bool(&pdev->dev,
+ 						      "fsl,erratum-a011043");
+ 
+@@ -318,9 +331,10 @@ err_ioremap:
+ static int xgmac_mdio_remove(struct platform_device *pdev)
+ {
+ 	struct mii_bus *bus = platform_get_drvdata(pdev);
++	struct mdio_fsl_priv *priv = bus->priv;
+ 
+ 	mdiobus_unregister(bus);
+-	iounmap(bus->priv);
++	iounmap(priv->mdio_base);
+ 	mdiobus_free(bus);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/i825xx/sni_82596.c b/drivers/net/ethernet/i825xx/sni_82596.c
+index 27937c5d79567..daec9ce04531b 100644
+--- a/drivers/net/ethernet/i825xx/sni_82596.c
++++ b/drivers/net/ethernet/i825xx/sni_82596.c
+@@ -117,9 +117,10 @@ static int sni_82596_probe(struct platform_device *dev)
+ 	netdevice->dev_addr[5] = readb(eth_addr + 0x06);
+ 	iounmap(eth_addr);
+ 
+-	if (!netdevice->irq) {
++	if (netdevice->irq < 0) {
+ 		printk(KERN_ERR "%s: IRQ not found for i82596 at 0x%lx\n",
+ 			__FILE__, netdevice->base_addr);
++		retval = netdevice->irq;
+ 		goto probe_failed;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index d28a80a009537..d83e665b3a4f2 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -2448,8 +2448,10 @@ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+ 
+ 	skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
+ 	memcpy(__skb_put(skb, totalsize), xdp->data_meta, totalsize);
+-	if (metasize)
++	if (metasize) {
+ 		skb_metadata_set(skb, metasize);
++		__skb_pull(skb, metasize);
++	}
+ 
+ 	return skb;
+ }
+diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
+index 072391c494ce4..14059e11710ad 100644
+--- a/drivers/net/ethernet/lantiq_etop.c
++++ b/drivers/net/ethernet/lantiq_etop.c
+@@ -687,13 +687,13 @@ ltq_etop_probe(struct platform_device *pdev)
+ 	err = device_property_read_u32(&pdev->dev, "lantiq,tx-burst-length", &priv->tx_burst_len);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "unable to read tx-burst-length property\n");
+-		return err;
++		goto err_free;
+ 	}
+ 
+ 	err = device_property_read_u32(&pdev->dev, "lantiq,rx-burst-length", &priv->rx_burst_len);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "unable to read rx-burst-length property\n");
+-		return err;
++		goto err_free;
+ 	}
+ 
+ 	for (i = 0; i < MAX_DMA_CHAN; i++) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/ptp.c b/drivers/net/ethernet/marvell/octeontx2/af/ptp.c
+index d6321de3cc171..e682b7bfde640 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/ptp.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/ptp.c
+@@ -60,6 +60,8 @@ struct ptp *ptp_get(void)
+ 	/* Check driver is bound to PTP block */
+ 	if (!ptp)
+ 		ptp = ERR_PTR(-EPROBE_DEFER);
++	else
++		pci_dev_get(ptp->pdev);
+ 
+ 	return ptp;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 45357deecabbf..a73a8017e0ee9 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -172,14 +172,13 @@ static int cpt_10k_register_interrupts(struct rvu_block *block, int off)
+ {
+ 	struct rvu *rvu = block->rvu;
+ 	int blkaddr = block->addr;
+-	char irq_name[16];
+ 	int i, ret;
+ 
+ 	for (i = CPT_10K_AF_INT_VEC_FLT0; i < CPT_10K_AF_INT_VEC_RVU; i++) {
+-		snprintf(irq_name, sizeof(irq_name), "CPTAF FLT%d", i);
++		sprintf(&rvu->irq_name[(off + i) * NAME_SIZE], "CPTAF FLT%d", i);
+ 		ret = rvu_cpt_do_register_interrupt(block, off + i,
+ 						    rvu_cpt_af_flt_intr_handler,
+-						    irq_name);
++						    &rvu->irq_name[(off + i) * NAME_SIZE]);
+ 		if (ret)
+ 			goto err;
+ 		rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+index 70bacd38a6d9d..d0ab8f233a029 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+@@ -41,7 +41,7 @@ static bool rvu_common_request_irq(struct rvu *rvu, int offset,
+ 	struct rvu_devlink *rvu_dl = rvu->rvu_dl;
+ 	int rc;
+ 
+-	sprintf(&rvu->irq_name[offset * NAME_SIZE], name);
++	sprintf(&rvu->irq_name[offset * NAME_SIZE], "%s", name);
+ 	rc = request_irq(pci_irq_vector(rvu->pdev, offset), fn, 0,
+ 			 &rvu->irq_name[offset * NAME_SIZE], rvu_dl);
+ 	if (rc)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index 78944ad3492ff..d75f3a78fabf1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -684,7 +684,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	err = register_netdev(netdev);
+ 	if (err) {
+ 		dev_err(dev, "Failed to register netdevice\n");
+-		goto err_detach_rsrc;
++		goto err_ptp_destroy;
+ 	}
+ 
+ 	err = otx2_wq_init(vf);
+@@ -709,6 +709,8 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ err_unreg_netdev:
+ 	unregister_netdev(netdev);
++err_ptp_destroy:
++	otx2_ptp_destroy(vf);
+ err_detach_rsrc:
+ 	if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
+ 		qmem_free(vf->dev, vf->dync_lmt);
+@@ -742,6 +744,7 @@ static void otx2vf_remove(struct pci_dev *pdev)
+ 	unregister_netdev(netdev);
+ 	if (vf->otx2_wq)
+ 		destroy_workqueue(vf->otx2_wq);
++	otx2_ptp_destroy(vf);
+ 	otx2vf_disable_mbox_intr(vf);
+ 	otx2_detach_resources(&vf->mbox);
+ 	if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 75d67d1b5f6b2..166eaa9bd3938 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -91,46 +91,53 @@ static int mtk_mdio_busy_wait(struct mtk_eth *eth)
+ 	}
+ 
+ 	dev_err(eth->dev, "mdio: MDIO timeout\n");
+-	return -1;
++	return -ETIMEDOUT;
+ }
+ 
+-static u32 _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr,
+-			   u32 phy_register, u32 write_data)
++static int _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg,
++			   u32 write_data)
+ {
+-	if (mtk_mdio_busy_wait(eth))
+-		return -1;
++	int ret;
+ 
+-	write_data &= 0xffff;
++	ret = mtk_mdio_busy_wait(eth);
++	if (ret < 0)
++		return ret;
+ 
+-	mtk_w32(eth, PHY_IAC_ACCESS | PHY_IAC_START | PHY_IAC_WRITE |
+-		(phy_register << PHY_IAC_REG_SHIFT) |
+-		(phy_addr << PHY_IAC_ADDR_SHIFT) | write_data,
++	mtk_w32(eth, PHY_IAC_ACCESS |
++		     PHY_IAC_START_C22 |
++		     PHY_IAC_CMD_WRITE |
++		     PHY_IAC_REG(phy_reg) |
++		     PHY_IAC_ADDR(phy_addr) |
++		     PHY_IAC_DATA(write_data),
+ 		MTK_PHY_IAC);
+ 
+-	if (mtk_mdio_busy_wait(eth))
+-		return -1;
++	ret = mtk_mdio_busy_wait(eth);
++	if (ret < 0)
++		return ret;
+ 
+ 	return 0;
+ }
+ 
+-static u32 _mtk_mdio_read(struct mtk_eth *eth, int phy_addr, int phy_reg)
++static int _mtk_mdio_read(struct mtk_eth *eth, u32 phy_addr, u32 phy_reg)
+ {
+-	u32 d;
++	int ret;
+ 
+-	if (mtk_mdio_busy_wait(eth))
+-		return 0xffff;
++	ret = mtk_mdio_busy_wait(eth);
++	if (ret < 0)
++		return ret;
+ 
+-	mtk_w32(eth, PHY_IAC_ACCESS | PHY_IAC_START | PHY_IAC_READ |
+-		(phy_reg << PHY_IAC_REG_SHIFT) |
+-		(phy_addr << PHY_IAC_ADDR_SHIFT),
++	mtk_w32(eth, PHY_IAC_ACCESS |
++		     PHY_IAC_START_C22 |
++		     PHY_IAC_CMD_C22_READ |
++		     PHY_IAC_REG(phy_reg) |
++		     PHY_IAC_ADDR(phy_addr),
+ 		MTK_PHY_IAC);
+ 
+-	if (mtk_mdio_busy_wait(eth))
+-		return 0xffff;
+-
+-	d = mtk_r32(eth, MTK_PHY_IAC) & 0xffff;
++	ret = mtk_mdio_busy_wait(eth);
++	if (ret < 0)
++		return ret;
+ 
+-	return d;
++	return mtk_r32(eth, MTK_PHY_IAC) & PHY_IAC_DATA_MASK;
+ }
+ 
+ static int mtk_mdio_write(struct mii_bus *bus, int phy_addr,
+@@ -217,7 +224,7 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
+ 					   phylink_config);
+ 	struct mtk_eth *eth = mac->hw;
+ 	u32 mcr_cur, mcr_new, sid, i;
+-	int val, ge_mode, err;
++	int val, ge_mode, err = 0;
+ 
+ 	/* MT76x8 has no hardware settings between for the MAC */
+ 	if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628) &&
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 5ef70dd8b49c6..f2d90639d7ed1 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -341,11 +341,17 @@
+ /* PHY Indirect Access Control registers */
+ #define MTK_PHY_IAC		0x10004
+ #define PHY_IAC_ACCESS		BIT(31)
+-#define PHY_IAC_READ		BIT(19)
+-#define PHY_IAC_WRITE		BIT(18)
+-#define PHY_IAC_START		BIT(16)
+-#define PHY_IAC_ADDR_SHIFT	20
+-#define PHY_IAC_REG_SHIFT	25
++#define PHY_IAC_REG_MASK	GENMASK(29, 25)
++#define PHY_IAC_REG(x)		FIELD_PREP(PHY_IAC_REG_MASK, (x))
++#define PHY_IAC_ADDR_MASK	GENMASK(24, 20)
++#define PHY_IAC_ADDR(x)		FIELD_PREP(PHY_IAC_ADDR_MASK, (x))
++#define PHY_IAC_CMD_MASK	GENMASK(19, 18)
++#define PHY_IAC_CMD_WRITE	FIELD_PREP(PHY_IAC_CMD_MASK, 1)
++#define PHY_IAC_CMD_C22_READ	FIELD_PREP(PHY_IAC_CMD_MASK, 2)
++#define PHY_IAC_START_MASK	GENMASK(17, 16)
++#define PHY_IAC_START_C22	FIELD_PREP(PHY_IAC_START_MASK, 1)
++#define PHY_IAC_DATA_MASK	GENMASK(15, 0)
++#define PHY_IAC_DATA(x)		FIELD_PREP(PHY_IAC_DATA_MASK, (x))
+ #define PHY_IAC_TIMEOUT		HZ
+ 
+ #define MTK_MAC_MISC		0x1000c
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index a46284ca51720..17fe058096533 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -148,8 +148,12 @@ static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
+ 	if (!refcount_dec_and_test(&ent->refcnt))
+ 		return;
+ 
+-	if (ent->idx >= 0)
+-		cmd_free_index(ent->cmd, ent->idx);
++	if (ent->idx >= 0) {
++		struct mlx5_cmd *cmd = ent->cmd;
++
++		cmd_free_index(cmd, ent->idx);
++		up(ent->page_queue ? &cmd->pages_sem : &cmd->sem);
++	}
+ 
+ 	cmd_free_ent(ent);
+ }
+@@ -900,25 +904,6 @@ static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode)
+ 	return cmd->allowed_opcode == opcode;
+ }
+ 
+-static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
+-{
+-	unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
+-	int idx;
+-
+-retry:
+-	idx = cmd_alloc_index(cmd);
+-	if (idx < 0 && time_before(jiffies, alloc_end)) {
+-		/* Index allocation can fail on heavy load of commands. This is a temporary
+-		 * situation as the current command already holds the semaphore, meaning that
+-		 * another command completion is being handled and it is expected to release
+-		 * the entry index soon.
+-		 */
+-		cpu_relax();
+-		goto retry;
+-	}
+-	return idx;
+-}
+-
+ bool mlx5_cmd_is_down(struct mlx5_core_dev *dev)
+ {
+ 	return pci_channel_offline(dev->pdev) ||
+@@ -946,7 +931,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 	sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
+ 	down(sem);
+ 	if (!ent->page_queue) {
+-		alloc_ret = cmd_alloc_index_retry(cmd);
++		alloc_ret = cmd_alloc_index(cmd);
+ 		if (alloc_ret < 0) {
+ 			mlx5_core_err_rl(dev, "failed to allocate command entry\n");
+ 			if (ent->callback) {
+@@ -1602,8 +1587,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ 	vector = vec & 0xffffffff;
+ 	for (i = 0; i < (1 << cmd->log_sz); i++) {
+ 		if (test_bit(i, &vector)) {
+-			struct semaphore *sem;
+-
+ 			ent = cmd->ent_arr[i];
+ 
+ 			/* if we already completed the command, ignore it */
+@@ -1626,10 +1609,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ 			    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
+ 				cmd_ent_put(ent);
+ 
+-			if (ent->page_queue)
+-				sem = &cmd->pages_sem;
+-			else
+-				sem = &cmd->sem;
+ 			ent->ts2 = ktime_get_ns();
+ 			memcpy(ent->out->first.data, ent->lay->out, sizeof(ent->lay->out));
+ 			dump_command(dev, ent, 0);
+@@ -1683,7 +1662,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ 				 */
+ 				complete(&ent->done);
+ 			}
+-			up(sem);
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index a5e4509732257..bc5f1dcb75e1f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -1,6 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+ /* Copyright (c) 2018 Mellanox Technologies. */
+ 
++#include <net/inet_ecn.h>
+ #include <net/vxlan.h>
+ #include <net/gre.h>
+ #include <net/geneve.h>
+@@ -235,7 +236,7 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
+ 	int err;
+ 
+ 	/* add the IP fields */
+-	attr.fl.fl4.flowi4_tos = tun_key->tos;
++	attr.fl.fl4.flowi4_tos = tun_key->tos & ~INET_ECN_MASK;
+ 	attr.fl.fl4.daddr = tun_key->u.ipv4.dst;
+ 	attr.fl.fl4.saddr = tun_key->u.ipv4.src;
+ 	attr.ttl = tun_key->ttl;
+@@ -350,7 +351,7 @@ int mlx5e_tc_tun_update_header_ipv4(struct mlx5e_priv *priv,
+ 	int err;
+ 
+ 	/* add the IP fields */
+-	attr.fl.fl4.flowi4_tos = tun_key->tos;
++	attr.fl.fl4.flowi4_tos = tun_key->tos & ~INET_ECN_MASK;
+ 	attr.fl.fl4.daddr = tun_key->u.ipv4.dst;
+ 	attr.fl.fl4.saddr = tun_key->u.ipv4.src;
+ 	attr.ttl = tun_key->ttl;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+index 042b1abe1437f..62cbd15ffc341 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+@@ -1579,6 +1579,8 @@ mlx5e_init_fib_work_ipv4(struct mlx5e_priv *priv,
+ 	struct net_device *fib_dev;
+ 
+ 	fen_info = container_of(info, struct fib_entry_notifier_info, info);
++	if (fen_info->fi->nh)
++		return NULL;
+ 	fib_dev = fib_info_nh(fen_info->fi, 0)->fib_nh_dev;
+ 	if (!fib_dev || fib_dev->netdev_ops != &mlx5e_netdev_ops ||
+ 	    fen_info->dst_len != 32)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
+index 7b562d2c8a196..279cd8f4e79f7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
+@@ -11,13 +11,13 @@ static int mlx5e_xsk_map_pool(struct mlx5e_priv *priv,
+ {
+ 	struct device *dev = mlx5_core_dma_dev(priv->mdev);
+ 
+-	return xsk_pool_dma_map(pool, dev, 0);
++	return xsk_pool_dma_map(pool, dev, DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ 
+ static void mlx5e_xsk_unmap_pool(struct mlx5e_priv *priv,
+ 				 struct xsk_buff_pool *pool)
+ {
+-	return xsk_pool_dma_unmap(pool, 0);
++	return xsk_pool_dma_unmap(pool, DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ 
+ static int mlx5e_xsk_get_pools(struct mlx5e_xsk *xsk)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 41379844eee1f..d92b82cdfd4e1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4789,15 +4789,22 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	}
+ 
+ 	if (mlx5_vxlan_allowed(mdev->vxlan) || mlx5_geneve_tx_allowed(mdev)) {
+-		netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL;
+-		netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL;
+-		netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL;
++		netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL |
++					   NETIF_F_GSO_UDP_TUNNEL_CSUM;
++		netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL |
++					   NETIF_F_GSO_UDP_TUNNEL_CSUM;
++		netdev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
++		netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL |
++					 NETIF_F_GSO_UDP_TUNNEL_CSUM;
+ 	}
+ 
+ 	if (mlx5e_tunnel_proto_supported_tx(mdev, IPPROTO_GRE)) {
+-		netdev->hw_features     |= NETIF_F_GSO_GRE;
+-		netdev->hw_enc_features |= NETIF_F_GSO_GRE;
+-		netdev->gso_partial_features |= NETIF_F_GSO_GRE;
++		netdev->hw_features     |= NETIF_F_GSO_GRE |
++					   NETIF_F_GSO_GRE_CSUM;
++		netdev->hw_enc_features |= NETIF_F_GSO_GRE |
++					   NETIF_F_GSO_GRE_CSUM;
++		netdev->gso_partial_features |= NETIF_F_GSO_GRE |
++						NETIF_F_GSO_GRE_CSUM;
+ 	}
+ 
+ 	if (mlx5e_tunnel_proto_supported_tx(mdev, IPPROTO_IPIP)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 48895d79796a8..c0df4b1115b72 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -50,6 +50,7 @@
+ #include "fs_core.h"
+ #include "lib/mlx5.h"
+ #include "lib/devcom.h"
++#include "lib/vxlan.h"
+ #define CREATE_TRACE_POINTS
+ #include "diag/en_rep_tracepoint.h"
+ #include "en_accel/ipsec.h"
+@@ -1027,6 +1028,7 @@ static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv)
+ 	rtnl_lock();
+ 	if (netif_running(netdev))
+ 		mlx5e_open(netdev);
++	udp_tunnel_nic_reset_ntf(priv->netdev);
+ 	netif_device_attach(netdev);
+ 	rtnl_unlock();
+ }
+@@ -1048,6 +1050,7 @@ static void mlx5e_uplink_rep_disable(struct mlx5e_priv *priv)
+ 	mlx5_notifier_unregister(mdev, &priv->events_nb);
+ 	mlx5e_rep_tc_disable(priv);
+ 	mlx5_lag_remove_netdev(mdev, priv->netdev);
++	mlx5_vxlan_reset_to_default(mdev->vxlan);
+ }
+ 
+ static MLX5E_DEFINE_STATS_GRP(sw_rep, 0);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 793511d5ee4cd..dfc6604b9538b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -278,8 +278,8 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
+ 	if (unlikely(!dma_info->page))
+ 		return -ENOMEM;
+ 
+-	dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
+-				      PAGE_SIZE, rq->buff.map_dir);
++	dma_info->addr = dma_map_page_attrs(rq->pdev, dma_info->page, 0, PAGE_SIZE,
++					    rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC);
+ 	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
+ 		page_pool_recycle_direct(rq->page_pool, dma_info->page);
+ 		dma_info->page = NULL;
+@@ -300,7 +300,8 @@ static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
+ 
+ void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
+ {
+-	dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir);
++	dma_unmap_page_attrs(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir,
++			     DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ 
+ void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 5e454a14428f2..9b3adaccc9beb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1949,6 +1949,111 @@ u8 mlx5e_tc_get_ip_version(struct mlx5_flow_spec *spec, bool outer)
+ 	return ip_version;
+ }
+ 
++/* Tunnel device follows RFC 6040, see include/net/inet_ecn.h.
++ * And changes inner ip_ecn depending on inner and outer ip_ecn as follows:
++ *      +---------+----------------------------------------+
++ *      |Arriving |         Arriving Outer Header          |
++ *      |   Inner +---------+---------+---------+----------+
++ *      |  Header | Not-ECT | ECT(0)  | ECT(1)  |   CE     |
++ *      +---------+---------+---------+---------+----------+
++ *      | Not-ECT | Not-ECT | Not-ECT | Not-ECT | <drop>   |
++ *      |  ECT(0) |  ECT(0) | ECT(0)  | ECT(1)  |   CE*    |
++ *      |  ECT(1) |  ECT(1) | ECT(1)  | ECT(1)* |   CE*    |
++ *      |    CE   |   CE    |  CE     | CE      |   CE     |
++ *      +---------+---------+---------+---------+----------+
++ *
++ * Tc matches on inner after decapsulation on tunnel device, but hw offload matches
++ * the inner ip_ecn value before hardware decap action.
++ *
++ * Cells marked are changed from original inner packet ip_ecn value during decap, and
++ * so matching those values on inner ip_ecn before decap will fail.
++ *
++ * The following helper allows offload when inner ip_ecn won't be changed by outer ip_ecn,
++ * except for the outer ip_ecn = CE, where in all cases inner ip_ecn will be changed to CE,
++ * and such we can drop the inner ip_ecn=CE match.
++ */
++
++static int mlx5e_tc_verify_tunnel_ecn(struct mlx5e_priv *priv,
++				      struct flow_cls_offload *f,
++				      bool *match_inner_ecn)
++{
++	u8 outer_ecn_mask = 0, outer_ecn_key = 0, inner_ecn_mask = 0, inner_ecn_key = 0;
++	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
++	struct netlink_ext_ack *extack = f->common.extack;
++	struct flow_match_ip match;
++
++	*match_inner_ecn = true;
++
++	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IP)) {
++		flow_rule_match_enc_ip(rule, &match);
++		outer_ecn_key = match.key->tos & INET_ECN_MASK;
++		outer_ecn_mask = match.mask->tos & INET_ECN_MASK;
++	}
++
++	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) {
++		flow_rule_match_ip(rule, &match);
++		inner_ecn_key = match.key->tos & INET_ECN_MASK;
++		inner_ecn_mask = match.mask->tos & INET_ECN_MASK;
++	}
++
++	if (outer_ecn_mask != 0 && outer_ecn_mask != INET_ECN_MASK) {
++		NL_SET_ERR_MSG_MOD(extack, "Partial match on enc_tos ecn bits isn't supported");
++		netdev_warn(priv->netdev, "Partial match on enc_tos ecn bits isn't supported");
++		return -EOPNOTSUPP;
++	}
++
++	if (!outer_ecn_mask) {
++		if (!inner_ecn_mask)
++			return 0;
++
++		NL_SET_ERR_MSG_MOD(extack,
++				   "Matching on tos ecn bits without also matching enc_tos ecn bits isn't supported");
++		netdev_warn(priv->netdev,
++			    "Matching on tos ecn bits without also matching enc_tos ecn bits isn't supported");
++		return -EOPNOTSUPP;
++	}
++
++	if (inner_ecn_mask && inner_ecn_mask != INET_ECN_MASK) {
++		NL_SET_ERR_MSG_MOD(extack,
++				   "Partial match on tos ecn bits with match on enc_tos ecn bits isn't supported");
++		netdev_warn(priv->netdev,
++			    "Partial match on tos ecn bits with match on enc_tos ecn bits isn't supported");
++		return -EOPNOTSUPP;
++	}
++
++	if (!inner_ecn_mask)
++		return 0;
++
++	/* Both inner and outer have full mask on ecn */
++
++	if (outer_ecn_key == INET_ECN_ECT_1) {
++		/* inner ecn might change by DECAP action */
++
++		NL_SET_ERR_MSG_MOD(extack, "Match on enc_tos ecn = ECT(1) isn't supported");
++		netdev_warn(priv->netdev, "Match on enc_tos ecn = ECT(1) isn't supported");
++		return -EOPNOTSUPP;
++	}
++
++	if (outer_ecn_key != INET_ECN_CE)
++		return 0;
++
++	if (inner_ecn_key != INET_ECN_CE) {
++		/* Can't happen in software, as packet ecn will be changed to CE after decap */
++		NL_SET_ERR_MSG_MOD(extack,
++				   "Match on tos enc_tos ecn = CE while match on tos ecn != CE isn't supported");
++		netdev_warn(priv->netdev,
++			    "Match on tos enc_tos ecn = CE while match on tos ecn != CE isn't supported");
++		return -EOPNOTSUPP;
++	}
++
++	/* outer ecn = CE, inner ecn = CE, as decap will change inner ecn to CE in anycase,
++	 * drop match on inner ecn
++	 */
++	*match_inner_ecn = false;
++
++	return 0;
++}
++
+ static int parse_tunnel_attr(struct mlx5e_priv *priv,
+ 			     struct mlx5e_tc_flow *flow,
+ 			     struct mlx5_flow_spec *spec,
+@@ -2144,6 +2249,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ 	struct flow_dissector *dissector = rule->match.dissector;
+ 	enum fs_flow_table_type fs_type;
++	bool match_inner_ecn = true;
+ 	u16 addr_type = 0;
+ 	u8 ip_proto = 0;
+ 	u8 *match_level;
+@@ -2197,6 +2303,10 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 			headers_c = get_match_inner_headers_criteria(spec);
+ 			headers_v = get_match_inner_headers_value(spec);
+ 		}
++
++		err = mlx5e_tc_verify_tunnel_ecn(priv, f, &match_inner_ecn);
++		if (err)
++			return err;
+ 	}
+ 
+ 	err = mlx5e_flower_parse_meta(filter_dev, f);
+@@ -2420,10 +2530,12 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 		struct flow_match_ip match;
+ 
+ 		flow_rule_match_ip(rule, &match);
+-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_ecn,
+-			 match.mask->tos & 0x3);
+-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn,
+-			 match.key->tos & 0x3);
++		if (match_inner_ecn) {
++			MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_ecn,
++				 match.mask->tos & 0x3);
++			MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn,
++				 match.key->tos & 0x3);
++		}
+ 
+ 		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_dscp,
+ 			 match.mask->tos >> 2);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+index df277a6cddc0b..0c4c743ca31e1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+@@ -431,7 +431,7 @@ int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
+ 	int err = 0;
+ 
+ 	if (!mlx5_esw_allowed(esw))
+-		return -EPERM;
++		return vlan ? -EPERM : 0;
+ 
+ 	if (vlan || qos)
+ 		set_flags = SET_VLAN_STRIP | SET_VLAN_INSERT;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 32bc08a399256..ccb66428aeb5b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -295,26 +295,28 @@ esw_setup_chain_src_port_rewrite(struct mlx5_flow_destination *dest,
+ 				 int *i)
+ {
+ 	struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;
+-	int j, err;
++	int err;
+ 
+ 	if (!(attr->flags & MLX5_ESW_ATTR_FLAG_SRC_REWRITE))
+ 		return -EOPNOTSUPP;
+ 
+-	for (j = esw_attr->split_count; j < esw_attr->out_count; j++, (*i)++) {
+-		err = esw_setup_chain_dest(dest, flow_act, chains, attr->dest_chain, 1, 0, *i);
+-		if (err)
+-			goto err_setup_chain;
++	/* flow steering cannot handle more than one dest with the same ft
++	 * in a single flow
++	 */
++	if (esw_attr->out_count - esw_attr->split_count > 1)
++		return -EOPNOTSUPP;
+ 
+-		if (esw_attr->dests[j].pkt_reformat) {
+-			flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+-			flow_act->pkt_reformat = esw_attr->dests[j].pkt_reformat;
+-		}
++	err = esw_setup_chain_dest(dest, flow_act, chains, attr->dest_chain, 1, 0, *i);
++	if (err)
++		return err;
++
++	if (esw_attr->dests[esw_attr->split_count].pkt_reformat) {
++		flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
++		flow_act->pkt_reformat = esw_attr->dests[esw_attr->split_count].pkt_reformat;
+ 	}
+-	return 0;
++	(*i)++;
+ 
+-err_setup_chain:
+-	esw_put_dest_tables_loop(esw, attr, esw_attr->split_count, j);
+-	return err;
++	return 0;
+ }
+ 
+ static void esw_cleanup_chain_src_port_rewrite(struct mlx5_eswitch *esw,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
+index bf4d3cbefa633..1ca01a5b6cdd8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
+@@ -268,10 +268,8 @@ static int mlx5_lag_fib_event(struct notifier_block *nb,
+ 		fen_info = container_of(info, struct fib_entry_notifier_info,
+ 					info);
+ 		fi = fen_info->fi;
+-		if (fi->nh) {
+-			NL_SET_ERR_MSG_MOD(info->extack, "IPv4 route with nexthop objects is not supported");
+-			return notifier_from_errno(-EINVAL);
+-		}
++		if (fi->nh)
++			return NOTIFY_DONE;
+ 		fib_dev = fib_info_nh(fen_info->fi, 0)->fib_nh_dev;
+ 		if (fib_dev != ldev->pf[MLX5_LAG_P1].netdev &&
+ 		    fib_dev != ldev->pf[MLX5_LAG_P2].netdev) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 65083496f9131..6e381111f1d2f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -98,6 +98,8 @@ enum {
+ 	MLX5_ATOMIC_REQ_MODE_HOST_ENDIANNESS = 0x1,
+ };
+ 
++#define LOG_MAX_SUPPORTED_QPS 0xff
++
+ static struct mlx5_profile profile[] = {
+ 	[0] = {
+ 		.mask           = 0,
+@@ -109,7 +111,7 @@ static struct mlx5_profile profile[] = {
+ 	[2] = {
+ 		.mask		= MLX5_PROF_MASK_QP_SIZE |
+ 				  MLX5_PROF_MASK_MR_CACHE,
+-		.log_max_qp	= 18,
++		.log_max_qp	= LOG_MAX_SUPPORTED_QPS,
+ 		.mr_cache[0]	= {
+ 			.size	= 500,
+ 			.limit	= 250
+@@ -507,7 +509,9 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
+ 		 to_fw_pkey_sz(dev, 128));
+ 
+ 	/* Check log_max_qp from HCA caps to set in current profile */
+-	if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) {
++	if (prof->log_max_qp == LOG_MAX_SUPPORTED_QPS) {
++		prof->log_max_qp = MLX5_CAP_GEN_MAX(dev, log_max_qp);
++	} else if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) {
+ 		mlx5_core_warn(dev, "log_max_qp value in current profile is %d, changing it to HCA capability limit (%d)\n",
+ 			       prof->log_max_qp,
+ 			       MLX5_CAP_GEN_MAX(dev, log_max_qp));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
+index f37db7cc32a65..7da012ff0d419 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
+@@ -30,10 +30,7 @@ bool mlx5_sf_dev_allocated(const struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_sf_dev_table *table = dev->priv.sf_dev_table;
+ 
+-	if (!mlx5_sf_dev_supported(dev))
+-		return false;
+-
+-	return !xa_empty(&table->devices);
++	return table && !xa_empty(&table->devices);
+ }
+ 
+ static ssize_t sfnum_show(struct device *dev, struct device_attribute *attr, char *buf)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
+index 793365242e852..3d0cdc36a91ab 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
+@@ -872,13 +872,12 @@ uninit_nic_rx:
+ 	return ret;
+ }
+ 
+-static int dr_matcher_init(struct mlx5dr_matcher *matcher,
+-			   struct mlx5dr_match_parameters *mask)
++static int dr_matcher_copy_param(struct mlx5dr_matcher *matcher,
++				 struct mlx5dr_match_parameters *mask)
+ {
++	struct mlx5dr_domain *dmn = matcher->tbl->dmn;
+ 	struct mlx5dr_match_parameters consumed_mask;
+-	struct mlx5dr_table *tbl = matcher->tbl;
+-	struct mlx5dr_domain *dmn = tbl->dmn;
+-	int i, ret;
++	int i, ret = 0;
+ 
+ 	if (matcher->match_criteria >= DR_MATCHER_CRITERIA_MAX) {
+ 		mlx5dr_err(dmn, "Invalid match criteria attribute\n");
+@@ -898,10 +897,36 @@ static int dr_matcher_init(struct mlx5dr_matcher *matcher,
+ 		consumed_mask.match_sz = mask->match_sz;
+ 		memcpy(consumed_mask.match_buf, mask->match_buf, mask->match_sz);
+ 		mlx5dr_ste_copy_param(matcher->match_criteria,
+-				      &matcher->mask, &consumed_mask,
+-				      true);
++				      &matcher->mask, &consumed_mask, true);
++
++		/* Check that all mask data was consumed */
++		for (i = 0; i < consumed_mask.match_sz; i++) {
++			if (!((u8 *)consumed_mask.match_buf)[i])
++				continue;
++
++			mlx5dr_dbg(dmn,
++				   "Match param mask contains unsupported parameters\n");
++			ret = -EOPNOTSUPP;
++			break;
++		}
++
++		kfree(consumed_mask.match_buf);
+ 	}
+ 
++	return ret;
++}
++
++static int dr_matcher_init(struct mlx5dr_matcher *matcher,
++			   struct mlx5dr_match_parameters *mask)
++{
++	struct mlx5dr_table *tbl = matcher->tbl;
++	struct mlx5dr_domain *dmn = tbl->dmn;
++	int ret;
++
++	ret = dr_matcher_copy_param(matcher, mask);
++	if (ret)
++		return ret;
++
+ 	switch (dmn->type) {
+ 	case MLX5DR_DOMAIN_TYPE_NIC_RX:
+ 		matcher->rx.nic_tbl = &tbl->rx;
+@@ -919,22 +944,8 @@ static int dr_matcher_init(struct mlx5dr_matcher *matcher,
+ 	default:
+ 		WARN_ON(true);
+ 		ret = -EINVAL;
+-		goto free_consumed_mask;
+-	}
+-
+-	/* Check that all mask data was consumed */
+-	for (i = 0; i < consumed_mask.match_sz; i++) {
+-		if (!((u8 *)consumed_mask.match_buf)[i])
+-			continue;
+-
+-		mlx5dr_dbg(dmn, "Match param mask contains unsupported parameters\n");
+-		ret = -EOPNOTSUPP;
+-		goto free_consumed_mask;
+ 	}
+ 
+-	ret =  0;
+-free_consumed_mask:
+-	kfree(consumed_mask.match_buf);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/cmd.h b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
+index 392ce3cb27f72..51b260d54237e 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/cmd.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
+@@ -935,6 +935,18 @@ static inline int mlxsw_cmd_sw2hw_rdq(struct mlxsw_core *mlxsw_core,
+  */
+ MLXSW_ITEM32(cmd_mbox, sw2hw_dq, cq, 0x00, 24, 8);
+ 
++enum mlxsw_cmd_mbox_sw2hw_dq_sdq_lp {
++	MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_WQE,
++	MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_IGNORE_WQE,
++};
++
++/* cmd_mbox_sw2hw_dq_sdq_lp
++ * SDQ local Processing
++ * 0: local processing by wqe.lp
++ * 1: local processing (ignoring wqe.lp)
++ */
++MLXSW_ITEM32(cmd_mbox, sw2hw_dq, sdq_lp, 0x00, 23, 1);
++
+ /* cmd_mbox_sw2hw_dq_sdq_tclass
+  * SDQ: CPU Egress TClass
+  * RDQ: Reserved
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+index a15c95a10bae4..f91dde4df152b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+@@ -285,6 +285,7 @@ static int mlxsw_pci_sdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
+ 			      struct mlxsw_pci_queue *q)
+ {
+ 	int tclass;
++	int lp;
+ 	int i;
+ 	int err;
+ 
+@@ -292,9 +293,12 @@ static int mlxsw_pci_sdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
+ 	q->consumer_counter = 0;
+ 	tclass = q->num == MLXSW_PCI_SDQ_EMAD_INDEX ? MLXSW_PCI_SDQ_EMAD_TC :
+ 						      MLXSW_PCI_SDQ_CTL_TC;
++	lp = q->num == MLXSW_PCI_SDQ_EMAD_INDEX ? MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_IGNORE_WQE :
++						  MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_WQE;
+ 
+ 	/* Set CQ of same number of this SDQ. */
+ 	mlxsw_cmd_mbox_sw2hw_dq_cq_set(mbox, q->num);
++	mlxsw_cmd_mbox_sw2hw_dq_sdq_lp_set(mbox, lp);
+ 	mlxsw_cmd_mbox_sw2hw_dq_sdq_tclass_set(mbox, tclass);
+ 	mlxsw_cmd_mbox_sw2hw_dq_log2_dq_sz_set(mbox, 3); /* 8 pages */
+ 	for (i = 0; i < MLXSW_PCI_AQ_PAGES; i++) {
+@@ -1678,7 +1682,7 @@ static int mlxsw_pci_skb_transmit(void *bus_priv, struct sk_buff *skb,
+ 
+ 	wqe = elem_info->elem;
+ 	mlxsw_pci_wqe_c_set(wqe, 1); /* always report completion */
+-	mlxsw_pci_wqe_lp_set(wqe, !!tx_info->is_emad);
++	mlxsw_pci_wqe_lp_set(wqe, 0);
+ 	mlxsw_pci_wqe_type_set(wqe, MLXSW_PCI_WQE_TYPE_ETHERNET);
+ 
+ 	err = mlxsw_pci_wqe_frag_map(mlxsw_pci, wqe, 0, skb->data,
+@@ -1973,6 +1977,7 @@ int mlxsw_pci_driver_register(struct pci_driver *pci_driver)
+ {
+ 	pci_driver->probe = mlxsw_pci_probe;
+ 	pci_driver->remove = mlxsw_pci_remove;
++	pci_driver->shutdown = mlxsw_pci_remove;
+ 	return pci_register_driver(pci_driver);
+ }
+ EXPORT_SYMBOL(mlxsw_pci_driver_register);
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 1e4ad953cffbc..294bb4eb3833f 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -692,7 +692,10 @@ void ocelot_phylink_mac_link_up(struct ocelot *ocelot, int port,
+ 
+ 	ocelot_write_rix(ocelot, 0, ANA_POL_FLOWC, port);
+ 
+-	ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, tx_pause);
++	/* Don't attempt to send PAUSE frames on the NPI port, it's broken */
++	if (port != ocelot->npi)
++		ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA,
++				    tx_pause);
+ 
+ 	/* Undo the effects of ocelot_phylink_mac_link_down:
+ 	 * enable MAC module
+@@ -1688,8 +1691,7 @@ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
+ }
+ EXPORT_SYMBOL(ocelot_get_ts_info);
+ 
+-static u32 ocelot_get_bond_mask(struct ocelot *ocelot, struct net_device *bond,
+-				bool only_active_ports)
++static u32 ocelot_get_bond_mask(struct ocelot *ocelot, struct net_device *bond)
+ {
+ 	u32 mask = 0;
+ 	int port;
+@@ -1700,12 +1702,8 @@ static u32 ocelot_get_bond_mask(struct ocelot *ocelot, struct net_device *bond,
+ 		if (!ocelot_port)
+ 			continue;
+ 
+-		if (ocelot_port->bond == bond) {
+-			if (only_active_ports && !ocelot_port->lag_tx_active)
+-				continue;
+-
++		if (ocelot_port->bond == bond)
+ 			mask |= BIT(port);
+-		}
+ 	}
+ 
+ 	return mask;
+@@ -1792,10 +1790,8 @@ void ocelot_apply_bridge_fwd_mask(struct ocelot *ocelot)
+ 			mask = ocelot_get_bridge_fwd_mask(ocelot, port, bridge);
+ 			mask |= cpu_fwd_mask;
+ 			mask &= ~BIT(port);
+-			if (bond) {
+-				mask &= ~ocelot_get_bond_mask(ocelot, bond,
+-							      false);
+-			}
++			if (bond)
++				mask &= ~ocelot_get_bond_mask(ocelot, bond);
+ 		} else {
+ 			/* Standalone ports forward only to DSA tag_8021q CPU
+ 			 * ports (if those exist), or to the hardware CPU port
+@@ -2112,13 +2108,17 @@ static void ocelot_set_aggr_pgids(struct ocelot *ocelot)
+ 		if (!bond || (visited & BIT(lag)))
+ 			continue;
+ 
+-		bond_mask = ocelot_get_bond_mask(ocelot, bond, true);
++		bond_mask = ocelot_get_bond_mask(ocelot, bond);
+ 
+ 		for_each_set_bit(port, &bond_mask, ocelot->num_phys_ports) {
++			struct ocelot_port *ocelot_port = ocelot->ports[port];
++
+ 			// Destination mask
+ 			ocelot_write_rix(ocelot, bond_mask,
+ 					 ANA_PGID_PGID, port);
+-			aggr_idx[num_active_ports++] = port;
++
++			if (ocelot_port->lag_tx_active)
++				aggr_idx[num_active_ports++] = port;
+ 		}
+ 
+ 		for_each_aggr_pgid(ocelot, i) {
+@@ -2167,8 +2167,7 @@ static void ocelot_setup_logical_port_ids(struct ocelot *ocelot)
+ 
+ 		bond = ocelot_port->bond;
+ 		if (bond) {
+-			int lag = __ffs(ocelot_get_bond_mask(ocelot, bond,
+-							     false));
++			int lag = __ffs(ocelot_get_bond_mask(ocelot, bond));
+ 
+ 			ocelot_rmw_gix(ocelot,
+ 				       ANA_PORT_PORT_CFG_PORTID_VAL(lag),
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index 769a8159373e0..738dd2be79dcf 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -521,13 +521,6 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
+ 			return -EOPNOTSUPP;
+ 		}
+ 
+-		if (filter->block_id == VCAP_IS1 &&
+-		    !is_zero_ether_addr(match.mask->dst)) {
+-			NL_SET_ERR_MSG_MOD(extack,
+-					   "Key type S1_NORMAL cannot match on destination MAC");
+-			return -EOPNOTSUPP;
+-		}
+-
+ 		/* The hw support mac matches only for MAC_ETYPE key,
+ 		 * therefore if other matches(port, tcp flags, etc) are added
+ 		 * then just bail out
+@@ -542,6 +535,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
+ 			return -EOPNOTSUPP;
+ 
+ 		flow_rule_match_eth_addrs(rule, &match);
++
++		if (filter->block_id == VCAP_IS1 &&
++		    !is_zero_ether_addr(match.mask->dst)) {
++			NL_SET_ERR_MSG_MOD(extack,
++					   "Key type S1_NORMAL cannot match on destination MAC");
++			return -EOPNOTSUPP;
++		}
++
+ 		filter->key_type = OCELOT_VCAP_KEY_ETYPE;
+ 		ether_addr_copy(filter->key.etype.dmac.value,
+ 				match.key->dst);
+@@ -763,13 +764,34 @@ int ocelot_cls_flower_replace(struct ocelot *ocelot, int port,
+ 	struct netlink_ext_ack *extack = f->common.extack;
+ 	struct ocelot_vcap_filter *filter;
+ 	int chain = f->common.chain_index;
+-	int ret;
++	int block_id, ret;
+ 
+ 	if (chain && !ocelot_find_vcap_filter_that_points_at(ocelot, chain)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "No default GOTO action points to this chain");
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	block_id = ocelot_chain_to_block(chain, ingress);
++	if (block_id < 0) {
++		NL_SET_ERR_MSG_MOD(extack, "Cannot offload to this chain");
++		return -EOPNOTSUPP;
++	}
++
++	filter = ocelot_vcap_block_find_filter_by_id(&ocelot->block[block_id],
++						     f->cookie, true);
++	if (filter) {
++		/* Filter already exists on other ports */
++		if (!ingress) {
++			NL_SET_ERR_MSG_MOD(extack, "VCAP ES0 does not support shared filters");
++			return -EOPNOTSUPP;
++		}
++
++		filter->ingress_port_mask |= BIT(port);
++
++		return ocelot_vcap_filter_replace(ocelot, filter);
++	}
++
++	/* Filter didn't exist, create it now */
+ 	filter = ocelot_vcap_filter_create(ocelot, port, ingress, f);
+ 	if (!filter)
+ 		return -ENOMEM;
+@@ -816,6 +838,12 @@ int ocelot_cls_flower_destroy(struct ocelot *ocelot, int port,
+ 	if (filter->type == OCELOT_VCAP_FILTER_DUMMY)
+ 		return ocelot_vcap_dummy_filter_del(ocelot, filter);
+ 
++	if (ingress) {
++		filter->ingress_port_mask &= ~BIT(port);
++		if (filter->ingress_port_mask)
++			return ocelot_vcap_filter_replace(ocelot, filter);
++	}
++
+ 	return ocelot_vcap_filter_del(ocelot, filter);
+ }
+ EXPORT_SYMBOL_GPL(ocelot_cls_flower_destroy);
+diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
+index eaeba60b1bba5..3cf998813b837 100644
+--- a/drivers/net/ethernet/mscc/ocelot_net.c
++++ b/drivers/net/ethernet/mscc/ocelot_net.c
+@@ -1168,7 +1168,7 @@ static int ocelot_netdevice_bridge_join(struct net_device *dev,
+ 	ocelot_port_bridge_join(ocelot, port, bridge);
+ 
+ 	err = switchdev_bridge_port_offload(brport_dev, dev, priv,
+-					    &ocelot_netdevice_nb,
++					    &ocelot_switchdev_nb,
+ 					    &ocelot_switchdev_blocking_nb,
+ 					    false, extack);
+ 	if (err)
+@@ -1182,7 +1182,7 @@ static int ocelot_netdevice_bridge_join(struct net_device *dev,
+ 
+ err_switchdev_sync:
+ 	switchdev_bridge_port_unoffload(brport_dev, priv,
+-					&ocelot_netdevice_nb,
++					&ocelot_switchdev_nb,
+ 					&ocelot_switchdev_blocking_nb);
+ err_switchdev_offload:
+ 	ocelot_port_bridge_leave(ocelot, port, bridge);
+@@ -1195,7 +1195,7 @@ static void ocelot_netdevice_pre_bridge_leave(struct net_device *dev,
+ 	struct ocelot_port_private *priv = netdev_priv(dev);
+ 
+ 	switchdev_bridge_port_unoffload(brport_dev, priv,
+-					&ocelot_netdevice_nb,
++					&ocelot_switchdev_nb,
+ 					&ocelot_switchdev_blocking_nb);
+ }
+ 
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index b4c597f4040c8..151cce2fe36d5 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -30,8 +30,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/sys_soc.h>
+ #include <linux/reset.h>
+-
+-#include <asm/div64.h>
++#include <linux/math64.h>
+ 
+ #include "ravb.h"
+ 
+@@ -2488,8 +2487,7 @@ static int ravb_set_gti(struct net_device *ndev)
+ 	if (!rate)
+ 		return -EINVAL;
+ 
+-	inc = 1000000000ULL << 20;
+-	do_div(inc, rate);
++	inc = div64_ul(1000000000ULL << 20, rate);
+ 
+ 	if (inc < GTI_TIV_MIN || inc > GTI_TIV_MAX) {
+ 		dev_err(dev, "gti.tiv increment 0x%llx is outside the range 0x%x - 0x%x\n",
+diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+index 3e1ca7a8d0295..bc70c6abd6a5b 100644
+--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
++++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
+@@ -2783,7 +2783,8 @@ static void ofdpa_fib4_abort(struct rocker *rocker)
+ 		if (!ofdpa_port)
+ 			continue;
+ 		nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
+-		ofdpa_flow_tbl_del(ofdpa_port, OFDPA_OP_FLAG_REMOVE,
++		ofdpa_flow_tbl_del(ofdpa_port,
++				   OFDPA_OP_FLAG_REMOVE | OFDPA_OP_FLAG_NOWAIT,
+ 				   flow_entry);
+ 	}
+ 	spin_unlock_irqrestore(&ofdpa->flow_tbl_lock, flags);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+index 5c74b6279d690..6b1d9e8879f46 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+@@ -113,8 +113,10 @@ static void rgmii_updatel(struct qcom_ethqos *ethqos,
+ 	rgmii_writel(ethqos, temp, offset);
+ }
+ 
+-static void rgmii_dump(struct qcom_ethqos *ethqos)
++static void rgmii_dump(void *priv)
+ {
++	struct qcom_ethqos *ethqos = priv;
++
+ 	dev_dbg(&ethqos->pdev->dev, "Rgmii register dump\n");
+ 	dev_dbg(&ethqos->pdev->dev, "RGMII_IO_MACRO_CONFIG: %x\n",
+ 		rgmii_readl(ethqos, RGMII_IO_MACRO_CONFIG));
+@@ -499,6 +501,7 @@ static int qcom_ethqos_probe(struct platform_device *pdev)
+ 
+ 	plat_dat->bsp_priv = ethqos;
+ 	plat_dat->fix_mac_speed = ethqos_fix_mac_speed;
++	plat_dat->dump_debug_regs = rgmii_dump;
+ 	plat_dat->has_gmac4 = 1;
+ 	plat_dat->pmt = 1;
+ 	plat_dat->tso_en = of_property_read_bool(np, "snps,tso");
+@@ -507,8 +510,6 @@ static int qcom_ethqos_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_clk;
+ 
+-	rgmii_dump(ethqos);
+-
+ 	return ret;
+ 
+ err_clk:
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 8ded4be08b001..e81a79845d425 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7094,6 +7094,9 @@ int stmmac_dvr_probe(struct device *device,
+ 	stmmac_init_fs(ndev);
+ #endif
+ 
++	if (priv->plat->dump_debug_regs)
++		priv->plat->dump_debug_regs(priv->plat->bsp_priv);
++
+ 	/* Let pm_runtime_put() disable the clocks.
+ 	 * If CONFIG_PM is not enabled, the clocks will stay powered.
+ 	 */
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index 33142d505fc81..03575c0175008 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -349,7 +349,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
+ 	struct cpsw_common	*cpsw = ndev_to_cpsw(xmeta->ndev);
+ 	int			pkt_size = cpsw->rx_packet_max;
+ 	int			ret = 0, port, ch = xmeta->ch;
+-	int			headroom = CPSW_HEADROOM;
++	int			headroom = CPSW_HEADROOM_NA;
+ 	struct net_device	*ndev = xmeta->ndev;
+ 	struct cpsw_priv	*priv;
+ 	struct page_pool	*pool;
+@@ -392,7 +392,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
+ 	}
+ 
+ 	if (priv->xdp_prog) {
+-		int headroom = CPSW_HEADROOM, size = len;
++		int size = len;
+ 
+ 		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
+ 		if (status & CPDMA_RX_VLAN_ENCAP) {
+@@ -442,7 +442,7 @@ requeue:
+ 	xmeta->ndev = ndev;
+ 	xmeta->ch = ch;
+ 
+-	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
++	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA;
+ 	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
+ 				       pkt_size, 0);
+ 	if (ret < 0) {
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index 279e261e47207..bd4b1528cf992 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -283,7 +283,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
+ {
+ 	struct page *new_page, *page = token;
+ 	void *pa = page_address(page);
+-	int headroom = CPSW_HEADROOM;
++	int headroom = CPSW_HEADROOM_NA;
+ 	struct cpsw_meta_xdp *xmeta;
+ 	struct cpsw_common *cpsw;
+ 	struct net_device *ndev;
+@@ -336,7 +336,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
+ 	}
+ 
+ 	if (priv->xdp_prog) {
+-		int headroom = CPSW_HEADROOM, size = len;
++		int size = len;
+ 
+ 		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
+ 		if (status & CPDMA_RX_VLAN_ENCAP) {
+@@ -386,7 +386,7 @@ requeue:
+ 	xmeta->ndev = ndev;
+ 	xmeta->ch = ch;
+ 
+-	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
++	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA;
+ 	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
+ 				       pkt_size, 0);
+ 	if (ret < 0) {
+diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
+index ecc2a6b7e28f2..6bb5ac51d23c3 100644
+--- a/drivers/net/ethernet/ti/cpsw_priv.c
++++ b/drivers/net/ethernet/ti/cpsw_priv.c
+@@ -1120,7 +1120,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
+ 			xmeta->ndev = priv->ndev;
+ 			xmeta->ch = ch;
+ 
+-			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
++			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM_NA;
+ 			ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch,
+ 							    page, dma,
+ 							    cpsw->rx_packet_max,
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 9b068b81ae093..f12eb5beaded4 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -41,8 +41,9 @@
+ #include "xilinx_axienet.h"
+ 
+ /* Descriptors defines for Tx and Rx DMA */
+-#define TX_BD_NUM_DEFAULT		64
++#define TX_BD_NUM_DEFAULT		128
+ #define RX_BD_NUM_DEFAULT		1024
++#define TX_BD_NUM_MIN			(MAX_SKB_FRAGS + 1)
+ #define TX_BD_NUM_MAX			4096
+ #define RX_BD_NUM_MAX			4096
+ 
+@@ -496,7 +497,8 @@ static void axienet_setoptions(struct net_device *ndev, u32 options)
+ 
+ static int __axienet_device_reset(struct axienet_local *lp)
+ {
+-	u32 timeout;
++	u32 value;
++	int ret;
+ 
+ 	/* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset
+ 	 * process of Axi DMA takes a while to complete as all pending
+@@ -506,15 +508,23 @@ static int __axienet_device_reset(struct axienet_local *lp)
+ 	 * they both reset the entire DMA core, so only one needs to be used.
+ 	 */
+ 	axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, XAXIDMA_CR_RESET_MASK);
+-	timeout = DELAY_OF_ONE_MILLISEC;
+-	while (axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET) &
+-				XAXIDMA_CR_RESET_MASK) {
+-		udelay(1);
+-		if (--timeout == 0) {
+-			netdev_err(lp->ndev, "%s: DMA reset timeout!\n",
+-				   __func__);
+-			return -ETIMEDOUT;
+-		}
++	ret = read_poll_timeout(axienet_dma_in32, value,
++				!(value & XAXIDMA_CR_RESET_MASK),
++				DELAY_OF_ONE_MILLISEC, 50000, false, lp,
++				XAXIDMA_TX_CR_OFFSET);
++	if (ret) {
++		dev_err(lp->dev, "%s: DMA reset timeout!\n", __func__);
++		return ret;
++	}
++
++	/* Wait for PhyRstCmplt bit to be set, indicating the PHY reset has finished */
++	ret = read_poll_timeout(axienet_ior, value,
++				value & XAE_INT_PHYRSTCMPLT_MASK,
++				DELAY_OF_ONE_MILLISEC, 50000, false, lp,
++				XAE_IS_OFFSET);
++	if (ret) {
++		dev_err(lp->dev, "%s: timeout waiting for PhyRstCmplt\n", __func__);
++		return ret;
+ 	}
+ 
+ 	return 0;
+@@ -623,6 +633,8 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
+ 		if (nr_bds == -1 && !(status & XAXIDMA_BD_STS_COMPLETE_MASK))
+ 			break;
+ 
++		/* Ensure we see complete descriptor update */
++		dma_rmb();
+ 		phys = desc_get_phys_addr(lp, cur_p);
+ 		dma_unmap_single(ndev->dev.parent, phys,
+ 				 (cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
+@@ -631,13 +643,15 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
+ 		if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK))
+ 			dev_consume_skb_irq(cur_p->skb);
+ 
+-		cur_p->cntrl = 0;
+ 		cur_p->app0 = 0;
+ 		cur_p->app1 = 0;
+ 		cur_p->app2 = 0;
+ 		cur_p->app4 = 0;
+-		cur_p->status = 0;
+ 		cur_p->skb = NULL;
++		/* ensure our transmit path and device don't prematurely see status cleared */
++		wmb();
++		cur_p->cntrl = 0;
++		cur_p->status = 0;
+ 
+ 		if (sizep)
+ 			*sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
+@@ -646,6 +660,32 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
+ 	return i;
+ }
+ 
++/**
++ * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
++ * @lp:		Pointer to the axienet_local structure
++ * @num_frag:	The number of BDs to check for
++ *
++ * Return: 0, on success
++ *	    NETDEV_TX_BUSY, if any of the descriptors are not free
++ *
++ * This function is invoked before BDs are allocated and transmission starts.
++ * This function returns 0 if a BD or group of BDs can be allocated for
++ * transmission. If the BD or any of the BDs are not free the function
++ * returns a busy status. This is invoked from axienet_start_xmit.
++ */
++static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
++					    int num_frag)
++{
++	struct axidma_bd *cur_p;
++
++	/* Ensure we see all descriptor updates from device or TX IRQ path */
++	rmb();
++	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
++	if (cur_p->cntrl)
++		return NETDEV_TX_BUSY;
++	return 0;
++}
++
+ /**
+  * axienet_start_xmit_done - Invoked once a transmit is completed by the
+  * Axi DMA Tx channel.
+@@ -675,30 +715,8 @@ static void axienet_start_xmit_done(struct net_device *ndev)
+ 	/* Matches barrier in axienet_start_xmit */
+ 	smp_mb();
+ 
+-	netif_wake_queue(ndev);
+-}
+-
+-/**
+- * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
+- * @lp:		Pointer to the axienet_local structure
+- * @num_frag:	The number of BDs to check for
+- *
+- * Return: 0, on success
+- *	    NETDEV_TX_BUSY, if any of the descriptors are not free
+- *
+- * This function is invoked before BDs are allocated and transmission starts.
+- * This function returns 0 if a BD or group of BDs can be allocated for
+- * transmission. If the BD or any of the BDs are not free the function
+- * returns a busy status. This is invoked from axienet_start_xmit.
+- */
+-static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
+-					    int num_frag)
+-{
+-	struct axidma_bd *cur_p;
+-	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
+-	if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK)
+-		return NETDEV_TX_BUSY;
+-	return 0;
++	if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
++		netif_wake_queue(ndev);
+ }
+ 
+ /**
+@@ -730,20 +748,15 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	num_frag = skb_shinfo(skb)->nr_frags;
+ 	cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
+ 
+-	if (axienet_check_tx_bd_space(lp, num_frag)) {
+-		if (netif_queue_stopped(ndev))
+-			return NETDEV_TX_BUSY;
+-
++	if (axienet_check_tx_bd_space(lp, num_frag + 1)) {
++		/* Should not happen as last start_xmit call should have
++		 * checked for sufficient space and queue should only be
++		 * woken when sufficient space is available.
++		 */
+ 		netif_stop_queue(ndev);
+-
+-		/* Matches barrier in axienet_start_xmit_done */
+-		smp_mb();
+-
+-		/* Space might have just been freed - check again */
+-		if (axienet_check_tx_bd_space(lp, num_frag))
+-			return NETDEV_TX_BUSY;
+-
+-		netif_wake_queue(ndev);
++		if (net_ratelimit())
++			netdev_warn(ndev, "TX ring unexpectedly full\n");
++		return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+@@ -804,6 +817,18 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	if (++lp->tx_bd_tail >= lp->tx_bd_num)
+ 		lp->tx_bd_tail = 0;
+ 
++	/* Stop queue if next transmit may not have space */
++	if (axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) {
++		netif_stop_queue(ndev);
++
++		/* Matches barrier in axienet_start_xmit_done */
++		smp_mb();
++
++		/* Space might have just been freed - check again */
++		if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
++			netif_wake_queue(ndev);
++	}
++
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -834,6 +859,8 @@ static void axienet_recv(struct net_device *ndev)
+ 
+ 		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
+ 
++		/* Ensure we see complete descriptor update */
++		dma_rmb();
+ 		phys = desc_get_phys_addr(lp, cur_p);
+ 		dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
+ 				 DMA_FROM_DEVICE);
+@@ -1346,7 +1373,8 @@ static int axienet_ethtools_set_ringparam(struct net_device *ndev,
+ 	if (ering->rx_pending > RX_BD_NUM_MAX ||
+ 	    ering->rx_mini_pending ||
+ 	    ering->rx_jumbo_pending ||
+-	    ering->rx_pending > TX_BD_NUM_MAX)
++	    ering->tx_pending < TX_BD_NUM_MIN ||
++	    ering->tx_pending > TX_BD_NUM_MAX)
+ 		return -EINVAL;
+ 
+ 	if (netif_running(ndev))
+@@ -2080,6 +2108,11 @@ static int axienet_probe(struct platform_device *pdev)
+ 	lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
+ 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
+ 
++	/* Reset core now that clocks are enabled, prior to accessing MDIO */
++	ret = __axienet_device_reset(lp);
++	if (ret)
++		goto cleanup_clk;
++
+ 	lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 	if (lp->phy_node) {
+ 		ret = axienet_mdio_setup(lp);
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 03a1709934208..c8f90cb1ee8f3 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -1067,6 +1067,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
+ {
+ 	struct gsi *gsi;
+ 	u32 backlog;
++	int delta;
+ 
+ 	if (!endpoint->replenish_enabled) {
+ 		if (add_one)
+@@ -1084,10 +1085,8 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
+ 
+ try_again_later:
+ 	/* The last one didn't succeed, so fix the backlog */
+-	backlog = atomic_inc_return(&endpoint->replenish_backlog);
+-
+-	if (add_one)
+-		atomic_inc(&endpoint->replenish_backlog);
++	delta = add_one ? 2 : 1;
++	backlog = atomic_add_return(delta, &endpoint->replenish_backlog);
+ 
+ 	/* Whenever a receive buffer transaction completes we'll try to
+ 	 * replenish again.  It's unlikely, but if we fail to supply even
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 4fcfca4e17021..c10cc2cd53b66 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -189,6 +189,8 @@
+ #define MII_88E1510_GEN_CTRL_REG_1_MODE_RGMII_SGMII	0x4
+ #define MII_88E1510_GEN_CTRL_REG_1_RESET	0x8000	/* Soft reset */
+ 
++#define MII_88E1510_MSCR_2		0x15
++
+ #define MII_VCT5_TX_RX_MDI0_COUPLING	0x10
+ #define MII_VCT5_TX_RX_MDI1_COUPLING	0x11
+ #define MII_VCT5_TX_RX_MDI2_COUPLING	0x12
+@@ -1242,6 +1244,12 @@ static int m88e1118_config_init(struct phy_device *phydev)
+ 	if (err < 0)
+ 		return err;
+ 
++	if (phy_interface_is_rgmii(phydev)) {
++		err = m88e1121_config_aneg_rgmii_delays(phydev);
++		if (err < 0)
++			return err;
++	}
++
+ 	/* Adjust LED Control */
+ 	if (phydev->dev_flags & MARVELL_PHY_M1118_DNS323_LEDS)
+ 		err = phy_write(phydev, 0x10, 0x1100);
+@@ -1932,6 +1940,58 @@ static void marvell_get_stats(struct phy_device *phydev,
+ 		data[i] = marvell_get_stat(phydev, i);
+ }
+ 
++static int m88e1510_loopback(struct phy_device *phydev, bool enable)
++{
++	int err;
++
++	if (enable) {
++		u16 bmcr_ctl = 0, mscr2_ctl = 0;
++
++		if (phydev->speed == SPEED_1000)
++			bmcr_ctl = BMCR_SPEED1000;
++		else if (phydev->speed == SPEED_100)
++			bmcr_ctl = BMCR_SPEED100;
++
++		if (phydev->duplex == DUPLEX_FULL)
++			bmcr_ctl |= BMCR_FULLDPLX;
++
++		err = phy_write(phydev, MII_BMCR, bmcr_ctl);
++		if (err < 0)
++			return err;
++
++		if (phydev->speed == SPEED_1000)
++			mscr2_ctl = BMCR_SPEED1000;
++		else if (phydev->speed == SPEED_100)
++			mscr2_ctl = BMCR_SPEED100;
++
++		err = phy_modify_paged(phydev, MII_MARVELL_MSCR_PAGE,
++				       MII_88E1510_MSCR_2, BMCR_SPEED1000 |
++				       BMCR_SPEED100, mscr2_ctl);
++		if (err < 0)
++			return err;
++
++		/* Need a soft reset so the speed configuration takes effect */
++		err = genphy_soft_reset(phydev);
++		if (err < 0)
++			return err;
++
++		/* FIXME: Based on trial-and-error testing, it seems 1G needs a
++		 * delay between soft reset and loopback enablement.
++		 */
++		if (phydev->speed == SPEED_1000)
++			msleep(1000);
++
++		return phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK,
++				  BMCR_LOOPBACK);
++	} else {
++		err = phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, 0);
++		if (err < 0)
++			return err;
++
++		return phy_config_aneg(phydev);
++	}
++}
++
+ static int marvell_vct5_wait_complete(struct phy_device *phydev)
+ {
+ 	int i;
+@@ -3078,7 +3138,7 @@ static struct phy_driver marvell_drivers[] = {
+ 		.get_sset_count = marvell_get_sset_count,
+ 		.get_strings = marvell_get_strings,
+ 		.get_stats = marvell_get_stats,
+-		.set_loopback = genphy_loopback,
++		.set_loopback = m88e1510_loopback,
+ 		.get_tunable = m88e1011_get_tunable,
+ 		.set_tunable = m88e1011_set_tunable,
+ 		.cable_test_start = marvell_vct7_cable_test_start,
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index c198722e4871d..3f7b93d5c76fe 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -594,7 +594,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	mdiobus_setup_mdiodev_from_board_info(bus, mdiobus_create_device);
+ 
+ 	bus->state = MDIOBUS_REGISTERED;
+-	pr_info("%s: probed\n", bus->name);
++	dev_dbg(&bus->dev, "probed\n");
+ 	return 0;
+ 
+ error:
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 44a24b99c8943..76ef4e019ca92 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1630,8 +1630,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.config_init	= kszphy_config_init,
+ 	.config_intr	= kszphy_config_intr,
+ 	.handle_interrupt = kszphy_handle_interrupt,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8021,
+ 	.phy_id_mask	= 0x00ffffff,
+@@ -1645,8 +1645,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8031,
+ 	.phy_id_mask	= 0x00ffffff,
+@@ -1660,8 +1660,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8041,
+ 	.phy_id_mask	= MICREL_PHY_ID_MASK,
+@@ -1692,8 +1692,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ }, {
+ 	.name		= "Micrel KSZ8051",
+ 	/* PHY_BASIC_FEATURES */
+@@ -1706,8 +1706,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+ 	.match_phy_device = ksz8051_match_phy_device,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8001,
+ 	.name		= "Micrel KSZ8001 or KS8721",
+@@ -1721,8 +1721,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8081,
+ 	.name		= "Micrel KSZ8081 or KSZ8091",
+@@ -1752,8 +1752,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.config_init	= ksz8061_config_init,
+ 	.config_intr	= kszphy_config_intr,
+ 	.handle_interrupt = kszphy_handle_interrupt,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_KSZ9021,
+ 	.phy_id_mask	= 0x000ffffe,
+@@ -1768,8 +1768,8 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	.suspend	= kszphy_suspend,
++	.resume		= kszphy_resume,
+ 	.read_mmd	= genphy_read_mmd_unsupported,
+ 	.write_mmd	= genphy_write_mmd_unsupported,
+ }, {
+@@ -1787,7 +1787,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
++	.suspend	= kszphy_suspend,
+ 	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_LAN8814,
+@@ -1829,7 +1829,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
++	.suspend	= kszphy_suspend,
+ 	.resume		= kszphy_resume,
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8873MLL,
+diff --git a/drivers/net/phy/phy-core.c b/drivers/net/phy/phy-core.c
+index 2870c33b8975d..271fc01f7f7fd 100644
+--- a/drivers/net/phy/phy-core.c
++++ b/drivers/net/phy/phy-core.c
+@@ -162,11 +162,11 @@ static const struct phy_setting settings[] = {
+ 	PHY_SETTING(   2500, FULL,   2500baseT_Full		),
+ 	PHY_SETTING(   2500, FULL,   2500baseX_Full		),
+ 	/* 1G */
+-	PHY_SETTING(   1000, FULL,   1000baseKX_Full		),
+ 	PHY_SETTING(   1000, FULL,   1000baseT_Full		),
+ 	PHY_SETTING(   1000, HALF,   1000baseT_Half		),
+ 	PHY_SETTING(   1000, FULL,   1000baseT1_Full		),
+ 	PHY_SETTING(   1000, FULL,   1000baseX_Full		),
++	PHY_SETTING(   1000, FULL,   1000baseKX_Full		),
+ 	/* 100M */
+ 	PHY_SETTING(    100, FULL,    100baseT_Full		),
+ 	PHY_SETTING(    100, FULL,    100baseT1_Full		),
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index ab77a9f439ef9..4720b24ca51b5 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -1641,17 +1641,20 @@ static int sfp_sm_probe_for_phy(struct sfp *sfp)
+ static int sfp_module_parse_power(struct sfp *sfp)
+ {
+ 	u32 power_mW = 1000;
++	bool supports_a2;
+ 
+ 	if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_POWER_DECL))
+ 		power_mW = 1500;
+ 	if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_HIGH_POWER_LEVEL))
+ 		power_mW = 2000;
+ 
++	supports_a2 = sfp->id.ext.sff8472_compliance !=
++				SFP_SFF8472_COMPLIANCE_NONE ||
++		      sfp->id.ext.diagmon & SFP_DIAGMON_DDM;
++
+ 	if (power_mW > sfp->max_power_mW) {
+ 		/* Module power specification exceeds the allowed maximum. */
+-		if (sfp->id.ext.sff8472_compliance ==
+-			SFP_SFF8472_COMPLIANCE_NONE &&
+-		    !(sfp->id.ext.diagmon & SFP_DIAGMON_DDM)) {
++		if (!supports_a2) {
+ 			/* The module appears not to implement bus address
+ 			 * 0xa2, so assume that the module powers up in the
+ 			 * indicated mode.
+@@ -1668,11 +1671,25 @@ static int sfp_module_parse_power(struct sfp *sfp)
+ 		}
+ 	}
+ 
++	if (power_mW <= 1000) {
++		/* Modules below 1W do not require a power change sequence */
++		sfp->module_power_mW = power_mW;
++		return 0;
++	}
++
++	if (!supports_a2) {
++		/* The module power level is below the host maximum and the
++		 * module appears not to implement bus address 0xa2, so assume
++		 * that the module powers up in the indicated mode.
++		 */
++		return 0;
++	}
++
+ 	/* If the module requires a higher power mode, but also requires
+ 	 * an address change sequence, warn the user that the module may
+ 	 * not be functional.
+ 	 */
+-	if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE && power_mW > 1000) {
++	if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE) {
+ 		dev_warn(sfp->dev,
+ 			 "Address Change Sequence not supported but module requires %u.%uW, module may not be functional\n",
+ 			 power_mW / 1000, (power_mW / 100) % 10);
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 1180a0e2445fb..3ab24988198fe 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -69,6 +69,8 @@
+ #define MPHDRLEN	6	/* multilink protocol header length */
+ #define MPHDRLEN_SSN	4	/* ditto with short sequence numbers */
+ 
++#define PPP_PROTO_LEN	2
++
+ /*
+  * An instance of /dev/ppp can be associated with either a ppp
+  * interface unit or a ppp channel.  In both cases, file->private_data
+@@ -497,6 +499,9 @@ static ssize_t ppp_write(struct file *file, const char __user *buf,
+ 
+ 	if (!pf)
+ 		return -ENXIO;
++	/* All PPP packets should start with the 2-byte protocol */
++	if (count < PPP_PROTO_LEN)
++		return -EINVAL;
+ 	ret = -ENOMEM;
+ 	skb = alloc_skb(count + pf->hdrlen, GFP_KERNEL);
+ 	if (!skb)
+@@ -1764,7 +1769,7 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+ 	}
+ 
+ 	++ppp->stats64.tx_packets;
+-	ppp->stats64.tx_bytes += skb->len - 2;
++	ppp->stats64.tx_bytes += skb->len - PPP_PROTO_LEN;
+ 
+ 	switch (proto) {
+ 	case PPP_IP:
+diff --git a/drivers/net/usb/mcs7830.c b/drivers/net/usb/mcs7830.c
+index 326cc4e749d80..fdda0616704ea 100644
+--- a/drivers/net/usb/mcs7830.c
++++ b/drivers/net/usb/mcs7830.c
+@@ -108,8 +108,16 @@ static const char driver_name[] = "MOSCHIP usb-ethernet driver";
+ 
+ static int mcs7830_get_reg(struct usbnet *dev, u16 index, u16 size, void *data)
+ {
+-	return usbnet_read_cmd(dev, MCS7830_RD_BREQ, MCS7830_RD_BMREQ,
+-				0x0000, index, data, size);
++	int ret;
++
++	ret = usbnet_read_cmd(dev, MCS7830_RD_BREQ, MCS7830_RD_BMREQ,
++			      0x0000, index, data, size);
++	if (ret < 0)
++		return ret;
++	else if (ret < size)
++		return -ENODATA;
++
++	return ret;
+ }
+ 
+ static int mcs7830_set_reg(struct usbnet *dev, u16 index, u16 size, const void *data)
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index abe0149ed917a..bc1e3dd67c04c 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1962,7 +1962,8 @@ static const struct driver_info smsc95xx_info = {
+ 	.bind		= smsc95xx_bind,
+ 	.unbind		= smsc95xx_unbind,
+ 	.link_reset	= smsc95xx_link_reset,
+-	.reset		= smsc95xx_start_phy,
++	.reset		= smsc95xx_reset,
++	.check_connect	= smsc95xx_start_phy,
+ 	.stop		= smsc95xx_stop,
+ 	.rx_fixup	= smsc95xx_rx_fixup,
+ 	.tx_fixup	= smsc95xx_tx_fixup,
+diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
+index 0e9bad33fac85..141c1b5a7b1f3 100644
+--- a/drivers/net/wireless/ath/ar5523/ar5523.c
++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
+@@ -153,6 +153,10 @@ static void ar5523_cmd_rx_cb(struct urb *urb)
+ 			ar5523_err(ar, "Invalid reply to WDCMSG_TARGET_START");
+ 			return;
+ 		}
++		if (!cmd->odata) {
++			ar5523_err(ar, "Unexpected WDCMSG_TARGET_START reply");
++			return;
++		}
+ 		memcpy(cmd->odata, hdr + 1, sizeof(u32));
+ 		cmd->olen = sizeof(u32);
+ 		cmd->res = 0;
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 5935e0973d146..5e3b4d10c1a95 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -89,6 +89,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = true,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -124,6 +125,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = true,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -160,6 +162,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -190,6 +193,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.num_wds_entries = 0x20,
+ 		.uart_pin_workaround = true,
+ 		.tx_stats_over_pktlog = false,
++		.credit_size_workaround = false,
+ 		.bmi_large_size_download = true,
+ 		.supports_peer_stats_info = true,
+ 		.dynamic_sar_support = true,
+@@ -226,6 +230,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -261,6 +266,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -296,6 +302,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -334,6 +341,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = true,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.supports_peer_stats_info = true,
+ 		.dynamic_sar_support = true,
+@@ -376,6 +384,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -424,6 +433,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -469,6 +479,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -504,6 +515,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -541,6 +553,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = true,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -570,6 +583,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.ast_skid_limit = 0x10,
+ 		.num_wds_entries = 0x20,
+ 		.uart_pin_workaround = true,
++		.credit_size_workaround = true,
+ 		.dynamic_sar_support = false,
+ 	},
+ 	{
+@@ -611,6 +625,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = false,
+ 		.hw_filter_reset_required = true,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = false,
+ 	},
+@@ -639,6 +654,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
+ 		.rri_on_ddr = true,
+ 		.hw_filter_reset_required = false,
+ 		.fw_diag_ce_download = false,
++		.credit_size_workaround = false,
+ 		.tx_stats_over_pktlog = false,
+ 		.dynamic_sar_support = true,
+ 	},
+@@ -714,6 +730,7 @@ static void ath10k_send_suspend_complete(struct ath10k *ar)
+ 
+ static int ath10k_init_sdio(struct ath10k *ar, enum ath10k_firmware_mode mode)
+ {
++	bool mtu_workaround = ar->hw_params.credit_size_workaround;
+ 	int ret;
+ 	u32 param = 0;
+ 
+@@ -731,7 +748,7 @@ static int ath10k_init_sdio(struct ath10k *ar, enum ath10k_firmware_mode mode)
+ 
+ 	param |= HI_ACS_FLAGS_SDIO_REDUCE_TX_COMPL_SET;
+ 
+-	if (mode == ATH10K_FIRMWARE_MODE_NORMAL)
++	if (mode == ATH10K_FIRMWARE_MODE_NORMAL && !mtu_workaround)
+ 		param |= HI_ACS_FLAGS_ALT_DATA_CREDIT_SIZE;
+ 	else
+ 		param &= ~HI_ACS_FLAGS_ALT_DATA_CREDIT_SIZE;
+diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
+index d6b8bdcef4160..b793eac2cfac8 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
+@@ -147,6 +147,9 @@ void ath10k_htt_tx_dec_pending(struct ath10k_htt *htt)
+ 	htt->num_pending_tx--;
+ 	if (htt->num_pending_tx == htt->max_num_pending_tx - 1)
+ 		ath10k_mac_tx_unlock(htt->ar, ATH10K_TX_PAUSE_Q_FULL);
++
++	if (htt->num_pending_tx == 0)
++		wake_up(&htt->empty_tx_wq);
+ }
+ 
+ int ath10k_htt_tx_inc_pending(struct ath10k_htt *htt)
+diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
+index 6b03c7787e36a..591ef7416b613 100644
+--- a/drivers/net/wireless/ath/ath10k/hw.h
++++ b/drivers/net/wireless/ath/ath10k/hw.h
+@@ -618,6 +618,9 @@ struct ath10k_hw_params {
+ 	 */
+ 	bool uart_pin_workaround;
+ 
++	/* Workaround for the credit size calculation */
++	bool credit_size_workaround;
++
+ 	/* tx stats support over pktlog */
+ 	bool tx_stats_over_pktlog;
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
+index 7c9ea0c073d8b..6f8b642188941 100644
+--- a/drivers/net/wireless/ath/ath10k/txrx.c
++++ b/drivers/net/wireless/ath/ath10k/txrx.c
+@@ -82,8 +82,6 @@ int ath10k_txrx_tx_unref(struct ath10k_htt *htt,
+ 	flags = skb_cb->flags;
+ 	ath10k_htt_tx_free_msdu_id(htt, tx_done->msdu_id);
+ 	ath10k_htt_tx_dec_pending(htt);
+-	if (htt->num_pending_tx == 0)
+-		wake_up(&htt->empty_tx_wq);
+ 	spin_unlock_bh(&htt->tx_lock);
+ 
+ 	rcu_read_lock();
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 7c1c2658cb5f8..4733fd7fb169e 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -2611,9 +2611,30 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
+ 		ath10k_mac_handle_beacon(ar, skb);
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control) ||
+-	    ieee80211_is_probe_resp(hdr->frame_control))
++	    ieee80211_is_probe_resp(hdr->frame_control)) {
++		struct ieee80211_mgmt *mgmt = (void *)skb->data;
++		u8 *ies;
++		int ies_ch;
++
+ 		status->boottime_ns = ktime_get_boottime_ns();
+ 
++		if (!ar->scan_channel)
++			goto drop;
++
++		ies = mgmt->u.beacon.variable;
++
++		ies_ch = cfg80211_get_ies_channel_number(mgmt->u.beacon.variable,
++							 skb_tail_pointer(skb) - ies,
++							 sband->band);
++
++		if (ies_ch > 0 && ies_ch != channel) {
++			ath10k_dbg(ar, ATH10K_DBG_MGMT,
++				   "channel mismatched ds channel %d scan channel %d\n",
++				   ies_ch, channel);
++			goto drop;
++		}
++	}
++
+ 	ath10k_dbg(ar, ATH10K_DBG_MGMT,
+ 		   "event mgmt rx skb %pK len %d ftype %02x stype %02x\n",
+ 		   skb, skb->len,
+@@ -2627,6 +2648,10 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
+ 	ieee80211_rx_ni(ar->hw, skb);
+ 
+ 	return 0;
++
++drop:
++	dev_kfree_skb(skb);
++	return 0;
+ }
+ 
+ static int freq_to_idx(struct ath10k *ar, int freq)
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 8c9c781afc3e5..3fb0aa0008259 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -175,8 +175,11 @@ static void __ath11k_ahb_ext_irq_disable(struct ath11k_base *ab)
+ 
+ 		ath11k_ahb_ext_grp_disable(irq_grp);
+ 
+-		napi_synchronize(&irq_grp->napi);
+-		napi_disable(&irq_grp->napi);
++		if (irq_grp->napi_enabled) {
++			napi_synchronize(&irq_grp->napi);
++			napi_disable(&irq_grp->napi);
++			irq_grp->napi_enabled = false;
++		}
+ 	}
+ }
+ 
+@@ -206,13 +209,13 @@ static void ath11k_ahb_clearbit32(struct ath11k_base *ab, u8 bit, u32 offset)
+ 
+ static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
+ {
+-	const struct ce_pipe_config *ce_config;
++	const struct ce_attr *ce_attr;
+ 
+-	ce_config = &ab->hw_params.target_ce_config[ce_id];
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_OUT)
++	ce_attr = &ab->hw_params.host_ce_config[ce_id];
++	if (ce_attr->src_nentries)
+ 		ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
+ 
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_IN) {
++	if (ce_attr->dest_nentries) {
+ 		ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
+ 		ath11k_ahb_setbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
+ 				    CE_HOST_IE_3_ADDRESS);
+@@ -221,13 +224,13 @@ static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
+ 
+ static void ath11k_ahb_ce_irq_disable(struct ath11k_base *ab, u16 ce_id)
+ {
+-	const struct ce_pipe_config *ce_config;
++	const struct ce_attr *ce_attr;
+ 
+-	ce_config = &ab->hw_params.target_ce_config[ce_id];
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_OUT)
++	ce_attr = &ab->hw_params.host_ce_config[ce_id];
++	if (ce_attr->src_nentries)
+ 		ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
+ 
+-	if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_IN) {
++	if (ce_attr->dest_nentries) {
+ 		ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
+ 		ath11k_ahb_clearbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
+ 				      CE_HOST_IE_3_ADDRESS);
+@@ -300,7 +303,10 @@ static void ath11k_ahb_ext_irq_enable(struct ath11k_base *ab)
+ 	for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ 		struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+ 
+-		napi_enable(&irq_grp->napi);
++		if (!irq_grp->napi_enabled) {
++			napi_enable(&irq_grp->napi);
++			irq_grp->napi_enabled = true;
++		}
+ 		ath11k_ahb_ext_grp_enable(irq_grp);
+ 	}
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index b5a2af3ffc3e1..cb8cacbbd5b48 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -82,6 +82,9 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 		.fix_l1ss = true,
+ 		.max_tx_ring = DP_TCL_NUM_RING_MAX,
+ 		.hal_params = &ath11k_hw_hal_params_ipq8074,
++		.supports_dynamic_smps_6ghz = false,
++		.alloc_cacheable_memory = true,
++		.wakeup_mhi = false,
+ 	},
+ 	{
+ 		.hw_rev = ATH11K_HW_IPQ6018_HW10,
+@@ -131,6 +134,9 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 		.fix_l1ss = true,
+ 		.max_tx_ring = DP_TCL_NUM_RING_MAX,
+ 		.hal_params = &ath11k_hw_hal_params_ipq8074,
++		.supports_dynamic_smps_6ghz = false,
++		.alloc_cacheable_memory = true,
++		.wakeup_mhi = false,
+ 	},
+ 	{
+ 		.name = "qca6390 hw2.0",
+@@ -179,6 +185,9 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 		.fix_l1ss = true,
+ 		.max_tx_ring = DP_TCL_NUM_RING_MAX_QCA6390,
+ 		.hal_params = &ath11k_hw_hal_params_qca6390,
++		.supports_dynamic_smps_6ghz = false,
++		.alloc_cacheable_memory = false,
++		.wakeup_mhi = true,
+ 	},
+ 	{
+ 		.name = "qcn9074 hw1.0",
+@@ -227,6 +236,9 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 		.fix_l1ss = true,
+ 		.max_tx_ring = DP_TCL_NUM_RING_MAX,
+ 		.hal_params = &ath11k_hw_hal_params_ipq8074,
++		.supports_dynamic_smps_6ghz = true,
++		.alloc_cacheable_memory = true,
++		.wakeup_mhi = false,
+ 	},
+ 	{
+ 		.name = "wcn6855 hw2.0",
+@@ -275,6 +287,9 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 		.fix_l1ss = false,
+ 		.max_tx_ring = DP_TCL_NUM_RING_MAX_QCA6390,
+ 		.hal_params = &ath11k_hw_hal_params_qca6390,
++		.supports_dynamic_smps_6ghz = false,
++		.alloc_cacheable_memory = false,
++		.wakeup_mhi = true,
+ 	},
+ };
+ 
+@@ -392,11 +407,26 @@ static int ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
+ 		scnprintf(variant, sizeof(variant), ",variant=%s",
+ 			  ab->qmi.target.bdf_ext);
+ 
+-	scnprintf(name, name_len,
+-		  "bus=%s,qmi-chip-id=%d,qmi-board-id=%d%s",
+-		  ath11k_bus_str(ab->hif.bus),
+-		  ab->qmi.target.chip_id,
+-		  ab->qmi.target.board_id, variant);
++	switch (ab->id.bdf_search) {
++	case ATH11K_BDF_SEARCH_BUS_AND_BOARD:
++		scnprintf(name, name_len,
++			  "bus=%s,vendor=%04x,device=%04x,subsystem-vendor=%04x,subsystem-device=%04x,qmi-chip-id=%d,qmi-board-id=%d%s",
++			  ath11k_bus_str(ab->hif.bus),
++			  ab->id.vendor, ab->id.device,
++			  ab->id.subsystem_vendor,
++			  ab->id.subsystem_device,
++			  ab->qmi.target.chip_id,
++			  ab->qmi.target.board_id,
++			  variant);
++		break;
++	default:
++		scnprintf(name, name_len,
++			  "bus=%s,qmi-chip-id=%d,qmi-board-id=%d%s",
++			  ath11k_bus_str(ab->hif.bus),
++			  ab->qmi.target.chip_id,
++			  ab->qmi.target.board_id, variant);
++		break;
++	}
+ 
+ 	ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot using board name '%s'\n", name);
+ 
+@@ -633,7 +663,7 @@ static int ath11k_core_fetch_board_data_api_1(struct ath11k_base *ab,
+ 	return 0;
+ }
+ 
+-#define BOARD_NAME_SIZE 100
++#define BOARD_NAME_SIZE 200
+ int ath11k_core_fetch_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd)
+ {
+ 	char boardname[BOARD_NAME_SIZE];
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 31d234a51c79b..011373b91ae08 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -47,6 +47,11 @@ enum ath11k_supported_bw {
+ 	ATH11K_BW_160	= 3,
+ };
+ 
++enum ath11k_bdf_search {
++	ATH11K_BDF_SEARCH_DEFAULT,
++	ATH11K_BDF_SEARCH_BUS_AND_BOARD,
++};
++
+ enum wme_ac {
+ 	WME_AC_BE,
+ 	WME_AC_BK,
+@@ -136,6 +141,7 @@ struct ath11k_ext_irq_grp {
+ 	u32 num_irq;
+ 	u32 grp_id;
+ 	u64 timestamp;
++	bool napi_enabled;
+ 	struct napi_struct napi;
+ 	struct net_device napi_ndev;
+ };
+@@ -713,7 +719,6 @@ struct ath11k_base {
+ 	u32 wlan_init_status;
+ 	int irq_num[ATH11K_IRQ_NUM_MAX];
+ 	struct ath11k_ext_irq_grp ext_irq_grp[ATH11K_EXT_IRQ_GRP_NUM_MAX];
+-	struct napi_struct *napi;
+ 	struct ath11k_targ_cap target_caps;
+ 	u32 ext_service_bitmap[WMI_SERVICE_EXT_BM_SIZE];
+ 	bool pdevs_macaddr_valid;
+@@ -759,6 +764,14 @@ struct ath11k_base {
+ 
+ 	struct completion htc_suspend;
+ 
++	struct {
++		enum ath11k_bdf_search bdf_search;
++		u32 vendor;
++		u32 device;
++		u32 subsystem_vendor;
++		u32 subsystem_device;
++	} id;
++
+ 	/* must be last */
+ 	u8 drv_priv[0] __aligned(sizeof(void *));
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/dp.c b/drivers/net/wireless/ath/ath11k/dp.c
+index 8baaeeb8cf821..8058b56028ded 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.c
++++ b/drivers/net/wireless/ath/ath11k/dp.c
+@@ -101,8 +101,11 @@ void ath11k_dp_srng_cleanup(struct ath11k_base *ab, struct dp_srng *ring)
+ 	if (!ring->vaddr_unaligned)
+ 		return;
+ 
+-	dma_free_coherent(ab->dev, ring->size, ring->vaddr_unaligned,
+-			  ring->paddr_unaligned);
++	if (ring->cached)
++		kfree(ring->vaddr_unaligned);
++	else
++		dma_free_coherent(ab->dev, ring->size, ring->vaddr_unaligned,
++				  ring->paddr_unaligned);
+ 
+ 	ring->vaddr_unaligned = NULL;
+ }
+@@ -222,6 +225,7 @@ int ath11k_dp_srng_setup(struct ath11k_base *ab, struct dp_srng *ring,
+ 	int entry_sz = ath11k_hal_srng_get_entrysize(ab, type);
+ 	int max_entries = ath11k_hal_srng_get_max_entries(ab, type);
+ 	int ret;
++	bool cached = false;
+ 
+ 	if (max_entries < 0 || entry_sz < 0)
+ 		return -EINVAL;
+@@ -230,9 +234,28 @@ int ath11k_dp_srng_setup(struct ath11k_base *ab, struct dp_srng *ring,
+ 		num_entries = max_entries;
+ 
+ 	ring->size = (num_entries * entry_sz) + HAL_RING_BASE_ALIGN - 1;
+-	ring->vaddr_unaligned = dma_alloc_coherent(ab->dev, ring->size,
+-						   &ring->paddr_unaligned,
+-						   GFP_KERNEL);
++
++	if (ab->hw_params.alloc_cacheable_memory) {
++		/* Allocate the reo dst and tx completion rings from cacheable memory */
++		switch (type) {
++		case HAL_REO_DST:
++			cached = true;
++			break;
++		default:
++			cached = false;
++		}
++
++		if (cached) {
++			ring->vaddr_unaligned = kzalloc(ring->size, GFP_KERNEL);
++			ring->paddr_unaligned = virt_to_phys(ring->vaddr_unaligned);
++		}
++	}
++
++	if (!cached)
++		ring->vaddr_unaligned = dma_alloc_coherent(ab->dev, ring->size,
++							   &ring->paddr_unaligned,
++							   GFP_KERNEL);
++
+ 	if (!ring->vaddr_unaligned)
+ 		return -ENOMEM;
+ 
+@@ -292,6 +315,11 @@ int ath11k_dp_srng_setup(struct ath11k_base *ab, struct dp_srng *ring,
+ 		return -EINVAL;
+ 	}
+ 
++	if (cached) {
++		params.flags |= HAL_SRNG_FLAGS_CACHED;
++		ring->cached = 1;
++	}
++
+ 	ret = ath11k_hal_srng_setup(ab, type, ring_num, mac_id, &params);
+ 	if (ret < 0) {
+ 		ath11k_warn(ab, "failed to setup srng: %d ring_id %d\n",
+diff --git a/drivers/net/wireless/ath/ath11k/dp.h b/drivers/net/wireless/ath/ath11k/dp.h
+index 4794ca04f2136..a4c36a9be338a 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.h
++++ b/drivers/net/wireless/ath/ath11k/dp.h
+@@ -64,6 +64,7 @@ struct dp_srng {
+ 	dma_addr_t paddr;
+ 	int size;
+ 	u32 ring_id;
++	u8 cached;
+ };
+ 
+ struct dp_rxdma_ring {
+@@ -517,7 +518,8 @@ struct htt_ppdu_stats_cfg_cmd {
+ } __packed;
+ 
+ #define HTT_PPDU_STATS_CFG_MSG_TYPE		GENMASK(7, 0)
+-#define HTT_PPDU_STATS_CFG_PDEV_ID		GENMASK(15, 8)
++#define HTT_PPDU_STATS_CFG_SOC_STATS		BIT(8)
++#define HTT_PPDU_STATS_CFG_PDEV_ID		GENMASK(15, 9)
+ #define HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK	GENMASK(31, 16)
+ 
+ enum htt_ppdu_stats_tag_type {
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index c5320847b80a7..621372c568d2c 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -3064,10 +3064,10 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
+ 	if (!num_buffs_reaped)
+ 		goto exit;
+ 
+-	while ((skb = __skb_dequeue(&skb_list))) {
+-		memset(&ppdu_info, 0, sizeof(ppdu_info));
+-		ppdu_info.peer_id = HAL_INVALID_PEERID;
++	memset(&ppdu_info, 0, sizeof(ppdu_info));
++	ppdu_info.peer_id = HAL_INVALID_PEERID;
+ 
++	while ((skb = __skb_dequeue(&skb_list))) {
+ 		if (ath11k_debugfs_is_pktlog_lite_mode_enabled(ar)) {
+ 			log_type = ATH11K_PKTLOG_TYPE_LITE_RX;
+ 			rx_buf_sz = DP_RX_BUFFER_SIZE_LITE;
+@@ -3095,10 +3095,7 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
+ 			ath11k_dbg(ab, ATH11K_DBG_DATA,
+ 				   "failed to find the peer with peer_id %d\n",
+ 				   ppdu_info.peer_id);
+-			spin_unlock_bh(&ab->base_lock);
+-			rcu_read_unlock();
+-			dev_kfree_skb_any(skb);
+-			continue;
++			goto next_skb;
+ 		}
+ 
+ 		arsta = (struct ath11k_sta *)peer->sta->drv_priv;
+@@ -3107,10 +3104,13 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
+ 		if (ath11k_debugfs_is_pktlog_peer_valid(ar, peer->addr))
+ 			trace_ath11k_htt_rxdesc(ar, skb->data, log_type, rx_buf_sz);
+ 
++next_skb:
+ 		spin_unlock_bh(&ab->base_lock);
+ 		rcu_read_unlock();
+ 
+ 		dev_kfree_skb_any(skb);
++		memset(&ppdu_info, 0, sizeof(ppdu_info));
++		ppdu_info.peer_id = HAL_INVALID_PEERID;
+ 	}
+ exit:
+ 	return num_buffs_reaped;
+@@ -3800,7 +3800,7 @@ int ath11k_dp_process_rx_err(struct ath11k_base *ab, struct napi_struct *napi,
+ 		ath11k_hal_rx_msdu_link_info_get(link_desc_va, &num_msdus, msdu_cookies,
+ 						 &rbm);
+ 		if (rbm != HAL_RX_BUF_RBM_WBM_IDLE_DESC_LIST &&
+-		    rbm != ab->hw_params.hal_params->rx_buf_rbm) {
++		    rbm != HAL_RX_BUF_RBM_SW3_BM) {
+ 			ab->soc_stats.invalid_rbm++;
+ 			ath11k_warn(ab, "invalid return buffer manager %d\n", rbm);
+ 			ath11k_dp_rx_link_desc_return(ab, desc,
+diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c
+index 879fb2a9dc0c6..10b76f6f710b0 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c
+@@ -903,7 +903,7 @@ int ath11k_dp_tx_htt_h2t_ppdu_stats_req(struct ath11k *ar, u32 mask)
+ 		cmd->msg = FIELD_PREP(HTT_PPDU_STATS_CFG_MSG_TYPE,
+ 				      HTT_H2T_MSG_TYPE_PPDU_STATS_CFG);
+ 
+-		pdev_mask = 1 << (i + 1);
++		pdev_mask = 1 << (ar->pdev_idx + i);
+ 		cmd->msg |= FIELD_PREP(HTT_PPDU_STATS_CFG_PDEV_ID, pdev_mask);
+ 		cmd->msg |= FIELD_PREP(HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK, mask);
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index eaa0edca55761..1832d13654a87 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -627,6 +627,21 @@ u32 *ath11k_hal_srng_dst_peek(struct ath11k_base *ab, struct hal_srng *srng)
+ 	return NULL;
+ }
+ 
++static void ath11k_hal_srng_prefetch_desc(struct ath11k_base *ab,
++					  struct hal_srng *srng)
++{
++	u32 *desc;
++
++	/* prefetch only if desc is available */
++	desc = ath11k_hal_srng_dst_peek(ab, srng);
++	if (likely(desc)) {
++		dma_sync_single_for_cpu(ab->dev, virt_to_phys(desc),
++					(srng->entry_size * sizeof(u32)),
++					DMA_FROM_DEVICE);
++		prefetch(desc);
++	}
++}
++
+ u32 *ath11k_hal_srng_dst_get_next_entry(struct ath11k_base *ab,
+ 					struct hal_srng *srng)
+ {
+@@ -642,6 +657,10 @@ u32 *ath11k_hal_srng_dst_get_next_entry(struct ath11k_base *ab,
+ 	srng->u.dst_ring.tp = (srng->u.dst_ring.tp + srng->entry_size) %
+ 			      srng->ring_size;
+ 
++	/* Try to prefetch the next descriptor in the ring */
++	if (srng->flags & HAL_SRNG_FLAGS_CACHED)
++		ath11k_hal_srng_prefetch_desc(ab, srng);
++
+ 	return desc;
+ }
+ 
+@@ -775,11 +794,16 @@ void ath11k_hal_srng_access_begin(struct ath11k_base *ab, struct hal_srng *srng)
+ {
+ 	lockdep_assert_held(&srng->lock);
+ 
+-	if (srng->ring_dir == HAL_SRNG_DIR_SRC)
++	if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ 		srng->u.src_ring.cached_tp =
+ 			*(volatile u32 *)srng->u.src_ring.tp_addr;
+-	else
++	} else {
+ 		srng->u.dst_ring.cached_hp = *srng->u.dst_ring.hp_addr;
++
++		/* Try to prefetch the next descriptor in the ring */
++		if (srng->flags & HAL_SRNG_FLAGS_CACHED)
++			ath11k_hal_srng_prefetch_desc(ab, srng);
++	}
+ }
+ 
+ /* Update cached ring head/tail pointers to HW. ath11k_hal_srng_access_begin()
+@@ -947,6 +971,7 @@ int ath11k_hal_srng_setup(struct ath11k_base *ab, enum hal_ring_type type,
+ 	srng->msi_data = params->msi_data;
+ 	srng->initialized = 1;
+ 	spin_lock_init(&srng->lock);
++	lockdep_set_class(&srng->lock, hal->srng_key + ring_id);
+ 
+ 	for (i = 0; i < HAL_SRNG_NUM_REG_GRP; i++) {
+ 		srng->hwreg_base[i] = srng_config->reg_start[i] +
+@@ -1233,6 +1258,24 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
+ 	return 0;
+ }
+ 
++static void ath11k_hal_register_srng_key(struct ath11k_base *ab)
++{
++	struct ath11k_hal *hal = &ab->hal;
++	u32 ring_id;
++
++	for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
++		lockdep_register_key(hal->srng_key + ring_id);
++}
++
++static void ath11k_hal_unregister_srng_key(struct ath11k_base *ab)
++{
++	struct ath11k_hal *hal = &ab->hal;
++	u32 ring_id;
++
++	for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
++		lockdep_unregister_key(hal->srng_key + ring_id);
++}
++
+ int ath11k_hal_srng_init(struct ath11k_base *ab)
+ {
+ 	struct ath11k_hal *hal = &ab->hal;
+@@ -1252,6 +1295,8 @@ int ath11k_hal_srng_init(struct ath11k_base *ab)
+ 	if (ret)
+ 		goto err_free_cont_rdp;
+ 
++	ath11k_hal_register_srng_key(ab);
++
+ 	return 0;
+ 
+ err_free_cont_rdp:
+@@ -1266,6 +1311,7 @@ void ath11k_hal_srng_deinit(struct ath11k_base *ab)
+ {
+ 	struct ath11k_hal *hal = &ab->hal;
+ 
++	ath11k_hal_unregister_srng_key(ab);
+ 	ath11k_hal_free_cont_rdp(ab);
+ 	ath11k_hal_free_cont_wrp(ab);
+ 	kfree(hal->srng_config);
+diff --git a/drivers/net/wireless/ath/ath11k/hal.h b/drivers/net/wireless/ath/ath11k/hal.h
+index 35ed3a14e200a..a7d9b4c551ada 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.h
++++ b/drivers/net/wireless/ath/ath11k/hal.h
+@@ -513,6 +513,7 @@ enum hal_srng_dir {
+ #define HAL_SRNG_FLAGS_DATA_TLV_SWAP		0x00000020
+ #define HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN	0x00010000
+ #define HAL_SRNG_FLAGS_MSI_INTR			0x00020000
++#define HAL_SRNG_FLAGS_CACHED                   0x20000000
+ #define HAL_SRNG_FLAGS_LMAC_RING		0x80000000
+ 
+ #define HAL_SRNG_TLV_HDR_TAG		GENMASK(9, 1)
+@@ -901,6 +902,8 @@ struct ath11k_hal {
+ 	/* shadow register configuration */
+ 	u32 shadow_reg_addr[HAL_SHADOW_NUM_REGS];
+ 	int num_shadow_reg_configured;
++
++	struct lock_class_key srng_key[HAL_SRNG_RING_ID_MAX];
+ };
+ 
+ u32 ath11k_hal_reo_qdesc_size(u32 ba_window_size, u8 tid);
+diff --git a/drivers/net/wireless/ath/ath11k/hal_rx.c b/drivers/net/wireless/ath/ath11k/hal_rx.c
+index 329c404cfa80d..922926246db7a 100644
+--- a/drivers/net/wireless/ath/ath11k/hal_rx.c
++++ b/drivers/net/wireless/ath/ath11k/hal_rx.c
+@@ -374,7 +374,7 @@ int ath11k_hal_wbm_desc_parse_err(struct ath11k_base *ab, void *desc,
+ 
+ 	ret_buf_mgr = FIELD_GET(BUFFER_ADDR_INFO1_RET_BUF_MGR,
+ 				wbm_desc->buf_addr_info.info1);
+-	if (ret_buf_mgr != ab->hw_params.hal_params->rx_buf_rbm) {
++	if (ret_buf_mgr != HAL_RX_BUF_RBM_SW3_BM) {
+ 		ab->soc_stats.invalid_rbm++;
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c
+index da35fcf5bc560..2f0b526188e45 100644
+--- a/drivers/net/wireless/ath/ath11k/hw.c
++++ b/drivers/net/wireless/ath/ath11k/hw.c
+@@ -1061,8 +1061,6 @@ const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_ipq8074 = {
+ const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qca6390 = {
+ 	.tx  = {
+ 		ATH11K_TX_RING_MASK_0,
+-		ATH11K_TX_RING_MASK_1,
+-		ATH11K_TX_RING_MASK_2,
+ 	},
+ 	.rx_mon_status = {
+ 		0, 0, 0, 0,
+diff --git a/drivers/net/wireless/ath/ath11k/hw.h b/drivers/net/wireless/ath/ath11k/hw.h
+index 19223d36846e8..aa93f0619f936 100644
+--- a/drivers/net/wireless/ath/ath11k/hw.h
++++ b/drivers/net/wireless/ath/ath11k/hw.h
+@@ -176,6 +176,9 @@ struct ath11k_hw_params {
+ 	bool fix_l1ss;
+ 	u8 max_tx_ring;
+ 	const struct ath11k_hw_hal_params *hal_params;
++	bool supports_dynamic_smps_6ghz;
++	bool alloc_cacheable_memory;
++	bool wakeup_mhi;
+ };
+ 
+ struct ath11k_hw_ops {
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 1cc55602787bb..a7400ade7a0cf 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
++ * Copyright (c) 2021 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <net/mac80211.h>
+@@ -1137,11 +1138,15 @@ static int ath11k_mac_setup_bcn_tmpl(struct ath11k_vif *arvif)
+ 
+ 	if (cfg80211_find_ie(WLAN_EID_RSN, ies, (skb_tail_pointer(bcn) - ies)))
+ 		arvif->rsnie_present = true;
++	else
++		arvif->rsnie_present = false;
+ 
+ 	if (cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ 				    WLAN_OUI_TYPE_MICROSOFT_WPA,
+ 				    ies, (skb_tail_pointer(bcn) - ies)))
+ 		arvif->wpaie_present = true;
++	else
++		arvif->wpaie_present = false;
+ 
+ 	ret = ath11k_wmi_bcn_tmpl(ar, arvif->vdev_id, &offs, bcn);
+ 
+@@ -3237,9 +3242,12 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
+ 	arg.scan_id = ATH11K_SCAN_ID;
+ 
+ 	if (req->ie_len) {
++		arg.extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
++		if (!arg.extraie.ptr) {
++			ret = -ENOMEM;
++			goto exit;
++		}
+ 		arg.extraie.len = req->ie_len;
+-		arg.extraie.ptr = kzalloc(req->ie_len, GFP_KERNEL);
+-		memcpy(arg.extraie.ptr, req->ie, req->ie_len);
+ 	}
+ 
+ 	if (req->n_ssids) {
+@@ -3316,9 +3324,7 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
+ 		return 0;
+ 
+ 	if (cmd == DISABLE_KEY) {
+-		/* TODO: Check if FW expects  value other than NONE for del */
+-		/* arg.key_cipher = WMI_CIPHER_NONE; */
+-		arg.key_len = 0;
++		arg.key_cipher = WMI_CIPHER_NONE;
+ 		arg.key_data = NULL;
+ 		goto install;
+ 	}
+@@ -3450,7 +3456,7 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	/* flush the fragments cache during key (re)install to
+ 	 * ensure all frags in the new frag list belong to the same key.
+ 	 */
+-	if (peer && cmd == SET_KEY)
++	if (peer && sta && cmd == SET_KEY)
+ 		ath11k_peer_frags_flush(ar, peer);
+ 	spin_unlock_bh(&ab->base_lock);
+ 
+@@ -4561,6 +4567,10 @@ ath11k_create_vht_cap(struct ath11k *ar, u32 rate_cap_tx_chainmask,
+ 	vht_cap.vht_supported = 1;
+ 	vht_cap.cap = ar->pdev->cap.vht_cap;
+ 
++	if (ar->pdev->cap.nss_ratio_enabled)
++		vht_cap.vht_mcs.tx_highest |=
++			cpu_to_le16(IEEE80211_VHT_EXT_NSS_BW_CAPABLE);
++
+ 	ath11k_set_vht_txbf_cap(ar, &vht_cap.cap);
+ 
+ 	rxmcs_map = 0;
+@@ -4926,23 +4936,32 @@ static int __ath11k_set_antenna(struct ath11k *ar, u32 tx_ant, u32 rx_ant)
+ 	return 0;
+ }
+ 
+-int ath11k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx)
++static void ath11k_mac_tx_mgmt_free(struct ath11k *ar, int buf_id)
+ {
+-	struct sk_buff *msdu = skb;
++	struct sk_buff *msdu;
+ 	struct ieee80211_tx_info *info;
+-	struct ath11k *ar = ctx;
+-	struct ath11k_base *ab = ar->ab;
+ 
+ 	spin_lock_bh(&ar->txmgmt_idr_lock);
+-	idr_remove(&ar->txmgmt_idr, buf_id);
++	msdu = idr_remove(&ar->txmgmt_idr, buf_id);
+ 	spin_unlock_bh(&ar->txmgmt_idr_lock);
+-	dma_unmap_single(ab->dev, ATH11K_SKB_CB(msdu)->paddr, msdu->len,
++
++	if (!msdu)
++		return;
++
++	dma_unmap_single(ar->ab->dev, ATH11K_SKB_CB(msdu)->paddr, msdu->len,
+ 			 DMA_TO_DEVICE);
+ 
+ 	info = IEEE80211_SKB_CB(msdu);
+ 	memset(&info->status, 0, sizeof(info->status));
+ 
+ 	ieee80211_free_txskb(ar->hw, msdu);
++}
++
++int ath11k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx)
++{
++	struct ath11k *ar = ctx;
++
++	ath11k_mac_tx_mgmt_free(ar, buf_id);
+ 
+ 	return 0;
+ }
+@@ -4951,17 +4970,10 @@ static int ath11k_mac_vif_txmgmt_idr_remove(int buf_id, void *skb, void *ctx)
+ {
+ 	struct ieee80211_vif *vif = ctx;
+ 	struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB((struct sk_buff *)skb);
+-	struct sk_buff *msdu = skb;
+ 	struct ath11k *ar = skb_cb->ar;
+-	struct ath11k_base *ab = ar->ab;
+ 
+-	if (skb_cb->vif == vif) {
+-		spin_lock_bh(&ar->txmgmt_idr_lock);
+-		idr_remove(&ar->txmgmt_idr, buf_id);
+-		spin_unlock_bh(&ar->txmgmt_idr_lock);
+-		dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len,
+-				 DMA_TO_DEVICE);
+-	}
++	if (skb_cb->vif == vif)
++		ath11k_mac_tx_mgmt_free(ar, buf_id);
+ 
+ 	return 0;
+ }
+@@ -4976,6 +4988,8 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
+ 	int buf_id;
+ 	int ret;
+ 
++	ATH11K_SKB_CB(skb)->ar = ar;
++
+ 	spin_lock_bh(&ar->txmgmt_idr_lock);
+ 	buf_id = idr_alloc(&ar->txmgmt_idr, skb, 0,
+ 			   ATH11K_TX_MGMT_NUM_PENDING_MAX, GFP_ATOMIC);
+@@ -7672,7 +7686,8 @@ static int __ath11k_mac_register(struct ath11k *ar)
+ 	 * for each band for a dual band capable radio. It will be tricky to
+ 	 * handle it when the ht capability different for each band.
+ 	 */
+-	if (ht_cap & WMI_HT_CAP_DYNAMIC_SMPS || ar->supports_6ghz)
++	if (ht_cap & WMI_HT_CAP_DYNAMIC_SMPS ||
++	    (ar->supports_6ghz && ab->hw_params.supports_dynamic_smps_6ghz))
+ 		ar->hw->wiphy->features |= NL80211_FEATURE_DYNAMIC_SMPS;
+ 
+ 	ar->hw->wiphy->max_scan_ssids = WLAN_SCAN_PARAMS_MAX_SSID;
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index 3d353e7c9d5c2..4c348bacf2cb4 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -182,7 +182,8 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ 	/* for offset beyond BAR + 4K - 32, may
+ 	 * need to wakeup MHI to access.
+ 	 */
+-	if (test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
++	if (ab->hw_params.wakeup_mhi &&
++	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ 	    offset >= ACCESS_ALWAYS_OFF)
+ 		mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+ 
+@@ -206,7 +207,8 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ 		}
+ 	}
+ 
+-	if (test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
++	if (ab->hw_params.wakeup_mhi &&
++	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ 	    offset >= ACCESS_ALWAYS_OFF)
+ 		mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+ }
+@@ -219,7 +221,8 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+ 	/* for offset beyond BAR + 4K - 32, may
+ 	 * need to wakeup MHI to access.
+ 	 */
+-	if (test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
++	if (ab->hw_params.wakeup_mhi &&
++	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ 	    offset >= ACCESS_ALWAYS_OFF)
+ 		mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+ 
+@@ -243,7 +246,8 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+ 		}
+ 	}
+ 
+-	if (test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
++	if (ab->hw_params.wakeup_mhi &&
++	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ 	    offset >= ACCESS_ALWAYS_OFF)
+ 		mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+ 
+@@ -634,8 +638,11 @@ static void __ath11k_pci_ext_irq_disable(struct ath11k_base *sc)
+ 
+ 		ath11k_pci_ext_grp_disable(irq_grp);
+ 
+-		napi_synchronize(&irq_grp->napi);
+-		napi_disable(&irq_grp->napi);
++		if (irq_grp->napi_enabled) {
++			napi_synchronize(&irq_grp->napi);
++			napi_disable(&irq_grp->napi);
++			irq_grp->napi_enabled = false;
++		}
+ 	}
+ }
+ 
+@@ -654,7 +661,10 @@ static void ath11k_pci_ext_irq_enable(struct ath11k_base *ab)
+ 	for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ 		struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+ 
+-		napi_enable(&irq_grp->napi);
++		if (!irq_grp->napi_enabled) {
++			napi_enable(&irq_grp->napi);
++			irq_grp->napi_enabled = true;
++		}
+ 		ath11k_pci_ext_grp_enable(irq_grp);
+ 	}
+ }
+@@ -1251,6 +1261,15 @@ static int ath11k_pci_probe(struct pci_dev *pdev,
+ 		goto err_free_core;
+ 	}
+ 
++	ath11k_dbg(ab, ATH11K_DBG_BOOT, "pci probe %04x:%04x %04x:%04x\n",
++		   pdev->vendor, pdev->device,
++		   pdev->subsystem_vendor, pdev->subsystem_device);
++
++	ab->id.vendor = pdev->vendor;
++	ab->id.device = pdev->device;
++	ab->id.subsystem_vendor = pdev->subsystem_vendor;
++	ab->id.subsystem_device = pdev->subsystem_device;
++
+ 	switch (pci_dev->device) {
+ 	case QCA6390_DEVICE_ID:
+ 		ath11k_pci_read_hw_version(ab, &soc_hw_version_major,
+@@ -1273,6 +1292,7 @@ static int ath11k_pci_probe(struct pci_dev *pdev,
+ 		ab->hw_rev = ATH11K_HW_QCN9074_HW10;
+ 		break;
+ 	case WCN6855_DEVICE_ID:
++		ab->id.bdf_search = ATH11K_BDF_SEARCH_BUS_AND_BOARD;
+ 		ath11k_pci_read_hw_version(ab, &soc_hw_version_major,
+ 					   &soc_hw_version_minor);
+ 		switch (soc_hw_version_major) {
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.h b/drivers/net/wireless/ath/ath11k/qmi.h
+index 3bb0f9ef79968..d9e95b7007653 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.h
++++ b/drivers/net/wireless/ath/ath11k/qmi.h
+@@ -41,7 +41,7 @@ struct ath11k_base;
+ 
+ enum ath11k_qmi_file_type {
+ 	ATH11K_QMI_FILE_TYPE_BDF_GOLDEN,
+-	ATH11K_QMI_FILE_TYPE_CALDATA,
++	ATH11K_QMI_FILE_TYPE_CALDATA = 2,
+ 	ATH11K_QMI_FILE_TYPE_EEPROM,
+ 	ATH11K_QMI_MAX_FILE_TYPE,
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index a66b5bdd21679..8606170ba80d5 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -456,6 +456,9 @@ ath11k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
+ {
+ 	u16 bw;
+ 
++	if (end_freq <= start_freq)
++		return 0;
++
+ 	bw = end_freq - start_freq;
+ 	bw = min_t(u16, bw, max_bw);
+ 
+@@ -463,8 +466,10 @@ ath11k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
+ 		bw = 80;
+ 	else if (bw >= 40 && bw < 80)
+ 		bw = 40;
+-	else if (bw < 40)
++	else if (bw >= 20 && bw < 40)
+ 		bw = 20;
++	else
++		bw = 0;
+ 
+ 	return bw;
+ }
+@@ -488,73 +493,77 @@ ath11k_reg_update_weather_radar_band(struct ath11k_base *ab,
+ 				     struct cur_reg_rule *reg_rule,
+ 				     u8 *rule_idx, u32 flags, u16 max_bw)
+ {
++	u32 start_freq;
+ 	u32 end_freq;
+ 	u16 bw;
+ 	u8 i;
+ 
+ 	i = *rule_idx;
+ 
++	/* there might be situations when even the input rule must be dropped */
++	i--;
++
++	/* frequencies below weather radar */
+ 	bw = ath11k_reg_adjust_bw(reg_rule->start_freq,
+ 				  ETSI_WEATHER_RADAR_BAND_LOW, max_bw);
++	if (bw > 0) {
++		i++;
+ 
+-	ath11k_reg_update_rule(regd->reg_rules + i, reg_rule->start_freq,
+-			       ETSI_WEATHER_RADAR_BAND_LOW, bw,
+-			       reg_rule->ant_gain, reg_rule->reg_power,
+-			       flags);
++		ath11k_reg_update_rule(regd->reg_rules + i,
++				       reg_rule->start_freq,
++				       ETSI_WEATHER_RADAR_BAND_LOW, bw,
++				       reg_rule->ant_gain, reg_rule->reg_power,
++				       flags);
+ 
+-	ath11k_dbg(ab, ATH11K_DBG_REG,
+-		   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+-		   i + 1, reg_rule->start_freq, ETSI_WEATHER_RADAR_BAND_LOW,
+-		   bw, reg_rule->ant_gain, reg_rule->reg_power,
+-		   regd->reg_rules[i].dfs_cac_ms,
+-		   flags);
+-
+-	if (reg_rule->end_freq > ETSI_WEATHER_RADAR_BAND_HIGH)
+-		end_freq = ETSI_WEATHER_RADAR_BAND_HIGH;
+-	else
+-		end_freq = reg_rule->end_freq;
++		ath11k_dbg(ab, ATH11K_DBG_REG,
++			   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
++			   i + 1, reg_rule->start_freq,
++			   ETSI_WEATHER_RADAR_BAND_LOW, bw, reg_rule->ant_gain,
++			   reg_rule->reg_power, regd->reg_rules[i].dfs_cac_ms,
++			   flags);
++	}
+ 
+-	bw = ath11k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
+-				  max_bw);
++	/* weather radar frequencies */
++	start_freq = max_t(u32, reg_rule->start_freq,
++			   ETSI_WEATHER_RADAR_BAND_LOW);
++	end_freq = min_t(u32, reg_rule->end_freq, ETSI_WEATHER_RADAR_BAND_HIGH);
+ 
+-	i++;
++	bw = ath11k_reg_adjust_bw(start_freq, end_freq, max_bw);
++	if (bw > 0) {
++		i++;
+ 
+-	ath11k_reg_update_rule(regd->reg_rules + i,
+-			       ETSI_WEATHER_RADAR_BAND_LOW, end_freq, bw,
+-			       reg_rule->ant_gain, reg_rule->reg_power,
+-			       flags);
++		ath11k_reg_update_rule(regd->reg_rules + i, start_freq,
++				       end_freq, bw, reg_rule->ant_gain,
++				       reg_rule->reg_power, flags);
+ 
+-	regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
++		regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
+ 
+-	ath11k_dbg(ab, ATH11K_DBG_REG,
+-		   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+-		   i + 1, ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
+-		   bw, reg_rule->ant_gain, reg_rule->reg_power,
+-		   regd->reg_rules[i].dfs_cac_ms,
+-		   flags);
+-
+-	if (end_freq == reg_rule->end_freq) {
+-		regd->n_reg_rules--;
+-		*rule_idx = i;
+-		return;
++		ath11k_dbg(ab, ATH11K_DBG_REG,
++			   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
++			   i + 1, start_freq, end_freq, bw,
++			   reg_rule->ant_gain, reg_rule->reg_power,
++			   regd->reg_rules[i].dfs_cac_ms, flags);
+ 	}
+ 
++	/* frequencies above weather radar */
+ 	bw = ath11k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_HIGH,
+ 				  reg_rule->end_freq, max_bw);
++	if (bw > 0) {
++		i++;
+ 
+-	i++;
+-
+-	ath11k_reg_update_rule(regd->reg_rules + i, ETSI_WEATHER_RADAR_BAND_HIGH,
+-			       reg_rule->end_freq, bw,
+-			       reg_rule->ant_gain, reg_rule->reg_power,
+-			       flags);
++		ath11k_reg_update_rule(regd->reg_rules + i,
++				       ETSI_WEATHER_RADAR_BAND_HIGH,
++				       reg_rule->end_freq, bw,
++				       reg_rule->ant_gain, reg_rule->reg_power,
++				       flags);
+ 
+-	ath11k_dbg(ab, ATH11K_DBG_REG,
+-		   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+-		   i + 1, ETSI_WEATHER_RADAR_BAND_HIGH, reg_rule->end_freq,
+-		   bw, reg_rule->ant_gain, reg_rule->reg_power,
+-		   regd->reg_rules[i].dfs_cac_ms,
+-		   flags);
++		ath11k_dbg(ab, ATH11K_DBG_REG,
++			   "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
++			   i + 1, ETSI_WEATHER_RADAR_BAND_HIGH,
++			   reg_rule->end_freq, bw, reg_rule->ant_gain,
++			   reg_rule->reg_power, regd->reg_rules[i].dfs_cac_ms,
++			   flags);
++	}
+ 
+ 	*rule_idx = i;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 04238c29419b5..c3699bd0452c9 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -1689,7 +1689,8 @@ int ath11k_wmi_vdev_install_key(struct ath11k *ar,
+ 	tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
+ 	tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_BYTE) |
+ 		      FIELD_PREP(WMI_TLV_LEN, key_len_aligned);
+-	memcpy(tlv->value, (u8 *)arg->key_data, key_len_aligned);
++	if (arg->key_data)
++		memcpy(tlv->value, (u8 *)arg->key_data, key_len_aligned);
+ 
+ 	ret = ath11k_wmi_cmd_send(wmi, skb, WMI_VDEV_INSTALL_KEY_CMDID);
+ 	if (ret) {
+@@ -5911,7 +5912,7 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 		ar = ab->pdevs[pdev_idx].ar;
+ 		kfree(ab->new_regd[pdev_idx]);
+ 		ab->new_regd[pdev_idx] = regd;
+-		ieee80211_queue_work(ar->hw, &ar->regd_update_work);
++		queue_work(ab->workqueue, &ar->regd_update_work);
+ 	} else {
+ 		/* This regd would be applied during mac registration and is
+ 		 * held constant throughout for regd intersection purpose
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 860da13bfb6ac..f06eec99de688 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -590,6 +590,13 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 			return;
+ 		}
+ 
++		if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
++			dev_err(&hif_dev->udev->dev,
++				"ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
++			RX_STAT_INC(skb_dropped);
++			return;
++		}
++
+ 		pad_len = 4 - (pkt_len & 0x3);
+ 		if (pad_len == 4)
+ 			pad_len = 0;
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index 0a1634238e673..6b45e63fae4ba 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -281,6 +281,7 @@ struct ath9k_htc_rxbuf {
+ struct ath9k_htc_rx {
+ 	struct list_head rxbuf;
+ 	spinlock_t rxbuflock;
++	bool initialized;
+ };
+ 
+ #define ATH9K_HTC_TX_CLEANUP_INTERVAL 50 /* ms */
+@@ -305,6 +306,7 @@ struct ath9k_htc_tx {
+ 	DECLARE_BITMAP(tx_slot, MAX_TX_BUF_NUM);
+ 	struct timer_list cleanup_timer;
+ 	spinlock_t tx_lock;
++	bool initialized;
+ };
+ 
+ struct ath9k_htc_tx_ctl {
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 8e69e8989f6d3..6a850a0bfa8ad 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -813,6 +813,11 @@ int ath9k_tx_init(struct ath9k_htc_priv *priv)
+ 	skb_queue_head_init(&priv->tx.data_vi_queue);
+ 	skb_queue_head_init(&priv->tx.data_vo_queue);
+ 	skb_queue_head_init(&priv->tx.tx_failed);
++
++	/* Allow ath9k_wmi_event_tasklet(WMI_TXSTATUS_EVENTID) to operate. */
++	smp_wmb();
++	priv->tx.initialized = true;
++
+ 	return 0;
+ }
+ 
+@@ -1130,6 +1135,10 @@ void ath9k_htc_rxep(void *drv_priv, struct sk_buff *skb,
+ 	struct ath9k_htc_rxbuf *rxbuf = NULL, *tmp_buf = NULL;
+ 	unsigned long flags;
+ 
++	/* Check if ath9k_rx_init() completed. */
++	if (!data_race(priv->rx.initialized))
++		goto err;
++
+ 	spin_lock_irqsave(&priv->rx.rxbuflock, flags);
+ 	list_for_each_entry(tmp_buf, &priv->rx.rxbuf, list) {
+ 		if (!tmp_buf->in_process) {
+@@ -1185,6 +1194,10 @@ int ath9k_rx_init(struct ath9k_htc_priv *priv)
+ 		list_add_tail(&rxbuf->list, &priv->rx.rxbuf);
+ 	}
+ 
++	/* Allow ath9k_htc_rxep() to operate. */
++	smp_wmb();
++	priv->rx.initialized = true;
++
+ 	return 0;
+ 
+ err:
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index fe29ad4b9023c..f315c54bd3ac0 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -169,6 +169,10 @@ void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
+ 					     &wmi->drv_priv->fatal_work);
+ 			break;
+ 		case WMI_TXSTATUS_EVENTID:
++			/* Check if ath9k_tx_init() completed. */
++			if (!data_race(priv->tx.initialized))
++				break;
++
+ 			spin_lock_bh(&priv->tx.tx_lock);
+ 			if (priv->tx.flags & ATH9K_HTC_OP_TX_DRAIN) {
+ 				spin_unlock_bh(&priv->tx.tx_lock);
+diff --git a/drivers/net/wireless/ath/wcn36xx/dxe.c b/drivers/net/wireless/ath/wcn36xx/dxe.c
+index aff04ef662663..e1a35c2eadb6c 100644
+--- a/drivers/net/wireless/ath/wcn36xx/dxe.c
++++ b/drivers/net/wireless/ath/wcn36xx/dxe.c
+@@ -272,6 +272,21 @@ static int wcn36xx_dxe_enable_ch_int(struct wcn36xx *wcn, u16 wcn_ch)
+ 	return 0;
+ }
+ 
++static void wcn36xx_dxe_disable_ch_int(struct wcn36xx *wcn, u16 wcn_ch)
++{
++	int reg_data = 0;
++
++	wcn36xx_dxe_read_register(wcn,
++				  WCN36XX_DXE_INT_MASK_REG,
++				  &reg_data);
++
++	reg_data &= ~wcn_ch;
++
++	wcn36xx_dxe_write_register(wcn,
++				   WCN36XX_DXE_INT_MASK_REG,
++				   (int)reg_data);
++}
++
+ static int wcn36xx_dxe_fill_skb(struct device *dev,
+ 				struct wcn36xx_dxe_ctl *ctl,
+ 				gfp_t gfp)
+@@ -869,7 +884,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		WCN36XX_DXE_WQ_TX_L);
+ 
+ 	wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_REG_CH_EN, &reg_data);
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
+ 
+ 	/***************************************/
+ 	/* Init descriptors for TX HIGH channel */
+@@ -893,9 +907,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 
+ 	wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_REG_CH_EN, &reg_data);
+ 
+-	/* Enable channel interrupts */
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
+-
+ 	/***************************************/
+ 	/* Init descriptors for RX LOW channel */
+ 	/***************************************/
+@@ -905,7 +916,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		goto out_err_rxl_ch;
+ 	}
+ 
+-
+ 	/* For RX we need to preallocated buffers */
+ 	wcn36xx_dxe_ch_alloc_skb(wcn, &wcn->dxe_rx_l_ch);
+ 
+@@ -928,9 +938,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		WCN36XX_DXE_REG_CTL_RX_L,
+ 		WCN36XX_DXE_CH_DEFAULT_CTL_RX_L);
+ 
+-	/* Enable channel interrupts */
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
+-
+ 	/***************************************/
+ 	/* Init descriptors for RX HIGH channel */
+ 	/***************************************/
+@@ -962,15 +969,18 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
+ 		WCN36XX_DXE_REG_CTL_RX_H,
+ 		WCN36XX_DXE_CH_DEFAULT_CTL_RX_H);
+ 
+-	/* Enable channel interrupts */
+-	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
+-
+ 	ret = wcn36xx_dxe_request_irqs(wcn);
+ 	if (ret < 0)
+ 		goto out_err_irq;
+ 
+ 	timer_setup(&wcn->tx_ack_timer, wcn36xx_dxe_tx_timer, 0);
+ 
++	/* Enable channel interrupts */
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
++	wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
++
+ 	return 0;
+ 
+ out_err_irq:
+@@ -987,6 +997,14 @@ out_err_txh_ch:
+ 
+ void wcn36xx_dxe_deinit(struct wcn36xx *wcn)
+ {
++	int reg_data = 0;
++
++	/* Disable channel interrupts */
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
++	wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
++
+ 	free_irq(wcn->tx_irq, wcn);
+ 	free_irq(wcn->rx_irq, wcn);
+ 	del_timer(&wcn->tx_ack_timer);
+@@ -996,6 +1014,15 @@ void wcn36xx_dxe_deinit(struct wcn36xx *wcn)
+ 		wcn->tx_ack_skb = NULL;
+ 	}
+ 
++	/* Put the DXE block into reset before freeing memory */
++	reg_data = WCN36XX_DXE_REG_RESET;
++	wcn36xx_dxe_write_register(wcn, WCN36XX_DXE_REG_CSR_RESET, reg_data);
++
+ 	wcn36xx_dxe_ch_free_skbs(wcn, &wcn->dxe_rx_l_ch);
+ 	wcn36xx_dxe_ch_free_skbs(wcn, &wcn->dxe_rx_h_ch);
++
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_tx_l_ch);
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_tx_h_ch);
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_rx_l_ch);
++	wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_rx_h_ch);
+ }
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index b04533bbc3a45..0747c27f3bd75 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -402,6 +402,7 @@ static void wcn36xx_change_opchannel(struct wcn36xx *wcn, int ch)
+ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ {
+ 	struct wcn36xx *wcn = hw->priv;
++	int ret;
+ 
+ 	wcn36xx_dbg(WCN36XX_DBG_MAC, "mac config changed 0x%08x\n", changed);
+ 
+@@ -417,17 +418,31 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 			 * want to receive/transmit regular data packets, then
+ 			 * simply stop the scan session and exit PS mode.
+ 			 */
+-			wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
+-						wcn->sw_scan_vif);
+-			wcn->sw_scan_channel = 0;
++			if (wcn->sw_scan_channel)
++				wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
++			if (wcn->sw_scan_init) {
++				wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
++							wcn->sw_scan_vif);
++			}
+ 		} else if (wcn->sw_scan) {
+ 			/* A scan is ongoing, do not change the operating
+ 			 * channel, but start a scan session on the channel.
+ 			 */
+-			wcn36xx_smd_init_scan(wcn, HAL_SYS_MODE_SCAN,
+-					      wcn->sw_scan_vif);
++			if (wcn->sw_scan_channel)
++				wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
++			if (!wcn->sw_scan_init) {
++				/* This can fail if we are unable to notify the
++				 * operating channel.
++				 */
++				ret = wcn36xx_smd_init_scan(wcn,
++							    HAL_SYS_MODE_SCAN,
++							    wcn->sw_scan_vif);
++				if (ret) {
++					mutex_unlock(&wcn->conf_mutex);
++					return -EIO;
++				}
++			}
+ 			wcn36xx_smd_start_scan(wcn, ch);
+-			wcn->sw_scan_channel = ch;
+ 		} else {
+ 			wcn36xx_change_opchannel(wcn, ch);
+ 		}
+@@ -722,7 +737,12 @@ static void wcn36xx_sw_scan_complete(struct ieee80211_hw *hw,
+ 	struct wcn36xx *wcn = hw->priv;
+ 
+ 	/* ensure that any scan session is finished */
+-	wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN, wcn->sw_scan_vif);
++	if (wcn->sw_scan_channel)
++		wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
++	if (wcn->sw_scan_init) {
++		wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
++					wcn->sw_scan_vif);
++	}
+ 	wcn->sw_scan = false;
+ 	wcn->sw_scan_opchannel = 0;
+ }
+diff --git a/drivers/net/wireless/ath/wcn36xx/smd.c b/drivers/net/wireless/ath/wcn36xx/smd.c
+index ed45e2cf039be..bb07740149456 100644
+--- a/drivers/net/wireless/ath/wcn36xx/smd.c
++++ b/drivers/net/wireless/ath/wcn36xx/smd.c
+@@ -722,6 +722,7 @@ int wcn36xx_smd_init_scan(struct wcn36xx *wcn, enum wcn36xx_hal_sys_mode mode,
+ 		wcn36xx_err("hal_init_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_init = true;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -752,6 +753,7 @@ int wcn36xx_smd_start_scan(struct wcn36xx *wcn, u8 scan_channel)
+ 		wcn36xx_err("hal_start_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_channel = scan_channel;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -782,6 +784,7 @@ int wcn36xx_smd_end_scan(struct wcn36xx *wcn, u8 scan_channel)
+ 		wcn36xx_err("hal_end_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_channel = 0;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -823,6 +826,7 @@ int wcn36xx_smd_finish_scan(struct wcn36xx *wcn,
+ 		wcn36xx_err("hal_finish_scan response failed err=%d\n", ret);
+ 		goto out;
+ 	}
++	wcn->sw_scan_init = false;
+ out:
+ 	mutex_unlock(&wcn->hal_mutex);
+ 	return ret;
+@@ -940,7 +944,7 @@ int wcn36xx_smd_update_channel_list(struct wcn36xx *wcn, struct cfg80211_scan_re
+ 
+ 	INIT_HAL_MSG((*msg_body), WCN36XX_HAL_UPDATE_CHANNEL_LIST_REQ);
+ 
+-	msg_body->num_channel = min_t(u8, req->n_channels, sizeof(msg_body->channels));
++	msg_body->num_channel = min_t(u8, req->n_channels, ARRAY_SIZE(msg_body->channels));
+ 	for (i = 0; i < msg_body->num_channel; i++) {
+ 		struct wcn36xx_hal_channel_param *param = &msg_body->channels[i];
+ 		u32 min_power = WCN36XX_HAL_DEFAULT_MIN_POWER;
+@@ -2732,7 +2736,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
+ 			wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
+ 				    tmp->bss_index);
+ 			vif = wcn36xx_priv_to_vif(tmp);
+-			ieee80211_connection_loss(vif);
++			ieee80211_beacon_loss(vif);
+ 		}
+ 		return 0;
+ 	}
+@@ -2747,7 +2751,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
+ 			wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
+ 				    rsp->bss_index);
+ 			vif = wcn36xx_priv_to_vif(tmp);
+-			ieee80211_connection_loss(vif);
++			ieee80211_beacon_loss(vif);
+ 			return 0;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
+index 75951ccbc840e..dd58dde8c8363 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
+@@ -272,7 +272,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	const struct wcn36xx_rate *rate;
+ 	struct ieee80211_hdr *hdr;
+ 	struct wcn36xx_rx_bd *bd;
+-	struct ieee80211_supported_band *sband;
+ 	u16 fc, sn;
+ 
+ 	/*
+@@ -314,8 +313,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	fc = __le16_to_cpu(hdr->frame_control);
+ 	sn = IEEE80211_SEQ_TO_SN(__le16_to_cpu(hdr->seq_ctrl));
+ 
+-	status.freq = WCN36XX_CENTER_FREQ(wcn);
+-	status.band = WCN36XX_BAND(wcn);
+ 	status.mactime = 10;
+ 	status.signal = -get_rssi0(bd);
+ 	status.antenna = 1;
+@@ -327,18 +324,36 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 
+ 	wcn36xx_dbg(WCN36XX_DBG_RX, "status.flags=%x\n", status.flag);
+ 
++	if (bd->scan_learn) {
++		/* If packet originate from hardware scanning, extract the
++		 * band/channel from bd descriptor.
++		 */
++		u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
++
++		if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
++			status.band = NL80211_BAND_5GHZ;
++			status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
++								     status.band);
++		} else {
++			status.band = NL80211_BAND_2GHZ;
++			status.freq = ieee80211_channel_to_frequency(hwch, status.band);
++		}
++	} else {
++		status.band = WCN36XX_BAND(wcn);
++		status.freq = WCN36XX_CENTER_FREQ(wcn);
++	}
++
+ 	if (bd->rate_id < ARRAY_SIZE(wcn36xx_rate_table)) {
+ 		rate = &wcn36xx_rate_table[bd->rate_id];
+ 		status.encoding = rate->encoding;
+ 		status.enc_flags = rate->encoding_flags;
+ 		status.bw = rate->bw;
+ 		status.rate_idx = rate->mcs_or_legacy_index;
+-		sband = wcn->hw->wiphy->bands[status.band];
+ 		status.nss = 1;
+ 
+ 		if (status.band == NL80211_BAND_5GHZ &&
+ 		    status.encoding == RX_ENC_LEGACY &&
+-		    status.rate_idx >= sband->n_bitrates) {
++		    status.rate_idx >= 4) {
+ 			/* no dsss rates in 5Ghz rates table */
+ 			status.rate_idx -= 4;
+ 		}
+@@ -353,22 +368,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	    ieee80211_is_probe_resp(hdr->frame_control))
+ 		status.boottime_ns = ktime_get_boottime_ns();
+ 
+-	if (bd->scan_learn) {
+-		/* If packet originates from hardware scanning, extract the
+-		 * band/channel from bd descriptor.
+-		 */
+-		u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
+-
+-		if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
+-			status.band = NL80211_BAND_5GHZ;
+-			status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
+-								     status.band);
+-		} else {
+-			status.band = NL80211_BAND_2GHZ;
+-			status.freq = ieee80211_channel_to_frequency(hwch, status.band);
+-		}
+-	}
+-
+ 	memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index 1c8d918137da2..fbd0558c2c196 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -248,6 +248,7 @@ struct wcn36xx {
+ 	struct cfg80211_scan_request *scan_req;
+ 	bool			sw_scan;
+ 	u8			sw_scan_opchannel;
++	bool			sw_scan_init;
+ 	u8			sw_scan_channel;
+ 	struct ieee80211_vif	*sw_scan_vif;
+ 	struct mutex		scan_lock;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index bf431fa4fe81f..2e4590876bc33 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -789,7 +789,7 @@ int iwl_sar_get_wgds_table(struct iwl_fw_runtime *fwrt)
+ 				 * looking up in ACPI
+ 				 */
+ 				if (wifi_pkg->package.count !=
+-				    min_size + profile_size * num_profiles) {
++				    hdr_size + profile_size * num_profiles) {
+ 					ret = -EINVAL;
+ 					goto out_free;
+ 				}
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
+index 4d671c878bb7a..23e27afe94a23 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
+@@ -419,7 +419,7 @@ struct iwl_geo_tx_power_profiles_cmd_v1 {
+  * struct iwl_geo_tx_power_profile_cmd_v2 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd.
+  * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation
+  * @table: offset profile per band.
+- * @table_revision: BIOS table revision.
++ * @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading)
+  */
+ struct iwl_geo_tx_power_profiles_cmd_v2 {
+ 	__le32 ops;
+@@ -431,7 +431,7 @@ struct iwl_geo_tx_power_profiles_cmd_v2 {
+  * struct iwl_geo_tx_power_profile_cmd_v3 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd.
+  * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation
+  * @table: offset profile per band.
+- * @table_revision: BIOS table revision.
++ * @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading)
+  */
+ struct iwl_geo_tx_power_profiles_cmd_v3 {
+ 	__le32 ops;
+@@ -443,7 +443,7 @@ struct iwl_geo_tx_power_profiles_cmd_v3 {
+  * struct iwl_geo_tx_power_profile_cmd_v4 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd.
+  * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation
+  * @table: offset profile per band.
+- * @table_revision: BIOS table revision.
++ * @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading)
+  */
+ struct iwl_geo_tx_power_profiles_cmd_v4 {
+ 	__le32 ops;
+@@ -455,7 +455,7 @@ struct iwl_geo_tx_power_profiles_cmd_v4 {
+  * struct iwl_geo_tx_power_profile_cmd_v5 - struct for PER_CHAIN_LIMIT_OFFSET_CMD cmd.
+  * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation
+  * @table: offset profile per band.
+- * @table_revision: BIOS table revision.
++ * @table_revision: 0 for not-South Korea, 1 for South Korea (the name is misleading)
+  */
+ struct iwl_geo_tx_power_profiles_cmd_v5 {
+ 	__le32 ops;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dump.c b/drivers/net/wireless/intel/iwlwifi/fw/dump.c
+index 016b3a4c5f513..6a37933a02169 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dump.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dump.c
+@@ -12,6 +12,7 @@
+ #include "iwl-io.h"
+ #include "iwl-prph.h"
+ #include "iwl-csr.h"
++#include "pnvm.h"
+ 
+ /*
+  * Note: This structure is read from the device with IO accesses,
+@@ -147,6 +148,7 @@ static void iwl_fwrt_dump_umac_error_log(struct iwl_fw_runtime *fwrt)
+ 	struct iwl_trans *trans = fwrt->trans;
+ 	struct iwl_umac_error_event_table table = {};
+ 	u32 base = fwrt->trans->dbg.umac_error_event_table;
++	char pnvm_name[MAX_PNVM_NAME];
+ 
+ 	if (!base &&
+ 	    !(fwrt->trans->dbg.error_event_table_tlv_status &
+@@ -164,6 +166,13 @@ static void iwl_fwrt_dump_umac_error_log(struct iwl_fw_runtime *fwrt)
+ 			fwrt->trans->status, table.valid);
+ 	}
+ 
++	if ((table.error_id & ~FW_SYSASSERT_CPU_MASK) ==
++	    FW_SYSASSERT_PNVM_MISSING) {
++		iwl_pnvm_get_fs_name(trans, pnvm_name, sizeof(pnvm_name));
++		IWL_ERR(fwrt, "PNVM data is missing, please install %s\n",
++			pnvm_name);
++	}
++
+ 	IWL_ERR(fwrt, "0x%08X | %s\n", table.error_id,
+ 		iwl_fw_lookup_assert_desc(table.error_id));
+ 	IWL_ERR(fwrt, "0x%08X | umac branchlink1\n", table.blink1);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/img.c b/drivers/net/wireless/intel/iwlwifi/fw/img.c
+index 24a9666736913..530674a35eeb2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/img.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/img.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright(c) 2019 - 2020 Intel Corporation
++ * Copyright(c) 2019 - 2021 Intel Corporation
+  */
+ 
+ #include "img.h"
+@@ -49,10 +49,9 @@ u8 iwl_fw_lookup_notif_ver(const struct iwl_fw *fw, u8 grp, u8 cmd, u8 def)
+ }
+ EXPORT_SYMBOL_GPL(iwl_fw_lookup_notif_ver);
+ 
+-#define FW_SYSASSERT_CPU_MASK 0xf0000000
+ static const struct {
+ 	const char *name;
+-	u8 num;
++	u32 num;
+ } advanced_lookup[] = {
+ 	{ "NMI_INTERRUPT_WDG", 0x34 },
+ 	{ "SYSASSERT", 0x35 },
+@@ -73,6 +72,7 @@ static const struct {
+ 	{ "NMI_INTERRUPT_ACTION_PT", 0x7C },
+ 	{ "NMI_INTERRUPT_UNKNOWN", 0x84 },
+ 	{ "NMI_INTERRUPT_INST_ACTION_PT", 0x86 },
++	{ "PNVM_MISSING", FW_SYSASSERT_PNVM_MISSING },
+ 	{ "ADVANCED_SYSASSERT", 0 },
+ };
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/img.h b/drivers/net/wireless/intel/iwlwifi/fw/img.h
+index 993bda17fa309..fa7b1780064c2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/img.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/img.h
+@@ -279,4 +279,8 @@ u8 iwl_fw_lookup_cmd_ver(const struct iwl_fw *fw, u8 grp, u8 cmd, u8 def);
+ 
+ u8 iwl_fw_lookup_notif_ver(const struct iwl_fw *fw, u8 grp, u8 cmd, u8 def);
+ const char *iwl_fw_lookup_assert_desc(u32 num);
++
++#define FW_SYSASSERT_CPU_MASK		0xf0000000
++#define FW_SYSASSERT_PNVM_MISSING	0x0010070d
++
+ #endif  /* __iwl_fw_img_h__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+index ff79a2ecb2422..70f9dc7ecb0eb 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+@@ -105,9 +105,10 @@
+ /* GIO Chicken Bits (PCI Express bus link power management) */
+ #define CSR_GIO_CHICKEN_BITS    (CSR_BASE+0x100)
+ 
+-/* Doorbell NMI (since Bz) */
++/* Doorbell - since Bz
++ * connected to UREG_DOORBELL_TO_ISR6 (lower 16 bits only)
++ */
+ #define CSR_DOORBELL_VECTOR	(CSR_BASE + 0x130)
+-#define CSR_DOORBELL_VECTOR_NMI	BIT(1)
+ 
+ /* host chicken bits */
+ #define CSR_HOST_CHICKEN	(CSR_BASE + 0x204)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index 5cec467b995bb..f53ce9c086947 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -130,6 +130,9 @@ static void iwl_dealloc_ucode(struct iwl_drv *drv)
+ 
+ 	for (i = 0; i < IWL_UCODE_TYPE_MAX; i++)
+ 		iwl_free_fw_img(drv, drv->fw.img + i);
++
++	/* clear the data for the aborted load case */
++	memset(&drv->fw, 0, sizeof(drv->fw));
+ }
+ 
+ static int iwl_alloc_fw_desc(struct iwl_drv *drv, struct fw_desc *desc,
+@@ -1375,6 +1378,7 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 	int i;
+ 	bool load_module = false;
+ 	bool usniffer_images = false;
++	bool failure = true;
+ 
+ 	fw->ucode_capa.max_probe_length = IWL_DEFAULT_MAX_PROBE_LENGTH;
+ 	fw->ucode_capa.standard_phy_calibration_size =
+@@ -1635,15 +1639,9 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 	 * else from proceeding if the module fails to load
+ 	 * or hangs loading.
+ 	 */
+-	if (load_module) {
++	if (load_module)
+ 		request_module("%s", op->name);
+-#ifdef CONFIG_IWLWIFI_OPMODE_MODULAR
+-		if (err)
+-			IWL_ERR(drv,
+-				"failed to load module %s (error %d), is dynamic loading enabled?\n",
+-				op->name, err);
+-#endif
+-	}
++	failure = false;
+ 	goto free;
+ 
+  try_again:
+@@ -1659,6 +1657,9 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 	complete(&drv->request_firmware_complete);
+ 	device_release_driver(drv->trans->dev);
+  free:
++	if (failure)
++		iwl_dealloc_ucode(drv);
++
+ 	if (pieces) {
+ 		for (i = 0; i < ARRAY_SIZE(pieces->img); i++)
+ 			kfree(pieces->img[i].sec);
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-io.c b/drivers/net/wireless/intel/iwlwifi/iwl-io.c
+index 46917b4216b30..253eac4cbf597 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-io.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-io.c
+@@ -218,7 +218,7 @@ void iwl_force_nmi(struct iwl_trans *trans)
+ 				    UREG_DOORBELL_TO_ISR6_NMI_BIT);
+ 	else
+ 		iwl_write32(trans, CSR_DOORBELL_VECTOR,
+-			    CSR_DOORBELL_VECTOR_NMI);
++			    UREG_DOORBELL_TO_ISR6_NMI_BIT);
+ }
+ IWL_EXPORT_SYMBOL(iwl_force_nmi);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+index 949fb790f8fb7..628aee634b2ad 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+@@ -511,7 +511,7 @@ iwl_mvm_ftm_put_target(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ 		rcu_read_lock();
+ 
+ 		sta = rcu_dereference(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id]);
+-		if (sta->mfp)
++		if (sta->mfp && (peer->ftm.trigger_based || peer->ftm.non_trigger_based))
+ 			FTM_PUT_FLAG(PMF);
+ 
+ 		rcu_read_unlock();
+@@ -1066,7 +1066,7 @@ static void iwl_mvm_ftm_rtt_smoothing(struct iwl_mvm *mvm,
+ 	overshoot = IWL_MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT;
+ 	alpha = IWL_MVM_FTM_INITIATOR_SMOOTH_ALPHA;
+ 
+-	rtt_avg = (alpha * rtt + (100 - alpha) * resp->rtt_avg) / 100;
++	rtt_avg = div_s64(alpha * rtt + (100 - alpha) * resp->rtt_avg, 100);
+ 
+ 	IWL_DEBUG_INFO(mvm,
+ 		       "%pM: prev rtt_avg=%lld, new rtt_avg=%lld, rtt=%lld\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 863fec150e536..9eb78461f2800 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -820,6 +820,7 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
+ 	u16 len;
+ 	u32 n_bands;
+ 	u32 n_profiles;
++	u32 sk = 0;
+ 	int ret;
+ 	u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, PHY_OPS_GROUP,
+ 					   PER_CHAIN_LIMIT_OFFSET_CMD,
+@@ -879,19 +880,26 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
+ 	if (ret)
+ 		return 0;
+ 
++	/* Only set to South Korea if the table revision is 1 */
++	if (mvm->fwrt.geo_rev == 1)
++		sk = 1;
++
+ 	/*
+-	 * Set the revision on versions that contain it.
++	 * Set the table_revision to South Korea (1) or not (0).  The
++	 * element name is misleading, as it doesn't contain the table
++	 * revision number, but whether the South Korea variation
++	 * should be used.
+ 	 * This must be done after calling iwl_sar_geo_init().
+ 	 */
+ 	if (cmd_ver == 5)
+-		cmd.v5.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
++		cmd.v5.table_revision = cpu_to_le32(sk);
+ 	else if (cmd_ver == 4)
+-		cmd.v4.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
++		cmd.v4.table_revision = cpu_to_le32(sk);
+ 	else if (cmd_ver == 3)
+-		cmd.v3.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
++		cmd.v3.table_revision = cpu_to_le32(sk);
+ 	else if (fw_has_api(&mvm->fwrt.fw->ucode_capa,
+ 			    IWL_UCODE_TLV_API_SAR_TABLE_VER))
+-		cmd.v2.table_revision = cpu_to_le32(mvm->fwrt.geo_rev);
++		cmd.v2.table_revision = cpu_to_le32(sk);
+ 
+ 	return iwl_mvm_send_cmd_pdu(mvm,
+ 				    WIDE_ID(PHY_OPS_GROUP,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 897e3b91ddb2f..9c5c10908f013 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1688,6 +1688,7 @@ static void iwl_mvm_recalc_multicast(struct iwl_mvm *mvm)
+ 	struct iwl_mvm_mc_iter_data iter_data = {
+ 		.mvm = mvm,
+ 	};
++	int ret;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+@@ -1697,6 +1698,22 @@ static void iwl_mvm_recalc_multicast(struct iwl_mvm *mvm)
+ 	ieee80211_iterate_active_interfaces_atomic(
+ 		mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
+ 		iwl_mvm_mc_iface_iterator, &iter_data);
++
++	/*
++	 * Send a (synchronous) ech command so that we wait for the
++	 * multiple asynchronous MCAST_FILTER_CMD commands sent by
++	 * the interface iterator. Otherwise, we might get here over
++	 * and over again (by userspace just sending a lot of these)
++	 * and the CPU can send them faster than the firmware can
++	 * process them.
++	 * Note that the CPU is still faster - but with this we'll
++	 * actually send fewer commands overall because the CPU will
++	 * not schedule the work in mac80211 as frequently if it's
++	 * still running when rescheduled (possibly multiple times).
++	 */
++	ret = iwl_mvm_send_cmd_pdu(mvm, ECHO_CMD, 0, 0, NULL);
++	if (ret)
++		IWL_ERR(mvm, "Failed to synchronize multicast groups update\n");
+ }
+ 
+ static u64 iwl_mvm_prepare_multicast(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index e0601f802628c..1e2a55ccf1926 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -121,12 +121,39 @@ static int iwl_mvm_create_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	struct iwl_rx_mpdu_desc *desc = (void *)pkt->data;
+ 	unsigned int headlen, fraglen, pad_len = 0;
+ 	unsigned int hdrlen = ieee80211_hdrlen(hdr->frame_control);
++	u8 mic_crc_len = u8_get_bits(desc->mac_flags1,
++				     IWL_RX_MPDU_MFLG1_MIC_CRC_LEN_MASK) << 1;
+ 
+ 	if (desc->mac_flags2 & IWL_RX_MPDU_MFLG2_PAD) {
+ 		len -= 2;
+ 		pad_len = 2;
+ 	}
+ 
++	/*
++	 * For a non-monitor interface, strip the bytes the RADA might not
++	 * have removed. As a monitor interface cannot exist alongside other
++	 * interfaces, this removal is safe.
++	 */
++	if (mic_crc_len && !ieee80211_hw_check(mvm->hw, RX_INCLUDES_FCS)) {
++		u32 pkt_flags = le32_to_cpu(pkt->len_n_flags);
++
++		/*
++		 * If RADA was not enabled then decryption was not performed so
++		 * the MIC cannot be removed.
++		 */
++		if (!(pkt_flags & FH_RSCSR_RADA_EN)) {
++			if (WARN_ON(crypt_len > mic_crc_len))
++				return -EINVAL;
++
++			mic_crc_len -= crypt_len;
++		}
++
++		if (WARN_ON(mic_crc_len > len))
++			return -EINVAL;
++
++		len -= mic_crc_len;
++	}
++
+ 	/* If frame is small enough to fit in skb->head, pull it completely.
+ 	 * If not, only pull ieee80211_hdr (including crypto if present, and
+ 	 * an additional 8 bytes for SNAP/ethertype, see below) so that
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index a138b5c4cce84..960b21719b80d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1924,22 +1924,19 @@ static void iwl_mvm_scan_6ghz_passive_scan(struct iwl_mvm *mvm,
+ 	}
+ 
+ 	/*
+-	 * 6GHz passive scan is allowed while associated in a defined time
+-	 * interval following HW reset or resume flow
++	 * 6GHz passive scan is allowed in a defined time interval following HW
++	 * reset or resume flow, or while not associated and a large interval
++	 * has passed since the last 6GHz passive scan.
+ 	 */
+-	if (vif->bss_conf.assoc &&
++	if ((vif->bss_conf.assoc ||
++	     time_after(mvm->last_6ghz_passive_scan_jiffies +
++			(IWL_MVM_6GHZ_PASSIVE_SCAN_TIMEOUT * HZ), jiffies)) &&
+ 	    (time_before(mvm->last_reset_or_resume_time_jiffies +
+ 			 (IWL_MVM_6GHZ_PASSIVE_SCAN_ASSOC_TIMEOUT * HZ),
+ 			 jiffies))) {
+-		IWL_DEBUG_SCAN(mvm, "6GHz passive scan: associated\n");
+-		return;
+-	}
+-
+-	/* No need for 6GHz passive scan if not enough time elapsed */
+-	if (time_after(mvm->last_6ghz_passive_scan_jiffies +
+-		       (IWL_MVM_6GHZ_PASSIVE_SCAN_TIMEOUT * HZ), jiffies)) {
+-		IWL_DEBUG_SCAN(mvm,
+-			       "6GHz passive scan: timeout did not expire\n");
++		IWL_DEBUG_SCAN(mvm, "6GHz passive scan: %s\n",
++			       vif->bss_conf.assoc ? "associated" :
++			       "timeout did not expire");
+ 		return;
+ 	}
+ 
+@@ -2498,7 +2495,7 @@ static int iwl_mvm_check_running_scans(struct iwl_mvm *mvm, int type)
+ 	return -EIO;
+ }
+ 
+-#define SCAN_TIMEOUT 20000
++#define SCAN_TIMEOUT 30000
+ 
+ void iwl_mvm_scan_timeout_wk(struct work_struct *work)
+ {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index e91f8e889df70..ab06dcda1462a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -49,14 +49,13 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
+ 	struct iwl_mvm *mvm = container_of(wk, struct iwl_mvm, roc_done_wk);
+ 
+ 	/*
+-	 * Clear the ROC_RUNNING /ROC_AUX_RUNNING status bit.
++	 * Clear the ROC_RUNNING status bit.
+ 	 * This will cause the TX path to drop offchannel transmissions.
+ 	 * That would also be done by mac80211, but it is racy, in particular
+ 	 * in the case that the time event actually completed in the firmware
+ 	 * (which is handled in iwl_mvm_te_handle_notif).
+ 	 */
+ 	clear_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status);
+-	clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status);
+ 
+ 	synchronize_net();
+ 
+@@ -82,9 +81,19 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
+ 			mvmvif = iwl_mvm_vif_from_mac80211(mvm->p2p_device_vif);
+ 			iwl_mvm_flush_sta(mvm, &mvmvif->bcast_sta, true);
+ 		}
+-	} else {
++	}
++
++	/*
++	 * Clear the ROC_AUX_RUNNING status bit.
++	 * This will cause the TX path to drop offchannel transmissions.
++	 * That would also be done by mac80211, but it is racy, in particular
++	 * in the case that the time event actually completed in the firmware
++	 * (which is handled in iwl_mvm_te_handle_notif).
++	 */
++	if (test_and_clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status)) {
+ 		/* do the same in case of hot spot 2.0 */
+ 		iwl_mvm_flush_sta(mvm, &mvm->aux_sta, true);
++
+ 		/* In newer version of this command an aux station is added only
+ 		 * in cases of dedicated tx queue and need to be removed in end
+ 		 * of use */
+@@ -687,11 +696,14 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
+ 	iwl_mvm_te_clear_data(mvm, te_data);
+ 	spin_unlock_bh(&mvm->time_event_lock);
+ 
+-	/* When session protection is supported, the te_data->id field
++	/* When session protection is used, the te_data->id field
+ 	 * is reused to save session protection's configuration.
++	 * For AUX ROC, HOT_SPOT_CMD is used and the te_data->id field is set
++	 * to HOT_SPOT_CMD.
+ 	 */
+ 	if (fw_has_capa(&mvm->fw->ucode_capa,
+-			IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) {
++			IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD) &&
++	    id != HOT_SPOT_CMD) {
+ 		if (mvmvif && id < SESSION_PROTECT_CONF_MAX_ID) {
+ 			/* Session protection is still ongoing. Cancel it */
+ 			iwl_mvm_cancel_session_protection(mvm, mvmvif, id);
+@@ -1027,7 +1039,7 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 			iwl_mvm_p2p_roc_finished(mvm);
+ 		} else {
+ 			iwl_mvm_remove_aux_roc_te(mvm, mvmvif,
+-						  &mvmvif->time_event_data);
++						  &mvmvif->hs_time_event_data);
+ 			iwl_mvm_roc_finished(mvm);
+ 		}
+ 
+@@ -1158,15 +1170,10 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
+ 			cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
+ 							mvmvif->color)),
+ 		.action = cpu_to_le32(FW_CTXT_ACTION_ADD),
++		.conf_id = cpu_to_le32(SESSION_PROTECT_CONF_ASSOC),
+ 		.duration_tu = cpu_to_le32(MSEC_TO_TU(duration)),
+ 	};
+ 
+-	/* The time_event_data.id field is reused to save session
+-	 * protection's configuration.
+-	 */
+-	mvmvif->time_event_data.id = SESSION_PROTECT_CONF_ASSOC;
+-	cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id);
+-
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+ 	spin_lock_bh(&mvm->time_event_lock);
+@@ -1180,6 +1187,11 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
+ 	}
+ 
+ 	iwl_mvm_te_clear_data(mvm, te_data);
++	/*
++	 * The time_event_data.id field is reused to save session
++	 * protection's configuration.
++	 */
++	te_data->id = le32_to_cpu(cmd.conf_id);
+ 	te_data->duration = le32_to_cpu(cmd.duration_tu);
+ 	te_data->vif = vif;
+ 	spin_unlock_bh(&mvm->time_event_lock);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 14602d6d6699f..8247014278f30 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -2266,7 +2266,12 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
+ 		}
+ 	}
+ 
+-	if (inta_hw & MSIX_HW_INT_CAUSES_REG_WAKEUP) {
++	/*
++	 * In some rare cases when the HW is in a bad state, we may
++	 * get this interrupt too early, when prph_info is still NULL.
++	 * So make sure that it's not NULL to prevent crashing.
++	 */
++	if (inta_hw & MSIX_HW_INT_CAUSES_REG_WAKEUP && trans_pcie->prph_info) {
+ 		u32 sleep_notif =
+ 			le32_to_cpu(trans_pcie->prph_info->sleep_notif);
+ 		if (sleep_notif == IWL_D3_SLEEP_STATUS_SUSPEND ||
+diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+index 451b060693501..0f3526b0c5b00 100644
+--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+@@ -1072,6 +1072,7 @@ int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
+ 	return 0;
+ err_free_tfds:
+ 	dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->dma_addr);
++	txq->tfds = NULL;
+ error:
+ 	if (txq->entries && cmd_queue)
+ 		for (i = 0; i < slots_num; i++)
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_event.c b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+index 68c63268e2e6b..2b2e6e0166e14 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_event.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+@@ -365,10 +365,12 @@ static void mwifiex_process_uap_tx_pause(struct mwifiex_private *priv,
+ 		sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
+ 		if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
+ 			sta_ptr->tx_pause = tp->tx_pause;
++			spin_unlock_bh(&priv->sta_list_spinlock);
+ 			mwifiex_update_ralist_tx_pause(priv, tp->peermac,
+ 						       tp->tx_pause);
++		} else {
++			spin_unlock_bh(&priv->sta_list_spinlock);
+ 		}
+-		spin_unlock_bh(&priv->sta_list_spinlock);
+ 	}
+ }
+ 
+@@ -400,11 +402,13 @@ static void mwifiex_process_sta_tx_pause(struct mwifiex_private *priv,
+ 			sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
+ 			if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
+ 				sta_ptr->tx_pause = tp->tx_pause;
++				spin_unlock_bh(&priv->sta_list_spinlock);
+ 				mwifiex_update_ralist_tx_pause(priv,
+ 							       tp->peermac,
+ 							       tp->tx_pause);
++			} else {
++				spin_unlock_bh(&priv->sta_list_spinlock);
+ 			}
+-			spin_unlock_bh(&priv->sta_list_spinlock);
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
+index 9736aa0ab7fd4..8f01fcbe93961 100644
+--- a/drivers/net/wireless/marvell/mwifiex/usb.c
++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
+@@ -130,7 +130,8 @@ static int mwifiex_usb_recv(struct mwifiex_adapter *adapter,
+ 		default:
+ 			mwifiex_dbg(adapter, ERROR,
+ 				    "unknown recv_type %#x\n", recv_type);
+-			return -1;
++			ret = -1;
++			goto exit_restore_skb;
+ 		}
+ 		break;
+ 	case MWIFIEX_USB_EP_DATA:
+diff --git a/drivers/net/wireless/mediatek/mt76/debugfs.c b/drivers/net/wireless/mediatek/mt76/debugfs.c
+index b8bcf22a07fd8..47e9911ee9fe6 100644
+--- a/drivers/net/wireless/mediatek/mt76/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/debugfs.c
+@@ -82,7 +82,7 @@ static int mt76_rx_queues_read(struct seq_file *s, void *data)
+ 
+ 		queued = mt76_is_usb(dev) ? q->ndesc - q->queued : q->queued;
+ 		seq_printf(s, " %9d | %9d | %9d | %9d |\n",
+-			   i, q->queued, q->head, q->tail);
++			   i, queued, q->head, q->tail);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 62807dc311c19..b0869ff86c49f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -1494,7 +1494,6 @@ EXPORT_SYMBOL_GPL(mt76_init_queue);
+ u16 mt76_calculate_default_rate(struct mt76_phy *phy, int rateidx)
+ {
+ 	int offset = 0;
+-	struct ieee80211_rate *rate;
+ 
+ 	if (phy->chandef.chan->band != NL80211_BAND_2GHZ)
+ 		offset = 4;
+@@ -1503,9 +1502,11 @@ u16 mt76_calculate_default_rate(struct mt76_phy *phy, int rateidx)
+ 	if (rateidx < 0)
+ 		rateidx = 0;
+ 
+-	rate = &mt76_rates[offset + rateidx];
++	rateidx += offset;
++	if (rateidx >= ARRAY_SIZE(mt76_rates))
++		rateidx = offset;
+ 
+-	return rate->hw_value;
++	return mt76_rates[rateidx].hw_value;
+ }
+ EXPORT_SYMBOL_GPL(mt76_calculate_default_rate);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+index fe03e31989bb1..a9ac61b9f854a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+@@ -525,6 +525,10 @@ mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
+ 	if (rxd2 & MT_RXD2_NORMAL_TKIP_MIC_ERR)
+ 		status->flag |= RX_FLAG_MMIC_ERROR;
+ 
++	/* ICV error or CCMP/BIP/WPI MIC error */
++	if (rxd2 & MT_RXD2_NORMAL_ICV_ERR)
++		status->flag |= RX_FLAG_ONLY_MONITOR;
++
+ 	if (FIELD_GET(MT_RXD2_NORMAL_SEC_MODE, rxd2) != 0 &&
+ 	    !(rxd2 & (MT_RXD2_NORMAL_CLM | MT_RXD2_NORMAL_CM))) {
+ 		status->flag |= RX_FLAG_DECRYPTED;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 423f69015e3ec..c79abce543f3b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -286,9 +286,16 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
+ 	if (rxd2 & MT_RXD2_NORMAL_AMSDU_ERR)
+ 		return -EINVAL;
+ 
++	hdr_trans = rxd1 & MT_RXD1_NORMAL_HDR_TRANS;
++	if (hdr_trans && (rxd2 & MT_RXD2_NORMAL_CM))
++		return -EINVAL;
++
++	/* ICV error or CCMP/BIP/WPI MIC error */
++	if (rxd2 & MT_RXD2_NORMAL_ICV_ERR)
++		status->flag |= RX_FLAG_ONLY_MONITOR;
++
+ 	unicast = (rxd1 & MT_RXD1_NORMAL_ADDR_TYPE) == MT_RXD1_NORMAL_U2M;
+ 	idx = FIELD_GET(MT_RXD2_NORMAL_WLAN_IDX, rxd2);
+-	hdr_trans = rxd1 & MT_RXD1_NORMAL_HDR_TRANS;
+ 	status->wcid = mt7615_rx_get_wcid(dev, idx, unicast);
+ 
+ 	if (status->wcid) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 890d9b07e1563..1fdcada157d6b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -73,7 +73,7 @@ static int mt7615_start(struct ieee80211_hw *hw)
+ 			goto out;
+ 	}
+ 
+-	ret = mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD_SET_RX_PATH);
++	ret = mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD(SET_RX_PATH));
+ 	if (ret)
+ 		goto out;
+ 
+@@ -211,11 +211,9 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
+ 	mvif->mt76.omac_idx = idx;
+ 
+ 	mvif->mt76.band_idx = ext_phy;
+-	if (mt7615_ext_phy(dev))
+-		mvif->mt76.wmm_idx = ext_phy * (MT7615_MAX_WMM_SETS / 2) +
+-				mvif->mt76.idx % (MT7615_MAX_WMM_SETS / 2);
+-	else
+-		mvif->mt76.wmm_idx = mvif->mt76.idx % MT7615_MAX_WMM_SETS;
++	mvif->mt76.wmm_idx = vif->type != NL80211_IFTYPE_AP;
++	if (ext_phy)
++		mvif->mt76.wmm_idx += 2;
+ 
+ 	dev->mt76.vif_mask |= BIT(mvif->mt76.idx);
+ 	dev->omac_mask |= BIT_ULL(mvif->mt76.omac_idx);
+@@ -331,7 +329,7 @@ int mt7615_set_channel(struct mt7615_phy *phy)
+ 			goto out;
+ 	}
+ 
+-	ret = mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD_CHANNEL_SWITCH);
++	ret = mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD(CHANNEL_SWITCH));
+ 	if (ret)
+ 		goto out;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 25f9cbe2cd610..fcbcfc9f5a04f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -87,7 +87,7 @@ struct mt7663_fw_buf {
+ void mt7615_mcu_fill_msg(struct mt7615_dev *dev, struct sk_buff *skb,
+ 			 int cmd, int *wait_seq)
+ {
+-	int txd_len, mcu_cmd = cmd & MCU_CMD_MASK;
++	int txd_len, mcu_cmd = FIELD_GET(__MCU_CMD_FIELD_ID, cmd);
+ 	struct mt7615_uni_txd *uni_txd;
+ 	struct mt7615_mcu_txd *mcu_txd;
+ 	u8 seq, q_idx, pkt_fmt;
+@@ -103,7 +103,7 @@ void mt7615_mcu_fill_msg(struct mt7615_dev *dev, struct sk_buff *skb,
+ 	if (wait_seq)
+ 		*wait_seq = seq;
+ 
+-	txd_len = cmd & MCU_UNI_PREFIX ? sizeof(*uni_txd) : sizeof(*mcu_txd);
++	txd_len = cmd & __MCU_CMD_FIELD_UNI ? sizeof(*uni_txd) : sizeof(*mcu_txd);
+ 	txd = (__le32 *)skb_push(skb, txd_len);
+ 
+ 	if (cmd != MCU_CMD_FW_SCATTER) {
+@@ -124,7 +124,7 @@ void mt7615_mcu_fill_msg(struct mt7615_dev *dev, struct sk_buff *skb,
+ 	      FIELD_PREP(MT_TXD1_PKT_FMT, pkt_fmt);
+ 	txd[1] = cpu_to_le32(val);
+ 
+-	if (cmd & MCU_UNI_PREFIX) {
++	if (cmd & __MCU_CMD_FIELD_UNI) {
+ 		uni_txd = (struct mt7615_uni_txd *)txd;
+ 		uni_txd->len = cpu_to_le16(skb->len - sizeof(uni_txd->txd));
+ 		uni_txd->option = MCU_CMD_UNI_EXT_ACK;
+@@ -142,28 +142,17 @@ void mt7615_mcu_fill_msg(struct mt7615_dev *dev, struct sk_buff *skb,
+ 	mcu_txd->s2d_index = MCU_S2D_H2N;
+ 	mcu_txd->pkt_type = MCU_PKT_ID;
+ 	mcu_txd->seq = seq;
++	mcu_txd->cid = mcu_cmd;
++	mcu_txd->ext_cid = FIELD_GET(__MCU_CMD_FIELD_EXT_ID, cmd);
+ 
+-	switch (cmd & ~MCU_CMD_MASK) {
+-	case MCU_FW_PREFIX:
+-		mcu_txd->set_query = MCU_Q_NA;
+-		mcu_txd->cid = mcu_cmd;
+-		break;
+-	case MCU_CE_PREFIX:
+-		if (cmd & MCU_QUERY_MASK)
+-			mcu_txd->set_query = MCU_Q_QUERY;
+-		else
+-			mcu_txd->set_query = MCU_Q_SET;
+-		mcu_txd->cid = mcu_cmd;
+-		break;
+-	default:
+-		mcu_txd->cid = MCU_CMD_EXT_CID;
+-		if (cmd & MCU_QUERY_PREFIX)
++	if (mcu_txd->ext_cid || (cmd & MCU_CE_PREFIX)) {
++		if (cmd & __MCU_CMD_FIELD_QUERY)
+ 			mcu_txd->set_query = MCU_Q_QUERY;
+ 		else
+ 			mcu_txd->set_query = MCU_Q_SET;
+-		mcu_txd->ext_cid = mcu_cmd;
+-		mcu_txd->ext_cid_ack = 1;
+-		break;
++		mcu_txd->ext_cid_ack = !!mcu_txd->ext_cid;
++	} else {
++		mcu_txd->set_query = MCU_Q_NA;
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(mt7615_mcu_fill_msg);
+@@ -184,42 +173,32 @@ int mt7615_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 	if (seq != rxd->seq)
+ 		return -EAGAIN;
+ 
+-	switch (cmd) {
+-	case MCU_CMD_PATCH_SEM_CONTROL:
++	if (cmd == MCU_CMD_PATCH_SEM_CONTROL) {
+ 		skb_pull(skb, sizeof(*rxd) - 4);
+ 		ret = *skb->data;
+-		break;
+-	case MCU_EXT_CMD_GET_TEMP:
++	} else if (cmd == MCU_EXT_CMD(THERMAL_CTRL)) {
+ 		skb_pull(skb, sizeof(*rxd));
+ 		ret = le32_to_cpu(*(__le32 *)skb->data);
+-		break;
+-	case MCU_EXT_CMD_RF_REG_ACCESS | MCU_QUERY_PREFIX:
++	} else if (cmd == MCU_EXT_QUERY(RF_REG_ACCESS)) {
+ 		skb_pull(skb, sizeof(*rxd));
+ 		ret = le32_to_cpu(*(__le32 *)&skb->data[8]);
+-		break;
+-	case MCU_UNI_CMD_DEV_INFO_UPDATE:
+-	case MCU_UNI_CMD_BSS_INFO_UPDATE:
+-	case MCU_UNI_CMD_STA_REC_UPDATE:
+-	case MCU_UNI_CMD_HIF_CTRL:
+-	case MCU_UNI_CMD_OFFLOAD:
+-	case MCU_UNI_CMD_SUSPEND: {
++	} else if (cmd == MCU_UNI_CMD(DEV_INFO_UPDATE) ||
++		   cmd == MCU_UNI_CMD(BSS_INFO_UPDATE) ||
++		   cmd == MCU_UNI_CMD(STA_REC_UPDATE) ||
++		   cmd == MCU_UNI_CMD(HIF_CTRL) ||
++		   cmd == MCU_UNI_CMD(OFFLOAD) ||
++		   cmd == MCU_UNI_CMD(SUSPEND)) {
+ 		struct mt7615_mcu_uni_event *event;
+ 
+ 		skb_pull(skb, sizeof(*rxd));
+ 		event = (struct mt7615_mcu_uni_event *)skb->data;
+ 		ret = le32_to_cpu(event->status);
+-		break;
+-	}
+-	case MCU_CMD_REG_READ: {
++	} else if (cmd == MCU_CMD_REG_READ) {
+ 		struct mt7615_mcu_reg_event *event;
+ 
+ 		skb_pull(skb, sizeof(*rxd));
+ 		event = (struct mt7615_mcu_reg_event *)skb->data;
+ 		ret = (int)le32_to_cpu(event->val);
+-		break;
+-	}
+-	default:
+-		break;
+ 	}
+ 
+ 	return ret;
+@@ -253,8 +232,7 @@ u32 mt7615_rf_rr(struct mt7615_dev *dev, u32 wf, u32 reg)
+ 		.address = cpu_to_le32(reg),
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76,
+-				 MCU_EXT_CMD_RF_REG_ACCESS | MCU_QUERY_PREFIX,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_QUERY(RF_REG_ACCESS),
+ 				 &req, sizeof(req), true);
+ }
+ 
+@@ -270,8 +248,8 @@ int mt7615_rf_wr(struct mt7615_dev *dev, u32 wf, u32 reg, u32 val)
+ 		.data = cpu_to_le32(val),
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_RF_REG_ACCESS, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(RF_REG_ACCESS),
++				 &req, sizeof(req), false);
+ }
+ 
+ void mt7622_trigger_hif_int(struct mt7615_dev *dev, bool en)
+@@ -658,8 +636,8 @@ mt7615_mcu_muar_config(struct mt7615_dev *dev, struct ieee80211_vif *vif,
+ 	if (enable)
+ 		ether_addr_copy(req.addr, addr);
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_MUAR_UPDATE, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(MUAR_UPDATE),
++				 &req, sizeof(req), true);
+ }
+ 
+ static int
+@@ -702,7 +680,7 @@ mt7615_mcu_add_dev(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 		return mt7615_mcu_muar_config(dev, vif, false, enable);
+ 
+ 	memcpy(data.tlv.omac_addr, vif->addr, ETH_ALEN);
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_DEV_INFO_UPDATE,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(DEV_INFO_UPDATE),
+ 				 &data, sizeof(data), true);
+ }
+ 
+@@ -771,7 +749,7 @@ mt7615_mcu_add_beacon_offload(struct mt7615_dev *dev,
+ 	dev_kfree_skb(skb);
+ 
+ out:
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_BCN_OFFLOAD, &req,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(BCN_OFFLOAD), &req,
+ 				 sizeof(req), true);
+ }
+ 
+@@ -802,8 +780,8 @@ mt7615_mcu_ctrl_pm_state(struct mt7615_dev *dev, int band, int state)
+ 		.band_idx = band,
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_PM_STATE_CTRL, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(PM_STATE_CTRL),
++				 &req, sizeof(req), true);
+ }
+ 
+ static int
+@@ -944,7 +922,7 @@ mt7615_mcu_add_bss(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 		mt7615_mcu_bss_ext_tlv(skb, mvif);
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_EXT_CMD_BSS_INFO_UPDATE, true);
++				     MCU_EXT_CMD(BSS_INFO_UPDATE), true);
+ }
+ 
+ static int
+@@ -966,8 +944,8 @@ mt7615_mcu_wtbl_tx_ba(struct mt7615_dev *dev,
+ 	mt76_connac_mcu_wtbl_ba_tlv(&dev->mt76, skb, params, enable, true,
+ 				    NULL, wtbl_hdr);
+ 
+-	err = mt76_mcu_skb_send_msg(&dev->mt76, skb, MCU_EXT_CMD_WTBL_UPDATE,
+-				    true);
++	err = mt76_mcu_skb_send_msg(&dev->mt76, skb,
++				    MCU_EXT_CMD(WTBL_UPDATE), true);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -979,7 +957,7 @@ mt7615_mcu_wtbl_tx_ba(struct mt7615_dev *dev,
+ 	mt76_connac_mcu_sta_ba_tlv(skb, params, enable, true);
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_EXT_CMD_STA_REC_UPDATE, true);
++				     MCU_EXT_CMD(STA_REC_UPDATE), true);
+ }
+ 
+ static int
+@@ -1001,7 +979,7 @@ mt7615_mcu_wtbl_rx_ba(struct mt7615_dev *dev,
+ 	mt76_connac_mcu_sta_ba_tlv(skb, params, enable, false);
+ 
+ 	err = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				    MCU_EXT_CMD_STA_REC_UPDATE, true);
++				    MCU_EXT_CMD(STA_REC_UPDATE), true);
+ 	if (err < 0 || !enable)
+ 		return err;
+ 
+@@ -1014,8 +992,8 @@ mt7615_mcu_wtbl_rx_ba(struct mt7615_dev *dev,
+ 	mt76_connac_mcu_wtbl_ba_tlv(&dev->mt76, skb, params, enable, false,
+ 				    NULL, wtbl_hdr);
+ 
+-	return mt76_mcu_skb_send_msg(&dev->mt76, skb, MCU_EXT_CMD_WTBL_UPDATE,
+-				     true);
++	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
++				     MCU_EXT_CMD(WTBL_UPDATE), true);
+ }
+ 
+ static int
+@@ -1057,7 +1035,7 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 						   NULL, wtbl_hdr);
+ 	}
+ 
+-	cmd = enable ? MCU_EXT_CMD_WTBL_UPDATE : MCU_EXT_CMD_STA_REC_UPDATE;
++	cmd = enable ? MCU_EXT_CMD(WTBL_UPDATE) : MCU_EXT_CMD(STA_REC_UPDATE);
+ 	skb = enable ? wskb : sskb;
+ 
+ 	err = mt76_mcu_skb_send_msg(&dev->mt76, skb, cmd, true);
+@@ -1068,7 +1046,7 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 		return err;
+ 	}
+ 
+-	cmd = enable ? MCU_EXT_CMD_STA_REC_UPDATE : MCU_EXT_CMD_WTBL_UPDATE;
++	cmd = enable ? MCU_EXT_CMD(STA_REC_UPDATE) : MCU_EXT_CMD(WTBL_UPDATE);
+ 	skb = enable ? sskb : wskb;
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb, cmd, true);
+@@ -1090,8 +1068,8 @@ mt7615_mcu_wtbl_update_hdr_trans(struct mt7615_dev *dev,
+ 
+ 	mt76_connac_mcu_wtbl_hdr_trans_tlv(skb, vif, &msta->wcid, NULL,
+ 					   wtbl_hdr);
+-	return mt76_mcu_skb_send_msg(&dev->mt76, skb, MCU_EXT_CMD_WTBL_UPDATE,
+-				     true);
++	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
++				     MCU_EXT_CMD(WTBL_UPDATE), true);
+ }
+ 
+ static const struct mt7615_mcu_ops wtbl_update_ops = {
+@@ -1136,7 +1114,7 @@ mt7615_mcu_sta_ba(struct mt7615_dev *dev,
+ 				    sta_wtbl, wtbl_hdr);
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_EXT_CMD_STA_REC_UPDATE, true);
++				     MCU_EXT_CMD(STA_REC_UPDATE), true);
+ }
+ 
+ static int
+@@ -1179,7 +1157,7 @@ mt7615_mcu_add_sta(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 		   struct ieee80211_sta *sta, bool enable)
+ {
+ 	return __mt7615_mcu_add_sta(phy->mt76, vif, sta, enable,
+-				    MCU_EXT_CMD_STA_REC_UPDATE, false);
++				    MCU_EXT_CMD(STA_REC_UPDATE), false);
+ }
+ 
+ static int
+@@ -1191,7 +1169,7 @@ mt7615_mcu_sta_update_hdr_trans(struct mt7615_dev *dev,
+ 
+ 	return mt76_connac_mcu_sta_update_hdr_trans(&dev->mt76,
+ 						    vif, &msta->wcid,
+-						    MCU_EXT_CMD_STA_REC_UPDATE);
++						    MCU_EXT_CMD(STA_REC_UPDATE));
+ }
+ 
+ static const struct mt7615_mcu_ops sta_update_ops = {
+@@ -1285,7 +1263,7 @@ mt7615_mcu_uni_add_beacon_offload(struct mt7615_dev *dev,
+ 	dev_kfree_skb(skb);
+ 
+ out:
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD_BSS_INFO_UPDATE,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ 				 &req, sizeof(req), true);
+ }
+ 
+@@ -1314,7 +1292,7 @@ mt7615_mcu_uni_add_sta(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 		       struct ieee80211_sta *sta, bool enable)
+ {
+ 	return __mt7615_mcu_add_sta(phy->mt76, vif, sta, enable,
+-				    MCU_UNI_CMD_STA_REC_UPDATE, true);
++				    MCU_UNI_CMD(STA_REC_UPDATE), true);
+ }
+ 
+ static int
+@@ -1348,7 +1326,7 @@ mt7615_mcu_uni_rx_ba(struct mt7615_dev *dev,
+ 	mt76_connac_mcu_sta_ba_tlv(skb, params, enable, false);
+ 
+ 	err = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				    MCU_UNI_CMD_STA_REC_UPDATE, true);
++				    MCU_UNI_CMD(STA_REC_UPDATE), true);
+ 	if (err < 0 || !enable)
+ 		return err;
+ 
+@@ -1369,7 +1347,7 @@ mt7615_mcu_uni_rx_ba(struct mt7615_dev *dev,
+ 				    sta_wtbl, wtbl_hdr);
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_UNI_CMD_STA_REC_UPDATE, true);
++				     MCU_UNI_CMD(STA_REC_UPDATE), true);
+ }
+ 
+ static int
+@@ -1381,7 +1359,7 @@ mt7615_mcu_sta_uni_update_hdr_trans(struct mt7615_dev *dev,
+ 
+ 	return mt76_connac_mcu_sta_update_hdr_trans(&dev->mt76,
+ 						    vif, &msta->wcid,
+-						    MCU_UNI_CMD_STA_REC_UPDATE);
++						    MCU_UNI_CMD(STA_REC_UPDATE));
+ }
+ 
+ static const struct mt7615_mcu_ops uni_update_ops = {
+@@ -1694,8 +1672,8 @@ int mt7615_mcu_fw_log_2_host(struct mt7615_dev *dev, u8 ctrl)
+ 		.ctrl_val = ctrl
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_FW_LOG_2_HOST, &data,
+-				 sizeof(data), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(FW_LOG_2_HOST),
++				 &data, sizeof(data), true);
+ }
+ 
+ static int mt7615_mcu_cal_cache_apply(struct mt7615_dev *dev)
+@@ -1707,7 +1685,7 @@ static int mt7615_mcu_cal_cache_apply(struct mt7615_dev *dev)
+ 		.cache_enable = true
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_CAL_CACHE, &data,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(CAL_CACHE), &data,
+ 				 sizeof(data), false);
+ }
+ 
+@@ -1977,7 +1955,7 @@ int mt7615_mcu_set_eeprom(struct mt7615_dev *dev)
+ 	skb_put_data(skb, eep + offset, eep_len);
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_EXT_CMD_EFUSE_BUFFER_MODE, true);
++				     MCU_EXT_CMD(EFUSE_BUFFER_MODE), true);
+ }
+ 
+ int mt7615_mcu_set_wmm(struct mt7615_dev *dev, u8 queue,
+@@ -2013,8 +1991,8 @@ int mt7615_mcu_set_wmm(struct mt7615_dev *dev, u8 queue,
+ 	if (params->cw_max)
+ 		req.cw_max = cpu_to_le16(fls(params->cw_max));
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_EDCA_UPDATE, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(EDCA_UPDATE),
++				 &req, sizeof(req), true);
+ }
+ 
+ int mt7615_mcu_set_dbdc(struct mt7615_dev *dev)
+@@ -2072,7 +2050,7 @@ int mt7615_mcu_set_dbdc(struct mt7615_dev *dev)
+ 	ADD_DBDC_ENTRY(DBDC_TYPE_MGMT, 1, 1);
+ 
+ out:
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_DBDC_CTRL, &req,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(DBDC_CTRL), &req,
+ 				 sizeof(req), true);
+ }
+ 
+@@ -2082,8 +2060,8 @@ int mt7615_mcu_del_wtbl_all(struct mt7615_dev *dev)
+ 		.operation = WTBL_RESET_ALL,
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(WTBL_UPDATE),
++				 &req, sizeof(req), true);
+ }
+ 
+ int mt7615_mcu_rdd_cmd(struct mt7615_dev *dev,
+@@ -2103,8 +2081,8 @@ int mt7615_mcu_rdd_cmd(struct mt7615_dev *dev,
+ 		.val = val,
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_CTRL, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(SET_RDD_CTRL),
++				 &req, sizeof(req), true);
+ }
+ 
+ int mt7615_mcu_set_fcc5_lpn(struct mt7615_dev *dev, int val)
+@@ -2117,8 +2095,8 @@ int mt7615_mcu_set_fcc5_lpn(struct mt7615_dev *dev, int val)
+ 		.min_lpn = cpu_to_le16(val),
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(SET_RADAR_TH),
++				 &req, sizeof(req), true);
+ }
+ 
+ int mt7615_mcu_set_pulse_th(struct mt7615_dev *dev,
+@@ -2146,8 +2124,8 @@ int mt7615_mcu_set_pulse_th(struct mt7615_dev *dev,
+ #undef  __req_field
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(SET_RADAR_TH),
++				 &req, sizeof(req), true);
+ }
+ 
+ int mt7615_mcu_set_radar_th(struct mt7615_dev *dev, int index,
+@@ -2193,8 +2171,8 @@ int mt7615_mcu_set_radar_th(struct mt7615_dev *dev, int index,
+ #undef __req_field_u32
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_TH, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(SET_RADAR_TH),
++				 &req, sizeof(req), true);
+ }
+ 
+ int mt7615_mcu_rdd_send_pattern(struct mt7615_dev *dev)
+@@ -2225,7 +2203,7 @@ int mt7615_mcu_rdd_send_pattern(struct mt7615_dev *dev)
+ 		req.pattern[i].start_time = cpu_to_le32(ts);
+ 	}
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RDD_PATTERN,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(SET_RDD_PATTERN),
+ 				 &req, sizeof(req), false);
+ }
+ 
+@@ -2394,8 +2372,8 @@ int mt7615_mcu_get_temperature(struct mt7615_dev *dev)
+ 		u8 rsv[3];
+ 	} req = {};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_GET_TEMP, &req,
+-				 sizeof(req), true);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_CTRL),
++				 &req, sizeof(req), true);
+ }
+ 
+ int mt7615_mcu_set_test_param(struct mt7615_dev *dev, u8 param, bool test_mode,
+@@ -2415,8 +2393,8 @@ int mt7615_mcu_set_test_param(struct mt7615_dev *dev, u8 param, bool test_mode,
+ 		.value = cpu_to_le32(val),
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_ATE_CTRL, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(ATE_CTRL),
++				 &req, sizeof(req), false);
+ }
+ 
+ int mt7615_mcu_set_sku_en(struct mt7615_phy *phy, bool enable)
+@@ -2434,8 +2412,8 @@ int mt7615_mcu_set_sku_en(struct mt7615_phy *phy, bool enable)
+ 	};
+ 
+ 	return mt76_mcu_send_msg(&dev->mt76,
+-				 MCU_EXT_CMD_TX_POWER_FEATURE_CTRL, &req,
+-				 sizeof(req), true);
++				 MCU_EXT_CMD(TX_POWER_FEATURE_CTRL),
++				 &req, sizeof(req), true);
+ }
+ 
+ static int mt7615_find_freq_idx(const u16 *freqs, int n_freqs, u16 cur)
+@@ -2574,7 +2552,7 @@ again:
+ 
+ out:
+ 	req.center_freq = cpu_to_le16(center_freq);
+-	ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_RXDCOC_CAL, &req,
++	ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(RXDCOC_CAL), &req,
+ 				sizeof(req), true);
+ 
+ 	if ((chandef->width == NL80211_CHAN_WIDTH_80P80 ||
+@@ -2695,8 +2673,8 @@ again:
+ 
+ out:
+ 	req.center_freq = cpu_to_le16(center_freq);
+-	ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_TXDPD_CAL, &req,
+-				sizeof(req), true);
++	ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(TXDPD_CAL),
++				&req, sizeof(req), true);
+ 
+ 	if ((chandef->width == NL80211_CHAN_WIDTH_80P80 ||
+ 	     chandef->width == NL80211_CHAN_WIDTH_160) && !req.is_freq2) {
+@@ -2724,7 +2702,7 @@ int mt7615_mcu_set_rx_hdr_trans_blacklist(struct mt7615_dev *dev)
+ 		.etype = cpu_to_le16(ETH_P_PAE),
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_RX_HDR_TRANS,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(RX_HDR_TRANS),
+ 				 &req, sizeof(req), false);
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+index a2465b49ecd0c..87b4aa52ee0f9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+@@ -28,8 +28,6 @@ static void mt7615_pci_init_work(struct work_struct *work)
+ 		return;
+ 
+ 	mt7615_init_work(dev);
+-	if (dev->dbdc_support)
+-		mt7615_register_ext_phy(dev);
+ }
+ 
+ static int mt7615_init_hardware(struct mt7615_dev *dev)
+@@ -160,6 +158,12 @@ int mt7615_register_device(struct mt7615_dev *dev)
+ 	mt7615_init_txpower(dev, &dev->mphy.sband_2g.sband);
+ 	mt7615_init_txpower(dev, &dev->mphy.sband_5g.sband);
+ 
++	if (dev->dbdc_support) {
++		ret = mt7615_register_ext_phy(dev);
++		if (ret)
++			return ret;
++	}
++
+ 	return mt7615_init_debugfs(dev);
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/testmode.c b/drivers/net/wireless/mediatek/mt76/mt7615/testmode.c
+index 59d99264f5e5f..e5544f4e69797 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/testmode.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/testmode.c
+@@ -91,7 +91,7 @@ mt7615_tm_set_tx_power(struct mt7615_phy *phy)
+ 	}
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_EXT_CMD_SET_TX_POWER_CTRL, false);
++				     MCU_EXT_CMD(SET_TX_POWER_CTRL), false);
+ }
+ 
+ static void
+@@ -229,7 +229,7 @@ mt7615_tm_set_tx_frames(struct mt7615_phy *phy, bool en)
+ 	struct ieee80211_tx_info *info;
+ 	struct sk_buff *skb = phy->mt76->test.tx_skb;
+ 
+-	mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD_SET_RX_PATH);
++	mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD(SET_RX_PATH));
+ 	mt7615_tm_set_tx_antenna(phy, en);
+ 	mt7615_tm_set_rx_enable(dev, !en);
+ 	if (!en || !skb)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+index af43bcb545781..306e9eaea9177 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+@@ -7,9 +7,6 @@ int mt76_connac_pm_wake(struct mt76_phy *phy, struct mt76_connac_pm *pm)
+ {
+ 	struct mt76_dev *dev = phy->dev;
+ 
+-	if (!pm->enable)
+-		return 0;
+-
+ 	if (mt76_is_usb(dev))
+ 		return 0;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 26b4b875dcc02..7733c8fad2413 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -176,7 +176,7 @@ int mt76_connac_mcu_set_mac_enable(struct mt76_dev *dev, int band, bool enable,
+ 		.band = band,
+ 	};
+ 
+-	return mt76_mcu_send_msg(dev, MCU_EXT_CMD_MAC_INIT_CTRL, &req_mac,
++	return mt76_mcu_send_msg(dev, MCU_EXT_CMD(MAC_INIT_CTRL), &req_mac,
+ 				 sizeof(req_mac), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_mac_enable);
+@@ -218,7 +218,7 @@ int mt76_connac_mcu_set_rts_thresh(struct mt76_dev *dev, u32 val, u8 band)
+ 		.pkt_thresh = cpu_to_le32(0x2),
+ 	};
+ 
+-	return mt76_mcu_send_msg(dev, MCU_EXT_CMD_PROTECT_CTRL, &req,
++	return mt76_mcu_send_msg(dev, MCU_EXT_CMD(PROTECT_CTRL), &req,
+ 				 sizeof(req), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_rts_thresh);
+@@ -1071,7 +1071,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ 
+ 	memcpy(dev_req.tlv.omac_addr, vif->addr, ETH_ALEN);
+ 
+-	cmd = enable ? MCU_UNI_CMD_DEV_INFO_UPDATE : MCU_UNI_CMD_BSS_INFO_UPDATE;
++	cmd = enable ? MCU_UNI_CMD(DEV_INFO_UPDATE) : MCU_UNI_CMD(BSS_INFO_UPDATE);
+ 	data = enable ? (void *)&dev_req : (void *)&basic_req;
+ 	len = enable ? sizeof(dev_req) : sizeof(basic_req);
+ 
+@@ -1079,7 +1079,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ 	if (err < 0)
+ 		return err;
+ 
+-	cmd = enable ? MCU_UNI_CMD_BSS_INFO_UPDATE : MCU_UNI_CMD_DEV_INFO_UPDATE;
++	cmd = enable ? MCU_UNI_CMD(BSS_INFO_UPDATE) : MCU_UNI_CMD(DEV_INFO_UPDATE);
+ 	data = enable ? (void *)&basic_req : (void *)&dev_req;
+ 	len = enable ? sizeof(basic_req) : sizeof(dev_req);
+ 
+@@ -1131,7 +1131,8 @@ int mt76_connac_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
+ 	mt76_connac_mcu_wtbl_ba_tlv(dev, skb, params, enable, tx, sta_wtbl,
+ 				    wtbl_hdr);
+ 
+-	ret = mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD_STA_REC_UPDATE, true);
++	ret = mt76_mcu_skb_send_msg(dev, skb,
++				    MCU_UNI_CMD(STA_REC_UPDATE), true);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1141,8 +1142,8 @@ int mt76_connac_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
+ 
+ 	mt76_connac_mcu_sta_ba_tlv(skb, params, enable, tx);
+ 
+-	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD_STA_REC_UPDATE,
+-				     true);
++	return mt76_mcu_skb_send_msg(dev, skb,
++				     MCU_UNI_CMD(STA_REC_UPDATE), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_sta_ba);
+ 
+@@ -1179,7 +1180,7 @@ mt76_connac_get_phy_mode(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		if (ht_cap->ht_supported)
+ 			mode |= PHY_MODE_GN;
+ 
+-		if (he_cap->has_he)
++		if (he_cap && he_cap->has_he)
+ 			mode |= PHY_MODE_AX_24G;
+ 	} else if (band == NL80211_BAND_5GHZ || band == NL80211_BAND_6GHZ) {
+ 		mode |= PHY_MODE_A;
+@@ -1190,7 +1191,7 @@ mt76_connac_get_phy_mode(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		if (vht_cap->vht_supported)
+ 			mode |= PHY_MODE_AC;
+ 
+-		if (he_cap->has_he) {
++		if (he_cap && he_cap->has_he) {
+ 			if (band == NL80211_BAND_6GHZ)
+ 				mode |= PHY_MODE_AX_6G;
+ 			else
+@@ -1352,7 +1353,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
+ 	basic_req.basic.sta_idx = cpu_to_le16(wcid->idx);
+ 	basic_req.basic.conn_state = !enable;
+ 
+-	err = mt76_mcu_send_msg(mdev, MCU_UNI_CMD_BSS_INFO_UPDATE, &basic_req,
++	err = mt76_mcu_send_msg(mdev, MCU_UNI_CMD(BSS_INFO_UPDATE), &basic_req,
+ 				sizeof(basic_req), true);
+ 	if (err < 0)
+ 		return err;
+@@ -1390,7 +1391,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
+ 
+ 		mt76_connac_mcu_uni_bss_he_tlv(phy, vif,
+ 					       (struct tlv *)&he_req.he);
+-		err = mt76_mcu_send_msg(mdev, MCU_UNI_CMD_BSS_INFO_UPDATE,
++		err = mt76_mcu_send_msg(mdev, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ 					&he_req, sizeof(he_req), true);
+ 		if (err < 0)
+ 			return err;
+@@ -1428,7 +1429,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
+ 	else if (rlm_req.rlm.control_channel > rlm_req.rlm.center_chan)
+ 		rlm_req.rlm.sco = 3; /* SCB */
+ 
+-	return mt76_mcu_send_msg(mdev, MCU_UNI_CMD_BSS_INFO_UPDATE, &rlm_req,
++	return mt76_mcu_send_msg(mdev, MCU_UNI_CMD(BSS_INFO_UPDATE), &rlm_req,
+ 				 sizeof(rlm_req), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_uni_add_bss);
+@@ -2008,12 +2009,12 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
+ 	}
+ 	batch_size = DIV_ROUND_UP(n_chan, batch_len);
+ 
+-	if (!phy->cap.has_5ghz)
+-		last_ch = chan_list_2ghz[n_chan - 1];
+-	else if (phy->cap.has_6ghz)
+-		last_ch = chan_list_6ghz[n_chan - 1];
++	if (phy->cap.has_6ghz)
++		last_ch = chan_list_6ghz[ARRAY_SIZE(chan_list_6ghz) - 1];
++	else if (phy->cap.has_5ghz)
++		last_ch = chan_list_5ghz[ARRAY_SIZE(chan_list_5ghz) - 1];
+ 	else
+-		last_ch = chan_list_5ghz[n_chan - 1];
++		last_ch = chan_list_2ghz[ARRAY_SIZE(chan_list_2ghz) - 1];
+ 
+ 	for (i = 0; i < batch_size; i++) {
+ 		struct mt76_connac_tx_power_limit_tlv tx_power_tlv = {};
+@@ -2143,7 +2144,7 @@ int mt76_connac_mcu_update_arp_filter(struct mt76_dev *dev,
+ 		memcpy(addr, &info->arp_addr_list[i], sizeof(__be32));
+ 	}
+ 
+-	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD_OFFLOAD, true);
++	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(OFFLOAD), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_update_arp_filter);
+ 
+@@ -2249,7 +2250,8 @@ int mt76_connac_mcu_update_gtk_rekey(struct ieee80211_hw *hw,
+ 	memcpy(gtk_tlv->kck, key->kck, NL80211_KCK_LEN);
+ 	memcpy(gtk_tlv->replay_ctr, key->replay_ctr, NL80211_REPLAY_CTR_LEN);
+ 
+-	return mt76_mcu_skb_send_msg(phy->dev, skb, MCU_UNI_CMD_OFFLOAD, true);
++	return mt76_mcu_skb_send_msg(phy->dev, skb,
++				     MCU_UNI_CMD(OFFLOAD), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_update_gtk_rekey);
+ 
+@@ -2275,8 +2277,8 @@ mt76_connac_mcu_set_arp_filter(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ 		},
+ 	};
+ 
+-	return mt76_mcu_send_msg(dev, MCU_UNI_CMD_OFFLOAD, &req, sizeof(req),
+-				 true);
++	return mt76_mcu_send_msg(dev, MCU_UNI_CMD(OFFLOAD), &req,
++				 sizeof(req), true);
+ }
+ 
+ static int
+@@ -2301,8 +2303,8 @@ mt76_connac_mcu_set_gtk_rekey(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ 		},
+ 	};
+ 
+-	return mt76_mcu_send_msg(dev, MCU_UNI_CMD_OFFLOAD, &req, sizeof(req),
+-				 true);
++	return mt76_mcu_send_msg(dev, MCU_UNI_CMD(OFFLOAD), &req,
++				 sizeof(req), true);
+ }
+ 
+ static int
+@@ -2331,8 +2333,8 @@ mt76_connac_mcu_set_suspend_mode(struct mt76_dev *dev,
+ 		},
+ 	};
+ 
+-	return mt76_mcu_send_msg(dev, MCU_UNI_CMD_SUSPEND, &req, sizeof(req),
+-				 true);
++	return mt76_mcu_send_msg(dev, MCU_UNI_CMD(SUSPEND), &req,
++				 sizeof(req), true);
+ }
+ 
+ static int
+@@ -2366,7 +2368,7 @@ mt76_connac_mcu_set_wow_pattern(struct mt76_dev *dev,
+ 	memcpy(ptlv->pattern, pattern->pattern, pattern->pattern_len);
+ 	memcpy(ptlv->mask, pattern->mask, DIV_ROUND_UP(pattern->pattern_len, 8));
+ 
+-	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD_SUSPEND, true);
++	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(SUSPEND), true);
+ }
+ 
+ static int
+@@ -2418,8 +2420,8 @@ mt76_connac_mcu_set_wow_ctrl(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	else if (mt76_is_sdio(dev))
+ 		req.wow_ctrl_tlv.wakeup_hif = WOW_GPIO;
+ 
+-	return mt76_mcu_send_msg(dev, MCU_UNI_CMD_SUSPEND, &req, sizeof(req),
+-				 true);
++	return mt76_mcu_send_msg(dev, MCU_UNI_CMD(SUSPEND), &req,
++				 sizeof(req), true);
+ }
+ 
+ int mt76_connac_mcu_set_hif_suspend(struct mt76_dev *dev, bool suspend)
+@@ -2452,8 +2454,8 @@ int mt76_connac_mcu_set_hif_suspend(struct mt76_dev *dev, bool suspend)
+ 	else if (mt76_is_sdio(dev))
+ 		req.hdr.hif_type = 0;
+ 
+-	return mt76_mcu_send_msg(dev, MCU_UNI_CMD_HIF_CTRL, &req, sizeof(req),
+-				 true);
++	return mt76_mcu_send_msg(dev, MCU_UNI_CMD(HIF_CTRL), &req,
++				 sizeof(req), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_hif_suspend);
+ 
+@@ -2461,7 +2463,7 @@ void mt76_connac_mcu_set_suspend_iter(void *priv, u8 *mac,
+ 				      struct ieee80211_vif *vif)
+ {
+ 	struct mt76_phy *phy = priv;
+-	bool suspend = test_bit(MT76_STATE_SUSPEND, &phy->state);
++	bool suspend = !test_bit(MT76_STATE_RUNNING, &phy->state);
+ 	struct ieee80211_hw *hw = phy->hw;
+ 	struct cfg80211_wowlan *wowlan = hw->wiphy->wowlan_config;
+ 	int i;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 4e2c9dafd7765..5c5fab9154e59 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -496,29 +496,42 @@ enum {
+ #define MCU_CMD_UNI_EXT_ACK			(MCU_CMD_ACK | MCU_CMD_UNI | \
+ 						 MCU_CMD_QUERY)
+ 
+-#define MCU_FW_PREFIX				BIT(31)
+-#define MCU_UNI_PREFIX				BIT(30)
+ #define MCU_CE_PREFIX				BIT(29)
+-#define MCU_QUERY_PREFIX			BIT(28)
+-#define MCU_CMD_MASK				~(MCU_FW_PREFIX | MCU_UNI_PREFIX |	\
+-						  MCU_CE_PREFIX | MCU_QUERY_PREFIX)
+-
+-#define MCU_QUERY_MASK				BIT(16)
++#define MCU_CMD_MASK				~(MCU_CE_PREFIX)
++
++#define __MCU_CMD_FIELD_ID			GENMASK(7, 0)
++#define __MCU_CMD_FIELD_EXT_ID			GENMASK(15, 8)
++#define __MCU_CMD_FIELD_QUERY			BIT(16)
++#define __MCU_CMD_FIELD_UNI			BIT(17)
++
++#define MCU_CMD(_t)				FIELD_PREP(__MCU_CMD_FIELD_ID,		\
++							   MCU_CMD_##_t)
++#define MCU_EXT_CMD(_t)				(MCU_CMD(EXT_CID) | \
++						 FIELD_PREP(__MCU_CMD_FIELD_EXT_ID,	\
++							    MCU_EXT_CMD_##_t))
++#define MCU_EXT_QUERY(_t)			(MCU_EXT_CMD(_t) | __MCU_CMD_FIELD_QUERY)
++#define MCU_UNI_CMD(_t)				(__MCU_CMD_FIELD_UNI |			\
++						 FIELD_PREP(__MCU_CMD_FIELD_ID,		\
++							    MCU_UNI_CMD_##_t))
+ 
+ enum {
+ 	MCU_EXT_CMD_EFUSE_ACCESS = 0x01,
+ 	MCU_EXT_CMD_RF_REG_ACCESS = 0x02,
++	MCU_EXT_CMD_RF_TEST = 0x04,
+ 	MCU_EXT_CMD_PM_STATE_CTRL = 0x07,
+ 	MCU_EXT_CMD_CHANNEL_SWITCH = 0x08,
+ 	MCU_EXT_CMD_SET_TX_POWER_CTRL = 0x11,
+ 	MCU_EXT_CMD_FW_LOG_2_HOST = 0x13,
++	MCU_EXT_CMD_TXBF_ACTION = 0x1e,
+ 	MCU_EXT_CMD_EFUSE_BUFFER_MODE = 0x21,
++	MCU_EXT_CMD_THERMAL_PROT = 0x23,
+ 	MCU_EXT_CMD_STA_REC_UPDATE = 0x25,
+ 	MCU_EXT_CMD_BSS_INFO_UPDATE = 0x26,
+ 	MCU_EXT_CMD_EDCA_UPDATE = 0x27,
+ 	MCU_EXT_CMD_DEV_INFO_UPDATE = 0x2A,
+-	MCU_EXT_CMD_GET_TEMP = 0x2c,
++	MCU_EXT_CMD_THERMAL_CTRL = 0x2c,
+ 	MCU_EXT_CMD_WTBL_UPDATE = 0x32,
++	MCU_EXT_CMD_SET_DRR_CTRL = 0x36,
+ 	MCU_EXT_CMD_SET_RDD_CTRL = 0x3a,
+ 	MCU_EXT_CMD_ATE_CTRL = 0x3d,
+ 	MCU_EXT_CMD_PROTECT_CTRL = 0x3e,
+@@ -527,35 +540,51 @@ enum {
+ 	MCU_EXT_CMD_RX_HDR_TRANS = 0x47,
+ 	MCU_EXT_CMD_MUAR_UPDATE = 0x48,
+ 	MCU_EXT_CMD_BCN_OFFLOAD = 0x49,
++	MCU_EXT_CMD_RX_AIRTIME_CTRL = 0x4a,
+ 	MCU_EXT_CMD_SET_RX_PATH = 0x4e,
++	MCU_EXT_CMD_EFUSE_FREE_BLOCK = 0x4f,
+ 	MCU_EXT_CMD_TX_POWER_FEATURE_CTRL = 0x58,
+ 	MCU_EXT_CMD_RXDCOC_CAL = 0x59,
++	MCU_EXT_CMD_GET_MIB_INFO = 0x5a,
+ 	MCU_EXT_CMD_TXDPD_CAL = 0x60,
+ 	MCU_EXT_CMD_CAL_CACHE = 0x67,
+-	MCU_EXT_CMD_SET_RDD_TH = 0x7c,
++	MCU_EXT_CMD_SET_RADAR_TH = 0x7c,
+ 	MCU_EXT_CMD_SET_RDD_PATTERN = 0x7d,
++	MCU_EXT_CMD_MWDS_SUPPORT = 0x80,
++	MCU_EXT_CMD_SET_SER_TRIGGER = 0x81,
++	MCU_EXT_CMD_SCS_CTRL = 0x82,
++	MCU_EXT_CMD_TWT_AGRT_UPDATE = 0x94,
++	MCU_EXT_CMD_FW_DBG_CTRL = 0x95,
++	MCU_EXT_CMD_OFFCH_SCAN_CTRL = 0x9a,
++	MCU_EXT_CMD_SET_RDD_TH = 0x9d,
++	MCU_EXT_CMD_MURU_CTRL = 0x9f,
++	MCU_EXT_CMD_SET_SPR = 0xa8,
++	MCU_EXT_CMD_GROUP_PRE_CAL_INFO = 0xab,
++	MCU_EXT_CMD_DPD_PRE_CAL_INFO = 0xac,
++	MCU_EXT_CMD_PHY_STAT_INFO = 0xad,
+ };
+ 
+ enum {
+-	MCU_UNI_CMD_DEV_INFO_UPDATE = MCU_UNI_PREFIX | 0x01,
+-	MCU_UNI_CMD_BSS_INFO_UPDATE = MCU_UNI_PREFIX | 0x02,
+-	MCU_UNI_CMD_STA_REC_UPDATE = MCU_UNI_PREFIX | 0x03,
+-	MCU_UNI_CMD_SUSPEND = MCU_UNI_PREFIX | 0x05,
+-	MCU_UNI_CMD_OFFLOAD = MCU_UNI_PREFIX | 0x06,
+-	MCU_UNI_CMD_HIF_CTRL = MCU_UNI_PREFIX | 0x07,
++	MCU_UNI_CMD_DEV_INFO_UPDATE = 0x01,
++	MCU_UNI_CMD_BSS_INFO_UPDATE = 0x02,
++	MCU_UNI_CMD_STA_REC_UPDATE = 0x03,
++	MCU_UNI_CMD_SUSPEND = 0x05,
++	MCU_UNI_CMD_OFFLOAD = 0x06,
++	MCU_UNI_CMD_HIF_CTRL = 0x07,
+ };
+ 
+ enum {
+-	MCU_CMD_TARGET_ADDRESS_LEN_REQ = MCU_FW_PREFIX | 0x01,
+-	MCU_CMD_FW_START_REQ = MCU_FW_PREFIX | 0x02,
++	MCU_CMD_TARGET_ADDRESS_LEN_REQ = 0x01,
++	MCU_CMD_FW_START_REQ = 0x02,
+ 	MCU_CMD_INIT_ACCESS_REG = 0x3,
+-	MCU_CMD_NIC_POWER_CTRL = MCU_FW_PREFIX | 0x4,
+-	MCU_CMD_PATCH_START_REQ = MCU_FW_PREFIX | 0x05,
+-	MCU_CMD_PATCH_FINISH_REQ = MCU_FW_PREFIX | 0x07,
+-	MCU_CMD_PATCH_SEM_CONTROL = MCU_FW_PREFIX | 0x10,
++	MCU_CMD_NIC_POWER_CTRL = 0x4,
++	MCU_CMD_PATCH_START_REQ = 0x05,
++	MCU_CMD_PATCH_FINISH_REQ = 0x07,
++	MCU_CMD_PATCH_SEM_CONTROL = 0x10,
++	MCU_CMD_WA_PARAM = 0xc4,
+ 	MCU_CMD_EXT_CID = 0xed,
+-	MCU_CMD_FW_SCATTER = MCU_FW_PREFIX | 0xee,
+-	MCU_CMD_RESTART_DL_REQ = MCU_FW_PREFIX | 0xef,
++	MCU_CMD_FW_SCATTER = 0xee,
++	MCU_CMD_RESTART_DL_REQ = 0xef,
+ };
+ 
+ /* offload mcu commands */
+@@ -575,7 +604,7 @@ enum {
+ 	MCU_CMD_GET_NIC_CAPAB = MCU_CE_PREFIX | 0x8a,
+ 	MCU_CMD_SET_MU_EDCA_PARMS = MCU_CE_PREFIX | 0xb0,
+ 	MCU_CMD_REG_WRITE = MCU_CE_PREFIX | 0xc0,
+-	MCU_CMD_REG_READ = MCU_CE_PREFIX | MCU_QUERY_MASK | 0xc0,
++	MCU_CMD_REG_READ = MCU_CE_PREFIX | __MCU_CMD_FIELD_QUERY | 0xc0,
+ 	MCU_CMD_CHIP_CONFIG = MCU_CE_PREFIX | 0xca,
+ 	MCU_CMD_FWLOG_2_HOST = MCU_CE_PREFIX | 0xc5,
+ 	MCU_CMD_GET_WTBL = MCU_CE_PREFIX | 0xcd,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 809dc18e5083c..38d66411444a1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -426,9 +426,16 @@ mt7915_mac_fill_rx(struct mt7915_dev *dev, struct sk_buff *skb)
+ 	if (rxd2 & MT_RXD2_NORMAL_AMSDU_ERR)
+ 		return -EINVAL;
+ 
++	hdr_trans = rxd2 & MT_RXD2_NORMAL_HDR_TRANS;
++	if (hdr_trans && (rxd1 & MT_RXD1_NORMAL_CM))
++		return -EINVAL;
++
++	/* ICV error or CCMP/BIP/WPI MIC error */
++	if (rxd1 & MT_RXD1_NORMAL_ICV_ERR)
++		status->flag |= RX_FLAG_ONLY_MONITOR;
++
+ 	unicast = FIELD_GET(MT_RXD3_NORMAL_ADDR_TYPE, rxd3) == MT_RXD3_NORMAL_U2M;
+ 	idx = FIELD_GET(MT_RXD1_NORMAL_WLAN_IDX, rxd1);
+-	hdr_trans = rxd2 & MT_RXD2_NORMAL_HDR_TRANS;
+ 	status->wcid = mt7915_rx_get_wcid(dev, idx, unicast);
+ 
+ 	if (status->wcid) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 852d5d97c70b1..8215b3d79bbdc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -1752,33 +1752,6 @@ int mt7915_mcu_sta_update_hdr_trans(struct mt7915_dev *dev,
+ 				     true);
+ }
+ 
+-int mt7915_mcu_add_smps(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+-			struct ieee80211_sta *sta)
+-{
+-	struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
+-	struct mt7915_sta *msta = (struct mt7915_sta *)sta->drv_priv;
+-	struct wtbl_req_hdr *wtbl_hdr;
+-	struct tlv *sta_wtbl;
+-	struct sk_buff *skb;
+-
+-	skb = mt7915_mcu_alloc_sta_req(dev, mvif, msta,
+-				       MT7915_STA_UPDATE_MAX_SIZE);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
+-
+-	sta_wtbl = mt7915_mcu_add_tlv(skb, STA_REC_WTBL, sizeof(struct tlv));
+-
+-	wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, sta_wtbl,
+-					     &skb);
+-	if (IS_ERR(wtbl_hdr))
+-		return PTR_ERR(wtbl_hdr);
+-
+-	mt7915_mcu_wtbl_smps_tlv(skb, sta, sta_wtbl, wtbl_hdr);
+-
+-	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_EXT_CMD(STA_REC_UPDATE), true);
+-}
+-
+ static inline bool
+ mt7915_is_ebf_supported(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 			struct ieee80211_sta *sta, bool bfee)
+@@ -2049,6 +2022,21 @@ mt7915_mcu_sta_bfee_tlv(struct mt7915_dev *dev, struct sk_buff *skb,
+ 	bfee->fb_identity_matrix = (nrow == 1 && tx_ant == 2);
+ }
+ 
++static enum mcu_mmps_mode
++mt7915_mcu_get_mmps_mode(enum ieee80211_smps_mode smps)
++{
++	switch (smps) {
++	case IEEE80211_SMPS_OFF:
++		return MCU_MMPS_DISABLE;
++	case IEEE80211_SMPS_STATIC:
++		return MCU_MMPS_STATIC;
++	case IEEE80211_SMPS_DYNAMIC:
++		return MCU_MMPS_DYNAMIC;
++	default:
++		return MCU_MMPS_DISABLE;
++	}
++}
++
+ int mt7915_mcu_set_fixed_rate_ctrl(struct mt7915_dev *dev,
+ 				   struct ieee80211_vif *vif,
+ 				   struct ieee80211_sta *sta,
+@@ -2076,7 +2064,11 @@ int mt7915_mcu_set_fixed_rate_ctrl(struct mt7915_dev *dev,
+ 	case RATE_PARAM_FIXED_MCS:
+ 	case RATE_PARAM_FIXED_GI:
+ 	case RATE_PARAM_FIXED_HE_LTF:
+-		ra->phy = *phy;
++		if (phy)
++			ra->phy = *phy;
++		break;
++	case RATE_PARAM_MMPS_UPDATE:
++		ra->mmps_mode = mt7915_mcu_get_mmps_mode(sta->smps_mode);
+ 		break;
+ 	default:
+ 		break;
+@@ -2087,6 +2079,39 @@ int mt7915_mcu_set_fixed_rate_ctrl(struct mt7915_dev *dev,
+ 				     MCU_EXT_CMD(STA_REC_UPDATE), true);
+ }
+ 
++int mt7915_mcu_add_smps(struct mt7915_dev *dev, struct ieee80211_vif *vif,
++			struct ieee80211_sta *sta)
++{
++	struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
++	struct mt7915_sta *msta = (struct mt7915_sta *)sta->drv_priv;
++	struct wtbl_req_hdr *wtbl_hdr;
++	struct tlv *sta_wtbl;
++	struct sk_buff *skb;
++	int ret;
++
++	skb = mt7915_mcu_alloc_sta_req(dev, mvif, msta,
++				       MT7915_STA_UPDATE_MAX_SIZE);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
++	sta_wtbl = mt7915_mcu_add_tlv(skb, STA_REC_WTBL, sizeof(struct tlv));
++
++	wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, sta_wtbl,
++					     &skb);
++	if (IS_ERR(wtbl_hdr))
++		return PTR_ERR(wtbl_hdr);
++
++	mt7915_mcu_wtbl_smps_tlv(skb, sta, sta_wtbl, wtbl_hdr);
++
++	ret = mt76_mcu_skb_send_msg(&dev->mt76, skb,
++				    MCU_EXT_CMD(STA_REC_UPDATE), true);
++	if (ret)
++		return ret;
++
++	return mt7915_mcu_set_fixed_rate_ctrl(dev, vif, sta, NULL,
++					      RATE_PARAM_MMPS_UPDATE);
++}
++
+ static int
+ mt7915_mcu_add_rate_ctrl_fixed(struct mt7915_dev *dev,
+ 			       struct ieee80211_vif *vif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+index 1f5a64ba9b59d..628e90d0c394e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+@@ -365,6 +365,13 @@ enum {
+ 	MCU_PHY_STATE_OFDMLQ_CNINFO,
+ };
+ 
++enum mcu_mmps_mode {
++	MCU_MMPS_STATIC,
++	MCU_MMPS_DYNAMIC,
++	MCU_MMPS_RSV,
++	MCU_MMPS_DISABLE,
++};
++
+ #define STA_TYPE_STA			BIT(0)
+ #define STA_TYPE_AP			BIT(1)
+ #define STA_TYPE_ADHOC			BIT(2)
+@@ -960,6 +967,7 @@ struct sta_rec_ra_fixed {
+ 
+ enum {
+ 	RATE_PARAM_FIXED = 3,
++	RATE_PARAM_MMPS_UPDATE = 5,
+ 	RATE_PARAM_FIXED_HE_LTF = 7,
+ 	RATE_PARAM_FIXED_MCS,
+ 	RATE_PARAM_FIXED_GI = 11,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+index 7cdfdf83529f6..86fd7292b229f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+@@ -276,7 +276,7 @@ mt7921_pm_set(void *data, u64 val)
+ 	struct mt7921_dev *dev = data;
+ 	struct mt76_connac_pm *pm = &dev->pm;
+ 
+-	mt7921_mutex_acquire(dev);
++	mutex_lock(&dev->mt76.mutex);
+ 
+ 	if (val == pm->enable)
+ 		goto out;
+@@ -285,7 +285,11 @@ mt7921_pm_set(void *data, u64 val)
+ 		pm->stats.last_wake_event = jiffies;
+ 		pm->stats.last_doze_event = jiffies;
+ 	}
+-	pm->enable = val;
++	/* make sure the chip is awake here and ps_work is scheduled
++	 * just at end of the this routine.
++	 */
++	pm->enable = false;
++	mt76_connac_pm_wake(&dev->mphy, pm);
+ 
+ 	ieee80211_iterate_active_interfaces(mt76_hw(dev),
+ 					    IEEE80211_IFACE_ITER_RESUME_ALL,
+@@ -293,8 +297,10 @@ mt7921_pm_set(void *data, u64 val)
+ 
+ 	mt76_connac_mcu_set_deep_sleep(&dev->mt76, pm->ds_enable);
+ 
++	pm->enable = val;
++	mt76_connac_power_save_sched(&dev->mphy, pm);
+ out:
+-	mt7921_mutex_release(dev);
++	mutex_unlock(&dev->mt76.mutex);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index db3302b1576a0..fc21a78b37c49 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -428,10 +428,17 @@ mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 	if (rxd2 & MT_RXD2_NORMAL_AMSDU_ERR)
+ 		return -EINVAL;
+ 
++	hdr_trans = rxd2 & MT_RXD2_NORMAL_HDR_TRANS;
++	if (hdr_trans && (rxd1 & MT_RXD1_NORMAL_CM))
++		return -EINVAL;
++
++	/* ICV error or CCMP/BIP/WPI MIC error */
++	if (rxd1 & MT_RXD1_NORMAL_ICV_ERR)
++		status->flag |= RX_FLAG_ONLY_MONITOR;
++
+ 	chfreq = FIELD_GET(MT_RXD3_NORMAL_CH_FREQ, rxd3);
+ 	unicast = FIELD_GET(MT_RXD3_NORMAL_ADDR_TYPE, rxd3) == MT_RXD3_NORMAL_U2M;
+ 	idx = FIELD_GET(MT_RXD1_NORMAL_WLAN_IDX, rxd1);
+-	hdr_trans = rxd2 & MT_RXD2_NORMAL_HDR_TRANS;
+ 	status->wcid = mt7921_rx_get_wcid(dev, idx, unicast);
+ 
+ 	if (status->wcid) {
+@@ -903,7 +910,7 @@ void mt7921_mac_write_txwi(struct mt7921_dev *dev, __le32 *txwi,
+ 		mt7921_mac_write_txwi_80211(dev, txwi, skb, key);
+ 
+ 	if (txwi[2] & cpu_to_le32(MT_TXD2_FIX_RATE)) {
+-		int rateidx = ffs(vif->bss_conf.basic_rates) - 1;
++		int rateidx = vif ? ffs(vif->bss_conf.basic_rates) - 1 : 0;
+ 		u16 rate, mode;
+ 
+ 		/* hardware won't add HTC for mgmt/ctrl frame */
+@@ -1065,7 +1072,7 @@ out:
+ 	return !!skb;
+ }
+ 
+-static void mt7921_mac_add_txs(struct mt7921_dev *dev, void *data)
++void mt7921_mac_add_txs(struct mt7921_dev *dev, void *data)
+ {
+ 	struct mt7921_sta *msta = NULL;
+ 	struct mt76_wcid *wcid;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 633c6d2a57acd..8c55562c1a8d9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -166,7 +166,7 @@ mt7921_init_he_caps(struct mt7921_phy *phy, enum nl80211_band band,
+ 			if (vht_cap->cap & IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN)
+ 				cap |= IEEE80211_HE_6GHZ_CAP_RX_ANTPAT_CONS;
+ 
+-			data->he_6ghz_capa.capa = cpu_to_le16(cap);
++			data[idx].he_6ghz_capa.capa = cpu_to_le16(cap);
+ 		}
+ 		idx++;
+ 	}
+@@ -221,7 +221,7 @@ int __mt7921_start(struct mt7921_phy *phy)
+ 	if (err)
+ 		return err;
+ 
+-	err = mt7921_mcu_set_chan_info(phy, MCU_EXT_CMD_SET_RX_PATH);
++	err = mt7921_mcu_set_chan_info(phy, MCU_EXT_CMD(SET_RX_PATH));
+ 	if (err)
+ 		return err;
+ 
+@@ -318,12 +318,6 @@ static int mt7921_add_interface(struct ieee80211_hw *hw,
+ 		mtxq->wcid = &mvif->sta.wcid;
+ 	}
+ 
+-	if (vif->type != NL80211_IFTYPE_AP &&
+-	    (!mvif->mt76.omac_idx || mvif->mt76.omac_idx > 3))
+-		vif->offload_flags = 0;
+-
+-	vif->offload_flags |= IEEE80211_OFFLOAD_ENCAP_4ADDR;
+-
+ out:
+ 	mt7921_mutex_release(dev);
+ 
+@@ -369,7 +363,7 @@ static int mt7921_set_channel(struct mt7921_phy *phy)
+ 
+ 	mt76_set_channel(phy->mt76);
+ 
+-	ret = mt7921_mcu_set_chan_info(phy, MCU_EXT_CMD_CHANNEL_SWITCH);
++	ret = mt7921_mcu_set_chan_info(phy, MCU_EXT_CMD(CHANNEL_SWITCH));
+ 	if (ret)
+ 		goto out;
+ 
+@@ -1238,7 +1232,6 @@ static int mt7921_suspend(struct ieee80211_hw *hw,
+ {
+ 	struct mt7921_dev *dev = mt7921_hw_dev(hw);
+ 	struct mt7921_phy *phy = mt7921_hw_phy(hw);
+-	int err;
+ 
+ 	cancel_delayed_work_sync(&phy->scan_work);
+ 	cancel_delayed_work_sync(&phy->mt76->mac_work);
+@@ -1249,34 +1242,24 @@ static int mt7921_suspend(struct ieee80211_hw *hw,
+ 	mt7921_mutex_acquire(dev);
+ 
+ 	clear_bit(MT76_STATE_RUNNING, &phy->mt76->state);
+-
+-	set_bit(MT76_STATE_SUSPEND, &phy->mt76->state);
+ 	ieee80211_iterate_active_interfaces(hw,
+ 					    IEEE80211_IFACE_ITER_RESUME_ALL,
+ 					    mt76_connac_mcu_set_suspend_iter,
+ 					    &dev->mphy);
+ 
+-	err = mt76_connac_mcu_set_hif_suspend(&dev->mt76, true);
+-
+ 	mt7921_mutex_release(dev);
+ 
+-	return err;
++	return 0;
+ }
+ 
+ static int mt7921_resume(struct ieee80211_hw *hw)
+ {
+ 	struct mt7921_dev *dev = mt7921_hw_dev(hw);
+ 	struct mt7921_phy *phy = mt7921_hw_phy(hw);
+-	int err;
+ 
+ 	mt7921_mutex_acquire(dev);
+ 
+-	err = mt76_connac_mcu_set_hif_suspend(&dev->mt76, false);
+-	if (err < 0)
+-		goto out;
+-
+ 	set_bit(MT76_STATE_RUNNING, &phy->mt76->state);
+-	clear_bit(MT76_STATE_SUSPEND, &phy->mt76->state);
+ 	ieee80211_iterate_active_interfaces(hw,
+ 					    IEEE80211_IFACE_ITER_RESUME_ALL,
+ 					    mt76_connac_mcu_set_suspend_iter,
+@@ -1284,11 +1267,10 @@ static int mt7921_resume(struct ieee80211_hw *hw)
+ 
+ 	ieee80211_queue_delayed_work(hw, &phy->mt76->mac_work,
+ 				     MT7921_WATCHDOG_TIME);
+-out:
+ 
+ 	mt7921_mutex_release(dev);
+ 
+-	return err;
++	return 0;
+ }
+ 
+ static void mt7921_set_wakeup(struct ieee80211_hw *hw, bool enabled)
+@@ -1334,7 +1316,7 @@ static void mt7921_sta_set_decap_offload(struct ieee80211_hw *hw,
+ 		clear_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
+ 
+ 	mt76_connac_mcu_sta_update_hdr_trans(&dev->mt76, vif, &msta->wcid,
+-					     MCU_UNI_CMD_STA_REC_UPDATE);
++					     MCU_UNI_CMD(STA_REC_UPDATE));
+ }
+ 
+ static int mt7921_set_sar_specs(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 6ada1ebe7d68b..484a8c57b862e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -179,24 +179,20 @@ int mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 	if (seq != rxd->seq)
+ 		return -EAGAIN;
+ 
+-	switch (cmd) {
+-	case MCU_CMD_PATCH_SEM_CONTROL:
++	if (cmd == MCU_CMD_PATCH_SEM_CONTROL) {
+ 		skb_pull(skb, sizeof(*rxd) - 4);
+ 		ret = *skb->data;
+-		break;
+-	case MCU_EXT_CMD_GET_TEMP:
++	} else if (cmd == MCU_EXT_CMD(THERMAL_CTRL)) {
+ 		skb_pull(skb, sizeof(*rxd) + 4);
+ 		ret = le32_to_cpu(*(__le32 *)skb->data);
+-		break;
+-	case MCU_EXT_CMD_EFUSE_ACCESS:
++	} else if (cmd == MCU_EXT_CMD(EFUSE_ACCESS)) {
+ 		ret = mt7921_mcu_parse_eeprom(mdev, skb);
+-		break;
+-	case MCU_UNI_CMD_DEV_INFO_UPDATE:
+-	case MCU_UNI_CMD_BSS_INFO_UPDATE:
+-	case MCU_UNI_CMD_STA_REC_UPDATE:
+-	case MCU_UNI_CMD_HIF_CTRL:
+-	case MCU_UNI_CMD_OFFLOAD:
+-	case MCU_UNI_CMD_SUSPEND: {
++	} else if (cmd == MCU_UNI_CMD(DEV_INFO_UPDATE) ||
++		   cmd == MCU_UNI_CMD(BSS_INFO_UPDATE) ||
++		   cmd == MCU_UNI_CMD(STA_REC_UPDATE) ||
++		   cmd == MCU_UNI_CMD(HIF_CTRL) ||
++		   cmd == MCU_UNI_CMD(OFFLOAD) ||
++		   cmd == MCU_UNI_CMD(SUSPEND)) {
+ 		struct mt7921_mcu_uni_event *event;
+ 
+ 		skb_pull(skb, sizeof(*rxd));
+@@ -205,19 +201,14 @@ int mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 		/* skip invalid event */
+ 		if (mcu_cmd != event->cid)
+ 			ret = -EAGAIN;
+-		break;
+-	}
+-	case MCU_CMD_REG_READ: {
++	} else if (cmd == MCU_CMD_REG_READ) {
+ 		struct mt7921_mcu_reg_event *event;
+ 
+ 		skb_pull(skb, sizeof(*rxd));
+ 		event = (struct mt7921_mcu_reg_event *)skb->data;
+ 		ret = (int)le32_to_cpu(event->val);
+-		break;
+-	}
+-	default:
++	} else {
+ 		skb_pull(skb, sizeof(struct mt7921_mcu_rxd));
+-		break;
+ 	}
+ 
+ 	return ret;
+@@ -228,23 +219,19 @@ int mt7921_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 			    int cmd, int *wait_seq)
+ {
+ 	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	int txd_len, mcu_cmd = cmd & MCU_CMD_MASK;
++	int txd_len, mcu_cmd = FIELD_GET(__MCU_CMD_FIELD_ID, cmd);
+ 	struct mt7921_uni_txd *uni_txd;
+ 	struct mt7921_mcu_txd *mcu_txd;
+ 	__le32 *txd;
+ 	u32 val;
+ 	u8 seq;
+ 
+-	switch (cmd) {
+-	case MCU_UNI_CMD_HIF_CTRL:
+-	case MCU_UNI_CMD_SUSPEND:
+-	case MCU_UNI_CMD_OFFLOAD:
+-		mdev->mcu.timeout = HZ / 3;
+-		break;
+-	default:
++	if (cmd == MCU_UNI_CMD(HIF_CTRL) ||
++	    cmd == MCU_UNI_CMD(SUSPEND) ||
++	    cmd == MCU_UNI_CMD(OFFLOAD))
++		mdev->mcu.timeout = HZ;
++	else
+ 		mdev->mcu.timeout = 3 * HZ;
+-		break;
+-	}
+ 
+ 	seq = ++dev->mt76.mcu.msg_seq & 0xf;
+ 	if (!seq)
+@@ -253,7 +240,7 @@ int mt7921_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 	if (cmd == MCU_CMD_FW_SCATTER)
+ 		goto exit;
+ 
+-	txd_len = cmd & MCU_UNI_PREFIX ? sizeof(*uni_txd) : sizeof(*mcu_txd);
++	txd_len = cmd & __MCU_CMD_FIELD_UNI ? sizeof(*uni_txd) : sizeof(*mcu_txd);
+ 	txd = (__le32 *)skb_push(skb, txd_len);
+ 
+ 	val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len) |
+@@ -265,7 +252,7 @@ int mt7921_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 	      FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_CMD);
+ 	txd[1] = cpu_to_le32(val);
+ 
+-	if (cmd & MCU_UNI_PREFIX) {
++	if (cmd & __MCU_CMD_FIELD_UNI) {
+ 		uni_txd = (struct mt7921_uni_txd *)txd;
+ 		uni_txd->len = cpu_to_le16(skb->len - sizeof(uni_txd->txd));
+ 		uni_txd->option = MCU_CMD_UNI_EXT_ACK;
+@@ -283,34 +270,20 @@ int mt7921_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 					       MT_TX_MCU_PORT_RX_Q0));
+ 	mcu_txd->pkt_type = MCU_PKT_ID;
+ 	mcu_txd->seq = seq;
++	mcu_txd->cid = mcu_cmd;
++	mcu_txd->s2d_index = MCU_S2D_H2N;
++	mcu_txd->ext_cid = FIELD_GET(__MCU_CMD_FIELD_EXT_ID, cmd);
+ 
+-	switch (cmd & ~MCU_CMD_MASK) {
+-	case MCU_FW_PREFIX:
+-		mcu_txd->set_query = MCU_Q_NA;
+-		mcu_txd->cid = mcu_cmd;
+-		break;
+-	case MCU_CE_PREFIX:
+-		if (cmd & MCU_QUERY_MASK)
++	if (mcu_txd->ext_cid || (cmd & MCU_CE_PREFIX)) {
++		if (cmd & __MCU_CMD_FIELD_QUERY)
+ 			mcu_txd->set_query = MCU_Q_QUERY;
+ 		else
+ 			mcu_txd->set_query = MCU_Q_SET;
+-		mcu_txd->cid = mcu_cmd;
+-		break;
+-	default:
+-		mcu_txd->cid = MCU_CMD_EXT_CID;
+-		if (cmd & MCU_QUERY_PREFIX || cmd == MCU_EXT_CMD_EFUSE_ACCESS)
+-			mcu_txd->set_query = MCU_Q_QUERY;
+-		else
+-			mcu_txd->set_query = MCU_Q_SET;
+-		mcu_txd->ext_cid = mcu_cmd;
+-		mcu_txd->ext_cid_ack = 1;
+-		break;
++		mcu_txd->ext_cid_ack = !!mcu_txd->ext_cid;
++	} else {
++		mcu_txd->set_query = MCU_Q_NA;
+ 	}
+ 
+-	mcu_txd->s2d_index = MCU_S2D_H2N;
+-	WARN_ON(cmd == MCU_EXT_CMD_EFUSE_ACCESS &&
+-		mcu_txd->set_query != MCU_Q_QUERY);
+-
+ exit:
+ 	if (wait_seq)
+ 		*wait_seq = seq;
+@@ -418,6 +391,17 @@ mt7921_mcu_low_power_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ 	trace_lp_event(dev, event->state);
+ }
+ 
++static void
++mt7921_mcu_tx_done_event(struct mt7921_dev *dev, struct sk_buff *skb)
++{
++	struct mt7921_mcu_tx_done_event *event;
++
++	skb_pull(skb, sizeof(struct mt7921_mcu_rxd));
++	event = (struct mt7921_mcu_tx_done_event *)skb->data;
++
++	mt7921_mac_add_txs(dev, event->txs);
++}
++
+ static void
+ mt7921_mcu_rx_unsolicited_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ {
+@@ -445,6 +429,9 @@ mt7921_mcu_rx_unsolicited_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ 	case MCU_EVENT_LP_INFO:
+ 		mt7921_mcu_low_power_event(dev, skb);
+ 		break;
++	case MCU_EVENT_TX_DONE:
++		mt7921_mcu_tx_done_event(dev, skb);
++		break;
+ 	default:
+ 		break;
+ 	}
+@@ -567,7 +554,7 @@ int mt7921_mcu_add_key(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ 		return ret;
+ 
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+-				     MCU_UNI_CMD_STA_REC_UPDATE, true);
++				     MCU_UNI_CMD(STA_REC_UPDATE), true);
+ }
+ 
+ int mt7921_mcu_uni_tx_ba(struct mt7921_dev *dev,
+@@ -997,8 +984,8 @@ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ 			e->cw_max = cpu_to_le16(10);
+ 	}
+ 
+-	ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_EDCA_UPDATE, &req,
+-				sizeof(req), true);
++	ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(EDCA_UPDATE),
++				&req, sizeof(req), true);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1070,7 +1057,7 @@ int mt7921_mcu_set_chan_info(struct mt7921_phy *phy, int cmd)
+ 	else
+ 		req.switch_reason = CH_SWITCH_NORMAL;
+ 
+-	if (cmd == MCU_EXT_CMD_CHANNEL_SWITCH)
++	if (cmd == MCU_EXT_CMD(CHANNEL_SWITCH))
+ 		req.rx_streams = hweight8(req.rx_streams);
+ 
+ 	if (chandef->width == NL80211_CHAN_WIDTH_80P80) {
+@@ -1093,7 +1080,7 @@ int mt7921_mcu_set_eeprom(struct mt7921_dev *dev)
+ 		.format = EE_FORMAT_WHOLE,
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_EFUSE_BUFFER_MODE,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(EFUSE_BUFFER_MODE),
+ 				 &req, sizeof(req), true);
+ }
+ EXPORT_SYMBOL_GPL(mt7921_mcu_set_eeprom);
+@@ -1108,8 +1095,9 @@ int mt7921_mcu_get_eeprom(struct mt7921_dev *dev, u32 offset)
+ 	int ret;
+ 	u8 *buf;
+ 
+-	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_CMD_EFUSE_ACCESS, &req,
+-					sizeof(req), true, &skb);
++	ret = mt76_mcu_send_and_get_msg(&dev->mt76,
++					MCU_EXT_QUERY(EFUSE_ACCESS),
++					&req, sizeof(req), true, &skb);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1154,7 +1142,7 @@ int mt7921_mcu_uni_bss_ps(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ 	if (vif->type != NL80211_IFTYPE_STATION)
+ 		return -EOPNOTSUPP;
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD_BSS_INFO_UPDATE,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ 				 &ps_req, sizeof(ps_req), true);
+ }
+ 
+@@ -1190,7 +1178,7 @@ mt7921_mcu_uni_bss_bcnft(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ 	if (vif->type != NL80211_IFTYPE_STATION)
+ 		return 0;
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD_BSS_INFO_UPDATE,
++	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ 				 &bcnft_req, sizeof(bcnft_req), true);
+ }
+ 
+@@ -1245,7 +1233,7 @@ int mt7921_mcu_sta_update(struct mt7921_dev *dev, struct ieee80211_sta *sta,
+ 		.sta = sta,
+ 		.vif = vif,
+ 		.enable = enable,
+-		.cmd = MCU_UNI_CMD_STA_REC_UPDATE,
++		.cmd = MCU_UNI_CMD(STA_REC_UPDATE),
+ 		.state = state,
+ 		.offload_fw = true,
+ 		.rcpi = to_rcpi(rssi),
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+index edc0c73f8c01c..68cb0ce013dbd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+@@ -91,6 +91,33 @@ enum {
+ 	MCU_EVENT_COREDUMP = 0xf0,
+ };
+ 
++struct mt7921_mcu_tx_done_event {
++	u8 pid;
++	u8 status;
++	__le16 seq;
++
++	u8 wlan_idx;
++	u8 tx_cnt;
++	__le16 tx_rate;
++
++	u8 flag;
++	u8 tid;
++	u8 rsp_rate;
++	u8 mcs;
++
++	u8 bw;
++	u8 tx_pwr;
++	u8 reason;
++	u8 rsv0[1];
++
++	__le32 delay;
++	__le32 timestamp;
++	__le32 applied_flag;
++	u8 txs[28];
++
++	u8 rsv1[32];
++} __packed;
++
+ /* ext event table */
+ enum {
+ 	MCU_EXT_EVENT_RATE_REPORT = 0x87,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index e9c7c3a195076..96647801850a5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -446,6 +446,7 @@ int mt7921_mcu_restart(struct mt76_dev *dev);
+ 
+ void mt7921e_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
+ 			  struct sk_buff *skb);
++int mt7921e_driver_own(struct mt7921_dev *dev);
+ int mt7921e_mac_reset(struct mt7921_dev *dev);
+ int mt7921e_mcu_init(struct mt7921_dev *dev);
+ int mt7921s_wfsys_reset(struct mt7921_dev *dev);
+@@ -463,4 +464,5 @@ int mt7921s_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 			   struct mt76_tx_info *tx_info);
+ void mt7921s_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e);
+ bool mt7921s_tx_status_data(struct mt76_dev *mdev, u8 *update);
++void mt7921_mac_add_txs(struct mt7921_dev *dev, void *data);
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 305b63fa1a8a9..40186e6cd865e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -235,7 +235,6 @@ static int mt7921_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ 	struct mt76_dev *mdev = pci_get_drvdata(pdev);
+ 	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+ 	struct mt76_connac_pm *pm = &dev->pm;
+-	bool hif_suspend;
+ 	int i, err;
+ 
+ 	pm->suspended = true;
+@@ -246,12 +245,9 @@ static int mt7921_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ 	if (err < 0)
+ 		goto restore_suspend;
+ 
+-	hif_suspend = !test_bit(MT76_STATE_SUSPEND, &dev->mphy.state);
+-	if (hif_suspend) {
+-		err = mt76_connac_mcu_set_hif_suspend(mdev, true);
+-		if (err)
+-			goto restore_suspend;
+-	}
++	err = mt76_connac_mcu_set_hif_suspend(mdev, true);
++	if (err)
++		goto restore_suspend;
+ 
+ 	/* always enable deep sleep during suspend to reduce
+ 	 * power consumption
+@@ -302,8 +298,7 @@ restore_napi:
+ 	if (!pm->ds_enable)
+ 		mt76_connac_mcu_set_deep_sleep(&dev->mt76, false);
+ 
+-	if (hif_suspend)
+-		mt76_connac_mcu_set_hif_suspend(mdev, false);
++	mt76_connac_mcu_set_hif_suspend(mdev, false);
+ 
+ restore_suspend:
+ 	pm->suspended = false;
+@@ -318,7 +313,6 @@ static int mt7921_pci_resume(struct pci_dev *pdev)
+ 	struct mt76_connac_pm *pm = &dev->pm;
+ 	int i, err;
+ 
+-	pm->suspended = false;
+ 	err = pci_set_power_state(pdev, PCI_D0);
+ 	if (err)
+ 		return err;
+@@ -356,8 +350,11 @@ static int mt7921_pci_resume(struct pci_dev *pdev)
+ 	if (!pm->ds_enable)
+ 		mt76_connac_mcu_set_deep_sleep(&dev->mt76, false);
+ 
+-	if (!test_bit(MT76_STATE_SUSPEND, &dev->mphy.state))
+-		err = mt76_connac_mcu_set_hif_suspend(mdev, false);
++	err = mt76_connac_mcu_set_hif_suspend(mdev, false);
++	if (err)
++		return err;
++
++	pm->suspended = false;
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
+index f9547d27356e1..85286cc9add14 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
+@@ -321,6 +321,10 @@ int mt7921e_mac_reset(struct mt7921_dev *dev)
+ 		MT_INT_MCU_CMD);
+ 	mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+ 
++	err = mt7921e_driver_own(dev);
++	if (err)
++		return err;
++
+ 	err = mt7921_run_firmware(dev);
+ 	if (err)
+ 		goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+index 583a89a34734a..7b34c7f2ab3a6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+@@ -4,7 +4,7 @@
+ #include "mt7921.h"
+ #include "mcu.h"
+ 
+-static int mt7921e_driver_own(struct mt7921_dev *dev)
++int mt7921e_driver_own(struct mt7921_dev *dev)
+ {
+ 	u32 reg = mt7921_reg_map_l1(dev, MT_TOP_LPCR_HOST_BAND0);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
+index ddf0eeb8b6889..84be229a899da 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
+@@ -62,6 +62,10 @@ static int mt7921s_parse_intr(struct mt76_dev *dev, struct mt76s_intr *intr)
+ 	if (err < 0)
+ 		return err;
+ 
++	if (irq_data->rx.num[0] > 16 ||
++	    irq_data->rx.num[1] > 128)
++		return -EINVAL;
++
+ 	intr->isr = irq_data->isr;
+ 	intr->rec_mb = irq_data->rec_mb;
+ 	intr->tx.wtqcr = irq_data->tx.wtqcr;
+@@ -203,10 +207,11 @@ static int mt7921s_suspend(struct device *__dev)
+ 	struct mt7921_dev *dev = sdio_get_drvdata(func);
+ 	struct mt76_connac_pm *pm = &dev->pm;
+ 	struct mt76_dev *mdev = &dev->mt76;
+-	bool hif_suspend;
+ 	int err;
+ 
+ 	pm->suspended = true;
++	set_bit(MT76_STATE_SUSPEND, &mdev->phy.state);
++
+ 	cancel_delayed_work_sync(&pm->ps_work);
+ 	cancel_work_sync(&pm->wake_work);
+ 
+@@ -214,13 +219,6 @@ static int mt7921s_suspend(struct device *__dev)
+ 	if (err < 0)
+ 		goto restore_suspend;
+ 
+-	hif_suspend = !test_bit(MT76_STATE_SUSPEND, &dev->mphy.state);
+-	if (hif_suspend) {
+-		err = mt76_connac_mcu_set_hif_suspend(mdev, true);
+-		if (err)
+-			goto restore_suspend;
+-	}
+-
+ 	/* always enable deep sleep during suspend to reduce
+ 	 * power consumption
+ 	 */
+@@ -228,35 +226,45 @@ static int mt7921s_suspend(struct device *__dev)
+ 
+ 	mt76_txq_schedule_all(&dev->mphy);
+ 	mt76_worker_disable(&mdev->tx_worker);
+-	mt76_worker_disable(&mdev->sdio.txrx_worker);
+ 	mt76_worker_disable(&mdev->sdio.status_worker);
+-	mt76_worker_disable(&mdev->sdio.net_worker);
+ 	cancel_work_sync(&mdev->sdio.stat_work);
+ 	clear_bit(MT76_READING_STATS, &dev->mphy.state);
+-
+ 	mt76_tx_status_check(mdev, true);
+ 
+-	err = mt7921_mcu_fw_pmctrl(dev);
++	mt76_worker_schedule(&mdev->sdio.txrx_worker);
++	wait_event_timeout(dev->mt76.sdio.wait,
++			   mt76s_txqs_empty(&dev->mt76), 5 * HZ);
++
++	/* It is supposed that SDIO bus is idle at the point */
++	err = mt76_connac_mcu_set_hif_suspend(mdev, true);
+ 	if (err)
+ 		goto restore_worker;
+ 
++	mt76_worker_disable(&mdev->sdio.txrx_worker);
++	mt76_worker_disable(&mdev->sdio.net_worker);
++
++	err = mt7921_mcu_fw_pmctrl(dev);
++	if (err)
++		goto restore_txrx_worker;
++
+ 	sdio_set_host_pm_flags(func, MMC_PM_KEEP_POWER);
+ 
+ 	return 0;
+ 
++restore_txrx_worker:
++	mt76_worker_enable(&mdev->sdio.net_worker);
++	mt76_worker_enable(&mdev->sdio.txrx_worker);
++	mt76_connac_mcu_set_hif_suspend(mdev, false);
++
+ restore_worker:
+ 	mt76_worker_enable(&mdev->tx_worker);
+-	mt76_worker_enable(&mdev->sdio.txrx_worker);
+ 	mt76_worker_enable(&mdev->sdio.status_worker);
+-	mt76_worker_enable(&mdev->sdio.net_worker);
+ 
+ 	if (!pm->ds_enable)
+ 		mt76_connac_mcu_set_deep_sleep(mdev, false);
+ 
+-	if (hif_suspend)
+-		mt76_connac_mcu_set_hif_suspend(mdev, false);
+-
+ restore_suspend:
++	clear_bit(MT76_STATE_SUSPEND, &mdev->phy.state);
+ 	pm->suspended = false;
+ 
+ 	return err;
+@@ -271,6 +279,7 @@ static int mt7921s_resume(struct device *__dev)
+ 	int err;
+ 
+ 	pm->suspended = false;
++	clear_bit(MT76_STATE_SUSPEND, &mdev->phy.state);
+ 
+ 	err = mt7921_mcu_drv_pmctrl(dev);
+ 	if (err < 0)
+@@ -285,10 +294,7 @@ static int mt7921s_resume(struct device *__dev)
+ 	if (!pm->ds_enable)
+ 		mt76_connac_mcu_set_deep_sleep(mdev, false);
+ 
+-	if (!test_bit(MT76_STATE_SUSPEND, &dev->mphy.state))
+-		err = mt76_connac_mcu_set_hif_suspend(mdev, false);
+-
+-	return err;
++	return mt76_connac_mcu_set_hif_suspend(mdev, false);
+ }
+ 
+ static const struct dev_pm_ops mt7921s_pm_ops = {
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
+index c99acc21225e1..b0bc7be0fb1fc 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio.c
+@@ -479,7 +479,8 @@ static void mt76s_status_worker(struct mt76_worker *w)
+ 			resched = true;
+ 
+ 		if (dev->drv->tx_status_data &&
+-		    !test_and_set_bit(MT76_READING_STATS, &dev->phy.state))
++		    !test_and_set_bit(MT76_READING_STATS, &dev->phy.state) &&
++		    !test_bit(MT76_STATE_SUSPEND, &dev->phy.state))
+ 			queue_work(dev->wq, &dev->sdio.stat_work);
+ 	} while (nframes > 0);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
+index 649a56790b89d..801590a0a334f 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
+@@ -317,7 +317,8 @@ void mt76s_txrx_worker(struct mt76_sdio *sdio)
+ 		if (ret > 0)
+ 			nframes += ret;
+ 
+-		if (test_bit(MT76_MCU_RESET, &dev->phy.state)) {
++		if (test_bit(MT76_MCU_RESET, &dev->phy.state) ||
++		    test_bit(MT76_STATE_SUSPEND, &dev->phy.state)) {
+ 			if (!mt76s_txqs_empty(dev))
+ 				continue;
+ 			else
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 690572e01a2a7..b5fe2a7cd8dcc 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -905,7 +905,6 @@ void wilc_netdev_cleanup(struct wilc *wilc)
+ 
+ 	wilc_wlan_cfg_deinit(wilc);
+ 	wlan_deinit_locks(wilc);
+-	kfree(wilc->bus_data);
+ 	wiphy_unregister(wilc->wiphy);
+ 	wiphy_free(wilc->wiphy);
+ }
+diff --git a/drivers/net/wireless/microchip/wilc1000/sdio.c b/drivers/net/wireless/microchip/wilc1000/sdio.c
+index 26ebf66643425..ec595dbd89592 100644
+--- a/drivers/net/wireless/microchip/wilc1000/sdio.c
++++ b/drivers/net/wireless/microchip/wilc1000/sdio.c
+@@ -167,9 +167,11 @@ free:
+ static void wilc_sdio_remove(struct sdio_func *func)
+ {
+ 	struct wilc *wilc = sdio_get_drvdata(func);
++	struct wilc_sdio *sdio_priv = wilc->bus_data;
+ 
+ 	clk_disable_unprepare(wilc->rtc_clk);
+ 	wilc_netdev_cleanup(wilc);
++	kfree(sdio_priv);
+ }
+ 
+ static int wilc_sdio_reset(struct wilc *wilc)
+diff --git a/drivers/net/wireless/microchip/wilc1000/spi.c b/drivers/net/wireless/microchip/wilc1000/spi.c
+index 640850f989dd9..d8e893f4dab4f 100644
+--- a/drivers/net/wireless/microchip/wilc1000/spi.c
++++ b/drivers/net/wireless/microchip/wilc1000/spi.c
+@@ -190,9 +190,11 @@ free:
+ static int wilc_bus_remove(struct spi_device *spi)
+ {
+ 	struct wilc *wilc = spi_get_drvdata(spi);
++	struct wilc_spi *spi_priv = wilc->bus_data;
+ 
+ 	clk_disable_unprepare(wilc->rtc_clk);
+ 	wilc_netdev_cleanup(wilc);
++	kfree(spi_priv);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index a0d4d6e31fb49..dbf1ce3daeb8e 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -1868,7 +1868,7 @@ int rtw_core_init(struct rtw_dev *rtwdev)
+ 
+ 	/* default rx filter setting */
+ 	rtwdev->hal.rcr = BIT_APP_FCS | BIT_APP_MIC | BIT_APP_ICV |
+-			  BIT_HTC_LOC_CTRL | BIT_APP_PHYSTS |
++			  BIT_PKTCTL_DLEN | BIT_HTC_LOC_CTRL | BIT_APP_PHYSTS |
+ 			  BIT_AB | BIT_AM | BIT_APM;
+ 
+ 	ret = rtw_load_firmware(rtwdev, RTW_NORMAL_FW);
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index a7a6ebfaa203c..08cf66141889b 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -2,7 +2,6 @@
+ /* Copyright(c) 2018-2019  Realtek Corporation
+  */
+ 
+-#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include "main.h"
+@@ -1409,7 +1408,11 @@ static void rtw_pci_link_ps(struct rtw_dev *rtwdev, bool enter)
+ 	 * throughput. This is probably because the ASPM behavior slightly
+ 	 * varies from different SOC.
+ 	 */
+-	if (rtwpci->link_ctrl & PCI_EXP_LNKCTL_ASPM_L1)
++	if (!(rtwpci->link_ctrl & PCI_EXP_LNKCTL_ASPM_L1))
++		return;
++
++	if ((enter && atomic_dec_if_positive(&rtwpci->link_usage) == 0) ||
++	    (!enter && atomic_inc_return(&rtwpci->link_usage) == 1))
+ 		rtw_pci_aspm_set(rtwdev, enter);
+ }
+ 
+@@ -1658,6 +1661,9 @@ static int rtw_pci_napi_poll(struct napi_struct *napi, int budget)
+ 					      priv);
+ 	int work_done = 0;
+ 
++	if (rtwpci->rx_no_aspm)
++		rtw_pci_link_ps(rtwdev, false);
++
+ 	while (work_done < budget) {
+ 		u32 work_done_once;
+ 
+@@ -1681,6 +1687,8 @@ static int rtw_pci_napi_poll(struct napi_struct *napi, int budget)
+ 		if (rtw_pci_get_hw_rx_ring_nr(rtwdev, rtwpci))
+ 			napi_schedule(napi);
+ 	}
++	if (rtwpci->rx_no_aspm)
++		rtw_pci_link_ps(rtwdev, true);
+ 
+ 	return work_done;
+ }
+@@ -1702,50 +1710,13 @@ static void rtw_pci_napi_deinit(struct rtw_dev *rtwdev)
+ 	netif_napi_del(&rtwpci->napi);
+ }
+ 
+-enum rtw88_quirk_dis_pci_caps {
+-	QUIRK_DIS_PCI_CAP_MSI,
+-	QUIRK_DIS_PCI_CAP_ASPM,
+-};
+-
+-static int disable_pci_caps(const struct dmi_system_id *dmi)
+-{
+-	uintptr_t dis_caps = (uintptr_t)dmi->driver_data;
+-
+-	if (dis_caps & BIT(QUIRK_DIS_PCI_CAP_MSI))
+-		rtw_disable_msi = true;
+-	if (dis_caps & BIT(QUIRK_DIS_PCI_CAP_ASPM))
+-		rtw_pci_disable_aspm = true;
+-
+-	return 1;
+-}
+-
+-static const struct dmi_system_id rtw88_pci_quirks[] = {
+-	{
+-		.callback = disable_pci_caps,
+-		.ident = "Protempo Ltd L116HTN6SPW",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Protempo Ltd"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "L116HTN6SPW"),
+-		},
+-		.driver_data = (void *)BIT(QUIRK_DIS_PCI_CAP_ASPM),
+-	},
+-	{
+-		.callback = disable_pci_caps,
+-		.ident = "HP HP Pavilion Laptop 14-ce0xxx",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Laptop 14-ce0xxx"),
+-		},
+-		.driver_data = (void *)BIT(QUIRK_DIS_PCI_CAP_ASPM),
+-	},
+-	{}
+-};
+-
+ int rtw_pci_probe(struct pci_dev *pdev,
+ 		  const struct pci_device_id *id)
+ {
++	struct pci_dev *bridge = pci_upstream_bridge(pdev);
+ 	struct ieee80211_hw *hw;
+ 	struct rtw_dev *rtwdev;
++	struct rtw_pci *rtwpci;
+ 	int drv_data_size;
+ 	int ret;
+ 
+@@ -1763,6 +1734,9 @@ int rtw_pci_probe(struct pci_dev *pdev,
+ 	rtwdev->hci.ops = &rtw_pci_ops;
+ 	rtwdev->hci.type = RTW_HCI_TYPE_PCIE;
+ 
++	rtwpci = (struct rtw_pci *)rtwdev->priv;
++	atomic_set(&rtwpci->link_usage, 1);
++
+ 	ret = rtw_core_init(rtwdev);
+ 	if (ret)
+ 		goto err_release_hw;
+@@ -1791,7 +1765,10 @@ int rtw_pci_probe(struct pci_dev *pdev,
+ 		goto err_destroy_pci;
+ 	}
+ 
+-	dmi_check_system(rtw88_pci_quirks);
++	/* Disable PCIe ASPM L1 while doing NAPI poll for 8821CE */
++	if (pdev->device == 0xc821 && bridge->vendor == PCI_VENDOR_ID_INTEL)
++		rtwpci->rx_no_aspm = true;
++
+ 	rtw_pci_phy_cfg(rtwdev);
+ 
+ 	ret = rtw_register_hw(rtwdev, hw);
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.h b/drivers/net/wireless/realtek/rtw88/pci.h
+index 66f78eb7757c5..0c37efd8c66fa 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.h
++++ b/drivers/net/wireless/realtek/rtw88/pci.h
+@@ -223,6 +223,8 @@ struct rtw_pci {
+ 	struct rtw_pci_tx_ring tx_rings[RTK_MAX_TX_QUEUE_NUM];
+ 	struct rtw_pci_rx_ring rx_rings[RTK_MAX_RX_QUEUE_NUM];
+ 	u16 link_ctrl;
++	atomic_t link_usage;
++	bool rx_no_aspm;
+ 	DECLARE_BITMAP(flags, NUM_OF_RTW_PCI_FLAGS);
+ 
+ 	void __iomem *mmap;
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.h b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+index 112faa60f653e..d9fbddd7b0f35 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+@@ -131,7 +131,7 @@ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
+ #define WLAN_TX_FUNC_CFG2		0x30
+ #define WLAN_MAC_OPT_NORM_FUNC1		0x98
+ #define WLAN_MAC_OPT_LB_FUNC1		0x80
+-#define WLAN_MAC_OPT_FUNC2		0x30810041
++#define WLAN_MAC_OPT_FUNC2		0xb0810041
+ 
+ #define WLAN_SIFS_CFG	(WLAN_SIFS_CCK_CONT_TX | \
+ 			(WLAN_SIFS_OFDM_CONT_TX << BIT_SHIFT_SIFS_OFDM_CTX) | \
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+index c409c8c29ec8b..e870ad7518342 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+@@ -205,7 +205,7 @@ static void rtw8822b_phy_set_param(struct rtw_dev *rtwdev)
+ #define WLAN_TX_FUNC_CFG2		0x30
+ #define WLAN_MAC_OPT_NORM_FUNC1		0x98
+ #define WLAN_MAC_OPT_LB_FUNC1		0x80
+-#define WLAN_MAC_OPT_FUNC2		0x30810041
++#define WLAN_MAC_OPT_FUNC2		0xb0810041
+ 
+ #define WLAN_SIFS_CFG	(WLAN_SIFS_CCK_CONT_TX | \
+ 			(WLAN_SIFS_OFDM_CONT_TX << BIT_SHIFT_SIFS_OFDM_CTX) | \
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 46b881e8e4feb..0e1c5d95d4649 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -1962,7 +1962,7 @@ static void rtw8822c_phy_set_param(struct rtw_dev *rtwdev)
+ #define WLAN_TX_FUNC_CFG2		0x30
+ #define WLAN_MAC_OPT_NORM_FUNC1		0x98
+ #define WLAN_MAC_OPT_LB_FUNC1		0x80
+-#define WLAN_MAC_OPT_FUNC2		0x30810041
++#define WLAN_MAC_OPT_FUNC2		0xb0810041
+ #define WLAN_MAC_INT_MIG_CFG		0x33330000
+ 
+ #define WLAN_SIFS_CFG	(WLAN_SIFS_CCK_CONT_TX | \
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 16dc6fb7dbb0b..e9d61e55e2d92 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -27,6 +27,7 @@ static void rtw89_ops_tx(struct ieee80211_hw *hw,
+ 	if (ret) {
+ 		rtw89_err(rtwdev, "failed to transmit skb: %d\n", ret);
+ 		ieee80211_free_txskb(hw, skb);
++		return;
+ 	}
+ 	rtw89_core_tx_kick_off(rtwdev, qsel);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index ab134856baac7..d75e9de8df7c6 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -654,6 +654,12 @@ rtw89_phy_cofig_rf_reg_store(struct rtw89_dev *rtwdev,
+ 	u16 idx = info->curr_idx % RTW89_H2C_RF_PAGE_SIZE;
+ 	u8 page = info->curr_idx / RTW89_H2C_RF_PAGE_SIZE;
+ 
++	if (page >= RTW89_H2C_RF_PAGE_NUM) {
++		rtw89_warn(rtwdev, "RF parameters exceed size. path=%d, idx=%d",
++			   rf_path, info->curr_idx);
++		return;
++	}
++
+ 	info->rtw89_phy_config_rf_h2c[page][idx] =
+ 		cpu_to_le32((reg->addr << 20) | reg->data);
+ 	info->curr_idx++;
+@@ -662,30 +668,29 @@ rtw89_phy_cofig_rf_reg_store(struct rtw89_dev *rtwdev,
+ static int rtw89_phy_config_rf_reg_fw(struct rtw89_dev *rtwdev,
+ 				      struct rtw89_fw_h2c_rf_reg_info *info)
+ {
+-	u16 page = info->curr_idx / RTW89_H2C_RF_PAGE_SIZE;
+-	u16 len = (info->curr_idx % RTW89_H2C_RF_PAGE_SIZE) * 4;
++	u16 remain = info->curr_idx;
++	u16 len = 0;
+ 	u8 i;
+ 	int ret = 0;
+ 
+-	if (page > RTW89_H2C_RF_PAGE_NUM) {
++	if (remain > RTW89_H2C_RF_PAGE_NUM * RTW89_H2C_RF_PAGE_SIZE) {
+ 		rtw89_warn(rtwdev,
+-			   "rf reg h2c total page num %d larger than %d (RTW89_H2C_RF_PAGE_NUM)\n",
+-			   page, RTW89_H2C_RF_PAGE_NUM);
+-		return -EINVAL;
++			   "rf reg h2c total len %d larger than %d\n",
++			   remain, RTW89_H2C_RF_PAGE_NUM * RTW89_H2C_RF_PAGE_SIZE);
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+-	for (i = 0; i < page; i++) {
+-		ret = rtw89_fw_h2c_rf_reg(rtwdev, info,
+-					  RTW89_H2C_RF_PAGE_SIZE * 4, i);
++	for (i = 0; i < RTW89_H2C_RF_PAGE_NUM && remain; i++, remain -= len) {
++		len = remain > RTW89_H2C_RF_PAGE_SIZE ? RTW89_H2C_RF_PAGE_SIZE : remain;
++		ret = rtw89_fw_h2c_rf_reg(rtwdev, info, len * 4, i);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 	}
+-	ret = rtw89_fw_h2c_rf_reg(rtwdev, info, len, i);
+-	if (ret)
+-		return ret;
++out:
+ 	info->curr_idx = 0;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void rtw89_phy_config_rf_reg(struct rtw89_dev *rtwdev,
+diff --git a/drivers/net/wireless/rsi/rsi_91x_main.c b/drivers/net/wireless/rsi/rsi_91x_main.c
+index f1bf71e6c6081..5d1490fc32db4 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_main.c
++++ b/drivers/net/wireless/rsi/rsi_91x_main.c
+@@ -23,6 +23,7 @@
+ #include "rsi_common.h"
+ #include "rsi_coex.h"
+ #include "rsi_hal.h"
++#include "rsi_usb.h"
+ 
+ u32 rsi_zone_enabled = /* INFO_ZONE |
+ 			INIT_ZONE |
+@@ -168,6 +169,9 @@ int rsi_read_pkt(struct rsi_common *common, u8 *rx_pkt, s32 rcv_pkt_len)
+ 		frame_desc = &rx_pkt[index];
+ 		actual_length = *(u16 *)&frame_desc[0];
+ 		offset = *(u16 *)&frame_desc[2];
++		if (!rcv_pkt_len && offset >
++			RSI_MAX_RX_USB_PKT_SIZE - FRAME_DESC_SZ)
++			goto fail;
+ 
+ 		queueno = rsi_get_queueno(frame_desc, offset);
+ 		length = rsi_get_length(frame_desc, offset);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 6821ea9918956..66fe386ec9cc6 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -269,8 +269,12 @@ static void rsi_rx_done_handler(struct urb *urb)
+ 	struct rsi_91x_usbdev *dev = (struct rsi_91x_usbdev *)rx_cb->data;
+ 	int status = -EINVAL;
+ 
++	if (!rx_cb->rx_skb)
++		return;
++
+ 	if (urb->status) {
+ 		dev_kfree_skb(rx_cb->rx_skb);
++		rx_cb->rx_skb = NULL;
+ 		return;
+ 	}
+ 
+@@ -294,8 +298,10 @@ out:
+ 	if (rsi_rx_urb_submit(dev->priv, rx_cb->ep_num, GFP_ATOMIC))
+ 		rsi_dbg(ERR_ZONE, "%s: Failed in urb submission", __func__);
+ 
+-	if (status)
++	if (status) {
+ 		dev_kfree_skb(rx_cb->rx_skb);
++		rx_cb->rx_skb = NULL;
++	}
+ }
+ 
+ static void rsi_rx_urb_kill(struct rsi_hw *adapter, u8 ep_num)
+@@ -324,7 +330,6 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num, gfp_t mem_flags)
+ 	struct sk_buff *skb;
+ 	u8 dword_align_bytes = 0;
+ 
+-#define RSI_MAX_RX_USB_PKT_SIZE	3000
+ 	skb = dev_alloc_skb(RSI_MAX_RX_USB_PKT_SIZE);
+ 	if (!skb)
+ 		return -ENOMEM;
+diff --git a/drivers/net/wireless/rsi/rsi_usb.h b/drivers/net/wireless/rsi/rsi_usb.h
+index 254d19b664123..961851748bc4c 100644
+--- a/drivers/net/wireless/rsi/rsi_usb.h
++++ b/drivers/net/wireless/rsi/rsi_usb.h
+@@ -44,6 +44,8 @@
+ #define RSI_USB_BUF_SIZE	     4096
+ #define RSI_USB_CTRL_BUF_SIZE	     0x04
+ 
++#define RSI_MAX_RX_USB_PKT_SIZE	3000
++
+ struct rx_usb_ctrl_block {
+ 	u8 *data;
+ 	struct urb *rx_urb;
+diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c
+index 71bf9b4f769f5..6872782e8dd89 100644
+--- a/drivers/net/wwan/mhi_wwan_mbim.c
++++ b/drivers/net/wwan/mhi_wwan_mbim.c
+@@ -385,13 +385,13 @@ static void mhi_net_rx_refill_work(struct work_struct *work)
+ 	int err;
+ 
+ 	while (!mhi_queue_is_full(mdev, DMA_FROM_DEVICE)) {
+-		struct sk_buff *skb = alloc_skb(MHI_DEFAULT_MRU, GFP_KERNEL);
++		struct sk_buff *skb = alloc_skb(mbim->mru, GFP_KERNEL);
+ 
+ 		if (unlikely(!skb))
+ 			break;
+ 
+ 		err = mhi_queue_skb(mdev, DMA_FROM_DEVICE, skb,
+-				    MHI_DEFAULT_MRU, MHI_EOT);
++				    mbim->mru, MHI_EOT);
+ 		if (unlikely(err)) {
+ 			kfree_skb(skb);
+ 			break;
+diff --git a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+index 4c6eb61a6ac62..ec9cb6c81edae 100644
+--- a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
++++ b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+@@ -419,8 +419,8 @@ static void switchtec_ntb_part_link_speed(struct switchtec_ntb *sndev,
+ 					  enum ntb_width *width)
+ {
+ 	struct switchtec_dev *stdev = sndev->stdev;
+-
+-	u32 pff = ioread32(&stdev->mmio_part_cfg[partition].vep_pff_inst_id);
++	u32 pff =
++		ioread32(&stdev->mmio_part_cfg_all[partition].vep_pff_inst_id);
+ 	u32 linksta = ioread32(&stdev->mmio_pff_csr[pff].pci_cap_region[13]);
+ 
+ 	if (speed)
+@@ -840,7 +840,6 @@ static int switchtec_ntb_init_sndev(struct switchtec_ntb *sndev)
+ 	u64 tpart_vec;
+ 	int self;
+ 	u64 part_map;
+-	int bit;
+ 
+ 	sndev->ntb.pdev = sndev->stdev->pdev;
+ 	sndev->ntb.topo = NTB_TOPO_SWITCH;
+@@ -861,29 +860,28 @@ static int switchtec_ntb_init_sndev(struct switchtec_ntb *sndev)
+ 	part_map = ioread64(&sndev->mmio_ntb->ep_map);
+ 	part_map &= ~(1 << sndev->self_partition);
+ 
+-	if (!ffs(tpart_vec)) {
++	if (!tpart_vec) {
+ 		if (sndev->stdev->partition_count != 2) {
+ 			dev_err(&sndev->stdev->dev,
+ 				"ntb target partition not defined\n");
+ 			return -ENODEV;
+ 		}
+ 
+-		bit = ffs(part_map);
+-		if (!bit) {
++		if (!part_map) {
+ 			dev_err(&sndev->stdev->dev,
+ 				"peer partition is not NT partition\n");
+ 			return -ENODEV;
+ 		}
+ 
+-		sndev->peer_partition = bit - 1;
++		sndev->peer_partition = __ffs64(part_map);
+ 	} else {
+-		if (ffs(tpart_vec) != fls(tpart_vec)) {
++		if (__ffs64(tpart_vec) != (fls64(tpart_vec) - 1)) {
+ 			dev_err(&sndev->stdev->dev,
+ 				"ntb driver only supports 1 pair of 1-1 ntb mapping\n");
+ 			return -ENODEV;
+ 		}
+ 
+-		sndev->peer_partition = ffs(tpart_vec) - 1;
++		sndev->peer_partition = __ffs64(tpart_vec);
+ 		if (!(part_map & (1ULL << sndev->peer_partition))) {
+ 			dev_err(&sndev->stdev->dev,
+ 				"ntb target partition is not NT partition\n");
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index e765d3d0542e5..23a38dcf0fc4d 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -312,6 +312,8 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
+ 	struct device *dev = kobj_to_dev(kobj);
+ 	struct nvmem_device *nvmem = to_nvmem_device(dev);
+ 
++	attr->size = nvmem->size;
++
+ 	return nvmem_bin_attr_get_umode(nvmem);
+ }
+ 
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 61de453b885cb..09905b5e7b435 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -1349,9 +1349,14 @@ int of_phandle_iterator_next(struct of_phandle_iterator *it)
+ 		 * property data length
+ 		 */
+ 		if (it->cur + count > it->list_end) {
+-			pr_err("%pOF: %s = %d found %d\n",
+-			       it->parent, it->cells_name,
+-			       count, it->cell_count);
++			if (it->cells_name)
++				pr_err("%pOF: %s = %d found %td\n",
++					it->parent, it->cells_name,
++					count, it->list_end - it->cur);
++			else
++				pr_err("%pOF: phandle %s needs %d, found %td\n",
++					it->parent, of_node_full_name(it->node),
++					count, it->list_end - it->cur);
+ 			goto err;
+ 		}
+ 	}
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index bdca35284cebd..7e868e5995b7e 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -482,9 +482,11 @@ static int __init early_init_dt_reserve_memory_arch(phys_addr_t base,
+ 	if (nomap) {
+ 		/*
+ 		 * If the memory is already reserved (by another region), we
+-		 * should not allow it to be marked nomap.
++		 * should not allow it to be marked nomap, but don't worry
++		 * if the region isn't memory as it won't be mapped.
+ 		 */
+-		if (memblock_is_region_reserved(base, size))
++		if (memblock_overlaps_region(&memblock.memory, base, size) &&
++		    memblock_is_region_reserved(base, size))
+ 			return -EBUSY;
+ 
+ 		return memblock_mark_nomap(base, size);
+@@ -965,18 +967,22 @@ static void __init early_init_dt_check_for_elfcorehdr(unsigned long node)
+ 		 elfcorehdr_addr, elfcorehdr_size);
+ }
+ 
+-static phys_addr_t cap_mem_addr;
+-static phys_addr_t cap_mem_size;
++static unsigned long chosen_node_offset = -FDT_ERR_NOTFOUND;
+ 
+ /**
+  * early_init_dt_check_for_usable_mem_range - Decode usable memory range
+  * location from flat tree
+- * @node: reference to node containing usable memory range location ('chosen')
+  */
+-static void __init early_init_dt_check_for_usable_mem_range(unsigned long node)
++void __init early_init_dt_check_for_usable_mem_range(void)
+ {
+ 	const __be32 *prop;
+ 	int len;
++	phys_addr_t cap_mem_addr;
++	phys_addr_t cap_mem_size;
++	unsigned long node = chosen_node_offset;
++
++	if ((long)node < 0)
++		return;
+ 
+ 	pr_debug("Looking for usable-memory-range property... ");
+ 
+@@ -989,6 +995,8 @@ static void __init early_init_dt_check_for_usable_mem_range(unsigned long node)
+ 
+ 	pr_debug("cap_mem_start=%pa cap_mem_size=%pa\n", &cap_mem_addr,
+ 		 &cap_mem_size);
++
++	memblock_cap_memory_range(cap_mem_addr, cap_mem_size);
+ }
+ 
+ #ifdef CONFIG_SERIAL_EARLYCON
+@@ -1137,9 +1145,10 @@ int __init early_init_dt_scan_chosen(unsigned long node, const char *uname,
+ 	    (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0))
+ 		return 0;
+ 
++	chosen_node_offset = node;
++
+ 	early_init_dt_check_for_initrd(node);
+ 	early_init_dt_check_for_elfcorehdr(node);
+-	early_init_dt_check_for_usable_mem_range(node);
+ 
+ 	/* Retrieve command line */
+ 	p = of_get_flat_dt_prop(node, "bootargs", &l);
+@@ -1275,7 +1284,7 @@ void __init early_init_dt_scan_nodes(void)
+ 	of_scan_flat_dt(early_init_dt_scan_memory, NULL);
+ 
+ 	/* Handle linux,usable-memory-range property */
+-	memblock_cap_memory_range(cap_mem_addr, cap_mem_size);
++	early_init_dt_check_for_usable_mem_range();
+ }
+ 
+ bool __init early_init_dt_scan(void *params)
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 481ba8682ebf4..35af4fedc15de 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -911,11 +911,18 @@ static void __init of_unittest_dma_ranges_one(const char *path,
+ 	if (!rc) {
+ 		phys_addr_t	paddr;
+ 		dma_addr_t	dma_addr;
+-		struct device	dev_bogus;
++		struct device	*dev_bogus;
+ 
+-		dev_bogus.dma_range_map = map;
+-		paddr = dma_to_phys(&dev_bogus, expect_dma_addr);
+-		dma_addr = phys_to_dma(&dev_bogus, expect_paddr);
++		dev_bogus = kzalloc(sizeof(struct device), GFP_KERNEL);
++		if (!dev_bogus) {
++			unittest(0, "kzalloc() failed\n");
++			kfree(map);
++			return;
++		}
++
++		dev_bogus->dma_range_map = map;
++		paddr = dma_to_phys(dev_bogus, expect_dma_addr);
++		dma_addr = phys_to_dma(dev_bogus, expect_paddr);
+ 
+ 		unittest(paddr == expect_paddr,
+ 			 "of_dma_get_range: wrong phys addr %pap (expecting %llx) on node %pOF\n",
+@@ -925,6 +932,7 @@ static void __init of_unittest_dma_ranges_one(const char *path,
+ 			 &dma_addr, expect_dma_addr, np);
+ 
+ 		kfree(map);
++		kfree(dev_bogus);
+ 	}
+ 	of_node_put(np);
+ #endif
+@@ -934,8 +942,9 @@ static void __init of_unittest_parse_dma_ranges(void)
+ {
+ 	of_unittest_dma_ranges_one("/testcase-data/address-tests/device@70000000",
+ 		0x0, 0x20000000);
+-	of_unittest_dma_ranges_one("/testcase-data/address-tests/bus@80000000/device@1000",
+-		0x100000000, 0x20000000);
++	if (IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT))
++		of_unittest_dma_ranges_one("/testcase-data/address-tests/bus@80000000/device@1000",
++			0x100000000, 0x20000000);
+ 	of_unittest_dma_ranges_one("/testcase-data/address-tests/pci@90000000",
+ 		0x80000000, 0x20000000);
+ }
+diff --git a/drivers/parisc/pdc_stable.c b/drivers/parisc/pdc_stable.c
+index e090978518f1a..4760f82def6ec 100644
+--- a/drivers/parisc/pdc_stable.c
++++ b/drivers/parisc/pdc_stable.c
+@@ -979,8 +979,10 @@ pdcs_register_pathentries(void)
+ 		entry->kobj.kset = paths_kset;
+ 		err = kobject_init_and_add(&entry->kobj, &ktype_pdcspath, NULL,
+ 					   "%s", entry->name);
+-		if (err)
++		if (err) {
++			kobject_put(&entry->kobj);
+ 			return err;
++		}
+ 
+ 		/* kobject is now registered */
+ 		write_lock(&entry->rw_lock);
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 850b4533f4ef5..d92c8a25094fa 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -672,10 +672,11 @@ void dw_pcie_iatu_detect(struct dw_pcie *pci)
+ 		if (!pci->atu_base) {
+ 			struct resource *res =
+ 				platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu");
+-			if (res)
++			if (res) {
+ 				pci->atu_size = resource_size(res);
+-			pci->atu_base = devm_ioremap_resource(dev, res);
+-			if (IS_ERR(pci->atu_base))
++				pci->atu_base = devm_ioremap_resource(dev, res);
++			}
++			if (!pci->atu_base || IS_ERR(pci->atu_base))
+ 				pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
+ 		}
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 1c3d1116bb60c..baae67f71ba82 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1534,6 +1534,12 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 	const struct qcom_pcie_cfg *pcie_cfg;
+ 	int ret;
+ 
++	pcie_cfg = of_device_get_match_data(dev);
++	if (!pcie_cfg || !pcie_cfg->ops) {
++		dev_err(dev, "Invalid platform data\n");
++		return -EINVAL;
++	}
++
+ 	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+ 	if (!pcie)
+ 		return -ENOMEM;
+@@ -1553,12 +1559,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 
+ 	pcie->pci = pci;
+ 
+-	pcie_cfg = of_device_get_match_data(dev);
+-	if (!pcie_cfg || !pcie_cfg->ops) {
+-		dev_err(dev, "Invalid platform data\n");
+-		return -EINVAL;
+-	}
+-
+ 	pcie->ops = pcie_cfg->ops;
+ 	pcie->pipe_clk_need_muxing = pcie_cfg->pipe_clk_need_muxing;
+ 
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index c3b725afa11fd..b2217e2b3efde 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -872,7 +872,6 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
+-	case PCI_CAP_LIST_ID:
+ 	case PCI_EXP_DEVCAP:
+ 	case PCI_EXP_DEVCTL:
+ 		*value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
+@@ -953,6 +952,9 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	/* Support interrupt A for MSI feature */
+ 	bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
+ 
++	/* Aardvark HW provides PCIe Capability structure in version 2 */
++	bridge->pcie_conf.cap = cpu_to_le16(2);
++
+ 	/* Indicates supports for Completion Retry Status */
+ 	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
+ 
+@@ -1535,8 +1537,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 		 * only PIO for issuing configuration transfers which does
+ 		 * not use PCIe window configuration.
+ 		 */
+-		if (type != IORESOURCE_MEM && type != IORESOURCE_MEM_64 &&
+-		    type != IORESOURCE_IO)
++		if (type != IORESOURCE_MEM && type != IORESOURCE_IO)
+ 			continue;
+ 
+ 		/*
+@@ -1544,8 +1545,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 		 * configuration is set to transparent memory access so it
+ 		 * does not need window configuration.
+ 		 */
+-		if ((type == IORESOURCE_MEM || type == IORESOURCE_MEM_64) &&
+-		    entry->offset == 0)
++		if (type == IORESOURCE_MEM && entry->offset == 0)
+ 			continue;
+ 
+ 		/*
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index ed13e81cd691d..357e9a293edf7 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -51,10 +51,14 @@
+ 	 PCIE_CONF_FUNC(PCI_FUNC(devfn)) | PCIE_CONF_REG(where) | \
+ 	 PCIE_CONF_ADDR_EN)
+ #define PCIE_CONF_DATA_OFF	0x18fc
++#define PCIE_INT_CAUSE_OFF	0x1900
++#define  PCIE_INT_PM_PME		BIT(28)
+ #define PCIE_MASK_OFF		0x1910
+ #define  PCIE_MASK_ENABLE_INTS          0x0f000000
+ #define PCIE_CTRL_OFF		0x1a00
+ #define  PCIE_CTRL_X1_MODE		0x0001
++#define  PCIE_CTRL_RC_MODE		BIT(1)
++#define  PCIE_CTRL_MASTER_HOT_RESET	BIT(24)
+ #define PCIE_STAT_OFF		0x1a04
+ #define  PCIE_STAT_BUS                  0xff00
+ #define  PCIE_STAT_DEV                  0x1f0000
+@@ -125,6 +129,11 @@ static bool mvebu_pcie_link_up(struct mvebu_pcie_port *port)
+ 	return !(mvebu_readl(port, PCIE_STAT_OFF) & PCIE_STAT_LINK_DOWN);
+ }
+ 
++static u8 mvebu_pcie_get_local_bus_nr(struct mvebu_pcie_port *port)
++{
++	return (mvebu_readl(port, PCIE_STAT_OFF) & PCIE_STAT_BUS) >> 8;
++}
++
+ static void mvebu_pcie_set_local_bus_nr(struct mvebu_pcie_port *port, int nr)
+ {
+ 	u32 stat;
+@@ -213,18 +222,21 @@ static void mvebu_pcie_setup_wins(struct mvebu_pcie_port *port)
+ 
+ static void mvebu_pcie_setup_hw(struct mvebu_pcie_port *port)
+ {
+-	u32 cmd, mask;
++	u32 ctrl, cmd, mask;
+ 
+-	/* Point PCIe unit MBUS decode windows to DRAM space. */
+-	mvebu_pcie_setup_wins(port);
++	/* Setup PCIe controller to Root Complex mode. */
++	ctrl = mvebu_readl(port, PCIE_CTRL_OFF);
++	ctrl |= PCIE_CTRL_RC_MODE;
++	mvebu_writel(port, ctrl, PCIE_CTRL_OFF);
+ 
+-	/* Master + slave enable. */
++	/* Disable Root Bridge I/O space, memory space and bus mastering. */
+ 	cmd = mvebu_readl(port, PCIE_CMD_OFF);
+-	cmd |= PCI_COMMAND_IO;
+-	cmd |= PCI_COMMAND_MEMORY;
+-	cmd |= PCI_COMMAND_MASTER;
++	cmd &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
+ 	mvebu_writel(port, cmd, PCIE_CMD_OFF);
+ 
++	/* Point PCIe unit MBUS decode windows to DRAM space. */
++	mvebu_pcie_setup_wins(port);
++
+ 	/* Enable interrupt lines A-D. */
+ 	mask = mvebu_readl(port, PCIE_MASK_OFF);
+ 	mask |= PCIE_MASK_ENABLE_INTS;
+@@ -371,8 +383,7 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
+ 
+ 	/* Are the new iobase/iolimit values invalid? */
+ 	if (conf->iolimit < conf->iobase ||
+-	    conf->iolimitupper < conf->iobaseupper ||
+-	    !(conf->command & PCI_COMMAND_IO)) {
++	    conf->iolimitupper < conf->iobaseupper) {
+ 		mvebu_pcie_set_window(port, port->io_target, port->io_attr,
+ 				      &desired, &port->iowin);
+ 		return;
+@@ -409,8 +420,7 @@ static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
+ 	struct pci_bridge_emul_conf *conf = &port->bridge.conf;
+ 
+ 	/* Are the new membase/memlimit values invalid? */
+-	if (conf->memlimit < conf->membase ||
+-	    !(conf->command & PCI_COMMAND_MEMORY)) {
++	if (conf->memlimit < conf->membase) {
+ 		mvebu_pcie_set_window(port, port->mem_target, port->mem_attr,
+ 				      &desired, &port->memwin);
+ 		return;
+@@ -430,6 +440,54 @@ static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
+ 			      &port->memwin);
+ }
+ 
++static pci_bridge_emul_read_status_t
++mvebu_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
++				     int reg, u32 *value)
++{
++	struct mvebu_pcie_port *port = bridge->data;
++
++	switch (reg) {
++	case PCI_COMMAND:
++		*value = mvebu_readl(port, PCIE_CMD_OFF);
++		break;
++
++	case PCI_PRIMARY_BUS: {
++		/*
++		 * From the whole 32bit register we support reading from HW only
++		 * secondary bus number which is mvebu local bus number.
++		 * Other bits are retrieved only from emulated config buffer.
++		 */
++		__le32 *cfgspace = (__le32 *)&bridge->conf;
++		u32 val = le32_to_cpu(cfgspace[PCI_PRIMARY_BUS / 4]);
++		val &= ~0xff00;
++		val |= mvebu_pcie_get_local_bus_nr(port) << 8;
++		*value = val;
++		break;
++	}
++
++	case PCI_INTERRUPT_LINE: {
++		/*
++		 * From the whole 32bit register we support reading from HW only
++		 * one bit: PCI_BRIDGE_CTL_BUS_RESET.
++		 * Other bits are retrieved only from emulated config buffer.
++		 */
++		__le32 *cfgspace = (__le32 *)&bridge->conf;
++		u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
++		if (mvebu_readl(port, PCIE_CTRL_OFF) & PCIE_CTRL_MASTER_HOT_RESET)
++			val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
++		else
++			val &= ~(PCI_BRIDGE_CTL_BUS_RESET << 16);
++		*value = val;
++		break;
++	}
++
++	default:
++		return PCI_BRIDGE_EMUL_NOT_HANDLED;
++	}
++
++	return PCI_BRIDGE_EMUL_HANDLED;
++}
++
+ static pci_bridge_emul_read_status_t
+ mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 				     int reg, u32 *value)
+@@ -442,9 +500,7 @@ mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 		break;
+ 
+ 	case PCI_EXP_DEVCTL:
+-		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) &
+-				 ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
+-				   PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
++		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
+ 		break;
+ 
+ 	case PCI_EXP_LNKCAP:
+@@ -468,6 +524,18 @@ mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 		*value = mvebu_readl(port, PCIE_RC_RTSTA);
+ 		break;
+ 
++	case PCI_EXP_DEVCAP2:
++		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP2);
++		break;
++
++	case PCI_EXP_DEVCTL2:
++		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL2);
++		break;
++
++	case PCI_EXP_LNKCTL2:
++		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL2);
++		break;
++
+ 	default:
+ 		return PCI_BRIDGE_EMUL_NOT_HANDLED;
+ 	}
+@@ -484,26 +552,16 @@ mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
+ 
+ 	switch (reg) {
+ 	case PCI_COMMAND:
+-	{
+-		if (!mvebu_has_ioport(port))
+-			conf->command &= ~PCI_COMMAND_IO;
+-
+-		if ((old ^ new) & PCI_COMMAND_IO)
+-			mvebu_pcie_handle_iobase_change(port);
+-		if ((old ^ new) & PCI_COMMAND_MEMORY)
+-			mvebu_pcie_handle_membase_change(port);
++		if (!mvebu_has_ioport(port)) {
++			conf->command = cpu_to_le16(
++				le16_to_cpu(conf->command) & ~PCI_COMMAND_IO);
++			new &= ~PCI_COMMAND_IO;
++		}
+ 
++		mvebu_writel(port, new, PCIE_CMD_OFF);
+ 		break;
+-	}
+ 
+ 	case PCI_IO_BASE:
+-		/*
+-		 * We keep bit 1 set, it is a read-only bit that
+-		 * indicates we support 32 bits addressing for the
+-		 * I/O
+-		 */
+-		conf->iobase |= PCI_IO_RANGE_TYPE_32;
+-		conf->iolimit |= PCI_IO_RANGE_TYPE_32;
+ 		mvebu_pcie_handle_iobase_change(port);
+ 		break;
+ 
+@@ -516,7 +574,19 @@ mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
+ 		break;
+ 
+ 	case PCI_PRIMARY_BUS:
+-		mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus);
++		if (mask & 0xff00)
++			mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus);
++		break;
++
++	case PCI_INTERRUPT_LINE:
++		if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
++			u32 ctrl = mvebu_readl(port, PCIE_CTRL_OFF);
++			if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
++				ctrl |= PCIE_CTRL_MASTER_HOT_RESET;
++			else
++				ctrl &= ~PCIE_CTRL_MASTER_HOT_RESET;
++			mvebu_writel(port, ctrl, PCIE_CTRL_OFF);
++		}
+ 		break;
+ 
+ 	default:
+@@ -532,13 +602,6 @@ mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
+ 
+ 	switch (reg) {
+ 	case PCI_EXP_DEVCTL:
+-		/*
+-		 * Armada370 data says these bits must always
+-		 * be zero when in root complex mode.
+-		 */
+-		new &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
+-			 PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
+-
+ 		mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
+ 		break;
+ 
+@@ -555,12 +618,31 @@ mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
+ 		break;
+ 
+ 	case PCI_EXP_RTSTA:
+-		mvebu_writel(port, new, PCIE_RC_RTSTA);
++		/*
++		 * PME Status bit in Root Status Register (PCIE_RC_RTSTA)
++		 * is read-only and can be cleared only by writing 0b to the
++		 * Interrupt Cause RW0C register (PCIE_INT_CAUSE_OFF). So
++		 * clear PME via Interrupt Cause.
++		 */
++		if (new & PCI_EXP_RTSTA_PME)
++			mvebu_writel(port, ~PCIE_INT_PM_PME, PCIE_INT_CAUSE_OFF);
++		break;
++
++	case PCI_EXP_DEVCTL2:
++		mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL2);
++		break;
++
++	case PCI_EXP_LNKCTL2:
++		mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL2);
++		break;
++
++	default:
+ 		break;
+ 	}
+ }
+ 
+ static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
++	.read_base = mvebu_pci_bridge_emul_base_conf_read,
+ 	.write_base = mvebu_pci_bridge_emul_base_conf_write,
+ 	.read_pcie = mvebu_pci_bridge_emul_pcie_conf_read,
+ 	.write_pcie = mvebu_pci_bridge_emul_pcie_conf_write,
+@@ -570,9 +652,11 @@ static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
+  * Initialize the configuration space of the PCI-to-PCI bridge
+  * associated with the given PCIe interface.
+  */
+-static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
++static int mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ {
+ 	struct pci_bridge_emul *bridge = &port->bridge;
++	u32 pcie_cap = mvebu_readl(port, PCIE_CAP_PCIEXP);
++	u8 pcie_cap_ver = ((pcie_cap >> 16) & PCI_EXP_FLAGS_VERS);
+ 
+ 	bridge->conf.vendor = PCI_VENDOR_ID_MARVELL;
+ 	bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
+@@ -585,11 +669,17 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ 		bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
+ 	}
+ 
++	/*
++	 * Older mvebu hardware provides PCIe Capability structure only in
++	 * version 1. New hardware provides it in version 2.
++	 */
++	bridge->pcie_conf.cap = cpu_to_le16(pcie_cap_ver);
++
+ 	bridge->has_pcie = true;
+ 	bridge->data = port;
+ 	bridge->ops = &mvebu_pci_bridge_emul_ops;
+ 
+-	pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
++	return pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
+ }
+ 
+ static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
+@@ -1112,9 +1202,93 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
+ 			continue;
+ 		}
+ 
++		ret = mvebu_pci_bridge_emul_init(port);
++		if (ret < 0) {
++			dev_err(dev, "%s: cannot init emulated bridge\n",
++				port->name);
++			devm_iounmap(dev, port->base);
++			port->base = NULL;
++			mvebu_pcie_powerdown(port);
++			continue;
++		}
++
++		/*
++		 * PCIe topology exported by mvebu hw is quite complicated. In
++		 * reality it has something like N fully independent host bridges
++		 * where each host bridge has one PCIe Root Port (which acts as
++		 * PCI Bridge device). Each host bridge has its own independent
++		 * internal registers, independent access to PCI config space,
++		 * independent interrupt lines, independent window and memory
++		 * access configuration. But additionally there is some kind of
++		 * peer-to-peer support between PCIe devices behind different
++		 * host bridges limited just to forwarding of memory and I/O
++		 * transactions (forwarding of error messages and config cycles
++		 * is not supported). So we could say there are N independent
++		 * PCIe Root Complexes.
++		 *
++		 * For this kind of setup DT should have been structured into
++		 * N independent PCIe controllers / host bridges. But instead
++		 * structure in past was defined to put PCIe Root Ports of all
++		 * host bridges into one bus zero, like in classic multi-port
++		 * Root Complex setup with just one host bridge.
++		 *
++		 * This means that pci-mvebu.c driver provides "virtual" bus 0
++		 * on which registers all PCIe Root Ports (PCI Bridge devices)
++		 * specified in DT by their BDF addresses and virtually routes
++		 * PCI config access of each PCI bridge device to specific PCIe
++		 * host bridge.
++		 *
++		 * Normally PCI Bridge should choose between Type 0 and Type 1
++		 * config requests based on primary and secondary bus numbers
++		 * configured on the bridge itself. But because mvebu PCI Bridge
++		 * does not have registers for primary and secondary bus numbers
++		 * in its config space, it determines the type of config requests
++		 * via its own custom way.
++		 *
++		 * There are two options for how mvebu determines the type of config
++		 * request.
++		 *
++		 * 1. If the Secondary Bus Number Enable bit is not set or is
++		 * not available (applies to pre-XP PCIe controllers), then
++		 * Type 0 is used if the target bus number equals the Local
++		 * Bus Number (bits [15:8] in register 0x1a04) and the target
++		 * device number differs from the Local Device Number (bits
++		 * [20:16] in register 0x1a04). Type 1 is used if the target
++		 * bus number differs from the Local Bus Number. And when the
++		 * target bus number equals the Local Bus Number and the
++		 * target device equals the Local Device Number, the request
++		 * is routed to the Local PCI Bridge (PCIe Root Port).
++		 *
++		 * 2. If the Secondary Bus Number Enable bit is set (bit 7 in
++		 * register 0x1a2c), then the mvebu hw determines the type of
++		 * config request like a compliant PCI Bridge, based on the
++		 * primary bus number, which is configured via Local Bus
++		 * Number (bits [15:8] in register 0x1a04), and the secondary
++		 * bus number, which is configured via Secondary Bus Number
++		 * (bits [7:0] in register 0x1a2c). The Local PCI Bridge (PCIe
++		 * Root Port) is available on the primary bus as the device
++		 * with the Local Device Number (bits [20:16] in register
++		 * 0x1a04).
++		 *
++		 * The Secondary Bus Number Enable bit is disabled by default
++		 * and option 2 is not available on pre-XP PCIe controllers.
++		 * Hence this driver always uses option 1.
++		 *
++		 * Basically this means that the primary and secondary buses
++		 * share one virtual number configured via the Local Bus
++		 * Number bits, and the Local Device Number bits determine
++		 * whether the primary or secondary bus is accessed. Set the
++		 * Local Device Number to 1 and redirect all writes of the PCI
++		 * Bridge Secondary Bus Number register to Local Bus Number
++		 * (bits [15:8] in register 0x1a04).
++		 *
++		 * So accesses to devices on buses behind the secondary bus
++		 * number work correctly. And config space accesses to device
++		 * 0 at the secondary bus number are also correctly routed to
++		 * the secondary bus. Due to issues described in
++		 * mvebu_pcie_setup_hw(), PCI Bridges on the primary bus
++		 * (zero) are not accessed directly via PCI config space but
++		 * rather indirectly via the kernel-emulated PCI bridge driver.
++		 */
+ 		mvebu_pcie_setup_hw(port);
+-		mvebu_pcie_set_local_dev_nr(port, 1);
+-		mvebu_pci_bridge_emul_init(port);
++		mvebu_pcie_set_local_dev_nr(port, 0);
+ 	}
+ 
+ 	pcie->nports = i;
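The Type 0 / Type 1 selection rule that the comment above describes (option 1, Secondary Bus Number Enable clear) can be sketched as a small decision function. This is a minimal illustrative model only — the enum and function names below are hypothetical, not identifiers from pci-mvebu.c:

```c
#include <stdint.h>

/*
 * Hypothetical sketch of option 1: with the Secondary Bus Number Enable
 * bit clear, the hardware compares the target bus/device against the
 * Local Bus Number and Local Device Number fields (register 0x1a04).
 */
enum cfg_target { CFG_LOCAL_BRIDGE, CFG_TYPE0, CFG_TYPE1 };

static enum cfg_target mvebu_cfg_target(uint8_t local_bus, uint8_t local_dev,
					uint8_t bus, uint8_t dev)
{
	if (bus != local_bus)
		return CFG_TYPE1;        /* forwarded with a Type 1 header */
	if (dev == local_dev)
		return CFG_LOCAL_BRIDGE; /* routed to the Root Port itself */
	return CFG_TYPE0;                /* device on the local (secondary) bus */
}
```

With the driver pointing the Local Bus Number at the secondary bus, device accesses on that bus become Type 0 requests, and only deeper buses generate Type 1 requests.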
+diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
+index 56d0d50338c89..d83dbd9774182 100644
+--- a/drivers/pci/controller/pci-xgene.c
++++ b/drivers/pci/controller/pci-xgene.c
+@@ -465,7 +465,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ 		return 1;
+ 	}
+ 
+-	if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
++	if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
+ 		*ib_reg_mask |= (1 << 0);
+ 		return 0;
+ 	}
+diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c
+index b090924b41fee..537ac9fe2e842 100644
+--- a/drivers/pci/controller/pcie-apple.c
++++ b/drivers/pci/controller/pcie-apple.c
+@@ -42,8 +42,9 @@
+ #define   CORE_FABRIC_STAT_MASK		0x001F001F
+ #define CORE_LANE_CFG(port)		(0x84000 + 0x4000 * (port))
+ #define   CORE_LANE_CFG_REFCLK0REQ	BIT(0)
+-#define   CORE_LANE_CFG_REFCLK1		BIT(1)
++#define   CORE_LANE_CFG_REFCLK1REQ	BIT(1)
+ #define   CORE_LANE_CFG_REFCLK0ACK	BIT(2)
++#define   CORE_LANE_CFG_REFCLK1ACK	BIT(3)
+ #define   CORE_LANE_CFG_REFCLKEN	(BIT(9) | BIT(10))
+ #define CORE_LANE_CTL(port)		(0x84004 + 0x4000 * (port))
+ #define   CORE_LANE_CTL_CFGACC		BIT(15)
+@@ -482,9 +483,9 @@ static int apple_pcie_setup_refclk(struct apple_pcie *pcie,
+ 	if (res < 0)
+ 		return res;
+ 
+-	rmw_set(CORE_LANE_CFG_REFCLK1, pcie->base + CORE_LANE_CFG(port->idx));
++	rmw_set(CORE_LANE_CFG_REFCLK1REQ, pcie->base + CORE_LANE_CFG(port->idx));
+ 	res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx),
+-					 stat, stat & CORE_LANE_CFG_REFCLK1,
++					 stat, stat & CORE_LANE_CFG_REFCLK1ACK,
+ 					 100, 50000);
+ 
+ 	if (res < 0)
+diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
+index 17c59b0d6978b..21207df680ccf 100644
+--- a/drivers/pci/controller/pcie-mediatek-gen3.c
++++ b/drivers/pci/controller/pcie-mediatek-gen3.c
+@@ -79,6 +79,9 @@
+ #define PCIE_ICMD_PM_REG		0x198
+ #define PCIE_TURN_OFF_LINK		BIT(4)
+ 
++#define PCIE_MISC_CTRL_REG		0x348
++#define PCIE_DISABLE_DVFSRC_VLT_REQ	BIT(1)
++
+ #define PCIE_TRANS_TABLE_BASE_REG	0x800
+ #define PCIE_ATR_SRC_ADDR_MSB_OFFSET	0x4
+ #define PCIE_ATR_TRSL_ADDR_LSB_OFFSET	0x8
+@@ -297,6 +300,11 @@ static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
+ 	val &= ~PCIE_INTX_ENABLE;
+ 	writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG);
+ 
++	/* Disable DVFSRC voltage request */
++	val = readl_relaxed(port->base + PCIE_MISC_CTRL_REG);
++	val |= PCIE_DISABLE_DVFSRC_VLT_REQ;
++	writel_relaxed(val, port->base + PCIE_MISC_CTRL_REG);
++
+ 	/* Assert all reset signals */
+ 	val = readl_relaxed(port->base + PCIE_RST_CTRL_REG);
+ 	val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB;
+diff --git a/drivers/pci/controller/pcie-mt7621.c b/drivers/pci/controller/pcie-mt7621.c
+index b60dfb45ef7bd..73b91315c1656 100644
+--- a/drivers/pci/controller/pcie-mt7621.c
++++ b/drivers/pci/controller/pcie-mt7621.c
+@@ -598,3 +598,5 @@ static struct platform_driver mt7621_pci_driver = {
+ 	},
+ };
+ builtin_platform_driver(mt7621_pci_driver);
++
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c
+index e12c2d8be05a3..780e60159993c 100644
+--- a/drivers/pci/controller/pcie-rcar-host.c
++++ b/drivers/pci/controller/pcie-rcar-host.c
+@@ -50,10 +50,10 @@ struct rcar_msi {
+  */
+ static void __iomem *pcie_base;
+ /*
+- * Static copy of bus clock pointer, so we can check whether the clock
+- * is enabled or not.
++ * Static copy of PCIe device pointer, so we can check whether the
++ * device is runtime suspended or not.
+  */
+-static struct clk *pcie_bus_clk;
++static struct device *pcie_dev;
+ #endif
+ 
+ /* Structure representing the PCIe interface */
+@@ -792,7 +792,7 @@ static int rcar_pcie_get_resources(struct rcar_pcie_host *host)
+ #ifdef CONFIG_ARM
+ 	/* Cache static copy for L1 link state fixup hook on aarch32 */
+ 	pcie_base = pcie->base;
+-	pcie_bus_clk = host->bus_clk;
++	pcie_dev = pcie->dev;
+ #endif
+ 
+ 	return 0;
+@@ -1062,7 +1062,7 @@ static int rcar_pcie_aarch32_abort_handler(unsigned long addr,
+ 
+ 	spin_lock_irqsave(&pmsr_lock, flags);
+ 
+-	if (!pcie_base || !__clk_is_enabled(pcie_bus_clk)) {
++	if (!pcie_base || pm_runtime_suspended(pcie_dev)) {
+ 		ret = 1;
+ 		goto unlock_exit;
+ 	}
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 918dccbc74b6b..e0a614acee059 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -75,6 +75,8 @@ extern int pciehp_poll_time;
+  * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
+  *	Link Status register and to the Presence Detect State bit in the Slot
+  *	Status register during a slot reset which may cause them to flap
++ * @depth: Number of additional hotplug ports in the path to the root bus,
++ *	used as lock subclass for @reset_lock
+  * @ist_running: flag to keep user request waiting while IRQ thread is running
+  * @request_result: result of last user request submitted to the IRQ thread
+  * @requester: wait queue to wake up on completion of user request,
+@@ -106,6 +108,7 @@ struct controller {
+ 
+ 	struct hotplug_slot hotplug_slot;	/* hotplug core interface */
+ 	struct rw_semaphore reset_lock;
++	unsigned int depth;
+ 	unsigned int ist_running;
+ 	int request_result;
+ 	wait_queue_head_t requester;
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index f34114d452599..4042d87d539dd 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -166,7 +166,7 @@ static void pciehp_check_presence(struct controller *ctrl)
+ {
+ 	int occupied;
+ 
+-	down_read(&ctrl->reset_lock);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 	mutex_lock(&ctrl->state_lock);
+ 
+ 	occupied = pciehp_card_present_or_link_active(ctrl);
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 83a0fa119cae8..963fb50528da1 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -583,7 +583,7 @@ static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
+ 	 * the corresponding link change may have been ignored above.
+ 	 * Synthesize it to ensure that it is acted on.
+ 	 */
+-	down_read(&ctrl->reset_lock);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 	if (!pciehp_check_link_active(ctrl))
+ 		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
+ 	up_read(&ctrl->reset_lock);
+@@ -746,7 +746,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 	 * Disable requests have higher priority than Presence Detect Changed
+ 	 * or Data Link Layer State Changed events.
+ 	 */
+-	down_read(&ctrl->reset_lock);
++	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+ 	if (events & DISABLE_SLOT)
+ 		pciehp_handle_disable_request(ctrl);
+ 	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
+@@ -906,7 +906,7 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
+ 	if (probe)
+ 		return 0;
+ 
+-	down_write(&ctrl->reset_lock);
++	down_write_nested(&ctrl->reset_lock, ctrl->depth);
+ 
+ 	if (!ATTN_BUTTN(ctrl)) {
+ 		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
+@@ -962,6 +962,20 @@ static inline void dbg_ctrl(struct controller *ctrl)
+ 
+ #define FLAG(x, y)	(((x) & (y)) ? '+' : '-')
+ 
++static inline int pcie_hotplug_depth(struct pci_dev *dev)
++{
++	struct pci_bus *bus = dev->bus;
++	int depth = 0;
++
++	while (bus->parent) {
++		bus = bus->parent;
++		if (bus->self && bus->self->is_hotplug_bridge)
++			depth++;
++	}
++
++	return depth;
++}
++
+ struct controller *pcie_init(struct pcie_device *dev)
+ {
+ 	struct controller *ctrl;
+@@ -975,6 +989,7 @@ struct controller *pcie_init(struct pcie_device *dev)
+ 		return NULL;
+ 
+ 	ctrl->pcie = dev;
++	ctrl->depth = pcie_hotplug_depth(dev->port);
+ 	pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
+ 
+ 	if (pdev->hotplug_user_indicators)
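The lockdep-subclass scheme added above — counting hotplug ports on the path to the root and passing that depth to `down_read_nested()` — can be modeled outside the kernel. The structs below are simplified stand-ins for `struct pci_bus`, assuming a root port (not hotplug) with a hotplug switch port beneath it:

```c
#include <stddef.h>

/* Toy stand-in for struct pci_bus: the flag marks whether the bridge
 * leading to this bus (bus->self in the kernel) is a hotplug bridge. */
struct toy_bus {
	struct toy_bus *parent;
	int self_is_hotplug_bridge;
};

/* Count hotplug bridges above @bus, mirroring pcie_hotplug_depth(). */
static int toy_hotplug_depth(const struct toy_bus *bus)
{
	int depth = 0;

	while (bus->parent) {
		bus = bus->parent;
		if (bus->self_is_hotplug_bridge)
			depth++;
	}
	return depth;
}

/* Example topology: root bus -> hotplug switch bus -> leaf bus. */
static struct toy_bus toy_root = { NULL, 0 };
static struct toy_bus toy_mid  = { &toy_root, 1 };
static struct toy_bus toy_leaf = { &toy_mid, 0 };
```

A controller nested one hotplug level down gets depth 1, so lockdep sees its `reset_lock` as a different subclass from its parent's and does not flag the nested acquisition as a self-deadlock.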
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index d84cf30bb2790..8465221be6d21 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -1194,19 +1194,24 @@ EXPORT_SYMBOL(pci_free_irq_vectors);
+ 
+ /**
+  * pci_irq_vector - return Linux IRQ number of a device vector
+- * @dev: PCI device to operate on
+- * @nr: device-relative interrupt vector index (0-based).
++ * @dev:	PCI device to operate on
++ * @nr:		Interrupt vector index (0-based)
++ *
++ * @nr has the following meanings depending on the interrupt mode:
++ *   MSI-X:	The index in the MSI-X vector table
++ *   MSI:	The index of the enabled MSI vectors
++ *   INTx:	Must be 0
++ *
++ * Return: The Linux interrupt number or -EINVAL if @nr is out of range.
+  */
+ int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
+ {
+ 	if (dev->msix_enabled) {
+ 		struct msi_desc *entry;
+-		int i = 0;
+ 
+ 		for_each_pci_msi_entry(entry, dev) {
+-			if (i == nr)
++			if (entry->msi_attrib.entry_nr == nr)
+ 				return entry->irq;
+-			i++;
+ 		}
+ 		WARN_ON_ONCE(1);
+ 		return -EINVAL;
+@@ -1230,17 +1235,22 @@ EXPORT_SYMBOL(pci_irq_vector);
+  * pci_irq_get_affinity - return the affinity of a particular MSI vector
+  * @dev:	PCI device to operate on
+  * @nr:		device-relative interrupt vector index (0-based).
++ *
++ * @nr has the following meanings depending on the interrupt mode:
++ *   MSI-X:	The index in the MSI-X vector table
++ *   MSI:	The index of the enabled MSI vectors
++ *   INTx:	Must be 0
++ *
++ * Return: A cpumask pointer or NULL if @nr is out of range
+  */
+ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
+ {
+ 	if (dev->msix_enabled) {
+ 		struct msi_desc *entry;
+-		int i = 0;
+ 
+ 		for_each_pci_msi_entry(entry, dev) {
+-			if (i == nr)
++			if (entry->msi_attrib.entry_nr == nr)
+ 				return &entry->affinity->mask;
+-			i++;
+ 		}
+ 		WARN_ON_ONCE(1);
+ 		return NULL;
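The msi.c change above replaces a positional counter with a lookup by the descriptor's MSI-X table entry number, because MSI-X vector tables can be populated sparsely. A minimal model of that lookup, using a hypothetical stand-in struct rather than the kernel's `struct msi_desc`:

```c
#include <stddef.h>

/* Toy stand-in for struct msi_desc: table entry number plus Linux IRQ. */
struct toy_msi_desc {
	unsigned int entry_nr;
	int irq;
};

/* Example of a sparse MSI-X allocation: entries 0 and 3 are in use. */
static const struct toy_msi_desc toy_descs[] = {
	{ 0, 100 },
	{ 3, 101 },
};

/* Match on entry_nr (the fix), not on list position (the old bug). */
static int toy_irq_vector(const struct toy_msi_desc *descs, size_t n,
			  unsigned int nr)
{
	for (size_t i = 0; i < n; i++)
		if (descs[i].entry_nr == nr)
			return descs[i].irq;
	return -22; /* -EINVAL */
}
```

With the old positional scheme, asking for vector 3 in this sparse layout would have walked past the end of the list; matching on `entry_nr` returns the right IRQ.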
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index db97cddfc85e1..37504c2cce9b8 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -139,8 +139,13 @@ struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
+ 		.ro = GENMASK(7, 0),
+ 	},
+ 
++	/*
++	 * If the expansion ROM is unsupported then the ROM Base Address
++	 * register must be implemented as a read-only register that returns 0
++	 * when read, the same as for unused Base Address registers.
++	 */
+ 	[PCI_ROM_ADDRESS1 / 4] = {
+-		.rw = GENMASK(31, 11) | BIT(0),
++		.ro = ~0,
+ 	},
+ 
+ 	/*
+@@ -171,41 +176,55 @@ struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] =
+ 	[PCI_CAP_LIST_ID / 4] = {
+ 		/*
+ 		 * Capability ID, Next Capability Pointer and
+-		 * Capabilities register are all read-only.
++		 * bits [14:0] of Capabilities register are all read-only.
++		 * Bit 15 of Capabilities register is reserved.
+ 		 */
+-		.ro = ~0,
++		.ro = GENMASK(30, 0),
+ 	},
+ 
+ 	[PCI_EXP_DEVCAP / 4] = {
+-		.ro = ~0,
++		/*
++		 * Bits [31:29] and [17:16] are reserved.
++		 * Bits [27:18] are reserved for non-upstream ports.
++		 * Bits 28 and [14:6] are reserved for non-endpoint devices.
++		 * Other bits are read-only.
++		 */
++		.ro = BIT(15) | GENMASK(5, 0),
+ 	},
+ 
+ 	[PCI_EXP_DEVCTL / 4] = {
+-		/* Device control register is RW */
+-		.rw = GENMASK(15, 0),
++		/*
++		 * Device control register is RW, except bit 15 which is
++		 * reserved for non-endpoints or non-PCIe-to-PCI/X bridges.
++		 */
++		.rw = GENMASK(14, 0),
+ 
+ 		/*
+ 		 * Device status register has bits 6 and [3:0] W1C, [5:4] RO,
+-		 * the rest is reserved
++		 * the rest is reserved. Also bit 6 is reserved for non-upstream
++		 * ports.
+ 		 */
+-		.w1c = (BIT(6) | GENMASK(3, 0)) << 16,
++		.w1c = GENMASK(3, 0) << 16,
+ 		.ro = GENMASK(5, 4) << 16,
+ 	},
+ 
+ 	[PCI_EXP_LNKCAP / 4] = {
+-		/* All bits are RO, except bit 23 which is reserved */
+-		.ro = lower_32_bits(~BIT(23)),
++		/*
++		 * All bits are RO, except bit 23 which is reserved and
++		 * bit 18 which is reserved for non-upstream ports.
++		 */
++		.ro = lower_32_bits(~(BIT(23) | PCI_EXP_LNKCAP_CLKPM)),
+ 	},
+ 
+ 	[PCI_EXP_LNKCTL / 4] = {
+ 		/*
+ 		 * Link control has bits [15:14], [11:3] and [1:0] RW, the
+-		 * rest is reserved.
++		 * rest is reserved. Bit 8 is reserved for non-upstream ports.
+ 		 *
+ 		 * Link status has bits [13:0] RO, and bits [15:14]
+ 		 * W1C.
+ 		 */
+-		.rw = GENMASK(15, 14) | GENMASK(11, 3) | GENMASK(1, 0),
++		.rw = GENMASK(15, 14) | GENMASK(11, 9) | GENMASK(7, 3) | GENMASK(1, 0),
+ 		.ro = GENMASK(13, 0) << 16,
+ 		.w1c = GENMASK(15, 14) << 16,
+ 	},
+@@ -277,11 +296,9 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
+ 
+ 	if (bridge->has_pcie) {
+ 		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
++		bridge->conf.status |= cpu_to_le16(PCI_STATUS_CAP_LIST);
+ 		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
+-		/* Set PCIe v2, root port, slot support */
+-		bridge->pcie_conf.cap =
+-			cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+-				    PCI_EXP_FLAGS_SLOT);
++		bridge->pcie_conf.cap |= cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4);
+ 		bridge->pcie_cap_regs_behavior =
+ 			kmemdup(pcie_cap_regs_behavior,
+ 				sizeof(pcie_cap_regs_behavior),
+@@ -290,6 +307,27 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
+ 			kfree(bridge->pci_regs_behavior);
+ 			return -ENOMEM;
+ 		}
++		/* These bits are applicable only for PCI and reserved on PCIe */
++		bridge->pci_regs_behavior[PCI_CACHE_LINE_SIZE / 4].ro &=
++			~GENMASK(15, 8);
++		bridge->pci_regs_behavior[PCI_COMMAND / 4].ro &=
++			~((PCI_COMMAND_SPECIAL | PCI_COMMAND_INVALIDATE |
++			   PCI_COMMAND_VGA_PALETTE | PCI_COMMAND_WAIT |
++			   PCI_COMMAND_FAST_BACK) |
++			  (PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
++			   PCI_STATUS_DEVSEL_MASK) << 16);
++		bridge->pci_regs_behavior[PCI_PRIMARY_BUS / 4].ro &=
++			~GENMASK(31, 24);
++		bridge->pci_regs_behavior[PCI_IO_BASE / 4].ro &=
++			~((PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
++			   PCI_STATUS_DEVSEL_MASK) << 16);
++		bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].rw &=
++			~((PCI_BRIDGE_CTL_MASTER_ABORT |
++			   BIT(8) | BIT(9) | BIT(11)) << 16);
++		bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].ro &=
++			~((PCI_BRIDGE_CTL_FAST_BACK) << 16);
++		bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].w1c &=
++			~(BIT(10) << 16);
+ 	}
+ 
+ 	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
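The pci-bridge-emul.c hunks above all adjust per-register `ro`/`rw`/`w1c` behavior masks. How such masks combine on a write can be sketched as a pure function — this is an illustrative model under assumed semantics, not the kernel's `pci_bridge_emul_conf_write()`:

```c
#include <stdint.h>

/*
 * Apply a config-space write against behavior masks: only RW bits take
 * the written value, W1C bits clear when written as 1, and everything
 * else (RO/reserved) is left untouched.
 */
static uint32_t toy_reg_write(uint32_t val, uint32_t rw, uint32_t w1c,
			      uint32_t wval)
{
	val = (val & ~rw) | (wval & rw); /* update writable bits only */
	val &= ~(wval & w1c);            /* write-1-to-clear bits */
	return val;
}
```

So tightening a mask, as the patch does for e.g. the Link Control register, simply makes the affected bits fall through unchanged on emulated writes.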
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 003950c738d26..20a9326907384 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4103,6 +4103,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9120,
+ 			 quirk_dma_func1_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9123,
+ 			 quirk_dma_func1_alias);
++/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c136 */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9125,
++			 quirk_dma_func1_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
+ 			 quirk_dma_func1_alias);
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
+diff --git a/drivers/pcmcia/cs.c b/drivers/pcmcia/cs.c
+index e211e2619680c..f70197154a362 100644
+--- a/drivers/pcmcia/cs.c
++++ b/drivers/pcmcia/cs.c
+@@ -666,18 +666,16 @@ static int pccardd(void *__skt)
+ 		if (events || sysfs_events)
+ 			continue;
+ 
++		set_current_state(TASK_INTERRUPTIBLE);
+ 		if (kthread_should_stop())
+ 			break;
+ 
+-		set_current_state(TASK_INTERRUPTIBLE);
+-
+ 		schedule();
+ 
+-		/* make sure we are running */
+-		__set_current_state(TASK_RUNNING);
+-
+ 		try_to_freeze();
+ 	}
++	/* make sure we are running before we exit */
++	__set_current_state(TASK_RUNNING);
+ 
+ 	/* shut down socket, if a device is still present */
+ 	if (skt->state & SOCKET_PRESENT) {
+diff --git a/drivers/pcmcia/rsrc_nonstatic.c b/drivers/pcmcia/rsrc_nonstatic.c
+index bb15a8bdbaab5..1cac528707111 100644
+--- a/drivers/pcmcia/rsrc_nonstatic.c
++++ b/drivers/pcmcia/rsrc_nonstatic.c
+@@ -690,6 +690,9 @@ static struct resource *__nonstatic_find_io_region(struct pcmcia_socket *s,
+ 	unsigned long min = base;
+ 	int ret;
+ 
++	if (!res)
++		return NULL;
++
+ 	data.mask = align - 1;
+ 	data.offset = base & data.mask;
+ 	data.map = &s_data->io_db;
+@@ -809,6 +812,9 @@ static struct resource *nonstatic_find_mem_region(u_long base, u_long num,
+ 	unsigned long min, max;
+ 	int ret, i, j;
+ 
++	if (!res)
++		return NULL;
++
+ 	low = low || !(s->features & SS_CAP_PAGE_REGS);
+ 
+ 	data.mask = align - 1;
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index bc3cba5f8c5dc..400eb7f579dce 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1561,7 +1561,8 @@ static int arm_cmn_probe(struct platform_device *pdev)
+ 
+ 	err = perf_pmu_register(&cmn->pmu, name, -1);
+ 	if (err)
+-		cpuhp_state_remove_instance(arm_cmn_hp_state, &cmn->cpuhp_node);
++		cpuhp_state_remove_instance_nocalls(arm_cmn_hp_state, &cmn->cpuhp_node);
++
+ 	return err;
+ }
+ 
+@@ -1572,7 +1573,7 @@ static int arm_cmn_remove(struct platform_device *pdev)
+ 	writel_relaxed(0, cmn->dtc[0].base + CMN_DT_DTC_CTL);
+ 
+ 	perf_pmu_unregister(&cmn->pmu);
+-	cpuhp_state_remove_instance(arm_cmn_hp_state, &cmn->cpuhp_node);
++	cpuhp_state_remove_instance_nocalls(arm_cmn_hp_state, &cmn->cpuhp_node);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
+index e93818e3991fd..3e2d096d54fd7 100644
+--- a/drivers/phy/cadence/phy-cadence-sierra.c
++++ b/drivers/phy/cadence/phy-cadence-sierra.c
+@@ -215,7 +215,10 @@ static const int pll_mux_parent_index[][SIERRA_NUM_CMN_PLLC_PARENTS] = {
+ 	[CMN_PLLLC1] = { PLL1_REFCLK, PLL0_REFCLK },
+ };
+ 
+-static u32 cdns_sierra_pll_mux_table[] = { 0, 1 };
++static u32 cdns_sierra_pll_mux_table[][SIERRA_NUM_CMN_PLLC_PARENTS] = {
++	[CMN_PLLLC] = { 0, 1 },
++	[CMN_PLLLC1] = { 1, 0 },
++};
+ 
+ struct cdns_sierra_inst {
+ 	struct phy *phy;
+@@ -436,11 +439,25 @@ static const struct phy_ops ops = {
+ static u8 cdns_sierra_pll_mux_get_parent(struct clk_hw *hw)
+ {
+ 	struct cdns_sierra_pll_mux *mux = to_cdns_sierra_pll_mux(hw);
++	struct regmap_field *plllc1en_field = mux->plllc1en_field;
++	struct regmap_field *termen_field = mux->termen_field;
+ 	struct regmap_field *field = mux->pfdclk_sel_preg;
+ 	unsigned int val;
++	int index;
+ 
+ 	regmap_field_read(field, &val);
+-	return clk_mux_val_to_index(hw, cdns_sierra_pll_mux_table, 0, val);
++
++	if (strstr(clk_hw_get_name(hw), clk_names[CDNS_SIERRA_PLL_CMNLC1])) {
++		index = clk_mux_val_to_index(hw, cdns_sierra_pll_mux_table[CMN_PLLLC1], 0, val);
++		if (index == 1) {
++			regmap_field_write(plllc1en_field, 1);
++			regmap_field_write(termen_field, 1);
++		}
++	} else {
++		index = clk_mux_val_to_index(hw, cdns_sierra_pll_mux_table[CMN_PLLLC], 0, val);
++	}
++
++	return index;
+ }
+ 
+ static int cdns_sierra_pll_mux_set_parent(struct clk_hw *hw, u8 index)
+@@ -458,7 +475,11 @@ static int cdns_sierra_pll_mux_set_parent(struct clk_hw *hw, u8 index)
+ 		ret |= regmap_field_write(termen_field, 1);
+ 	}
+ 
+-	val = cdns_sierra_pll_mux_table[index];
++	if (strstr(clk_hw_get_name(hw), clk_names[CDNS_SIERRA_PLL_CMNLC1]))
++		val = cdns_sierra_pll_mux_table[CMN_PLLLC1][index];
++	else
++		val = cdns_sierra_pll_mux_table[CMN_PLLLC][index];
++
+ 	ret |= regmap_field_write(field, val);
+ 
+ 	return ret;
+@@ -496,8 +517,8 @@ static int cdns_sierra_pll_mux_register(struct cdns_sierra_phy *sp,
+ 	for (i = 0; i < num_parents; i++) {
+ 		clk = sp->input_clks[pll_mux_parent_index[clk_index][i]];
+ 		if (IS_ERR_OR_NULL(clk)) {
+-			dev_err(dev, "No parent clock for derived_refclk\n");
+-			return PTR_ERR(clk);
++			dev_err(dev, "No parent clock for PLL mux clocks\n");
++			return IS_ERR(clk) ? PTR_ERR(clk) : -ENOENT;
+ 		}
+ 		parent_names[i] = __clk_get_name(clk);
+ 	}
+diff --git a/drivers/phy/mediatek/phy-mtk-mipi-dsi.c b/drivers/phy/mediatek/phy-mtk-mipi-dsi.c
+index 28ad9403c4414..67b005d5b9e35 100644
+--- a/drivers/phy/mediatek/phy-mtk-mipi-dsi.c
++++ b/drivers/phy/mediatek/phy-mtk-mipi-dsi.c
+@@ -146,6 +146,8 @@ static int mtk_mipi_tx_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	mipi_tx->driver_data = of_device_get_match_data(dev);
++	if (!mipi_tx->driver_data)
++		return -ENODEV;
+ 
+ 	mipi_tx->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(mipi_tx->regs))
+diff --git a/drivers/phy/mediatek/phy-mtk-tphy.c b/drivers/phy/mediatek/phy-mtk-tphy.c
+index cdcef865fe9e5..98a942c607a67 100644
+--- a/drivers/phy/mediatek/phy-mtk-tphy.c
++++ b/drivers/phy/mediatek/phy-mtk-tphy.c
+@@ -12,6 +12,7 @@
+ #include <linux/iopoll.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/module.h>
++#include <linux/nvmem-consumer.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
+ #include <linux/phy/phy.h>
+@@ -41,6 +42,9 @@
+ #define SSUSB_SIFSLV_V2_U3PHYD		0x200
+ #define SSUSB_SIFSLV_V2_U3PHYA		0x400
+ 
++#define U3P_MISC_REG1		0x04
++#define MR1_EFUSE_AUTO_LOAD_DIS		BIT(6)
++
+ #define U3P_USBPHYACR0		0x000
+ #define PA0_RG_U2PLL_FORCE_ON		BIT(15)
+ #define PA0_USB20_PLL_PREDIV		GENMASK(7, 6)
+@@ -133,6 +137,8 @@
+ #define P3C_RG_SWRST_U3_PHYD_FORCE_EN	BIT(24)
+ 
+ #define U3P_U3_PHYA_REG0	0x000
++#define P3A_RG_IEXT_INTR		GENMASK(15, 10)
++#define P3A_RG_IEXT_INTR_VAL(x)		((0x3f & (x)) << 10)
+ #define P3A_RG_CLKDRV_OFF		GENMASK(3, 2)
+ #define P3A_RG_CLKDRV_OFF_VAL(x)	((0x3 & (x)) << 2)
+ 
+@@ -187,6 +193,19 @@
+ #define P3D_RG_FWAKE_TH		GENMASK(21, 16)
+ #define P3D_RG_FWAKE_TH_VAL(x)	((0x3f & (x)) << 16)
+ 
++#define U3P_U3_PHYD_IMPCAL0		0x010
++#define P3D_RG_FORCE_TX_IMPEL		BIT(31)
++#define P3D_RG_TX_IMPEL			GENMASK(28, 24)
++#define P3D_RG_TX_IMPEL_VAL(x)		((0x1f & (x)) << 24)
++
++#define U3P_U3_PHYD_IMPCAL1		0x014
++#define P3D_RG_FORCE_RX_IMPEL		BIT(31)
++#define P3D_RG_RX_IMPEL			GENMASK(28, 24)
++#define P3D_RG_RX_IMPEL_VAL(x)		((0x1f & (x)) << 24)
++
++#define U3P_U3_PHYD_RSV			0x054
++#define P3D_RG_EFUSE_AUTO_LOAD_DIS	BIT(12)
++
+ #define U3P_U3_PHYD_CDR1		0x05c
+ #define P3D_RG_CDR_BIR_LTD1		GENMASK(28, 24)
+ #define P3D_RG_CDR_BIR_LTD1_VAL(x)	((0x1f & (x)) << 24)
+@@ -307,6 +326,11 @@ struct mtk_phy_pdata {
+ 	 * 48M PLL, fix it by switching PLL to 26M from default 48M
+ 	 */
+ 	bool sw_pll_48m_to_26m;
++	/*
++	 * Some SoCs (e.g. mt8195) drop a bit when the efuse is auto-loaded,
++	 * so support the software way; optionally also support it for v2/v3.
++	 */
++	bool sw_efuse_supported;
+ 	enum mtk_phy_version version;
+ };
+ 
+@@ -336,6 +360,10 @@ struct mtk_phy_instance {
+ 	struct regmap *type_sw;
+ 	u32 type_sw_reg;
+ 	u32 type_sw_index;
++	u32 efuse_sw_en;
++	u32 efuse_intr;
++	u32 efuse_tx_imp;
++	u32 efuse_rx_imp;
+ 	int eye_src;
+ 	int eye_vrt;
+ 	int eye_term;
+@@ -1040,6 +1068,130 @@ static int phy_type_set(struct mtk_phy_instance *instance)
+ 	return 0;
+ }
+ 
++static int phy_efuse_get(struct mtk_tphy *tphy, struct mtk_phy_instance *instance)
++{
++	struct device *dev = &instance->phy->dev;
++	int ret = 0;
++
++	/* tphy v1 doesn't support sw efuse, skip it */
++	if (!tphy->pdata->sw_efuse_supported) {
++		instance->efuse_sw_en = 0;
++		return 0;
++	}
++
++	/* software efuse is optional */
++	instance->efuse_sw_en = device_property_read_bool(dev, "nvmem-cells");
++	if (!instance->efuse_sw_en)
++		return 0;
++
++	switch (instance->type) {
++	case PHY_TYPE_USB2:
++		ret = nvmem_cell_read_variable_le_u32(dev, "intr", &instance->efuse_intr);
++		if (ret) {
++			dev_err(dev, "fail to get u2 intr efuse, %d\n", ret);
++			break;
++		}
++
++		/* no efuse, ignore it */
++		if (!instance->efuse_intr) {
++			dev_warn(dev, "no u2 intr efuse, but dts enable it\n");
++			instance->efuse_sw_en = 0;
++			break;
++		}
++
++		dev_dbg(dev, "u2 efuse - intr %x\n", instance->efuse_intr);
++		break;
++
++	case PHY_TYPE_USB3:
++	case PHY_TYPE_PCIE:
++		ret = nvmem_cell_read_variable_le_u32(dev, "intr", &instance->efuse_intr);
++		if (ret) {
++			dev_err(dev, "fail to get u3 intr efuse, %d\n", ret);
++			break;
++		}
++
++		ret = nvmem_cell_read_variable_le_u32(dev, "rx_imp", &instance->efuse_rx_imp);
++		if (ret) {
++			dev_err(dev, "fail to get u3 rx_imp efuse, %d\n", ret);
++			break;
++		}
++
++		ret = nvmem_cell_read_variable_le_u32(dev, "tx_imp", &instance->efuse_tx_imp);
++		if (ret) {
++			dev_err(dev, "fail to get u3 tx_imp efuse, %d\n", ret);
++			break;
++		}
++
++		/* no efuse, ignore it */
++		if (!instance->efuse_intr &&
++		    !instance->efuse_rx_imp &&
++		    !instance->efuse_tx_imp) {
++			dev_warn(dev, "no u3 intr efuse, but dts enable it\n");
++			instance->efuse_sw_en = 0;
++			break;
++		}
++
++		dev_dbg(dev, "u3 efuse - intr %x, rx_imp %x, tx_imp %x\n",
++			instance->efuse_intr, instance->efuse_rx_imp, instance->efuse_tx_imp);
++		break;
++	default:
++		dev_err(dev, "no sw efuse for type %d\n", instance->type);
++		ret = -EINVAL;
++	}
++
++	return ret;
++}
++
++static void phy_efuse_set(struct mtk_phy_instance *instance)
++{
++	struct device *dev = &instance->phy->dev;
++	struct u2phy_banks *u2_banks = &instance->u2_banks;
++	struct u3phy_banks *u3_banks = &instance->u3_banks;
++	u32 tmp;
++
++	if (!instance->efuse_sw_en)
++		return;
++
++	switch (instance->type) {
++	case PHY_TYPE_USB2:
++		tmp = readl(u2_banks->misc + U3P_MISC_REG1);
++		tmp |= MR1_EFUSE_AUTO_LOAD_DIS;
++		writel(tmp, u2_banks->misc + U3P_MISC_REG1);
++
++		tmp = readl(u2_banks->com + U3P_USBPHYACR1);
++		tmp &= ~PA1_RG_INTR_CAL;
++		tmp |= PA1_RG_INTR_CAL_VAL(instance->efuse_intr);
++		writel(tmp, u2_banks->com + U3P_USBPHYACR1);
++		break;
++	case PHY_TYPE_USB3:
++	case PHY_TYPE_PCIE:
++		tmp = readl(u3_banks->phyd + U3P_U3_PHYD_RSV);
++		tmp |= P3D_RG_EFUSE_AUTO_LOAD_DIS;
++		writel(tmp, u3_banks->phyd + U3P_U3_PHYD_RSV);
++
++		tmp = readl(u3_banks->phyd + U3P_U3_PHYD_IMPCAL0);
++		tmp &= ~P3D_RG_TX_IMPEL;
++		tmp |= P3D_RG_TX_IMPEL_VAL(instance->efuse_tx_imp);
++		tmp |= P3D_RG_FORCE_TX_IMPEL;
++		writel(tmp, u3_banks->phyd + U3P_U3_PHYD_IMPCAL0);
++
++		tmp = readl(u3_banks->phyd + U3P_U3_PHYD_IMPCAL1);
++		tmp &= ~P3D_RG_RX_IMPEL;
++		tmp |= P3D_RG_RX_IMPEL_VAL(instance->efuse_rx_imp);
++		tmp |= P3D_RG_FORCE_RX_IMPEL;
++		writel(tmp, u3_banks->phyd + U3P_U3_PHYD_IMPCAL1);
++
++		tmp = readl(u3_banks->phya + U3P_U3_PHYA_REG0);
++		tmp &= ~P3A_RG_IEXT_INTR;
++		tmp |= P3A_RG_IEXT_INTR_VAL(instance->efuse_intr);
++		writel(tmp, u3_banks->phya + U3P_U3_PHYA_REG0);
++		break;
++	default:
++		dev_warn(dev, "no sw efuse for type %d\n", instance->type);
++		break;
++	}
++}
++
+ static int mtk_phy_init(struct phy *phy)
+ {
+ 	struct mtk_phy_instance *instance = phy_get_drvdata(phy);
+@@ -1050,6 +1202,8 @@ static int mtk_phy_init(struct phy *phy)
+ 	if (ret)
+ 		return ret;
+ 
++	phy_efuse_set(instance);
++
+ 	switch (instance->type) {
+ 	case PHY_TYPE_USB2:
+ 		u2_phy_instance_init(tphy, instance);
+@@ -1134,6 +1288,7 @@ static struct phy *mtk_phy_xlate(struct device *dev,
+ 	struct mtk_phy_instance *instance = NULL;
+ 	struct device_node *phy_np = args->np;
+ 	int index;
++	int ret;
+ 
+ 	if (args->args_count != 1) {
+ 		dev_err(dev, "invalid number of cells in 'phy' property\n");
+@@ -1174,6 +1329,10 @@ static struct phy *mtk_phy_xlate(struct device *dev,
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
++	ret = phy_efuse_get(tphy, instance);
++	if (ret)
++		return ERR_PTR(ret);
++
+ 	phy_parse_property(tphy, instance);
+ 	phy_type_set(instance);
+ 
+@@ -1196,10 +1355,12 @@ static const struct mtk_phy_pdata tphy_v1_pdata = {
+ 
+ static const struct mtk_phy_pdata tphy_v2_pdata = {
+ 	.avoid_rx_sen_degradation = false,
++	.sw_efuse_supported = true,
+ 	.version = MTK_PHY_V2,
+ };
+ 
+ static const struct mtk_phy_pdata tphy_v3_pdata = {
++	.sw_efuse_supported = true,
+ 	.version = MTK_PHY_V3,
+ };
+ 
+@@ -1210,6 +1371,7 @@ static const struct mtk_phy_pdata mt8173_pdata = {
+ 
+ static const struct mtk_phy_pdata mt8195_pdata = {
+ 	.sw_pll_48m_to_26m = true,
++	.sw_efuse_supported = true,
+ 	.version = MTK_PHY_V3,
+ };
+ 
+diff --git a/drivers/phy/socionext/phy-uniphier-usb3ss.c b/drivers/phy/socionext/phy-uniphier-usb3ss.c
+index 6700645bcbe6b..3b5ffc16a6947 100644
+--- a/drivers/phy/socionext/phy-uniphier-usb3ss.c
++++ b/drivers/phy/socionext/phy-uniphier-usb3ss.c
+@@ -22,11 +22,13 @@
+ #include <linux/reset.h>
+ 
+ #define SSPHY_TESTI		0x0
+-#define SSPHY_TESTO		0x4
+ #define TESTI_DAT_MASK		GENMASK(13, 6)
+ #define TESTI_ADR_MASK		GENMASK(5, 1)
+ #define TESTI_WR_EN		BIT(0)
+ 
++#define SSPHY_TESTO		0x4
++#define TESTO_DAT_MASK		GENMASK(7, 0)
++
+ #define PHY_F(regno, msb, lsb) { (regno), (msb), (lsb) }
+ 
+ #define CDR_CPD_TRIM	PHY_F(7, 3, 0)	/* RxPLL charge pump current */
+@@ -84,12 +86,12 @@ static void uniphier_u3ssphy_set_param(struct uniphier_u3ssphy_priv *priv,
+ 	val  = FIELD_PREP(TESTI_DAT_MASK, 1);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, p->field.reg_no);
+ 	uniphier_u3ssphy_testio_write(priv, val);
+-	val = readl(priv->base + SSPHY_TESTO);
++	val = readl(priv->base + SSPHY_TESTO) & TESTO_DAT_MASK;
+ 
+ 	/* update value */
+-	val &= ~FIELD_PREP(TESTI_DAT_MASK, field_mask);
++	val &= ~field_mask;
+ 	data = field_mask & (p->value << p->field.lsb);
+-	val  = FIELD_PREP(TESTI_DAT_MASK, data);
++	val  = FIELD_PREP(TESTI_DAT_MASK, data | val);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, p->field.reg_no);
+ 	uniphier_u3ssphy_testio_write(priv, val);
+ 	uniphier_u3ssphy_testio_write(priv, val | TESTI_WR_EN);
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index 53779822348da..e1ae3beb9f72b 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -815,6 +815,8 @@ static int mtk_pinconf_bias_get_rsel(struct mtk_pinctrl *hw,
+ 		goto out;
+ 
+ 	err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_PD, &pd);
++	if (err)
++		goto out;
+ 
+ 	if (pu == 0 && pd == 0) {
+ 		*pullup = 0;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index d4e02c5d74a89..4c6f6d967b18a 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -581,7 +581,7 @@ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
+ {
+ 	int pinmux, pullup, pullen, len = 0, r1 = -1, r0 = -1, rsel = -1;
+ 	const struct mtk_pin_desc *desc;
+-	u32 try_all_type;
++	u32 try_all_type = 0;
+ 
+ 	if (gpio >= hw->soc->npins)
+ 		return -EINVAL;
+diff --git a/drivers/pinctrl/pinctrl-apple-gpio.c b/drivers/pinctrl/pinctrl-apple-gpio.c
+index a7861079a6502..c772e31d21223 100644
+--- a/drivers/pinctrl/pinctrl-apple-gpio.c
++++ b/drivers/pinctrl/pinctrl-apple-gpio.c
+@@ -114,7 +114,7 @@ static int apple_gpio_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 		dev_err(pctl->dev,
+ 			"missing or empty pinmux property in node %pOFn.\n",
+ 			node);
+-		return ret;
++		return ret ? ret : -EINVAL;
+ 	}
+ 
+ 	num_pins = ret;
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 5ce260f152ce5..dc52da94af0b9 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -2748,7 +2748,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, info);
+ 
+-	ret = of_platform_populate(np, rockchip_bank_match, NULL, NULL);
++	ret = of_platform_populate(np, NULL, NULL, &pdev->dev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register gpio device\n");
+ 		return ret;
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index c34341f4da763..02aba274c4bc2 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -57,6 +57,11 @@ static_assert(sizeof(typeof_member(struct guid_block, guid)) == 16);
+ static_assert(sizeof(struct guid_block) == 20);
+ static_assert(__alignof__(struct guid_block) == 1);
+ 
++enum {	/* wmi_block flags */
++	WMI_READ_TAKES_NO_ARGS,
++	WMI_PROBED,
++};
++
+ struct wmi_block {
+ 	struct wmi_device dev;
+ 	struct list_head list;
+@@ -67,8 +72,7 @@ struct wmi_block {
+ 	wmi_notify_handler handler;
+ 	void *handler_data;
+ 	u64 req_buf_size;
+-
+-	bool read_takes_no_args;
++	unsigned long flags;
+ };
+ 
+ 
+@@ -367,7 +371,7 @@ static acpi_status __query_block(struct wmi_block *wblock, u8 instance,
+ 	wq_params[0].type = ACPI_TYPE_INTEGER;
+ 	wq_params[0].integer.value = instance;
+ 
+-	if (instance == 0 && wblock->read_takes_no_args)
++	if (instance == 0 && test_bit(WMI_READ_TAKES_NO_ARGS, &wblock->flags))
+ 		input.count = 0;
+ 
+ 	/*
+@@ -1005,6 +1009,7 @@ static int wmi_dev_probe(struct device *dev)
+ 		}
+ 	}
+ 
++	set_bit(WMI_PROBED, &wblock->flags);
+ 	return 0;
+ 
+ probe_misc_failure:
+@@ -1022,6 +1027,8 @@ static void wmi_dev_remove(struct device *dev)
+ 	struct wmi_block *wblock = dev_to_wblock(dev);
+ 	struct wmi_driver *wdriver = drv_to_wdrv(dev->driver);
+ 
++	clear_bit(WMI_PROBED, &wblock->flags);
++
+ 	if (wdriver->filter_callback) {
+ 		misc_deregister(&wblock->char_dev);
+ 		kfree(wblock->char_dev.name);
+@@ -1113,7 +1120,7 @@ static int wmi_create_device(struct device *wmi_bus_dev,
+ 	 * laptops, WQxx may not be a method at all.)
+ 	 */
+ 	if (info->type != ACPI_TYPE_METHOD || info->param_count == 0)
+-		wblock->read_takes_no_args = true;
++		set_bit(WMI_READ_TAKES_NO_ARGS, &wblock->flags);
+ 
+ 	kfree(info);
+ 
+@@ -1319,7 +1326,7 @@ static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
+ 		return;
+ 
+ 	/* If a driver is bound, then notify the driver. */
+-	if (wblock->dev.dev.driver) {
++	if (test_bit(WMI_PROBED, &wblock->flags) && wblock->dev.dev.driver) {
+ 		struct wmi_driver *driver = drv_to_wdrv(wblock->dev.dev.driver);
+ 		struct acpi_buffer evdata = { ACPI_ALLOCATE_BUFFER, NULL };
+ 		acpi_status status;
+diff --git a/drivers/power/reset/mt6323-poweroff.c b/drivers/power/reset/mt6323-poweroff.c
+index 0532803e6cbc4..d90e76fcb9383 100644
+--- a/drivers/power/reset/mt6323-poweroff.c
++++ b/drivers/power/reset/mt6323-poweroff.c
+@@ -57,6 +57,9 @@ static int mt6323_pwrc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
++
+ 	pwrc->base = res->start;
+ 	pwrc->regmap = mt6397_chip->regmap;
+ 	pwrc->dev = &pdev->dev;
+diff --git a/drivers/ptp/ptp_vclock.c b/drivers/ptp/ptp_vclock.c
+index baee0379482bc..ab1d233173e13 100644
+--- a/drivers/ptp/ptp_vclock.c
++++ b/drivers/ptp/ptp_vclock.c
+@@ -185,8 +185,8 @@ out:
+ }
+ EXPORT_SYMBOL(ptp_get_vclocks_index);
+ 
+-void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps,
+-			   int vclock_index)
++ktime_t ptp_convert_timestamp(const struct skb_shared_hwtstamps *hwtstamps,
++			      int vclock_index)
+ {
+ 	char name[PTP_CLOCK_NAME_LEN] = "";
+ 	struct ptp_vclock *vclock;
+@@ -198,12 +198,12 @@ void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps,
+ 	snprintf(name, PTP_CLOCK_NAME_LEN, "ptp%d", vclock_index);
+ 	dev = class_find_device_by_name(ptp_class, name);
+ 	if (!dev)
+-		return;
++		return 0;
+ 
+ 	ptp = dev_get_drvdata(dev);
+ 	if (!ptp->is_virtual_clock) {
+ 		put_device(dev);
+-		return;
++		return 0;
+ 	}
+ 
+ 	vclock = info_to_vclock(ptp->info);
+@@ -215,7 +215,7 @@ void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps,
+ 	spin_unlock_irqrestore(&vclock->lock, flags);
+ 
+ 	put_device(dev);
+-	hwtstamps->hwtstamp = ns_to_ktime(ns);
++	return ns_to_ktime(ns);
+ }
+ EXPORT_SYMBOL(ptp_convert_timestamp);
+ #endif
+diff --git a/drivers/regulator/da9121-regulator.c b/drivers/regulator/da9121-regulator.c
+index e669250902580..0a4fd449c27d1 100644
+--- a/drivers/regulator/da9121-regulator.c
++++ b/drivers/regulator/da9121-regulator.c
+@@ -253,6 +253,11 @@ static int da9121_set_current_limit(struct regulator_dev *rdev,
+ 		goto error;
+ 	}
+ 
++	if (rdev->desc->ops->is_enabled(rdev)) {
++		ret = -EBUSY;
++		goto error;
++	}
++
+ 	ret = da9121_ceiling_selector(rdev, min_ua, max_ua, &sel);
+ 	if (ret < 0)
+ 		goto error;
+diff --git a/drivers/regulator/qcom-labibb-regulator.c b/drivers/regulator/qcom-labibb-regulator.c
+index b3da0dc58782f..639b71eb41ffe 100644
+--- a/drivers/regulator/qcom-labibb-regulator.c
++++ b/drivers/regulator/qcom-labibb-regulator.c
+@@ -260,7 +260,7 @@ static irqreturn_t qcom_labibb_ocp_isr(int irq, void *chip)
+ 
+ 	/* If the regulator is not enabled, this is a fake event */
+ 	if (!ops->is_enabled(vreg->rdev))
+-		return 0;
++		return IRQ_HANDLED;
+ 
+ 	/* If we tried to recover for too many times it's not getting better */
+ 	if (vreg->ocp_irq_count > LABIBB_MAX_OCP_COUNT)
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 8bac024dde8b4..9fc666107a06c 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -9,6 +9,7 @@
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/regulator/driver.h>
++#include <linux/regulator/of_regulator.h>
+ #include <linux/soc/qcom/smd-rpm.h>
+ 
+ struct qcom_rpm_reg {
+@@ -1239,52 +1240,91 @@ static const struct of_device_id rpm_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, rpm_of_match);
+ 
+-static int rpm_reg_probe(struct platform_device *pdev)
++/**
++ * rpm_regulator_init_vreg() - initialize all attributes of a qcom_smd-regulator
++ * @vreg:		Pointer to the individual qcom_smd-regulator resource
++ * @dev:		Pointer to the top level qcom_smd-regulator PMIC device
++ * @node:		Pointer to the individual qcom_smd-regulator resource
++ *			device node
++ * @rpm:		Pointer to the rpm bus node
++ * @pmic_rpm_data:	Pointer to a null-terminated array of qcom_smd-regulator
++ *			resources defined for the top level PMIC device
++ *
++ * Return: 0 on success, errno on failure
++ */
++static int rpm_regulator_init_vreg(struct qcom_rpm_reg *vreg, struct device *dev,
++				   struct device_node *node, struct qcom_smd_rpm *rpm,
++				   const struct rpm_regulator_data *pmic_rpm_data)
+ {
+-	const struct rpm_regulator_data *reg;
+-	const struct of_device_id *match;
+-	struct regulator_config config = { };
++	struct regulator_config config = {};
++	const struct rpm_regulator_data *rpm_data;
+ 	struct regulator_dev *rdev;
++	int ret;
++
++	for (rpm_data = pmic_rpm_data; rpm_data->name; rpm_data++)
++		if (of_node_name_eq(node, rpm_data->name))
++			break;
++
++	if (!rpm_data->name) {
++		dev_err(dev, "Unknown regulator %pOFn\n", node);
++		return -EINVAL;
++	}
++
++	vreg->dev	= dev;
++	vreg->rpm	= rpm;
++	vreg->type	= rpm_data->type;
++	vreg->id	= rpm_data->id;
++
++	memcpy(&vreg->desc, rpm_data->desc, sizeof(vreg->desc));
++	vreg->desc.name = rpm_data->name;
++	vreg->desc.supply_name = rpm_data->supply;
++	vreg->desc.owner = THIS_MODULE;
++	vreg->desc.type = REGULATOR_VOLTAGE;
++	vreg->desc.of_match = rpm_data->name;
++
++	config.dev		= dev;
++	config.of_node		= node;
++	config.driver_data	= vreg;
++
++	rdev = devm_regulator_register(dev, &vreg->desc, &config);
++	if (IS_ERR(rdev)) {
++		ret = PTR_ERR(rdev);
++		dev_err(dev, "%pOFn: devm_regulator_register() failed, ret=%d\n", node, ret);
++		return ret;
++	}
++
++	return 0;
++}
++
++static int rpm_reg_probe(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	const struct rpm_regulator_data *vreg_data;
++	struct device_node *node;
+ 	struct qcom_rpm_reg *vreg;
+ 	struct qcom_smd_rpm *rpm;
++	int ret;
+ 
+ 	rpm = dev_get_drvdata(pdev->dev.parent);
+ 	if (!rpm) {
+-		dev_err(&pdev->dev, "unable to retrieve handle to rpm\n");
++		dev_err(&pdev->dev, "Unable to retrieve handle to rpm\n");
+ 		return -ENODEV;
+ 	}
+ 
+-	match = of_match_device(rpm_of_match, &pdev->dev);
+-	if (!match) {
+-		dev_err(&pdev->dev, "failed to match device\n");
++	vreg_data = of_device_get_match_data(dev);
++	if (!vreg_data)
+ 		return -ENODEV;
+-	}
+ 
+-	for (reg = match->data; reg->name; reg++) {
++	for_each_available_child_of_node(dev->of_node, node) {
+ 		vreg = devm_kzalloc(&pdev->dev, sizeof(*vreg), GFP_KERNEL);
+ 		if (!vreg)
+ 			return -ENOMEM;
+ 
+-		vreg->dev = &pdev->dev;
+-		vreg->type = reg->type;
+-		vreg->id = reg->id;
+-		vreg->rpm = rpm;
+-
+-		memcpy(&vreg->desc, reg->desc, sizeof(vreg->desc));
+-
+-		vreg->desc.id = -1;
+-		vreg->desc.owner = THIS_MODULE;
+-		vreg->desc.type = REGULATOR_VOLTAGE;
+-		vreg->desc.name = reg->name;
+-		vreg->desc.supply_name = reg->supply;
+-		vreg->desc.of_match = reg->name;
+-
+-		config.dev = &pdev->dev;
+-		config.driver_data = vreg;
+-		rdev = devm_regulator_register(&pdev->dev, &vreg->desc, &config);
+-		if (IS_ERR(rdev)) {
+-			dev_err(&pdev->dev, "failed to register %s\n", reg->name);
+-			return PTR_ERR(rdev);
++		ret = rpm_regulator_init_vreg(vreg, dev, node, rpm, vreg_data);
++
++		if (ret < 0) {
++			of_node_put(node);
++			return ret;
+ 		}
+ 	}
+ 
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index ff8170dbbc3c3..0a45bc0d3f73f 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -804,6 +804,7 @@ static int imx_rproc_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(priv->clk);
+ 	rproc_del(rproc);
+ 	imx_rproc_free_mbox(rproc);
++	destroy_workqueue(priv->workqueue);
+ 	rproc_free(rproc);
+ 
+ 	return 0;
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index d3eb60059ef16..4c6b4a587be55 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -540,13 +540,25 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	err = rpdrv->probe(rpdev);
+ 	if (err) {
+ 		dev_err(dev, "%s: failed: %d\n", __func__, err);
+-		if (ept)
+-			rpmsg_destroy_ept(ept);
+-		goto out;
++		goto destroy_ept;
+ 	}
+ 
+-	if (ept && rpdev->ops->announce_create)
++	if (ept && rpdev->ops->announce_create) {
+ 		err = rpdev->ops->announce_create(rpdev);
++		if (err) {
++			dev_err(dev, "failed to announce creation\n");
++			goto remove_rpdev;
++		}
++	}
++
++	return 0;
++
++remove_rpdev:
++	if (rpdrv->remove)
++		rpdrv->remove(rpdev);
++destroy_ept:
++	if (ept)
++		rpmsg_destroy_ept(ept);
+ out:
+ 	return err;
+ }
+diff --git a/drivers/rtc/dev.c b/drivers/rtc/dev.c
+index e104972a28fdf..69325aeede1a3 100644
+--- a/drivers/rtc/dev.c
++++ b/drivers/rtc/dev.c
+@@ -391,14 +391,14 @@ static long rtc_dev_ioctl(struct file *file,
+ 		}
+ 
+ 		switch(param.param) {
+-			long offset;
+ 		case RTC_PARAM_FEATURES:
+ 			if (param.index != 0)
+ 				err = -EINVAL;
+ 			param.uvalue = rtc->features[0];
+ 			break;
+ 
+-		case RTC_PARAM_CORRECTION:
++		case RTC_PARAM_CORRECTION: {
++			long offset;
+ 			mutex_unlock(&rtc->ops_lock);
+ 			if (param.index != 0)
+ 				return -EINVAL;
+@@ -407,7 +407,7 @@ static long rtc_dev_ioctl(struct file *file,
+ 			if (err == 0)
+ 				param.svalue = offset;
+ 			break;
+-
++		}
+ 		default:
+ 			if (rtc->ops->param_get)
+ 				err = rtc->ops->param_get(rtc->dev.parent, &param);
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 4eb53412b8085..dc3f8b0dde989 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -457,7 +457,10 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	min = t->time.tm_min;
+ 	sec = t->time.tm_sec;
+ 
++	spin_lock_irq(&rtc_lock);
+ 	rtc_control = CMOS_READ(RTC_CONTROL);
++	spin_unlock_irq(&rtc_lock);
++
+ 	if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ 		/* Writing 0xff means "don't care" or "match all".  */
+ 		mon = (mon <= 12) ? bin2bcd(mon) : 0xff;
+diff --git a/drivers/rtc/rtc-pxa.c b/drivers/rtc/rtc-pxa.c
+index d2f1d8f754bf3..cf8119b6d3204 100644
+--- a/drivers/rtc/rtc-pxa.c
++++ b/drivers/rtc/rtc-pxa.c
+@@ -330,6 +330,10 @@ static int __init pxa_rtc_probe(struct platform_device *pdev)
+ 	if (sa1100_rtc->irq_alarm < 0)
+ 		return -ENXIO;
+ 
++	sa1100_rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
++	if (IS_ERR(sa1100_rtc->rtc))
++		return PTR_ERR(sa1100_rtc->rtc);
++
+ 	pxa_rtc->base = devm_ioremap(dev, pxa_rtc->ress->start,
+ 				resource_size(pxa_rtc->ress));
+ 	if (!pxa_rtc->base) {
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index f206c433de325..8a13bc08d6575 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1581,7 +1581,6 @@ void hisi_sas_controller_reset_prepare(struct hisi_hba *hisi_hba)
+ {
+ 	struct Scsi_Host *shost = hisi_hba->shost;
+ 
+-	down(&hisi_hba->sem);
+ 	hisi_hba->phy_state = hisi_hba->hw->get_phys_state(hisi_hba);
+ 
+ 	scsi_block_requests(shost);
+@@ -1606,9 +1605,9 @@ void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba)
+ 	if (hisi_hba->reject_stp_links_msk)
+ 		hisi_sas_terminate_stp_reject(hisi_hba);
+ 	hisi_sas_reset_init_all_devices(hisi_hba);
+-	up(&hisi_hba->sem);
+ 	scsi_unblock_requests(shost);
+ 	clear_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags);
++	up(&hisi_hba->sem);
+ 
+ 	hisi_sas_rescan_topology(hisi_hba, hisi_hba->phy_state);
+ }
+@@ -1619,8 +1618,11 @@ static int hisi_sas_controller_prereset(struct hisi_hba *hisi_hba)
+ 	if (!hisi_hba->hw->soft_reset)
+ 		return -1;
+ 
+-	if (test_and_set_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags))
++	down(&hisi_hba->sem);
++	if (test_and_set_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags)) {
++		up(&hisi_hba->sem);
+ 		return -1;
++	}
+ 
+ 	if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct)
+ 		hisi_hba->hw->debugfs_snapshot_regs(hisi_hba);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 0ef6c21bf0811..11a44d9dd9b2d 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -4848,6 +4848,7 @@ static void hisi_sas_reset_prepare_v3_hw(struct pci_dev *pdev)
+ 	int rc;
+ 
+ 	dev_info(dev, "FLR prepare\n");
++	down(&hisi_hba->sem);
+ 	set_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags);
+ 	hisi_sas_controller_reset_prepare(hisi_hba);
+ 
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 2f8e6d0a926fe..54c58392fd5d0 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -1023,7 +1023,6 @@ struct lpfc_hba {
+ #define HBA_DEVLOSS_TMO         0x2000 /* HBA in devloss timeout */
+ #define HBA_RRQ_ACTIVE		0x4000 /* process the rrq active list */
+ #define HBA_IOQ_FLUSH		0x8000 /* FCP/NVME I/O queues being flushed */
+-#define HBA_FW_DUMP_OP		0x10000 /* Skips fn reset before FW dump */
+ #define HBA_RECOVERABLE_UE	0x20000 /* Firmware supports recoverable UE */
+ #define HBA_FORCED_LINK_SPEED	0x40000 /*
+ 					 * Firmware supports Forced Link Speed
+@@ -1040,6 +1039,7 @@ struct lpfc_hba {
+ #define HBA_HBEAT_TMO		0x8000000 /* HBEAT initiated after timeout */
+ #define HBA_FLOGI_OUTSTANDING	0x10000000 /* FLOGI is outstanding */
+ 
++	struct completion *fw_dump_cmpl; /* cmpl event tracker for fw_dump */
+ 	uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
+ 	struct lpfc_dmabuf slim2p;
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index dd4c51b6ef4e2..7a7f17d71811b 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -1709,25 +1709,25 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
+ 	before_fc_flag = phba->pport->fc_flag;
+ 	sriov_nr_virtfn = phba->cfg_sriov_nr_virtfn;
+ 
+-	/* Disable SR-IOV virtual functions if enabled */
+-	if (phba->cfg_sriov_nr_virtfn) {
+-		pci_disable_sriov(pdev);
+-		phba->cfg_sriov_nr_virtfn = 0;
+-	}
++	if (opcode == LPFC_FW_DUMP) {
++		init_completion(&online_compl);
++		phba->fw_dump_cmpl = &online_compl;
++	} else {
++		/* Disable SR-IOV virtual functions if enabled */
++		if (phba->cfg_sriov_nr_virtfn) {
++			pci_disable_sriov(pdev);
++			phba->cfg_sriov_nr_virtfn = 0;
++		}
+ 
+-	if (opcode == LPFC_FW_DUMP)
+-		phba->hba_flag |= HBA_FW_DUMP_OP;
++		status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
+ 
+-	status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
++		if (status != 0)
++			return status;
+ 
+-	if (status != 0) {
+-		phba->hba_flag &= ~HBA_FW_DUMP_OP;
+-		return status;
++		/* wait for the device to be quiesced before firmware reset */
++		msleep(100);
+ 	}
+ 
+-	/* wait for the device to be quiesced before firmware reset */
+-	msleep(100);
+-
+ 	reg_val = readl(phba->sli4_hba.conf_regs_memmap_p +
+ 			LPFC_CTL_PDEV_CTL_OFFSET);
+ 
+@@ -1756,24 +1756,42 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ 				"3153 Fail to perform the requested "
+ 				"access: x%x\n", reg_val);
++		if (phba->fw_dump_cmpl)
++			phba->fw_dump_cmpl = NULL;
+ 		return rc;
+ 	}
+ 
+ 	/* keep the original port state */
+-	if (before_fc_flag & FC_OFFLINE_MODE)
+-		goto out;
+-
+-	init_completion(&online_compl);
+-	job_posted = lpfc_workq_post_event(phba, &status, &online_compl,
+-					   LPFC_EVT_ONLINE);
+-	if (!job_posted)
++	if (before_fc_flag & FC_OFFLINE_MODE) {
++		if (phba->fw_dump_cmpl)
++			phba->fw_dump_cmpl = NULL;
+ 		goto out;
++	}
+ 
+-	wait_for_completion(&online_compl);
++	/* Firmware dump will trigger an HA_ERATT event, and
++	 * lpfc_handle_eratt_s4 routine already handles bringing the port back
++	 * online.
++	 */
++	if (opcode == LPFC_FW_DUMP) {
++		wait_for_completion(phba->fw_dump_cmpl);
++	} else  {
++		init_completion(&online_compl);
++		job_posted = lpfc_workq_post_event(phba, &status, &online_compl,
++						   LPFC_EVT_ONLINE);
++		if (!job_posted)
++			goto out;
+ 
++		wait_for_completion(&online_compl);
++	}
+ out:
+ 	/* in any case, restore the virtual functions enabled as before */
+ 	if (sriov_nr_virtfn) {
++		/* If fw_dump was performed, first disable to clean up */
++		if (opcode == LPFC_FW_DUMP) {
++			pci_disable_sriov(pdev);
++			phba->cfg_sriov_nr_virtfn = 0;
++		}
++
+ 		sriov_err =
+ 			lpfc_sli_probe_sriov_nr_virtfn(phba, sriov_nr_virtfn);
+ 		if (!sriov_err)
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index e83453bea2aee..78024f11b794a 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -3538,11 +3538,6 @@ lpfc_issue_els_rscn(struct lpfc_vport *vport, uint8_t retry)
+ 		return 1;
+ 	}
+ 
+-	/* This will cause the callback-function lpfc_cmpl_els_cmd to
+-	 * trigger the release of node.
+-	 */
+-	if (!(vport->fc_flag & FC_PT2PT))
+-		lpfc_nlp_put(ndlp);
+ 	return 0;
+ }
+ 
+@@ -6899,6 +6894,7 @@ static int
+ lpfc_get_rdp_info(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context)
+ {
+ 	LPFC_MBOXQ_t *mbox = NULL;
++	struct lpfc_dmabuf *mp;
+ 	int rc;
+ 
+ 	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+@@ -6914,8 +6910,11 @@ lpfc_get_rdp_info(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context)
+ 	mbox->mbox_cmpl = lpfc_mbx_cmpl_rdp_page_a0;
+ 	mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context;
+ 	rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+-	if (rc == MBX_NOT_FINISHED)
++	if (rc == MBX_NOT_FINISHED) {
++		mp = (struct lpfc_dmabuf *)mbox->ctx_buf;
++		lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ 		goto issue_mbox_fail;
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 9fe6e5b386ce3..5e54ec503f18b 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -869,10 +869,16 @@ lpfc_work_done(struct lpfc_hba *phba)
+ 	if (phba->pci_dev_grp == LPFC_PCI_DEV_OC)
+ 		lpfc_sli4_post_async_mbox(phba);
+ 
+-	if (ha_copy & HA_ERATT)
++	if (ha_copy & HA_ERATT) {
+ 		/* Handle the error attention event */
+ 		lpfc_handle_eratt(phba);
+ 
++		if (phba->fw_dump_cmpl) {
++			complete(phba->fw_dump_cmpl);
++			phba->fw_dump_cmpl = NULL;
++		}
++	}
++
+ 	if (ha_copy & HA_MBATT)
+ 		lpfc_sli_handle_mb_event(phba);
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index ba17a8f740a95..7628b0634c57a 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -5373,8 +5373,10 @@ lpfc_sli4_async_link_evt(struct lpfc_hba *phba,
+ 	 */
+ 	if (!(phba->hba_flag & HBA_FCOE_MODE)) {
+ 		rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
+-		if (rc == MBX_NOT_FINISHED)
++		if (rc == MBX_NOT_FINISHED) {
++			lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ 			goto out_free_dmabuf;
++		}
+ 		return;
+ 	}
+ 	/*
+@@ -6337,8 +6339,10 @@ lpfc_sli4_async_fc_evt(struct lpfc_hba *phba, struct lpfc_acqe_fc_la *acqe_fc)
+ 	}
+ 
+ 	rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
+-	if (rc == MBX_NOT_FINISHED)
++	if (rc == MBX_NOT_FINISHED) {
++		lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ 		goto out_free_dmabuf;
++	}
+ 	return;
+ 
+ out_free_dmabuf:
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index 27263f02ab9f6..7d717a4ac14d1 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -322,6 +322,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ {
+ 	struct lpfc_hba    *phba = vport->phba;
+ 	struct lpfc_dmabuf *pcmd;
++	struct lpfc_dmabuf *mp;
+ 	uint64_t nlp_portwwn = 0;
+ 	uint32_t *lp;
+ 	IOCB_t *icmd;
+@@ -571,6 +572,11 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 		 * a default RPI.
+ 		 */
+ 		if (phba->sli_rev == LPFC_SLI_REV4) {
++			mp = (struct lpfc_dmabuf *)login_mbox->ctx_buf;
++			if (mp) {
++				lpfc_mbuf_free(phba, mp->virt, mp->phys);
++				kfree(mp);
++			}
+ 			mempool_free(login_mbox, phba->mbox_mem_pool);
+ 			login_mbox = NULL;
+ 		} else {
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 5dedb3de271d8..513a78d08b1d5 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -5046,12 +5046,6 @@ lpfc_sli4_brdreset(struct lpfc_hba *phba)
+ 	phba->fcf.fcf_flag = 0;
+ 	spin_unlock_irq(&phba->hbalock);
+ 
+-	/* SLI4 INTF 2: if FW dump is being taken skip INIT_PORT */
+-	if (phba->hba_flag & HBA_FW_DUMP_OP) {
+-		phba->hba_flag &= ~HBA_FW_DUMP_OP;
+-		return rc;
+-	}
+-
+ 	/* Now physically reset the device */
+ 	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+ 			"0389 Performing PCI function reset!\n");
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index 9787b53a2b598..2cc42432bd0c0 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -79,7 +79,8 @@ extern int prot_mask;
+ 
+ /* Operational queue management definitions */
+ #define MPI3MR_OP_REQ_Q_QD		512
+-#define MPI3MR_OP_REP_Q_QD		4096
++#define MPI3MR_OP_REP_Q_QD		1024
++#define MPI3MR_OP_REP_Q_QD4K		4096
+ #define MPI3MR_OP_REQ_Q_SEG_SIZE	4096
+ #define MPI3MR_OP_REP_Q_SEG_SIZE	4096
+ #define MPI3MR_MAX_SEG_LIST_SIZE	4096
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index aa5d877df6f83..2daf633ea2955 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -1278,7 +1278,7 @@ static void mpi3mr_free_op_req_q_segments(struct mpi3mr_ioc *mrioc, u16 q_idx)
+ 			mrioc->op_reply_qinfo[q_idx].q_segment_list = NULL;
+ 		}
+ 	} else
+-		size = mrioc->req_qinfo[q_idx].num_requests *
++		size = mrioc->req_qinfo[q_idx].segment_qd *
+ 		    mrioc->facts.op_req_sz;
+ 
+ 	for (j = 0; j < mrioc->req_qinfo[q_idx].num_segments; j++) {
+@@ -1565,6 +1565,8 @@ static int mpi3mr_create_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)
+ 
+ 	reply_qid = qidx + 1;
+ 	op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
++	if (!mrioc->pdev->revision)
++		op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
+ 	op_reply_q->ci = 0;
+ 	op_reply_q->ephase = 1;
+ 	atomic_set(&op_reply_q->pend_ios, 0);
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 124cb69740c67..4390c8b9170cd 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1325,7 +1325,9 @@ int pm8001_mpi_build_cmd(struct pm8001_hba_info *pm8001_ha,
+ 	int q_index = circularQ - pm8001_ha->inbnd_q_tbl;
+ 	int rv;
+ 
+-	WARN_ON(q_index >= PM8001_MAX_INB_NUM);
++	if (WARN_ON(q_index >= pm8001_ha->max_q_num))
++		return -EINVAL;
++
+ 	spin_lock_irqsave(&circularQ->iq_lock, flags);
+ 	rv = pm8001_mpi_msg_free_get(circularQ, pm8001_ha->iomb_size,
+ 			&pMessage);
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index f6af1562cba49..10e5bffc34aaf 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -201,11 +201,11 @@ void scsi_finish_command(struct scsi_cmnd *cmd)
+ 
+ 
+ /*
+- * 1024 is big enough for saturating the fast scsi LUN now
++ * 1024 is big enough for saturating fast SCSI LUNs.
+  */
+ int scsi_device_max_queue_depth(struct scsi_device *sdev)
+ {
+-	return max_t(int, sdev->host->can_queue, 1024);
++	return min_t(int, sdev->host->can_queue, 1024);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/scsi_debugfs.c b/drivers/scsi/scsi_debugfs.c
+index d9109771f274d..db8517f1a485a 100644
+--- a/drivers/scsi/scsi_debugfs.c
++++ b/drivers/scsi/scsi_debugfs.c
+@@ -9,6 +9,7 @@
+ static const char *const scsi_cmd_flags[] = {
+ 	SCSI_CMD_FLAG_NAME(TAGGED),
+ 	SCSI_CMD_FLAG_NAME(INITIALIZED),
++	SCSI_CMD_FLAG_NAME(LAST),
+ };
+ #undef SCSI_CMD_FLAG_NAME
+ 
+diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
+index b5a858c29488a..f06ca9d2a597d 100644
+--- a/drivers/scsi/scsi_pm.c
++++ b/drivers/scsi/scsi_pm.c
+@@ -181,7 +181,7 @@ static int sdev_runtime_resume(struct device *dev)
+ 	blk_pre_runtime_resume(sdev->request_queue);
+ 	if (pm && pm->runtime_resume)
+ 		err = pm->runtime_resume(dev);
+-	blk_post_runtime_resume(sdev->request_queue, err);
++	blk_post_runtime_resume(sdev->request_queue);
+ 
+ 	return err;
+ }
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 8e4af111c0787..f5a2eed543452 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -856,7 +856,7 @@ static void get_capabilities(struct scsi_cd *cd)
+ 
+ 
+ 	/* allocate transfer buffer */
+-	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
++	buffer = kmalloc(512, GFP_KERNEL);
+ 	if (!buffer) {
+ 		sr_printk(KERN_ERR, cd, "out of memory.\n");
+ 		return;
+diff --git a/drivers/scsi/sr_vendor.c b/drivers/scsi/sr_vendor.c
+index 1f988a1b9166f..a61635326ae0a 100644
+--- a/drivers/scsi/sr_vendor.c
++++ b/drivers/scsi/sr_vendor.c
+@@ -131,7 +131,7 @@ int sr_set_blocklength(Scsi_CD *cd, int blocklength)
+ 	if (cd->vendor == VENDOR_TOSHIBA)
+ 		density = (blocklength > 2048) ? 0x81 : 0x83;
+ 
+-	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
++	buffer = kmalloc(512, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+@@ -179,7 +179,7 @@ int sr_cd_check(struct cdrom_device_info *cdi)
+ 	if (cd->cdi.mask & CDC_MULTI_SESSION)
+ 		return 0;
+ 
+-	buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
++	buffer = kmalloc(512, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/scsi/ufs/tc-dwc-g210-pci.c b/drivers/scsi/ufs/tc-dwc-g210-pci.c
+index 679289e1a78e6..7b08e2e07cc5f 100644
+--- a/drivers/scsi/ufs/tc-dwc-g210-pci.c
++++ b/drivers/scsi/ufs/tc-dwc-g210-pci.c
+@@ -110,7 +110,6 @@ tc_dwc_g210_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		return err;
+ 	}
+ 
+-	pci_set_drvdata(pdev, hba);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_allow(&pdev->dev);
+ 
+diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c
+index 5393b5c9dd9c8..86a938075f308 100644
+--- a/drivers/scsi/ufs/ufs-mediatek.c
++++ b/drivers/scsi/ufs/ufs-mediatek.c
+@@ -557,7 +557,7 @@ static void ufs_mtk_init_va09_pwr_ctrl(struct ufs_hba *hba)
+ 	struct ufs_mtk_host *host = ufshcd_get_variant(hba);
+ 
+ 	host->reg_va09 = regulator_get(hba->dev, "va09");
+-	if (!host->reg_va09)
++	if (IS_ERR(host->reg_va09))
+ 		dev_info(hba->dev, "failed to get va09");
+ 	else
+ 		host->caps |= UFS_MTK_CAP_VA09_PWR_CTRL;
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index f725248ba57f4..f76692053ca17 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -538,8 +538,6 @@ ufshcd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		return err;
+ 	}
+ 
+-	pci_set_drvdata(pdev, hba);
+-
+ 	hba->vops = (struct ufs_hba_variant_ops *)id->driver_data;
+ 
+ 	err = ufshcd_init(hba, mmio_base, pdev->irq);
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index eaeae83b999fd..8b16bbbcb806c 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -361,8 +361,6 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ 		goto dealloc_host;
+ 	}
+ 
+-	platform_set_drvdata(pdev, hba);
+-
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 13c09dbd99b92..c94377aa82739 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1666,7 +1666,8 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
+ 	bool flush_result;
+ 	unsigned long flags;
+ 
+-	if (!ufshcd_is_clkgating_allowed(hba))
++	if (!ufshcd_is_clkgating_allowed(hba) ||
++	    !hba->clk_gating.is_initialized)
+ 		goto out;
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+ 	hba->clk_gating.active_reqs++;
+@@ -1826,7 +1827,7 @@ static void __ufshcd_release(struct ufs_hba *hba)
+ 
+ 	if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended ||
+ 	    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL ||
+-	    hba->outstanding_tasks ||
++	    hba->outstanding_tasks || !hba->clk_gating.is_initialized ||
+ 	    hba->active_uic_cmd || hba->uic_async_done ||
+ 	    hba->clk_gating.state == CLKS_OFF)
+ 		return;
+@@ -1961,11 +1962,15 @@ static void ufshcd_exit_clk_gating(struct ufs_hba *hba)
+ {
+ 	if (!hba->clk_gating.is_initialized)
+ 		return;
++
+ 	ufshcd_remove_clk_gating_sysfs(hba);
+-	cancel_work_sync(&hba->clk_gating.ungate_work);
+-	cancel_delayed_work_sync(&hba->clk_gating.gate_work);
+-	destroy_workqueue(hba->clk_gating.clk_gating_workq);
++
++	/* Ungate the clock if necessary. */
++	ufshcd_hold(hba, false);
+ 	hba->clk_gating.is_initialized = false;
++	ufshcd_release(hba);
++
++	destroy_workqueue(hba->clk_gating.clk_gating_workq);
+ }
+ 
+ /* Must be called with host lock acquired */
+@@ -9486,6 +9491,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 	struct device *dev = hba->dev;
+ 	char eh_wq_name[sizeof("ufs_eh_wq_00")];
+ 
++	/*
++	 * dev_set_drvdata() must be called before any callbacks are registered
++	 * that use dev_get_drvdata() (frequency scaling, clock scaling, hwmon,
++	 * sysfs).
++	 */
++	dev_set_drvdata(dev, hba);
++
+ 	if (!mmio_base) {
+ 		dev_err(hba->dev,
+ 		"Invalid memory reference for mmio_base is NULL\n");
+diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
+index b8d52d8d29dbb..8176380b02e6e 100644
+--- a/drivers/soc/imx/gpcv2.c
++++ b/drivers/soc/imx/gpcv2.c
+@@ -377,7 +377,7 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
+ 		}
+ 	}
+ 
+-	pm_runtime_put(domain->dev);
++	pm_runtime_put_sync_suspend(domain->dev);
+ 
+ 	return 0;
+ 
+@@ -734,6 +734,7 @@ static const struct imx_pgc_domain imx8mm_pgc_domains[] = {
+ 			.map = IMX8MM_VPUH1_A53_DOMAIN,
+ 		},
+ 		.pgc   = BIT(IMX8MM_PGC_VPUH1),
++		.keep_clocks = true,
+ 	},
+ 
+ 	[IMX8MM_POWER_DOMAIN_DISPMIX] = {
+diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
+index ca75b14931ec9..670cc82d17dc2 100644
+--- a/drivers/soc/mediatek/mtk-scpsys.c
++++ b/drivers/soc/mediatek/mtk-scpsys.c
+@@ -411,12 +411,17 @@ out:
+ 	return ret;
+ }
+ 
+-static void init_clks(struct platform_device *pdev, struct clk **clk)
++static int init_clks(struct platform_device *pdev, struct clk **clk)
+ {
+ 	int i;
+ 
+-	for (i = CLK_NONE + 1; i < CLK_MAX; i++)
++	for (i = CLK_NONE + 1; i < CLK_MAX; i++) {
+ 		clk[i] = devm_clk_get(&pdev->dev, clk_names[i]);
++		if (IS_ERR(clk[i]))
++			return PTR_ERR(clk[i]);
++	}
++
++	return 0;
+ }
+ 
+ static struct scp *init_scp(struct platform_device *pdev,
+@@ -426,7 +431,7 @@ static struct scp *init_scp(struct platform_device *pdev,
+ {
+ 	struct genpd_onecell_data *pd_data;
+ 	struct resource *res;
+-	int i, j;
++	int i, j, ret;
+ 	struct scp *scp;
+ 	struct clk *clk[CLK_MAX];
+ 
+@@ -481,7 +486,9 @@ static struct scp *init_scp(struct platform_device *pdev,
+ 
+ 	pd_data->num_domains = num;
+ 
+-	init_clks(pdev, clk);
++	ret = init_clks(pdev, clk);
++	if (ret)
++		return ERR_PTR(ret);
+ 
+ 	for (i = 0; i < num; i++) {
+ 		struct scp_domain *scpd = &scp->domains[i];
+diff --git a/drivers/soc/qcom/cpr.c b/drivers/soc/qcom/cpr.c
+index 1d818a8ba2089..e9b854ed1bdfd 100644
+--- a/drivers/soc/qcom/cpr.c
++++ b/drivers/soc/qcom/cpr.c
+@@ -1010,7 +1010,7 @@ static int cpr_interpolate(const struct corner *corner, int step_volt,
+ 		return corner->uV;
+ 
+ 	temp = f_diff * (uV_high - uV_low);
+-	do_div(temp, f_high - f_low);
++	temp = div64_ul(temp, f_high - f_low);
+ 
+ 	/*
+ 	 * max_volt_scale has units of uV/MHz while freq values
+diff --git a/drivers/soc/ti/pruss.c b/drivers/soc/ti/pruss.c
+index 49da387d77494..b36779309e49b 100644
+--- a/drivers/soc/ti/pruss.c
++++ b/drivers/soc/ti/pruss.c
+@@ -129,7 +129,7 @@ static int pruss_clk_init(struct pruss *pruss, struct device_node *cfg_node)
+ 
+ 	clks_np = of_get_child_by_name(cfg_node, "clocks");
+ 	if (!clks_np) {
+-		dev_err(dev, "%pOF is missing its 'clocks' node\n", clks_np);
++		dev_err(dev, "%pOF is missing its 'clocks' node\n", cfg_node);
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index e2affaee4e769..079d0cb783ee3 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -168,6 +168,30 @@ static void handle_fifo_timeout(struct spi_master *spi,
+ 	}
+ }
+ 
++static void handle_gpi_timeout(struct spi_master *spi, struct spi_message *msg)
++{
++	struct spi_geni_master *mas = spi_master_get_devdata(spi);
++
++	dmaengine_terminate_sync(mas->tx);
++	dmaengine_terminate_sync(mas->rx);
++}
++
++static void spi_geni_handle_err(struct spi_master *spi, struct spi_message *msg)
++{
++	struct spi_geni_master *mas = spi_master_get_devdata(spi);
++
++	switch (mas->cur_xfer_mode) {
++	case GENI_SE_FIFO:
++		handle_fifo_timeout(spi, msg);
++		break;
++	case GENI_GPI_DMA:
++		handle_gpi_timeout(spi, msg);
++		break;
++	default:
++		dev_err(mas->dev, "Abort on Mode:%d not supported", mas->cur_xfer_mode);
++	}
++}
++
+ static bool spi_geni_is_abort_still_pending(struct spi_geni_master *mas)
+ {
+ 	struct geni_se *se = &mas->se;
+@@ -350,17 +374,21 @@ spi_gsi_callback_result(void *cb, const struct dmaengine_result *result)
+ {
+ 	struct spi_master *spi = cb;
+ 
++	spi->cur_msg->status = -EIO;
+ 	if (result->result != DMA_TRANS_NOERROR) {
+ 		dev_err(&spi->dev, "DMA txn failed: %d\n", result->result);
++		spi_finalize_current_transfer(spi);
+ 		return;
+ 	}
+ 
+ 	if (!result->residue) {
++		spi->cur_msg->status = 0;
+ 		dev_dbg(&spi->dev, "DMA txn completed\n");
+-		spi_finalize_current_transfer(spi);
+ 	} else {
+ 		dev_err(&spi->dev, "DMA xfer has pending: %d\n", result->residue);
+ 	}
++
++	spi_finalize_current_transfer(spi);
+ }
+ 
+ static int setup_gsi_xfer(struct spi_transfer *xfer, struct spi_geni_master *mas,
+@@ -922,7 +950,7 @@ static int spi_geni_probe(struct platform_device *pdev)
+ 	spi->can_dma = geni_can_dma;
+ 	spi->dma_map_dev = dev->parent;
+ 	spi->auto_runtime_pm = true;
+-	spi->handle_err = handle_fifo_timeout;
++	spi->handle_err = spi_geni_handle_err;
+ 	spi->use_gpio_descriptors = true;
+ 
+ 	init_completion(&mas->cs_done);
+diff --git a/drivers/spi/spi-hisi-kunpeng.c b/drivers/spi/spi-hisi-kunpeng.c
+index 58b823a16fc4d..525cc0143a305 100644
+--- a/drivers/spi/spi-hisi-kunpeng.c
++++ b/drivers/spi/spi-hisi-kunpeng.c
+@@ -127,7 +127,6 @@ struct hisi_spi {
+ 	void __iomem		*regs;
+ 	int			irq;
+ 	u32			fifo_len; /* depth of the FIFO buffer */
+-	u16			bus_num;
+ 
+ 	/* Current message transfer state info */
+ 	const void		*tx;
+@@ -165,7 +164,10 @@ static int hisi_spi_debugfs_init(struct hisi_spi *hs)
+ {
+ 	char name[32];
+ 
+-	snprintf(name, 32, "hisi_spi%d", hs->bus_num);
++	struct spi_controller *master;
++
++	master = container_of(hs->dev, struct spi_controller, dev);
++	snprintf(name, 32, "hisi_spi%d", master->bus_num);
+ 	hs->debugfs = debugfs_create_dir(name, NULL);
+ 	if (!hs->debugfs)
+ 		return -ENOMEM;
+@@ -467,7 +469,6 @@ static int hisi_spi_probe(struct platform_device *pdev)
+ 	hs = spi_controller_get_devdata(master);
+ 	hs->dev = dev;
+ 	hs->irq = irq;
+-	hs->bus_num = pdev->id;
+ 
+ 	hs->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(hs->regs))
+@@ -490,7 +491,7 @@ static int hisi_spi_probe(struct platform_device *pdev)
+ 	master->use_gpio_descriptors = true;
+ 	master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
+-	master->bus_num = hs->bus_num;
++	master->bus_num = pdev->id;
+ 	master->setup = hisi_spi_setup;
+ 	master->cleanup = hisi_spi_cleanup;
+ 	master->transfer_one = hisi_spi_transfer_one;
+@@ -506,15 +507,15 @@ static int hisi_spi_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	if (hisi_spi_debugfs_init(hs))
+-		dev_info(dev, "failed to create debugfs dir\n");
+-
+ 	ret = spi_register_controller(master);
+ 	if (ret) {
+ 		dev_err(dev, "failed to register spi master, ret=%d\n", ret);
+ 		return ret;
+ 	}
+ 
++	if (hisi_spi_debugfs_init(hs))
++		dev_info(dev, "failed to create debugfs dir\n");
++
+ 	dev_info(dev, "hw version:0x%x max-freq:%u kHz\n",
+ 		readl(hs->regs + HISI_SPI_VERSION),
+ 		master->max_speed_hz / 1000);
+diff --git a/drivers/spi/spi-meson-spifc.c b/drivers/spi/spi-meson-spifc.c
+index 8eca6f24cb799..c8ed7815c4ba6 100644
+--- a/drivers/spi/spi-meson-spifc.c
++++ b/drivers/spi/spi-meson-spifc.c
+@@ -349,6 +349,7 @@ static int meson_spifc_probe(struct platform_device *pdev)
+ 	return 0;
+ out_clk:
+ 	clk_disable_unprepare(spifc->clk);
++	pm_runtime_disable(spifc->dev);
+ out_err:
+ 	spi_master_put(master);
+ 	return ret;
+diff --git a/drivers/spi/spi-uniphier.c b/drivers/spi/spi-uniphier.c
+index 8900e51e1a1cc..342ee8d2c4761 100644
+--- a/drivers/spi/spi-uniphier.c
++++ b/drivers/spi/spi-uniphier.c
+@@ -767,12 +767,13 @@ out_master_put:
+ 
+ static int uniphier_spi_remove(struct platform_device *pdev)
+ {
+-	struct uniphier_spi_priv *priv = platform_get_drvdata(pdev);
++	struct spi_master *master = platform_get_drvdata(pdev);
++	struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
+ 
+-	if (priv->master->dma_tx)
+-		dma_release_channel(priv->master->dma_tx);
+-	if (priv->master->dma_rx)
+-		dma_release_channel(priv->master->dma_rx);
++	if (master->dma_tx)
++		dma_release_channel(master->dma_tx);
++	if (master->dma_rx)
++		dma_release_channel(master->dma_rx);
+ 
+ 	clk_disable_unprepare(priv->clk);
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index fdd530b150a7a..8ba87b7f8f1a8 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -947,12 +947,9 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
+ 	spi->controller->last_cs_enable = enable;
+ 	spi->controller->last_cs_mode_high = spi->mode & SPI_CS_HIGH;
+ 
+-	if (spi->cs_gpiod || gpio_is_valid(spi->cs_gpio) ||
+-	    !spi->controller->set_cs_timing) {
+-		if (activate)
+-			spi_delay_exec(&spi->cs_setup, NULL);
+-		else
+-			spi_delay_exec(&spi->cs_hold, NULL);
++	if ((spi->cs_gpiod || gpio_is_valid(spi->cs_gpio) ||
++	    !spi->controller->set_cs_timing) && !activate) {
++		spi_delay_exec(&spi->cs_hold, NULL);
+ 	}
+ 
+ 	if (spi->mode & SPI_CS_HIGH)
+@@ -994,7 +991,9 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
+ 
+ 	if (spi->cs_gpiod || gpio_is_valid(spi->cs_gpio) ||
+ 	    !spi->controller->set_cs_timing) {
+-		if (!activate)
++		if (activate)
++			spi_delay_exec(&spi->cs_setup, NULL);
++		else
+ 			spi_delay_exec(&spi->cs_inactive, NULL);
+ 	}
+ }
+diff --git a/drivers/staging/greybus/audio_topology.c b/drivers/staging/greybus/audio_topology.c
+index 7f7d558b76d04..62d7674852bec 100644
+--- a/drivers/staging/greybus/audio_topology.c
++++ b/drivers/staging/greybus/audio_topology.c
+@@ -147,6 +147,9 @@ static const char **gb_generate_enum_strings(struct gbaudio_module_info *gb,
+ 
+ 	items = le32_to_cpu(gbenum->items);
+ 	strings = devm_kcalloc(gb->dev, items, sizeof(char *), GFP_KERNEL);
++	if (!strings)
++		return NULL;
++
+ 	data = gbenum->names;
+ 
+ 	for (i = 0; i < items; i++) {
+@@ -655,6 +658,8 @@ static int gbaudio_tplg_create_enum_kctl(struct gbaudio_module_info *gb,
+ 	/* since count=1, and reg is dummy */
+ 	gbe->items = le32_to_cpu(gb_enum->items);
+ 	gbe->texts = gb_generate_enum_strings(gb, gb_enum);
++	if (!gbe->texts)
++		return -ENOMEM;
+ 
+ 	/* debug enum info */
+ 	dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->items,
+@@ -862,6 +867,8 @@ static int gbaudio_tplg_create_enum_ctl(struct gbaudio_module_info *gb,
+ 	/* since count=1, and reg is dummy */
+ 	gbe->items = le32_to_cpu(gb_enum->items);
+ 	gbe->texts = gb_generate_enum_strings(gb, gb_enum);
++	if (!gbe->texts)
++		return -ENOMEM;
+ 
+ 	/* debug enum info */
+ 	dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->items,
+@@ -1072,6 +1079,10 @@ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module,
+ 			csize += le16_to_cpu(gbenum->names_length);
+ 			control->texts = (const char * const *)
+ 				gb_generate_enum_strings(module, gbenum);
++			if (!control->texts) {
++				ret = -ENOMEM;
++				goto error;
++			}
+ 			control->items = le32_to_cpu(gbenum->items);
+ 		} else {
+ 			csize = sizeof(struct gb_audio_control);
+@@ -1181,6 +1192,10 @@ static int gbaudio_tplg_process_kcontrols(struct gbaudio_module_info *module,
+ 			csize += le16_to_cpu(gbenum->names_length);
+ 			control->texts = (const char * const *)
+ 				gb_generate_enum_strings(module, gbenum);
++			if (!control->texts) {
++				ret = -ENOMEM;
++				goto error;
++			}
+ 			control->items = le32_to_cpu(gbenum->items);
+ 		} else {
+ 			csize = sizeof(struct gb_audio_control);
+diff --git a/drivers/staging/media/atomisp/i2c/ov2680.h b/drivers/staging/media/atomisp/i2c/ov2680.h
+index 874115f35fcad..798b28e134b64 100644
+--- a/drivers/staging/media/atomisp/i2c/ov2680.h
++++ b/drivers/staging/media/atomisp/i2c/ov2680.h
+@@ -289,8 +289,6 @@ static struct ov2680_reg const ov2680_global_setting[] = {
+  */
+ static struct ov2680_reg const ov2680_QCIF_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -334,8 +332,6 @@ static struct ov2680_reg const ov2680_QCIF_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_CIF_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -377,8 +373,6 @@ static struct ov2680_reg const ov2680_CIF_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_QVGA_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -420,8 +414,6 @@ static struct ov2680_reg const ov2680_QVGA_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_656x496_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x24},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -463,8 +455,6 @@ static struct ov2680_reg const ov2680_656x496_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_720x592_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x26},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0x00}, // X_ADDR_START;
+ 	{0x3802, 0x00},
+@@ -508,8 +498,6 @@ static struct ov2680_reg const ov2680_720x592_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_800x600_30fps[] = {
+ 	{0x3086, 0x01},
+-	{0x3501, 0x26},
+-	{0x3502, 0x40},
+ 	{0x370a, 0x23},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+@@ -551,8 +539,6 @@ static struct ov2680_reg const ov2680_800x600_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_720p_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -594,8 +580,6 @@ static struct ov2680_reg const ov2680_720p_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_1296x976_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0xa0},
+ 	{0x3802, 0x00},
+@@ -637,8 +621,6 @@ static struct ov2680_reg const ov2680_1296x976_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_1456x1096_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x90},
+ 	{0x3802, 0x00},
+@@ -682,8 +664,6 @@ static struct ov2680_reg const ov2680_1456x1096_30fps[] = {
+ 
+ static struct ov2680_reg const ov2680_1616x916_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+@@ -726,8 +706,6 @@ static struct ov2680_reg const ov2680_1616x916_30fps[] = {
+ #if 0
+ static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+@@ -769,8 +747,6 @@ static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
+  */
+ static struct ov2680_reg const ov2680_1616x1216_30fps[] = {
+ 	{0x3086, 0x00},
+-	{0x3501, 0x48},
+-	{0x3502, 0xe0},
+ 	{0x370a, 0x21},
+ 	{0x3801, 0x00},
+ 	{0x3802, 0x00},
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_cmd.c b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+index 366161cff5602..ef0b0963cf930 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_cmd.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
+@@ -1715,6 +1715,12 @@ void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe,
+ {
+ 	unsigned long next;
+ 
++	if (!pipe->asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return;
++	}
++
+ 	if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY)
+ 		pipe->wdt_duration = delay;
+ 
+@@ -1777,6 +1783,12 @@ void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay)
+ /* ISP2401 */
+ void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync)
+ {
++	if (!pipe->asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return;
++	}
++
+ 	if (!atomisp_is_wdt_running(pipe))
+ 		return;
+ 
+@@ -4109,6 +4121,12 @@ void atomisp_handle_parameter_and_buffer(struct atomisp_video_pipe *pipe)
+ 	unsigned long irqflags;
+ 	bool need_to_enqueue_buffer = false;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return;
++	}
++
+ 	if (atomisp_is_vf_pipe(pipe))
+ 		return;
+ 
+@@ -4196,6 +4214,12 @@ int atomisp_set_parameters(struct video_device *vdev,
+ 	struct atomisp_css_params *css_param = &asd->params.css_param;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].stream) {
+ 		dev_err(asd->isp->dev, "%s: internal error!\n", __func__);
+ 		return -EINVAL;
+@@ -4857,6 +4881,12 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
+ 	int source_pad = atomisp_subdev_source_pad(vdev);
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!isp->inputs[asd->input_curr].camera)
+ 		return -EINVAL;
+ 
+@@ -5194,10 +5224,17 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
+ 	int (*configure_pp_input)(struct atomisp_sub_device *asd,
+ 				  unsigned int width, unsigned int height) =
+ 				      configure_pp_input_nop;
+-	u16 stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
++	u16 stream_index;
+ 	const struct atomisp_in_fmt_conv *fc;
+ 	int ret, i;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++	stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
++
+ 	v4l2_fh_init(&fh.vfh, vdev);
+ 
+ 	isp_sink_crop = atomisp_subdev_get_rect(
+@@ -5493,7 +5530,8 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
+ 				  unsigned int padding_w, unsigned int padding_h,
+ 				  unsigned int dvs_env_w, unsigned int dvs_env_h)
+ {
+-	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
++	struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
++	struct atomisp_sub_device *asd = pipe->asd;
+ 	const struct atomisp_format_bridge *format;
+ 	struct v4l2_subdev_pad_config pad_cfg;
+ 	struct v4l2_subdev_state pad_state = {
+@@ -5504,7 +5542,7 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
+ 	};
+ 	struct v4l2_mbus_framefmt *ffmt = &vformat.format;
+ 	struct v4l2_mbus_framefmt *req_ffmt;
+-	struct atomisp_device *isp = asd->isp;
++	struct atomisp_device *isp;
+ 	struct atomisp_input_stream_info *stream_info =
+ 	    (struct atomisp_input_stream_info *)ffmt->reserved;
+ 	u16 stream_index = ATOMISP_INPUT_STREAM_GENERAL;
+@@ -5512,6 +5550,14 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
+ 	struct v4l2_subdev_fh fh;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
++	isp = asd->isp;
++
+ 	v4l2_fh_init(&fh.vfh, vdev);
+ 
+ 	stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
+@@ -5602,6 +5648,12 @@ int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f)
+ 	struct v4l2_subdev_fh fh;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (source_pad >= ATOMISP_SUBDEV_PADS_NUM)
+ 		return -EINVAL;
+ 
+@@ -6034,6 +6086,12 @@ int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f)
+ 	struct v4l2_subdev_fh fh;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	v4l2_fh_init(&fh.vfh, vdev);
+ 
+ 	dev_dbg(isp->dev, "setting fmt %ux%u 0x%x for file inject\n",
+@@ -6359,6 +6417,12 @@ bool atomisp_is_vf_pipe(struct atomisp_video_pipe *pipe)
+ {
+ 	struct atomisp_sub_device *asd = pipe->asd;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return false;
++	}
++
+ 	if (pipe == &asd->video_out_vf)
+ 		return true;
+ 
+@@ -6572,6 +6636,12 @@ static int atomisp_get_pipe_id(struct atomisp_video_pipe *pipe)
+ {
+ 	struct atomisp_sub_device *asd = pipe->asd;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, pipe->vdev.name);
++		return -EINVAL;
++	}
++
+ 	if (ATOMISP_USE_YUVPP(asd)) {
+ 		return IA_CSS_PIPE_ID_YUVPP;
+ 	} else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) {
+@@ -6609,6 +6679,12 @@ int atomisp_get_invalid_frame_num(struct video_device *vdev,
+ 	struct ia_css_pipe_info p_info;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (asd->isp->inputs[asd->input_curr].camera_caps->
+ 	    sensor[asd->sensor_curr].stream_num > 1) {
+ 		/* External ISP */
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_fops.c b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+index f82bf082aa796..18fff47bd25d2 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_fops.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+@@ -877,6 +877,11 @@ done:
+ 	else
+ 		pipe->users++;
+ 	rt_mutex_unlock(&isp->mutex);
++
++	/* Ensure that a mode is set */
++	if (asd)
++		v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
++
+ 	return 0;
+ 
+ css_error:
+@@ -1171,6 +1176,12 @@ static int atomisp_mmap(struct file *file, struct vm_area_struct *vma)
+ 	u32 origin_size, new_size;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!(vma->vm_flags & (VM_WRITE | VM_READ)))
+ 		return -EACCES;
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index d8c9e31314b2e..62dc06e224765 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -481,7 +481,7 @@ fail:
+ 
+ static u8 gmin_get_pmic_id_and_addr(struct device *dev)
+ {
+-	struct i2c_client *power;
++	struct i2c_client *power = NULL;
+ 	static u8 pmic_i2c_addr;
+ 
+ 	if (pmic_id)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+index c8a625667e81e..b7dda4b96d49c 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+@@ -646,6 +646,12 @@ static int atomisp_g_input(struct file *file, void *fh, unsigned int *input)
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 	*input = asd->input_curr;
+ 	rt_mutex_unlock(&isp->mutex);
+@@ -665,6 +671,12 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
+ 	struct v4l2_subdev *motor;
+ 	int ret;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 	if (input >= ATOM_ISP_MAX_INPUTS || input >= isp->input_cnt) {
+ 		dev_dbg(isp->dev, "input_cnt: %d\n", isp->input_cnt);
+@@ -761,18 +773,33 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
+ 	struct video_device *vdev = video_devdata(file);
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+-	struct v4l2_subdev_mbus_code_enum code = { 0 };
++	struct v4l2_subdev_mbus_code_enum code = {
++		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++	};
++	struct v4l2_subdev *camera;
+ 	unsigned int i, fi = 0;
+ 	int rval;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
++	camera = isp->inputs[asd->input_curr].camera;
++	if (!camera) {
++		dev_err(isp->dev, "%s(): camera is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+-	rval = v4l2_subdev_call(isp->inputs[asd->input_curr].camera, pad,
+-				enum_mbus_code, NULL, &code);
++
++	rval = v4l2_subdev_call(camera, pad, enum_mbus_code, NULL, &code);
+ 	if (rval == -ENOIOCTLCMD) {
+ 		dev_warn(isp->dev,
+-			 "enum_mbus_code pad op not supported. Please fix your sensor driver!\n");
+-		//	rval = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
+-		//				video, enum_mbus_fmt, 0, &code.code);
++			 "enum_mbus_code pad op not supported by %s. Please fix your sensor driver!\n",
++			 camera->name);
+ 	}
+ 	rt_mutex_unlock(&isp->mutex);
+ 
+@@ -802,6 +829,8 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
+ 		f->pixelformat = format->pixelformat;
+ 		return 0;
+ 	}
++	dev_err(isp->dev, "%s(): format for code %x not found.\n",
++		__func__, code.code);
+ 
+ 	return -EINVAL;
+ }
+@@ -834,6 +863,72 @@ static int atomisp_g_fmt_file(struct file *file, void *fh,
+ 	return 0;
+ }
+ 
++static int atomisp_adjust_fmt(struct v4l2_format *f)
++{
++	const struct atomisp_format_bridge *format_bridge;
++	u32 padded_width;
++
++	format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
++
++	padded_width = f->fmt.pix.width + pad_w;
++
++	if (format_bridge->planar) {
++		f->fmt.pix.bytesperline = padded_width;
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height *
++						  DIV_ROUND_UP(format_bridge->depth *
++						  padded_width, 8));
++	} else {
++		f->fmt.pix.bytesperline = DIV_ROUND_UP(format_bridge->depth *
++						      padded_width, 8);
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height * f->fmt.pix.bytesperline);
++	}
++
++	if (f->fmt.pix.field == V4L2_FIELD_ANY)
++		f->fmt.pix.field = V4L2_FIELD_NONE;
++
++	format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
++	if (!format_bridge)
++		return -EINVAL;
++
++	/* Currently, raw formats are broken!!! */
++	if (format_bridge->sh_fmt == IA_CSS_FRAME_FORMAT_RAW) {
++		f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
++
++		format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
++		if (!format_bridge)
++			return -EINVAL;
++	}
++
++	padded_width = f->fmt.pix.width + pad_w;
++
++	if (format_bridge->planar) {
++		f->fmt.pix.bytesperline = padded_width;
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height *
++						  DIV_ROUND_UP(format_bridge->depth *
++						  padded_width, 8));
++	} else {
++		f->fmt.pix.bytesperline = DIV_ROUND_UP(format_bridge->depth *
++						      padded_width, 8);
++		f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height * f->fmt.pix.bytesperline);
++	}
++
++	if (f->fmt.pix.field == V4L2_FIELD_ANY)
++		f->fmt.pix.field = V4L2_FIELD_NONE;
++
++	/*
++	 * FIXME: do we need to setup this differently, depending on the
++	 * sensor or the pipeline?
++	 */
++	f->fmt.pix.colorspace = V4L2_COLORSPACE_REC709;
++	f->fmt.pix.ycbcr_enc = V4L2_YCBCR_ENC_709;
++	f->fmt.pix.xfer_func = V4L2_XFER_FUNC_709;
++
++	f->fmt.pix.width -= pad_w;
++	f->fmt.pix.height -= pad_h;
++
++	return 0;
++}
++
+ /* This function looks up the closest available resolution. */
+ static int atomisp_try_fmt_cap(struct file *file, void *fh,
+ 			       struct v4l2_format *f)
+@@ -845,7 +940,11 @@ static int atomisp_try_fmt_cap(struct file *file, void *fh,
+ 	rt_mutex_lock(&isp->mutex);
+ 	ret = atomisp_try_fmt(vdev, &f->fmt.pix, NULL);
+ 	rt_mutex_unlock(&isp->mutex);
+-	return ret;
++
++	if (ret)
++		return ret;
++
++	return atomisp_adjust_fmt(f);
+ }
+ 
+ static int atomisp_s_fmt_cap(struct file *file, void *fh,
+@@ -1024,9 +1123,16 @@ int __atomisp_reqbufs(struct file *file, void *fh,
+ 	struct ia_css_frame *frame;
+ 	struct videobuf_vmalloc_memory *vm_mem;
+ 	u16 source_pad = atomisp_subdev_source_pad(vdev);
+-	u16 stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
++	u16 stream_id;
+ 	int ret = 0, i = 0;
+ 
++	if (!asd) {
++		dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++	stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
++
+ 	if (req->count == 0) {
+ 		mutex_lock(&pipe->capq.vb_lock);
+ 		if (!list_empty(&pipe->capq.stream))
+@@ -1154,6 +1260,12 @@ static int atomisp_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
+ 	u32 pgnr;
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 	if (isp->isp_fatal_error) {
+ 		ret = -EIO;
+@@ -1389,6 +1501,12 @@ static int atomisp_dqbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	rt_mutex_lock(&isp->mutex);
+ 
+ 	if (isp->isp_fatal_error) {
+@@ -1640,6 +1758,12 @@ static int atomisp_streamon(struct file *file, void *fh,
+ 	int ret = 0;
+ 	unsigned long irqflags;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	dev_dbg(isp->dev, "Start stream on pad %d for asd%d\n",
+ 		atomisp_subdev_source_pad(vdev), asd->index);
+ 
+@@ -1901,6 +2025,12 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
+ 	unsigned long flags;
+ 	bool first_streamoff = false;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	dev_dbg(isp->dev, "Stop stream on pad %d for asd%d\n",
+ 		atomisp_subdev_source_pad(vdev), asd->index);
+ 
+@@ -2150,6 +2280,12 @@ static int atomisp_g_ctrl(struct file *file, void *fh,
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	int i, ret = -EINVAL;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	for (i = 0; i < ctrls_num; i++) {
+ 		if (ci_v4l2_controls[i].id == control->id) {
+ 			ret = 0;
+@@ -2229,6 +2365,12 @@ static int atomisp_s_ctrl(struct file *file, void *fh,
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 	int i, ret = -EINVAL;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	for (i = 0; i < ctrls_num; i++) {
+ 		if (ci_v4l2_controls[i].id == control->id) {
+ 			ret = 0;
+@@ -2310,6 +2452,12 @@ static int atomisp_queryctl(struct file *file, void *fh,
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	switch (qc->id) {
+ 	case V4L2_CID_FOCUS_ABSOLUTE:
+ 	case V4L2_CID_FOCUS_RELATIVE:
+@@ -2355,6 +2503,12 @@ static int atomisp_camera_g_ext_ctrls(struct file *file, void *fh,
+ 	int i;
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!IS_ISP2401)
+ 		motor = isp->inputs[asd->input_curr].motor;
+ 	else
+@@ -2466,6 +2620,12 @@ static int atomisp_camera_s_ext_ctrls(struct file *file, void *fh,
+ 	int i;
+ 	int ret = 0;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (!IS_ISP2401)
+ 		motor = isp->inputs[asd->input_curr].motor;
+ 	else
+@@ -2591,6 +2751,12 @@ static int atomisp_g_parm(struct file *file, void *fh,
+ 	struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+ 	struct atomisp_device *isp = video_get_drvdata(vdev);
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ 		dev_err(isp->dev, "unsupported v4l2 buf type\n");
+ 		return -EINVAL;
+@@ -2613,6 +2779,12 @@ static int atomisp_s_parm(struct file *file, void *fh,
+ 	int rval;
+ 	int fps;
+ 
++	if (!asd) {
++		dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
++			__func__, vdev->name);
++		return -EINVAL;
++	}
++
+ 	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ 		dev_err(isp->dev, "unsupported v4l2 buf type\n");
+ 		return -EINVAL;
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.c b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
+index 12f22ad007c73..ffaf11e0b0ad8 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
+@@ -1164,23 +1164,28 @@ static int isp_subdev_init_entities(struct atomisp_sub_device *asd)
+ 
+ 	atomisp_init_acc_pipe(asd, &asd->video_acc);
+ 
+-	ret = atomisp_video_init(&asd->video_in, "MEMORY");
++	ret = atomisp_video_init(&asd->video_in, "MEMORY",
++				 ATOMISP_RUN_MODE_SDV);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE");
++	ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE",
++				 ATOMISP_RUN_MODE_STILL_CAPTURE);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_vf, "VIEWFINDER");
++	ret = atomisp_video_init(&asd->video_out_vf, "VIEWFINDER",
++				 ATOMISP_RUN_MODE_CONTINUOUS_CAPTURE);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_preview, "PREVIEW");
++	ret = atomisp_video_init(&asd->video_out_preview, "PREVIEW",
++				 ATOMISP_RUN_MODE_PREVIEW);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = atomisp_video_init(&asd->video_out_video_capture, "VIDEO");
++	ret = atomisp_video_init(&asd->video_out_video_capture, "VIDEO",
++				 ATOMISP_RUN_MODE_VIDEO);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.h b/drivers/staging/media/atomisp/pci/atomisp_subdev.h
+index d6fcfab6352d7..a8d210ea5f8be 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.h
++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.h
+@@ -81,6 +81,9 @@ struct atomisp_video_pipe {
+ 	/* the link list to store per_frame parameters */
+ 	struct list_head per_frame_params;
+ 
++	/* Store here the initial run mode */
++	unsigned int default_run_mode;
++
+ 	unsigned int buffers_in_css;
+ 
+ 	/* irq_lock is used to protect video buffer state change operations and
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+index 1e324f1f656e5..14c39b8987c95 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+@@ -447,7 +447,8 @@ const struct atomisp_dfs_config dfs_config_cht_soc = {
+ 	.dfs_table_size = ARRAY_SIZE(dfs_rules_cht_soc),
+ };
+ 
+-int atomisp_video_init(struct atomisp_video_pipe *video, const char *name)
++int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
++		       unsigned int run_mode)
+ {
+ 	int ret;
+ 	const char *direction;
+@@ -478,6 +479,7 @@ int atomisp_video_init(struct atomisp_video_pipe *video, const char *name)
+ 		 "ATOMISP ISP %s %s", name, direction);
+ 	video->vdev.release = video_device_release_empty;
+ 	video_set_drvdata(&video->vdev, video->isp);
++	video->default_run_mode = run_mode;
+ 
+ 	return 0;
+ }
+@@ -711,15 +713,15 @@ static int atomisp_mrfld_power(struct atomisp_device *isp, bool enable)
+ 
+ 	dev_dbg(isp->dev, "IUNIT power-%s.\n", enable ? "on" : "off");
+ 
+-	/*WA:Enable DVFS*/
++	/* WA for P-Unit, if DVFS enabled, ISP timeout observed */
+ 	if (IS_CHT && enable)
+-		punit_ddr_dvfs_enable(true);
++		punit_ddr_dvfs_enable(false);
+ 
+ 	/*
+ 	 * FIXME:WA for ECS28A, with this sleep, CTS
+ 	 * android.hardware.camera2.cts.CameraDeviceTest#testCameraDeviceAbort
+ 	 * PASS, no impact on other platforms
+-	*/
++	 */
+ 	if (IS_BYT && enable)
+ 		msleep(10);
+ 
+@@ -727,7 +729,7 @@ static int atomisp_mrfld_power(struct atomisp_device *isp, bool enable)
+ 	iosf_mbi_modify(BT_MBI_UNIT_PMC, MBI_REG_READ, MRFLD_ISPSSPM0,
+ 			val, MRFLD_ISPSSPM0_ISPSSC_MASK);
+ 
+-	/*WA:Enable DVFS*/
++	/* WA:Enable DVFS */
+ 	if (IS_CHT && !enable)
+ 		punit_ddr_dvfs_enable(true);
+ 
+@@ -1182,6 +1184,7 @@ static void atomisp_unregister_entities(struct atomisp_device *isp)
+ 
+ 	v4l2_device_unregister(&isp->v4l2_dev);
+ 	media_device_unregister(&isp->media_dev);
++	media_device_cleanup(&isp->media_dev);
+ }
+ 
+ static int atomisp_register_entities(struct atomisp_device *isp)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.h b/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
+index 81bb356b81720..72611b8286a4a 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
+@@ -27,7 +27,8 @@ struct v4l2_device;
+ struct atomisp_device;
+ struct firmware;
+ 
+-int atomisp_video_init(struct atomisp_video_pipe *video, const char *name);
++int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
++		       unsigned int run_mode);
+ void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name);
+ void atomisp_video_unregister(struct atomisp_video_pipe *video);
+ void atomisp_acc_unregister(struct atomisp_acc_pipe *video);
+diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c
+index c4b35cbab3737..ba25d0da8b811 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css.c
++++ b/drivers/staging/media/atomisp/pci/sh_css.c
+@@ -522,6 +522,7 @@ ia_css_stream_input_format_bits_per_pixel(struct ia_css_stream *stream)
+ 	return bpp;
+ }
+ 
++/* TODO: move define to proper file in tools */
+ #define GP_ISEL_TPG_MODE 0x90058
+ 
+ #if !defined(ISP2401)
+@@ -573,12 +574,8 @@ sh_css_config_input_network(struct ia_css_stream *stream)
+ 		vblank_cycles = vblank_lines * (width + hblank_cycles);
+ 		sh_css_sp_configure_sync_gen(width, height, hblank_cycles,
+ 					     vblank_cycles);
+-		if (!IS_ISP2401) {
+-			if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG) {
+-				/* TODO: move define to proper file in tools */
+-				ia_css_device_store_uint32(GP_ISEL_TPG_MODE, 0);
+-			}
+-		}
++		if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG)
++			ia_css_device_store_uint32(GP_ISEL_TPG_MODE, 0);
+ 	}
+ 	ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+ 			    "sh_css_config_input_network() leave:\n");
+@@ -1009,16 +1006,14 @@ static bool sh_css_translate_stream_cfg_to_isys_stream_descr(
+ 	 * ia_css_isys_stream_capture_indication() instead of
+ 	 * ia_css_pipeline_sp_wait_for_isys_stream_N() as isp processing of
+ 	 * capture takes longer than getting an ISYS frame
+-	 *
+-	 * Only 2401 relevant ??
+ 	 */
+-#if 0 // FIXME: NOT USED on Yocto Aero
+-	isys_stream_descr->polling_mode
+-	    = early_polling ? INPUT_SYSTEM_POLL_ON_CAPTURE_REQUEST
+-	      : INPUT_SYSTEM_POLL_ON_WAIT_FOR_FRAME;
+-	ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+-			    "sh_css_translate_stream_cfg_to_isys_stream_descr() leave:\n");
+-#endif
++	if (IS_ISP2401) {
++		isys_stream_descr->polling_mode
++		    = early_polling ? INPUT_SYSTEM_POLL_ON_CAPTURE_REQUEST
++		      : INPUT_SYSTEM_POLL_ON_WAIT_FOR_FRAME;
++		ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
++				    "sh_css_translate_stream_cfg_to_isys_stream_descr() leave:\n");
++	}
+ 
+ 	return rc;
+ }
+@@ -1433,7 +1428,7 @@ static void start_pipe(
+ 
+ 	assert(me); /* all callers are in this file and call with non null argument */
+ 
+-	if (!IS_ISP2401) {
++	if (IS_ISP2401) {
+ 		coord = &me->config.internal_frame_origin_bqs_on_sctbl;
+ 		params = me->stream->isp_params_configs;
+ 	}
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_mipi.c b/drivers/staging/media/atomisp/pci/sh_css_mipi.c
+index 75489f7d75eec..c1f2f6151c5f8 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_mipi.c
++++ b/drivers/staging/media/atomisp/pci/sh_css_mipi.c
+@@ -374,17 +374,17 @@ static bool buffers_needed(struct ia_css_pipe *pipe)
+ {
+ 	if (!IS_ISP2401) {
+ 		if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_BUFFERED_SENSOR)
+-			return false;
+-		else
+ 			return true;
++		else
++			return false;
+ 	}
+ 
+ 	if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_BUFFERED_SENSOR ||
+ 	    pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG ||
+ 	    pipe->stream->config.mode == IA_CSS_INPUT_MODE_PRBS)
+-		return false;
++		return true;
+ 
+-	return true;
++	return false;
+ }
+ 
+ int
+@@ -423,14 +423,17 @@ allocate_mipi_frames(struct ia_css_pipe *pipe,
+ 		return 0; /* AM TODO: Check  */
+ 	}
+ 
+-	if (!IS_ISP2401)
++	if (!IS_ISP2401) {
+ 		port = (unsigned int)pipe->stream->config.source.port.port;
+-	else
+-		err = ia_css_mipi_is_source_port_valid(pipe, &port);
++	} else {
++		/* Returns true if port is valid. So, invert it */
++		err = !ia_css_mipi_is_source_port_valid(pipe, &port);
++	}
+ 
+ 	assert(port < N_CSI_PORTS);
+ 
+-	if (port >= N_CSI_PORTS || err) {
++	if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
++	    (IS_ISP2401 && err)) {
+ 		ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+ 				    "allocate_mipi_frames(%p) exit: error: port is not correct (port=%d).\n",
+ 				    pipe, port);
+@@ -552,14 +555,17 @@ free_mipi_frames(struct ia_css_pipe *pipe)
+ 			return err;
+ 		}
+ 
+-		if (!IS_ISP2401)
++		if (!IS_ISP2401) {
+ 			port = (unsigned int)pipe->stream->config.source.port.port;
+-		else
+-			err = ia_css_mipi_is_source_port_valid(pipe, &port);
++		} else {
++			/* Returns true if port is valid. So, invert it */
++			err = !ia_css_mipi_is_source_port_valid(pipe, &port);
++		}
+ 
+ 		assert(port < N_CSI_PORTS);
+ 
+-		if (port >= N_CSI_PORTS || err) {
++		if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
++		    (IS_ISP2401 && err)) {
+ 			ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
+ 					    "free_mipi_frames(%p, %d) exit: error: pipe port is not correct.\n",
+ 					    pipe, port);
+@@ -663,14 +669,17 @@ send_mipi_frames(struct ia_css_pipe *pipe)
+ 		/* TODO: AM: maybe this should be returning an error. */
+ 	}
+ 
+-	if (!IS_ISP2401)
++	if (!IS_ISP2401) {
+ 		port = (unsigned int)pipe->stream->config.source.port.port;
+-	else
+-		err = ia_css_mipi_is_source_port_valid(pipe, &port);
++	} else {
++		/* Returns true if port is valid. So, invert it */
++		err = !ia_css_mipi_is_source_port_valid(pipe, &port);
++	}
+ 
+ 	assert(port < N_CSI_PORTS);
+ 
+-	if (port >= N_CSI_PORTS || err) {
++	if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
++	    (IS_ISP2401 && err)) {
+ 		IA_CSS_ERROR("send_mipi_frames(%p) exit: invalid port specified (port=%d).\n",
+ 			     pipe, port);
+ 		return err;
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c
+index dbd3bfe3d343c..ccc0078795648 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_params.c
++++ b/drivers/staging/media/atomisp/pci/sh_css_params.c
+@@ -2431,7 +2431,7 @@ sh_css_create_isp_params(struct ia_css_stream *stream,
+ 	unsigned int i;
+ 	struct sh_css_ddr_address_map *ddr_ptrs;
+ 	struct sh_css_ddr_address_map_size *ddr_ptrs_size;
+-	int err = 0;
++	int err;
+ 	size_t params_size;
+ 	struct ia_css_isp_parameters *params =
+ 	kvmalloc(sizeof(struct ia_css_isp_parameters), GFP_KERNEL);
+@@ -2473,7 +2473,11 @@ sh_css_create_isp_params(struct ia_css_stream *stream,
+ 	succ &= (ddr_ptrs->macc_tbl != mmgr_NULL);
+ 
+ 	*isp_params_out = params;
+-	return err;
++
++	if (!succ)
++		return -ENOMEM;
++
++	return 0;
+ }
+ 
+ static bool
+diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
+index fb82b9297a2ba..0d56db8376e6a 100644
+--- a/drivers/staging/media/hantro/hantro_drv.c
++++ b/drivers/staging/media/hantro/hantro_drv.c
+@@ -960,7 +960,7 @@ static int hantro_probe(struct platform_device *pdev)
+ 	ret = clk_bulk_prepare(vpu->variant->num_clocks, vpu->clocks);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to prepare clocks\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = v4l2_device_register(&pdev->dev, &vpu->v4l2_dev);
+@@ -1016,6 +1016,7 @@ err_v4l2_unreg:
+ 	v4l2_device_unregister(&vpu->v4l2_dev);
+ err_clk_unprepare:
+ 	clk_bulk_unprepare(vpu->variant->num_clocks, vpu->clocks);
++err_pm_disable:
+ 	pm_runtime_dont_use_autosuspend(vpu->dev);
+ 	pm_runtime_disable(vpu->dev);
+ 	return ret;
+diff --git a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+index 56cf261a8e958..9cd713c02a455 100644
+--- a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
++++ b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+@@ -140,7 +140,7 @@ int hantro_h1_jpeg_enc_run(struct hantro_ctx *ctx)
+ 	return 0;
+ }
+ 
+-void hantro_jpeg_enc_done(struct hantro_ctx *ctx)
++void hantro_h1_jpeg_enc_done(struct hantro_ctx *ctx)
+ {
+ 	struct hantro_dev *vpu = ctx->dev;
+ 	u32 bytesused = vepu_read(vpu, H1_REG_STR_BUF_LIMIT) / 8;
+diff --git a/drivers/staging/media/hantro/hantro_hw.h b/drivers/staging/media/hantro/hantro_hw.h
+index 267a6d33a47b5..60d4602d33ed5 100644
+--- a/drivers/staging/media/hantro/hantro_hw.h
++++ b/drivers/staging/media/hantro/hantro_hw.h
+@@ -239,7 +239,8 @@ int hantro_h1_jpeg_enc_run(struct hantro_ctx *ctx);
+ int rockchip_vpu2_jpeg_enc_run(struct hantro_ctx *ctx);
+ int hantro_jpeg_enc_init(struct hantro_ctx *ctx);
+ void hantro_jpeg_enc_exit(struct hantro_ctx *ctx);
+-void hantro_jpeg_enc_done(struct hantro_ctx *ctx);
++void hantro_h1_jpeg_enc_done(struct hantro_ctx *ctx);
++void rockchip_vpu2_jpeg_enc_done(struct hantro_ctx *ctx);
+ 
+ dma_addr_t hantro_h264_get_ref_buf(struct hantro_ctx *ctx,
+ 				   unsigned int dpb_idx);
+diff --git a/drivers/staging/media/hantro/rockchip_vpu2_hw_jpeg_enc.c b/drivers/staging/media/hantro/rockchip_vpu2_hw_jpeg_enc.c
+index 991213ce16108..5d9ff420f0b5f 100644
+--- a/drivers/staging/media/hantro/rockchip_vpu2_hw_jpeg_enc.c
++++ b/drivers/staging/media/hantro/rockchip_vpu2_hw_jpeg_enc.c
+@@ -171,3 +171,20 @@ int rockchip_vpu2_jpeg_enc_run(struct hantro_ctx *ctx)
+ 
+ 	return 0;
+ }
++
++void rockchip_vpu2_jpeg_enc_done(struct hantro_ctx *ctx)
++{
++	struct hantro_dev *vpu = ctx->dev;
++	u32 bytesused = vepu_read(vpu, VEPU_REG_STR_BUF_LIMIT) / 8;
++	struct vb2_v4l2_buffer *dst_buf = hantro_get_dst_buf(ctx);
++
++	/*
++	 * TODO: Rework the JPEG encoder to eliminate the need
++	 * for a bounce buffer.
++	 */
++	memcpy(vb2_plane_vaddr(&dst_buf->vb2_buf, 0) +
++	       ctx->vpu_dst_fmt->header_size,
++	       ctx->jpeg_enc.bounce_buffer.cpu, bytesused);
++	vb2_set_plane_payload(&dst_buf->vb2_buf, 0,
++			      ctx->vpu_dst_fmt->header_size + bytesused);
++}
+diff --git a/drivers/staging/media/hantro/rockchip_vpu_hw.c b/drivers/staging/media/hantro/rockchip_vpu_hw.c
+index d4f52957cc534..0c22039162a00 100644
+--- a/drivers/staging/media/hantro/rockchip_vpu_hw.c
++++ b/drivers/staging/media/hantro/rockchip_vpu_hw.c
+@@ -343,7 +343,7 @@ static const struct hantro_codec_ops rk3066_vpu_codec_ops[] = {
+ 		.run = hantro_h1_jpeg_enc_run,
+ 		.reset = rockchip_vpu1_enc_reset,
+ 		.init = hantro_jpeg_enc_init,
+-		.done = hantro_jpeg_enc_done,
++		.done = hantro_h1_jpeg_enc_done,
+ 		.exit = hantro_jpeg_enc_exit,
+ 	},
+ 	[HANTRO_MODE_H264_DEC] = {
+@@ -371,7 +371,7 @@ static const struct hantro_codec_ops rk3288_vpu_codec_ops[] = {
+ 		.run = hantro_h1_jpeg_enc_run,
+ 		.reset = rockchip_vpu1_enc_reset,
+ 		.init = hantro_jpeg_enc_init,
+-		.done = hantro_jpeg_enc_done,
++		.done = hantro_h1_jpeg_enc_done,
+ 		.exit = hantro_jpeg_enc_exit,
+ 	},
+ 	[HANTRO_MODE_H264_DEC] = {
+@@ -399,6 +399,7 @@ static const struct hantro_codec_ops rk3399_vpu_codec_ops[] = {
+ 		.run = rockchip_vpu2_jpeg_enc_run,
+ 		.reset = rockchip_vpu2_enc_reset,
+ 		.init = hantro_jpeg_enc_init,
++		.done = rockchip_vpu2_jpeg_enc_done,
+ 		.exit = hantro_jpeg_enc_exit,
+ 	},
+ 	[HANTRO_MODE_H264_DEC] = {
+diff --git a/drivers/staging/rtl8192e/rtllib.h b/drivers/staging/rtl8192e/rtllib.h
+index c6f8b772335c1..c985e4ebc545a 100644
+--- a/drivers/staging/rtl8192e/rtllib.h
++++ b/drivers/staging/rtl8192e/rtllib.h
+@@ -1980,7 +1980,7 @@ void SendDisassociation(struct rtllib_device *ieee, bool deauth, u16 asRsn);
+ void rtllib_softmac_xmit(struct rtllib_txb *txb, struct rtllib_device *ieee);
+ 
+ void rtllib_start_ibss(struct rtllib_device *ieee);
+-void rtllib_softmac_init(struct rtllib_device *ieee);
++int rtllib_softmac_init(struct rtllib_device *ieee);
+ void rtllib_softmac_free(struct rtllib_device *ieee);
+ void rtllib_disassociate(struct rtllib_device *ieee);
+ void rtllib_stop_scan(struct rtllib_device *ieee);
+diff --git a/drivers/staging/rtl8192e/rtllib_module.c b/drivers/staging/rtl8192e/rtllib_module.c
+index 64d9feee1f392..f00ac94b2639b 100644
+--- a/drivers/staging/rtl8192e/rtllib_module.c
++++ b/drivers/staging/rtl8192e/rtllib_module.c
+@@ -88,7 +88,7 @@ struct net_device *alloc_rtllib(int sizeof_priv)
+ 	err = rtllib_networks_allocate(ieee);
+ 	if (err) {
+ 		pr_err("Unable to allocate beacon storage: %d\n", err);
+-		goto failed;
++		goto free_netdev;
+ 	}
+ 	rtllib_networks_initialize(ieee);
+ 
+@@ -121,11 +121,13 @@ struct net_device *alloc_rtllib(int sizeof_priv)
+ 	ieee->hwsec_active = 0;
+ 
+ 	memset(ieee->swcamtable, 0, sizeof(struct sw_cam_table) * 32);
+-	rtllib_softmac_init(ieee);
++	err = rtllib_softmac_init(ieee);
++	if (err)
++		goto free_crypt_info;
+ 
+ 	ieee->pHTInfo = kzalloc(sizeof(struct rt_hi_throughput), GFP_KERNEL);
+ 	if (!ieee->pHTInfo)
+-		return NULL;
++		goto free_softmac;
+ 
+ 	HTUpdateDefaultSetting(ieee);
+ 	HTInitializeHTInfo(ieee);
+@@ -141,8 +143,14 @@ struct net_device *alloc_rtllib(int sizeof_priv)
+ 
+ 	return dev;
+ 
+- failed:
++free_softmac:
++	rtllib_softmac_free(ieee);
++free_crypt_info:
++	lib80211_crypt_info_free(&ieee->crypt_info);
++	rtllib_networks_free(ieee);
++free_netdev:
+ 	free_netdev(dev);
++
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(alloc_rtllib);
+diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c
+index d2726d01c7573..503d33be71d99 100644
+--- a/drivers/staging/rtl8192e/rtllib_softmac.c
++++ b/drivers/staging/rtl8192e/rtllib_softmac.c
+@@ -2952,7 +2952,7 @@ void rtllib_start_protocol(struct rtllib_device *ieee)
+ 	}
+ }
+ 
+-void rtllib_softmac_init(struct rtllib_device *ieee)
++int rtllib_softmac_init(struct rtllib_device *ieee)
+ {
+ 	int i;
+ 
+@@ -2963,7 +2963,8 @@ void rtllib_softmac_init(struct rtllib_device *ieee)
+ 		ieee->seq_ctrl[i] = 0;
+ 	ieee->dot11d_info = kzalloc(sizeof(struct rt_dot11d_info), GFP_ATOMIC);
+ 	if (!ieee->dot11d_info)
+-		netdev_err(ieee->dev, "Can't alloc memory for DOT11D\n");
++		return -ENOMEM;
++
+ 	ieee->LinkDetectInfo.SlotIndex = 0;
+ 	ieee->LinkDetectInfo.SlotNum = 2;
+ 	ieee->LinkDetectInfo.NumRecvBcnInPeriod = 0;
+@@ -3029,6 +3030,7 @@ void rtllib_softmac_init(struct rtllib_device *ieee)
+ 
+ 	tasklet_setup(&ieee->ps_task, rtllib_sta_ps);
+ 
++	return 0;
+ }
+ 
+ void rtllib_softmac_free(struct rtllib_device *ieee)
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index 2b37bc408fc3d..85102d12d7169 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -98,8 +98,10 @@ void teedev_ctx_put(struct tee_context *ctx)
+ 
+ static void teedev_close_context(struct tee_context *ctx)
+ {
+-	tee_device_put(ctx->teedev);
++	struct tee_device *teedev = ctx->teedev;
++
+ 	teedev_ctx_put(ctx);
++	tee_device_put(teedev);
+ }
+ 
+ static int tee_open(struct inode *inode, struct file *filp)
+diff --git a/drivers/thermal/imx8mm_thermal.c b/drivers/thermal/imx8mm_thermal.c
+index 7442e013738f8..af666bd9e8d4d 100644
+--- a/drivers/thermal/imx8mm_thermal.c
++++ b/drivers/thermal/imx8mm_thermal.c
+@@ -21,6 +21,7 @@
+ #define TPS			0x4
+ #define TRITSR			0x20	/* TMU immediate temp */
+ 
++#define TER_ADC_PD		BIT(30)
+ #define TER_EN			BIT(31)
+ #define TRITSR_TEMP0_VAL_MASK	0xff
+ #define TRITSR_TEMP1_VAL_MASK	0xff0000
+@@ -113,6 +114,8 @@ static void imx8mm_tmu_enable(struct imx8mm_tmu *tmu, bool enable)
+ 
+ 	val = readl_relaxed(tmu->base + TER);
+ 	val = enable ? (val | TER_EN) : (val & ~TER_EN);
++	if (tmu->socdata->version == TMU_VER2)
++		val = enable ? (val & ~TER_ADC_PD) : (val | TER_ADC_PD);
+ 	writel_relaxed(val, tmu->base + TER);
+ }
+ 
+diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
+index 2c7473d86a59b..16663373b6829 100644
+--- a/drivers/thermal/imx_thermal.c
++++ b/drivers/thermal/imx_thermal.c
+@@ -15,6 +15,7 @@
+ #include <linux/regmap.h>
+ #include <linux/thermal.h>
+ #include <linux/nvmem-consumer.h>
++#include <linux/pm_runtime.h>
+ 
+ #define REG_SET		0x4
+ #define REG_CLR		0x8
+@@ -194,6 +195,7 @@ static struct thermal_soc_data thermal_imx7d_data = {
+ };
+ 
+ struct imx_thermal_data {
++	struct device *dev;
+ 	struct cpufreq_policy *policy;
+ 	struct thermal_zone_device *tz;
+ 	struct thermal_cooling_device *cdev;
+@@ -252,44 +254,15 @@ static int imx_get_temp(struct thermal_zone_device *tz, int *temp)
+ 	const struct thermal_soc_data *soc_data = data->socdata;
+ 	struct regmap *map = data->tempmon;
+ 	unsigned int n_meas;
+-	bool wait, run_measurement;
+ 	u32 val;
++	int ret;
+ 
+-	run_measurement = !data->irq_enabled;
+-	if (!run_measurement) {
+-		/* Check if a measurement is currently in progress */
+-		regmap_read(map, soc_data->temp_data, &val);
+-		wait = !(val & soc_data->temp_valid_mask);
+-	} else {
+-		/*
+-		 * Every time we measure the temperature, we will power on the
+-		 * temperature sensor, enable measurements, take a reading,
+-		 * disable measurements, power off the temperature sensor.
+-		 */
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			    soc_data->power_down_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			    soc_data->measure_temp_mask);
+-
+-		wait = true;
+-	}
+-
+-	/*
+-	 * According to the temp sensor designers, it may require up to ~17us
+-	 * to complete a measurement.
+-	 */
+-	if (wait)
+-		usleep_range(20, 50);
++	ret = pm_runtime_resume_and_get(data->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	regmap_read(map, soc_data->temp_data, &val);
+ 
+-	if (run_measurement) {
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			     soc_data->measure_temp_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			     soc_data->power_down_mask);
+-	}
+-
+ 	if ((val & soc_data->temp_valid_mask) == 0) {
+ 		dev_dbg(&tz->device, "temp measurement never finished\n");
+ 		return -EAGAIN;
+@@ -328,6 +301,8 @@ static int imx_get_temp(struct thermal_zone_device *tz, int *temp)
+ 		enable_irq(data->irq);
+ 	}
+ 
++	pm_runtime_put(data->dev);
++
+ 	return 0;
+ }
+ 
+@@ -335,24 +310,16 @@ static int imx_change_mode(struct thermal_zone_device *tz,
+ 			   enum thermal_device_mode mode)
+ {
+ 	struct imx_thermal_data *data = tz->devdata;
+-	struct regmap *map = data->tempmon;
+-	const struct thermal_soc_data *soc_data = data->socdata;
+ 
+ 	if (mode == THERMAL_DEVICE_ENABLED) {
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			     soc_data->power_down_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			     soc_data->measure_temp_mask);
++		pm_runtime_get(data->dev);
+ 
+ 		if (!data->irq_enabled) {
+ 			data->irq_enabled = true;
+ 			enable_irq(data->irq);
+ 		}
+ 	} else {
+-		regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
+-			     soc_data->measure_temp_mask);
+-		regmap_write(map, soc_data->sensor_ctrl + REG_SET,
+-			     soc_data->power_down_mask);
++		pm_runtime_put(data->dev);
+ 
+ 		if (data->irq_enabled) {
+ 			disable_irq(data->irq);
+@@ -393,6 +360,11 @@ static int imx_set_trip_temp(struct thermal_zone_device *tz, int trip,
+ 			     int temp)
+ {
+ 	struct imx_thermal_data *data = tz->devdata;
++	int ret;
++
++	ret = pm_runtime_resume_and_get(data->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* do not allow changing critical threshold */
+ 	if (trip == IMX_TRIP_CRITICAL)
+@@ -406,6 +378,8 @@ static int imx_set_trip_temp(struct thermal_zone_device *tz, int trip,
+ 
+ 	imx_set_alarm_temp(data, temp);
+ 
++	pm_runtime_put(data->dev);
++
+ 	return 0;
+ }
+ 
+@@ -681,6 +655,8 @@ static int imx_thermal_probe(struct platform_device *pdev)
+ 	if (!data)
+ 		return -ENOMEM;
+ 
++	data->dev = &pdev->dev;
++
+ 	map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node, "fsl,tempmon");
+ 	if (IS_ERR(map)) {
+ 		ret = PTR_ERR(map);
+@@ -800,6 +776,16 @@ static int imx_thermal_probe(struct platform_device *pdev)
+ 		     data->socdata->power_down_mask);
+ 	regmap_write(map, data->socdata->sensor_ctrl + REG_SET,
+ 		     data->socdata->measure_temp_mask);
++	/* After power up, we need a delay before first access can be done. */
++	usleep_range(20, 50);
++
++	/* the core was configured and enabled just before */
++	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_enable(data->dev);
++
++	ret = pm_runtime_resume_and_get(data->dev);
++	if (ret < 0)
++		goto disable_runtime_pm;
+ 
+ 	data->irq_enabled = true;
+ 	ret = thermal_zone_device_enable(data->tz);
+@@ -814,10 +800,15 @@ static int imx_thermal_probe(struct platform_device *pdev)
+ 		goto thermal_zone_unregister;
+ 	}
+ 
++	pm_runtime_put(data->dev);
++
+ 	return 0;
+ 
+ thermal_zone_unregister:
+ 	thermal_zone_device_unregister(data->tz);
++disable_runtime_pm:
++	pm_runtime_put_noidle(data->dev);
++	pm_runtime_disable(data->dev);
+ clk_disable:
+ 	clk_disable_unprepare(data->thermal_clk);
+ legacy_cleanup:
+@@ -829,13 +820,9 @@ legacy_cleanup:
+ static int imx_thermal_remove(struct platform_device *pdev)
+ {
+ 	struct imx_thermal_data *data = platform_get_drvdata(pdev);
+-	struct regmap *map = data->tempmon;
+ 
+-	/* Disable measurements */
+-	regmap_write(map, data->socdata->sensor_ctrl + REG_SET,
+-		     data->socdata->power_down_mask);
+-	if (!IS_ERR(data->thermal_clk))
+-		clk_disable_unprepare(data->thermal_clk);
++	pm_runtime_put_noidle(data->dev);
++	pm_runtime_disable(data->dev);
+ 
+ 	thermal_zone_device_unregister(data->tz);
+ 	imx_thermal_unregister_legacy_cooling(data);
+@@ -858,29 +845,79 @@ static int __maybe_unused imx_thermal_suspend(struct device *dev)
+ 	ret = thermal_zone_device_disable(data->tz);
+ 	if (ret)
+ 		return ret;
++
++	return pm_runtime_force_suspend(data->dev);
++}
++
++static int __maybe_unused imx_thermal_resume(struct device *dev)
++{
++	struct imx_thermal_data *data = dev_get_drvdata(dev);
++	int ret;
++
++	ret = pm_runtime_force_resume(data->dev);
++	if (ret)
++		return ret;
++	/* Enabled thermal sensor after resume */
++	return thermal_zone_device_enable(data->tz);
++}
++
++static int __maybe_unused imx_thermal_runtime_suspend(struct device *dev)
++{
++	struct imx_thermal_data *data = dev_get_drvdata(dev);
++	const struct thermal_soc_data *socdata = data->socdata;
++	struct regmap *map = data->tempmon;
++	int ret;
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_CLR,
++			   socdata->measure_temp_mask);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_SET,
++			   socdata->power_down_mask);
++	if (ret)
++		return ret;
++
+ 	clk_disable_unprepare(data->thermal_clk);
+ 
+ 	return 0;
+ }
+ 
+-static int __maybe_unused imx_thermal_resume(struct device *dev)
++static int __maybe_unused imx_thermal_runtime_resume(struct device *dev)
+ {
+ 	struct imx_thermal_data *data = dev_get_drvdata(dev);
++	const struct thermal_soc_data *socdata = data->socdata;
++	struct regmap *map = data->tempmon;
+ 	int ret;
+ 
+ 	ret = clk_prepare_enable(data->thermal_clk);
+ 	if (ret)
+ 		return ret;
+-	/* Enabled thermal sensor after resume */
+-	ret = thermal_zone_device_enable(data->tz);
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_CLR,
++			   socdata->power_down_mask);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(map, socdata->sensor_ctrl + REG_SET,
++			   socdata->measure_temp_mask);
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * According to the temp sensor designers, it may require up to ~17us
++	 * to complete a measurement.
++	 */
++	usleep_range(20, 50);
++
+ 	return 0;
+ }
+ 
+-static SIMPLE_DEV_PM_OPS(imx_thermal_pm_ops,
+-			 imx_thermal_suspend, imx_thermal_resume);
++static const struct dev_pm_ops imx_thermal_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(imx_thermal_suspend, imx_thermal_resume)
++	SET_RUNTIME_PM_OPS(imx_thermal_runtime_suspend,
++			   imx_thermal_runtime_resume, NULL)
++};
+ 
+ static struct platform_driver imx_thermal = {
+ 	.driver = {
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.h b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.h
+index be27f633e40ac..9b2a64ef55d02 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.h
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.h
+@@ -80,7 +80,8 @@ void proc_thermal_rfim_remove(struct pci_dev *pdev);
+ int proc_thermal_mbox_add(struct pci_dev *pdev, struct proc_thermal_device *proc_priv);
+ void proc_thermal_mbox_remove(struct pci_dev *pdev);
+ 
+-int processor_thermal_send_mbox_cmd(struct pci_dev *pdev, u16 cmd_id, u32 cmd_data, u64 *cmd_resp);
++int processor_thermal_send_mbox_read_cmd(struct pci_dev *pdev, u16 id, u64 *resp);
++int processor_thermal_send_mbox_write_cmd(struct pci_dev *pdev, u16 id, u32 data);
+ int proc_thermal_add(struct device *dev, struct proc_thermal_device *priv);
+ void proc_thermal_remove(struct proc_thermal_device *proc_priv);
+ int proc_thermal_suspend(struct device *dev);
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_mbox.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_mbox.c
+index 01008ae00e7f7..0b89a4340ff4e 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_mbox.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_mbox.c
+@@ -24,19 +24,15 @@
+ 
+ static DEFINE_MUTEX(mbox_lock);
+ 
+-static int send_mbox_cmd(struct pci_dev *pdev, u16 cmd_id, u32 cmd_data, u64 *cmd_resp)
++static int wait_for_mbox_ready(struct proc_thermal_device *proc_priv)
+ {
+-	struct proc_thermal_device *proc_priv;
+ 	u32 retries, data;
+ 	int ret;
+ 
+-	mutex_lock(&mbox_lock);
+-	proc_priv = pci_get_drvdata(pdev);
+-
+ 	/* Poll for rb bit == 0 */
+ 	retries = MBOX_RETRY_COUNT;
+ 	do {
+-		data = readl((void __iomem *) (proc_priv->mmio_base + MBOX_OFFSET_INTERFACE));
++		data = readl(proc_priv->mmio_base + MBOX_OFFSET_INTERFACE);
+ 		if (data & BIT_ULL(MBOX_BUSY_BIT)) {
+ 			ret = -EBUSY;
+ 			continue;
+@@ -45,53 +41,78 @@ static int send_mbox_cmd(struct pci_dev *pdev, u16 cmd_id, u32 cmd_data, u64 *cm
+ 		break;
+ 	} while (--retries);
+ 
++	return ret;
++}
++
++static int send_mbox_write_cmd(struct pci_dev *pdev, u16 id, u32 data)
++{
++	struct proc_thermal_device *proc_priv;
++	u32 reg_data;
++	int ret;
++
++	proc_priv = pci_get_drvdata(pdev);
++
++	mutex_lock(&mbox_lock);
++
++	ret = wait_for_mbox_ready(proc_priv);
+ 	if (ret)
+ 		goto unlock_mbox;
+ 
+-	if (cmd_id == MBOX_CMD_WORKLOAD_TYPE_WRITE)
+-		writel(cmd_data, (void __iomem *) ((proc_priv->mmio_base + MBOX_OFFSET_DATA)));
+-
++	writel(data, (proc_priv->mmio_base + MBOX_OFFSET_DATA));
+ 	/* Write command register */
+-	data = BIT_ULL(MBOX_BUSY_BIT) | cmd_id;
+-	writel(data, (void __iomem *) ((proc_priv->mmio_base + MBOX_OFFSET_INTERFACE)));
++	reg_data = BIT_ULL(MBOX_BUSY_BIT) | id;
++	writel(reg_data, (proc_priv->mmio_base + MBOX_OFFSET_INTERFACE));
+ 
+-	/* Poll for rb bit == 0 */
+-	retries = MBOX_RETRY_COUNT;
+-	do {
+-		data = readl((void __iomem *) (proc_priv->mmio_base + MBOX_OFFSET_INTERFACE));
+-		if (data & BIT_ULL(MBOX_BUSY_BIT)) {
+-			ret = -EBUSY;
+-			continue;
+-		}
++	ret = wait_for_mbox_ready(proc_priv);
+ 
+-		if (data) {
+-			ret = -ENXIO;
+-			goto unlock_mbox;
+-		}
++unlock_mbox:
++	mutex_unlock(&mbox_lock);
++	return ret;
++}
+ 
+-		ret = 0;
++static int send_mbox_read_cmd(struct pci_dev *pdev, u16 id, u64 *resp)
++{
++	struct proc_thermal_device *proc_priv;
++	u32 reg_data;
++	int ret;
+ 
+-		if (!cmd_resp)
+-			break;
++	proc_priv = pci_get_drvdata(pdev);
+ 
+-		if (cmd_id == MBOX_CMD_WORKLOAD_TYPE_READ)
+-			*cmd_resp = readl((void __iomem *) (proc_priv->mmio_base + MBOX_OFFSET_DATA));
+-		else
+-			*cmd_resp = readq((void __iomem *) (proc_priv->mmio_base + MBOX_OFFSET_DATA));
++	mutex_lock(&mbox_lock);
+ 
+-		break;
+-	} while (--retries);
++	ret = wait_for_mbox_ready(proc_priv);
++	if (ret)
++		goto unlock_mbox;
++
++	/* Write command register */
++	reg_data = BIT_ULL(MBOX_BUSY_BIT) | id;
++	writel(reg_data, (proc_priv->mmio_base + MBOX_OFFSET_INTERFACE));
++
++	ret = wait_for_mbox_ready(proc_priv);
++	if (ret)
++		goto unlock_mbox;
++
++	if (id == MBOX_CMD_WORKLOAD_TYPE_READ)
++		*resp = readl(proc_priv->mmio_base + MBOX_OFFSET_DATA);
++	else
++		*resp = readq(proc_priv->mmio_base + MBOX_OFFSET_DATA);
+ 
+ unlock_mbox:
+ 	mutex_unlock(&mbox_lock);
+ 	return ret;
+ }
+ 
+-int processor_thermal_send_mbox_cmd(struct pci_dev *pdev, u16 cmd_id, u32 cmd_data, u64 *cmd_resp)
++int processor_thermal_send_mbox_read_cmd(struct pci_dev *pdev, u16 id, u64 *resp)
+ {
+-	return send_mbox_cmd(pdev, cmd_id, cmd_data, cmd_resp);
++	return send_mbox_read_cmd(pdev, id, resp);
+ }
+-EXPORT_SYMBOL_GPL(processor_thermal_send_mbox_cmd);
++EXPORT_SYMBOL_NS_GPL(processor_thermal_send_mbox_read_cmd, INT340X_THERMAL);
++
++int processor_thermal_send_mbox_write_cmd(struct pci_dev *pdev, u16 id, u32 data)
++{
++	return send_mbox_write_cmd(pdev, id, data);
++}
++EXPORT_SYMBOL_NS_GPL(processor_thermal_send_mbox_write_cmd, INT340X_THERMAL);
+ 
+ /* List of workload types */
+ static const char * const workload_types[] = {
+@@ -104,7 +125,6 @@ static const char * const workload_types[] = {
+ 	NULL
+ };
+ 
+-
+ static ssize_t workload_available_types_show(struct device *dev,
+ 					       struct device_attribute *attr,
+ 					       char *buf)
+@@ -146,7 +166,7 @@ static ssize_t workload_type_store(struct device *dev,
+ 
+ 	data |= ret;
+ 
+-	ret = send_mbox_cmd(pdev, MBOX_CMD_WORKLOAD_TYPE_WRITE, data, NULL);
++	ret = send_mbox_write_cmd(pdev, MBOX_CMD_WORKLOAD_TYPE_WRITE, data);
+ 	if (ret)
+ 		return false;
+ 
+@@ -161,7 +181,7 @@ static ssize_t workload_type_show(struct device *dev,
+ 	u64 cmd_resp;
+ 	int ret;
+ 
+-	ret = send_mbox_cmd(pdev, MBOX_CMD_WORKLOAD_TYPE_READ, 0, &cmd_resp);
++	ret = send_mbox_read_cmd(pdev, MBOX_CMD_WORKLOAD_TYPE_READ, &cmd_resp);
+ 	if (ret)
+ 		return false;
+ 
+@@ -186,8 +206,6 @@ static const struct attribute_group workload_req_attribute_group = {
+ 	.name = "workload_request"
+ };
+ 
+-
+-
+ static bool workload_req_created;
+ 
+ int proc_thermal_mbox_add(struct pci_dev *pdev, struct proc_thermal_device *proc_priv)
+@@ -196,7 +214,7 @@ int proc_thermal_mbox_add(struct pci_dev *pdev, struct proc_thermal_device *proc
+ 	int ret;
+ 
+ 	/* Check if there is a mailbox support, if fails return success */
+-	ret = send_mbox_cmd(pdev, MBOX_CMD_WORKLOAD_TYPE_READ, 0, &cmd_resp);
++	ret = send_mbox_read_cmd(pdev, MBOX_CMD_WORKLOAD_TYPE_READ, &cmd_resp);
+ 	if (ret)
+ 		return 0;
+ 
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
+index e693ec8234fbc..8c42e76620333 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
+@@ -9,6 +9,8 @@
+ #include <linux/pci.h>
+ #include "processor_thermal_device.h"
+ 
++MODULE_IMPORT_NS(INT340X_THERMAL);
++
+ struct mmio_reg {
+ 	int read_only;
+ 	u32 offset;
+@@ -194,8 +196,7 @@ static ssize_t rfi_restriction_store(struct device *dev,
+ 				     struct device_attribute *attr,
+ 				     const char *buf, size_t count)
+ {
+-	u16 cmd_id = 0x0008;
+-	u64 cmd_resp;
++	u16 id = 0x0008;
+ 	u32 input;
+ 	int ret;
+ 
+@@ -203,7 +204,7 @@ static ssize_t rfi_restriction_store(struct device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = processor_thermal_send_mbox_cmd(to_pci_dev(dev), cmd_id, input, &cmd_resp);
++	ret = processor_thermal_send_mbox_write_cmd(to_pci_dev(dev), id, input);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -214,30 +215,30 @@ static ssize_t rfi_restriction_show(struct device *dev,
+ 				    struct device_attribute *attr,
+ 				    char *buf)
+ {
+-	u16 cmd_id = 0x0007;
+-	u64 cmd_resp;
++	u16 id = 0x0007;
++	u64 resp;
+ 	int ret;
+ 
+-	ret = processor_thermal_send_mbox_cmd(to_pci_dev(dev), cmd_id, 0, &cmd_resp);
++	ret = processor_thermal_send_mbox_read_cmd(to_pci_dev(dev), id, &resp);
+ 	if (ret)
+ 		return ret;
+ 
+-	return sprintf(buf, "%llu\n", cmd_resp);
++	return sprintf(buf, "%llu\n", resp);
+ }
+ 
+ static ssize_t ddr_data_rate_show(struct device *dev,
+ 				  struct device_attribute *attr,
+ 				  char *buf)
+ {
+-	u16 cmd_id = 0x0107;
+-	u64 cmd_resp;
++	u16 id = 0x0107;
++	u64 resp;
+ 	int ret;
+ 
+-	ret = processor_thermal_send_mbox_cmd(to_pci_dev(dev), cmd_id, 0, &cmd_resp);
++	ret = processor_thermal_send_mbox_read_cmd(to_pci_dev(dev), id, &resp);
+ 	if (ret)
+ 		return ret;
+ 
+-	return sprintf(buf, "%llu\n", cmd_resp);
++	return sprintf(buf, "%llu\n", resp);
+ }
+ 
+ static DEVICE_ATTR_RW(rfi_restriction);
+diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
+index b67e72d5644b3..7c9597a339295 100644
+--- a/drivers/thunderbolt/acpi.c
++++ b/drivers/thunderbolt/acpi.c
+@@ -7,6 +7,7 @@
+  */
+ 
+ #include <linux/acpi.h>
++#include <linux/pm_runtime.h>
+ 
+ #include "tb.h"
+ 
+@@ -74,8 +75,18 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
+ 		 pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM))) {
+ 		const struct device_link *link;
+ 
++		/*
++		 * Make them both active first to make sure the NHI does
++		 * not runtime suspend before the consumer. The
++		 * pm_runtime_put() below then allows the consumer to
++		 * runtime suspend again (which then allows NHI runtime
++		 * suspend too now that the device link is established).
++		 */
++		pm_runtime_get_sync(&pdev->dev);
++
+ 		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
+ 				       DL_FLAG_AUTOREMOVE_SUPPLIER |
++				       DL_FLAG_RPM_ACTIVE |
+ 				       DL_FLAG_PM_RUNTIME);
+ 		if (link) {
+ 			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
+@@ -84,6 +95,8 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
+ 			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
+ 				 dev_name(&pdev->dev));
+ 		}
++
++		pm_runtime_put(&pdev->dev);
+ 	}
+ 
+ out_put:
+diff --git a/drivers/tty/mxser.c b/drivers/tty/mxser.c
+index 93a95a135a71a..39458b42df7b0 100644
+--- a/drivers/tty/mxser.c
++++ b/drivers/tty/mxser.c
+@@ -251,8 +251,6 @@ struct mxser_port {
+ 	u8 MCR;			/* Modem control register */
+ 	u8 FCR;			/* FIFO control register */
+ 
+-	bool ldisc_stop_rx;
+-
+ 	struct async_icount icount; /* kernel counters for 4 input interrupts */
+ 	unsigned int timeout;
+ 
+@@ -262,7 +260,6 @@ struct mxser_port {
+ 	unsigned int xmit_head;
+ 	unsigned int xmit_tail;
+ 	unsigned int xmit_cnt;
+-	int closing;
+ 
+ 	spinlock_t slock;
+ };
+@@ -918,7 +915,6 @@ static void mxser_close(struct tty_struct *tty, struct file *filp)
+ 		return;
+ 	if (tty_port_close_start(port, tty, filp) == 0)
+ 		return;
+-	info->closing = 1;
+ 	mutex_lock(&port->mutex);
+ 	mxser_close_port(port);
+ 	mxser_flush_buffer(tty);
+@@ -927,7 +923,6 @@ static void mxser_close(struct tty_struct *tty, struct file *filp)
+ 	mxser_shutdown_port(port);
+ 	tty_port_set_initialized(port, 0);
+ 	mutex_unlock(&port->mutex);
+-	info->closing = 0;
+ 	/* Right now the tty_port set is done outside of the close_end helper
+ 	   as we don't yet have everyone using refcounts */	
+ 	tty_port_close_end(port, tty);
+@@ -1326,11 +1321,14 @@ static int mxser_get_icount(struct tty_struct *tty,
+ 	return 0;
+ }
+ 
+-static void mxser_stoprx(struct tty_struct *tty)
++/*
++ * This routine is called by the upper-layer tty layer to signal that
++ * incoming characters should be throttled.
++ */
++static void mxser_throttle(struct tty_struct *tty)
+ {
+ 	struct mxser_port *info = tty->driver_data;
+ 
+-	info->ldisc_stop_rx = true;
+ 	if (I_IXOFF(tty)) {
+ 		if (info->board->must_hwid) {
+ 			info->IER &= ~MOXA_MUST_RECV_ISR;
+@@ -1349,21 +1347,11 @@ static void mxser_stoprx(struct tty_struct *tty)
+ 	}
+ }
+ 
+-/*
+- * This routine is called by the upper-layer tty layer to signal that
+- * incoming characters should be throttled.
+- */
+-static void mxser_throttle(struct tty_struct *tty)
+-{
+-	mxser_stoprx(tty);
+-}
+-
+ static void mxser_unthrottle(struct tty_struct *tty)
+ {
+ 	struct mxser_port *info = tty->driver_data;
+ 
+ 	/* startrx */
+-	info->ldisc_stop_rx = false;
+ 	if (I_IXOFF(tty)) {
+ 		if (info->x_char)
+ 			info->x_char = 0;
+@@ -1546,12 +1534,10 @@ static bool mxser_receive_chars_new(struct tty_struct *tty,
+ 	if (hwid == MOXA_MUST_MU150_HWID)
+ 		gdl &= MOXA_MUST_GDL_MASK;
+ 
+-	if (gdl >= tty->receive_room && !port->ldisc_stop_rx)
+-		mxser_stoprx(tty);
+-
+ 	while (gdl--) {
+ 		u8 ch = inb(port->ioaddr + UART_RX);
+-		tty_insert_flip_char(&port->port, ch, 0);
++		if (!tty_insert_flip_char(&port->port, ch, 0))
++			port->icount.buf_overrun++;
+ 	}
+ 
+ 	return true;
+@@ -1561,10 +1547,8 @@ static u8 mxser_receive_chars_old(struct tty_struct *tty,
+ 		                struct mxser_port *port, u8 status)
+ {
+ 	enum mxser_must_hwid hwid = port->board->must_hwid;
+-	int recv_room = tty->receive_room;
+ 	int ignored = 0;
+ 	int max = 256;
+-	int cnt = 0;
+ 	u8 ch;
+ 
+ 	do {
+@@ -1599,14 +1583,10 @@ static u8 mxser_receive_chars_old(struct tty_struct *tty,
+ 					port->icount.overrun++;
+ 				}
+ 			}
+-			tty_insert_flip_char(&port->port, ch, flag);
+-			cnt++;
+-			if (cnt >= recv_room) {
+-				if (!port->ldisc_stop_rx)
+-					mxser_stoprx(tty);
++			if (!tty_insert_flip_char(&port->port, ch, flag)) {
++				port->icount.buf_overrun++;
+ 				break;
+ 			}
+-
+ 		}
+ 
+ 		if (hwid)
+@@ -1621,9 +1601,6 @@ static u8 mxser_receive_chars_old(struct tty_struct *tty,
+ static u8 mxser_receive_chars(struct tty_struct *tty,
+ 		struct mxser_port *port, u8 status)
+ {
+-	if (tty->receive_room == 0 && !port->ldisc_stop_rx)
+-		mxser_stoprx(tty);
+-
+ 	if (!mxser_receive_chars_new(tty, port, status))
+ 		status = mxser_receive_chars_old(tty, port, status);
+ 
+@@ -1683,7 +1660,7 @@ static bool mxser_port_isr(struct mxser_port *port)
+ 
+ 	iir &= MOXA_MUST_IIR_MASK;
+ 	tty = tty_port_tty_get(&port->port);
+-	if (!tty || port->closing || !tty_port_initialized(&port->port)) {
++	if (!tty) {
+ 		status = inb(port->ioaddr + UART_LSR);
+ 		outb(port->FCR | UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT,
+ 				port->ioaddr + UART_FCR);
+@@ -1836,7 +1813,6 @@ static void mxser_initbrd(struct mxser_board *brd, bool high_baud)
+ 		tty_port_init(&info->port);
+ 		info->port.ops = &mxser_port_ops;
+ 		info->board = brd;
+-		info->ldisc_stop_rx = false;
+ 
+ 		/* Enhance mode enabled here */
+ 		if (brd->must_hwid != MOXA_OTHER_UART)
+diff --git a/drivers/tty/serial/8250/8250_bcm7271.c b/drivers/tty/serial/8250/8250_bcm7271.c
+index 5163d60756b73..0877cf24f7de0 100644
+--- a/drivers/tty/serial/8250/8250_bcm7271.c
++++ b/drivers/tty/serial/8250/8250_bcm7271.c
+@@ -1076,14 +1076,18 @@ static int brcmuart_probe(struct platform_device *pdev)
+ 		priv->rx_bufs = dma_alloc_coherent(dev,
+ 						   priv->rx_size,
+ 						   &priv->rx_addr, GFP_KERNEL);
+-		if (!priv->rx_bufs)
++		if (!priv->rx_bufs) {
++			ret = -EINVAL;
+ 			goto err;
++		}
+ 		priv->tx_size = UART_XMIT_SIZE;
+ 		priv->tx_buf = dma_alloc_coherent(dev,
+ 						  priv->tx_size,
+ 						  &priv->tx_addr, GFP_KERNEL);
+-		if (!priv->tx_buf)
++		if (!priv->tx_buf) {
++			ret = -EINVAL;
+ 			goto err;
++		}
+ 	}
+ 
+ 	ret = serial8250_register_8250_port(&up);
+@@ -1097,6 +1101,7 @@ static int brcmuart_probe(struct platform_device *pdev)
+ 	if (priv->dma_enabled) {
+ 		dma_irq = platform_get_irq_byname(pdev,  "dma");
+ 		if (dma_irq < 0) {
++			ret = dma_irq;
+ 			dev_err(dev, "no IRQ resource info\n");
+ 			goto err1;
+ 		}
+@@ -1116,7 +1121,7 @@ err1:
+ err:
+ 	brcmuart_free_bufs(dev, priv);
+ 	brcmuart_arbitration(priv, 0);
+-	return -ENODEV;
++	return ret;
+ }
+ 
+ static int brcmuart_remove(struct platform_device *pdev)
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 53f57c3b9f42c..1769808031c52 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -414,6 +414,8 @@ static void dw8250_quirks(struct uart_port *p, struct dw8250_data *data)
+ 
+ 		if (of_device_is_compatible(np, "marvell,armada-38x-uart"))
+ 			p->serial_out = dw8250_serial_out38x;
++		if (of_device_is_compatible(np, "starfive,jh7100-uart"))
++			p->set_termios = dw8250_do_set_termios;
+ 
+ 	} else if (acpi_dev_present("APMC0D08", NULL, -1)) {
+ 		p->iotype = UPIO_MEM32;
+@@ -696,6 +698,7 @@ static const struct of_device_id dw8250_of_match[] = {
+ 	{ .compatible = "cavium,octeon-3860-uart" },
+ 	{ .compatible = "marvell,armada-38x-uart" },
+ 	{ .compatible = "renesas,rzn1-uart" },
++	{ .compatible = "starfive,jh7100-uart" },
+ 	{ /* Sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, dw8250_of_match);
+diff --git a/drivers/tty/serial/amba-pl010.c b/drivers/tty/serial/amba-pl010.c
+index e744b953ca346..47654073123d6 100644
+--- a/drivers/tty/serial/amba-pl010.c
++++ b/drivers/tty/serial/amba-pl010.c
+@@ -446,14 +446,11 @@ pl010_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	if ((termios->c_cflag & CREAD) == 0)
+ 		uap->port.ignore_status_mask |= UART_DUMMY_RSR_RX;
+ 
+-	/* first, disable everything */
+ 	old_cr = readb(uap->port.membase + UART010_CR) & ~UART010_CR_MSIE;
+ 
+ 	if (UART_ENABLE_MS(port, termios->c_cflag))
+ 		old_cr |= UART010_CR_MSIE;
+ 
+-	writel(0, uap->port.membase + UART010_CR);
+-
+ 	/* Set baud rate */
+ 	quot -= 1;
+ 	writel((quot & 0xf00) >> 8, uap->port.membase + UART010_LCRM);
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 52518a606c06a..6ec34260d6b18 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -2105,9 +2105,7 @@ pl011_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	if (port->rs485.flags & SER_RS485_ENABLED)
+ 		termios->c_cflag &= ~CRTSCTS;
+ 
+-	/* first, disable everything */
+ 	old_cr = pl011_read(uap, REG_CR);
+-	pl011_write(0, uap, REG_CR);
+ 
+ 	if (termios->c_cflag & CRTSCTS) {
+ 		if (old_cr & UART011_CR_RTS)
+@@ -2183,32 +2181,13 @@ static const char *pl011_type(struct uart_port *port)
+ 	return uap->port.type == PORT_AMBA ? uap->type : NULL;
+ }
+ 
+-/*
+- * Release the memory region(s) being used by 'port'
+- */
+-static void pl011_release_port(struct uart_port *port)
+-{
+-	release_mem_region(port->mapbase, SZ_4K);
+-}
+-
+-/*
+- * Request the memory region(s) being used by 'port'
+- */
+-static int pl011_request_port(struct uart_port *port)
+-{
+-	return request_mem_region(port->mapbase, SZ_4K, "uart-pl011")
+-			!= NULL ? 0 : -EBUSY;
+-}
+-
+ /*
+  * Configure/autoconfigure the port.
+  */
+ static void pl011_config_port(struct uart_port *port, int flags)
+ {
+-	if (flags & UART_CONFIG_TYPE) {
++	if (flags & UART_CONFIG_TYPE)
+ 		port->type = PORT_AMBA;
+-		pl011_request_port(port);
+-	}
+ }
+ 
+ /*
+@@ -2223,6 +2202,8 @@ static int pl011_verify_port(struct uart_port *port, struct serial_struct *ser)
+ 		ret = -EINVAL;
+ 	if (ser->baud_base < 9600)
+ 		ret = -EINVAL;
++	if (port->mapbase != (unsigned long) ser->iomem_base)
++		ret = -EINVAL;
+ 	return ret;
+ }
+ 
+@@ -2275,8 +2256,6 @@ static const struct uart_ops amba_pl011_pops = {
+ 	.flush_buffer	= pl011_dma_flush_buffer,
+ 	.set_termios	= pl011_set_termios,
+ 	.type		= pl011_type,
+-	.release_port	= pl011_release_port,
+-	.request_port	= pl011_request_port,
+ 	.config_port	= pl011_config_port,
+ 	.verify_port	= pl011_verify_port,
+ #ifdef CONFIG_CONSOLE_POLL
+@@ -2306,8 +2285,6 @@ static const struct uart_ops sbsa_uart_pops = {
+ 	.shutdown	= sbsa_uart_shutdown,
+ 	.set_termios	= sbsa_uart_set_termios,
+ 	.type		= pl011_type,
+-	.release_port	= pl011_release_port,
+-	.request_port	= pl011_request_port,
+ 	.config_port	= pl011_config_port,
+ 	.verify_port	= pl011_verify_port,
+ #ifdef CONFIG_CONSOLE_POLL
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 2c99a47a25357..269b4500e9e78 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -1004,6 +1004,13 @@ static void atmel_tx_dma(struct uart_port *port)
+ 		desc->callback = atmel_complete_tx_dma;
+ 		desc->callback_param = atmel_port;
+ 		atmel_port->cookie_tx = dmaengine_submit(desc);
++		if (dma_submit_error(atmel_port->cookie_tx)) {
++			dev_err(port->dev, "dma_submit_error %d\n",
++				atmel_port->cookie_tx);
++			return;
++		}
++
++		dma_async_issue_pending(chan);
+ 	}
+ 
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+@@ -1258,6 +1265,13 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
+ 	desc->callback_param = port;
+ 	atmel_port->desc_rx = desc;
+ 	atmel_port->cookie_rx = dmaengine_submit(desc);
++	if (dma_submit_error(atmel_port->cookie_rx)) {
++		dev_err(port->dev, "dma_submit_error %d\n",
++			atmel_port->cookie_rx);
++		goto chan_err;
++	}
++
++	dma_async_issue_pending(atmel_port->chan_rx);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 90f82e6c54e46..6f7f382d0b1fa 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -486,18 +486,21 @@ static void imx_uart_stop_tx(struct uart_port *port)
+ static void imx_uart_stop_rx(struct uart_port *port)
+ {
+ 	struct imx_port *sport = (struct imx_port *)port;
+-	u32 ucr1, ucr2;
++	u32 ucr1, ucr2, ucr4;
+ 
+ 	ucr1 = imx_uart_readl(sport, UCR1);
+ 	ucr2 = imx_uart_readl(sport, UCR2);
++	ucr4 = imx_uart_readl(sport, UCR4);
+ 
+ 	if (sport->dma_is_enabled) {
+ 		ucr1 &= ~(UCR1_RXDMAEN | UCR1_ATDMAEN);
+ 	} else {
+ 		ucr1 &= ~UCR1_RRDYEN;
+ 		ucr2 &= ~UCR2_ATEN;
++		ucr4 &= ~UCR4_OREN;
+ 	}
+ 	imx_uart_writel(sport, ucr1, UCR1);
++	imx_uart_writel(sport, ucr4, UCR4);
+ 
+ 	ucr2 &= ~UCR2_RXEN;
+ 	imx_uart_writel(sport, ucr2, UCR2);
+@@ -1544,7 +1547,7 @@ static void imx_uart_shutdown(struct uart_port *port)
+ 	imx_uart_writel(sport, ucr1, UCR1);
+ 
+ 	ucr4 = imx_uart_readl(sport, UCR4);
+-	ucr4 &= ~(UCR4_OREN | UCR4_TCEN);
++	ucr4 &= ~UCR4_TCEN;
+ 	imx_uart_writel(sport, ucr4, UCR4);
+ 
+ 	spin_unlock_irqrestore(&sport->port.lock, flags);
+diff --git a/drivers/tty/serial/liteuart.c b/drivers/tty/serial/liteuart.c
+index 2941659e52747..7f74bf7bdcff8 100644
+--- a/drivers/tty/serial/liteuart.c
++++ b/drivers/tty/serial/liteuart.c
+@@ -436,4 +436,4 @@ module_exit(liteuart_exit);
+ MODULE_AUTHOR("Antmicro <www.antmicro.com>");
+ MODULE_DESCRIPTION("LiteUART serial driver");
+ MODULE_LICENSE("GPL v2");
+-MODULE_ALIAS("platform: liteuart");
++MODULE_ALIAS("platform:liteuart");
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 61e3dd0222af1..dc6129ddef85d 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -162,7 +162,7 @@ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
+ 	int RTS_after_send = !!(uport->rs485.flags & SER_RS485_RTS_AFTER_SEND);
+ 
+ 	if (raise) {
+-		if (rs485_on && !RTS_after_send) {
++		if (rs485_on && RTS_after_send) {
+ 			uart_set_mctrl(uport, TIOCM_DTR);
+ 			uart_clear_mctrl(uport, TIOCM_RTS);
+ 		} else {
+@@ -171,7 +171,7 @@ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
+ 	} else {
+ 		unsigned int clear = TIOCM_DTR;
+ 
+-		clear |= (!rs485_on || !RTS_after_send) ? TIOCM_RTS : 0;
++		clear |= (!rs485_on || RTS_after_send) ? TIOCM_RTS : 0;
+ 		uart_clear_mctrl(uport, clear);
+ 	}
+ }
+@@ -2393,7 +2393,8 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 		 * We probably don't need a spinlock around this, but
+ 		 */
+ 		spin_lock_irqsave(&port->lock, flags);
+-		port->ops->set_mctrl(port, port->mctrl & TIOCM_DTR);
++		port->mctrl &= TIOCM_DTR;
++		port->ops->set_mctrl(port, port->mctrl);
+ 		spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 		/*
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 3244e7f6818ca..6cfc3bec67492 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -883,6 +883,11 @@ static void stm32_usart_shutdown(struct uart_port *port)
+ 	u32 val, isr;
+ 	int ret;
+ 
++	if (stm32_port->tx_dma_busy) {
++		dmaengine_terminate_async(stm32_port->tx_ch);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
++	}
++
+ 	/* Disable modem control interrupts */
+ 	stm32_usart_disable_ms(port);
+ 
+@@ -1570,7 +1575,6 @@ static int stm32_usart_serial_remove(struct platform_device *pdev)
+ 	writel_relaxed(cr3, port->membase + ofs->cr3);
+ 
+ 	if (stm32_port->tx_ch) {
+-		dmaengine_terminate_async(stm32_port->tx_ch);
+ 		stm32_usart_of_dma_tx_remove(stm32_port, pdev);
+ 		dma_release_channel(stm32_port->tx_ch);
+ 	}
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index d3d9566e5dbdf..e1fa52d31474f 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -626,7 +626,7 @@ static struct uart_driver ulite_uart_driver = {
+  *
+  * Returns: 0 on success, <0 otherwise
+  */
+-static int ulite_assign(struct device *dev, int id, u32 base, int irq,
++static int ulite_assign(struct device *dev, int id, phys_addr_t base, int irq,
+ 			struct uartlite_data *pdata)
+ {
+ 	struct uart_port *port;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 3bc4a86c3d0a5..ac6c5ccfe1cb7 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1110,7 +1110,10 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 		} else {
+ 			hub_power_on(hub, true);
+ 		}
+-	}
++	/* Give some time on remote wakeup to let links to transit to U0 */
++	} else if (hub_is_superspeed(hub->hdev))
++		msleep(20);
++
+  init2:
+ 
+ 	/*
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index ab8d7dad9f567..43cf49c4e5e59 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -4974,7 +4974,18 @@ int dwc2_gadget_init(struct dwc2_hsotg *hsotg)
+ 		hsotg->params.g_np_tx_fifo_size);
+ 	dev_dbg(dev, "RXFIFO size: %d\n", hsotg->params.g_rx_fifo_size);
+ 
+-	hsotg->gadget.max_speed = USB_SPEED_HIGH;
++	switch (hsotg->params.speed) {
++	case DWC2_SPEED_PARAM_LOW:
++		hsotg->gadget.max_speed = USB_SPEED_LOW;
++		break;
++	case DWC2_SPEED_PARAM_FULL:
++		hsotg->gadget.max_speed = USB_SPEED_FULL;
++		break;
++	default:
++		hsotg->gadget.max_speed = USB_SPEED_HIGH;
++		break;
++	}
++
+ 	hsotg->gadget.ops = &dwc2_hsotg_gadget_ops;
+ 	hsotg->gadget.name = dev_name(dev);
+ 	hsotg->gadget.otg_caps = &hsotg->params.otg_caps;
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 13c779a28e94f..f63a27d11fac8 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -4399,11 +4399,12 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
+ 		 * If not hibernation nor partial power down are supported,
+ 		 * clock gating is used to save power.
+ 		 */
+-		if (!hsotg->params.no_clock_gating)
++		if (!hsotg->params.no_clock_gating) {
+ 			dwc2_host_enter_clock_gating(hsotg);
+ 
+-		/* After entering suspend, hardware is not accessible */
+-		clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
++			/* After entering suspend, hardware is not accessible */
++			clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
++		}
+ 		break;
+ 	default:
+ 		goto skip_power_saving;
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index d0f9b7c296b0d..bd814df3bf8b8 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -755,16 +755,16 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ 
+ 	ret = dwc3_meson_g12a_get_phys(priv);
+ 	if (ret)
+-		goto err_disable_clks;
++		goto err_rearm;
+ 
+ 	ret = priv->drvdata->setup_regmaps(priv, base);
+ 	if (ret)
+-		goto err_disable_clks;
++		goto err_rearm;
+ 
+ 	if (priv->vbus) {
+ 		ret = regulator_enable(priv->vbus);
+ 		if (ret)
+-			goto err_disable_clks;
++			goto err_rearm;
+ 	}
+ 
+ 	/* Get dr_mode */
+@@ -825,6 +825,9 @@ err_disable_regulator:
+ 	if (priv->vbus)
+ 		regulator_disable(priv->vbus);
+ 
++err_rearm:
++	reset_control_rearm(priv->reset);
++
+ err_disable_clks:
+ 	clk_bulk_disable_unprepare(priv->drvdata->num_clks,
+ 				   priv->drvdata->clks);
+@@ -852,6 +855,8 @@ static int dwc3_meson_g12a_remove(struct platform_device *pdev)
+ 	pm_runtime_put_noidle(dev);
+ 	pm_runtime_set_suspended(dev);
+ 
++	reset_control_rearm(priv->reset);
++
+ 	clk_bulk_disable_unprepare(priv->drvdata->num_clks,
+ 				   priv->drvdata->clks);
+ 
+@@ -892,7 +897,7 @@ static int __maybe_unused dwc3_meson_g12a_suspend(struct device *dev)
+ 		phy_exit(priv->phys[i]);
+ 	}
+ 
+-	reset_control_assert(priv->reset);
++	reset_control_rearm(priv->reset);
+ 
+ 	return 0;
+ }
+@@ -902,7 +907,9 @@ static int __maybe_unused dwc3_meson_g12a_resume(struct device *dev)
+ 	struct dwc3_meson_g12a *priv = dev_get_drvdata(dev);
+ 	int i, ret;
+ 
+-	reset_control_deassert(priv->reset);
++	ret = reset_control_reset(priv->reset);
++	if (ret)
++		return ret;
+ 
+ 	ret = priv->drvdata->usb_init(priv);
+ 	if (ret)
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 3cb01cdd02c29..b81a9e1c13153 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -769,9 +769,12 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 
+ 		if (qcom->acpi_pdata->is_urs) {
+ 			qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev);
+-			if (!qcom->urs_usb) {
++			if (IS_ERR_OR_NULL(qcom->urs_usb)) {
+ 				dev_err(dev, "failed to create URS USB platdev\n");
+-				return -ENODEV;
++				if (!qcom->urs_usb)
++					return -ENODEV;
++				else
++					return PTR_ERR(qcom->urs_usb);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index a7e069b185448..25ad1e97a4585 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -614,7 +614,7 @@ static int ffs_ep0_open(struct inode *inode, struct file *file)
+ 	file->private_data = ffs;
+ 	ffs_data_opened(ffs);
+ 
+-	return 0;
++	return stream_open(inode, file);
+ }
+ 
+ static int ffs_ep0_release(struct inode *inode, struct file *file)
+@@ -1154,7 +1154,7 @@ ffs_epfile_open(struct inode *inode, struct file *file)
+ 	file->private_data = epfile;
+ 	ffs_data_opened(epfile->ffs);
+ 
+-	return 0;
++	return stream_open(inode, file);
+ }
+ 
+ static int ffs_aio_cancel(struct kiocb *kiocb)
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index c46400be54641..4561d7a183ff4 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -76,8 +76,8 @@ struct snd_uac_chip {
+ 	struct snd_pcm *pcm;
+ 
+ 	/* pre-calculated values for playback iso completion */
+-	unsigned long long p_interval_mil;
+ 	unsigned long long p_residue_mil;
++	unsigned int p_interval;
+ 	unsigned int p_framesize;
+ };
+ 
+@@ -194,21 +194,24 @@ static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req)
+ 		 * If there is a residue from this division, add it to the
+ 		 * residue accumulator.
+ 		 */
++		unsigned long long p_interval_mil = uac->p_interval * 1000000ULL;
++
+ 		pitched_rate_mil = (unsigned long long)
+ 				params->p_srate * prm->pitch;
+ 		div_result = pitched_rate_mil;
+-		do_div(div_result, uac->p_interval_mil);
++		do_div(div_result, uac->p_interval);
++		do_div(div_result, 1000000);
+ 		frames = (unsigned int) div_result;
+ 
+ 		pr_debug("p_srate %d, pitch %d, interval_mil %llu, frames %d\n",
+-				params->p_srate, prm->pitch, uac->p_interval_mil, frames);
++				params->p_srate, prm->pitch, p_interval_mil, frames);
+ 
+ 		p_pktsize = min_t(unsigned int,
+ 					uac->p_framesize * frames,
+ 					ep->maxpacket);
+ 
+ 		if (p_pktsize < ep->maxpacket) {
+-			residue_frames_mil = pitched_rate_mil - frames * uac->p_interval_mil;
++			residue_frames_mil = pitched_rate_mil - frames * p_interval_mil;
+ 			p_pktsize_residue_mil = uac->p_framesize * residue_frames_mil;
+ 		} else
+ 			p_pktsize_residue_mil = 0;
+@@ -222,11 +225,11 @@ static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req)
+ 		 * size and decrease the accumulator.
+ 		 */
+ 		div_result = uac->p_residue_mil;
+-		do_div(div_result, uac->p_interval_mil);
++		do_div(div_result, uac->p_interval);
++		do_div(div_result, 1000000);
+ 		if ((unsigned int) div_result >= uac->p_framesize) {
+ 			req->length += uac->p_framesize;
+-			uac->p_residue_mil -= uac->p_framesize *
+-					   uac->p_interval_mil;
++			uac->p_residue_mil -= uac->p_framesize * p_interval_mil;
+ 			pr_debug("increased req length to %d\n", req->length);
+ 		}
+ 		pr_debug("remains uac->p_residue_mil %llu\n", uac->p_residue_mil);
+@@ -591,7 +594,7 @@ int u_audio_start_playback(struct g_audio *audio_dev)
+ 	unsigned int factor;
+ 	const struct usb_endpoint_descriptor *ep_desc;
+ 	int req_len, i;
+-	unsigned int p_interval, p_pktsize;
++	unsigned int p_pktsize;
+ 
+ 	ep = audio_dev->in_ep;
+ 	prm = &uac->p_prm;
+@@ -612,11 +615,10 @@ int u_audio_start_playback(struct g_audio *audio_dev)
+ 	/* pre-compute some values for iso_complete() */
+ 	uac->p_framesize = params->p_ssize *
+ 			    num_channels(params->p_chmask);
+-	p_interval = factor / (1 << (ep_desc->bInterval - 1));
+-	uac->p_interval_mil = (unsigned long long) p_interval * 1000000;
++	uac->p_interval = factor / (1 << (ep_desc->bInterval - 1));
+ 	p_pktsize = min_t(unsigned int,
+ 				uac->p_framesize *
+-					(params->p_srate / p_interval),
++					(params->p_srate / uac->p_interval),
+ 				ep->maxpacket);
+ 
+ 	req_len = p_pktsize;
+@@ -1145,7 +1147,7 @@ int g_audio_setup(struct g_audio *g_audio, const char *pcm_name,
+ 			}
+ 
+ 			kctl->id.device = pcm->device;
+-			kctl->id.subdevice = i;
++			kctl->id.subdevice = 0;
+ 
+ 			err = snd_ctl_add(card, kctl);
+ 			if (err < 0)
+@@ -1168,7 +1170,7 @@ int g_audio_setup(struct g_audio *g_audio, const char *pcm_name,
+ 			}
+ 
+ 			kctl->id.device = pcm->device;
+-			kctl->id.subdevice = i;
++			kctl->id.subdevice = 0;
+ 
+ 
+ 			kctl->tlv.c = u_audio_volume_tlv;
+diff --git a/drivers/usb/host/ehci-brcm.c b/drivers/usb/host/ehci-brcm.c
+index d3626bfa966b4..6a0f64c9e5e88 100644
+--- a/drivers/usb/host/ehci-brcm.c
++++ b/drivers/usb/host/ehci-brcm.c
+@@ -62,8 +62,12 @@ static int ehci_brcm_hub_control(
+ 	u32 __iomem	*status_reg;
+ 	unsigned long flags;
+ 	int retval, irq_disabled = 0;
++	u32 temp;
+ 
+-	status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1];
++	temp = (wIndex & 0xff) - 1;
++	if (temp >= HCS_N_PORTS_MAX)	/* Avoid index-out-of-bounds warning */
++		temp = 0;
++	status_reg = &ehci->regs->port_status[temp];
+ 
+ 	/*
+ 	 * RESUME is cleared when GetPortStatus() is called 20ms after start
+diff --git a/drivers/usb/host/uhci-platform.c b/drivers/usb/host/uhci-platform.c
+index 70dbd95c3f063..be9e9db7cad10 100644
+--- a/drivers/usb/host/uhci-platform.c
++++ b/drivers/usb/host/uhci-platform.c
+@@ -113,7 +113,8 @@ static int uhci_hcd_platform_probe(struct platform_device *pdev)
+ 				num_ports);
+ 		}
+ 		if (of_device_is_compatible(np, "aspeed,ast2400-uhci") ||
+-		    of_device_is_compatible(np, "aspeed,ast2500-uhci")) {
++		    of_device_is_compatible(np, "aspeed,ast2500-uhci") ||
++		    of_device_is_compatible(np, "aspeed,ast2600-uhci")) {
+ 			uhci->is_aspeed = 1;
+ 			dev_info(&pdev->dev,
+ 				 "Enabled Aspeed implementation workarounds\n");
+diff --git a/drivers/usb/misc/ftdi-elan.c b/drivers/usb/misc/ftdi-elan.c
+index e5a8fcdbb78e7..6c38c62d29b26 100644
+--- a/drivers/usb/misc/ftdi-elan.c
++++ b/drivers/usb/misc/ftdi-elan.c
+@@ -202,6 +202,7 @@ static void ftdi_elan_delete(struct kref *kref)
+ 	mutex_unlock(&ftdi_module_lock);
+ 	kfree(ftdi->bulk_in_buffer);
+ 	ftdi->bulk_in_buffer = NULL;
++	kfree(ftdi);
+ }
+ 
+ static void ftdi_elan_put_kref(struct usb_ftdi *ftdi)
+diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
+index 2808f1ba9f7b8..7d41dfe48adee 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_base.c
++++ b/drivers/vdpa/ifcvf/ifcvf_base.c
+@@ -143,8 +143,8 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *pdev)
+ 			IFCVF_DBG(pdev, "hw->isr = %p\n", hw->isr);
+ 			break;
+ 		case VIRTIO_PCI_CAP_DEVICE_CFG:
+-			hw->net_cfg = get_cap_addr(hw, &cap);
+-			IFCVF_DBG(pdev, "hw->net_cfg = %p\n", hw->net_cfg);
++			hw->dev_cfg = get_cap_addr(hw, &cap);
++			IFCVF_DBG(pdev, "hw->dev_cfg = %p\n", hw->dev_cfg);
+ 			break;
+ 		}
+ 
+@@ -153,7 +153,7 @@ next:
+ 	}
+ 
+ 	if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+-	    hw->isr == NULL || hw->net_cfg == NULL) {
++	    hw->isr == NULL || hw->dev_cfg == NULL) {
+ 		IFCVF_ERR(pdev, "Incomplete PCI capabilities\n");
+ 		return -EIO;
+ 	}
+@@ -174,7 +174,7 @@ next:
+ 	IFCVF_DBG(pdev,
+ 		  "PCI capability mapping: common cfg: %p, notify base: %p\n, isr cfg: %p, device cfg: %p, multiplier: %u\n",
+ 		  hw->common_cfg, hw->notify_base, hw->isr,
+-		  hw->net_cfg, hw->notify_off_multiplier);
++		  hw->dev_cfg, hw->notify_off_multiplier);
+ 
+ 	return 0;
+ }
+@@ -242,33 +242,54 @@ int ifcvf_verify_min_features(struct ifcvf_hw *hw, u64 features)
+ 	return 0;
+ }
+ 
+-void ifcvf_read_net_config(struct ifcvf_hw *hw, u64 offset,
++u32 ifcvf_get_config_size(struct ifcvf_hw *hw)
++{
++	struct ifcvf_adapter *adapter;
++	u32 config_size;
++
++	adapter = vf_to_adapter(hw);
++	switch (hw->dev_type) {
++	case VIRTIO_ID_NET:
++		config_size = sizeof(struct virtio_net_config);
++		break;
++	case VIRTIO_ID_BLOCK:
++		config_size = sizeof(struct virtio_blk_config);
++		break;
++	default:
++		config_size = 0;
++		IFCVF_ERR(adapter->pdev, "VIRTIO ID %u not supported\n", hw->dev_type);
++	}
++
++	return config_size;
++}
++
++void ifcvf_read_dev_config(struct ifcvf_hw *hw, u64 offset,
+ 			   void *dst, int length)
+ {
+ 	u8 old_gen, new_gen, *p;
+ 	int i;
+ 
+-	WARN_ON(offset + length > sizeof(struct virtio_net_config));
++	WARN_ON(offset + length > hw->config_size);
+ 	do {
+ 		old_gen = ifc_ioread8(&hw->common_cfg->config_generation);
+ 		p = dst;
+ 		for (i = 0; i < length; i++)
+-			*p++ = ifc_ioread8(hw->net_cfg + offset + i);
++			*p++ = ifc_ioread8(hw->dev_cfg + offset + i);
+ 
+ 		new_gen = ifc_ioread8(&hw->common_cfg->config_generation);
+ 	} while (old_gen != new_gen);
+ }
+ 
+-void ifcvf_write_net_config(struct ifcvf_hw *hw, u64 offset,
++void ifcvf_write_dev_config(struct ifcvf_hw *hw, u64 offset,
+ 			    const void *src, int length)
+ {
+ 	const u8 *p;
+ 	int i;
+ 
+ 	p = src;
+-	WARN_ON(offset + length > sizeof(struct virtio_net_config));
++	WARN_ON(offset + length > hw->config_size);
+ 	for (i = 0; i < length; i++)
+-		ifc_iowrite8(*p++, hw->net_cfg + offset + i);
++		ifc_iowrite8(*p++, hw->dev_cfg + offset + i);
+ }
+ 
+ static void ifcvf_set_features(struct ifcvf_hw *hw, u64 features)
+diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
+index 09918af3ecf82..c486873f370a8 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_base.h
++++ b/drivers/vdpa/ifcvf/ifcvf_base.h
+@@ -71,12 +71,14 @@ struct ifcvf_hw {
+ 	u64 hw_features;
+ 	u32 dev_type;
+ 	struct virtio_pci_common_cfg __iomem *common_cfg;
+-	void __iomem *net_cfg;
++	void __iomem *dev_cfg;
+ 	struct vring_info vring[IFCVF_MAX_QUEUES];
+ 	void __iomem * const *base;
+ 	char config_msix_name[256];
+ 	struct vdpa_callback config_cb;
+ 	unsigned int config_irq;
++	/* virtio-net or virtio-blk device config size */
++	u32 config_size;
+ };
+ 
+ struct ifcvf_adapter {
+@@ -105,9 +107,9 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *dev);
+ int ifcvf_start_hw(struct ifcvf_hw *hw);
+ void ifcvf_stop_hw(struct ifcvf_hw *hw);
+ void ifcvf_notify_queue(struct ifcvf_hw *hw, u16 qid);
+-void ifcvf_read_net_config(struct ifcvf_hw *hw, u64 offset,
++void ifcvf_read_dev_config(struct ifcvf_hw *hw, u64 offset,
+ 			   void *dst, int length);
+-void ifcvf_write_net_config(struct ifcvf_hw *hw, u64 offset,
++void ifcvf_write_dev_config(struct ifcvf_hw *hw, u64 offset,
+ 			    const void *src, int length);
+ u8 ifcvf_get_status(struct ifcvf_hw *hw);
+ void ifcvf_set_status(struct ifcvf_hw *hw, u8 status);
+@@ -120,4 +122,5 @@ u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid);
+ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num);
+ struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw);
+ int ifcvf_probed_virtio_net(struct ifcvf_hw *hw);
++u32 ifcvf_get_config_size(struct ifcvf_hw *hw);
+ #endif /* _IFCVF_H_ */
+diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
+index 6dc75ca70b377..92ba7126e5d6d 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_main.c
++++ b/drivers/vdpa/ifcvf/ifcvf_main.c
+@@ -366,24 +366,9 @@ static u32 ifcvf_vdpa_get_vq_align(struct vdpa_device *vdpa_dev)
+ 
+ static size_t ifcvf_vdpa_get_config_size(struct vdpa_device *vdpa_dev)
+ {
+-	struct ifcvf_adapter *adapter = vdpa_to_adapter(vdpa_dev);
+ 	struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
+-	struct pci_dev *pdev = adapter->pdev;
+-	size_t size;
+-
+-	switch (vf->dev_type) {
+-	case VIRTIO_ID_NET:
+-		size = sizeof(struct virtio_net_config);
+-		break;
+-	case VIRTIO_ID_BLOCK:
+-		size = sizeof(struct virtio_blk_config);
+-		break;
+-	default:
+-		size = 0;
+-		IFCVF_ERR(pdev, "VIRTIO ID %u not supported\n", vf->dev_type);
+-	}
+ 
+-	return size;
++	return vf->config_size;
+ }
+ 
+ static void ifcvf_vdpa_get_config(struct vdpa_device *vdpa_dev,
+@@ -392,8 +377,7 @@ static void ifcvf_vdpa_get_config(struct vdpa_device *vdpa_dev,
+ {
+ 	struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
+ 
+-	WARN_ON(offset + len > sizeof(struct virtio_net_config));
+-	ifcvf_read_net_config(vf, offset, buf, len);
++	ifcvf_read_dev_config(vf, offset, buf, len);
+ }
+ 
+ static void ifcvf_vdpa_set_config(struct vdpa_device *vdpa_dev,
+@@ -402,8 +386,7 @@ static void ifcvf_vdpa_set_config(struct vdpa_device *vdpa_dev,
+ {
+ 	struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
+ 
+-	WARN_ON(offset + len > sizeof(struct virtio_net_config));
+-	ifcvf_write_net_config(vf, offset, buf, len);
++	ifcvf_write_dev_config(vf, offset, buf, len);
+ }
+ 
+ static void ifcvf_vdpa_set_config_cb(struct vdpa_device *vdpa_dev,
+@@ -542,6 +525,7 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+ 		vf->vring[i].irq = -EINVAL;
+ 
+ 	vf->hw_features = ifcvf_get_hw_features(vf);
++	vf->config_size = ifcvf_get_config_size(vf);
+ 
+ 	adapter->vdpa.mdev = &ifcvf_mgmt_dev->mdev;
+ 	ret = _vdpa_register_device(&adapter->vdpa, vf->nr_vring);
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 63813fbb5f62a..ef6da39ccb3f9 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -876,8 +876,6 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
+ 	MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size);
+ 	MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn);
+-	if (MLX5_CAP_DEV_VDPA_EMULATION(ndev->mvdev.mdev, eth_frame_offload_type))
+-		MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0, 1);
+ 
+ 	err = mlx5_cmd_exec(ndev->mvdev.mdev, in, inlen, out, sizeof(out));
+ 	if (err)
+@@ -1554,9 +1552,11 @@ static int change_num_qps(struct mlx5_vdpa_dev *mvdev, int newqps)
+ 	return 0;
+ 
+ clean_added:
+-	for (--i; i >= cur_qps; --i)
++	for (--i; i >= 2 * cur_qps; --i)
+ 		teardown_vq(ndev, &ndev->vqs[i]);
+ 
++	ndev->cur_num_vqs = 2 * cur_qps;
++
+ 	return err;
+ }
+ 
+@@ -2676,7 +2676,7 @@ static int mlx5v_probe(struct auxiliary_device *adev,
+ 	mgtdev->mgtdev.ops = &mdev_ops;
+ 	mgtdev->mgtdev.device = mdev->device;
+ 	mgtdev->mgtdev.id_table = id_table;
+-	mgtdev->mgtdev.config_attr_mask = (1 << VDPA_ATTR_DEV_NET_CFG_MACADDR);
++	mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
+ 	mgtdev->madev = madev;
+ 
+ 	err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
+diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
+index d094299c2a485..f12c76d6e61de 100644
+--- a/drivers/video/backlight/qcom-wled.c
++++ b/drivers/video/backlight/qcom-wled.c
+@@ -231,14 +231,14 @@ struct wled {
+ static int wled3_set_brightness(struct wled *wled, u16 brightness)
+ {
+ 	int rc, i;
+-	u8 v[2];
++	__le16 v;
+ 
+-	v[0] = brightness & 0xff;
+-	v[1] = (brightness >> 8) & 0xf;
++	v = cpu_to_le16(brightness & WLED3_SINK_REG_BRIGHT_MAX);
+ 
+ 	for (i = 0;  i < wled->cfg.num_strings; ++i) {
+ 		rc = regmap_bulk_write(wled->regmap, wled->ctrl_addr +
+-				       WLED3_SINK_REG_BRIGHT(i), v, 2);
++				       WLED3_SINK_REG_BRIGHT(wled->cfg.enabled_strings[i]),
++				       &v, sizeof(v));
+ 		if (rc < 0)
+ 			return rc;
+ 	}
+@@ -250,18 +250,18 @@ static int wled4_set_brightness(struct wled *wled, u16 brightness)
+ {
+ 	int rc, i;
+ 	u16 low_limit = wled->max_brightness * 4 / 1000;
+-	u8 v[2];
++	__le16 v;
+ 
+ 	/* WLED4's lower limit of operation is 0.4% */
+ 	if (brightness > 0 && brightness < low_limit)
+ 		brightness = low_limit;
+ 
+-	v[0] = brightness & 0xff;
+-	v[1] = (brightness >> 8) & 0xf;
++	v = cpu_to_le16(brightness & WLED3_SINK_REG_BRIGHT_MAX);
+ 
+ 	for (i = 0;  i < wled->cfg.num_strings; ++i) {
+ 		rc = regmap_bulk_write(wled->regmap, wled->sink_addr +
+-				       WLED4_SINK_REG_BRIGHT(i), v, 2);
++				       WLED4_SINK_REG_BRIGHT(wled->cfg.enabled_strings[i]),
++				       &v, sizeof(v));
+ 		if (rc < 0)
+ 			return rc;
+ 	}
+@@ -273,21 +273,20 @@ static int wled5_set_brightness(struct wled *wled, u16 brightness)
+ {
+ 	int rc, offset;
+ 	u16 low_limit = wled->max_brightness * 1 / 1000;
+-	u8 v[2];
++	__le16 v;
+ 
+ 	/* WLED5's lower limit is 0.1% */
+ 	if (brightness < low_limit)
+ 		brightness = low_limit;
+ 
+-	v[0] = brightness & 0xff;
+-	v[1] = (brightness >> 8) & 0x7f;
++	v = cpu_to_le16(brightness & WLED5_SINK_REG_BRIGHT_MAX_15B);
+ 
+ 	offset = (wled->cfg.mod_sel == MOD_A) ?
+ 		  WLED5_SINK_REG_MOD_A_BRIGHTNESS_LSB :
+ 		  WLED5_SINK_REG_MOD_B_BRIGHTNESS_LSB;
+ 
+ 	rc = regmap_bulk_write(wled->regmap, wled->sink_addr + offset,
+-			       v, 2);
++			       &v, sizeof(v));
+ 	return rc;
+ }
+ 
+@@ -572,7 +571,7 @@ unlock_mutex:
+ 
+ static void wled_auto_string_detection(struct wled *wled)
+ {
+-	int rc = 0, i, delay_time_us;
++	int rc = 0, i, j, delay_time_us;
+ 	u32 sink_config = 0;
+ 	u8 sink_test = 0, sink_valid = 0, val;
+ 	bool fault_set;
+@@ -619,14 +618,15 @@ static void wled_auto_string_detection(struct wled *wled)
+ 
+ 	/* Iterate through the strings one by one */
+ 	for (i = 0; i < wled->cfg.num_strings; i++) {
+-		sink_test = BIT((WLED4_SINK_REG_CURR_SINK_SHFT + i));
++		j = wled->cfg.enabled_strings[i];
++		sink_test = BIT((WLED4_SINK_REG_CURR_SINK_SHFT + j));
+ 
+ 		/* Enable feedback control */
+ 		rc = regmap_write(wled->regmap, wled->ctrl_addr +
+-				  WLED3_CTRL_REG_FEEDBACK_CONTROL, i + 1);
++				  WLED3_CTRL_REG_FEEDBACK_CONTROL, j + 1);
+ 		if (rc < 0) {
+ 			dev_err(wled->dev, "Failed to enable feedback for SINK %d rc = %d\n",
+-				i + 1, rc);
++				j + 1, rc);
+ 			goto failed_detect;
+ 		}
+ 
+@@ -635,7 +635,7 @@ static void wled_auto_string_detection(struct wled *wled)
+ 				  WLED4_SINK_REG_CURR_SINK, sink_test);
+ 		if (rc < 0) {
+ 			dev_err(wled->dev, "Failed to configure SINK %d rc=%d\n",
+-				i + 1, rc);
++				j + 1, rc);
+ 			goto failed_detect;
+ 		}
+ 
+@@ -662,7 +662,7 @@ static void wled_auto_string_detection(struct wled *wled)
+ 
+ 		if (fault_set)
+ 			dev_dbg(wled->dev, "WLED OVP fault detected with SINK %d\n",
+-				i + 1);
++				j + 1);
+ 		else
+ 			sink_valid |= sink_test;
+ 
+@@ -702,15 +702,16 @@ static void wled_auto_string_detection(struct wled *wled)
+ 	/* Enable valid sinks */
+ 	if (wled->version == 4) {
+ 		for (i = 0; i < wled->cfg.num_strings; i++) {
++			j = wled->cfg.enabled_strings[i];
+ 			if (sink_config &
+-			    BIT(WLED4_SINK_REG_CURR_SINK_SHFT + i))
++			    BIT(WLED4_SINK_REG_CURR_SINK_SHFT + j))
+ 				val = WLED4_SINK_REG_STR_MOD_MASK;
+ 			else
+ 				/* Disable modulator_en for unused sink */
+ 				val = 0;
+ 
+ 			rc = regmap_write(wled->regmap, wled->sink_addr +
+-					  WLED4_SINK_REG_STR_MOD_EN(i), val);
++					  WLED4_SINK_REG_STR_MOD_EN(j), val);
+ 			if (rc < 0) {
+ 				dev_err(wled->dev, "Failed to configure MODULATOR_EN rc=%d\n",
+ 					rc);
+@@ -1256,21 +1257,6 @@ static const struct wled_var_cfg wled5_ovp_cfg = {
+ 	.size = 16,
+ };
+ 
+-static u32 wled3_num_strings_values_fn(u32 idx)
+-{
+-	return idx + 1;
+-}
+-
+-static const struct wled_var_cfg wled3_num_strings_cfg = {
+-	.fn = wled3_num_strings_values_fn,
+-	.size = 3,
+-};
+-
+-static const struct wled_var_cfg wled4_num_strings_cfg = {
+-	.fn = wled3_num_strings_values_fn,
+-	.size = 4,
+-};
+-
+ static u32 wled3_switch_freq_values_fn(u32 idx)
+ {
+ 	return 19200 / (2 * (1 + idx));
+@@ -1344,11 +1330,6 @@ static int wled_configure(struct wled *wled)
+ 			.val_ptr = &cfg->switch_freq,
+ 			.cfg = &wled3_switch_freq_cfg,
+ 		},
+-		{
+-			.name = "qcom,num-strings",
+-			.val_ptr = &cfg->num_strings,
+-			.cfg = &wled3_num_strings_cfg,
+-		},
+ 	};
+ 
+ 	const struct wled_u32_opts wled4_opts[] = {
+@@ -1372,11 +1353,6 @@ static int wled_configure(struct wled *wled)
+ 			.val_ptr = &cfg->switch_freq,
+ 			.cfg = &wled3_switch_freq_cfg,
+ 		},
+-		{
+-			.name = "qcom,num-strings",
+-			.val_ptr = &cfg->num_strings,
+-			.cfg = &wled4_num_strings_cfg,
+-		},
+ 	};
+ 
+ 	const struct wled_u32_opts wled5_opts[] = {
+@@ -1400,11 +1376,6 @@ static int wled_configure(struct wled *wled)
+ 			.val_ptr = &cfg->switch_freq,
+ 			.cfg = &wled3_switch_freq_cfg,
+ 		},
+-		{
+-			.name = "qcom,num-strings",
+-			.val_ptr = &cfg->num_strings,
+-			.cfg = &wled4_num_strings_cfg,
+-		},
+ 		{
+ 			.name = "qcom,modulator-sel",
+ 			.val_ptr = &cfg->mod_sel,
+@@ -1523,16 +1494,57 @@ static int wled_configure(struct wled *wled)
+ 			*bool_opts[i].val_ptr = true;
+ 	}
+ 
+-	cfg->num_strings = cfg->num_strings + 1;
+-
+ 	string_len = of_property_count_elems_of_size(dev->of_node,
+ 						     "qcom,enabled-strings",
+ 						     sizeof(u32));
+-	if (string_len > 0)
+-		of_property_read_u32_array(dev->of_node,
++	if (string_len > 0) {
++		if (string_len > wled->max_string_count) {
++			dev_err(dev, "Cannot have more than %d strings\n",
++				wled->max_string_count);
++			return -EINVAL;
++		}
++
++		rc = of_property_read_u32_array(dev->of_node,
+ 						"qcom,enabled-strings",
+ 						wled->cfg.enabled_strings,
+-						sizeof(u32));
++						string_len);
++		if (rc) {
++			dev_err(dev, "Failed to read %d elements from qcom,enabled-strings: %d\n",
++				string_len, rc);
++			return rc;
++		}
++
++		for (i = 0; i < string_len; ++i) {
++			if (wled->cfg.enabled_strings[i] >= wled->max_string_count) {
++				dev_err(dev,
++					"qcom,enabled-strings index %d at %d is out of bounds\n",
++					wled->cfg.enabled_strings[i], i);
++				return -EINVAL;
++			}
++		}
++
++		cfg->num_strings = string_len;
++	}
++
++	rc = of_property_read_u32(dev->of_node, "qcom,num-strings", &val);
++	if (!rc) {
++		if (val < 1 || val > wled->max_string_count) {
++			dev_err(dev, "qcom,num-strings must be between 1 and %d\n",
++				wled->max_string_count);
++			return -EINVAL;
++		}
++
++		if (string_len > 0) {
++			dev_warn(dev, "Only one of qcom,num-strings or qcom,enabled-strings"
++				      " should be set\n");
++			if (val > string_len) {
++				dev_err(dev, "qcom,num-strings exceeds qcom,enabled-strings\n");
++				return -EINVAL;
++			}
++		}
++
++		cfg->num_strings = val;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
+index 96e5a87827697..b6b7c489c8b62 100644
+--- a/drivers/virtio/virtio_mem.c
++++ b/drivers/virtio/virtio_mem.c
+@@ -592,7 +592,7 @@ static int virtio_mem_sbm_sb_states_prepare_next_mb(struct virtio_mem *vm)
+ 		return -ENOMEM;
+ 
+ 	mutex_lock(&vm->hotplug_mutex);
+-	if (new_bitmap)
++	if (vm->sbm.sb_states)
+ 		memcpy(new_bitmap, vm->sbm.sb_states, old_pages * PAGE_SIZE);
+ 
+ 	old_bitmap = vm->sbm.sb_states;
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 028b05d445460..962f1477b1fab 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1197,8 +1197,10 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 	if (virtqueue_use_indirect(_vq, total_sg)) {
+ 		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
+ 						    in_sgs, data, gfp);
+-		if (err != -ENOMEM)
++		if (err != -ENOMEM) {
++			END_USE(vq);
+ 			return err;
++		}
+ 
+ 		/* fall back on direct */
+ 	}
+diff --git a/drivers/w1/slaves/w1_ds28e04.c b/drivers/w1/slaves/w1_ds28e04.c
+index e4f336111edc6..6cef6e2edb892 100644
+--- a/drivers/w1/slaves/w1_ds28e04.c
++++ b/drivers/w1/slaves/w1_ds28e04.c
+@@ -32,7 +32,7 @@ static int w1_strong_pullup = 1;
+ module_param_named(strong_pullup, w1_strong_pullup, int, 0);
+ 
+ /* enable/disable CRC checking on DS28E04-100 memory accesses */
+-static char w1_enable_crccheck = 1;
++static bool w1_enable_crccheck = true;
+ 
+ #define W1_EEPROM_SIZE		512
+ #define W1_PAGE_COUNT		16
+@@ -339,32 +339,18 @@ static BIN_ATTR_RW(pio, 1);
+ static ssize_t crccheck_show(struct device *dev, struct device_attribute *attr,
+ 			     char *buf)
+ {
+-	if (put_user(w1_enable_crccheck + 0x30, buf))
+-		return -EFAULT;
+-
+-	return sizeof(w1_enable_crccheck);
++	return sysfs_emit(buf, "%d\n", w1_enable_crccheck);
+ }
+ 
+ static ssize_t crccheck_store(struct device *dev, struct device_attribute *attr,
+ 			      const char *buf, size_t count)
+ {
+-	char val;
+-
+-	if (count != 1 || !buf)
+-		return -EINVAL;
++	int err = kstrtobool(buf, &w1_enable_crccheck);
+ 
+-	if (get_user(val, buf))
+-		return -EFAULT;
++	if (err)
++		return err;
+ 
+-	/* convert to decimal */
+-	val = val - 0x30;
+-	if (val != 0 && val != 1)
+-		return -EINVAL;
+-
+-	/* set the new value */
+-	w1_enable_crccheck = val;
+-
+-	return sizeof(w1_enable_crccheck);
++	return count;
+ }
+ 
+ static DEVICE_ATTR_RW(crccheck);
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index fec1b65371665..59ffea8000791 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -250,13 +250,13 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
+ 	if (!refcount_dec_and_test(&map->users))
+ 		return;
+ 
++	if (map->pages && !use_ptemod)
++		unmap_grant_pages(map, 0, map->count);
++
+ 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
+ 		notify_remote_via_evtchn(map->notify.event);
+ 		evtchn_put(map->notify.event);
+ 	}
+-
+-	if (map->pages && !use_ptemod)
+-		unmap_grant_pages(map, 0, map->count);
+ 	gntdev_free_map(map);
+ }
+ 
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index f735b8798ba12..8b090c40daf77 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -1214,7 +1214,12 @@ again:
+ 	ret = btrfs_search_slot(NULL, fs_info->extent_root, &key, path, 0, 0);
+ 	if (ret < 0)
+ 		goto out;
+-	BUG_ON(ret == 0);
++	if (ret == 0) {
++		/* This shouldn't happen, indicates a bug or fs corruption. */
++		ASSERT(ret != 0);
++		ret = -EUCLEAN;
++		goto out;
++	}
+ 
+ #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ 	if (trans && likely(trans->type != __TRANS_DUMMY) &&
+@@ -1360,10 +1365,18 @@ again:
+ 				goto out;
+ 			if (!ret && extent_item_pos) {
+ 				/*
+-				 * we've recorded that parent, so we must extend
+-				 * its inode list here
++				 * We've recorded that parent, so we must extend
++				 * its inode list here.
++				 *
++				 * However, if there was corruption we may not
++				 * have found an eie, return an error in this
++				 * case.
+ 				 */
+-				BUG_ON(!eie);
++				ASSERT(eie);
++				if (!eie) {
++					ret = -EUCLEAN;
++					goto out;
++				}
+ 				while (eie->next)
+ 					eie = eie->next;
+ 				eie->next = ref->inode_list;
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index f704339c6b865..35660791e084a 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1570,12 +1570,9 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ {
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct extent_buffer *b;
+-	int root_lock;
++	int root_lock = 0;
+ 	int level = 0;
+ 
+-	/* We try very hard to do read locks on the root */
+-	root_lock = BTRFS_READ_LOCK;
+-
+ 	if (p->search_commit_root) {
+ 		/*
+ 		 * The commit roots are read only so we always do read locks,
+@@ -1613,6 +1610,9 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ 		goto out;
+ 	}
+ 
++	/* We try very hard to do read locks on the root */
++	root_lock = BTRFS_READ_LOCK;
++
+ 	/*
+ 	 * If the level is set to maximum, we can skip trying to get the read
+ 	 * lock.
+@@ -1639,6 +1639,17 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ 	level = btrfs_header_level(b);
+ 
+ out:
++	/*
++	 * The root may have failed to write out at some point, and thus is no
++	 * longer valid, return an error in this case.
++	 */
++	if (!extent_buffer_uptodate(b)) {
++		if (root_lock)
++			btrfs_tree_unlock_rw(b, root_lock);
++		free_extent_buffer(b);
++		return ERR_PTR(-EIO);
++	}
++
+ 	p->nodes[level] = b;
+ 	if (!p->skip_locking)
+ 		p->locks[level] = root_lock;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index c85a7d44da798..e0238dd5f2f21 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -322,7 +322,7 @@ static int btrfs_init_dev_replace_tgtdev(struct btrfs_fs_info *fs_info,
+ 	set_blocksize(device->bdev, BTRFS_BDEV_BLOCKSIZE);
+ 	device->fs_devices = fs_info->fs_devices;
+ 
+-	ret = btrfs_get_dev_zone_info(device);
++	ret = btrfs_get_dev_zone_info(device, false);
+ 	if (ret)
+ 		goto error;
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index b3f2e2232326c..5f0a879c10436 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3571,6 +3571,8 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		goto fail_sysfs;
+ 	}
+ 
++	btrfs_free_zone_cache(fs_info);
++
+ 	if (!sb_rdonly(sb) && fs_info->fs_devices->missing_devices &&
+ 	    !btrfs_check_rw_degradable(fs_info, NULL)) {
+ 		btrfs_warn(fs_info,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 25ef6e3fd3069..7b4ee1b2d5d83 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -3790,23 +3790,35 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
+ 	spin_unlock(&fs_info->relocation_bg_lock);
+ 	if (skip)
+ 		return 1;
++
+ 	/* Check RO and no space case before trying to activate it */
+ 	spin_lock(&block_group->lock);
+ 	if (block_group->ro ||
+ 	    block_group->alloc_offset == block_group->zone_capacity) {
+-		spin_unlock(&block_group->lock);
+-		return 1;
++		ret = 1;
++		/*
++		 * May need to clear fs_info->{treelog,data_reloc}_bg.
++		 * Return the error after taking the locks.
++		 */
+ 	}
+ 	spin_unlock(&block_group->lock);
+ 
+-	if (!btrfs_zone_activate(block_group))
+-		return 1;
++	if (!ret && !btrfs_zone_activate(block_group)) {
++		ret = 1;
++		/*
++		 * May need to clear fs_info->{treelog,data_reloc}_bg.
++		 * Return the error after taking the locks.
++		 */
++	}
+ 
+ 	spin_lock(&space_info->lock);
+ 	spin_lock(&block_group->lock);
+ 	spin_lock(&fs_info->treelog_bg_lock);
+ 	spin_lock(&fs_info->relocation_bg_lock);
+ 
++	if (ret)
++		goto out;
++
+ 	ASSERT(!ffe_ctl->for_treelog ||
+ 	       block_group->start == fs_info->treelog_bg ||
+ 	       fs_info->treelog_bg == 0);
+@@ -3947,6 +3959,28 @@ static void found_extent(struct find_free_extent_ctl *ffe_ctl,
+ 	}
+ }
+ 
++static bool can_allocate_chunk(struct btrfs_fs_info *fs_info,
++			       struct find_free_extent_ctl *ffe_ctl)
++{
++	switch (ffe_ctl->policy) {
++	case BTRFS_EXTENT_ALLOC_CLUSTERED:
++		return true;
++	case BTRFS_EXTENT_ALLOC_ZONED:
++		/*
++		 * If we have enough free space left in an already
++		 * active block group and we can't activate any other
++		 * zone now, do not allow allocating a new chunk and
++		 * let find_free_extent() retry with a smaller size.
++		 */
++		if (ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size &&
++		    !btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->flags))
++			return false;
++		return true;
++	default:
++		BUG();
++	}
++}
++
+ static int chunk_allocation_failed(struct find_free_extent_ctl *ffe_ctl)
+ {
+ 	switch (ffe_ctl->policy) {
+@@ -3987,18 +4021,6 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
+ 		return 0;
+ 	}
+ 
+-	if (ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size &&
+-	    !btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->index)) {
+-		/*
+-		 * If we have enough free space left in an already active block
+-		 * group and we can't activate any other zone now, retry the
+-		 * active ones with a smaller allocation size.  Returning early
+-		 * from here will tell btrfs_reserve_extent() to haven the
+-		 * size.
+-		 */
+-		return -ENOSPC;
+-	}
+-
+ 	if (ffe_ctl->loop >= LOOP_CACHING_WAIT && ffe_ctl->have_caching_bg)
+ 		return 1;
+ 
+@@ -4034,6 +4056,10 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
+ 			struct btrfs_trans_handle *trans;
+ 			int exist = 0;
+ 
++			/* Check if the allocation policy allows creating a new chunk */
++			if (!can_allocate_chunk(fs_info, ffe_ctl))
++				return -ENOSPC;
++
+ 			trans = current->journal_info;
+ 			if (trans)
+ 				exist = 1;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b8c911a4a320f..39a6745434613 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -10595,9 +10595,19 @@ static int btrfs_add_swap_extent(struct swap_info_struct *sis,
+ 				 struct btrfs_swap_info *bsi)
+ {
+ 	unsigned long nr_pages;
++	unsigned long max_pages;
+ 	u64 first_ppage, first_ppage_reported, next_ppage;
+ 	int ret;
+ 
++	/*
++	 * Our swapfile may have had its size extended after the swap header was
++	 * written. In that case activating the swapfile should not go beyond
++	 * the max size set in the swap header.
++	 */
++	if (bsi->nr_pages >= sis->max)
++		return 0;
++
++	max_pages = sis->max - bsi->nr_pages;
+ 	first_ppage = ALIGN(bsi->block_start, PAGE_SIZE) >> PAGE_SHIFT;
+ 	next_ppage = ALIGN_DOWN(bsi->block_start + bsi->block_len,
+ 				PAGE_SIZE) >> PAGE_SHIFT;
+@@ -10605,6 +10615,7 @@ static int btrfs_add_swap_extent(struct swap_info_struct *sis,
+ 	if (first_ppage >= next_ppage)
+ 		return 0;
+ 	nr_pages = next_ppage - first_ppage;
++	nr_pages = min(nr_pages, max_pages);
+ 
+ 	first_ppage_reported = first_ppage;
+ 	if (bsi->start == 0)
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 6c037f1252b77..071f7334f8189 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -940,6 +940,14 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info)
+ 	int ret = 0;
+ 	int slot;
+ 
++	/*
++	 * We need to have subvol_sem write locked, to prevent races between
++	 * concurrent tasks trying to enable quotas, because we will unlock
++	 * and relock qgroup_ioctl_lock before setting fs_info->quota_root
++	 * and before setting BTRFS_FS_QUOTA_ENABLED.
++	 */
++	lockdep_assert_held_write(&fs_info->subvol_sem);
++
+ 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (fs_info->quota_root)
+ 		goto out;
+@@ -1117,8 +1125,19 @@ out_add_root:
+ 		goto out_free_path;
+ 	}
+ 
++	mutex_unlock(&fs_info->qgroup_ioctl_lock);
++	/*
++	 * Commit the transaction while not holding qgroup_ioctl_lock, to avoid
++	 * a deadlock with tasks concurrently doing other qgroup operations, such as
++	 * adding/removing qgroups or adding/deleting qgroup relations for example,
++	 * because all qgroup operations first start or join a transaction and then
++	 * lock the qgroup_ioctl_lock mutex.
++	 * We are safe from a concurrent task trying to enable quotas, by calling
++	 * this function, since we are serialized by fs_info->subvol_sem.
++	 */
+ 	ret = btrfs_commit_transaction(trans);
+ 	trans = NULL;
++	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (ret)
+ 		goto out_free_path;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index fd0ced829edb8..42391d4aeb119 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2643,7 +2643,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 	device->fs_info = fs_info;
+ 	device->bdev = bdev;
+ 
+-	ret = btrfs_get_dev_zone_info(device);
++	ret = btrfs_get_dev_zone_info(device, false);
+ 	if (ret)
+ 		goto error_free_device;
+ 
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 678a294695119..e2ccbb339499c 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -5,6 +5,7 @@
+ #include <linux/blkdev.h>
+ #include <linux/sched/mm.h>
+ #include <linux/atomic.h>
++#include <linux/vmalloc.h>
+ #include "ctree.h"
+ #include "volumes.h"
+ #include "zoned.h"
+@@ -213,6 +214,8 @@ static int emulate_report_zones(struct btrfs_device *device, u64 pos,
+ static int btrfs_get_dev_zones(struct btrfs_device *device, u64 pos,
+ 			       struct blk_zone *zones, unsigned int *nr_zones)
+ {
++	struct btrfs_zoned_device_info *zinfo = device->zone_info;
++	u32 zno;
+ 	int ret;
+ 
+ 	if (!*nr_zones)
+@@ -224,6 +227,34 @@ static int btrfs_get_dev_zones(struct btrfs_device *device, u64 pos,
+ 		return 0;
+ 	}
+ 
++	/* Check cache */
++	if (zinfo->zone_cache) {
++		unsigned int i;
++
++		ASSERT(IS_ALIGNED(pos, zinfo->zone_size));
++		zno = pos >> zinfo->zone_size_shift;
++		/*
++		 * We cannot report zones beyond the zone end, so it is OK to
++		 * cap *nr_zones at the end.
++		 */
++		*nr_zones = min_t(u32, *nr_zones, zinfo->nr_zones - zno);
++
++		for (i = 0; i < *nr_zones; i++) {
++			struct blk_zone *zone_info;
++
++			zone_info = &zinfo->zone_cache[zno + i];
++			if (!zone_info->len)
++				break;
++		}
++
++		if (i == *nr_zones) {
++			/* Cache hit on all the zones */
++			memcpy(zones, zinfo->zone_cache + zno,
++			       sizeof(*zinfo->zone_cache) * *nr_zones);
++			return 0;
++		}
++	}
++
+ 	ret = blkdev_report_zones(device->bdev, pos >> SECTOR_SHIFT, *nr_zones,
+ 				  copy_zone_info_cb, zones);
+ 	if (ret < 0) {
+@@ -237,6 +268,11 @@ static int btrfs_get_dev_zones(struct btrfs_device *device, u64 pos,
+ 	if (!ret)
+ 		return -EIO;
+ 
++	/* Populate cache */
++	if (zinfo->zone_cache)
++		memcpy(zinfo->zone_cache + zno, zones,
++		       sizeof(*zinfo->zone_cache) * *nr_zones);
++
+ 	return 0;
+ }
+ 
+@@ -300,7 +336,7 @@ int btrfs_get_dev_zone_info_all_devices(struct btrfs_fs_info *fs_info)
+ 		if (!device->bdev)
+ 			continue;
+ 
+-		ret = btrfs_get_dev_zone_info(device);
++		ret = btrfs_get_dev_zone_info(device, true);
+ 		if (ret)
+ 			break;
+ 	}
+@@ -309,7 +345,7 @@ int btrfs_get_dev_zone_info_all_devices(struct btrfs_fs_info *fs_info)
+ 	return ret;
+ }
+ 
+-int btrfs_get_dev_zone_info(struct btrfs_device *device)
++int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache)
+ {
+ 	struct btrfs_fs_info *fs_info = device->fs_info;
+ 	struct btrfs_zoned_device_info *zone_info = NULL;
+@@ -339,6 +375,8 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device)
+ 	if (!zone_info)
+ 		return -ENOMEM;
+ 
++	device->zone_info = zone_info;
++
+ 	if (!bdev_is_zoned(bdev)) {
+ 		if (!fs_info->zone_size) {
+ 			ret = calculate_emulated_zone_size(fs_info);
+@@ -407,6 +445,23 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device)
+ 		goto out;
+ 	}
+ 
++	/*
++	 * Enable zone cache only for a zoned device. On a non-zoned device, we
++	 * fill the zone info with emulated CONVENTIONAL zones, so no need to
++	 * use the cache.
++	 */
++	if (populate_cache && bdev_is_zoned(device->bdev)) {
++		zone_info->zone_cache = vzalloc(sizeof(struct blk_zone) *
++						zone_info->nr_zones);
++		if (!zone_info->zone_cache) {
++			btrfs_err_in_rcu(device->fs_info,
++				"zoned: failed to allocate zone cache for %s",
++				rcu_str_deref(device->name));
++			ret = -ENOMEM;
++			goto out;
++		}
++	}
++
+ 	/* Get zones type */
+ 	nactive = 0;
+ 	while (sector < nr_sectors) {
+@@ -505,8 +560,6 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device)
+ 
+ 	kfree(zones);
+ 
+-	device->zone_info = zone_info;
+-
+ 	switch (bdev_zoned_model(bdev)) {
+ 	case BLK_ZONED_HM:
+ 		model = "host-managed zoned";
+@@ -539,11 +592,7 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device)
+ out:
+ 	kfree(zones);
+ out_free_zone_info:
+-	bitmap_free(zone_info->active_zones);
+-	bitmap_free(zone_info->empty_zones);
+-	bitmap_free(zone_info->seq_zones);
+-	kfree(zone_info);
+-	device->zone_info = NULL;
++	btrfs_destroy_dev_zone_info(device);
+ 
+ 	return ret;
+ }
+@@ -558,6 +607,7 @@ void btrfs_destroy_dev_zone_info(struct btrfs_device *device)
+ 	bitmap_free(zone_info->active_zones);
+ 	bitmap_free(zone_info->seq_zones);
+ 	bitmap_free(zone_info->empty_zones);
++	vfree(zone_info->zone_cache);
+ 	kfree(zone_info);
+ 	device->zone_info = NULL;
+ }
+@@ -1884,7 +1934,7 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group)
+ 	return ret;
+ }
+ 
+-bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, int raid_index)
++bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
+ {
+ 	struct btrfs_device *device;
+ 	bool ret = false;
+@@ -1893,8 +1943,7 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, int raid_index
+ 		return true;
+ 
+ 	/* Non-single profiles are not supported yet */
+-	if (raid_index != BTRFS_RAID_SINGLE)
+-		return false;
++	ASSERT((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0);
+ 
+ 	/* Check if there is a device with active zones left */
+ 	mutex_lock(&fs_devices->device_list_mutex);
+@@ -1975,3 +2024,21 @@ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg)
+ 		fs_info->data_reloc_bg = 0;
+ 	spin_unlock(&fs_info->relocation_bg_lock);
+ }
++
++void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info)
++{
++	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
++	struct btrfs_device *device;
++
++	if (!btrfs_is_zoned(fs_info))
++		return;
++
++	mutex_lock(&fs_devices->device_list_mutex);
++	list_for_each_entry(device, &fs_devices->devices, dev_list) {
++		if (device->zone_info) {
++			vfree(device->zone_info->zone_cache);
++			device->zone_info->zone_cache = NULL;
++		}
++	}
++	mutex_unlock(&fs_devices->device_list_mutex);
++}
+diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
+index e53ab7b96437e..2695549208e2e 100644
+--- a/fs/btrfs/zoned.h
++++ b/fs/btrfs/zoned.h
+@@ -28,6 +28,7 @@ struct btrfs_zoned_device_info {
+ 	unsigned long *seq_zones;
+ 	unsigned long *empty_zones;
+ 	unsigned long *active_zones;
++	struct blk_zone *zone_cache;
+ 	struct blk_zone sb_zones[2 * BTRFS_SUPER_MIRROR_MAX];
+ };
+ 
+@@ -35,7 +36,7 @@ struct btrfs_zoned_device_info {
+ int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
+ 		       struct blk_zone *zone);
+ int btrfs_get_dev_zone_info_all_devices(struct btrfs_fs_info *fs_info);
+-int btrfs_get_dev_zone_info(struct btrfs_device *device);
++int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache);
+ void btrfs_destroy_dev_zone_info(struct btrfs_device *device);
+ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info);
+ int btrfs_check_mountopts_zoned(struct btrfs_fs_info *info);
+@@ -71,11 +72,11 @@ struct btrfs_device *btrfs_zoned_get_device(struct btrfs_fs_info *fs_info,
+ 					    u64 logical, u64 length);
+ bool btrfs_zone_activate(struct btrfs_block_group *block_group);
+ int btrfs_zone_finish(struct btrfs_block_group *block_group);
+-bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices,
+-			     int raid_index);
++bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags);
+ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical,
+ 			     u64 length);
+ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg);
++void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info);
+ #else /* CONFIG_BLK_DEV_ZONED */
+ static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
+ 				     struct blk_zone *zone)
+@@ -88,7 +89,8 @@ static inline int btrfs_get_dev_zone_info_all_devices(struct btrfs_fs_info *fs_i
+ 	return 0;
+ }
+ 
+-static inline int btrfs_get_dev_zone_info(struct btrfs_device *device)
++static inline int btrfs_get_dev_zone_info(struct btrfs_device *device,
++					  bool populate_cache)
+ {
+ 	return 0;
+ }
+@@ -222,7 +224,7 @@ static inline int btrfs_zone_finish(struct btrfs_block_group *block_group)
+ }
+ 
+ static inline bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices,
+-					   int raid_index)
++					   u64 flags)
+ {
+ 	return true;
+ }
+@@ -232,6 +234,7 @@ static inline void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info,
+ 
+ static inline void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg) { }
+ 
++static inline void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info) { }
+ #endif
+ 
+ static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 035dc3e245dca..3eee806fd29f0 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -1354,7 +1354,7 @@ sess_auth_rawntlmssp_negotiate(struct sess_data *sess_data)
+ 				     &blob_len, ses,
+ 				     sess_data->nls_cp);
+ 	if (rc)
+-		goto out;
++		goto out_free_ntlmsspblob;
+ 
+ 	sess_data->iov[1].iov_len = blob_len;
+ 	sess_data->iov[1].iov_base = ntlmsspblob;
+@@ -1362,7 +1362,7 @@ sess_auth_rawntlmssp_negotiate(struct sess_data *sess_data)
+ 
+ 	rc = _sess_auth_rawntlmssp_assemble_req(sess_data);
+ 	if (rc)
+-		goto out;
++		goto out_free_ntlmsspblob;
+ 
+ 	rc = sess_sendreceive(sess_data);
+ 
+@@ -1376,14 +1376,14 @@ sess_auth_rawntlmssp_negotiate(struct sess_data *sess_data)
+ 		rc = 0;
+ 
+ 	if (rc)
+-		goto out;
++		goto out_free_ntlmsspblob;
+ 
+ 	cifs_dbg(FYI, "rawntlmssp session setup challenge phase\n");
+ 
+ 	if (smb_buf->WordCount != 4) {
+ 		rc = -EIO;
+ 		cifs_dbg(VFS, "bad word count %d\n", smb_buf->WordCount);
+-		goto out;
++		goto out_free_ntlmsspblob;
+ 	}
+ 
+ 	ses->Suid = smb_buf->Uid;   /* UID left in wire format (le) */
+@@ -1397,10 +1397,13 @@ sess_auth_rawntlmssp_negotiate(struct sess_data *sess_data)
+ 		cifs_dbg(VFS, "bad security blob length %d\n",
+ 				blob_len);
+ 		rc = -EINVAL;
+-		goto out;
++		goto out_free_ntlmsspblob;
+ 	}
+ 
+ 	rc = decode_ntlmssp_challenge(bcc_ptr, blob_len, ses);
++
++out_free_ntlmsspblob:
++	kfree(ntlmsspblob);
+ out:
+ 	sess_free_buffer(sess_data);
+ 
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index 7d162b0efbf03..950c63fa4d0b2 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -147,7 +147,7 @@ static int debugfs_locked_down(struct inode *inode,
+ 			       struct file *filp,
+ 			       const struct file_operations *real_fops)
+ {
+-	if ((inode->i_mode & 07777) == 0444 &&
++	if ((inode->i_mode & 07777 & ~0444) == 0 &&
+ 	    !(filp->f_mode & FMODE_WRITE) &&
+ 	    !real_fops->unlocked_ioctl &&
+ 	    !real_fops->compat_ioctl &&
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index c502c065d0075..28d1f35b11a4d 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -3973,6 +3973,14 @@ static int validate_message(struct dlm_lkb *lkb, struct dlm_message *ms)
+ 	int from = ms->m_header.h_nodeid;
+ 	int error = 0;
+ 
++	/* currently mixing of user/kernel locks are not supported */
++	if (ms->m_flags & DLM_IFL_USER && ~lkb->lkb_flags & DLM_IFL_USER) {
++		log_error(lkb->lkb_resource->res_ls,
++			  "got user dlm message for a kernel lock");
++		error = -EINVAL;
++		goto out;
++	}
++
+ 	switch (ms->m_type) {
+ 	case DLM_MSG_CONVERT:
+ 	case DLM_MSG_UNLOCK:
+@@ -4001,6 +4009,7 @@ static int validate_message(struct dlm_lkb *lkb, struct dlm_message *ms)
+ 		error = -EINVAL;
+ 	}
+ 
++out:
+ 	if (error)
+ 		log_error(lkb->lkb_resource->res_ls,
+ 			  "ignore invalid message %d from %d %x %x %x %d",
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 8f715c620e1f8..7a8efce1c343e 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -592,8 +592,8 @@ int dlm_lowcomms_nodes_set_mark(int nodeid, unsigned int mark)
+ static void lowcomms_error_report(struct sock *sk)
+ {
+ 	struct connection *con;
+-	struct sockaddr_storage saddr;
+ 	void (*orig_report)(struct sock *) = NULL;
++	struct inet_sock *inet;
+ 
+ 	read_lock_bh(&sk->sk_callback_lock);
+ 	con = sock2con(sk);
+@@ -601,33 +601,33 @@ static void lowcomms_error_report(struct sock *sk)
+ 		goto out;
+ 
+ 	orig_report = listen_sock.sk_error_report;
+-	if (kernel_getpeername(sk->sk_socket, (struct sockaddr *)&saddr) < 0) {
+-		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
+-				   "sending to node %d, port %d, "
+-				   "sk_err=%d/%d\n", dlm_our_nodeid(),
+-				   con->nodeid, dlm_config.ci_tcp_port,
+-				   sk->sk_err, sk->sk_err_soft);
+-	} else if (saddr.ss_family == AF_INET) {
+-		struct sockaddr_in *sin4 = (struct sockaddr_in *)&saddr;
+ 
++	inet = inet_sk(sk);
++	switch (sk->sk_family) {
++	case AF_INET:
+ 		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
+-				   "sending to node %d at %pI4, port %d, "
++				   "sending to node %d at %pI4, dport %d, "
+ 				   "sk_err=%d/%d\n", dlm_our_nodeid(),
+-				   con->nodeid, &sin4->sin_addr.s_addr,
+-				   dlm_config.ci_tcp_port, sk->sk_err,
++				   con->nodeid, &inet->inet_daddr,
++				   ntohs(inet->inet_dport), sk->sk_err,
+ 				   sk->sk_err_soft);
+-	} else {
+-		struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&saddr;
+-
++		break;
++#if IS_ENABLED(CONFIG_IPV6)
++	case AF_INET6:
+ 		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
+-				   "sending to node %d at %u.%u.%u.%u, "
+-				   "port %d, sk_err=%d/%d\n", dlm_our_nodeid(),
+-				   con->nodeid, sin6->sin6_addr.s6_addr32[0],
+-				   sin6->sin6_addr.s6_addr32[1],
+-				   sin6->sin6_addr.s6_addr32[2],
+-				   sin6->sin6_addr.s6_addr32[3],
+-				   dlm_config.ci_tcp_port, sk->sk_err,
++				   "sending to node %d at %pI6c, "
++				   "dport %d, sk_err=%d/%d\n", dlm_our_nodeid(),
++				   con->nodeid, &sk->sk_v6_daddr,
++				   ntohs(inet->inet_dport), sk->sk_err,
+ 				   sk->sk_err_soft);
++		break;
++#endif
++	default:
++		printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
++				   "invalid socket family %d set, "
++				   "sk_err=%d/%d\n", dlm_our_nodeid(),
++				   sk->sk_family, sk->sk_err, sk->sk_err_soft);
++		goto out;
+ 	}
+ 
+ 	/* below sendcon only handling */
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 404dd50856e5d..af7088085d4e4 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2935,6 +2935,7 @@ bool ext4_fc_replay_check_excluded(struct super_block *sb, ext4_fsblk_t block);
+ void ext4_fc_replay_cleanup(struct super_block *sb);
+ int ext4_fc_commit(journal_t *journal, tid_t commit_tid);
+ int __init ext4_fc_init_dentry_cache(void);
++void ext4_fc_destroy_dentry_cache(void);
+ 
+ /* mballoc.c */
+ extern const struct seq_operations ext4_mb_seq_groups_ops;
+diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
+index 6def7339056db..3477a16d08aee 100644
+--- a/fs/ext4/ext4_jbd2.c
++++ b/fs/ext4/ext4_jbd2.c
+@@ -162,6 +162,8 @@ int __ext4_journal_ensure_credits(handle_t *handle, int check_cred,
+ {
+ 	if (!ext4_handle_valid(handle))
+ 		return 0;
++	if (is_handle_aborted(handle))
++		return -EROFS;
+ 	if (jbd2_handle_buffer_credits(handle) >= check_cred &&
+ 	    handle->h_revoke_credits >= revoke_cred)
+ 		return 0;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 0ecf819bf1891..fac884dbb42e2 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4647,8 +4647,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ 	ret = ext4_mark_inode_dirty(handle, inode);
+ 	if (unlikely(ret))
+ 		goto out_handle;
+-	ext4_fc_track_range(handle, inode, offset >> inode->i_sb->s_blocksize_bits,
+-			(offset + len - 1) >> inode->i_sb->s_blocksize_bits);
+ 	/* Zero out partial block at the edges of the range */
+ 	ret = ext4_zero_partial_blocks(handle, inode, offset, len);
+ 	if (ret >= 0)
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 0f32b445582ab..ace68ca90b01e 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1812,11 +1812,14 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 		}
+ 	}
+ 
+-	ret = ext4_punch_hole(inode,
+-		le32_to_cpu(lrange.fc_lblk) << sb->s_blocksize_bits,
+-		le32_to_cpu(lrange.fc_len) <<  sb->s_blocksize_bits);
+-	if (ret)
+-		jbd_debug(1, "ext4_punch_hole returned %d", ret);
++	down_write(&EXT4_I(inode)->i_data_sem);
++	ret = ext4_ext_remove_space(inode, lrange.fc_lblk,
++				lrange.fc_lblk + lrange.fc_len - 1);
++	up_write(&EXT4_I(inode)->i_data_sem);
++	if (ret) {
++		iput(inode);
++		return 0;
++	}
+ 	ext4_ext_replay_shrink_inode(inode,
+ 		i_size_read(inode) >> sb->s_blocksize_bits);
+ 	ext4_mark_inode_dirty(NULL, inode);
+@@ -2192,3 +2195,8 @@ int __init ext4_fc_init_dentry_cache(void)
+ 
+ 	return 0;
+ }
++
++void ext4_fc_destroy_dentry_cache(void)
++{
++	kmem_cache_destroy(ext4_fc_dentry_cachep);
++}
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index bfd3545f1e5d9..3bdfe010e17f9 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -741,10 +741,11 @@ out_sem:
+ 			if (ret)
+ 				return ret;
+ 		}
+-		ext4_fc_track_range(handle, inode, map->m_lblk,
+-			    map->m_lblk + map->m_len - 1);
+ 	}
+-
++	if (retval > 0 && (map->m_flags & EXT4_MAP_UNWRITTEN ||
++				map->m_flags & EXT4_MAP_MAPPED))
++		ext4_fc_track_range(handle, inode, map->m_lblk,
++					map->m_lblk + map->m_len - 1);
+ 	if (retval < 0)
+ 		ext_debug(inode, "failed with err %d\n", retval);
+ 	return retval;
+@@ -1844,30 +1845,16 @@ int ext4_da_get_block_prep(struct inode *inode, sector_t iblock,
+ 	return 0;
+ }
+ 
+-static int bget_one(handle_t *handle, struct inode *inode,
+-		    struct buffer_head *bh)
+-{
+-	get_bh(bh);
+-	return 0;
+-}
+-
+-static int bput_one(handle_t *handle, struct inode *inode,
+-		    struct buffer_head *bh)
+-{
+-	put_bh(bh);
+-	return 0;
+-}
+-
+ static int __ext4_journalled_writepage(struct page *page,
+ 				       unsigned int len)
+ {
+ 	struct address_space *mapping = page->mapping;
+ 	struct inode *inode = mapping->host;
+-	struct buffer_head *page_bufs = NULL;
+ 	handle_t *handle = NULL;
+ 	int ret = 0, err = 0;
+ 	int inline_data = ext4_has_inline_data(inode);
+ 	struct buffer_head *inode_bh = NULL;
++	loff_t size;
+ 
+ 	ClearPageChecked(page);
+ 
+@@ -1877,14 +1864,6 @@ static int __ext4_journalled_writepage(struct page *page,
+ 		inode_bh = ext4_journalled_write_inline_data(inode, len, page);
+ 		if (inode_bh == NULL)
+ 			goto out;
+-	} else {
+-		page_bufs = page_buffers(page);
+-		if (!page_bufs) {
+-			BUG();
+-			goto out;
+-		}
+-		ext4_walk_page_buffers(handle, inode, page_bufs, 0, len,
+-				       NULL, bget_one);
+ 	}
+ 	/*
+ 	 * We need to release the page lock before we start the
+@@ -1905,7 +1884,8 @@ static int __ext4_journalled_writepage(struct page *page,
+ 
+ 	lock_page(page);
+ 	put_page(page);
+-	if (page->mapping != mapping) {
++	size = i_size_read(inode);
++	if (page->mapping != mapping || page_offset(page) > size) {
+ 		/* The page got truncated from under us */
+ 		ext4_journal_stop(handle);
+ 		ret = 0;
+@@ -1915,6 +1895,13 @@ static int __ext4_journalled_writepage(struct page *page,
+ 	if (inline_data) {
+ 		ret = ext4_mark_inode_dirty(handle, inode);
+ 	} else {
++		struct buffer_head *page_bufs = page_buffers(page);
++
++		if (page->index == size >> PAGE_SHIFT)
++			len = size & ~PAGE_MASK;
++		else
++			len = PAGE_SIZE;
++
+ 		ret = ext4_walk_page_buffers(handle, inode, page_bufs, 0, len,
+ 					     NULL, do_journal_get_write_access);
+ 
+@@ -1935,9 +1922,6 @@ static int __ext4_journalled_writepage(struct page *page,
+ out:
+ 	unlock_page(page);
+ out_no_pagelock:
+-	if (!inline_data && page_bufs)
+-		ext4_walk_page_buffers(NULL, inode, page_bufs, 0, len,
+-				       NULL, bput_one);
+ 	brelse(inode_bh);
+ 	return ret;
+ }
+@@ -4523,7 +4507,7 @@ has_buffer:
+ static int __ext4_get_inode_loc_noinmem(struct inode *inode,
+ 					struct ext4_iloc *iloc)
+ {
+-	ext4_fsblk_t err_blk;
++	ext4_fsblk_t err_blk = 0;
+ 	int ret;
+ 
+ 	ret = __ext4_get_inode_loc(inode->i_sb, inode->i_ino, NULL, iloc,
+@@ -4538,7 +4522,7 @@ static int __ext4_get_inode_loc_noinmem(struct inode *inode,
+ 
+ int ext4_get_inode_loc(struct inode *inode, struct ext4_iloc *iloc)
+ {
+-	ext4_fsblk_t err_blk;
++	ext4_fsblk_t err_blk = 0;
+ 	int ret;
+ 
+ 	ret = __ext4_get_inode_loc(inode->i_sb, inode->i_ino, inode, iloc,
+@@ -5427,8 +5411,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ 				ext4_fc_track_range(handle, inode,
+ 					(attr->ia_size > 0 ? attr->ia_size - 1 : 0) >>
+ 					inode->i_sb->s_blocksize_bits,
+-					(oldsize > 0 ? oldsize - 1 : 0) >>
+-					inode->i_sb->s_blocksize_bits);
++					EXT_MAX_BLOCKS - 1);
+ 			else
+ 				ext4_fc_track_range(
+ 					handle, inode,
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 606dee9e08a32..220a4c8178b5e 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1117,8 +1117,6 @@ resizefs_out:
+ 		    sizeof(range)))
+ 			return -EFAULT;
+ 
+-		range.minlen = max((unsigned int)range.minlen,
+-				   q->limits.discard_granularity);
+ 		ret = ext4_trim_fs(sb, &range);
+ 		if (ret < 0)
+ 			return ret;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 215b7068f548a..ea764137462ef 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -4814,7 +4814,7 @@ ext4_mb_release_group_pa(struct ext4_buddy *e4b,
+  */
+ static noinline_for_stack int
+ ext4_mb_discard_group_preallocations(struct super_block *sb,
+-					ext4_group_t group, int needed)
++				     ext4_group_t group, int *busy)
+ {
+ 	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
+ 	struct buffer_head *bitmap_bh = NULL;
+@@ -4822,8 +4822,7 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+ 	struct list_head list;
+ 	struct ext4_buddy e4b;
+ 	int err;
+-	int busy = 0;
+-	int free, free_total = 0;
++	int free = 0;
+ 
+ 	mb_debug(sb, "discard preallocation for group %u\n", group);
+ 	if (list_empty(&grp->bb_prealloc_list))
+@@ -4846,19 +4845,14 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+ 		goto out_dbg;
+ 	}
+ 
+-	if (needed == 0)
+-		needed = EXT4_CLUSTERS_PER_GROUP(sb) + 1;
+-
+ 	INIT_LIST_HEAD(&list);
+-repeat:
+-	free = 0;
+ 	ext4_lock_group(sb, group);
+ 	list_for_each_entry_safe(pa, tmp,
+ 				&grp->bb_prealloc_list, pa_group_list) {
+ 		spin_lock(&pa->pa_lock);
+ 		if (atomic_read(&pa->pa_count)) {
+ 			spin_unlock(&pa->pa_lock);
+-			busy = 1;
++			*busy = 1;
+ 			continue;
+ 		}
+ 		if (pa->pa_deleted) {
+@@ -4898,22 +4892,13 @@ repeat:
+ 		call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+ 	}
+ 
+-	free_total += free;
+-
+-	/* if we still need more blocks and some PAs were used, try again */
+-	if (free_total < needed && busy) {
+-		ext4_unlock_group(sb, group);
+-		cond_resched();
+-		busy = 0;
+-		goto repeat;
+-	}
+ 	ext4_unlock_group(sb, group);
+ 	ext4_mb_unload_buddy(&e4b);
+ 	put_bh(bitmap_bh);
+ out_dbg:
+ 	mb_debug(sb, "discarded (%d) blocks preallocated for group %u bb_free (%d)\n",
+-		 free_total, group, grp->bb_free);
+-	return free_total;
++		 free, group, grp->bb_free);
++	return free;
+ }
+ 
+ /*
+@@ -5455,13 +5440,24 @@ static int ext4_mb_discard_preallocations(struct super_block *sb, int needed)
+ {
+ 	ext4_group_t i, ngroups = ext4_get_groups_count(sb);
+ 	int ret;
+-	int freed = 0;
++	int freed = 0, busy = 0;
++	int retry = 0;
+ 
+ 	trace_ext4_mb_discard_preallocations(sb, needed);
++
++	if (needed == 0)
++		needed = EXT4_CLUSTERS_PER_GROUP(sb) + 1;
++ repeat:
+ 	for (i = 0; i < ngroups && needed > 0; i++) {
+-		ret = ext4_mb_discard_group_preallocations(sb, i, needed);
++		ret = ext4_mb_discard_group_preallocations(sb, i, &busy);
+ 		freed += ret;
+ 		needed -= ret;
++		cond_resched();
++	}
++
++	if (needed > 0 && busy && ++retry < 3) {
++		busy = 0;
++		goto repeat;
+ 	}
+ 
+ 	return freed;
+@@ -6404,6 +6400,7 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+  */
+ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ {
++	struct request_queue *q = bdev_get_queue(sb->s_bdev);
+ 	struct ext4_group_info *grp;
+ 	ext4_group_t group, first_group, last_group;
+ 	ext4_grpblk_t cnt = 0, first_cluster, last_cluster;
+@@ -6422,6 +6419,13 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 	    start >= max_blks ||
+ 	    range->len < sb->s_blocksize)
+ 		return -EINVAL;
++	/* No point to try to trim less than discard granularity */
++	if (range->minlen < q->limits.discard_granularity) {
++		minlen = EXT4_NUM_B2C(EXT4_SB(sb),
++			q->limits.discard_granularity >> sb->s_blocksize_bits);
++		if (minlen > EXT4_CLUSTERS_PER_GROUP(sb))
++			goto out;
++	}
+ 	if (end >= max_blks)
+ 		end = max_blks - 1;
+ 	if (end <= first_data_blk)
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index 7e0b4f81c6c06..ff8916e1d38e9 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -437,12 +437,12 @@ int ext4_ext_migrate(struct inode *inode)
+ 	percpu_down_write(&sbi->s_writepages_rwsem);
+ 
+ 	/*
+-	 * Worst case we can touch the allocation bitmaps, a bgd
+-	 * block, and a block to link in the orphan list.  We do need
+-	 * need to worry about credits for modifying the quota inode.
++	 * Worst case we can touch the allocation bitmaps and a block
++	 * group descriptor block.  We do need need to worry about
++	 * credits for modifying the quota inode.
+ 	 */
+ 	handle = ext4_journal_start(inode, EXT4_HT_MIGRATE,
+-		4 + EXT4_MAXQUOTAS_TRANS_BLOCKS(inode->i_sb));
++		3 + EXT4_MAXQUOTAS_TRANS_BLOCKS(inode->i_sb));
+ 
+ 	if (IS_ERR(handle)) {
+ 		retval = PTR_ERR(handle);
+@@ -459,6 +459,13 @@ int ext4_ext_migrate(struct inode *inode)
+ 		ext4_journal_stop(handle);
+ 		goto out_unlock;
+ 	}
++	/*
++	 * Use the correct seed for checksum (i.e. the seed from 'inode').  This
++	 * is so that the metadata blocks will have the correct checksum after
++	 * the migration.
++	 */
++	ei = EXT4_I(inode);
++	EXT4_I(tmp_inode)->i_csum_seed = ei->i_csum_seed;
+ 	i_size_write(tmp_inode, i_size_read(inode));
+ 	/*
+ 	 * Set the i_nlink to zero so it will be deleted later
+@@ -467,7 +474,6 @@ int ext4_ext_migrate(struct inode *inode)
+ 	clear_nlink(tmp_inode);
+ 
+ 	ext4_ext_tree_init(handle, tmp_inode);
+-	ext4_orphan_add(handle, tmp_inode);
+ 	ext4_journal_stop(handle);
+ 
+ 	/*
+@@ -492,17 +498,10 @@ int ext4_ext_migrate(struct inode *inode)
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1);
+ 	if (IS_ERR(handle)) {
+-		/*
+-		 * It is impossible to update on-disk structures without
+-		 * a handle, so just rollback in-core changes and live other
+-		 * work to orphan_list_cleanup()
+-		 */
+-		ext4_orphan_del(NULL, tmp_inode);
+ 		retval = PTR_ERR(handle);
+ 		goto out_tmp_inode;
+ 	}
+ 
+-	ei = EXT4_I(inode);
+ 	i_data = ei->i_data;
+ 	memset(&lb, 0, sizeof(lb));
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 4e33b5eca694d..24a7ad80353b5 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6275,10 +6275,7 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ 
+ 	lockdep_set_quota_inode(path->dentry->d_inode, I_DATA_SEM_QUOTA);
+ 	err = dquot_quota_on(sb, type, format_id, path);
+-	if (err) {
+-		lockdep_set_quota_inode(path->dentry->d_inode,
+-					     I_DATA_SEM_NORMAL);
+-	} else {
++	if (!err) {
+ 		struct inode *inode = d_inode(path->dentry);
+ 		handle_t *handle;
+ 
+@@ -6298,7 +6295,12 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ 		ext4_journal_stop(handle);
+ 	unlock_inode:
+ 		inode_unlock(inode);
++		if (err)
++			dquot_quota_off(sb, type);
+ 	}
++	if (err)
++		lockdep_set_quota_inode(path->dentry->d_inode,
++					     I_DATA_SEM_NORMAL);
+ 	return err;
+ }
+ 
+@@ -6361,8 +6363,19 @@ int ext4_enable_quotas(struct super_block *sb)
+ 					"Failed to enable quota tracking "
+ 					"(type=%d, err=%d). Please run "
+ 					"e2fsck to fix.", type, err);
+-				for (type--; type >= 0; type--)
++				for (type--; type >= 0; type--) {
++					struct inode *inode;
++
++					inode = sb_dqopt(sb)->files[type];
++					if (inode)
++						inode = igrab(inode);
+ 					dquot_quota_off(sb, type);
++					if (inode) {
++						lockdep_set_quota_inode(inode,
++							I_DATA_SEM_NORMAL);
++						iput(inode);
++					}
++				}
+ 
+ 				return err;
+ 			}
+@@ -6466,7 +6479,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
+ 	struct buffer_head *bh;
+ 	handle_t *handle = journal_current_handle();
+ 
+-	if (EXT4_SB(sb)->s_journal && !handle) {
++	if (!handle) {
+ 		ext4_msg(sb, KERN_WARNING, "Quota write (off=%llu, len=%llu)"
+ 			" cancelled because transaction is not started",
+ 			(unsigned long long)off, (unsigned long long)len);
+@@ -6649,6 +6662,7 @@ static int __init ext4_init_fs(void)
+ out:
+ 	unregister_as_ext2();
+ 	unregister_as_ext3();
++	ext4_fc_destroy_dentry_cache();
+ out05:
+ 	destroy_inodecache();
+ out1:
+@@ -6675,6 +6689,7 @@ static void __exit ext4_exit_fs(void)
+ 	unregister_as_ext2();
+ 	unregister_as_ext3();
+ 	unregister_filesystem(&ext4_fs_type);
++	ext4_fc_destroy_dentry_cache();
+ 	destroy_inodecache();
+ 	ext4_exit_mballoc();
+ 	ext4_exit_sysfs();
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index f1693d45bb782..2011e97424438 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1302,8 +1302,8 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 	unsigned long flags;
+ 
+ 	if (cpc->reason & CP_UMOUNT) {
+-		if (le32_to_cpu(ckpt->cp_pack_total_block_count) >
+-			sbi->blocks_per_seg - NM_I(sbi)->nat_bits_blocks) {
++		if (le32_to_cpu(ckpt->cp_pack_total_block_count) +
++			NM_I(sbi)->nat_bits_blocks > sbi->blocks_per_seg) {
+ 			clear_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
+ 			f2fs_notice(sbi, "Disable nat_bits due to no space");
+ 		} else if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG) &&
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 49121a21f749f..190a3c4d4c913 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1468,25 +1468,38 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
+ 					enum iostat_type io_type)
+ {
+ 	struct address_space *mapping = cc->inode->i_mapping;
+-	int _submitted, compr_blocks, ret;
+-	int i = -1, err = 0;
++	int _submitted, compr_blocks, ret, i;
+ 
+ 	compr_blocks = f2fs_compressed_blocks(cc);
+-	if (compr_blocks < 0) {
+-		err = compr_blocks;
+-		goto out_err;
++
++	for (i = 0; i < cc->cluster_size; i++) {
++		if (!cc->rpages[i])
++			continue;
++
++		redirty_page_for_writepage(wbc, cc->rpages[i]);
++		unlock_page(cc->rpages[i]);
+ 	}
+ 
++	if (compr_blocks < 0)
++		return compr_blocks;
++
+ 	for (i = 0; i < cc->cluster_size; i++) {
+ 		if (!cc->rpages[i])
+ 			continue;
+ retry_write:
++		lock_page(cc->rpages[i]);
++
+ 		if (cc->rpages[i]->mapping != mapping) {
++continue_unlock:
+ 			unlock_page(cc->rpages[i]);
+ 			continue;
+ 		}
+ 
+-		BUG_ON(!PageLocked(cc->rpages[i]));
++		if (!PageDirty(cc->rpages[i]))
++			goto continue_unlock;
++
++		if (!clear_page_dirty_for_io(cc->rpages[i]))
++			goto continue_unlock;
+ 
+ 		ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted,
+ 						NULL, NULL, wbc, io_type,
+@@ -1501,26 +1514,15 @@ retry_write:
+ 				 * avoid deadlock caused by cluster update race
+ 				 * from foreground operation.
+ 				 */
+-				if (IS_NOQUOTA(cc->inode)) {
+-					err = 0;
+-					goto out_err;
+-				}
++				if (IS_NOQUOTA(cc->inode))
++					return 0;
+ 				ret = 0;
+ 				cond_resched();
+ 				congestion_wait(BLK_RW_ASYNC,
+ 						DEFAULT_IO_TIMEOUT);
+-				lock_page(cc->rpages[i]);
+-
+-				if (!PageDirty(cc->rpages[i])) {
+-					unlock_page(cc->rpages[i]);
+-					continue;
+-				}
+-
+-				clear_page_dirty_for_io(cc->rpages[i]);
+ 				goto retry_write;
+ 			}
+-			err = ret;
+-			goto out_err;
++			return ret;
+ 		}
+ 
+ 		*submitted += _submitted;
+@@ -1529,14 +1531,6 @@ retry_write:
+ 	f2fs_balance_fs(F2FS_M_SB(mapping), true);
+ 
+ 	return 0;
+-out_err:
+-	for (++i; i < cc->cluster_size; i++) {
+-		if (!cc->rpages[i])
+-			continue;
+-		redirty_page_for_writepage(wbc, cc->rpages[i]);
+-		unlock_page(cc->rpages[i]);
+-	}
+-	return err;
+ }
+ 
+ int f2fs_write_multi_pages(struct compress_ctx *cc,
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 9f754aaef558b..3ba75587a2cda 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2617,6 +2617,11 @@ bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 
++	/* The below cases were checked when setting it. */
++	if (f2fs_is_pinned_file(inode))
++		return false;
++	if (fio && is_sbi_flag_set(sbi, SBI_NEED_FSCK))
++		return true;
+ 	if (f2fs_lfs_mode(sbi))
+ 		return true;
+ 	if (S_ISDIR(inode->i_mode))
+@@ -2625,8 +2630,6 @@ bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
+ 		return true;
+ 	if (f2fs_is_atomic_file(inode))
+ 		return true;
+-	if (is_sbi_flag_set(sbi, SBI_NEED_FSCK))
+-		return true;
+ 
+ 	/* swap file is migrating in aligned write mode */
+ 	if (is_inode_flag_set(inode, FI_ALIGNED_WRITE))
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index ce9fc9f130002..d753094a4919f 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1018,6 +1018,7 @@ struct f2fs_sm_info {
+ 	unsigned int segment_count;	/* total # of segments */
+ 	unsigned int main_segments;	/* # of segments in main area */
+ 	unsigned int reserved_segments;	/* # of reserved segments */
++	unsigned int additional_reserved_segments;/* reserved segs for IO align feature */
+ 	unsigned int ovp_segments;	/* # of overprovision segments */
+ 
+ 	/* a threshold to reclaim prefree segments */
+@@ -2198,6 +2199,11 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ 
+ 	if (!__allow_reserved_blocks(sbi, inode, true))
+ 		avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
++
++	if (F2FS_IO_ALIGNED(sbi))
++		avail_user_block_count -= sbi->blocks_per_seg *
++				SM_I(sbi)->additional_reserved_segments;
++
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
+ 		if (avail_user_block_count > sbi->unusable_block_count)
+ 			avail_user_block_count -= sbi->unusable_block_count;
+@@ -2444,6 +2450,11 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
+ 
+ 	if (!__allow_reserved_blocks(sbi, inode, false))
+ 		valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
++
++	if (F2FS_IO_ALIGNED(sbi))
++		valid_block_count += sbi->blocks_per_seg *
++				SM_I(sbi)->additional_reserved_segments;
++
+ 	user_block_count = sbi->user_block_count;
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ 		user_block_count -= sbi->unusable_block_count;
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 92ec2699bc859..b752d1a6e6ff8 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -3143,17 +3143,17 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
+ 
+ 	inode_lock(inode);
+ 
+-	if (f2fs_should_update_outplace(inode, NULL)) {
+-		ret = -EINVAL;
+-		goto out;
+-	}
+-
+ 	if (!pin) {
+ 		clear_inode_flag(inode, FI_PIN_FILE);
+ 		f2fs_i_gc_failures_write(inode, 0);
+ 		goto done;
+ 	}
+ 
++	if (f2fs_should_update_outplace(inode, NULL)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	if (f2fs_pin_file_control(inode, false)) {
+ 		ret = -EAGAIN;
+ 		goto out;
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index a946ce0ead341..b538cbcba351d 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1026,6 +1026,9 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 	}
+ 
++	if (f2fs_check_nid_range(sbi, dni->ino))
++		return false;
++
+ 	*nofs = ofs_of_node(node_page);
+ 	source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
+ 	f2fs_put_page(node_page, 1);
+@@ -1039,7 +1042,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 			if (!test_and_set_bit(segno, SIT_I(sbi)->invalid_segmap)) {
+ 				f2fs_err(sbi, "mismatched blkaddr %u (source_blkaddr %u) in seg %u",
+ 					 blkaddr, source_blkaddr, segno);
+-				f2fs_bug_on(sbi, 1);
++				set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 			}
+ 		}
+ #endif
+@@ -1457,7 +1460,8 @@ next_step:
+ 
+ 		if (phase == 3) {
+ 			inode = f2fs_iget(sb, dni.ino);
+-			if (IS_ERR(inode) || is_bad_inode(inode))
++			if (IS_ERR(inode) || is_bad_inode(inode) ||
++					special_file(inode->i_mode))
+ 				continue;
+ 
+ 			if (!down_write_trylock(
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 0f8b2df3e1e01..1db7823d5a131 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -516,6 +516,11 @@ make_now:
+ 	} else if (ino == F2FS_COMPRESS_INO(sbi)) {
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ 		inode->i_mapping->a_ops = &f2fs_compress_aops;
++		/*
++		 * generic_error_remove_page only truncates pages of regular
++		 * inode
++		 */
++		inode->i_mode |= S_IFREG;
+ #endif
+ 		mapping_set_gfp_mask(inode->i_mapping,
+ 			GFP_NOFS | __GFP_HIGHMEM | __GFP_MOVABLE);
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 46fde9f3f28ea..0291cd55cf09b 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -538,7 +538,8 @@ static inline unsigned int free_segments(struct f2fs_sb_info *sbi)
+ 
+ static inline unsigned int reserved_segments(struct f2fs_sb_info *sbi)
+ {
+-	return SM_I(sbi)->reserved_segments;
++	return SM_I(sbi)->reserved_segments +
++			SM_I(sbi)->additional_reserved_segments;
+ }
+ 
+ static inline unsigned int free_sections(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 040b6d02e1d8a..08faa787a773b 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -328,6 +328,46 @@ static inline void limit_reserve_root(struct f2fs_sb_info *sbi)
+ 					   F2FS_OPTION(sbi).s_resgid));
+ }
+ 
++static inline int adjust_reserved_segment(struct f2fs_sb_info *sbi)
++{
++	unsigned int sec_blks = sbi->blocks_per_seg * sbi->segs_per_sec;
++	unsigned int avg_vblocks;
++	unsigned int wanted_reserved_segments;
++	block_t avail_user_block_count;
++
++	if (!F2FS_IO_ALIGNED(sbi))
++		return 0;
++
++	/* average valid block count in section in worst case */
++	avg_vblocks = sec_blks / F2FS_IO_SIZE(sbi);
++
++	/*
++	 * we need enough free space when migrating one section in worst case
++	 */
++	wanted_reserved_segments = (F2FS_IO_SIZE(sbi) / avg_vblocks) *
++						reserved_segments(sbi);
++	wanted_reserved_segments -= reserved_segments(sbi);
++
++	avail_user_block_count = sbi->user_block_count -
++				sbi->current_reserved_blocks -
++				F2FS_OPTION(sbi).root_reserved_blocks;
++
++	if (wanted_reserved_segments * sbi->blocks_per_seg >
++					avail_user_block_count) {
++		f2fs_err(sbi, "IO align feature can't grab additional reserved segment: %u, available segments: %u",
++			wanted_reserved_segments,
++			avail_user_block_count >> sbi->log_blocks_per_seg);
++		return -ENOSPC;
++	}
++
++	SM_I(sbi)->additional_reserved_segments = wanted_reserved_segments;
++
++	f2fs_info(sbi, "IO align feature needs additional reserved segment: %u",
++			 wanted_reserved_segments);
++
++	return 0;
++}
++
+ static inline void adjust_unusable_cap_perc(struct f2fs_sb_info *sbi)
+ {
+ 	if (!F2FS_OPTION(sbi).unusable_cap_perc)
+@@ -4180,6 +4220,10 @@ try_onemore:
+ 		goto free_nm;
+ 	}
+ 
++	err = adjust_reserved_segment(sbi);
++	if (err)
++		goto free_nm;
++
+ 	/* For write statistics */
+ 	sbi->sectors_written_start = f2fs_get_sectors_written(sbi);
+ 
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 7d289249cd7eb..8cdeac090f81d 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -415,7 +415,9 @@ out:
+ 	if (a->struct_type == RESERVED_BLOCKS) {
+ 		spin_lock(&sbi->stat_lock);
+ 		if (t > (unsigned long)(sbi->user_block_count -
+-				F2FS_OPTION(sbi).root_reserved_blocks)) {
++				F2FS_OPTION(sbi).root_reserved_blocks -
++				sbi->blocks_per_seg *
++				SM_I(sbi)->additional_reserved_segments)) {
+ 			spin_unlock(&sbi->stat_lock);
+ 			return -EINVAL;
+ 		}
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index e348f33bcb2be..797ac505a075a 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -684,8 +684,17 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 	}
+ 
+ 	last = here;
+-	while (!IS_XATTR_LAST_ENTRY(last))
++	while (!IS_XATTR_LAST_ENTRY(last)) {
++		if ((void *)(last) + sizeof(__u32) > last_base_addr ||
++			(void *)XATTR_NEXT_ENTRY(last) > last_base_addr) {
++			f2fs_err(F2FS_I_SB(inode), "inode (%lu) has invalid last xattr entry, entry_size: %zu",
++					inode->i_ino, ENTRY_SIZE(last));
++			set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
++			error = -EFSCORRUPTED;
++			goto exit;
++		}
+ 		last = XATTR_NEXT_ENTRY(last);
++	}
+ 
+ 	newsize = XATTR_ALIGN(sizeof(struct f2fs_xattr_entry) + len + size);
+ 
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 9d6c5f6361f7d..df81768c81a73 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -2910,7 +2910,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 
+ static int fuse_writeback_range(struct inode *inode, loff_t start, loff_t end)
+ {
+-	int err = filemap_write_and_wait_range(inode->i_mapping, start, -1);
++	int err = filemap_write_and_wait_range(inode->i_mapping, start, LLONG_MAX);
+ 
+ 	if (!err)
+ 		fuse_sync_writes(inode);
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 49d2e686be740..a7c6c7498be0b 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -409,10 +409,11 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end)
+ 	struct vm_area_struct *vma;
+ 
+ 	/*
+-	 * end == 0 indicates that the entire range after
+-	 * start should be unmapped.
++	 * end == 0 indicates that the entire range after start should be
++	 * unmapped.  Note, end is exclusive, whereas the interval tree takes
++	 * an inclusive "last".
+ 	 */
+-	vma_interval_tree_foreach(vma, root, start, end ? end : ULONG_MAX) {
++	vma_interval_tree_foreach(vma, root, start, end ? end - 1 : ULONG_MAX) {
+ 		unsigned long v_offset;
+ 		unsigned long v_end;
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index fb2a0cb4aaf83..e0fbb940fe5c3 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -5928,6 +5928,7 @@ static int io_poll_update(struct io_kiocb *req, unsigned int issue_flags)
+ 	 * update those. For multishot, if we're racing with completion, just
+ 	 * let completion re-add it.
+ 	 */
++	io_poll_remove_double(preq);
+ 	completing = !__io_poll_remove_one(preq, &preq->poll, false);
+ 	if (completing && (preq->poll.events & EPOLLONESHOT)) {
+ 		ret = -EALREADY;
+@@ -6544,12 +6545,15 @@ static __cold void io_drain_req(struct io_kiocb *req)
+ 	u32 seq = io_get_sequence(req);
+ 
+ 	/* Still need defer if there is pending req in defer list. */
++	spin_lock(&ctx->completion_lock);
+ 	if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
++		spin_unlock(&ctx->completion_lock);
+ queue:
+ 		ctx->drain_active = false;
+ 		io_req_task_queue(req);
+ 		return;
+ 	}
++	spin_unlock(&ctx->completion_lock);
+ 
+ 	ret = io_req_prep_async(req);
+ 	if (ret) {
+diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
+index 4fc8cd698d1a4..bd7d58d27bfc6 100644
+--- a/fs/jffs2/file.c
++++ b/fs/jffs2/file.c
+@@ -136,20 +136,15 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 	struct page *pg;
+ 	struct inode *inode = mapping->host;
+ 	struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
++	struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
+ 	pgoff_t index = pos >> PAGE_SHIFT;
+ 	uint32_t pageofs = index << PAGE_SHIFT;
+ 	int ret = 0;
+ 
+-	pg = grab_cache_page_write_begin(mapping, index, flags);
+-	if (!pg)
+-		return -ENOMEM;
+-	*pagep = pg;
+-
+ 	jffs2_dbg(1, "%s()\n", __func__);
+ 
+ 	if (pageofs > inode->i_size) {
+ 		/* Make new hole frag from old EOF to new page */
+-		struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
+ 		struct jffs2_raw_inode ri;
+ 		struct jffs2_full_dnode *fn;
+ 		uint32_t alloc_len;
+@@ -160,7 +155,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 		ret = jffs2_reserve_space(c, sizeof(ri), &alloc_len,
+ 					  ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
+ 		if (ret)
+-			goto out_page;
++			goto out_err;
+ 
+ 		mutex_lock(&f->sem);
+ 		memset(&ri, 0, sizeof(ri));
+@@ -190,7 +185,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 			ret = PTR_ERR(fn);
+ 			jffs2_complete_reservation(c);
+ 			mutex_unlock(&f->sem);
+-			goto out_page;
++			goto out_err;
+ 		}
+ 		ret = jffs2_add_full_dnode_to_inode(c, f, fn);
+ 		if (f->metadata) {
+@@ -205,13 +200,26 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 			jffs2_free_full_dnode(fn);
+ 			jffs2_complete_reservation(c);
+ 			mutex_unlock(&f->sem);
+-			goto out_page;
++			goto out_err;
+ 		}
+ 		jffs2_complete_reservation(c);
+ 		inode->i_size = pageofs;
+ 		mutex_unlock(&f->sem);
+ 	}
+ 
++	/*
++	 * While getting a page and reading data in, lock c->alloc_sem until
++	 * the page is Uptodate. Otherwise GC task may attempt to read the same
++	 * page in read_cache_page(), which causes a deadlock.
++	 */
++	mutex_lock(&c->alloc_sem);
++	pg = grab_cache_page_write_begin(mapping, index, flags);
++	if (!pg) {
++		ret = -ENOMEM;
++		goto release_sem;
++	}
++	*pagep = pg;
++
+ 	/*
+ 	 * Read in the page if it wasn't already present. Cannot optimize away
+ 	 * the whole page write case until jffs2_write_end can handle the
+@@ -221,15 +229,17 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
+ 		mutex_lock(&f->sem);
+ 		ret = jffs2_do_readpage_nolock(inode, pg);
+ 		mutex_unlock(&f->sem);
+-		if (ret)
+-			goto out_page;
++		if (ret) {
++			unlock_page(pg);
++			put_page(pg);
++			goto release_sem;
++		}
+ 	}
+ 	jffs2_dbg(1, "end write_begin(). pg->flags %lx\n", pg->flags);
+-	return ret;
+ 
+-out_page:
+-	unlock_page(pg);
+-	put_page(pg);
++release_sem:
++	mutex_unlock(&c->alloc_sem);
++out_err:
+ 	return ret;
+ }
+ 
+diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
+index 83a94d0bb480f..d1d0105be5b1d 100644
+--- a/fs/ksmbd/connection.c
++++ b/fs/ksmbd/connection.c
+@@ -62,6 +62,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ 	atomic_set(&conn->req_running, 0);
+ 	atomic_set(&conn->r_count, 0);
+ 	conn->total_credits = 1;
++	conn->outstanding_credits = 1;
+ 
+ 	init_waitqueue_head(&conn->req_running_q);
+ 	INIT_LIST_HEAD(&conn->conns_list);
+diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h
+index e5403c587a58c..8694aef482c1a 100644
+--- a/fs/ksmbd/connection.h
++++ b/fs/ksmbd/connection.h
+@@ -61,8 +61,8 @@ struct ksmbd_conn {
+ 	atomic_t			req_running;
+ 	/* References which are made for this Server object*/
+ 	atomic_t			r_count;
+-	unsigned short			total_credits;
+-	unsigned short			max_credits;
++	unsigned int			total_credits;
++	unsigned int			outstanding_credits;
+ 	spinlock_t			credits_lock;
+ 	wait_queue_head_t		req_running_q;
+ 	/* Lock to protect requests list*/
+diff --git a/fs/ksmbd/ksmbd_netlink.h b/fs/ksmbd/ksmbd_netlink.h
+index c6718a05d347f..71bfb7de44725 100644
+--- a/fs/ksmbd/ksmbd_netlink.h
++++ b/fs/ksmbd/ksmbd_netlink.h
+@@ -103,6 +103,8 @@ struct ksmbd_startup_request {
+ 					 * we set the SPARSE_FILES bit (0x40).
+ 					 */
+ 	__u32	sub_auth[3];		/* Subauth value for Security ID */
++	__u32	smb2_max_credits;	/* MAX credits */
++	__u32	reserved[128];		/* Reserved room */
+ 	__u32	ifc_list_sz;		/* interfaces list size */
+ 	__s8	____payload[];
+ };
+@@ -113,7 +115,7 @@ struct ksmbd_startup_request {
+  * IPC request to shutdown ksmbd server.
+  */
+ struct ksmbd_shutdown_request {
+-	__s32	reserved;
++	__s32	reserved[16];
+ };
+ 
+ /*
+@@ -122,6 +124,7 @@ struct ksmbd_shutdown_request {
+ struct ksmbd_login_request {
+ 	__u32	handle;
+ 	__s8	account[KSMBD_REQ_MAX_ACCOUNT_NAME_SZ]; /* user account name */
++	__u32	reserved[16];				/* Reserved room */
+ };
+ 
+ /*
+@@ -135,6 +138,7 @@ struct ksmbd_login_response {
+ 	__u16	status;
+ 	__u16	hash_sz;			/* hash size */
+ 	__s8	hash[KSMBD_REQ_MAX_HASH_SZ];	/* password hash */
++	__u32	reserved[16];			/* Reserved room */
+ };
+ 
+ /*
+@@ -143,6 +147,7 @@ struct ksmbd_login_response {
+ struct ksmbd_share_config_request {
+ 	__u32	handle;
+ 	__s8	share_name[KSMBD_REQ_MAX_SHARE_NAME]; /* share name */
++	__u32	reserved[16];		/* Reserved room */
+ };
+ 
+ /*
+@@ -157,6 +162,7 @@ struct ksmbd_share_config_response {
+ 	__u16	force_directory_mode;
+ 	__u16	force_uid;
+ 	__u16	force_gid;
++	__u32	reserved[128];		/* Reserved room */
+ 	__u32	veto_list_sz;
+ 	__s8	____payload[];
+ };
+@@ -187,6 +193,7 @@ struct ksmbd_tree_connect_request {
+ 	__s8	account[KSMBD_REQ_MAX_ACCOUNT_NAME_SZ];
+ 	__s8	share[KSMBD_REQ_MAX_SHARE_NAME];
+ 	__s8	peer_addr[64];
++	__u32	reserved[16];		/* Reserved room */
+ };
+ 
+ /*
+@@ -196,6 +203,7 @@ struct ksmbd_tree_connect_response {
+ 	__u32	handle;
+ 	__u16	status;
+ 	__u16	connection_flags;
++	__u32	reserved[16];		/* Reserved room */
+ };
+ 
+ /*
+@@ -204,6 +212,7 @@ struct ksmbd_tree_connect_response {
+ struct ksmbd_tree_disconnect_request {
+ 	__u64	session_id;	/* session id */
+ 	__u64	connect_id;	/* tree connection id */
++	__u32	reserved[16];	/* Reserved room */
+ };
+ 
+ /*
+@@ -212,6 +221,7 @@ struct ksmbd_tree_disconnect_request {
+ struct ksmbd_logout_request {
+ 	__s8	account[KSMBD_REQ_MAX_ACCOUNT_NAME_SZ]; /* user account name */
+ 	__u32	account_flags;
++	__u32	reserved[16];				/* Reserved room */
+ };
+ 
+ /*
+diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
+index 50d0b1022289e..4a9460153b595 100644
+--- a/fs/ksmbd/smb2misc.c
++++ b/fs/ksmbd/smb2misc.c
+@@ -289,7 +289,7 @@ static int smb2_validate_credit_charge(struct ksmbd_conn *conn,
+ 	unsigned int req_len = 0, expect_resp_len = 0, calc_credit_num, max_len;
+ 	unsigned short credit_charge = le16_to_cpu(hdr->CreditCharge);
+ 	void *__hdr = hdr;
+-	int ret;
++	int ret = 0;
+ 
+ 	switch (hdr->Command) {
+ 	case SMB2_QUERY_INFO:
+@@ -326,21 +326,27 @@ static int smb2_validate_credit_charge(struct ksmbd_conn *conn,
+ 		ksmbd_debug(SMB, "Insufficient credit charge, given: %d, needed: %d\n",
+ 			    credit_charge, calc_credit_num);
+ 		return 1;
+-	} else if (credit_charge > conn->max_credits) {
++	} else if (credit_charge > conn->vals->max_credits) {
+ 		ksmbd_debug(SMB, "Too large credit charge: %d\n", credit_charge);
+ 		return 1;
+ 	}
+ 
+ 	spin_lock(&conn->credits_lock);
+-	if (credit_charge <= conn->total_credits) {
+-		conn->total_credits -= credit_charge;
+-		ret = 0;
+-	} else {
++	if (credit_charge > conn->total_credits) {
+ 		ksmbd_debug(SMB, "Insufficient credits granted, given: %u, granted: %u\n",
+ 			    credit_charge, conn->total_credits);
+ 		ret = 1;
+ 	}
++
++	if ((u64)conn->outstanding_credits + credit_charge > conn->vals->max_credits) {
++		ksmbd_debug(SMB, "Limits exceeding the maximum allowable outstanding requests, given : %u, pending : %u\n",
++			    credit_charge, conn->outstanding_credits);
++		ret = 1;
++	} else
++		conn->outstanding_credits += credit_charge;
++
+ 	spin_unlock(&conn->credits_lock);
++
+ 	return ret;
+ }
+ 
+diff --git a/fs/ksmbd/smb2ops.c b/fs/ksmbd/smb2ops.c
+index 02a44d28bdafc..ab23da2120b94 100644
+--- a/fs/ksmbd/smb2ops.c
++++ b/fs/ksmbd/smb2ops.c
+@@ -19,6 +19,7 @@ static struct smb_version_values smb21_server_values = {
+ 	.max_read_size = SMB21_DEFAULT_IOSIZE,
+ 	.max_write_size = SMB21_DEFAULT_IOSIZE,
+ 	.max_trans_size = SMB21_DEFAULT_IOSIZE,
++	.max_credits = SMB2_MAX_CREDITS,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED,
+@@ -44,6 +45,7 @@ static struct smb_version_values smb30_server_values = {
+ 	.max_read_size = SMB3_DEFAULT_IOSIZE,
+ 	.max_write_size = SMB3_DEFAULT_IOSIZE,
+ 	.max_trans_size = SMB3_DEFAULT_TRANS_SIZE,
++	.max_credits = SMB2_MAX_CREDITS,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED,
+@@ -70,6 +72,7 @@ static struct smb_version_values smb302_server_values = {
+ 	.max_read_size = SMB3_DEFAULT_IOSIZE,
+ 	.max_write_size = SMB3_DEFAULT_IOSIZE,
+ 	.max_trans_size = SMB3_DEFAULT_TRANS_SIZE,
++	.max_credits = SMB2_MAX_CREDITS,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED,
+@@ -96,6 +99,7 @@ static struct smb_version_values smb311_server_values = {
+ 	.max_read_size = SMB3_DEFAULT_IOSIZE,
+ 	.max_write_size = SMB3_DEFAULT_IOSIZE,
+ 	.max_trans_size = SMB3_DEFAULT_TRANS_SIZE,
++	.max_credits = SMB2_MAX_CREDITS,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED,
+@@ -197,7 +201,6 @@ void init_smb2_1_server(struct ksmbd_conn *conn)
+ 	conn->ops = &smb2_0_server_ops;
+ 	conn->cmds = smb2_0_server_cmds;
+ 	conn->max_cmds = ARRAY_SIZE(smb2_0_server_cmds);
+-	conn->max_credits = SMB2_MAX_CREDITS;
+ 	conn->signing_algorithm = SIGNING_ALG_HMAC_SHA256_LE;
+ 
+ 	if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB2_LEASES)
+@@ -215,7 +218,6 @@ void init_smb3_0_server(struct ksmbd_conn *conn)
+ 	conn->ops = &smb3_0_server_ops;
+ 	conn->cmds = smb2_0_server_cmds;
+ 	conn->max_cmds = ARRAY_SIZE(smb2_0_server_cmds);
+-	conn->max_credits = SMB2_MAX_CREDITS;
+ 	conn->signing_algorithm = SIGNING_ALG_AES_CMAC_LE;
+ 
+ 	if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB2_LEASES)
+@@ -240,7 +242,6 @@ void init_smb3_02_server(struct ksmbd_conn *conn)
+ 	conn->ops = &smb3_0_server_ops;
+ 	conn->cmds = smb2_0_server_cmds;
+ 	conn->max_cmds = ARRAY_SIZE(smb2_0_server_cmds);
+-	conn->max_credits = SMB2_MAX_CREDITS;
+ 	conn->signing_algorithm = SIGNING_ALG_AES_CMAC_LE;
+ 
+ 	if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB2_LEASES)
+@@ -265,7 +266,6 @@ int init_smb3_11_server(struct ksmbd_conn *conn)
+ 	conn->ops = &smb3_11_server_ops;
+ 	conn->cmds = smb2_0_server_cmds;
+ 	conn->max_cmds = ARRAY_SIZE(smb2_0_server_cmds);
+-	conn->max_credits = SMB2_MAX_CREDITS;
+ 	conn->signing_algorithm = SIGNING_ALG_AES_CMAC_LE;
+ 
+ 	if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB2_LEASES)
+@@ -304,3 +304,11 @@ void init_smb2_max_trans_size(unsigned int sz)
+ 	smb302_server_values.max_trans_size = sz;
+ 	smb311_server_values.max_trans_size = sz;
+ }
++
++void init_smb2_max_credits(unsigned int sz)
++{
++	smb21_server_values.max_credits = sz;
++	smb30_server_values.max_credits = sz;
++	smb302_server_values.max_credits = sz;
++	smb311_server_values.max_credits = sz;
++}
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index b8b3a4c28b749..32300bd6af7ab 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -299,16 +299,15 @@ int smb2_set_rsp_credits(struct ksmbd_work *work)
+ 	struct smb2_hdr *req_hdr = ksmbd_req_buf_next(work);
+ 	struct smb2_hdr *hdr = ksmbd_resp_buf_next(work);
+ 	struct ksmbd_conn *conn = work->conn;
+-	unsigned short credits_requested;
++	unsigned short credits_requested, aux_max;
+ 	unsigned short credit_charge, credits_granted = 0;
+-	unsigned short aux_max, aux_credits;
+ 
+ 	if (work->send_no_response)
+ 		return 0;
+ 
+ 	hdr->CreditCharge = req_hdr->CreditCharge;
+ 
+-	if (conn->total_credits > conn->max_credits) {
++	if (conn->total_credits > conn->vals->max_credits) {
+ 		hdr->CreditRequest = 0;
+ 		pr_err("Total credits overflow: %d\n", conn->total_credits);
+ 		return -EINVAL;
+@@ -316,6 +315,14 @@ int smb2_set_rsp_credits(struct ksmbd_work *work)
+ 
+ 	credit_charge = max_t(unsigned short,
+ 			      le16_to_cpu(req_hdr->CreditCharge), 1);
++	if (credit_charge > conn->total_credits) {
++		ksmbd_debug(SMB, "Insufficient credits granted, given: %u, granted: %u\n",
++			    credit_charge, conn->total_credits);
++		return -EINVAL;
++	}
++
++	conn->total_credits -= credit_charge;
++	conn->outstanding_credits -= credit_charge;
+ 	credits_requested = max_t(unsigned short,
+ 				  le16_to_cpu(req_hdr->CreditRequest), 1);
+ 
+@@ -325,16 +332,14 @@ int smb2_set_rsp_credits(struct ksmbd_work *work)
+ 	 * TODO: Need to adjuct CreditRequest value according to
+ 	 * current cpu load
+ 	 */
+-	aux_credits = credits_requested - 1;
+ 	if (hdr->Command == SMB2_NEGOTIATE)
+-		aux_max = 0;
++		aux_max = 1;
+ 	else
+-		aux_max = conn->max_credits - credit_charge;
+-	aux_credits = min_t(unsigned short, aux_credits, aux_max);
+-	credits_granted = credit_charge + aux_credits;
++		aux_max = conn->vals->max_credits - credit_charge;
++	credits_granted = min_t(unsigned short, credits_requested, aux_max);
+ 
+-	if (conn->max_credits - conn->total_credits < credits_granted)
+-		credits_granted = conn->max_credits -
++	if (conn->vals->max_credits - conn->total_credits < credits_granted)
++		credits_granted = conn->vals->max_credits -
+ 			conn->total_credits;
+ 
+ 	conn->total_credits += credits_granted;
+@@ -1455,11 +1460,6 @@ static int ntlm_authenticate(struct ksmbd_work *work)
+ 
+ 	sess->user = user;
+ 	if (user_guest(sess->user)) {
+-		if (conn->sign) {
+-			ksmbd_debug(SMB, "Guest login not allowed when signing enabled\n");
+-			return -EPERM;
+-		}
+-
+ 		rsp->SessionFlags = SMB2_SESSION_FLAG_IS_GUEST_LE;
+ 	} else {
+ 		struct authenticate_message *authblob;
+@@ -1472,38 +1472,39 @@ static int ntlm_authenticate(struct ksmbd_work *work)
+ 			ksmbd_debug(SMB, "authentication failed\n");
+ 			return -EPERM;
+ 		}
++	}
+ 
+-		/*
+-		 * If session state is SMB2_SESSION_VALID, We can assume
+-		 * that it is reauthentication. And the user/password
+-		 * has been verified, so return it here.
+-		 */
+-		if (sess->state == SMB2_SESSION_VALID) {
+-			if (conn->binding)
+-				goto binding_session;
+-			return 0;
+-		}
++	/*
++	 * If session state is SMB2_SESSION_VALID, We can assume
++	 * that it is reauthentication. And the user/password
++	 * has been verified, so return it here.
++	 */
++	if (sess->state == SMB2_SESSION_VALID) {
++		if (conn->binding)
++			goto binding_session;
++		return 0;
++	}
+ 
+-		if ((conn->sign || server_conf.enforced_signing) ||
+-		    (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED))
+-			sess->sign = true;
++	if ((rsp->SessionFlags != SMB2_SESSION_FLAG_IS_GUEST_LE &&
++	     (conn->sign || server_conf.enforced_signing)) ||
++	    (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED))
++		sess->sign = true;
+ 
+-		if (smb3_encryption_negotiated(conn) &&
+-		    !(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+-			rc = conn->ops->generate_encryptionkey(sess);
+-			if (rc) {
+-				ksmbd_debug(SMB,
+-					    "SMB3 encryption key generation failed\n");
+-				return -EINVAL;
+-			}
+-			sess->enc = true;
+-			rsp->SessionFlags = SMB2_SESSION_FLAG_ENCRYPT_DATA_LE;
+-			/*
+-			 * signing is disable if encryption is enable
+-			 * on this session
+-			 */
+-			sess->sign = false;
++	if (smb3_encryption_negotiated(conn) &&
++			!(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
++		rc = conn->ops->generate_encryptionkey(sess);
++		if (rc) {
++			ksmbd_debug(SMB,
++					"SMB3 encryption key generation failed\n");
++			return -EINVAL;
+ 		}
++		sess->enc = true;
++		rsp->SessionFlags = SMB2_SESSION_FLAG_ENCRYPT_DATA_LE;
++		/*
++		 * signing is disable if encryption is enable
++		 * on this session
++		 */
++		sess->sign = false;
+ 	}
+ 
+ binding_session:
+diff --git a/fs/ksmbd/smb2pdu.h b/fs/ksmbd/smb2pdu.h
+index 4a3e4339d4c4f..725b800c29c8a 100644
+--- a/fs/ksmbd/smb2pdu.h
++++ b/fs/ksmbd/smb2pdu.h
+@@ -980,6 +980,7 @@ int init_smb3_11_server(struct ksmbd_conn *conn);
+ void init_smb2_max_read_size(unsigned int sz);
+ void init_smb2_max_write_size(unsigned int sz);
+ void init_smb2_max_trans_size(unsigned int sz);
++void init_smb2_max_credits(unsigned int sz);
+ 
+ bool is_smb2_neg_cmd(struct ksmbd_work *work);
+ bool is_smb2_rsp(struct ksmbd_work *work);
+diff --git a/fs/ksmbd/smb_common.h b/fs/ksmbd/smb_common.h
+index 50590842b651e..e1369b4345a93 100644
+--- a/fs/ksmbd/smb_common.h
++++ b/fs/ksmbd/smb_common.h
+@@ -365,6 +365,7 @@ struct smb_version_values {
+ 	__u32		max_read_size;
+ 	__u32		max_write_size;
+ 	__u32		max_trans_size;
++	__u32		max_credits;
+ 	__u32		large_lock_type;
+ 	__u32		exclusive_lock_type;
+ 	__u32		shared_lock_type;
+diff --git a/fs/ksmbd/transport_ipc.c b/fs/ksmbd/transport_ipc.c
+index 1acf1892a466c..3ad6881e0f7ed 100644
+--- a/fs/ksmbd/transport_ipc.c
++++ b/fs/ksmbd/transport_ipc.c
+@@ -301,6 +301,8 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
+ 		init_smb2_max_write_size(req->smb2_max_write);
+ 	if (req->smb2_max_trans)
+ 		init_smb2_max_trans_size(req->smb2_max_trans);
++	if (req->smb2_max_credits)
++		init_smb2_max_credits(req->smb2_max_credits);
+ 
+ 	ret = ksmbd_set_netbios_name(req->netbios_name);
+ 	ret |= ksmbd_set_server_string(req->server_string);
+diff --git a/fs/ksmbd/transport_tcp.c b/fs/ksmbd/transport_tcp.c
+index c14320e03b698..82a1429bbe127 100644
+--- a/fs/ksmbd/transport_tcp.c
++++ b/fs/ksmbd/transport_tcp.c
+@@ -404,7 +404,7 @@ static int create_socket(struct interface *iface)
+ 				  &ksmbd_socket);
+ 		if (ret) {
+ 			pr_err("Can't create socket for ipv4: %d\n", ret);
+-			goto out_error;
++			goto out_clear;
+ 		}
+ 
+ 		sin.sin_family = PF_INET;
+@@ -462,6 +462,7 @@ static int create_socket(struct interface *iface)
+ 
+ out_error:
+ 	tcp_destroy_socket(ksmbd_socket);
++out_clear:
+ 	iface->ksmbd_socket = NULL;
+ 	return ret;
+ }
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index c3ac1b6aa3aaa..84088581bbe09 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -487,11 +487,6 @@ neither:
+ 	return true;
+ }
+ 
+-static bool fs_supports_change_attribute(struct super_block *sb)
+-{
+-	return sb->s_flags & SB_I_VERSION || sb->s_export_op->fetch_iversion;
+-}
+-
+ /*
+  * Fill in the pre_op attr for the wcc data
+  */
+@@ -500,26 +495,24 @@ void fill_pre_wcc(struct svc_fh *fhp)
+ 	struct inode    *inode;
+ 	struct kstat	stat;
+ 	bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
++	__be32 err;
+ 
+ 	if (fhp->fh_no_wcc || fhp->fh_pre_saved)
+ 		return;
+ 	inode = d_inode(fhp->fh_dentry);
+-	if (fs_supports_change_attribute(inode->i_sb) || !v4) {
+-		__be32 err = fh_getattr(fhp, &stat);
+-
+-		if (err) {
+-			/* Grab the times from inode anyway */
+-			stat.mtime = inode->i_mtime;
+-			stat.ctime = inode->i_ctime;
+-			stat.size  = inode->i_size;
+-		}
+-		fhp->fh_pre_mtime = stat.mtime;
+-		fhp->fh_pre_ctime = stat.ctime;
+-		fhp->fh_pre_size  = stat.size;
++	err = fh_getattr(fhp, &stat);
++	if (err) {
++		/* Grab the times from inode anyway */
++		stat.mtime = inode->i_mtime;
++		stat.ctime = inode->i_ctime;
++		stat.size  = inode->i_size;
+ 	}
+ 	if (v4)
+ 		fhp->fh_pre_change = nfsd4_change_attribute(&stat, inode);
+ 
++	fhp->fh_pre_mtime = stat.mtime;
++	fhp->fh_pre_ctime = stat.ctime;
++	fhp->fh_pre_size  = stat.size;
+ 	fhp->fh_pre_saved = true;
+ }
+ 
+@@ -530,6 +523,7 @@ void fill_post_wcc(struct svc_fh *fhp)
+ {
+ 	bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
+ 	struct inode *inode = d_inode(fhp->fh_dentry);
++	__be32 err;
+ 
+ 	if (fhp->fh_no_wcc)
+ 		return;
+@@ -537,16 +531,12 @@ void fill_post_wcc(struct svc_fh *fhp)
+ 	if (fhp->fh_post_saved)
+ 		printk("nfsd: inode locked twice during operation.\n");
+ 
+-	fhp->fh_post_saved = true;
+-
+-	if (fs_supports_change_attribute(inode->i_sb) || !v4) {
+-		__be32 err = fh_getattr(fhp, &fhp->fh_post_attr);
+-
+-		if (err) {
+-			fhp->fh_post_saved = false;
+-			fhp->fh_post_attr.ctime = inode->i_ctime;
+-		}
+-	}
++	err = fh_getattr(fhp, &fhp->fh_post_attr);
++	if (err) {
++		fhp->fh_post_saved = false;
++		fhp->fh_post_attr.ctime = inode->i_ctime;
++	} else
++		fhp->fh_post_saved = true;
+ 	if (v4)
+ 		fhp->fh_post_change =
+ 			nfsd4_change_attribute(&fhp->fh_post_attr, inode);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 1956d377d1a60..b94b3bb2b8a6e 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -6040,7 +6040,11 @@ nfs4_preprocess_stateid_op(struct svc_rqst *rqstp,
+ 		*nfp = NULL;
+ 
+ 	if (ZERO_STATEID(stateid) || ONE_STATEID(stateid)) {
+-		status = check_special_stateids(net, fhp, stateid, flags);
++		if (cstid)
++			status = nfserr_bad_stateid;
++		else
++			status = check_special_stateids(net, fhp, stateid,
++									flags);
+ 		goto done;
+ 	}
+ 
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index c99857689e2c2..7f2472e4b88f9 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -987,6 +987,10 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
+ 	iov_iter_kvec(&iter, WRITE, vec, vlen, *cnt);
+ 	if (flags & RWF_SYNC) {
+ 		down_write(&nf->nf_rwsem);
++		if (verf)
++			nfsd_copy_boot_verifier(verf,
++					net_generic(SVC_NET(rqstp),
++					nfsd_net_id));
+ 		host_err = vfs_iter_write(file, &iter, &pos, flags);
+ 		if (host_err < 0)
+ 			nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index f0fb25727d961..eb05038b71911 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -1853,7 +1853,6 @@ out:
+ 		kthread_stop(c->bgt);
+ 		c->bgt = NULL;
+ 	}
+-	free_wbufs(c);
+ 	kfree(c->write_reserve_buf);
+ 	c->write_reserve_buf = NULL;
+ 	vfree(c->ileb_buf);
+diff --git a/fs/udf/ialloc.c b/fs/udf/ialloc.c
+index 2ecf0e87660e3..b5d611cee749c 100644
+--- a/fs/udf/ialloc.c
++++ b/fs/udf/ialloc.c
+@@ -77,6 +77,7 @@ struct inode *udf_new_inode(struct inode *dir, umode_t mode)
+ 					GFP_KERNEL);
+ 	}
+ 	if (!iinfo->i_data) {
++		make_bad_inode(inode);
+ 		iput(inode);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+@@ -86,6 +87,7 @@ struct inode *udf_new_inode(struct inode *dir, umode_t mode)
+ 			      dinfo->i_location.partitionReferenceNum,
+ 			      start, &err);
+ 	if (err) {
++		make_bad_inode(inode);
+ 		iput(inode);
+ 		return ERR_PTR(err);
+ 	}
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 480f9207a4c6b..d6fe27b695c3d 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -613,9 +613,10 @@ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int state);
+ int acpi_disable_wakeup_device_power(struct acpi_device *dev);
+ 
+ #ifdef CONFIG_X86
+-bool acpi_device_always_present(struct acpi_device *adev);
++bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status);
+ #else
+-static inline bool acpi_device_always_present(struct acpi_device *adev)
++static inline bool acpi_device_override_status(struct acpi_device *adev,
++					       unsigned long long *status)
+ {
+ 	return false;
+ }
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index ff8b3c913f217..248242dca28d3 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -536,8 +536,14 @@ typedef u64 acpi_integer;
+  * Can be used with access_width of struct acpi_generic_address and access_size of
+  * struct acpi_resource_generic_register.
+  */
+-#define ACPI_ACCESS_BIT_WIDTH(size)     (1 << ((size) + 2))
+-#define ACPI_ACCESS_BYTE_WIDTH(size)    (1 << ((size) - 1))
++#define ACPI_ACCESS_BIT_SHIFT		2
++#define ACPI_ACCESS_BYTE_SHIFT		-1
++#define ACPI_ACCESS_BIT_MAX		(31 - ACPI_ACCESS_BIT_SHIFT)
++#define ACPI_ACCESS_BYTE_MAX		(31 - ACPI_ACCESS_BYTE_SHIFT)
++#define ACPI_ACCESS_BIT_DEFAULT		(8 - ACPI_ACCESS_BIT_SHIFT)
++#define ACPI_ACCESS_BYTE_DEFAULT	(8 - ACPI_ACCESS_BYTE_SHIFT)
++#define ACPI_ACCESS_BIT_WIDTH(size)	(1 << ((size) + ACPI_ACCESS_BIT_SHIFT))
++#define ACPI_ACCESS_BYTE_WIDTH(size)	(1 << ((size) + ACPI_ACCESS_BYTE_SHIFT))
+ 
+ /*******************************************************************************
+  *
+diff --git a/include/asm-generic/bitops/find.h b/include/asm-generic/bitops/find.h
+index 0d132ee2a2913..835f959a25f25 100644
+--- a/include/asm-generic/bitops/find.h
++++ b/include/asm-generic/bitops/find.h
+@@ -97,6 +97,7 @@ unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size,
+ 
+ #ifdef CONFIG_GENERIC_FIND_FIRST_BIT
+ 
++#ifndef find_first_bit
+ /**
+  * find_first_bit - find the first set bit in a memory region
+  * @addr: The address to start the search at
+@@ -116,7 +117,9 @@ unsigned long find_first_bit(const unsigned long *addr, unsigned long size)
+ 
+ 	return _find_first_bit(addr, size);
+ }
++#endif
+ 
++#ifndef find_first_zero_bit
+ /**
+  * find_first_zero_bit - find the first cleared bit in a memory region
+  * @addr: The address to start the search at
+@@ -136,6 +139,8 @@ unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size)
+ 
+ 	return _find_first_zero_bit(addr, size);
+ }
++#endif
++
+ #else /* CONFIG_GENERIC_FIND_FIRST_BIT */
+ 
+ #ifndef find_first_bit
+diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
+index 0cd95953cdf55..96c264c4be4fe 100644
+--- a/include/drm/drm_drv.h
++++ b/include/drm/drm_drv.h
+@@ -291,8 +291,9 @@ struct drm_driver {
+ 	/**
+ 	 * @gem_create_object: constructor for gem objects
+ 	 *
+-	 * Hook for allocating the GEM object struct, for use by the CMA and
+-	 * SHMEM GEM helpers.
++	 * Hook for allocating the GEM object struct, for use by the CMA
++	 * and SHMEM GEM helpers. Returns a GEM object on success, or an
++	 * ERR_PTR()-encoded error code otherwise.
+ 	 */
+ 	struct drm_gem_object *(*gem_create_object)(struct drm_device *dev,
+ 						    size_t size);
+diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
+index f011e4c407f2e..bbc22fad8d802 100644
+--- a/include/drm/gpu_scheduler.h
++++ b/include/drm/gpu_scheduler.h
+@@ -28,6 +28,7 @@
+ #include <linux/dma-fence.h>
+ #include <linux/completion.h>
+ #include <linux/xarray.h>
++#include <linux/irq_work.h>
+ 
+ #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
+ 
+@@ -286,7 +287,16 @@ struct drm_sched_job {
+ 	struct list_head		list;
+ 	struct drm_gpu_scheduler	*sched;
+ 	struct drm_sched_fence		*s_fence;
+-	struct dma_fence_cb		finish_cb;
++
++	/*
++	 * work is used only after finish_cb has been used and will not be
++	 * accessed anymore.
++	 */
++	union {
++		struct dma_fence_cb		finish_cb;
++		struct irq_work 		work;
++	};
++
+ 	uint64_t			id;
+ 	atomic_t			karma;
+ 	enum drm_sched_priority		s_priority;
+diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h
+index b80c65aba2493..2580e05a8ab67 100644
+--- a/include/linux/blk-pm.h
++++ b/include/linux/blk-pm.h
+@@ -14,7 +14,7 @@ extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
+ extern int blk_pre_runtime_suspend(struct request_queue *q);
+ extern void blk_post_runtime_suspend(struct request_queue *q, int err);
+ extern void blk_pre_runtime_resume(struct request_queue *q);
+-extern void blk_post_runtime_resume(struct request_queue *q, int err);
++extern void blk_post_runtime_resume(struct request_queue *q);
+ extern void blk_set_runtime_active(struct request_queue *q);
+ #else
+ static inline void blk_pm_runtime_init(struct request_queue *q,
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 755f38e893be1..9f20b0f539f78 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1082,7 +1082,7 @@ struct bpf_array {
+ };
+ 
+ #define BPF_COMPLEXITY_LIMIT_INSNS      1000000 /* yes. 1M insns */
+-#define MAX_TAIL_CALL_CNT 32
++#define MAX_TAIL_CALL_CNT 33
+ 
+ #define BPF_F_ACCESS_MASK	(BPF_F_RDONLY |		\
+ 				 BPF_F_RDONLY_PROG |	\
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index c8a78e830fcab..182b16a910849 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -396,6 +396,13 @@ static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
+ 		 log->level == BPF_LOG_KERNEL);
+ }
+ 
++static inline bool
++bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
++{
++	return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 &&
++	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
++}
++
+ #define BPF_MAX_SUBPROGS 256
+ 
+ struct bpf_subprog_info {
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index f453be385bd47..26742ca14609a 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -349,6 +349,8 @@ struct hid_item {
+ /* BIT(9) reserved for backward compatibility, was NO_INIT_INPUT_REPORTS */
+ #define HID_QUIRK_ALWAYS_POLL			BIT(10)
+ #define HID_QUIRK_INPUT_PER_APP			BIT(11)
++#define HID_QUIRK_X_INVERT			BIT(12)
++#define HID_QUIRK_Y_INVERT			BIT(13)
+ #define HID_QUIRK_SKIP_OUTPUT_REPORTS		BIT(16)
+ #define HID_QUIRK_SKIP_OUTPUT_REPORT_ID		BIT(17)
+ #define HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP	BIT(18)
+diff --git a/include/linux/iio/trigger.h b/include/linux/iio/trigger.h
+index 096f68dd2e0ca..4c69b144677b1 100644
+--- a/include/linux/iio/trigger.h
++++ b/include/linux/iio/trigger.h
+@@ -55,6 +55,7 @@ struct iio_trigger_ops {
+  * @attached_own_device:[INTERN] if we are using our own device as trigger,
+  *			i.e. if we registered a poll function to the same
+  *			device as the one providing the trigger.
++ * @reenable_work:	[INTERN] work item used to ensure reenable can sleep.
+  **/
+ struct iio_trigger {
+ 	const struct iio_trigger_ops	*ops;
+@@ -74,6 +75,7 @@ struct iio_trigger {
+ 	unsigned long pool[BITS_TO_LONGS(CONFIG_IIO_CONSUMERS_PER_TRIGGER)];
+ 	struct mutex			pool_lock;
+ 	bool				attached_own_device;
++	struct work_struct		reenable_work;
+ };
+ 
+ 
+diff --git a/include/linux/kasan.h b/include/linux/kasan.h
+index d8783b6826695..89c99e5e67de5 100644
+--- a/include/linux/kasan.h
++++ b/include/linux/kasan.h
+@@ -474,12 +474,12 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
+  * allocations with real shadow memory. With KASAN vmalloc, the special
+  * case is unnecessary, as the work is handled in the generic case.
+  */
+-int kasan_module_alloc(void *addr, size_t size);
++int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask);
+ void kasan_free_shadow(const struct vm_struct *vm);
+ 
+ #else /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */
+ 
+-static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
++static inline int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask) { return 0; }
+ static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+ 
+ #endif /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 936dc0b6c226a..aed44e9b5d899 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1047,6 +1047,15 @@ static inline int is_highmem_idx(enum zone_type idx)
+ #endif
+ }
+ 
++#ifdef CONFIG_ZONE_DMA
++bool has_managed_dma(void);
++#else
++static inline bool has_managed_dma(void)
++{
++	return false;
++}
++#endif
++
+ /**
+  * is_highmem - helper function to quickly check if a struct zone is a
+  *              highmem zone or not.  This is an attempt to keep references
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index b2f9dd3cbd695..5b88cd51fadb5 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -1539,6 +1539,8 @@ int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len,
+ 		      bool force_8bit, bool check_only);
+ int nand_write_data_op(struct nand_chip *chip, const void *buf,
+ 		       unsigned int len, bool force_8bit);
++int nand_read_page_hwecc_oob_first(struct nand_chip *chip, uint8_t *buf,
++				   int oob_required, int page);
+ 
+ /* Scan and identify a NAND device */
+ int nand_scan_with_ids(struct nand_chip *chip, unsigned int max_chips,
+diff --git a/include/linux/mtd/spi-nor.h b/include/linux/mtd/spi-nor.h
+index f67457748ed84..fc90fce26e337 100644
+--- a/include/linux/mtd/spi-nor.h
++++ b/include/linux/mtd/spi-nor.h
+@@ -371,7 +371,6 @@ struct spi_nor_flash_parameter;
+  * @bouncebuf_size:	size of the bounce buffer
+  * @info:		SPI NOR part JEDEC MFR ID and other info
+  * @manufacturer:	SPI NOR manufacturer
+- * @page_size:		the page size of the SPI NOR
+  * @addr_width:		number of address bytes
+  * @erase_opcode:	the opcode for erasing a sector
+  * @read_opcode:	the read opcode
+@@ -401,7 +400,6 @@ struct spi_nor {
+ 	size_t			bouncebuf_size;
+ 	const struct flash_info	*info;
+ 	const struct spi_nor_manufacturer *manufacturer;
+-	u32			page_size;
+ 	u8			addr_width;
+ 	u8			erase_opcode;
+ 	u8			read_opcode;
+diff --git a/include/linux/netfilter_netdev.h b/include/linux/netfilter_netdev.h
+index b71b57a83bb4f..b4dd96e4dc8dc 100644
+--- a/include/linux/netfilter_netdev.h
++++ b/include/linux/netfilter_netdev.h
+@@ -94,7 +94,7 @@ static inline struct sk_buff *nf_hook_egress(struct sk_buff *skb, int *rc,
+ 		return skb;
+ #endif
+ 
+-	e = rcu_dereference(dev->nf_hooks_egress);
++	e = rcu_dereference_check(dev->nf_hooks_egress, rcu_read_lock_bh_held());
+ 	if (!e)
+ 		return skb;
+ 
+diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
+index cf48983d3c867..ad09beb6d13c4 100644
+--- a/include/linux/of_fdt.h
++++ b/include/linux/of_fdt.h
+@@ -62,6 +62,7 @@ extern int early_init_dt_scan_chosen(unsigned long node, const char *uname,
+ 				     int depth, void *data);
+ extern int early_init_dt_scan_memory(unsigned long node, const char *uname,
+ 				     int depth, void *data);
++extern void early_init_dt_check_for_usable_mem_range(void);
+ extern int early_init_dt_scan_chosen_stdout(void);
+ extern void early_init_fdt_scan_reserved_mem(void);
+ extern void early_init_fdt_reserve_self(void);
+@@ -86,6 +87,7 @@ extern void unflatten_and_copy_device_tree(void);
+ extern void early_init_devtree(void *);
+ extern void early_get_first_memblock_info(void *, phys_addr_t *);
+ #else /* CONFIG_OF_EARLY_FLATTREE */
++static inline void early_init_dt_check_for_usable_mem_range(void) {}
+ static inline int early_init_dt_scan_chosen_stdout(void) { return -ENODEV; }
+ static inline void early_init_fdt_scan_reserved_mem(void) {}
+ static inline void early_init_fdt_reserve_self(void) {}
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index eddd66d426caf..016de5776b6db 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -58,6 +58,7 @@ extern void pm_runtime_get_suppliers(struct device *dev);
+ extern void pm_runtime_put_suppliers(struct device *dev);
+ extern void pm_runtime_new_link(struct device *dev);
+ extern void pm_runtime_drop_link(struct device_link *link);
++extern void pm_runtime_release_supplier(struct device_link *link, bool check_idle);
+ 
+ extern int devm_pm_runtime_enable(struct device *dev);
+ 
+@@ -283,6 +284,8 @@ static inline void pm_runtime_get_suppliers(struct device *dev) {}
+ static inline void pm_runtime_put_suppliers(struct device *dev) {}
+ static inline void pm_runtime_new_link(struct device *dev) {}
+ static inline void pm_runtime_drop_link(struct device_link *link) {}
++static inline void pm_runtime_release_supplier(struct device_link *link,
++					       bool check_idle) {}
+ 
+ #endif /* !CONFIG_PM */
+ 
+diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
+index 0a23300d49af7..0819c82dba920 100644
+--- a/include/linux/psi_types.h
++++ b/include/linux/psi_types.h
+@@ -21,7 +21,17 @@ enum psi_task_count {
+ 	 * don't have to special case any state tracking for it.
+ 	 */
+ 	NR_ONCPU,
+-	NR_PSI_TASK_COUNTS = 4,
++	/*
++	 * For IO and CPU stalls the presence of running/oncpu tasks
++	 * in the domain means a partial rather than a full stall.
++	 * For memory it's not so simple because of page reclaimers:
++	 * they are running/oncpu while representing a stall. To tell
++	 * whether a domain has productivity left or not, we need to
++	 * distinguish between regular running (i.e. productive)
++	 * threads and memstall ones.
++	 */
++	NR_MEMSTALL_RUNNING,
++	NR_PSI_TASK_COUNTS = 5,
+ };
+ 
+ /* Task state bitmasks */
+@@ -29,6 +39,7 @@ enum psi_task_count {
+ #define TSK_MEMSTALL	(1 << NR_MEMSTALL)
+ #define TSK_RUNNING	(1 << NR_RUNNING)
+ #define TSK_ONCPU	(1 << NR_ONCPU)
++#define TSK_MEMSTALL_RUNNING	(1 << NR_MEMSTALL_RUNNING)
+ 
+ /* Resources that workloads could be stalled on */
+ enum psi_res {
+diff --git a/include/linux/ptp_clock_kernel.h b/include/linux/ptp_clock_kernel.h
+index 2e5565067355b..554454cb86931 100644
+--- a/include/linux/ptp_clock_kernel.h
++++ b/include/linux/ptp_clock_kernel.h
+@@ -351,15 +351,17 @@ int ptp_get_vclocks_index(int pclock_index, int **vclock_index);
+  *
+  * @hwtstamps:    skb_shared_hwtstamps structure pointer
+  * @vclock_index: phc index of ptp vclock.
++ *
++ * Returns converted timestamp, or 0 on error.
+  */
+-void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps,
+-			   int vclock_index);
++ktime_t ptp_convert_timestamp(const struct skb_shared_hwtstamps *hwtstamps,
++			      int vclock_index);
+ #else
+ static inline int ptp_get_vclocks_index(int pclock_index, int **vclock_index)
+ { return 0; }
+-static inline void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps,
+-					 int vclock_index)
+-{ }
++static inline ktime_t ptp_convert_timestamp(const struct skb_shared_hwtstamps *hwtstamps,
++					    int vclock_index)
++{ return 0; }
+ 
+ #endif
+ 
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 4507d77d6941f..60ab0c2fe5674 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -287,7 +287,9 @@ struct tc_skb_ext {
+ 	__u32 chain;
+ 	__u16 mru;
+ 	__u16 zone;
+-	bool post_ct;
++	u8 post_ct:1;
++	u8 post_ct_snat:1;
++	u8 post_ct_dnat:1;
+ };
+ #endif
+ 
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index a6f03b36fc4f7..1450397fc0bcd 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -233,6 +233,7 @@ struct plat_stmmacenet_data {
+ 	int (*clks_config)(void *priv, bool enabled);
+ 	int (*crosststamp)(ktime_t *device, struct system_counterval_t *system,
+ 			   void *ctx);
++	void (*dump_debug_regs)(void *priv);
+ 	void *bsp_priv;
+ 	struct clk *stmmac_clk;
+ 	struct clk *pclk;
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index 6e022cc712e61..880227b9f0440 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -28,6 +28,13 @@ struct notifier_block;		/* in notifier.h */
+ #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
+ #define VM_NO_HUGE_VMAP		0x00000400	/* force PAGE_SIZE pte mapping */
+ 
++#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
++	!defined(CONFIG_KASAN_VMALLOC)
++#define VM_DEFER_KMEMLEAK	0x00000800	/* defer kmemleak object creation */
++#else
++#define VM_DEFER_KMEMLEAK	0
++#endif
++
+ /*
+  * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
+  *
+diff --git a/include/media/cec.h b/include/media/cec.h
+index 208c9613c07eb..77346f757036d 100644
+--- a/include/media/cec.h
++++ b/include/media/cec.h
+@@ -26,13 +26,17 @@
+  * @dev:	cec device
+  * @cdev:	cec character device
+  * @minor:	device node minor number
++ * @lock:	lock to serialize open/release and registration
+  * @registered:	the device was correctly registered
+  * @unregistered: the device was unregistered
++ * @lock_fhs:	lock to control access to @fhs
+  * @fhs:	the list of open filehandles (cec_fh)
+- * @lock:	lock to control access to this structure
+  *
+  * This structure represents a cec-related device node.
+  *
++ * To add or remove filehandles from @fhs the @lock must be taken first,
++ * followed by @lock_fhs. It is safe to access @fhs if either lock is held.
++ *
+  * The @parent is a physical device. It must be set by core or device drivers
+  * before registering the node.
+  */
+@@ -43,10 +47,13 @@ struct cec_devnode {
+ 
+ 	/* device info */
+ 	int minor;
++	/* serialize open/release and registration */
++	struct mutex lock;
+ 	bool registered;
+ 	bool unregistered;
++	/* protect access to fhs */
++	struct mutex lock_fhs;
+ 	struct list_head fhs;
+-	struct mutex lock;
+ };
+ 
+ struct cec_adapter;
+diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
+index 48cc5795ceda6..63540be0fc34a 100644
+--- a/include/net/inet_frag.h
++++ b/include/net/inet_frag.h
+@@ -117,8 +117,15 @@ int fqdir_init(struct fqdir **fqdirp, struct inet_frags *f, struct net *net);
+ 
+ static inline void fqdir_pre_exit(struct fqdir *fqdir)
+ {
+-	fqdir->high_thresh = 0; /* prevent creation of new frags */
+-	fqdir->dead = true;
++	/* Prevent creation of new frags.
++	 * Pairs with READ_ONCE() in inet_frag_find().
++	 */
++	WRITE_ONCE(fqdir->high_thresh, 0);
++
++	/* Pairs with READ_ONCE() in inet_frag_kill(), ip_expire()
++	 * and ip6frag_expire_frag_queue().
++	 */
++	WRITE_ONCE(fqdir->dead, true);
+ }
+ void fqdir_exit(struct fqdir *fqdir);
+ 
+diff --git a/include/net/ipv6_frag.h b/include/net/ipv6_frag.h
+index 851029ecff13c..0a4779175a523 100644
+--- a/include/net/ipv6_frag.h
++++ b/include/net/ipv6_frag.h
+@@ -67,7 +67,8 @@ ip6frag_expire_frag_queue(struct net *net, struct frag_queue *fq)
+ 	struct sk_buff *head;
+ 
+ 	rcu_read_lock();
+-	if (fq->q.fqdir->dead)
++	/* Paired with the WRITE_ONCE() in fqdir_pre_exit(). */
++	if (READ_ONCE(fq->q.fqdir->dead))
+ 		goto out_rcu_unlock;
+ 	spin_lock(&fq->q.lock);
+ 
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 9e71691c491b7..9e7b21c0b3a6d 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -197,7 +197,9 @@ struct tc_skb_cb {
+ 	struct qdisc_skb_cb qdisc_cb;
+ 
+ 	u16 mru;
+-	bool post_ct;
++	u8 post_ct:1;
++	u8 post_ct_snat:1;
++	u8 post_ct_dnat:1;
+ 	u16 zone; /* Only valid if post_ct = true */
+ };
+ 
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index c70e6d2b2fdd6..fddca0aa73efc 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1244,6 +1244,7 @@ struct psched_ratecfg {
+ 	u64	rate_bytes_ps; /* bytes per second */
+ 	u32	mult;
+ 	u16	overhead;
++	u16	mpu;
+ 	u8	linklayer;
+ 	u8	shift;
+ };
+@@ -1253,6 +1254,9 @@ static inline u64 psched_l2t_ns(const struct psched_ratecfg *r,
+ {
+ 	len += r->overhead;
+ 
++	if (len < r->mpu)
++		len = r->mpu;
++
+ 	if (unlikely(r->linklayer == TC_LINKLAYER_ATM))
+ 		return ((u64)(DIV_ROUND_UP(len,48)*53) * r->mult) >> r->shift;
+ 
+@@ -1275,6 +1279,7 @@ static inline void psched_ratecfg_getrate(struct tc_ratespec *res,
+ 	res->rate = min_t(u64, r->rate_bytes_ps, ~0U);
+ 
+ 	res->overhead = r->overhead;
++	res->mpu = r->mpu;
+ 	res->linklayer = (r->linklayer & TC_LINKLAYER_MASK);
+ }
+ 
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 2308210793a01..2b1ce8534993c 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -200,6 +200,11 @@ struct xfrm_state {
+ 	struct xfrm_algo_aead	*aead;
+ 	const char		*geniv;
+ 
++	/* mapping change rate limiting */
++	__be16 new_mapping_sport;
++	u32 new_mapping;	/* seconds */
++	u32 mapping_maxage;	/* seconds for input SA */
++
+ 	/* Data for encapsulator */
+ 	struct xfrm_encap_tmpl	*encap;
+ 	struct sock __rcu	*encap_sk;
+@@ -1162,7 +1167,7 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+ {
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	if (xfrm_default_allow(net, XFRM_POLICY_FWD))
++	if (xfrm_default_allow(net, XFRM_POLICY_OUT))
+ 		return !net->xfrm.policy_count[XFRM_POLICY_OUT] ||
+ 			(skb_dst(skb)->flags & DST_NOXFRM) ||
+ 			__xfrm_route_forward(skb, family);
+diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
+index 0e45963bb767f..82d9daa178517 100644
+--- a/include/sound/hda_codec.h
++++ b/include/sound/hda_codec.h
+@@ -8,7 +8,7 @@
+ #ifndef __SOUND_HDA_CODEC_H
+ #define __SOUND_HDA_CODEC_H
+ 
+-#include <linux/kref.h>
++#include <linux/refcount.h>
+ #include <linux/mod_devicetable.h>
+ #include <sound/info.h>
+ #include <sound/control.h>
+@@ -166,8 +166,8 @@ struct hda_pcm {
+ 	bool own_chmap;		/* codec driver provides own channel maps */
+ 	/* private: */
+ 	struct hda_codec *codec;
+-	struct kref kref;
+ 	struct list_head list;
++	unsigned int disconnected:1;
+ };
+ 
+ /* codec information */
+@@ -187,6 +187,8 @@ struct hda_codec {
+ 
+ 	/* PCM to create, set by patch_ops.build_pcms callback */
+ 	struct list_head pcm_list_head;
++	refcount_t pcm_ref;
++	wait_queue_head_t remove_sleep;
+ 
+ 	/* codec specific info */
+ 	void *spec;
+@@ -420,7 +422,7 @@ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec);
+ 
+ static inline void snd_hda_codec_pcm_get(struct hda_pcm *pcm)
+ {
+-	kref_get(&pcm->kref);
++	refcount_inc(&pcm->codec->pcm_ref);
+ }
+ void snd_hda_codec_pcm_put(struct hda_pcm *pcm);
+ 
+diff --git a/include/trace/events/cgroup.h b/include/trace/events/cgroup.h
+index 7f42a3de59e6b..dd7d7c9efecdf 100644
+--- a/include/trace/events/cgroup.h
++++ b/include/trace/events/cgroup.h
+@@ -59,8 +59,8 @@ DECLARE_EVENT_CLASS(cgroup,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		root			)
+-		__field(	int,		id			)
+ 		__field(	int,		level			)
++		__field(	u64,		id			)
+ 		__string(	path,		path			)
+ 	),
+ 
+@@ -71,7 +71,7 @@ DECLARE_EVENT_CLASS(cgroup,
+ 		__assign_str(path, path);
+ 	),
+ 
+-	TP_printk("root=%d id=%d level=%d path=%s",
++	TP_printk("root=%d id=%llu level=%d path=%s",
+ 		  __entry->root, __entry->id, __entry->level, __get_str(path))
+ );
+ 
+@@ -126,8 +126,8 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		dst_root		)
+-		__field(	int,		dst_id			)
+ 		__field(	int,		dst_level		)
++		__field(	u64,		dst_id			)
+ 		__field(	int,		pid			)
+ 		__string(	dst_path,	path			)
+ 		__string(	comm,		task->comm		)
+@@ -142,7 +142,7 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
+ 		__assign_str(comm, task->comm);
+ 	),
+ 
+-	TP_printk("dst_root=%d dst_id=%d dst_level=%d dst_path=%s pid=%d comm=%s",
++	TP_printk("dst_root=%d dst_id=%llu dst_level=%d dst_path=%s pid=%d comm=%s",
+ 		  __entry->dst_root, __entry->dst_id, __entry->dst_level,
+ 		  __get_str(dst_path), __entry->pid, __get_str(comm))
+ );
+@@ -171,8 +171,8 @@ DECLARE_EVENT_CLASS(cgroup_event,
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		root			)
+-		__field(	int,		id			)
+ 		__field(	int,		level			)
++		__field(	u64,		id			)
+ 		__string(	path,		path			)
+ 		__field(	int,		val			)
+ 	),
+@@ -185,7 +185,7 @@ DECLARE_EVENT_CLASS(cgroup_event,
+ 		__entry->val = val;
+ 	),
+ 
+-	TP_printk("root=%d id=%d level=%d path=%s val=%d",
++	TP_printk("root=%d id=%llu level=%d path=%s val=%d",
+ 		  __entry->root, __entry->id, __entry->level, __get_str(path),
+ 		  __entry->val)
+ );
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 3a99358c262b4..7b5dcff84cf27 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1744,10 +1744,11 @@ TRACE_EVENT(svc_xprt_create_err,
+ 		const char *program,
+ 		const char *protocol,
+ 		struct sockaddr *sap,
++		size_t salen,
+ 		const struct svc_xprt *xprt
+ 	),
+ 
+-	TP_ARGS(program, protocol, sap, xprt),
++	TP_ARGS(program, protocol, sap, salen, xprt),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(long, error)
+@@ -1760,7 +1761,7 @@ TRACE_EVENT(svc_xprt_create_err,
+ 		__entry->error = PTR_ERR(xprt);
+ 		__assign_str(program, program);
+ 		__assign_str(protocol, protocol);
+-		memcpy(__entry->addr, sap, sizeof(__entry->addr));
++		memcpy(__entry->addr, sap, min(salen, sizeof(__entry->addr)));
+ 	),
+ 
+ 	TP_printk("addr=%pISpc program=%s protocol=%s error=%ld",
+@@ -2146,17 +2147,17 @@ DECLARE_EVENT_CLASS(svcsock_accept_class,
+ 	TP_STRUCT__entry(
+ 		__field(long, status)
+ 		__string(service, service)
+-		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
++		__field(unsigned int, netns_ino)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->status = status;
+ 		__assign_str(service, service);
+-		memcpy(__entry->addr, &xprt->xpt_local, sizeof(__entry->addr));
++		__entry->netns_ino = xprt->xpt_net->ns.inum;
+ 	),
+ 
+-	TP_printk("listener=%pISpc service=%s status=%ld",
+-		__entry->addr, __get_str(service), __entry->status
++	TP_printk("addr=listener service=%s status=%ld",
++		__get_str(service), __entry->status
+ 	)
+ );
+ 
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index ba5af15e25f5c..b12cfceddb6e9 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -1744,7 +1744,7 @@ union bpf_attr {
+  * 		if the maximum number of tail calls has been reached for this
+  * 		chain of programs. This limit is defined in the kernel by the
+  * 		macro **MAX_TAIL_CALL_CNT** (not accessible to user space),
+- * 		which is currently set to 32.
++ *		which is currently set to 33.
+  * 	Return
+  * 		0 on success, or a negative error in case of failure.
+  *
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index eda0426ec4c2b..4e29d78518902 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -313,6 +313,7 @@ enum xfrm_attr_type_t {
+ 	XFRMA_SET_MARK,		/* __u32 */
+ 	XFRMA_SET_MARK_MASK,	/* __u32 */
+ 	XFRMA_IF_ID,		/* __u32 */
++	XFRMA_MTIMER_THRESH,	/* __u32 in seconds for input SA */
+ 	__XFRMA_MAX
+ 
+ #define XFRMA_OUTPUT_MARK XFRMA_SET_MARK	/* Compatibility */
+diff --git a/include/uapi/misc/habanalabs.h b/include/uapi/misc/habanalabs.h
+index 00b3095904995..c5760acebdd1d 100644
+--- a/include/uapi/misc/habanalabs.h
++++ b/include/uapi/misc/habanalabs.h
+@@ -911,14 +911,18 @@ struct hl_wait_cs_in {
+ 	 */
+ 	__u32 flags;
+ 
+-	/* Multi CS API info- valid entries in multi-CS array */
+-	__u8 seq_arr_len;
+-	__u8 pad[3];
++	union {
++		struct {
++			/* Multi CS API info- valid entries in multi-CS array */
++			__u8 seq_arr_len;
++			__u8 pad[7];
++		};
+ 
+-	/* Absolute timeout to wait for an interrupt in microseconds.
+-	 * Relevant only when HL_WAIT_CS_FLAGS_INTERRUPT is set
+-	 */
+-	__u32 interrupt_timeout_us;
++		/* Absolute timeout to wait for an interrupt in microseconds.
++		 * Relevant only when HL_WAIT_CS_FLAGS_INTERRUPT is set
++		 */
++		__u64 interrupt_timeout_us;
++	};
+ };
+ 
+ #define HL_WAIT_CS_STATUS_COMPLETED	0
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 4cebadb5f30db..eab7282668ab9 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -1540,6 +1540,20 @@ static void audit_receive(struct sk_buff  *skb)
+ 		nlh = nlmsg_next(nlh, &len);
+ 	}
+ 	audit_ctl_unlock();
++
++	/* can't block with the ctrl lock, so penalize the sender now */
++	if (audit_backlog_limit &&
++	    (skb_queue_len(&audit_queue) > audit_backlog_limit)) {
++		DECLARE_WAITQUEUE(wait, current);
++
++		/* wake kauditd to try and flush the queue */
++		wake_up_interruptible(&kauditd_wait);
++
++		add_wait_queue_exclusive(&audit_backlog_wait, &wait);
++		set_current_state(TASK_UNINTERRUPTIBLE);
++		schedule_timeout(audit_backlog_wait_time);
++		remove_wait_queue(&audit_backlog_wait, &wait);
++	}
+ }
+ 
+ /* Log information about who is connecting to the audit multicast socket */
+@@ -1824,7 +1838,9 @@ struct audit_buffer *audit_log_start(struct audit_context *ctx, gfp_t gfp_mask,
+ 	 *    task_tgid_vnr() since auditd_pid is set in audit_receive_msg()
+ 	 *    using a PID anchored in the caller's namespace
+ 	 * 2. generator holding the audit_cmd_mutex - we don't want to block
+-	 *    while holding the mutex */
++	 *    while holding the mutex, although we do penalize the sender
++	 *    later in audit_receive() when it is safe to block
++	 */
+ 	if (!(auditd_test_task(current) || audit_ctl_owner_current())) {
+ 		long stime = audit_backlog_wait_time;
+ 
+diff --git a/kernel/bpf/bloom_filter.c b/kernel/bpf/bloom_filter.c
+index 277a05e9c9849..b141a1346f72d 100644
+--- a/kernel/bpf/bloom_filter.c
++++ b/kernel/bpf/bloom_filter.c
+@@ -82,6 +82,11 @@ static int bloom_map_delete_elem(struct bpf_map *map, void *value)
+ 	return -EOPNOTSUPP;
+ }
+ 
++static int bloom_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
++{
++	return -EOPNOTSUPP;
++}
++
+ static struct bpf_map *bloom_map_alloc(union bpf_attr *attr)
+ {
+ 	u32 bitset_bytes, bitset_mask, nr_hash_funcs, nr_bits;
+@@ -192,6 +197,7 @@ const struct bpf_map_ops bloom_filter_map_ops = {
+ 	.map_meta_equal = bpf_map_meta_equal,
+ 	.map_alloc = bloom_map_alloc,
+ 	.map_free = bloom_map_free,
++	.map_get_next_key = bloom_map_get_next_key,
+ 	.map_push_elem = bloom_map_push_elem,
+ 	.map_peek_elem = bloom_map_peek_elem,
+ 	.map_pop_elem = bloom_map_pop_elem,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 9bdb03767db57..5e037070cb656 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -4460,8 +4460,7 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
+ 		log->len_total = log_size;
+ 
+ 		/* log attributes have to be sane */
+-		if (log->len_total < 128 || log->len_total > UINT_MAX >> 8 ||
+-		    !log->level || !log->ubuf) {
++		if (!bpf_verifier_log_attr_valid(log)) {
+ 			err = -EINVAL;
+ 			goto errout;
+ 		}
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 2405e39d800fe..b52dc845ecea3 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1574,7 +1574,8 @@ select_insn:
+ 
+ 		if (unlikely(index >= array->map.max_entries))
+ 			goto out;
+-		if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
++
++		if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
+ 			goto out;
+ 
+ 		tail_call_cnt++;
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 80da1db47c686..5a8d9f7467bf4 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -648,12 +648,22 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 	int opt;
+ 
+ 	opt = fs_parse(fc, bpf_fs_parameters, param, &result);
+-	if (opt < 0)
++	if (opt < 0) {
+ 		/* We might like to report bad mount options here, but
+ 		 * traditionally we've ignored all mount options, so we'd
+ 		 * better continue to ignore non-existing options for bpf.
+ 		 */
+-		return opt == -ENOPARAM ? 0 : opt;
++		if (opt == -ENOPARAM) {
++			opt = vfs_parse_fs_param_source(fc, param);
++			if (opt != -ENOPARAM)
++				return opt;
++
++			return 0;
++		}
++
++		if (opt < 0)
++			return opt;
++	}
+ 
+ 	switch (opt) {
+ 	case OPT_MODE:
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4e51bf3f9603a..6b987407752ab 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5965,6 +5965,7 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ 	}
+ 
+ 	if (insn->code == (BPF_JMP | BPF_CALL) &&
++	    insn->src_reg == 0 &&
+ 	    insn->imm == BPF_FUNC_timer_set_callback) {
+ 		struct bpf_verifier_state *async_cb;
+ 
+@@ -8961,15 +8962,15 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
+ {
+ 	if (reg_type_may_be_null(reg->type) && reg->id == id &&
+ 	    !WARN_ON_ONCE(!reg->id)) {
+-		/* Old offset (both fixed and variable parts) should
+-		 * have been known-zero, because we don't allow pointer
+-		 * arithmetic on pointers that might be NULL.
+-		 */
+ 		if (WARN_ON_ONCE(reg->smin_value || reg->smax_value ||
+ 				 !tnum_equals_const(reg->var_off, 0) ||
+ 				 reg->off)) {
+-			__mark_reg_known_zero(reg);
+-			reg->off = 0;
++			/* Old offset (both fixed and variable parts) should
++			 * have been known-zero, because we don't allow pointer
++			 * arithmetic on pointers that might be NULL. If we
++			 * see this happening, don't convert the register.
++			 */
++			return;
+ 		}
+ 		if (is_null) {
+ 			reg->type = SCALAR_VALUE;
+@@ -9388,9 +9389,13 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 		return 0;
+ 	}
+ 
+-	if (insn->src_reg == BPF_PSEUDO_BTF_ID) {
+-		mark_reg_known_zero(env, regs, insn->dst_reg);
++	/* All special src_reg cases are listed below. From this point onwards
++	 * we either succeed and assign a corresponding dst_reg->type after
++	 * zeroing the offset, or fail and reject the program.
++	 */
++	mark_reg_known_zero(env, regs, insn->dst_reg);
+ 
++	if (insn->src_reg == BPF_PSEUDO_BTF_ID) {
+ 		dst_reg->type = aux->btf_var.reg_type;
+ 		switch (dst_reg->type) {
+ 		case PTR_TO_MEM:
+@@ -9428,7 +9433,6 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 	}
+ 
+ 	map = env->used_maps[aux->map_index];
+-	mark_reg_known_zero(env, regs, insn->dst_reg);
+ 	dst_reg->map_ptr = map;
+ 
+ 	if (insn->src_reg == BPF_PSEUDO_MAP_VALUE ||
+@@ -13960,11 +13964,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
+ 		log->ubuf = (char __user *) (unsigned long) attr->log_buf;
+ 		log->len_total = attr->log_size;
+ 
+-		ret = -EINVAL;
+ 		/* log attributes have to be sane */
+-		if (log->len_total < 128 || log->len_total > UINT_MAX >> 2 ||
+-		    !log->level || !log->ubuf || log->level & ~BPF_LOG_MASK)
++		if (!bpf_verifier_log_attr_valid(log)) {
++			ret = -EINVAL;
+ 			goto err_unlock;
++		}
+ 	}
+ 
+ 	if (IS_ERR(btf_vmlinux)) {
+diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
+index 5f84e6cdb78ea..4d40dcce7604b 100644
+--- a/kernel/dma/pool.c
++++ b/kernel/dma/pool.c
+@@ -203,7 +203,7 @@ static int __init dma_atomic_pool_init(void)
+ 						    GFP_KERNEL);
+ 	if (!atomic_pool_kernel)
+ 		ret = -ENOMEM;
+-	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
++	if (has_managed_dma()) {
+ 		atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
+ 						GFP_KERNEL | GFP_DMA);
+ 		if (!atomic_pool_dma)
+@@ -226,7 +226,7 @@ static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
+ 	if (prev == NULL) {
+ 		if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
+ 			return atomic_pool_dma32;
+-		if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
++		if (atomic_pool_dma && (gfp & GFP_DMA))
+ 			return atomic_pool_dma;
+ 		return atomic_pool_kernel;
+ 	}
+diff --git a/kernel/locking/ww_rt_mutex.c b/kernel/locking/ww_rt_mutex.c
+index 0e00205cf467a..d1473c624105c 100644
+--- a/kernel/locking/ww_rt_mutex.c
++++ b/kernel/locking/ww_rt_mutex.c
+@@ -26,7 +26,7 @@ int ww_mutex_trylock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx)
+ 
+ 	if (__rt_mutex_trylock(&rtm->rtmutex)) {
+ 		ww_mutex_set_context_fastpath(lock, ww_ctx);
+-		mutex_acquire_nest(&rtm->dep_map, 0, 1, ww_ctx->dep_map, _RET_IP_);
++		mutex_acquire_nest(&rtm->dep_map, 0, 1, &ww_ctx->dep_map, _RET_IP_);
+ 		return 1;
+ 	}
+ 
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 8b410d982990c..05e4d6c28d1f5 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -46,6 +46,7 @@
+ #include <linux/oom.h>
+ #include <linux/tick.h>
+ #include <linux/rcupdate_trace.h>
++#include <linux/nmi.h>
+ 
+ #include "rcu.h"
+ 
+@@ -109,6 +110,8 @@ torture_param(int, shutdown_secs, 0, "Shutdown time (s), <= zero to disable.");
+ torture_param(int, stall_cpu, 0, "Stall duration (s), zero to disable.");
+ torture_param(int, stall_cpu_holdoff, 10,
+ 	     "Time to wait before starting stall (s).");
++torture_param(bool, stall_no_softlockup, false,
++	     "Avoid softlockup warning during cpu stall.");
+ torture_param(int, stall_cpu_irqsoff, 0, "Disable interrupts while stalling.");
+ torture_param(int, stall_cpu_block, 0, "Sleep while stalling.");
+ torture_param(int, stall_gp_kthread, 0,
+@@ -2052,6 +2055,8 @@ static int rcu_torture_stall(void *args)
+ #else
+ 				schedule_timeout_uninterruptible(HZ);
+ #endif
++			} else if (stall_no_softlockup) {
++				touch_softlockup_watchdog();
+ 			}
+ 		if (stall_cpu_irqsoff)
+ 			local_irq_enable();
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index ef8d36f580fc3..906b6887622d3 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2982,7 +2982,7 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
+ 	head->func = func;
+ 	head->next = NULL;
+ 	local_irq_save(flags);
+-	kasan_record_aux_stack(head);
++	kasan_record_aux_stack_noalloc(head);
+ 	rdp = this_cpu_ptr(&rcu_data);
+ 
+ 	/* Add the callback to our list. */
+@@ -3547,7 +3547,7 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+ 		return;
+ 	}
+ 
+-	kasan_record_aux_stack(ptr);
++	kasan_record_aux_stack_noalloc(ptr);
+ 	success = add_ptr_to_bulk_krc_lock(&krcp, &flags, ptr, !head);
+ 	if (!success) {
+ 		run_page_cache_worker(krcp);
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index f3947c49eee71..9e58e77b992ed 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -387,6 +387,7 @@ retry_ipi:
+ 			continue;
+ 		}
+ 		if (get_cpu() == cpu) {
++			mask_ofl_test |= mask;
+ 			put_cpu();
+ 			continue;
+ 		}
+diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
+index 893eece65bfda..ab67d97a84428 100644
+--- a/kernel/sched/cpuacct.c
++++ b/kernel/sched/cpuacct.c
+@@ -21,15 +21,11 @@ static const char * const cpuacct_stat_desc[] = {
+ 	[CPUACCT_STAT_SYSTEM] = "system",
+ };
+ 
+-struct cpuacct_usage {
+-	u64	usages[CPUACCT_STAT_NSTATS];
+-};
+-
+ /* track CPU usage of a group of tasks and its child groups */
+ struct cpuacct {
+ 	struct cgroup_subsys_state	css;
+ 	/* cpuusage holds pointer to a u64-type object on every CPU */
+-	struct cpuacct_usage __percpu	*cpuusage;
++	u64 __percpu	*cpuusage;
+ 	struct kernel_cpustat __percpu	*cpustat;
+ };
+ 
+@@ -49,7 +45,7 @@ static inline struct cpuacct *parent_ca(struct cpuacct *ca)
+ 	return css_ca(ca->css.parent);
+ }
+ 
+-static DEFINE_PER_CPU(struct cpuacct_usage, root_cpuacct_cpuusage);
++static DEFINE_PER_CPU(u64, root_cpuacct_cpuusage);
+ static struct cpuacct root_cpuacct = {
+ 	.cpustat	= &kernel_cpustat,
+ 	.cpuusage	= &root_cpuacct_cpuusage,
+@@ -68,7 +64,7 @@ cpuacct_css_alloc(struct cgroup_subsys_state *parent_css)
+ 	if (!ca)
+ 		goto out;
+ 
+-	ca->cpuusage = alloc_percpu(struct cpuacct_usage);
++	ca->cpuusage = alloc_percpu(u64);
+ 	if (!ca->cpuusage)
+ 		goto out_free_ca;
+ 
+@@ -99,7 +95,8 @@ static void cpuacct_css_free(struct cgroup_subsys_state *css)
+ static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
+ 				 enum cpuacct_stat_index index)
+ {
+-	struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
++	u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
++	u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
+ 	u64 data;
+ 
+ 	/*
+@@ -115,14 +112,17 @@ static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
+ 	raw_spin_rq_lock_irq(cpu_rq(cpu));
+ #endif
+ 
+-	if (index == CPUACCT_STAT_NSTATS) {
+-		int i = 0;
+-
+-		data = 0;
+-		for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
+-			data += cpuusage->usages[i];
+-	} else {
+-		data = cpuusage->usages[index];
++	switch (index) {
++	case CPUACCT_STAT_USER:
++		data = cpustat[CPUTIME_USER] + cpustat[CPUTIME_NICE];
++		break;
++	case CPUACCT_STAT_SYSTEM:
++		data = cpustat[CPUTIME_SYSTEM] + cpustat[CPUTIME_IRQ] +
++			cpustat[CPUTIME_SOFTIRQ];
++		break;
++	case CPUACCT_STAT_NSTATS:
++		data = *cpuusage;
++		break;
+ 	}
+ 
+ #ifndef CONFIG_64BIT
+@@ -132,10 +132,14 @@ static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
+ 	return data;
+ }
+ 
+-static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu, u64 val)
++static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu)
+ {
+-	struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
+-	int i;
++	u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
++	u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
++
++	/* Don't allow to reset global kernel_cpustat */
++	if (ca == &root_cpuacct)
++		return;
+ 
+ #ifndef CONFIG_64BIT
+ 	/*
+@@ -143,9 +147,10 @@ static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu, u64 val)
+ 	 */
+ 	raw_spin_rq_lock_irq(cpu_rq(cpu));
+ #endif
+-
+-	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
+-		cpuusage->usages[i] = val;
++	*cpuusage = 0;
++	cpustat[CPUTIME_USER] = cpustat[CPUTIME_NICE] = 0;
++	cpustat[CPUTIME_SYSTEM] = cpustat[CPUTIME_IRQ] = 0;
++	cpustat[CPUTIME_SOFTIRQ] = 0;
+ 
+ #ifndef CONFIG_64BIT
+ 	raw_spin_rq_unlock_irq(cpu_rq(cpu));
+@@ -196,7 +201,7 @@ static int cpuusage_write(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		return -EINVAL;
+ 
+ 	for_each_possible_cpu(cpu)
+-		cpuacct_cpuusage_write(ca, cpu, 0);
++		cpuacct_cpuusage_write(ca, cpu);
+ 
+ 	return 0;
+ }
+@@ -243,25 +248,10 @@ static int cpuacct_all_seq_show(struct seq_file *m, void *V)
+ 	seq_puts(m, "\n");
+ 
+ 	for_each_possible_cpu(cpu) {
+-		struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
+-
+ 		seq_printf(m, "%d", cpu);
+-
+-		for (index = 0; index < CPUACCT_STAT_NSTATS; index++) {
+-#ifndef CONFIG_64BIT
+-			/*
+-			 * Take rq->lock to make 64-bit read safe on 32-bit
+-			 * platforms.
+-			 */
+-			raw_spin_rq_lock_irq(cpu_rq(cpu));
+-#endif
+-
+-			seq_printf(m, " %llu", cpuusage->usages[index]);
+-
+-#ifndef CONFIG_64BIT
+-			raw_spin_rq_unlock_irq(cpu_rq(cpu));
+-#endif
+-		}
++		for (index = 0; index < CPUACCT_STAT_NSTATS; index++)
++			seq_printf(m, " %llu",
++				   cpuacct_cpuusage_read(ca, cpu, index));
+ 		seq_puts(m, "\n");
+ 	}
+ 	return 0;
+@@ -339,16 +329,11 @@ static struct cftype files[] = {
+ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
+ {
+ 	struct cpuacct *ca;
+-	int index = CPUACCT_STAT_SYSTEM;
+-	struct pt_regs *regs = get_irq_regs() ? : task_pt_regs(tsk);
+-
+-	if (regs && user_mode(regs))
+-		index = CPUACCT_STAT_USER;
+ 
+ 	rcu_read_lock();
+ 
+ 	for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
+-		__this_cpu_add(ca->cpuusage->usages[index], cputime);
++		__this_cpu_add(*ca->cpuusage, cputime);
+ 
+ 	rcu_read_unlock();
+ }
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 9392aea1804e5..b7ec42732b284 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -148,10 +148,10 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 
+ 	/* Add guest time to cpustat. */
+ 	if (task_nice(p) > 0) {
+-		cpustat[CPUTIME_NICE] += cputime;
++		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+-		cpustat[CPUTIME_USER] += cputime;
++		task_group_account_field(p, CPUTIME_USER, cputime);
+ 		cpustat[CPUTIME_GUEST] += cputime;
+ 	}
+ }
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 6e476f6d94351..f2cf047b25e56 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -6398,8 +6398,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 	 * pattern is IO completions.
+ 	 */
+ 	if (is_per_cpu_kthread(current) &&
++	    in_task() &&
+ 	    prev == smp_processor_id() &&
+-	    this_rq()->nr_running <= 1) {
++	    this_rq()->nr_running <= 1 &&
++	    asym_fits_capacity(task_util, prev)) {
+ 		return prev;
+ 	}
+ 
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 1652f2bb54b79..69b19d3af690f 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -34,13 +34,19 @@
+  * delayed on that resource such that nobody is advancing and the CPU
+  * goes idle. This leaves both workload and CPU unproductive.
+  *
+- * Naturally, the FULL state doesn't exist for the CPU resource at the
+- * system level, but exist at the cgroup level, means all non-idle tasks
+- * in a cgroup are delayed on the CPU resource which used by others outside
+- * of the cgroup or throttled by the cgroup cpu.max configuration.
+- *
+  *	SOME = nr_delayed_tasks != 0
+- *	FULL = nr_delayed_tasks != 0 && nr_running_tasks == 0
++ *	FULL = nr_delayed_tasks != 0 && nr_productive_tasks == 0
++ *
++ * What it means for a task to be productive is defined differently
++ * for each resource. For IO, productive means a running task. For
++ * memory, productive means a running task that isn't a reclaimer. For
++ * CPU, productive means an oncpu task.
++ *
++ * Naturally, the FULL state doesn't exist for the CPU resource at the
++ * system level, but exist at the cgroup level. At the cgroup level,
++ * FULL means all non-idle tasks in the cgroup are delayed on the CPU
++ * resource which is being used by others outside of the cgroup or
++ * throttled by the cgroup cpu.max configuration.
+  *
+  * The percentage of wallclock time spent in those compound stall
+  * states gives pressure numbers between 0 and 100 for each resource,
+@@ -81,13 +87,13 @@
+  *
+  *	threads = min(nr_nonidle_tasks, nr_cpus)
+  *	   SOME = min(nr_delayed_tasks / threads, 1)
+- *	   FULL = (threads - min(nr_running_tasks, threads)) / threads
++ *	   FULL = (threads - min(nr_productive_tasks, threads)) / threads
+  *
+  * For the 257 number crunchers on 256 CPUs, this yields:
+  *
+  *	threads = min(257, 256)
+  *	   SOME = min(1 / 256, 1)             = 0.4%
+- *	   FULL = (256 - min(257, 256)) / 256 = 0%
++ *	   FULL = (256 - min(256, 256)) / 256 = 0%
+  *
+  * For the 1 out of 4 memory-delayed tasks, this yields:
+  *
+@@ -112,7 +118,7 @@
+  * For each runqueue, we track:
+  *
+  *	   tSOME[cpu] = time(nr_delayed_tasks[cpu] != 0)
+- *	   tFULL[cpu] = time(nr_delayed_tasks[cpu] && !nr_running_tasks[cpu])
++ *	   tFULL[cpu] = time(nr_delayed_tasks[cpu] && !nr_productive_tasks[cpu])
+  *	tNONIDLE[cpu] = time(nr_nonidle_tasks[cpu] != 0)
+  *
+  * and then periodically aggregate:
+@@ -233,7 +239,8 @@ static bool test_state(unsigned int *tasks, enum psi_states state)
+ 	case PSI_MEM_SOME:
+ 		return unlikely(tasks[NR_MEMSTALL]);
+ 	case PSI_MEM_FULL:
+-		return unlikely(tasks[NR_MEMSTALL] && !tasks[NR_RUNNING]);
++		return unlikely(tasks[NR_MEMSTALL] &&
++			tasks[NR_RUNNING] == tasks[NR_MEMSTALL_RUNNING]);
+ 	case PSI_CPU_SOME:
+ 		return unlikely(tasks[NR_RUNNING] > tasks[NR_ONCPU]);
+ 	case PSI_CPU_FULL:
+@@ -710,10 +717,11 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 		if (groupc->tasks[t]) {
+ 			groupc->tasks[t]--;
+ 		} else if (!psi_bug) {
+-			printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u] clear=%x set=%x\n",
++			printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u %u] clear=%x set=%x\n",
+ 					cpu, t, groupc->tasks[0],
+ 					groupc->tasks[1], groupc->tasks[2],
+-					groupc->tasks[3], clear, set);
++					groupc->tasks[3], groupc->tasks[4],
++					clear, set);
+ 			psi_bug = 1;
+ 		}
+ 	}
+@@ -854,12 +862,15 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		int clear = TSK_ONCPU, set = 0;
+ 
+ 		/*
+-		 * When we're going to sleep, psi_dequeue() lets us handle
+-		 * TSK_RUNNING and TSK_IOWAIT here, where we can combine it
+-		 * with TSK_ONCPU and save walking common ancestors twice.
++		 * When we're going to sleep, psi_dequeue() lets us
++		 * handle TSK_RUNNING, TSK_MEMSTALL_RUNNING and
++		 * TSK_IOWAIT here, where we can combine it with
++		 * TSK_ONCPU and save walking common ancestors twice.
+ 		 */
+ 		if (sleep) {
+ 			clear |= TSK_RUNNING;
++			if (prev->in_memstall)
++				clear |= TSK_MEMSTALL_RUNNING;
+ 			if (prev->in_iowait)
+ 				set |= TSK_IOWAIT;
+ 		}
+@@ -908,7 +919,7 @@ void psi_memstall_enter(unsigned long *flags)
+ 	rq = this_rq_lock_irq(&rf);
+ 
+ 	current->in_memstall = 1;
+-	psi_task_change(current, 0, TSK_MEMSTALL);
++	psi_task_change(current, 0, TSK_MEMSTALL | TSK_MEMSTALL_RUNNING);
+ 
+ 	rq_unlock_irq(rq, &rf);
+ }
+@@ -937,7 +948,7 @@ void psi_memstall_leave(unsigned long *flags)
+ 	rq = this_rq_lock_irq(&rf);
+ 
+ 	current->in_memstall = 0;
+-	psi_task_change(current, TSK_MEMSTALL, 0);
++	psi_task_change(current, TSK_MEMSTALL | TSK_MEMSTALL_RUNNING, 0);
+ 
+ 	rq_unlock_irq(rq, &rf);
+ }
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index b48baaba2fc2e..7b4f4fbbb4048 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -52,11 +52,8 @@ void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
+ 	rt_b->rt_period_timer.function = sched_rt_period_timer;
+ }
+ 
+-static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
++static inline void do_start_rt_bandwidth(struct rt_bandwidth *rt_b)
+ {
+-	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
+-		return;
+-
+ 	raw_spin_lock(&rt_b->rt_runtime_lock);
+ 	if (!rt_b->rt_period_active) {
+ 		rt_b->rt_period_active = 1;
+@@ -75,6 +72,14 @@ static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
+ 	raw_spin_unlock(&rt_b->rt_runtime_lock);
+ }
+ 
++static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
++{
++	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
++		return;
++
++	do_start_rt_bandwidth(rt_b);
++}
++
+ void init_rt_rq(struct rt_rq *rt_rq)
+ {
+ 	struct rt_prio_array *array;
+@@ -1031,13 +1036,17 @@ static void update_curr_rt(struct rq *rq)
+ 
+ 	for_each_sched_rt_entity(rt_se) {
+ 		struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
++		int exceeded;
+ 
+ 		if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
+ 			raw_spin_lock(&rt_rq->rt_runtime_lock);
+ 			rt_rq->rt_time += delta_exec;
+-			if (sched_rt_runtime_exceeded(rt_rq))
++			exceeded = sched_rt_runtime_exceeded(rt_rq);
++			if (exceeded)
+ 				resched_curr(rq);
+ 			raw_spin_unlock(&rt_rq->rt_runtime_lock);
++			if (exceeded)
++				do_start_rt_bandwidth(sched_rt_bandwidth(rt_rq));
+ 		}
+ 	}
+ }
+@@ -2911,8 +2920,12 @@ static int sched_rt_global_validate(void)
+ 
+ static void sched_rt_do_global(void)
+ {
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
+ 	def_rt_bandwidth.rt_runtime = global_rt_runtime();
+ 	def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
++	raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
+ }
+ 
+ int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index cfb0893a83d45..3a3c826dd83a7 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -118,6 +118,9 @@ static inline void psi_enqueue(struct task_struct *p, bool wakeup)
+ 	if (static_branch_likely(&psi_disabled))
+ 		return;
+ 
++	if (p->in_memstall)
++		set |= TSK_MEMSTALL_RUNNING;
++
+ 	if (!wakeup || p->sched_psi_wake_requeue) {
+ 		if (p->in_memstall)
+ 			set |= TSK_MEMSTALL;
+@@ -148,7 +151,7 @@ static inline void psi_dequeue(struct task_struct *p, bool sleep)
+ 		return;
+ 
+ 	if (p->in_memstall)
+-		clear |= TSK_MEMSTALL;
++		clear |= (TSK_MEMSTALL | TSK_MEMSTALL_RUNNING);
+ 
+ 	psi_task_change(p, clear, 0);
+ }
+diff --git a/kernel/signal.c b/kernel/signal.c
+index dfcee3888b00e..cf97b9c4d665a 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2684,19 +2684,19 @@ relock:
+ 		goto relock;
+ 	}
+ 
+-	/* Has this task already been marked for death? */
+-	if (signal_group_exit(signal)) {
+-		ksig->info.si_signo = signr = SIGKILL;
+-		sigdelset(&current->pending.signal, SIGKILL);
+-		trace_signal_deliver(SIGKILL, SEND_SIG_NOINFO,
+-				&sighand->action[SIGKILL - 1]);
+-		recalc_sigpending();
+-		goto fatal;
+-	}
+-
+ 	for (;;) {
+ 		struct k_sigaction *ka;
+ 
++		/* Has this task already been marked for death? */
++		if (signal_group_exit(signal)) {
++			ksig->info.si_signo = signr = SIGKILL;
++			sigdelset(&current->pending.signal, SIGKILL);
++			trace_signal_deliver(SIGKILL, SEND_SIG_NOINFO,
++				&sighand->action[SIGKILL - 1]);
++			recalc_sigpending();
++			goto fatal;
++		}
++
+ 		if (unlikely(current->jobctl & JOBCTL_STOP_PENDING) &&
+ 		    do_signal_stop(0))
+ 			goto relock;
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index b8a14d2fb5ba6..bcad1a1e5dcf1 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -107,7 +107,7 @@ static u64 suspend_start;
+  * This delay could be due to SMIs, NMIs, or to VCPU preemptions.  Used as
+  * a lower bound for cs->uncertainty_margin values when registering clocks.
+  */
+-#define WATCHDOG_MAX_SKEW (50 * NSEC_PER_USEC)
++#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
+ 
+ #ifdef CONFIG_CLOCKSOURCE_WATCHDOG
+ static void clocksource_watchdog_work(struct work_struct *work);
+@@ -205,17 +205,24 @@ EXPORT_SYMBOL_GPL(max_cswd_read_retries);
+ static int verify_n_cpus = 8;
+ module_param(verify_n_cpus, int, 0644);
+ 
+-static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
++enum wd_read_status {
++	WD_READ_SUCCESS,
++	WD_READ_UNSTABLE,
++	WD_READ_SKIP
++};
++
++static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
+ {
+ 	unsigned int nretries;
+-	u64 wd_end, wd_delta;
+-	int64_t wd_delay;
++	u64 wd_end, wd_end2, wd_delta;
++	int64_t wd_delay, wd_seq_delay;
+ 
+ 	for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
+ 		local_irq_disable();
+ 		*wdnow = watchdog->read(watchdog);
+ 		*csnow = cs->read(cs);
+ 		wd_end = watchdog->read(watchdog);
++		wd_end2 = watchdog->read(watchdog);
+ 		local_irq_enable();
+ 
+ 		wd_delta = clocksource_delta(wd_end, *wdnow, watchdog->mask);
+@@ -226,13 +233,34 @@ static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
+ 				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
+ 					smp_processor_id(), watchdog->name, nretries);
+ 			}
+-			return true;
++			return WD_READ_SUCCESS;
+ 		}
++
++		/*
++		 * Now compute delay in consecutive watchdog read to see if
++		 * there is too much external interferences that cause
++		 * significant delay in reading both clocksource and watchdog.
++		 *
++		 * If consecutive WD read-back delay > WATCHDOG_MAX_SKEW/2,
++		 * report system busy, reinit the watchdog and skip the current
++		 * watchdog test.
++		 */
++		wd_delta = clocksource_delta(wd_end2, wd_end, watchdog->mask);
++		wd_seq_delay = clocksource_cyc2ns(wd_delta, watchdog->mult, watchdog->shift);
++		if (wd_seq_delay > WATCHDOG_MAX_SKEW/2)
++			goto skip_test;
+ 	}
+ 
+ 	pr_warn("timekeeping watchdog on CPU%d: %s read-back delay of %lldns, attempt %d, marking unstable\n",
+ 		smp_processor_id(), watchdog->name, wd_delay, nretries);
+-	return false;
++	return WD_READ_UNSTABLE;
++
++skip_test:
++	pr_info("timekeeping watchdog on CPU%d: %s wd-wd read-back delay of %lldns\n",
++		smp_processor_id(), watchdog->name, wd_seq_delay);
++	pr_info("wd-%s-wd read-back delay of %lldns, clock-skew test skipped!\n",
++		cs->name, wd_delay);
++	return WD_READ_SKIP;
+ }
+ 
+ static u64 csnow_mid;
+@@ -356,6 +384,7 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 	int next_cpu, reset_pending;
+ 	int64_t wd_nsec, cs_nsec;
+ 	struct clocksource *cs;
++	enum wd_read_status read_ret;
+ 	u32 md;
+ 
+ 	spin_lock(&watchdog_lock);
+@@ -373,9 +402,12 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 			continue;
+ 		}
+ 
+-		if (!cs_watchdog_read(cs, &csnow, &wdnow)) {
+-			/* Clock readout unreliable, so give it up. */
+-			__clocksource_unstable(cs);
++		read_ret = cs_watchdog_read(cs, &csnow, &wdnow);
++
++		if (read_ret != WD_READ_SUCCESS) {
++			if (read_ret == WD_READ_UNSTABLE)
++				/* Clock readout unreliable, so give it up. */
++				__clocksource_unstable(cs);
+ 			continue;
+ 		}
+ 
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index ae9755037b7ee..e36d184615fb7 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1400,9 +1400,6 @@ static const struct bpf_func_proto bpf_perf_prog_read_value_proto = {
+ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
+ 	   void *, buf, u32, size, u64, flags)
+ {
+-#ifndef CONFIG_X86
+-	return -ENOENT;
+-#else
+ 	static const u32 br_entry_size = sizeof(struct perf_branch_entry);
+ 	struct perf_branch_stack *br_stack = ctx->data->br_stack;
+ 	u32 to_copy;
+@@ -1411,7 +1408,7 @@ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
+ 		return -EINVAL;
+ 
+ 	if (unlikely(!br_stack))
+-		return -EINVAL;
++		return -ENOENT;
+ 
+ 	if (flags & BPF_F_GET_BRANCH_RECORDS_SIZE)
+ 		return br_stack->nr * br_entry_size;
+@@ -1423,7 +1420,6 @@ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
+ 	memcpy(buf, br_stack->entries, to_copy);
+ 
+ 	return to_copy;
+-#endif
+ }
+ 
+ static const struct bpf_func_proto bpf_read_branch_records_proto = {
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index ca9c13b2ecf4b..4b5a637d3ec00 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -2054,6 +2054,13 @@ static int create_synth_event(const char *raw_command)
+ 
+ 	last_cmd_set(raw_command);
+ 
++	name = raw_command;
++
++	/* Don't try to process if not our system */
++	if (name[0] != 's' || name[1] != ':')
++		return -ECANCELED;
++	name += 2;
++
+ 	p = strpbrk(raw_command, " \t");
+ 	if (!p) {
+ 		synth_err(SYNTH_ERR_INVALID_CMD, 0);
+@@ -2062,12 +2069,6 @@ static int create_synth_event(const char *raw_command)
+ 
+ 	fields = skip_spaces(p);
+ 
+-	name = raw_command;
+-
+-	if (name[0] != 's' || name[1] != ':')
+-		return -ECANCELED;
+-	name += 2;
+-
+ 	/* This interface accepts group name prefix */
+ 	if (strchr(name, '/')) {
+ 		len = str_has_prefix(name, SYNTH_SYSTEM "/");
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 33272a7b69129..1bb85f7a4593a 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1175,15 +1175,18 @@ static int probes_profile_seq_show(struct seq_file *m, void *v)
+ {
+ 	struct dyn_event *ev = v;
+ 	struct trace_kprobe *tk;
++	unsigned long nmissed;
+ 
+ 	if (!is_trace_kprobe(ev))
+ 		return 0;
+ 
+ 	tk = to_trace_kprobe(ev);
++	nmissed = trace_kprobe_is_return(tk) ?
++		tk->rp.kp.nmissed + tk->rp.nmissed : tk->rp.kp.nmissed;
+ 	seq_printf(m, "  %-44s %15lu %15lu\n",
+ 		   trace_probe_name(&tk->tp),
+ 		   trace_kprobe_nhit(tk),
+-		   tk->rp.kp.nmissed);
++		   nmissed);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 7520d43aed554..b58674e8644a6 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2123,6 +2123,13 @@ out_unhook_irq:
+ 	return -EINVAL;
+ }
+ 
++static void osnoise_unhook_events(void)
++{
++	unhook_thread_events();
++	unhook_softirq_events();
++	unhook_irq_events();
++}
++
+ /*
+  * osnoise_workload_start - start the workload and hook to events
+  */
+@@ -2155,7 +2162,14 @@ static int osnoise_workload_start(void)
+ 
+ 	retval = start_per_cpu_kthreads();
+ 	if (retval) {
+-		unhook_irq_events();
++		trace_osnoise_callback_enabled = false;
++		/*
++		 * Make sure that ftrace_nmi_enter/exit() see
++		 * trace_osnoise_callback_enabled as false before continuing.
++		 */
++		barrier();
++
++		osnoise_unhook_events();
+ 		return retval;
+ 	}
+ 
+@@ -2186,9 +2200,7 @@ static void osnoise_workload_stop(void)
+ 
+ 	stop_per_cpu_kthreads();
+ 
+-	unhook_irq_events();
+-	unhook_softirq_events();
+-	unhook_thread_events();
++	osnoise_unhook_events();
+ }
+ 
+ static void osnoise_tracer_start(struct trace_array *tr)
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 3ed2a3f372972..bb4605b60de79 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -356,6 +356,8 @@ static int __parse_imm_string(char *str, char **pbuf, int offs)
+ 		return -EINVAL;
+ 	}
+ 	*pbuf = kstrndup(str, len - 1, GFP_KERNEL);
++	if (!*pbuf)
++		return -ENOMEM;
+ 	return 0;
+ }
+ 
+diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
+index 8bfcd3b094226..f755bde42fd07 100644
+--- a/kernel/trace/trace_syscalls.c
++++ b/kernel/trace/trace_syscalls.c
+@@ -323,8 +323,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
+ 
+ 	trace_ctx = tracing_gen_ctx();
+ 
+-	buffer = tr->array_buffer.buffer;
+-	event = trace_buffer_lock_reserve(buffer,
++	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
+ 			sys_data->enter_event->event.type, size, trace_ctx);
+ 	if (!event)
+ 		return;
+@@ -367,8 +366,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
+ 
+ 	trace_ctx = tracing_gen_ctx();
+ 
+-	buffer = tr->array_buffer.buffer;
+-	event = trace_buffer_lock_reserve(buffer,
++	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
+ 			sys_data->exit_event->event.type, sizeof(*entry),
+ 			trace_ctx);
+ 	if (!event)
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index f5f0039d31e5a..78ec1c16ccf4b 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1619,6 +1619,11 @@ create_local_trace_uprobe(char *name, unsigned long offs,
+ 	tu->path = path;
+ 	tu->ref_ctr_offset = ref_ctr_offset;
+ 	tu->filename = kstrdup(name, GFP_KERNEL);
++	if (!tu->filename) {
++		ret = -ENOMEM;
++		goto error;
++	}
++
+ 	init_trace_event_call(tu);
+ 
+ 	ptype = is_ret_probe(tu) ? PROBE_PRINT_RETURN : PROBE_PRINT_NORMAL;
+diff --git a/kernel/tsacct.c b/kernel/tsacct.c
+index f00de83d02462..1d261fbe367bf 100644
+--- a/kernel/tsacct.c
++++ b/kernel/tsacct.c
+@@ -38,11 +38,10 @@ void bacct_add_tsk(struct user_namespace *user_ns,
+ 	stats->ac_btime = clamp_t(time64_t, btime, 0, U32_MAX);
+ 	stats->ac_btime64 = btime;
+ 
+-	if (thread_group_leader(tsk)) {
++	if (tsk->flags & PF_EXITING)
+ 		stats->ac_exitcode = tsk->exit_code;
+-		if (tsk->flags & PF_FORKNOEXEC)
+-			stats->ac_flag |= AFORK;
+-	}
++	if (thread_group_leader(tsk) && (tsk->flags & PF_FORKNOEXEC))
++		stats->ac_flag |= AFORK;
+ 	if (tsk->flags & PF_SUPERPRIV)
+ 		stats->ac_flag |= ASU;
+ 	if (tsk->flags & PF_DUMPCORE)
+diff --git a/lib/kunit/test.c b/lib/kunit/test.c
+index 3bd741e50a2d3..f96498ede2cc5 100644
+--- a/lib/kunit/test.c
++++ b/lib/kunit/test.c
+@@ -504,16 +504,18 @@ int kunit_run_tests(struct kunit_suite *suite)
+ 		struct kunit_result_stats param_stats = { 0 };
+ 		test_case->status = KUNIT_SKIPPED;
+ 
+-		if (test_case->generate_params) {
++		if (!test_case->generate_params) {
++			/* Non-parameterised test. */
++			kunit_run_case_catch_errors(suite, test_case, &test);
++			kunit_update_stats(&param_stats, test.status);
++		} else {
+ 			/* Get initial param. */
+ 			param_desc[0] = '\0';
+ 			test.param_value = test_case->generate_params(NULL, param_desc);
+-		}
+ 
+-		do {
+-			kunit_run_case_catch_errors(suite, test_case, &test);
++			while (test.param_value) {
++				kunit_run_case_catch_errors(suite, test_case, &test);
+ 
+-			if (test_case->generate_params) {
+ 				if (param_desc[0] == '\0') {
+ 					snprintf(param_desc, sizeof(param_desc),
+ 						 "param-%d", test.param_index);
+@@ -530,11 +532,11 @@ int kunit_run_tests(struct kunit_suite *suite)
+ 				param_desc[0] = '\0';
+ 				test.param_value = test_case->generate_params(test.param_value, param_desc);
+ 				test.param_index++;
+-			}
+ 
+-			kunit_update_stats(&param_stats, test.status);
++				kunit_update_stats(&param_stats, test.status);
++			}
++		}
+ 
+-		} while (test.param_value);
+ 
+ 		kunit_print_test_stats(&test, param_stats);
+ 
+diff --git a/lib/logic_iomem.c b/lib/logic_iomem.c
+index 9bdfde0c0f86d..549b22d4bcde1 100644
+--- a/lib/logic_iomem.c
++++ b/lib/logic_iomem.c
+@@ -21,15 +21,15 @@ struct logic_iomem_area {
+ 
+ #define AREA_SHIFT	24
+ #define MAX_AREA_SIZE	(1 << AREA_SHIFT)
+-#define MAX_AREAS	((1ULL<<32) / MAX_AREA_SIZE)
++#define MAX_AREAS	((1U << 31) / MAX_AREA_SIZE)
+ #define AREA_BITS	((MAX_AREAS - 1) << AREA_SHIFT)
+ #define AREA_MASK	(MAX_AREA_SIZE - 1)
+ #ifdef CONFIG_64BIT
+ #define IOREMAP_BIAS	0xDEAD000000000000UL
+ #define IOREMAP_MASK	0xFFFFFFFF00000000UL
+ #else
+-#define IOREMAP_BIAS	0
+-#define IOREMAP_MASK	0
++#define IOREMAP_BIAS	0x80000000UL
++#define IOREMAP_MASK	0x80000000UL
+ #endif
+ 
+ static DEFINE_MUTEX(regions_mtx);
+@@ -79,7 +79,7 @@ static void __iomem *real_ioremap(phys_addr_t offset, size_t size)
+ static void real_iounmap(void __iomem *addr)
+ {
+ 	WARN(1, "invalid iounmap for addr 0x%llx\n",
+-	     (unsigned long long __force)addr);
++	     (unsigned long long)(uintptr_t __force)addr);
+ }
+ #endif /* CONFIG_LOGIC_IOMEM_FALLBACK */
+ 
+@@ -173,7 +173,7 @@ EXPORT_SYMBOL(iounmap);
+ static u##sz real_raw_read ## op(const volatile void __iomem *addr)	\
+ {									\
+ 	WARN(1, "Invalid read" #op " at address %llx\n",		\
+-	     (unsigned long long __force)addr);				\
++	     (unsigned long long)(uintptr_t __force)addr);		\
+ 	return (u ## sz)~0ULL;						\
+ }									\
+ 									\
+@@ -181,7 +181,8 @@ static void real_raw_write ## op(u ## sz val,				\
+ 				 volatile void __iomem *addr)		\
+ {									\
+ 	WARN(1, "Invalid writeq" #op " of 0x%llx at address %llx\n",	\
+-	     (unsigned long long)val, (unsigned long long __force)addr);\
++	     (unsigned long long)val,					\
++	     (unsigned long long)(uintptr_t __force)addr);\
+ }									\
+ 
+ MAKE_FALLBACK(b, 8);
+@@ -194,14 +195,14 @@ MAKE_FALLBACK(q, 64);
+ static void real_memset_io(volatile void __iomem *addr, int value, size_t size)
+ {
+ 	WARN(1, "Invalid memset_io at address 0x%llx\n",
+-	     (unsigned long long __force)addr);
++	     (unsigned long long)(uintptr_t __force)addr);
+ }
+ 
+ static void real_memcpy_fromio(void *buffer, const volatile void __iomem *addr,
+ 			       size_t size)
+ {
+ 	WARN(1, "Invalid memcpy_fromio at address 0x%llx\n",
+-	     (unsigned long long __force)addr);
++	     (unsigned long long)(uintptr_t __force)addr);
+ 
+ 	memset(buffer, 0xff, size);
+ }
+@@ -210,7 +211,7 @@ static void real_memcpy_toio(volatile void __iomem *addr, const void *buffer,
+ 			     size_t size)
+ {
+ 	WARN(1, "Invalid memcpy_toio at address 0x%llx\n",
+-	     (unsigned long long __force)addr);
++	     (unsigned long long)(uintptr_t __force)addr);
+ }
+ #endif /* CONFIG_LOGIC_IOMEM_FALLBACK */
+ 
+diff --git a/lib/mpi/mpi-mod.c b/lib/mpi/mpi-mod.c
+index 47bc59edd4ff9..54fcc01564d9d 100644
+--- a/lib/mpi/mpi-mod.c
++++ b/lib/mpi/mpi-mod.c
+@@ -40,6 +40,8 @@ mpi_barrett_t mpi_barrett_init(MPI m, int copy)
+ 
+ 	mpi_normalize(m);
+ 	ctx = kcalloc(1, sizeof(*ctx), GFP_KERNEL);
++	if (!ctx)
++		return NULL;
+ 
+ 	if (copy) {
+ 		ctx->m = mpi_copy(m);
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index adae39567264f..0c5cb2d6436a4 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -14683,7 +14683,7 @@ static struct tail_call_test tail_call_tests[] = {
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.flags = FLAG_NEED_STATE | FLAG_RESULT_IN_STATE,
+-		.result = (MAX_TAIL_CALL_CNT + 1 + 1) * MAX_TESTRUNS,
++		.result = (MAX_TAIL_CALL_CNT + 1) * MAX_TESTRUNS,
+ 	},
+ 	{
+ 		"Tail call count preserved across function calls",
+@@ -14705,7 +14705,7 @@ static struct tail_call_test tail_call_tests[] = {
+ 		},
+ 		.stack_depth = 8,
+ 		.flags = FLAG_NEED_STATE | FLAG_RESULT_IN_STATE,
+-		.result = (MAX_TAIL_CALL_CNT + 1 + 1) * MAX_TESTRUNS,
++		.result = (MAX_TAIL_CALL_CNT + 1) * MAX_TESTRUNS,
+ 	},
+ 	{
+ 		"Tail call error path, NULL target",
+diff --git a/lib/test_hmm.c b/lib/test_hmm.c
+index e2ce8f9b7605e..767538089a62e 100644
+--- a/lib/test_hmm.c
++++ b/lib/test_hmm.c
+@@ -1086,9 +1086,33 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
+ 	return 0;
+ }
+ 
++static int dmirror_fops_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	unsigned long addr;
++
++	for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) {
++		struct page *page;
++		int ret;
++
++		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
++		if (!page)
++			return -ENOMEM;
++
++		ret = vm_insert_page(vma, addr, page);
++		if (ret) {
++			__free_page(page);
++			return ret;
++		}
++		put_page(page);
++	}
++
++	return 0;
++}
++
+ static const struct file_operations dmirror_fops = {
+ 	.open		= dmirror_fops_open,
+ 	.release	= dmirror_fops_release,
++	.mmap		= dmirror_fops_mmap,
+ 	.unlocked_ioctl = dmirror_fops_unlocked_ioctl,
+ 	.llseek		= default_llseek,
+ 	.owner		= THIS_MODULE,
+diff --git a/lib/test_meminit.c b/lib/test_meminit.c
+index e4f706a404b3a..3ca717f113977 100644
+--- a/lib/test_meminit.c
++++ b/lib/test_meminit.c
+@@ -337,6 +337,7 @@ static int __init do_kmem_cache_size_bulk(int size, int *total_failures)
+ 		if (num)
+ 			kmem_cache_free_bulk(c, num, objects);
+ 	}
++	kmem_cache_destroy(c);
+ 	*total_failures += fail;
+ 	return 1;
+ }
+diff --git a/mm/hmm.c b/mm/hmm.c
+index 842e265992380..bd56641c79d4e 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -300,7 +300,8 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
+ 	 * Since each architecture defines a struct page for the zero page, just
+ 	 * fall through and treat it like a normal page.
+ 	 */
+-	if (pte_special(pte) && !pte_devmap(pte) &&
++	if (!vm_normal_page(walk->vma, addr, pte) &&
++	    !pte_devmap(pte) &&
+ 	    !is_zero_pfn(pte_pfn(pte))) {
+ 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
+ 			pte_unmap(ptep);
+@@ -518,7 +519,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
+ 	struct hmm_range *range = hmm_vma_walk->range;
+ 	struct vm_area_struct *vma = walk->vma;
+ 
+-	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) &&
++	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
+ 	    vma->vm_flags & VM_READ)
+ 		return 0;
+ 
+diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
+index d8ccff4c1275e..47ed4fc33a29e 100644
+--- a/mm/kasan/quarantine.c
++++ b/mm/kasan/quarantine.c
+@@ -132,11 +132,22 @@ static void *qlink_to_object(struct qlist_node *qlink, struct kmem_cache *cache)
+ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
+ {
+ 	void *object = qlink_to_object(qlink, cache);
++	struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);
+ 	unsigned long flags;
+ 
+ 	if (IS_ENABLED(CONFIG_SLAB))
+ 		local_irq_save(flags);
+ 
++	/*
++	 * If init_on_free is enabled and KASAN's free metadata is stored in
++	 * the object, zero the metadata. Otherwise, the object's memory will
++	 * not be properly zeroed, as KASAN saves the metadata after the slab
++	 * allocator zeroes the object.
++	 */
++	if (slab_want_init_on_free(cache) &&
++	    cache->kasan_info.free_meta_offset == 0)
++		memzero_explicit(meta, sizeof(*meta));
++
+ 	/*
+ 	 * As the object now gets freed from the quarantine, assume that its
+ 	 * free track is no longer valid.
+diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
+index 4a4929b29a237..94136f84b4497 100644
+--- a/mm/kasan/shadow.c
++++ b/mm/kasan/shadow.c
+@@ -498,7 +498,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ 
+ #else /* CONFIG_KASAN_VMALLOC */
+ 
+-int kasan_module_alloc(void *addr, size_t size)
++int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask)
+ {
+ 	void *ret;
+ 	size_t scaled_size;
+@@ -520,9 +520,14 @@ int kasan_module_alloc(void *addr, size_t size)
+ 			__builtin_return_address(0));
+ 
+ 	if (ret) {
++		struct vm_struct *vm = find_vm_area(addr);
+ 		__memset(ret, KASAN_SHADOW_INIT, shadow_size);
+-		find_vm_area(addr)->flags |= VM_KASAN;
++		vm->flags |= VM_KASAN;
+ 		kmemleak_ignore(ret);
++
++		if (vm->flags & VM_DEFER_KMEMLEAK)
++			kmemleak_vmalloc(vm, size, gfp_mask);
++
+ 		return 0;
+ 	}
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index c5952749ad40b..d9492eaa47138 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4204,7 +4204,9 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
+ 	va_list args;
+ 	static DEFINE_RATELIMIT_STATE(nopage_rs, 10*HZ, 1);
+ 
+-	if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
++	if ((gfp_mask & __GFP_NOWARN) ||
++	     !__ratelimit(&nopage_rs) ||
++	     ((gfp_mask & __GFP_DMA) && !has_managed_dma()))
+ 		return;
+ 
+ 	va_start(args, fmt);
+@@ -9460,3 +9462,18 @@ bool take_page_off_buddy(struct page *page)
+ 	return ret;
+ }
+ #endif
++
++#ifdef CONFIG_ZONE_DMA
++bool has_managed_dma(void)
++{
++	struct pglist_data *pgdat;
++
++	for_each_online_pgdat(pgdat) {
++		struct zone *zone = &pgdat->node_zones[ZONE_DMA];
++
++		if (managed_zone(zone))
++			return true;
++	}
++	return false;
++}
++#endif /* CONFIG_ZONE_DMA */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 18f93c2d68f16..f6b0f53f8a320 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -554,7 +554,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ 	struct shmem_inode_info *info;
+ 	struct page *page;
+ 	unsigned long batch = sc ? sc->nr_to_scan : 128;
+-	int removed = 0, split = 0;
++	int split = 0;
+ 
+ 	if (list_empty(&sbinfo->shrinklist))
+ 		return SHRINK_STOP;
+@@ -569,7 +569,6 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ 		/* inode is about to be evicted */
+ 		if (!inode) {
+ 			list_del_init(&info->shrinklist);
+-			removed++;
+ 			goto next;
+ 		}
+ 
+@@ -577,12 +576,12 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ 		if (round_up(inode->i_size, PAGE_SIZE) ==
+ 				round_up(inode->i_size, HPAGE_PMD_SIZE)) {
+ 			list_move(&info->shrinklist, &to_remove);
+-			removed++;
+ 			goto next;
+ 		}
+ 
+ 		list_move(&info->shrinklist, &list);
+ next:
++		sbinfo->shrinklist_len--;
+ 		if (!--batch)
+ 			break;
+ 	}
+@@ -602,7 +601,7 @@ next:
+ 		inode = &info->vfs_inode;
+ 
+ 		if (nr_to_split && split >= nr_to_split)
+-			goto leave;
++			goto move_back;
+ 
+ 		page = find_get_page(inode->i_mapping,
+ 				(inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT);
+@@ -616,38 +615,44 @@ next:
+ 		}
+ 
+ 		/*
+-		 * Leave the inode on the list if we failed to lock
+-		 * the page at this time.
++		 * Move the inode on the list back to shrinklist if we failed
++		 * to lock the page at this time.
+ 		 *
+ 		 * Waiting for the lock may lead to deadlock in the
+ 		 * reclaim path.
+ 		 */
+ 		if (!trylock_page(page)) {
+ 			put_page(page);
+-			goto leave;
++			goto move_back;
+ 		}
+ 
+ 		ret = split_huge_page(page);
+ 		unlock_page(page);
+ 		put_page(page);
+ 
+-		/* If split failed leave the inode on the list */
++		/* If split failed move the inode on the list back to shrinklist */
+ 		if (ret)
+-			goto leave;
++			goto move_back;
+ 
+ 		split++;
+ drop:
+ 		list_del_init(&info->shrinklist);
+-		removed++;
+-leave:
++		goto put;
++move_back:
++		/*
++		 * Make sure the inode is either on the global list or deleted
++		 * from any local list before iput() since it could be deleted
++		 * in another thread once we put the inode (then the local list
++		 * is corrupted).
++		 */
++		spin_lock(&sbinfo->shrinklist_lock);
++		list_move(&info->shrinklist, &sbinfo->shrinklist);
++		sbinfo->shrinklist_len++;
++		spin_unlock(&sbinfo->shrinklist_lock);
++put:
+ 		iput(inode);
+ 	}
+ 
+-	spin_lock(&sbinfo->shrinklist_lock);
+-	list_splice_tail(&list, &sbinfo->shrinklist);
+-	sbinfo->shrinklist_len -= removed;
+-	spin_unlock(&sbinfo->shrinklist_lock);
+-
+ 	return split;
+ }
+ 
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index d2a00ad4e1dd1..bf3c2fe8f5285 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3074,7 +3074,8 @@ again:
+ 	clear_vm_uninitialized_flag(area);
+ 
+ 	size = PAGE_ALIGN(size);
+-	kmemleak_vmalloc(area, size, gfp_mask);
++	if (!(vm_flags & VM_DEFER_KMEMLEAK))
++		kmemleak_vmalloc(area, size, gfp_mask);
+ 
+ 	return addr;
+ 
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index cfca99e295b80..02f43f3e2c564 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -536,7 +536,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 	ax25_cb *ax25;
+ 	struct net_device *dev;
+ 	char devname[IFNAMSIZ];
+-	unsigned long opt;
++	unsigned int opt;
+ 	int res = 0;
+ 
+ 	if (level != SOL_AX25)
+@@ -568,7 +568,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_T1:
+-		if (opt < 1 || opt > ULONG_MAX / HZ) {
++		if (opt < 1 || opt > UINT_MAX / HZ) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+@@ -577,7 +577,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_T2:
+-		if (opt < 1 || opt > ULONG_MAX / HZ) {
++		if (opt < 1 || opt > UINT_MAX / HZ) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+@@ -593,7 +593,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_T3:
+-		if (opt < 1 || opt > ULONG_MAX / HZ) {
++		if (opt < 1 || opt > UINT_MAX / HZ) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+@@ -601,7 +601,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case AX25_IDLE:
+-		if (opt > ULONG_MAX / (60 * HZ)) {
++		if (opt > UINT_MAX / (60 * HZ)) {
+ 			res = -EINVAL;
+ 			break;
+ 		}
+diff --git a/net/batman-adv/netlink.c b/net/batman-adv/netlink.c
+index 29276284d281c..00875e1d8c44c 100644
+--- a/net/batman-adv/netlink.c
++++ b/net/batman-adv/netlink.c
+@@ -1368,21 +1368,21 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
+ 	{
+ 		.cmd = BATADV_CMD_TP_METER,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_tp_meter_start,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_TP_METER_CANCEL,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_tp_meter_cancel,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_ROUTING_ALGOS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_algo_dump,
+ 	},
+ 	{
+@@ -1397,68 +1397,68 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
+ 	{
+ 		.cmd = BATADV_CMD_GET_TRANSTABLE_LOCAL,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_tt_local_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_TRANSTABLE_GLOBAL,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_tt_global_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_ORIGINATORS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_orig_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_NEIGHBORS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_hardif_neigh_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_GATEWAYS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_gw_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_BLA_CLAIM,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_bla_claim_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_BLA_BACKBONE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_bla_backbone_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_DAT_CACHE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_dat_cache_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_GET_MCAST_FLAGS,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.dumpit = batadv_mcast_flags_dump,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_SET_MESH,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_set_mesh,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH,
+ 	},
+ 	{
+ 		.cmd = BATADV_CMD_SET_HARDIF,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_set_hardif,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH |
+ 				  BATADV_FLAG_NEED_HARDIF,
+@@ -1474,7 +1474,7 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
+ 	{
+ 		.cmd = BATADV_CMD_SET_VLAN,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 		.doit = batadv_netlink_set_vlan,
+ 		.internal_flags = BATADV_FLAG_NEED_MESH |
+ 				  BATADV_FLAG_NEED_VLAN,
+diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
+index 0a2d78e811cf5..83eb84e8e688f 100644
+--- a/net/bluetooth/cmtp/core.c
++++ b/net/bluetooth/cmtp/core.c
+@@ -501,9 +501,7 @@ static int __init cmtp_init(void)
+ {
+ 	BT_INFO("CMTP (CAPI Emulation) ver %s", VERSION);
+ 
+-	cmtp_init_sockets();
+-
+-	return 0;
++	return cmtp_init_sockets();
+ }
+ 
+ static void __exit cmtp_exit(void)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2cf77d76c50be..6c00ce302f095 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3883,6 +3883,7 @@ int hci_register_dev(struct hci_dev *hdev)
+ 	return id;
+ 
+ err_wqueue:
++	debugfs_remove_recursive(hdev->debugfs);
+ 	destroy_workqueue(hdev->workqueue);
+ 	destroy_workqueue(hdev->req_workqueue);
+ err:
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 7d0db1ca12482..8882c6dfb48f4 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1326,8 +1326,10 @@ static void hci_cc_le_set_ext_adv_enable(struct hci_dev *hdev,
+ 					   &conn->le_conn_timeout,
+ 					   conn->conn_timeout);
+ 	} else {
+-		if (adv) {
+-			adv->enabled = false;
++		if (cp->num_of_sets) {
++			if (adv)
++				adv->enabled = false;
++
+ 			/* If just one instance was disabled check if there are
+ 			 * any other instance enabled before clearing HCI_LE_ADV
+ 			 */
+@@ -4445,7 +4447,6 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ {
+ 	struct hci_ev_sync_conn_complete *ev = (void *) skb->data;
+ 	struct hci_conn *conn;
+-	unsigned int notify_evt;
+ 
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+@@ -4517,22 +4518,18 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ 	}
+ 
+ 	bt_dev_dbg(hdev, "SCO connected with air mode: %02x", ev->air_mode);
+-
+-	switch (ev->air_mode) {
+-	case 0x02:
+-		notify_evt = HCI_NOTIFY_ENABLE_SCO_CVSD;
+-		break;
+-	case 0x03:
+-		notify_evt = HCI_NOTIFY_ENABLE_SCO_TRANSP;
+-		break;
+-	}
+-
+ 	/* Notify only in case of SCO over HCI transport data path which
+ 	 * is zero and non-zero value shall be non-HCI transport data path
+ 	 */
+-	if (conn->codec.data_path == 0) {
+-		if (hdev->notify)
+-			hdev->notify(hdev, notify_evt);
++	if (conn->codec.data_path == 0 && hdev->notify) {
++		switch (ev->air_mode) {
++		case 0x02:
++			hdev->notify(hdev, HCI_NOTIFY_ENABLE_SCO_CVSD);
++			break;
++		case 0x03:
++			hdev->notify(hdev, HCI_NOTIFY_ENABLE_SCO_TRANSP);
++			break;
++		}
+ 	}
+ 
+ 	hci_connect_cfm(conn, ev->status);
+@@ -5825,7 +5822,8 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		struct hci_ev_le_advertising_info *ev = ptr;
+ 		s8 rssi;
+ 
+-		if (ev->length <= HCI_MAX_AD_LENGTH) {
++		if (ev->length <= HCI_MAX_AD_LENGTH &&
++		    ev->data + ev->length <= skb_tail_pointer(skb)) {
+ 			rssi = ev->data[ev->length];
+ 			process_adv_report(hdev, ev->evt_type, &ev->bdaddr,
+ 					   ev->bdaddr_type, NULL, 0, rssi,
+@@ -5835,6 +5833,11 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		}
+ 
+ 		ptr += sizeof(*ev) + ev->length + 1;
++
++		if (ptr > (void *) skb_tail_pointer(skb) - sizeof(*ev)) {
++			bt_dev_err(hdev, "Malicious advertising data. Stopping processing");
++			break;
++		}
+ 	}
+ 
+ 	hci_dev_unlock(hdev);
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 92611bfc0b9e1..cd71a312feb14 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -1935,7 +1935,7 @@ int __hci_req_enable_ext_advertising(struct hci_request *req, u8 instance)
+ 	/* Set duration per instance since controller is responsible for
+ 	 * scheduling it.
+ 	 */
+-	if (adv_instance && adv_instance->duration) {
++	if (adv_instance && adv_instance->timeout) {
+ 		u16 duration = adv_instance->timeout * MSEC_PER_SEC;
+ 
+ 		/* Time = N * 10 ms */
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index d0dad1fafe079..33b3c0ffc3399 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -889,10 +889,6 @@ static int hci_sock_release(struct socket *sock)
+ 	}
+ 
+ 	sock_orphan(sk);
+-
+-	skb_queue_purge(&sk->sk_receive_queue);
+-	skb_queue_purge(&sk->sk_write_queue);
+-
+ 	release_sock(sk);
+ 	sock_put(sk);
+ 	return 0;
+@@ -1915,7 +1911,8 @@ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			       sockptr_t optval, unsigned int len)
+ {
+ 	struct sock *sk = sock->sk;
+-	int err = 0, opt = 0;
++	int err = 0;
++	u16 opt;
+ 
+ 	BT_DBG("sk %p, opt %d", sk, optname);
+ 
+@@ -1941,7 +1938,7 @@ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			goto done;
+ 		}
+ 
+-		if (copy_from_sockptr(&opt, optval, sizeof(u16))) {
++		if (copy_from_sockptr(&opt, optval, sizeof(opt))) {
+ 			err = -EFAULT;
+ 			break;
+ 		}
+@@ -2058,6 +2055,12 @@ static int hci_sock_getsockopt(struct socket *sock, int level, int optname,
+ 	return err;
+ }
+ 
++static void hci_sock_destruct(struct sock *sk)
++{
++	skb_queue_purge(&sk->sk_receive_queue);
++	skb_queue_purge(&sk->sk_write_queue);
++}
++
+ static const struct proto_ops hci_sock_ops = {
+ 	.family		= PF_BLUETOOTH,
+ 	.owner		= THIS_MODULE,
+@@ -2111,6 +2114,7 @@ static int hci_sock_create(struct net *net, struct socket *sock, int protocol,
+ 
+ 	sock->state = SS_UNCONNECTED;
+ 	sk->sk_state = BT_OPEN;
++	sk->sk_destruct = hci_sock_destruct;
+ 
+ 	bt_sock_link(&hci_sk_list, sk);
+ 	return 0;
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index 7827639ecf5c3..4e3e0451b08c1 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -86,6 +86,8 @@ static void bt_host_release(struct device *dev)
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
+ 		hci_release_dev(hdev);
++	else
++		kfree(hdev);
+ 	module_put(THIS_MODULE);
+ }
+ 
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 160c016a5dfb9..d2c6785205992 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -161,7 +161,11 @@ static int l2cap_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 		break;
+ 	}
+ 
+-	if (chan->psm && bdaddr_type_is_le(chan->src_type))
++	/* Use L2CAP_MODE_LE_FLOWCTL (CoC) in case of LE address and
++	 * L2CAP_MODE_EXT_FLOWCTL (ECRED) has not been set.
++	 */
++	if (chan->psm && bdaddr_type_is_le(chan->src_type) &&
++	    chan->mode != L2CAP_MODE_EXT_FLOWCTL)
+ 		chan->mode = L2CAP_MODE_LE_FLOWCTL;
+ 
+ 	chan->state = BT_BOUND;
+@@ -172,6 +176,21 @@ done:
+ 	return err;
+ }
+ 
++static void l2cap_sock_init_pid(struct sock *sk)
++{
++	struct l2cap_chan *chan = l2cap_pi(sk)->chan;
++
++	/* Only L2CAP_MODE_EXT_FLOWCTL ever need to access the PID in order to
++	 * group the channels being requested.
++	 */
++	if (chan->mode != L2CAP_MODE_EXT_FLOWCTL)
++		return;
++
++	spin_lock(&sk->sk_peer_lock);
++	sk->sk_peer_pid = get_pid(task_tgid(current));
++	spin_unlock(&sk->sk_peer_lock);
++}
++
+ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
+ 			      int alen, int flags)
+ {
+@@ -240,9 +259,15 @@ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
+ 			return -EINVAL;
+ 	}
+ 
+-	if (chan->psm && bdaddr_type_is_le(chan->src_type) && !chan->mode)
++	/* Use L2CAP_MODE_LE_FLOWCTL (CoC) in case of LE address and
++	 * L2CAP_MODE_EXT_FLOWCTL (ECRED) has not been set.
++	 */
++	if (chan->psm && bdaddr_type_is_le(chan->src_type) &&
++	    chan->mode != L2CAP_MODE_EXT_FLOWCTL)
+ 		chan->mode = L2CAP_MODE_LE_FLOWCTL;
+ 
++	l2cap_sock_init_pid(sk);
++
+ 	err = l2cap_chan_connect(chan, la.l2_psm, __le16_to_cpu(la.l2_cid),
+ 				 &la.l2_bdaddr, la.l2_bdaddr_type);
+ 	if (err)
+@@ -298,6 +323,8 @@ static int l2cap_sock_listen(struct socket *sock, int backlog)
+ 		goto done;
+ 	}
+ 
++	l2cap_sock_init_pid(sk);
++
+ 	sk->sk_max_ack_backlog = backlog;
+ 	sk->sk_ack_backlog = 0;
+ 
+@@ -876,6 +903,8 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ 	struct l2cap_conn *conn;
+ 	int len, err = 0;
+ 	u32 opt;
++	u16 mtu;
++	u8 mode;
+ 
+ 	BT_DBG("sk %p", sk);
+ 
+@@ -1058,16 +1087,16 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			break;
+ 		}
+ 
+-		if (copy_from_sockptr(&opt, optval, sizeof(u16))) {
++		if (copy_from_sockptr(&mtu, optval, sizeof(u16))) {
+ 			err = -EFAULT;
+ 			break;
+ 		}
+ 
+ 		if (chan->mode == L2CAP_MODE_EXT_FLOWCTL &&
+ 		    sk->sk_state == BT_CONNECTED)
+-			err = l2cap_chan_reconfigure(chan, opt);
++			err = l2cap_chan_reconfigure(chan, mtu);
+ 		else
+-			chan->imtu = opt;
++			chan->imtu = mtu;
+ 
+ 		break;
+ 
+@@ -1089,14 +1118,14 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			break;
+ 		}
+ 
+-		if (copy_from_sockptr(&opt, optval, sizeof(u8))) {
++		if (copy_from_sockptr(&mode, optval, sizeof(u8))) {
+ 			err = -EFAULT;
+ 			break;
+ 		}
+ 
+-		BT_DBG("opt %u", opt);
++		BT_DBG("mode %u", mode);
+ 
+-		err = l2cap_set_mode(chan, opt);
++		err = l2cap_set_mode(chan, mode);
+ 		if (err)
+ 			break;
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 3e5283607b97c..c068d05e9616f 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -3927,7 +3927,9 @@ static int exp_debug_feature_changed(bool enabled, struct sock *skip)
+ }
+ #endif
+ 
+-static int exp_quality_report_feature_changed(bool enabled, struct sock *skip)
++static int exp_quality_report_feature_changed(bool enabled,
++					      struct hci_dev *hdev,
++					      struct sock *skip)
+ {
+ 	struct mgmt_ev_exp_feature_changed ev;
+ 
+@@ -3935,7 +3937,7 @@ static int exp_quality_report_feature_changed(bool enabled, struct sock *skip)
+ 	memcpy(ev.uuid, quality_report_uuid, 16);
+ 	ev.flags = cpu_to_le32(enabled ? BIT(0) : 0);
+ 
+-	return mgmt_limited_event(MGMT_EV_EXP_FEATURE_CHANGED, NULL,
++	return mgmt_limited_event(MGMT_EV_EXP_FEATURE_CHANGED, hdev,
+ 				  &ev, sizeof(ev),
+ 				  HCI_MGMT_EXP_FEATURE_EVENTS, skip);
+ }
+@@ -3967,10 +3969,10 @@ static int set_zero_key_func(struct sock *sk, struct hci_dev *hdev,
+ #endif
+ 
+ 	if (hdev && use_ll_privacy(hdev) && !hdev_is_powered(hdev)) {
+-		bool changed = hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY);
+-
+-		hci_dev_clear_flag(hdev, HCI_ENABLE_LL_PRIVACY);
++		bool changed;
+ 
++		changed = hci_dev_test_and_clear_flag(hdev,
++						      HCI_ENABLE_LL_PRIVACY);
+ 		if (changed)
+ 			exp_ll_privacy_feature_changed(false, hdev, sk);
+ 	}
+@@ -4065,15 +4067,15 @@ static int set_rpa_resolution_func(struct sock *sk, struct hci_dev *hdev,
+ 	val = !!cp->param[0];
+ 
+ 	if (val) {
+-		changed = !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY);
+-		hci_dev_set_flag(hdev, HCI_ENABLE_LL_PRIVACY);
++		changed = !hci_dev_test_and_set_flag(hdev,
++						     HCI_ENABLE_LL_PRIVACY);
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING);
+ 
+ 		/* Enable LL privacy + supported settings changed */
+ 		flags = BIT(0) | BIT(1);
+ 	} else {
+-		changed = hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY);
+-		hci_dev_clear_flag(hdev, HCI_ENABLE_LL_PRIVACY);
++		changed = hci_dev_test_and_clear_flag(hdev,
++						      HCI_ENABLE_LL_PRIVACY);
+ 
+ 		/* Disable LL privacy + supported settings changed */
+ 		flags = BIT(1);
+@@ -4156,14 +4158,15 @@ static int set_quality_report_func(struct sock *sk, struct hci_dev *hdev,
+ 				&rp, sizeof(rp));
+ 
+ 	if (changed)
+-		exp_quality_report_feature_changed(val, sk);
++		exp_quality_report_feature_changed(val, hdev, sk);
+ 
+ unlock_quality_report:
+ 	hci_req_sync_unlock(hdev);
+ 	return err;
+ }
+ 
+-static int exp_offload_codec_feature_changed(bool enabled, struct sock *skip)
++static int exp_offload_codec_feature_changed(bool enabled, struct hci_dev *hdev,
++					     struct sock *skip)
+ {
+ 	struct mgmt_ev_exp_feature_changed ev;
+ 
+@@ -4171,7 +4174,7 @@ static int exp_offload_codec_feature_changed(bool enabled, struct sock *skip)
+ 	memcpy(ev.uuid, offload_codecs_uuid, 16);
+ 	ev.flags = cpu_to_le32(enabled ? BIT(0) : 0);
+ 
+-	return mgmt_limited_event(MGMT_EV_EXP_FEATURE_CHANGED, NULL,
++	return mgmt_limited_event(MGMT_EV_EXP_FEATURE_CHANGED, hdev,
+ 				  &ev, sizeof(ev),
+ 				  HCI_MGMT_EXP_FEATURE_EVENTS, skip);
+ }
+@@ -4229,7 +4232,7 @@ static int set_offload_codec_func(struct sock *sk, struct hci_dev *hdev,
+ 				&rp, sizeof(rp));
+ 
+ 	if (changed)
+-		exp_offload_codec_feature_changed(val, sk);
++		exp_offload_codec_feature_changed(val, hdev, sk);
+ 
+ 	return err;
+ }
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index b5af68c105a83..4fd882686b04d 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -743,6 +743,9 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
+ 	if (nf_bridge->frag_max_size && nf_bridge->frag_max_size < mtu)
+ 		mtu = nf_bridge->frag_max_size;
+ 
++	nf_bridge_update_protocol(skb);
++	nf_bridge_push_encap_header(skb);
++
+ 	if (skb_is_gso(skb) || skb->len + mtu_reserved <= mtu) {
+ 		nf_bridge_info_free(skb);
+ 		return br_dev_queue_push_xmit(net, sk, skb);
+@@ -760,8 +763,6 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
+ 
+ 		IPCB(skb)->frag_max_size = nf_bridge->frag_max_size;
+ 
+-		nf_bridge_update_protocol(skb);
+-
+ 		data = this_cpu_ptr(&brnf_frag_data_storage);
+ 
+ 		if (skb_vlan_tag_present(skb)) {
+@@ -789,8 +790,6 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
+ 
+ 		IP6CB(skb)->frag_max_size = nf_bridge->frag_max_size;
+ 
+-		nf_bridge_update_protocol(skb);
+-
+ 		data = this_cpu_ptr(&brnf_frag_data_storage);
+ 		data->encap_size = nf_bridge_encap_header_len(skb);
+ 		data->size = ETH_HLEN + data->encap_size;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index c4708e2487fb6..2078d04c6482f 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -9656,6 +9656,12 @@ static int bpf_xdp_link_update(struct bpf_link *link, struct bpf_prog *new_prog,
+ 		goto out_unlock;
+ 	}
+ 	old_prog = link->prog;
++	if (old_prog->type != new_prog->type ||
++	    old_prog->expected_attach_type != new_prog->expected_attach_type) {
++		err = -EINVAL;
++		goto out_unlock;
++	}
++
+ 	if (old_prog == new_prog) {
+ 		/* no-op, don't disturb drivers */
+ 		bpf_prog_put(new_prog);
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index c06c9ba6e8c5e..f2fff89a99715 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -8840,8 +8840,6 @@ static const struct genl_small_ops devlink_nl_ops[] = {
+ 			    GENL_DONT_VALIDATE_DUMP_STRICT,
+ 		.dumpit = devlink_nl_cmd_health_reporter_dump_get_dumpit,
+ 		.flags = GENL_ADMIN_PERM,
+-		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT |
+-				  DEVLINK_NL_FLAG_NO_LOCK,
+ 	},
+ 	{
+ 		.cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR,
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 6102f093d59a5..5b82a817f65a6 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -4742,12 +4742,14 @@ static int _bpf_setsockopt(struct sock *sk, int level, int optname,
+ 		switch (optname) {
+ 		case SO_RCVBUF:
+ 			val = min_t(u32, val, sysctl_rmem_max);
++			val = min_t(int, val, INT_MAX / 2);
+ 			sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ 			WRITE_ONCE(sk->sk_rcvbuf,
+ 				   max_t(int, val * 2, SOCK_MIN_RCVBUF));
+ 			break;
+ 		case SO_SNDBUF:
+ 			val = min_t(u32, val, sysctl_wmem_max);
++			val = min_t(int, val, INT_MAX / 2);
+ 			sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+ 			WRITE_ONCE(sk->sk_sndbuf,
+ 				   max_t(int, val * 2, SOCK_MIN_SNDBUF));
+@@ -8185,9 +8187,9 @@ void bpf_warn_invalid_xdp_action(u32 act)
+ {
+ 	const u32 act_max = XDP_REDIRECT;
+ 
+-	WARN_ONCE(1, "%s XDP return value %u, expect packet loss!\n",
+-		  act > act_max ? "Illegal" : "Driver unsupported",
+-		  act);
++	pr_warn_once("%s XDP return value %u, expect packet loss!\n",
++		     act > act_max ? "Illegal" : "Driver unsupported",
++		     act);
+ }
+ EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action);
+ 
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 9c01c642cf9ef..d7f9ee830d34c 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1820,6 +1820,9 @@ static void remove_queue_kobjects(struct net_device *dev)
+ 
+ 	net_rx_queue_update_kobjects(dev, real_rx, 0);
+ 	netdev_queue_update_kobjects(dev, real_tx, 0);
++
++	dev->real_num_rx_queues = 0;
++	dev->real_num_tx_queues = 0;
+ #ifdef CONFIG_SYSFS
+ 	kset_unregister(dev->queues_kset);
+ #endif
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index 202fa5eacd0f9..9702d2b0d9207 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -164,8 +164,10 @@ static void ops_exit_list(const struct pernet_operations *ops,
+ {
+ 	struct net *net;
+ 	if (ops->exit) {
+-		list_for_each_entry(net, net_exit_list, exit_list)
++		list_for_each_entry(net, net_exit_list, exit_list) {
+ 			ops->exit(net);
++			cond_resched();
++		}
+ 	}
+ 	if (ops->exit_batch)
+ 		ops->exit_batch(net_exit_list);
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 41e91d0f70614..7de234693a3bf 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -843,6 +843,8 @@ static int sock_timestamping_bind_phc(struct sock *sk, int phc_index)
+ 	}
+ 
+ 	num = ethtool_get_phc_vclocks(dev, &vclock_index);
++	dev_put(dev);
++
+ 	for (i = 0; i < num; i++) {
+ 		if (*(vclock_index + i) == phc_index) {
+ 			match = true;
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 4ca4b11f4e5ff..687c81386518c 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -292,15 +292,23 @@ static int sock_map_link(struct bpf_map *map, struct sock *sk)
+ 	if (skb_verdict)
+ 		psock_set_prog(&psock->progs.skb_verdict, skb_verdict);
+ 
++	/* msg_* and stream_* programs references tracked in psock after this
++	 * point. Reference dec and cleanup will occur through psock destructor
++	 */
+ 	ret = sock_map_init_proto(sk, psock);
+-	if (ret < 0)
+-		goto out_drop;
++	if (ret < 0) {
++		sk_psock_put(sk, psock);
++		goto out;
++	}
+ 
+ 	write_lock_bh(&sk->sk_callback_lock);
+ 	if (stream_parser && stream_verdict && !psock->saved_data_ready) {
+ 		ret = sk_psock_init_strp(sk, psock);
+-		if (ret)
+-			goto out_unlock_drop;
++		if (ret) {
++			write_unlock_bh(&sk->sk_callback_lock);
++			sk_psock_put(sk, psock);
++			goto out;
++		}
+ 		sk_psock_start_strp(sk, psock);
+ 	} else if (!stream_parser && stream_verdict && !psock->saved_data_ready) {
+ 		sk_psock_start_verdict(sk,psock);
+@@ -309,10 +317,6 @@ static int sock_map_link(struct bpf_map *map, struct sock *sk)
+ 	}
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ 	return 0;
+-out_unlock_drop:
+-	write_unlock_bh(&sk->sk_callback_lock);
+-out_drop:
+-	sk_psock_put(sk, psock);
+ out_progs:
+ 	if (skb_verdict)
+ 		bpf_prog_put(skb_verdict);
+@@ -325,6 +329,7 @@ out_put_stream_parser:
+ out_put_stream_verdict:
+ 	if (stream_verdict)
+ 		bpf_prog_put(stream_verdict);
++out:
+ 	return ret;
+ }
+ 
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index bb155a16d4540..80816f7e1f996 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -675,7 +675,7 @@ static int
+ dsa_switch_mrp_add_ring_role(struct dsa_switch *ds,
+ 			     struct dsa_notifier_mrp_ring_role_info *info)
+ {
+-	if (!ds->ops->port_mrp_add)
++	if (!ds->ops->port_mrp_add_ring_role)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (ds->index == info->sw_index)
+@@ -689,7 +689,7 @@ static int
+ dsa_switch_mrp_del_ring_role(struct dsa_switch *ds,
+ 			     struct dsa_notifier_mrp_ring_role_info *info)
+ {
+-	if (!ds->ops->port_mrp_del)
++	if (!ds->ops->port_mrp_del_ring_role)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (ds->index == info->sw_index)
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 92c29ab3d0428..5dfb94abe7b10 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -29,6 +29,7 @@
+ #include <linux/init.h>
+ #include <linux/slab.h>
+ #include <linux/netlink.h>
++#include <linux/hash.h>
+ 
+ #include <net/arp.h>
+ #include <net/ip.h>
+@@ -249,7 +250,6 @@ void free_fib_info(struct fib_info *fi)
+ 		pr_warn("Freeing alive fib_info %p\n", fi);
+ 		return;
+ 	}
+-	fib_info_cnt--;
+ 
+ 	call_rcu(&fi->rcu, free_fib_info_rcu);
+ }
+@@ -260,6 +260,10 @@ void fib_release_info(struct fib_info *fi)
+ 	spin_lock_bh(&fib_info_lock);
+ 	if (fi && refcount_dec_and_test(&fi->fib_treeref)) {
+ 		hlist_del(&fi->fib_hash);
++
++		/* Paired with READ_ONCE() in fib_create_info(). */
++		WRITE_ONCE(fib_info_cnt, fib_info_cnt - 1);
++
+ 		if (fi->fib_prefsrc)
+ 			hlist_del(&fi->fib_lhash);
+ 		if (fi->nh) {
+@@ -316,11 +320,15 @@ static inline int nh_comp(struct fib_info *fi, struct fib_info *ofi)
+ 
+ static inline unsigned int fib_devindex_hashfn(unsigned int val)
+ {
+-	unsigned int mask = DEVINDEX_HASHSIZE - 1;
++	return hash_32(val, DEVINDEX_HASHBITS);
++}
++
++static struct hlist_head *
++fib_info_devhash_bucket(const struct net_device *dev)
++{
++	u32 val = net_hash_mix(dev_net(dev)) ^ dev->ifindex;
+ 
+-	return (val ^
+-		(val >> DEVINDEX_HASHBITS) ^
+-		(val >> (DEVINDEX_HASHBITS * 2))) & mask;
++	return &fib_info_devhash[fib_devindex_hashfn(val)];
+ }
+ 
+ static unsigned int fib_info_hashfn_1(int init_val, u8 protocol, u8 scope,
+@@ -430,12 +438,11 @@ int ip_fib_check_default(__be32 gw, struct net_device *dev)
+ {
+ 	struct hlist_head *head;
+ 	struct fib_nh *nh;
+-	unsigned int hash;
+ 
+ 	spin_lock(&fib_info_lock);
+ 
+-	hash = fib_devindex_hashfn(dev->ifindex);
+-	head = &fib_info_devhash[hash];
++	head = fib_info_devhash_bucket(dev);
++
+ 	hlist_for_each_entry(nh, head, nh_hash) {
+ 		if (nh->fib_nh_dev == dev &&
+ 		    nh->fib_nh_gw4 == gw &&
+@@ -1430,7 +1437,9 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
+ #endif
+ 
+ 	err = -ENOBUFS;
+-	if (fib_info_cnt >= fib_info_hash_size) {
++
++	/* Paired with WRITE_ONCE() in fib_release_info() */
++	if (READ_ONCE(fib_info_cnt) >= fib_info_hash_size) {
+ 		unsigned int new_size = fib_info_hash_size << 1;
+ 		struct hlist_head *new_info_hash;
+ 		struct hlist_head *new_laddrhash;
+@@ -1462,7 +1471,6 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
+ 		return ERR_PTR(err);
+ 	}
+ 
+-	fib_info_cnt++;
+ 	fi->fib_net = net;
+ 	fi->fib_protocol = cfg->fc_protocol;
+ 	fi->fib_scope = cfg->fc_scope;
+@@ -1589,6 +1597,7 @@ link_it:
+ 	refcount_set(&fi->fib_treeref, 1);
+ 	refcount_set(&fi->fib_clntref, 1);
+ 	spin_lock_bh(&fib_info_lock);
++	fib_info_cnt++;
+ 	hlist_add_head(&fi->fib_hash,
+ 		       &fib_info_hash[fib_info_hashfn(fi)]);
+ 	if (fi->fib_prefsrc) {
+@@ -1602,12 +1611,10 @@ link_it:
+ 	} else {
+ 		change_nexthops(fi) {
+ 			struct hlist_head *head;
+-			unsigned int hash;
+ 
+ 			if (!nexthop_nh->fib_nh_dev)
+ 				continue;
+-			hash = fib_devindex_hashfn(nexthop_nh->fib_nh_dev->ifindex);
+-			head = &fib_info_devhash[hash];
++			head = fib_info_devhash_bucket(nexthop_nh->fib_nh_dev);
+ 			hlist_add_head(&nexthop_nh->nh_hash, head);
+ 		} endfor_nexthops(fi)
+ 	}
+@@ -1959,8 +1966,7 @@ void fib_nhc_update_mtu(struct fib_nh_common *nhc, u32 new, u32 orig)
+ 
+ void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
+ {
+-	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
+-	struct hlist_head *head = &fib_info_devhash[hash];
++	struct hlist_head *head = fib_info_devhash_bucket(dev);
+ 	struct fib_nh *nh;
+ 
+ 	hlist_for_each_entry(nh, head, nh_hash) {
+@@ -1979,12 +1985,11 @@ void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
+  */
+ int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force)
+ {
+-	int ret = 0;
+-	int scope = RT_SCOPE_NOWHERE;
++	struct hlist_head *head = fib_info_devhash_bucket(dev);
+ 	struct fib_info *prev_fi = NULL;
+-	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
+-	struct hlist_head *head = &fib_info_devhash[hash];
++	int scope = RT_SCOPE_NOWHERE;
+ 	struct fib_nh *nh;
++	int ret = 0;
+ 
+ 	if (force)
+ 		scope = -1;
+@@ -2129,7 +2134,6 @@ out:
+ int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
+ {
+ 	struct fib_info *prev_fi;
+-	unsigned int hash;
+ 	struct hlist_head *head;
+ 	struct fib_nh *nh;
+ 	int ret;
+@@ -2145,8 +2149,7 @@ int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
+ 	}
+ 
+ 	prev_fi = NULL;
+-	hash = fib_devindex_hashfn(dev->ifindex);
+-	head = &fib_info_devhash[hash];
++	head = fib_info_devhash_bucket(dev);
+ 	ret = 0;
+ 
+ 	hlist_for_each_entry(nh, head, nh_hash) {
+diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
+index 05cd198d7a6ba..341096807100c 100644
+--- a/net/ipv4/inet_fragment.c
++++ b/net/ipv4/inet_fragment.c
+@@ -235,9 +235,9 @@ void inet_frag_kill(struct inet_frag_queue *fq)
+ 		/* The RCU read lock provides a memory barrier
+ 		 * guaranteeing that if fqdir->dead is false then
+ 		 * the hash table destruction will not start until
+-		 * after we unlock.  Paired with inet_frags_exit_net().
++		 * after we unlock.  Paired with fqdir_pre_exit().
+ 		 */
+-		if (!fqdir->dead) {
++		if (!READ_ONCE(fqdir->dead)) {
+ 			rhashtable_remove_fast(&fqdir->rhashtable, &fq->node,
+ 					       fqdir->f->rhash_params);
+ 			refcount_dec(&fq->refcnt);
+@@ -352,9 +352,11 @@ static struct inet_frag_queue *inet_frag_create(struct fqdir *fqdir,
+ /* TODO : call from rcu_read_lock() and no longer use refcount_inc_not_zero() */
+ struct inet_frag_queue *inet_frag_find(struct fqdir *fqdir, void *key)
+ {
++	/* This pairs with WRITE_ONCE() in fqdir_pre_exit(). */
++	long high_thresh = READ_ONCE(fqdir->high_thresh);
+ 	struct inet_frag_queue *fq = NULL, *prev;
+ 
+-	if (!fqdir->high_thresh || frag_mem_limit(fqdir) > fqdir->high_thresh)
++	if (!high_thresh || frag_mem_limit(fqdir) > high_thresh)
+ 		return NULL;
+ 
+ 	rcu_read_lock();
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index cfeb8890f94ee..fad803d2d711e 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -144,7 +144,8 @@ static void ip_expire(struct timer_list *t)
+ 
+ 	rcu_read_lock();
+ 
+-	if (qp->q.fqdir->dead)
++	/* Paired with WRITE_ONCE() in fqdir_pre_exit(). */
++	if (READ_ONCE(qp->q.fqdir->dead))
+ 		goto out_rcu_unlock;
+ 
+ 	spin_lock(&qp->q.lock);
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 2ac2b95c56943..99db2e41ed10f 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -604,8 +604,9 @@ static int gre_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ 
+ 	key = &info->key;
+ 	ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
+-			    tunnel_id_to_key32(key->tun_id), key->tos, 0,
+-			    skb->mark, skb_get_hash(skb));
++			    tunnel_id_to_key32(key->tun_id),
++			    key->tos & ~INET_ECN_MASK, 0, skb->mark,
++			    skb_get_hash(skb));
+ 	rt = ip_route_output_key(dev_net(dev), &fl4);
+ 	if (IS_ERR(rt))
+ 		return PTR_ERR(rt);
+diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+index 8fd1aba8af31c..b518f20c9a244 100644
+--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
++++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+@@ -520,8 +520,11 @@ static int clusterip_tg_check(const struct xt_tgchk_param *par)
+ 			if (IS_ERR(config))
+ 				return PTR_ERR(config);
+ 		}
+-	} else if (memcmp(&config->clustermac, &cipinfo->clustermac, ETH_ALEN))
++	} else if (memcmp(&config->clustermac, &cipinfo->clustermac, ETH_ALEN)) {
++		clusterip_config_entry_put(config);
++		clusterip_config_put(config);
+ 		return -EINVAL;
++	}
+ 
+ 	ret = nf_ct_netns_get(par->net, par->family);
+ 	if (ret < 0) {
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index f70aa0932bd6c..9b9b02052fd36 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -196,12 +196,39 @@ msg_bytes_ready:
+ 		long timeo;
+ 		int data;
+ 
++		if (sock_flag(sk, SOCK_DONE))
++			goto out;
++
++		if (sk->sk_err) {
++			copied = sock_error(sk);
++			goto out;
++		}
++
++		if (sk->sk_shutdown & RCV_SHUTDOWN)
++			goto out;
++
++		if (sk->sk_state == TCP_CLOSE) {
++			copied = -ENOTCONN;
++			goto out;
++		}
++
+ 		timeo = sock_rcvtimeo(sk, nonblock);
++		if (!timeo) {
++			copied = -EAGAIN;
++			goto out;
++		}
++
++		if (signal_pending(current)) {
++			copied = sock_intr_errno(timeo);
++			goto out;
++		}
++
+ 		data = tcp_msg_wait_data(sk, psock, timeo);
+ 		if (data && !sk_psock_queue_empty(psock))
+ 			goto msg_bytes_ready;
+ 		copied = -EAGAIN;
+ 	}
++out:
+ 	release_sock(sk);
+ 	sk_psock_put(sk, psock);
+ 	return copied;
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index d831d24396932..f5a511c57aa23 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -755,6 +755,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ 		fl6->daddr = key->u.ipv6.dst;
+ 		fl6->flowlabel = key->label;
+ 		fl6->flowi6_uid = sock_net_uid(dev_net(dev), NULL);
++		fl6->fl6_gre_key = tunnel_id_to_key32(key->tun_id);
+ 
+ 		dsfield = key->tos;
+ 		flags = key->tun_flags &
+@@ -990,6 +991,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ 		fl6.daddr = key->u.ipv6.dst;
+ 		fl6.flowlabel = key->label;
+ 		fl6.flowi6_uid = sock_net_uid(dev_net(dev), NULL);
++		fl6.fl6_gre_key = tunnel_id_to_key32(key->tun_id);
+ 
+ 		dsfield = key->tos;
+ 		if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))
+@@ -1098,6 +1100,7 @@ static void ip6gre_tnl_link_config_common(struct ip6_tnl *t)
+ 	fl6->flowi6_oif = p->link;
+ 	fl6->flowlabel = 0;
+ 	fl6->flowi6_proto = IPPROTO_GRE;
++	fl6->fl6_gre_key = t->parms.o_key;
+ 
+ 	if (!(p->flags&IP6_TNL_F_USE_ORIG_TCLASS))
+ 		fl6->flowlabel |= IPV6_TCLASS_MASK & p->flowinfo;
+@@ -1544,7 +1547,7 @@ static void ip6gre_fb_tunnel_init(struct net_device *dev)
+ static struct inet6_protocol ip6gre_protocol __read_mostly = {
+ 	.handler     = gre_rcv,
+ 	.err_handler = ip6gre_err,
+-	.flags       = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
++	.flags       = INET6_PROTO_FINAL,
+ };
+ 
+ static void ip6gre_destroy_tunnels(struct net *net, struct list_head *head)
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 0544563ede522..d2e8b84ed2836 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4924,7 +4924,7 @@ void ieee80211_rx_list(struct ieee80211_hw *hw, struct ieee80211_sta *pubsta,
+ 				goto drop;
+ 			break;
+ 		case RX_ENC_VHT:
+-			if (WARN_ONCE(status->rate_idx > 9 ||
++			if (WARN_ONCE(status->rate_idx > 11 ||
+ 				      !status->nss ||
+ 				      status->nss > 8,
+ 				      "Rate marked as a VHT rate but data is invalid: MCS: %d, NSS: %d\n",
+diff --git a/net/mctp/test/route-test.c b/net/mctp/test/route-test.c
+index 36fac3daf86a4..750f9f9b4daf9 100644
+--- a/net/mctp/test/route-test.c
++++ b/net/mctp/test/route-test.c
+@@ -150,11 +150,6 @@ static void mctp_test_fragment(struct kunit *test)
+ 	rt = mctp_test_create_route(&init_net, NULL, 10, mtu);
+ 	KUNIT_ASSERT_TRUE(test, rt);
+ 
+-	/* The refcount would usually be incremented as part of a route lookup,
+-	 * but we're setting the route directly here.
+-	 */
+-	refcount_inc(&rt->rt.refs);
+-
+ 	rc = mctp_do_fragment_route(&rt->rt, skb, mtu, MCTP_TAG_OWNER);
+ 	KUNIT_EXPECT_FALSE(test, rc);
+ 
+@@ -290,7 +285,7 @@ static void __mctp_route_test_init(struct kunit *test,
+ 				   struct mctp_test_route **rtp,
+ 				   struct socket **sockp)
+ {
+-	struct sockaddr_mctp addr;
++	struct sockaddr_mctp addr = {0};
+ 	struct mctp_test_route *rt;
+ 	struct mctp_test_dev *dev;
+ 	struct socket *sock;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index fe98e4f475baa..6661b1d6520f1 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -821,10 +821,13 @@ bool mptcp_established_options(struct sock *sk, struct sk_buff *skb,
+ 	if (mptcp_established_options_mp(sk, skb, snd_data_fin, &opt_size, remaining, opts))
+ 		ret = true;
+ 	else if (mptcp_established_options_dss(sk, skb, snd_data_fin, &opt_size, remaining, opts)) {
++		unsigned int mp_fail_size;
++
+ 		ret = true;
+-		if (mptcp_established_options_mp_fail(sk, &opt_size, remaining, opts)) {
+-			*size += opt_size;
+-			remaining -= opt_size;
++		if (mptcp_established_options_mp_fail(sk, &mp_fail_size,
++						      remaining - opt_size, opts)) {
++			*size += opt_size + mp_fail_size;
++			remaining -= opt_size - mp_fail_size;
+ 			return true;
+ 		}
+ 	}
+@@ -1316,6 +1319,7 @@ void mptcp_write_options(__be32 *ptr, const struct tcp_sock *tp,
+ 				put_unaligned_be32(mpext->data_len << 16 |
+ 						   TCPOPT_NOP << 8 | TCPOPT_NOP, ptr);
+ 			}
++			ptr += 1;
+ 		}
+ 	} else if (OPTIONS_MPTCP_MPC & opts->suboptions) {
+ 		u8 len, flag = MPTCP_CAP_HMAC_SHA256;
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index f523051f5aef3..65764c8171b37 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -710,6 +710,8 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ 		return;
+ 
+ 	for (i = 0; i < rm_list->nr; i++) {
++		bool removed = false;
++
+ 		list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) {
+ 			struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ 			int how = RCV_SHUTDOWN | SEND_SHUTDOWN;
+@@ -729,15 +731,19 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ 			mptcp_close_ssk(sk, ssk, subflow);
+ 			spin_lock_bh(&msk->pm.lock);
+ 
+-			if (rm_type == MPTCP_MIB_RMADDR) {
+-				msk->pm.add_addr_accepted--;
+-				WRITE_ONCE(msk->pm.accept_addr, true);
+-			} else if (rm_type == MPTCP_MIB_RMSUBFLOW) {
+-				msk->pm.local_addr_used--;
+-			}
++			removed = true;
+ 			msk->pm.subflows--;
+ 			__MPTCP_INC_STATS(sock_net(sk), rm_type);
+ 		}
++		if (!removed)
++			continue;
++
++		if (rm_type == MPTCP_MIB_RMADDR) {
++			msk->pm.add_addr_accepted--;
++			WRITE_ONCE(msk->pm.accept_addr, true);
++		} else if (rm_type == MPTCP_MIB_RMSUBFLOW) {
++			msk->pm.local_addr_used--;
++		}
+ 	}
+ }
+ 
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 54613f5b75217..0cd55e4c30fab 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -972,7 +972,9 @@ static void __mptcp_mem_reclaim_partial(struct sock *sk)
+ 
+ 	lockdep_assert_held_once(&sk->sk_lock.slock);
+ 
+-	__mptcp_rmem_reclaim(sk, reclaimable - 1);
++	if (reclaimable > SK_MEM_QUANTUM)
++		__mptcp_rmem_reclaim(sk, reclaimable - 1);
++
+ 	sk_mem_reclaim_partial(sk);
+ }
+ 
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index bd689938a2e0c..58e96a0fe0b4c 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -546,6 +546,9 @@ static int nft_payload_l4csum_offset(const struct nft_pktinfo *pkt,
+ 				     struct sk_buff *skb,
+ 				     unsigned int *l4csum_offset)
+ {
++	if (pkt->fragoff)
++		return -1;
++
+ 	switch (pkt->tprot) {
+ 	case IPPROTO_TCP:
+ 		*l4csum_offset = offsetof(struct tcphdr, check);
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index dce866d93feed..2c8051d8cca69 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1290,6 +1290,11 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 	if (!new->scratch_aligned)
+ 		goto out_scratch;
+ #endif
++	for_each_possible_cpu(i)
++		*per_cpu_ptr(new->scratch, i) = NULL;
++
++	if (pipapo_realloc_scratch(new, old->bsize_max))
++		goto out_scratch_realloc;
+ 
+ 	rcu_head_init(&new->rcu);
+ 
+@@ -1334,6 +1339,9 @@ out_lt:
+ 		kvfree(dst->lt);
+ 		dst--;
+ 	}
++out_scratch_realloc:
++	for_each_possible_cpu(i)
++		kfree(*per_cpu_ptr(new->scratch, i));
+ #ifdef NFT_PIPAPO_ALIGN
+ 	free_percpu(new->scratch_aligned);
+ #endif
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index f1ba7dd3d253d..fa9dc2ba39418 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -298,7 +298,7 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct nr_sock *nr = nr_sk(sk);
+-	unsigned long opt;
++	unsigned int opt;
+ 
+ 	if (level != SOL_NETROM)
+ 		return -ENOPROTOOPT;
+@@ -306,18 +306,18 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
+ 	if (optlen < sizeof(unsigned int))
+ 		return -EINVAL;
+ 
+-	if (copy_from_sockptr(&opt, optval, sizeof(unsigned long)))
++	if (copy_from_sockptr(&opt, optval, sizeof(opt)))
+ 		return -EFAULT;
+ 
+ 	switch (optname) {
+ 	case NETROM_T1:
+-		if (opt < 1 || opt > ULONG_MAX / HZ)
++		if (opt < 1 || opt > UINT_MAX / HZ)
+ 			return -EINVAL;
+ 		nr->t1 = opt * HZ;
+ 		return 0;
+ 
+ 	case NETROM_T2:
+-		if (opt < 1 || opt > ULONG_MAX / HZ)
++		if (opt < 1 || opt > UINT_MAX / HZ)
+ 			return -EINVAL;
+ 		nr->t2 = opt * HZ;
+ 		return 0;
+@@ -329,13 +329,13 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
+ 		return 0;
+ 
+ 	case NETROM_T4:
+-		if (opt < 1 || opt > ULONG_MAX / HZ)
++		if (opt < 1 || opt > UINT_MAX / HZ)
+ 			return -EINVAL;
+ 		nr->t4 = opt * HZ;
+ 		return 0;
+ 
+ 	case NETROM_IDLE:
+-		if (opt > ULONG_MAX / (60 * HZ))
++		if (opt > UINT_MAX / (60 * HZ))
+ 			return -EINVAL;
+ 		nr->idle = opt * 60 * HZ;
+ 		return 0;
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 6cfd30fc07985..0b93a17b9f11f 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -789,6 +789,11 @@ static int llcp_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 
+ 	lock_sock(sk);
+ 
++	if (!llcp_sock->local) {
++		release_sock(sk);
++		return -ENODEV;
++	}
++
+ 	if (sk->sk_type == SOCK_DGRAM) {
+ 		DECLARE_SOCKADDR(struct sockaddr_nfc_llcp *, addr,
+ 				 msg->msg_name);
+diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
+index 6d262d9aa10ea..02096f2ec6784 100644
+--- a/net/openvswitch/flow.c
++++ b/net/openvswitch/flow.c
+@@ -859,7 +859,7 @@ int ovs_flow_key_extract(const struct ip_tunnel_info *tun_info,
+ #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
+ 	struct tc_skb_ext *tc_ext;
+ #endif
+-	bool post_ct = false;
++	bool post_ct = false, post_ct_snat = false, post_ct_dnat = false;
+ 	int res, err;
+ 	u16 zone = 0;
+ 
+@@ -900,6 +900,8 @@ int ovs_flow_key_extract(const struct ip_tunnel_info *tun_info,
+ 		key->recirc_id = tc_ext ? tc_ext->chain : 0;
+ 		OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0;
+ 		post_ct = tc_ext ? tc_ext->post_ct : false;
++		post_ct_snat = post_ct ? tc_ext->post_ct_snat : false;
++		post_ct_dnat = post_ct ? tc_ext->post_ct_dnat : false;
+ 		zone = post_ct ? tc_ext->zone : 0;
+ 	} else {
+ 		key->recirc_id = 0;
+@@ -911,8 +913,16 @@ int ovs_flow_key_extract(const struct ip_tunnel_info *tun_info,
+ 	err = key_extract(skb, key);
+ 	if (!err) {
+ 		ovs_ct_fill_key(skb, key, post_ct);   /* Must be after key_extract(). */
+-		if (post_ct && !skb_get_nfct(skb))
+-			key->ct_zone = zone;
++		if (post_ct) {
++			if (!skb_get_nfct(skb)) {
++				key->ct_zone = zone;
++			} else {
++				if (!post_ct_dnat)
++					key->ct_state &= ~OVS_CS_F_DST_NAT;
++				if (!post_ct_snat)
++					key->ct_state &= ~OVS_CS_F_SRC_NAT;
++			}
++		}
+ 	}
+ 	return err;
+ }
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index ab3591408419f..2a17eb77c9049 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -839,6 +839,12 @@ static int ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ 	}
+ 
+ 	err = nf_nat_packet(ct, ctinfo, hooknum, skb);
++	if (err == NF_ACCEPT) {
++		if (maniptype == NF_NAT_MANIP_SRC)
++			tc_skb_cb(skb)->post_ct_snat = 1;
++		if (maniptype == NF_NAT_MANIP_DST)
++			tc_skb_cb(skb)->post_ct_dnat = 1;
++	}
+ out:
+ 	return err;
+ }
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 35c74bdde848e..cc9409aa755eb 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1625,6 +1625,8 @@ int tcf_classify(struct sk_buff *skb,
+ 		ext->chain = last_executed_chain;
+ 		ext->mru = cb->mru;
+ 		ext->post_ct = cb->post_ct;
++		ext->post_ct_snat = cb->post_ct_snat;
++		ext->post_ct_dnat = cb->post_ct_dnat;
+ 		ext->zone = cb->zone;
+ 	}
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index efcd0b5e9a323..910a36ed56343 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1062,7 +1062,7 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 
+ 		qdisc_offload_graft_root(dev, new, old, extack);
+ 
+-		if (new && new->ops->attach)
++		if (new && new->ops->attach && !ingress)
+ 			goto skip;
+ 
+ 		for (i = 0; i < num_q; i++) {
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 3b0f620958037..5d391fe3137dc 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1474,6 +1474,7 @@ void psched_ratecfg_precompute(struct psched_ratecfg *r,
+ {
+ 	memset(r, 0, sizeof(*r));
+ 	r->overhead = conf->overhead;
++	r->mpu = conf->mpu;
+ 	r->rate_bytes_ps = max_t(u64, conf->rate, rate64);
+ 	r->linklayer = (conf->linklayer & TC_LINKLAYER_MASK);
+ 	psched_ratecfg_precompute__(r->rate_bytes_ps, &r->mult, &r->shift);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 1c9289f56dc47..211cd91b6c408 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -632,10 +632,12 @@ static int smc_connect_decline_fallback(struct smc_sock *smc, int reason_code,
+ 
+ static void smc_conn_abort(struct smc_sock *smc, int local_first)
+ {
++	struct smc_connection *conn = &smc->conn;
++	struct smc_link_group *lgr = conn->lgr;
++
++	smc_conn_free(conn);
+ 	if (local_first)
+-		smc_lgr_cleanup_early(&smc->conn);
+-	else
+-		smc_conn_free(&smc->conn);
++		smc_lgr_cleanup_early(lgr);
+ }
+ 
+ /* check if there is a rdma device available for this connection. */
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index a6849362f4ddd..6dddcfd6cf734 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -171,8 +171,10 @@ static int smc_lgr_register_conn(struct smc_connection *conn, bool first)
+ 
+ 	if (!conn->lgr->is_smcd) {
+ 		rc = smcr_lgr_conn_assign_link(conn, first);
+-		if (rc)
++		if (rc) {
++			conn->lgr = NULL;
+ 			return rc;
++		}
+ 	}
+ 	/* find a new alert_token_local value not yet used by some connection
+ 	 * in this link group
+@@ -622,15 +624,13 @@ int smcd_nl_get_lgr(struct sk_buff *skb, struct netlink_callback *cb)
+ 	return skb->len;
+ }
+ 
+-void smc_lgr_cleanup_early(struct smc_connection *conn)
++void smc_lgr_cleanup_early(struct smc_link_group *lgr)
+ {
+-	struct smc_link_group *lgr = conn->lgr;
+ 	spinlock_t *lgr_lock;
+ 
+ 	if (!lgr)
+ 		return;
+ 
+-	smc_conn_free(conn);
+ 	smc_lgr_list_head(lgr, &lgr_lock);
+ 	spin_lock_bh(lgr_lock);
+ 	/* do not use this link group for new connections */
+@@ -1459,16 +1459,11 @@ void smc_smcd_terminate_all(struct smcd_dev *smcd)
+ /* Called when an SMCR device is removed or the smc module is unloaded.
+  * If smcibdev is given, all SMCR link groups using this device are terminated.
+  * If smcibdev is NULL, all SMCR link groups are terminated.
+- *
+- * We must wait here for QPs been destroyed before we destroy the CQs,
+- * or we won't received any CQEs and cdc_pend_tx_wr cannot reach 0 thus
+- * smc_sock cannot be released.
+  */
+ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ {
+ 	struct smc_link_group *lgr, *lg;
+ 	LIST_HEAD(lgr_free_list);
+-	LIST_HEAD(lgr_linkdown_list);
+ 	int i;
+ 
+ 	spin_lock_bh(&smc_lgr_list.lock);
+@@ -1480,7 +1475,7 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ 		list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
+ 			for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+ 				if (lgr->lnk[i].smcibdev == smcibdev)
+-					list_move_tail(&lgr->list, &lgr_linkdown_list);
++					smcr_link_down_cond_sched(&lgr->lnk[i]);
+ 			}
+ 		}
+ 	}
+@@ -1492,16 +1487,6 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
+ 		__smc_lgr_terminate(lgr, false);
+ 	}
+ 
+-	list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
+-		for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+-			if (lgr->lnk[i].smcibdev == smcibdev) {
+-				mutex_lock(&lgr->llc_conf_mutex);
+-				smcr_link_down_cond(&lgr->lnk[i]);
+-				mutex_unlock(&lgr->llc_conf_mutex);
+-			}
+-		}
+-	}
+-
+ 	if (smcibdev) {
+ 		if (atomic_read(&smcibdev->lnk_cnt))
+ 			wait_event(smcibdev->lnks_deleted,
+@@ -1832,8 +1817,10 @@ create:
+ 		write_lock_bh(&lgr->conns_lock);
+ 		rc = smc_lgr_register_conn(conn, true);
+ 		write_unlock_bh(&lgr->conns_lock);
+-		if (rc)
++		if (rc) {
++			smc_lgr_cleanup_early(lgr);
+ 			goto out;
++		}
+ 	}
+ 	conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE;
+ 	conn->local_tx_ctrl.len = SMC_WR_TX_SIZE;
+diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h
+index d63b08274197e..73d0c35d3eb77 100644
+--- a/net/smc/smc_core.h
++++ b/net/smc/smc_core.h
+@@ -468,7 +468,7 @@ static inline void smc_set_pci_values(struct pci_dev *pci_dev,
+ struct smc_sock;
+ struct smc_clc_msg_accept_confirm;
+ 
+-void smc_lgr_cleanup_early(struct smc_connection *conn);
++void smc_lgr_cleanup_early(struct smc_link_group *lgr);
+ void smc_lgr_terminate_sched(struct smc_link_group *lgr);
+ void smcr_port_add(struct smc_ib_device *smcibdev, u8 ibport);
+ void smcr_port_err(struct smc_ib_device *smcibdev, u8 ibport);
+diff --git a/net/socket.c b/net/socket.c
+index 7f64a6eccf63f..5053eb0100e48 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -829,6 +829,7 @@ void __sock_recv_timestamp(struct msghdr *msg, struct sock *sk,
+ 	int empty = 1, false_tstamp = 0;
+ 	struct skb_shared_hwtstamps *shhwtstamps =
+ 		skb_hwtstamps(skb);
++	ktime_t hwtstamp;
+ 
+ 	/* Race occurred between timestamp enabling and packet
+ 	   receiving.  Fill in the current time for now. */
+@@ -877,10 +878,12 @@ void __sock_recv_timestamp(struct msghdr *msg, struct sock *sk,
+ 	    (sk->sk_tsflags & SOF_TIMESTAMPING_RAW_HARDWARE) &&
+ 	    !skb_is_swtx_tstamp(skb, false_tstamp)) {
+ 		if (sk->sk_tsflags & SOF_TIMESTAMPING_BIND_PHC)
+-			ptp_convert_timestamp(shhwtstamps, sk->sk_bind_phc);
++			hwtstamp = ptp_convert_timestamp(shhwtstamps,
++							 sk->sk_bind_phc);
++		else
++			hwtstamp = shhwtstamps->hwtstamp;
+ 
+-		if (ktime_to_timespec64_cond(shhwtstamps->hwtstamp,
+-					     tss.ts + 2)) {
++		if (ktime_to_timespec64_cond(hwtstamp, tss.ts + 2)) {
+ 			empty = 0;
+ 
+ 			if ((sk->sk_tsflags & SOF_TIMESTAMPING_OPT_PKTINFO) &&
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index 1e99ba1b9d723..008f1b05a7a9f 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -243,7 +243,7 @@ static struct svc_xprt *__svc_xpo_create(struct svc_xprt_class *xcl,
+ 	xprt = xcl->xcl_ops->xpo_create(serv, net, sap, len, flags);
+ 	if (IS_ERR(xprt))
+ 		trace_svc_xprt_create_err(serv->sv_program->pg_name,
+-					  xcl->xcl_name, sap, xprt);
++					  xcl->xcl_name, sap, len, xprt);
+ 	return xprt;
+ }
+ 
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 12e2ddaf887f2..d45d5366115a7 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -192,8 +192,11 @@ void wait_for_unix_gc(void)
+ {
+ 	/* If number of inflight sockets is insane,
+ 	 * force a garbage collect right now.
++	 * Paired with the WRITE_ONCE() in unix_inflight(),
++	 * unix_notinflight() and gc_in_progress().
+ 	 */
+-	if (unix_tot_inflight > UNIX_INFLIGHT_TRIGGER_GC && !gc_in_progress)
++	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
++	    !READ_ONCE(gc_in_progress))
+ 		unix_gc();
+ 	wait_event(unix_gc_wait, gc_in_progress == false);
+ }
+@@ -213,7 +216,9 @@ void unix_gc(void)
+ 	if (gc_in_progress)
+ 		goto out;
+ 
+-	gc_in_progress = true;
++	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
++	WRITE_ONCE(gc_in_progress, true);
++
+ 	/* First, select candidates for garbage collection.  Only
+ 	 * in-flight sockets are considered, and from those only ones
+ 	 * which don't have any external reference.
+@@ -299,7 +304,10 @@ void unix_gc(void)
+ 
+ 	/* All candidates should have been detached by now. */
+ 	BUG_ON(!list_empty(&gc_candidates));
+-	gc_in_progress = false;
++
++	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
++	WRITE_ONCE(gc_in_progress, false);
++
+ 	wake_up(&unix_gc_wait);
+ 
+  out:
+diff --git a/net/unix/scm.c b/net/unix/scm.c
+index 052ae709ce289..aa27a02478dc1 100644
+--- a/net/unix/scm.c
++++ b/net/unix/scm.c
+@@ -60,7 +60,8 @@ void unix_inflight(struct user_struct *user, struct file *fp)
+ 		} else {
+ 			BUG_ON(list_empty(&u->link));
+ 		}
+-		unix_tot_inflight++;
++		/* Paired with READ_ONCE() in wait_for_unix_gc() */
++		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1);
+ 	}
+ 	user->unix_inflight++;
+ 	spin_unlock(&unix_gc_lock);
+@@ -80,7 +81,8 @@ void unix_notinflight(struct user_struct *user, struct file *fp)
+ 
+ 		if (atomic_long_dec_and_test(&u->inflight))
+ 			list_del_init(&u->link);
+-		unix_tot_inflight--;
++		/* Paired with READ_ONCE() in wait_for_unix_gc() */
++		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1);
+ 	}
+ 	user->unix_inflight--;
+ 	spin_unlock(&unix_gc_lock);
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index 2bf2693901631..a0f62fa02e06e 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -127,6 +127,7 @@ static const struct nla_policy compat_policy[XFRMA_MAX+1] = {
+ 	[XFRMA_SET_MARK]	= { .type = NLA_U32 },
+ 	[XFRMA_SET_MARK_MASK]	= { .type = NLA_U32 },
+ 	[XFRMA_IF_ID]		= { .type = NLA_U32 },
++	[XFRMA_MTIMER_THRESH]	= { .type = NLA_U32 },
+ };
+ 
+ static struct nlmsghdr *xfrm_nlmsg_put_compat(struct sk_buff *skb,
+@@ -274,9 +275,10 @@ static int xfrm_xlate64_attr(struct sk_buff *dst, const struct nlattr *src)
+ 	case XFRMA_SET_MARK:
+ 	case XFRMA_SET_MARK_MASK:
+ 	case XFRMA_IF_ID:
++	case XFRMA_MTIMER_THRESH:
+ 		return xfrm_nla_cpy(dst, src, nla_len(src));
+ 	default:
+-		BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
++		BUILD_BUG_ON(XFRMA_MAX != XFRMA_MTIMER_THRESH);
+ 		pr_warn_once("unsupported nla_type %d\n", src->nla_type);
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -431,7 +433,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
+ 	int err;
+ 
+ 	if (type > XFRMA_MAX) {
+-		BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
++		BUILD_BUG_ON(XFRMA_MAX != XFRMA_MTIMER_THRESH);
+ 		NL_SET_ERR_MSG(extack, "Bad attribute");
+ 		return -EOPNOTSUPP;
+ 	}
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index 41de46b5ffa94..57448fc519fcd 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -637,11 +637,16 @@ static int xfrmi_newlink(struct net *src_net, struct net_device *dev,
+ 			struct netlink_ext_ack *extack)
+ {
+ 	struct net *net = dev_net(dev);
+-	struct xfrm_if_parms p;
++	struct xfrm_if_parms p = {};
+ 	struct xfrm_if *xi;
+ 	int err;
+ 
+ 	xfrmi_netlink_parms(data, &p);
++	if (!p.if_id) {
++		NL_SET_ERR_MSG(extack, "if_id must be non zero");
++		return -EINVAL;
++	}
++
+ 	xi = xfrmi_locate(net, &p);
+ 	if (xi)
+ 		return -EEXIST;
+@@ -666,7 +671,12 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ {
+ 	struct xfrm_if *xi = netdev_priv(dev);
+ 	struct net *net = xi->net;
+-	struct xfrm_if_parms p;
++	struct xfrm_if_parms p = {};
++
++	if (!p.if_id) {
++		NL_SET_ERR_MSG(extack, "if_id must be non zero");
++		return -EINVAL;
++	}
+ 
+ 	xfrmi_netlink_parms(data, &p);
+ 	xi = xfrmi_locate(net, &p);
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index 229544bc70c21..4dc4a7bbe51cf 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -647,10 +647,12 @@ static int xfrm_output_gso(struct net *net, struct sock *sk, struct sk_buff *skb
+  * This requires hardware to know the inner packet type to calculate
+  * the inner header checksum. Save inner ip protocol here to avoid
+  * traversing the packet in the vendor's xmit code.
+- * If the encap type is IPIP, just save skb->inner_ipproto. Otherwise,
+- * get the ip protocol from the IP header.
++ * For IPsec tunnel mode save the ip protocol from the IP header of the
++ * plain text packet. Otherwise If the encap type is IPIP, just save
++ * skb->inner_ipproto in any other case get the ip protocol from the IP
++ * header.
+  */
+-static void xfrm_get_inner_ipproto(struct sk_buff *skb)
++static void xfrm_get_inner_ipproto(struct sk_buff *skb, struct xfrm_state *x)
+ {
+ 	struct xfrm_offload *xo = xfrm_offload(skb);
+ 	const struct ethhdr *eth;
+@@ -658,6 +660,25 @@ static void xfrm_get_inner_ipproto(struct sk_buff *skb)
+ 	if (!xo)
+ 		return;
+ 
++	if (x->outer_mode.encap == XFRM_MODE_TUNNEL) {
++		switch (x->outer_mode.family) {
++		case AF_INET:
++			xo->inner_ipproto = ip_hdr(skb)->protocol;
++			break;
++		case AF_INET6:
++			xo->inner_ipproto = ipv6_hdr(skb)->nexthdr;
++			break;
++		default:
++			break;
++		}
++
++		return;
++	}
++
++	/* non-Tunnel Mode */
++	if (!skb->encapsulation)
++		return;
++
+ 	if (skb->inner_protocol_type == ENCAP_TYPE_IPPROTO) {
+ 		xo->inner_ipproto = skb->inner_ipproto;
+ 		return;
+@@ -712,8 +733,7 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb)
+ 		sp->xvec[sp->len++] = x;
+ 		xfrm_state_hold(x);
+ 
+-		if (skb->encapsulation)
+-			xfrm_get_inner_ipproto(skb);
++		xfrm_get_inner_ipproto(skb, x);
+ 		skb->encapsulation = 1;
+ 
+ 		if (skb_is_gso(skb)) {
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 1a06585022abb..4924b9135c6ec 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -31,8 +31,10 @@
+ #include <linux/if_tunnel.h>
+ #include <net/dst.h>
+ #include <net/flow.h>
++#include <net/inet_ecn.h>
+ #include <net/xfrm.h>
+ #include <net/ip.h>
++#include <net/gre.h>
+ #if IS_ENABLED(CONFIG_IPV6_MIP6)
+ #include <net/mip6.h>
+ #endif
+@@ -3294,7 +3296,7 @@ decode_session4(struct sk_buff *skb, struct flowi *fl, bool reverse)
+ 	fl4->flowi4_proto = iph->protocol;
+ 	fl4->daddr = reverse ? iph->saddr : iph->daddr;
+ 	fl4->saddr = reverse ? iph->daddr : iph->saddr;
+-	fl4->flowi4_tos = iph->tos;
++	fl4->flowi4_tos = iph->tos & ~INET_ECN_MASK;
+ 
+ 	if (!ip_is_fragment(iph)) {
+ 		switch (iph->protocol) {
+@@ -3422,6 +3424,26 @@ decode_session6(struct sk_buff *skb, struct flowi *fl, bool reverse)
+ 			}
+ 			fl6->flowi6_proto = nexthdr;
+ 			return;
++		case IPPROTO_GRE:
++			if (!onlyproto &&
++			    (nh + offset + 12 < skb->data ||
++			     pskb_may_pull(skb, nh + offset + 12 - skb->data))) {
++				struct gre_base_hdr *gre_hdr;
++				__be32 *gre_key;
++
++				nh = skb_network_header(skb);
++				gre_hdr = (struct gre_base_hdr *)(nh + offset);
++				gre_key = (__be32 *)(gre_hdr + 1);
++
++				if (gre_hdr->flags & GRE_KEY) {
++					if (gre_hdr->flags & GRE_CSUM)
++						gre_key++;
++					fl6->fl6_gre_key = *gre_key;
++				}
++			}
++			fl6->flowi6_proto = nexthdr;
++			return;
++
+ #if IS_ENABLED(CONFIG_IPV6_MIP6)
+ 		case IPPROTO_MH:
+ 			offset += ipv6_optlen(exthdr);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index a2f4001221d16..78d51399a0f4b 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1593,6 +1593,9 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ 	x->km.seq = orig->km.seq;
+ 	x->replay = orig->replay;
+ 	x->preplay = orig->preplay;
++	x->mapping_maxage = orig->mapping_maxage;
++	x->new_mapping = 0;
++	x->new_mapping_sport = 0;
+ 
+ 	return x;
+ 
+@@ -2242,7 +2245,7 @@ int km_query(struct xfrm_state *x, struct xfrm_tmpl *t, struct xfrm_policy *pol)
+ }
+ EXPORT_SYMBOL(km_query);
+ 
+-int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
++static int __km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
+ {
+ 	int err = -EINVAL;
+ 	struct xfrm_mgr *km;
+@@ -2257,6 +2260,24 @@ int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
+ 	rcu_read_unlock();
+ 	return err;
+ }
++
++int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
++{
++	int ret = 0;
++
++	if (x->mapping_maxage) {
++		if ((jiffies / HZ - x->new_mapping) > x->mapping_maxage ||
++		    x->new_mapping_sport != sport) {
++			x->new_mapping_sport = sport;
++			x->new_mapping = jiffies / HZ;
++			ret = __km_new_mapping(x, ipaddr, sport);
++		}
++	} else {
++		ret = __km_new_mapping(x, ipaddr, sport);
++	}
++
++	return ret;
++}
+ EXPORT_SYMBOL(km_new_mapping);
+ 
+ void km_policy_expired(struct xfrm_policy *pol, int dir, int hard, u32 portid)
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 7c36cc1f3d79c..c60441be883a8 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -282,6 +282,10 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ 
+ 	err = 0;
+ 
++	if (attrs[XFRMA_MTIMER_THRESH])
++		if (!attrs[XFRMA_ENCAP])
++			err = -EINVAL;
++
+ out:
+ 	return err;
+ }
+@@ -521,6 +525,7 @@ static void xfrm_update_ae_params(struct xfrm_state *x, struct nlattr **attrs,
+ 	struct nlattr *lt = attrs[XFRMA_LTIME_VAL];
+ 	struct nlattr *et = attrs[XFRMA_ETIMER_THRESH];
+ 	struct nlattr *rt = attrs[XFRMA_REPLAY_THRESH];
++	struct nlattr *mt = attrs[XFRMA_MTIMER_THRESH];
+ 
+ 	if (re) {
+ 		struct xfrm_replay_state_esn *replay_esn;
+@@ -552,6 +557,9 @@ static void xfrm_update_ae_params(struct xfrm_state *x, struct nlattr **attrs,
+ 
+ 	if (rt)
+ 		x->replay_maxdiff = nla_get_u32(rt);
++
++	if (mt)
++		x->mapping_maxage = nla_get_u32(mt);
+ }
+ 
+ static void xfrm_smark_init(struct nlattr **attrs, struct xfrm_mark *m)
+@@ -621,8 +629,13 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 
+ 	xfrm_smark_init(attrs, &x->props.smark);
+ 
+-	if (attrs[XFRMA_IF_ID])
++	if (attrs[XFRMA_IF_ID]) {
+ 		x->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++		if (!x->if_id) {
++			err = -EINVAL;
++			goto error;
++		}
++	}
+ 
+ 	err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV]);
+ 	if (err)
+@@ -1024,8 +1037,13 @@ static int copy_to_user_state_extra(struct xfrm_state *x,
+ 		if (ret)
+ 			goto out;
+ 	}
+-	if (x->security)
++	if (x->security) {
+ 		ret = copy_sec_ctx(x->security, skb);
++		if (ret)
++			goto out;
++	}
++	if (x->mapping_maxage)
++		ret = nla_put_u32(skb, XFRMA_MTIMER_THRESH, x->mapping_maxage);
+ out:
+ 	return ret;
+ }
+@@ -1413,8 +1431,13 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	mark = xfrm_mark_get(attrs, &m);
+ 
+-	if (attrs[XFRMA_IF_ID])
++	if (attrs[XFRMA_IF_ID]) {
+ 		if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++		if (!if_id) {
++			err = -EINVAL;
++			goto out_noput;
++		}
++	}
+ 
+ 	if (p->info.seq) {
+ 		x = xfrm_find_acq_byseq(net, mark, p->info.seq);
+@@ -1727,8 +1750,13 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net, struct xfrm_us
+ 
+ 	xfrm_mark_get(attrs, &xp->mark);
+ 
+-	if (attrs[XFRMA_IF_ID])
++	if (attrs[XFRMA_IF_ID]) {
+ 		xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++		if (!xp->if_id) {
++			err = -EINVAL;
++			goto error;
++		}
++	}
+ 
+ 	return xp;
+  error:
+@@ -3058,7 +3086,7 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
+ 	if (x->props.extra_flags)
+ 		l += nla_total_size(sizeof(x->props.extra_flags));
+ 	if (x->xso.dev)
+-		 l += nla_total_size(sizeof(x->xso));
++		 l += nla_total_size(sizeof(struct xfrm_user_offload));
+ 	if (x->props.smark.v | x->props.smark.m) {
+ 		l += nla_total_size(sizeof(x->props.smark.v));
+ 		l += nla_total_size(sizeof(x->props.smark.m));
+@@ -3069,6 +3097,9 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
+ 	/* Must count x->lastused as it may become non-zero behind our back. */
+ 	l += nla_total_size_64bit(sizeof(u64));
+ 
++	if (x->mapping_maxage)
++		l += nla_total_size(sizeof(x->mapping_maxage));
++
+ 	return l;
+ }
+ 
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index a886dff1ba899..38638845db9d7 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -215,6 +215,11 @@ TPROGS_LDFLAGS := -L$(SYSROOT)/usr/lib
+ endif
+ 
+ TPROGS_LDLIBS			+= $(LIBBPF) -lelf -lz
++TPROGLDLIBS_xdp_monitor		+= -lm
++TPROGLDLIBS_xdp_redirect	+= -lm
++TPROGLDLIBS_xdp_redirect_cpu	+= -lm
++TPROGLDLIBS_xdp_redirect_map	+= -lm
++TPROGLDLIBS_xdp_redirect_map_multi += -lm
+ TPROGLDLIBS_tracex4		+= -lrt
+ TPROGLDLIBS_trace_output	+= -lrt
+ TPROGLDLIBS_map_perf_test	+= -lrt
+@@ -328,7 +333,7 @@ $(BPF_SAMPLES_PATH)/*.c: verify_target_bpf $(LIBBPF)
+ $(src)/*.c: verify_target_bpf $(LIBBPF)
+ 
+ libbpf_hdrs: $(LIBBPF)
+-$(obj)/$(TRACE_HELPERS): | libbpf_hdrs
++$(obj)/$(TRACE_HELPERS) $(obj)/$(CGROUP_HELPERS) $(obj)/$(XDP_SAMPLE): | libbpf_hdrs
+ 
+ .PHONY: libbpf_hdrs
+ 
+@@ -343,6 +348,17 @@ $(obj)/hbm_out_kern.o: $(src)/hbm.h $(src)/hbm_kern.h
+ $(obj)/hbm.o: $(src)/hbm.h
+ $(obj)/hbm_edt_kern.o: $(src)/hbm.h $(src)/hbm_kern.h
+ 
++# Override includes for xdp_sample_user.o because $(srctree)/usr/include in
++# TPROGS_CFLAGS causes conflicts
++XDP_SAMPLE_CFLAGS += -Wall -O2 \
++		     -I$(src)/../../tools/include \
++		     -I$(src)/../../tools/include/uapi \
++		     -I$(LIBBPF_INCLUDE) \
++		     -I$(src)/../../tools/testing/selftests/bpf
++
++$(obj)/$(XDP_SAMPLE): TPROGS_CFLAGS = $(XDP_SAMPLE_CFLAGS)
++$(obj)/$(XDP_SAMPLE): $(src)/xdp_sample_user.h $(src)/xdp_sample_shared.h
++
+ -include $(BPF_SAMPLES_PATH)/Makefile.target
+ 
+ VMLINUX_BTF_PATHS ?= $(abspath $(if $(O),$(O)/vmlinux))				\
+diff --git a/samples/bpf/Makefile.target b/samples/bpf/Makefile.target
+index 5a368affa0386..7621f55e2947d 100644
+--- a/samples/bpf/Makefile.target
++++ b/samples/bpf/Makefile.target
+@@ -73,14 +73,3 @@ quiet_cmd_tprog-cobjs	= CC  $@
+       cmd_tprog-cobjs	= $(CC) $(tprogc_flags) -c -o $@ $<
+ $(tprog-cobjs): $(obj)/%.o: $(src)/%.c FORCE
+ 	$(call if_changed_dep,tprog-cobjs)
+-
+-# Override includes for xdp_sample_user.o because $(srctree)/usr/include in
+-# TPROGS_CFLAGS causes conflicts
+-XDP_SAMPLE_CFLAGS += -Wall -O2 -lm \
+-		     -I./tools/include \
+-		     -I./tools/include/uapi \
+-		     -I./tools/lib \
+-		     -I./tools/testing/selftests/bpf
+-$(obj)/xdp_sample_user.o: $(src)/xdp_sample_user.c \
+-	$(src)/xdp_sample_user.h $(src)/xdp_sample_shared.h
+-	$(CC) $(XDP_SAMPLE_CFLAGS) -c -o $@ $<
+diff --git a/samples/bpf/lwt_len_hist_kern.c b/samples/bpf/lwt_len_hist_kern.c
+index 9ed63e10e1709..1fa14c54963a1 100644
+--- a/samples/bpf/lwt_len_hist_kern.c
++++ b/samples/bpf/lwt_len_hist_kern.c
+@@ -16,13 +16,6 @@
+ #include <uapi/linux/in.h>
+ #include <bpf/bpf_helpers.h>
+ 
+-# define printk(fmt, ...)						\
+-		({							\
+-			char ____fmt[] = fmt;				\
+-			bpf_trace_printk(____fmt, sizeof(____fmt),	\
+-				     ##__VA_ARGS__);			\
+-		})
+-
+ struct bpf_elf_map {
+ 	__u32 type;
+ 	__u32 size_key;
+diff --git a/samples/bpf/xdp_sample_user.h b/samples/bpf/xdp_sample_user.h
+index d97465ff8c62c..5f44b877ecf5f 100644
+--- a/samples/bpf/xdp_sample_user.h
++++ b/samples/bpf/xdp_sample_user.h
+@@ -45,7 +45,9 @@ const char *get_driver_name(int ifindex);
+ int get_mac_addr(int ifindex, void *mac_addr);
+ 
+ #pragma GCC diagnostic push
++#ifndef __clang__
+ #pragma GCC diagnostic ignored "-Wstringop-truncation"
++#endif
+ __attribute__((unused))
+ static inline char *safe_strncpy(char *dst, const char *src, size_t size)
+ {
+diff --git a/scripts/dtc/dtx_diff b/scripts/dtc/dtx_diff
+index d3422ee15e300..f2bbde4bba86b 100755
+--- a/scripts/dtc/dtx_diff
++++ b/scripts/dtc/dtx_diff
+@@ -59,12 +59,8 @@ Otherwise DTx is treated as a dts source file (aka .dts).
+    or '/include/' to be processed.
+ 
+    If DTx_1 and DTx_2 are in different architectures, then this script
+-   may not work since \${ARCH} is part of the include path.  Two possible
+-   workarounds:
+-
+-      `basename $0` \\
+-          <(ARCH=arch_of_dtx_1 `basename $0` DTx_1) \\
+-          <(ARCH=arch_of_dtx_2 `basename $0` DTx_2)
++   may not work since \${ARCH} is part of the include path.  The following
++   workaround can be used:
+ 
+       `basename $0` ARCH=arch_of_dtx_1 DTx_1 >tmp_dtx_1.dts
+       `basename $0` ARCH=arch_of_dtx_2 DTx_2 >tmp_dtx_2.dts
+diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install
+index 288e86a9d1e58..f126ecbb0494d 100755
+--- a/scripts/sphinx-pre-install
++++ b/scripts/sphinx-pre-install
+@@ -78,6 +78,7 @@ my %texlive = (
+ 	'ucs.sty'            => 'texlive-ucs',
+ 	'upquote.sty'        => 'texlive-upquote',
+ 	'wrapfig.sty'        => 'texlive-wrapfig',
++	'ctexhook.sty'       => 'texlive-ctex',
+ );
+ 
+ #
+@@ -369,6 +370,9 @@ sub give_debian_hints()
+ 	);
+ 
+ 	if ($pdf) {
++		check_missing_file(["/usr/share/texlive/texmf-dist/tex/latex/ctex/ctexhook.sty"],
++				   "texlive-lang-chinese", 2);
++
+ 		check_missing_file(["/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"],
+ 				   "fonts-dejavu", 2);
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index dde4ecc0cd186..49b4f59db35e7 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -988,18 +988,22 @@ out:
+ static int selinux_add_opt(int token, const char *s, void **mnt_opts)
+ {
+ 	struct selinux_mnt_opts *opts = *mnt_opts;
++	bool is_alloc_opts = false;
+ 
+ 	if (token == Opt_seclabel)	/* eaten and completely ignored */
+ 		return 0;
+ 
++	if (!s)
++		return -ENOMEM;
++
+ 	if (!opts) {
+ 		opts = kzalloc(sizeof(struct selinux_mnt_opts), GFP_KERNEL);
+ 		if (!opts)
+ 			return -ENOMEM;
+ 		*mnt_opts = opts;
++		is_alloc_opts = true;
+ 	}
+-	if (!s)
+-		return -ENOMEM;
++
+ 	switch (token) {
+ 	case Opt_context:
+ 		if (opts->context || opts->defcontext)
+@@ -1024,6 +1028,10 @@ static int selinux_add_opt(int token, const char *s, void **mnt_opts)
+ 	}
+ 	return 0;
+ Einval:
++	if (is_alloc_opts) {
++		kfree(opts);
++		*mnt_opts = NULL;
++	}
+ 	pr_warn(SEL_MOUNT_FAIL_MSG);
+ 	return -EINVAL;
+ }
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index 537df1e98f8ac..d1e3055f2b6a5 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -62,10 +62,13 @@ static int snd_jack_dev_free(struct snd_device *device)
+ 	struct snd_card *card = device->card;
+ 	struct snd_jack_kctl *jack_kctl, *tmp_jack_kctl;
+ 
++	down_write(&card->controls_rwsem);
+ 	list_for_each_entry_safe(jack_kctl, tmp_jack_kctl, &jack->kctl_list, list) {
+ 		list_del_init(&jack_kctl->list);
+ 		snd_ctl_remove(card, jack_kctl->kctl);
+ 	}
++	up_write(&card->controls_rwsem);
++
+ 	if (jack->private_free)
+ 		jack->private_free(jack);
+ 
+diff --git a/sound/core/misc.c b/sound/core/misc.c
+index 3579dd7a161f7..50e4aaa6270d1 100644
+--- a/sound/core/misc.c
++++ b/sound/core/misc.c
+@@ -112,7 +112,7 @@ snd_pci_quirk_lookup_id(u16 vendor, u16 device,
+ {
+ 	const struct snd_pci_quirk *q;
+ 
+-	for (q = list; q->subvendor; q++) {
++	for (q = list; q->subvendor || q->subdevice; q++) {
+ 		if (q->subvendor != vendor)
+ 			continue;
+ 		if (!q->subdevice ||
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 20a0a4771b9a8..3ee9edf858156 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -2065,7 +2065,7 @@ static int snd_pcm_oss_set_trigger(struct snd_pcm_oss_file *pcm_oss_file, int tr
+ 	int err, cmd;
+ 
+ #ifdef OSS_DEBUG
+-	pcm_dbg(substream->pcm, "pcm_oss: trigger = 0x%x\n", trigger);
++	pr_debug("pcm_oss: trigger = 0x%x\n", trigger);
+ #endif
+ 	
+ 	psubstream = pcm_oss_file->streams[SNDRV_PCM_STREAM_PLAYBACK];
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index 6fd3677685d70..ba4a987ed1c62 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -810,7 +810,11 @@ EXPORT_SYMBOL(snd_pcm_new_internal);
+ static void free_chmap(struct snd_pcm_str *pstr)
+ {
+ 	if (pstr->chmap_kctl) {
+-		snd_ctl_remove(pstr->pcm->card, pstr->chmap_kctl);
++		struct snd_card *card = pstr->pcm->card;
++
++		down_write(&card->controls_rwsem);
++		snd_ctl_remove(card, pstr->chmap_kctl);
++		up_write(&card->controls_rwsem);
+ 		pstr->chmap_kctl = NULL;
+ 	}
+ }
+diff --git a/sound/core/seq/seq_queue.c b/sound/core/seq/seq_queue.c
+index d6c02dea976c8..bc933104c3eea 100644
+--- a/sound/core/seq/seq_queue.c
++++ b/sound/core/seq/seq_queue.c
+@@ -235,12 +235,15 @@ struct snd_seq_queue *snd_seq_queue_find_name(char *name)
+ 
+ /* -------------------------------------------------------- */
+ 
++#define MAX_CELL_PROCESSES_IN_QUEUE	1000
++
+ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
+ {
+ 	unsigned long flags;
+ 	struct snd_seq_event_cell *cell;
+ 	snd_seq_tick_time_t cur_tick;
+ 	snd_seq_real_time_t cur_time;
++	int processed = 0;
+ 
+ 	if (q == NULL)
+ 		return;
+@@ -263,6 +266,8 @@ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
+ 		if (!cell)
+ 			break;
+ 		snd_seq_dispatch_event(cell, atomic, hop);
++		if (++processed >= MAX_CELL_PROCESSES_IN_QUEUE)
++			goto out; /* the rest processed at the next batch */
+ 	}
+ 
+ 	/* Process time queue... */
+@@ -272,14 +277,19 @@ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
+ 		if (!cell)
+ 			break;
+ 		snd_seq_dispatch_event(cell, atomic, hop);
++		if (++processed >= MAX_CELL_PROCESSES_IN_QUEUE)
++			goto out; /* the rest processed at the next batch */
+ 	}
+ 
++ out:
+ 	/* free lock */
+ 	spin_lock_irqsave(&q->check_lock, flags);
+ 	if (q->check_again) {
+ 		q->check_again = 0;
+-		spin_unlock_irqrestore(&q->check_lock, flags);
+-		goto __again;
++		if (processed < MAX_CELL_PROCESSES_IN_QUEUE) {
++			spin_unlock_irqrestore(&q->check_lock, flags);
++			goto __again;
++		}
+ 	}
+ 	q->check_blocked = 0;
+ 	spin_unlock_irqrestore(&q->check_lock, flags);
+diff --git a/sound/hda/hdac_stream.c b/sound/hda/hdac_stream.c
+index 9867555883c34..aa7955fdf68a0 100644
+--- a/sound/hda/hdac_stream.c
++++ b/sound/hda/hdac_stream.c
+@@ -534,17 +534,11 @@ static void azx_timecounter_init(struct hdac_stream *azx_dev,
+ 	cc->mask = CLOCKSOURCE_MASK(32);
+ 
+ 	/*
+-	 * Converting from 24 MHz to ns means applying a 125/3 factor.
+-	 * To avoid any saturation issues in intermediate operations,
+-	 * the 125 factor is applied first. The division is applied
+-	 * last after reading the timecounter value.
+-	 * Applying the 1/3 factor as part of the multiplication
+-	 * requires at least 20 bits for a decent precision, however
+-	 * overflows occur after about 4 hours or less, not a option.
++	 * Calculate the optimal mult/shift values. The counter wraps
++	 * around after ~178.9 seconds.
+ 	 */
+-
+-	cc->mult = 125; /* saturation after 195 years */
+-	cc->shift = 0;
++	clocks_calc_mult_shift(&cc->mult, &cc->shift, 24000000,
++			       NSEC_PER_SEC, 178);
+ 
+ 	nsec = 0; /* audio time is elapsed time since trigger */
+ 	timecounter_init(tc, cc, nsec);
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 1c8bffc3eec6e..7153bd53e1893 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -156,6 +156,11 @@ static int hda_codec_driver_remove(struct device *dev)
+ 		return codec->bus->core.ext_ops->hdev_detach(&codec->core);
+ 	}
+ 
++	refcount_dec(&codec->pcm_ref);
++	snd_hda_codec_disconnect_pcms(codec);
++	wait_event(codec->remove_sleep, !refcount_read(&codec->pcm_ref));
++	snd_power_sync_ref(codec->bus->card);
++
+ 	if (codec->patch_ops.free)
+ 		codec->patch_ops.free(codec);
+ 	snd_hda_codec_cleanup_for_unbind(codec);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 0c4a337c9fc0d..7016b48227bf2 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -703,20 +703,10 @@ get_hda_cvt_setup(struct hda_codec *codec, hda_nid_t nid)
+ /*
+  * PCM device
+  */
+-static void release_pcm(struct kref *kref)
+-{
+-	struct hda_pcm *pcm = container_of(kref, struct hda_pcm, kref);
+-
+-	if (pcm->pcm)
+-		snd_device_free(pcm->codec->card, pcm->pcm);
+-	clear_bit(pcm->device, pcm->codec->bus->pcm_dev_bits);
+-	kfree(pcm->name);
+-	kfree(pcm);
+-}
+-
+ void snd_hda_codec_pcm_put(struct hda_pcm *pcm)
+ {
+-	kref_put(&pcm->kref, release_pcm);
++	if (refcount_dec_and_test(&pcm->codec->pcm_ref))
++		wake_up(&pcm->codec->remove_sleep);
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_codec_pcm_put);
+ 
+@@ -731,7 +721,6 @@ struct hda_pcm *snd_hda_codec_pcm_new(struct hda_codec *codec,
+ 		return NULL;
+ 
+ 	pcm->codec = codec;
+-	kref_init(&pcm->kref);
+ 	va_start(args, fmt);
+ 	pcm->name = kvasprintf(GFP_KERNEL, fmt, args);
+ 	va_end(args);
+@@ -741,6 +730,7 @@ struct hda_pcm *snd_hda_codec_pcm_new(struct hda_codec *codec,
+ 	}
+ 
+ 	list_add_tail(&pcm->list, &codec->pcm_list_head);
++	refcount_inc(&codec->pcm_ref);
+ 	return pcm;
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_codec_pcm_new);
+@@ -748,15 +738,31 @@ EXPORT_SYMBOL_GPL(snd_hda_codec_pcm_new);
+ /*
+  * codec destructor
+  */
++void snd_hda_codec_disconnect_pcms(struct hda_codec *codec)
++{
++	struct hda_pcm *pcm;
++
++	list_for_each_entry(pcm, &codec->pcm_list_head, list) {
++		if (pcm->disconnected)
++			continue;
++		if (pcm->pcm)
++			snd_device_disconnect(codec->card, pcm->pcm);
++		snd_hda_codec_pcm_put(pcm);
++		pcm->disconnected = 1;
++	}
++}
++
+ static void codec_release_pcms(struct hda_codec *codec)
+ {
+ 	struct hda_pcm *pcm, *n;
+ 
+ 	list_for_each_entry_safe(pcm, n, &codec->pcm_list_head, list) {
+-		list_del_init(&pcm->list);
++		list_del(&pcm->list);
+ 		if (pcm->pcm)
+-			snd_device_disconnect(codec->card, pcm->pcm);
+-		snd_hda_codec_pcm_put(pcm);
++			snd_device_free(pcm->codec->card, pcm->pcm);
++		clear_bit(pcm->device, pcm->codec->bus->pcm_dev_bits);
++		kfree(pcm->name);
++		kfree(pcm);
+ 	}
+ }
+ 
+@@ -769,6 +775,7 @@ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec)
+ 		codec->registered = 0;
+ 	}
+ 
++	snd_hda_codec_disconnect_pcms(codec);
+ 	cancel_delayed_work_sync(&codec->jackpoll_work);
+ 	if (!codec->in_freeing)
+ 		snd_hda_ctls_clear(codec);
+@@ -792,6 +799,7 @@ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec)
+ 	remove_conn_list(codec);
+ 	snd_hdac_regmap_exit(&codec->core);
+ 	codec->configured = 0;
++	refcount_set(&codec->pcm_ref, 1); /* reset refcount */
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_codec_cleanup_for_unbind);
+ 
+@@ -958,6 +966,8 @@ int snd_hda_codec_device_new(struct hda_bus *bus, struct snd_card *card,
+ 	snd_array_init(&codec->verbs, sizeof(struct hda_verb *), 8);
+ 	INIT_LIST_HEAD(&codec->conn_list);
+ 	INIT_LIST_HEAD(&codec->pcm_list_head);
++	refcount_set(&codec->pcm_ref, 1);
++	init_waitqueue_head(&codec->remove_sleep);
+ 
+ 	INIT_DELAYED_WORK(&codec->jackpoll_work, hda_jackpoll_work);
+ 	codec->depop_delay = -1;
+@@ -1727,8 +1737,11 @@ void snd_hda_ctls_clear(struct hda_codec *codec)
+ {
+ 	int i;
+ 	struct hda_nid_item *items = codec->mixers.list;
++
++	down_write(&codec->card->controls_rwsem);
+ 	for (i = 0; i < codec->mixers.used; i++)
+ 		snd_ctl_remove(codec->card, items[i].kctl);
++	up_write(&codec->card->controls_rwsem);
+ 	snd_array_free(&codec->mixers);
+ 	snd_array_free(&codec->nids);
+ }
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index 930ae4002a818..75dcb14ff20ad 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -504,7 +504,6 @@ static int azx_get_time_info(struct snd_pcm_substream *substream,
+ 		snd_pcm_gettime(substream->runtime, system_ts);
+ 
+ 		nsec = timecounter_read(&azx_dev->core.tc);
+-		nsec = div_u64(nsec, 3); /* can be optimized */
+ 		if (audio_tstamp_config->report_delay)
+ 			nsec = azx_adjust_codec_delay(substream, nsec);
+ 
+diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h
+index d22c96eb2f8fb..8621f576446b8 100644
+--- a/sound/pci/hda/hda_local.h
++++ b/sound/pci/hda/hda_local.h
+@@ -137,6 +137,7 @@ int __snd_hda_add_vmaster(struct hda_codec *codec, char *name,
+ int snd_hda_codec_reset(struct hda_codec *codec);
+ void snd_hda_codec_register(struct hda_codec *codec);
+ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec);
++void snd_hda_codec_disconnect_pcms(struct hda_codec *codec);
+ 
+ #define snd_hda_regmap_sync(codec)	snd_hdac_regmap_sync(&(codec)->core)
+ 
+diff --git a/sound/pci/hda/patch_cs8409-tables.c b/sound/pci/hda/patch_cs8409-tables.c
+index 0fb0a428428b4..df0b4522babf7 100644
+--- a/sound/pci/hda/patch_cs8409-tables.c
++++ b/sound/pci/hda/patch_cs8409-tables.c
+@@ -252,6 +252,7 @@ struct sub_codec cs8409_cs42l42_codec = {
+ 	.init_seq_num = ARRAY_SIZE(cs42l42_init_reg_seq),
+ 	.hp_jack_in = 0,
+ 	.mic_jack_in = 0,
++	.force_status_change = 1,
+ 	.paged = 1,
+ 	.suspended = 1,
+ 	.no_type_dect = 0,
+@@ -443,6 +444,7 @@ struct sub_codec dolphin_cs42l42_0 = {
+ 	.init_seq_num = ARRAY_SIZE(dolphin_c0_init_reg_seq),
+ 	.hp_jack_in = 0,
+ 	.mic_jack_in = 0,
++	.force_status_change = 1,
+ 	.paged = 1,
+ 	.suspended = 1,
+ 	.no_type_dect = 0,
+@@ -456,6 +458,7 @@ struct sub_codec dolphin_cs42l42_1 = {
+ 	.init_seq_num = ARRAY_SIZE(dolphin_c1_init_reg_seq),
+ 	.hp_jack_in = 0,
+ 	.mic_jack_in = 0,
++	.force_status_change = 1,
+ 	.paged = 1,
+ 	.suspended = 1,
+ 	.no_type_dect = 1,
+diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c
+index 039b9f2f8e947..aff2b5abb81ea 100644
+--- a/sound/pci/hda/patch_cs8409.c
++++ b/sound/pci/hda/patch_cs8409.c
+@@ -628,15 +628,17 @@ static void cs42l42_run_jack_detect(struct sub_codec *cs42l42)
+ 	cs8409_i2c_write(cs42l42, 0x1b74, 0x07);
+ 	cs8409_i2c_write(cs42l42, 0x131b, 0xFD);
+ 	cs8409_i2c_write(cs42l42, 0x1120, 0x80);
+-	/* Wait ~100us*/
+-	usleep_range(100, 200);
++	/* Wait ~20ms*/
++	usleep_range(20000, 25000);
+ 	cs8409_i2c_write(cs42l42, 0x111f, 0x77);
+ 	cs8409_i2c_write(cs42l42, 0x1120, 0xc0);
+ }
+ 
+ static int cs42l42_handle_tip_sense(struct sub_codec *cs42l42, unsigned int reg_ts_status)
+ {
+-	int status_changed = 0;
++	int status_changed = cs42l42->force_status_change;
++
++	cs42l42->force_status_change = 0;
+ 
+ 	/* TIP_SENSE INSERT/REMOVE */
+ 	switch (reg_ts_status) {
+@@ -791,6 +793,7 @@ static void cs42l42_suspend(struct sub_codec *cs42l42)
+ 	cs42l42->last_page = 0;
+ 	cs42l42->hp_jack_in = 0;
+ 	cs42l42->mic_jack_in = 0;
++	cs42l42->force_status_change = 1;
+ 
+ 	/* Put CS42L42 into Reset */
+ 	gpio_data = snd_hda_codec_read(codec, CS8409_PIN_AFG, 0, AC_VERB_GET_GPIO_DATA, 0);
+diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h
+index ade2b838590cf..d0b725c7285b6 100644
+--- a/sound/pci/hda/patch_cs8409.h
++++ b/sound/pci/hda/patch_cs8409.h
+@@ -305,6 +305,7 @@ struct sub_codec {
+ 
+ 	unsigned int hp_jack_in:1;
+ 	unsigned int mic_jack_in:1;
++	unsigned int force_status_change:1;
+ 	unsigned int suspended:1;
+ 	unsigned int paged:1;
+ 	unsigned int last_page;
+diff --git a/sound/soc/amd/Kconfig b/sound/soc/amd/Kconfig
+index 2c6af3f8f2961..ff535370e525a 100644
+--- a/sound/soc/amd/Kconfig
++++ b/sound/soc/amd/Kconfig
+@@ -68,7 +68,7 @@ config SND_SOC_AMD_VANGOGH_MACH
+ 	tristate "AMD Vangogh support for NAU8821 CS35L41"
+ 	select SND_SOC_NAU8821
+ 	select SND_SOC_CS35L41_SPI
+-	depends on SND_SOC_AMD_ACP5x && I2C
++	depends on SND_SOC_AMD_ACP5x && I2C && SPI_MASTER
+ 	help
+ 	  This option enables machine driver for Vangogh platform
+ 	  using NAU8821 and CS35L41 codecs.
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 326f2d611ad4e..3a610ba183ffb 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -241,8 +241,7 @@ config SND_SOC_ALL_CODECS
+ 	imply SND_SOC_UDA1380
+ 	imply SND_SOC_WCD9335
+ 	imply SND_SOC_WCD934X
+-	imply SND_SOC_WCD937X
+-	imply SND_SOC_WCD938X
++	imply SND_SOC_WCD938X_SDW
+ 	imply SND_SOC_LPASS_RX_MACRO
+ 	imply SND_SOC_LPASS_TX_MACRO
+ 	imply SND_SOC_WL1273
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 27a1c4c73074f..a63fba4e6c9c2 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -521,8 +521,25 @@ static int cs42l42_set_jack(struct snd_soc_component *component, struct snd_soc_
+ {
+ 	struct cs42l42_private *cs42l42 = snd_soc_component_get_drvdata(component);
+ 
++	/* Prevent race with interrupt handler */
++	mutex_lock(&cs42l42->jack_detect_mutex);
+ 	cs42l42->jack = jk;
+ 
++	if (jk) {
++		switch (cs42l42->hs_type) {
++		case CS42L42_PLUG_CTIA:
++		case CS42L42_PLUG_OMTP:
++			snd_soc_jack_report(jk, SND_JACK_HEADSET, SND_JACK_HEADSET);
++			break;
++		case CS42L42_PLUG_HEADPHONE:
++			snd_soc_jack_report(jk, SND_JACK_HEADPHONE, SND_JACK_HEADPHONE);
++			break;
++		default:
++			break;
++		}
++	}
++	mutex_unlock(&cs42l42->jack_detect_mutex);
++
+ 	return 0;
+ }
+ 
+@@ -1611,6 +1628,8 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+ 		CS42L42_M_DETECT_FT_MASK |
+ 		CS42L42_M_HSBIAS_HIZ_MASK);
+ 
++	mutex_lock(&cs42l42->jack_detect_mutex);
++
+ 	/* Check auto-detect status */
+ 	if ((~masks[5]) & irq_params_table[5].mask) {
+ 		if (stickies[5] & CS42L42_HSDET_AUTO_DONE_MASK) {
+@@ -1689,6 +1708,8 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+ 		}
+ 	}
+ 
++	mutex_unlock(&cs42l42->jack_detect_mutex);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -2033,6 +2054,7 @@ static int cs42l42_i2c_probe(struct i2c_client *i2c_client,
+ 
+ 	cs42l42->dev = &i2c_client->dev;
+ 	i2c_set_clientdata(i2c_client, cs42l42);
++	mutex_init(&cs42l42->jack_detect_mutex);
+ 
+ 	cs42l42->regmap = devm_regmap_init_i2c(i2c_client, &cs42l42_regmap);
+ 	if (IS_ERR(cs42l42->regmap)) {
+diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
+index f45bcc9a3a62f..02128ebf8989a 100644
+--- a/sound/soc/codecs/cs42l42.h
++++ b/sound/soc/codecs/cs42l42.h
+@@ -12,6 +12,7 @@
+ #ifndef __CS42L42_H__
+ #define __CS42L42_H__
+ 
++#include <linux/mutex.h>
+ #include <sound/jack.h>
+ 
+ #define CS42L42_PAGE_REGISTER	0x00	/* Page Select Register */
+@@ -838,6 +839,7 @@ struct  cs42l42_private {
+ 	struct gpio_desc *reset_gpio;
+ 	struct completion pdn_done;
+ 	struct snd_soc_jack *jack;
++	struct mutex jack_detect_mutex;
+ 	int pll_config;
+ 	int bclk;
+ 	u32 sclk;
+diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c
+index 0389b2bb360e2..2138f62e6af5d 100644
+--- a/sound/soc/codecs/rt5663.c
++++ b/sound/soc/codecs/rt5663.c
+@@ -3461,6 +3461,7 @@ static void rt5663_calibrate(struct rt5663_priv *rt5663)
+ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
+ {
+ 	int table_size;
++	int ret;
+ 
+ 	device_property_read_u32(dev, "realtek,dc_offset_l_manual",
+ 		&rt5663->pdata.dc_offset_l_manual);
+@@ -3477,9 +3478,11 @@ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
+ 		table_size = sizeof(struct impedance_mapping_table) *
+ 			rt5663->pdata.impedance_sensing_num;
+ 		rt5663->imp_table = devm_kzalloc(dev, table_size, GFP_KERNEL);
+-		device_property_read_u32_array(dev,
++		ret = device_property_read_u32_array(dev,
+ 			"realtek,impedance_sensing_table",
+ 			(u32 *)rt5663->imp_table, table_size);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	return 0;
+@@ -3504,8 +3507,11 @@ static int rt5663_i2c_probe(struct i2c_client *i2c,
+ 
+ 	if (pdata)
+ 		rt5663->pdata = *pdata;
+-	else
+-		rt5663_parse_dp(rt5663, &i2c->dev);
++	else {
++		ret = rt5663_parse_dp(rt5663, &i2c->dev);
++		if (ret)
++			return ret;
++	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(rt5663->supplies); i++)
+ 		rt5663->supplies[i].supply = rt5663_supply_names[i];
+diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
+index 24b41881a68f8..d7d1536a4f377 100644
+--- a/sound/soc/fsl/fsl_asrc.c
++++ b/sound/soc/fsl/fsl_asrc.c
+@@ -19,6 +19,7 @@
+ #include "fsl_asrc.h"
+ 
+ #define IDEAL_RATIO_DECIMAL_DEPTH 26
++#define DIVIDER_NUM  64
+ 
+ #define pair_err(fmt, ...) \
+ 	dev_err(&asrc->pdev->dev, "Pair %c: " fmt, 'A' + index, ##__VA_ARGS__)
+@@ -101,6 +102,55 @@ static unsigned char clk_map_imx8qxp[2][ASRC_CLK_MAP_LEN] = {
+ 	},
+ };
+ 
++/*
++ * According to RM, the divider range is 1 ~ 8,
++ * prescaler is power of 2 from 1 ~ 128.
++ */
++static int asrc_clk_divider[DIVIDER_NUM] = {
++	1,  2,  4,  8,  16,  32,  64,  128,  /* divider = 1 */
++	2,  4,  8, 16,  32,  64, 128,  256,  /* divider = 2 */
++	3,  6, 12, 24,  48,  96, 192,  384,  /* divider = 3 */
++	4,  8, 16, 32,  64, 128, 256,  512,  /* divider = 4 */
++	5, 10, 20, 40,  80, 160, 320,  640,  /* divider = 5 */
++	6, 12, 24, 48,  96, 192, 384,  768,  /* divider = 6 */
++	7, 14, 28, 56, 112, 224, 448,  896,  /* divider = 7 */
++	8, 16, 32, 64, 128, 256, 512, 1024,  /* divider = 8 */
++};
++
++/*
++ * Check if the divider is available for internal ratio mode
++ */
++static bool fsl_asrc_divider_avail(int clk_rate, int rate, int *div)
++{
++	u32 rem, i;
++	u64 n;
++
++	if (div)
++		*div = 0;
++
++	if (clk_rate == 0 || rate == 0)
++		return false;
++
++	n = clk_rate;
++	rem = do_div(n, rate);
++
++	if (div)
++		*div = n;
++
++	if (rem != 0)
++		return false;
++
++	for (i = 0; i < DIVIDER_NUM; i++) {
++		if (n == asrc_clk_divider[i])
++			break;
++	}
++
++	if (i == DIVIDER_NUM)
++		return false;
++
++	return true;
++}
++
+ /**
+  * fsl_asrc_sel_proc - Select the pre-processing and post-processing options
+  * @inrate: input sample rate
+@@ -330,12 +380,12 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	enum asrc_word_width input_word_width;
+ 	enum asrc_word_width output_word_width;
+ 	u32 inrate, outrate, indiv, outdiv;
+-	u32 clk_index[2], div[2], rem[2];
++	u32 clk_index[2], div[2];
+ 	u64 clk_rate;
+ 	int in, out, channels;
+ 	int pre_proc, post_proc;
+ 	struct clk *clk;
+-	bool ideal;
++	bool ideal, div_avail;
+ 
+ 	if (!config) {
+ 		pair_err("invalid pair config\n");
+@@ -415,8 +465,7 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	clk = asrc_priv->asrck_clk[clk_index[ideal ? OUT : IN]];
+ 
+ 	clk_rate = clk_get_rate(clk);
+-	rem[IN] = do_div(clk_rate, inrate);
+-	div[IN] = (u32)clk_rate;
++	div_avail = fsl_asrc_divider_avail(clk_rate, inrate, &div[IN]);
+ 
+ 	/*
+ 	 * The divider range is [1, 1024], defined by the hardware. For non-
+@@ -425,7 +474,7 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	 * only result in different converting speeds. So remainder does not
+ 	 * matter, as long as we keep the divider within its valid range.
+ 	 */
+-	if (div[IN] == 0 || (!ideal && (div[IN] > 1024 || rem[IN] != 0))) {
++	if (div[IN] == 0 || (!ideal && !div_avail)) {
+ 		pair_err("failed to support input sample rate %dHz by asrck_%x\n",
+ 				inrate, clk_index[ideal ? OUT : IN]);
+ 		return -EINVAL;
+@@ -436,13 +485,12 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ 	clk = asrc_priv->asrck_clk[clk_index[OUT]];
+ 	clk_rate = clk_get_rate(clk);
+ 	if (ideal && use_ideal_rate)
+-		rem[OUT] = do_div(clk_rate, IDEAL_RATIO_RATE);
++		div_avail = fsl_asrc_divider_avail(clk_rate, IDEAL_RATIO_RATE, &div[OUT]);
+ 	else
+-		rem[OUT] = do_div(clk_rate, outrate);
+-	div[OUT] = clk_rate;
++		div_avail = fsl_asrc_divider_avail(clk_rate, outrate, &div[OUT]);
+ 
+ 	/* Output divider has the same limitation as the input one */
+-	if (div[OUT] == 0 || (!ideal && (div[OUT] > 1024 || rem[OUT] != 0))) {
++	if (div[OUT] == 0 || (!ideal && !div_avail)) {
+ 		pair_err("failed to support output sample rate %dHz by asrck_%x\n",
+ 				outrate, clk_index[OUT]);
+ 		return -EINVAL;
+@@ -621,8 +669,7 @@ static void fsl_asrc_select_clk(struct fsl_asrc_priv *asrc_priv,
+ 			clk_index = asrc_priv->clk_map[j][i];
+ 			clk_rate = clk_get_rate(asrc_priv->asrck_clk[clk_index]);
+ 			/* Only match a perfect clock source with no remainder */
+-			if (clk_rate != 0 && (clk_rate / rate[j]) <= 1024 &&
+-			    (clk_rate % rate[j]) == 0)
++			if (fsl_asrc_divider_avail(clk_rate, rate[j], NULL))
+ 				break;
+ 		}
+ 
+diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
+index 27b4536dce443..ceaecbe3a25e4 100644
+--- a/sound/soc/fsl/fsl_mqs.c
++++ b/sound/soc/fsl/fsl_mqs.c
+@@ -337,4 +337,4 @@ module_platform_driver(fsl_mqs_driver);
+ MODULE_AUTHOR("Shengjiu Wang <Shengjiu.Wang@nxp.com>");
+ MODULE_DESCRIPTION("MQS codec driver");
+ MODULE_LICENSE("GPL v2");
+-MODULE_ALIAS("platform: fsl-mqs");
++MODULE_ALIAS("platform:fsl-mqs");
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index 6f06afd23b16a..c7503a5f7cfbc 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -120,7 +120,7 @@ struct imx_card_data {
+ 
+ static struct imx_akcodec_fs_mul ak4458_fs_mul[] = {
+ 	/* Normal, < 32kHz */
+-	{ .rmin = 8000,   .rmax = 24000,  .wmin = 1024, .wmax = 1024, },
++	{ .rmin = 8000,   .rmax = 24000,  .wmin = 256,  .wmax = 1024, },
+ 	/* Normal, 32kHz */
+ 	{ .rmin = 32000,  .rmax = 32000,  .wmin = 256,  .wmax = 1024, },
+ 	/* Normal */
+@@ -151,8 +151,8 @@ static struct imx_akcodec_fs_mul ak4497_fs_mul[] = {
+ 	 * Table 7      - mapping multiplier and speed mode
+ 	 * Tables 8 & 9 - mapping speed mode and LRCK fs
+ 	 */
+-	{ .rmin = 8000,   .rmax = 32000,  .wmin = 1024, .wmax = 1024, }, /* Normal, <= 32kHz */
+-	{ .rmin = 44100,  .rmax = 48000,  .wmin = 512,  .wmax = 512, }, /* Normal */
++	{ .rmin = 8000,   .rmax = 32000,  .wmin = 256,  .wmax = 1024, }, /* Normal, <= 32kHz */
++	{ .rmin = 44100,  .rmax = 48000,  .wmin = 256,  .wmax = 512, }, /* Normal */
+ 	{ .rmin = 88200,  .rmax = 96000,  .wmin = 256,  .wmax = 256, }, /* Double */
+ 	{ .rmin = 176400, .rmax = 192000, .wmin = 128,  .wmax = 128, }, /* Quad */
+ 	{ .rmin = 352800, .rmax = 384000, .wmin = 128,  .wmax = 128, }, /* Oct */
+@@ -164,7 +164,7 @@ static struct imx_akcodec_fs_mul ak4497_fs_mul[] = {
+  * (Table 4 from datasheet)
+  */
+ static struct imx_akcodec_fs_mul ak5558_fs_mul[] = {
+-	{ .rmin = 8000,   .rmax = 32000,  .wmin = 1024, .wmax = 1024, },
++	{ .rmin = 8000,   .rmax = 32000,  .wmin = 512,  .wmax = 1024, },
+ 	{ .rmin = 44100,  .rmax = 48000,  .wmin = 512,  .wmax = 512, },
+ 	{ .rmin = 88200,  .rmax = 96000,  .wmin = 256,  .wmax = 256, },
+ 	{ .rmin = 176400, .rmax = 192000, .wmin = 128,  .wmax = 128, },
+@@ -247,13 +247,14 @@ static bool codec_is_akcodec(unsigned int type)
+ }
+ 
+ static unsigned long akcodec_get_mclk_rate(struct snd_pcm_substream *substream,
+-					   struct snd_pcm_hw_params *params)
++					   struct snd_pcm_hw_params *params,
++					   int slots, int slot_width)
+ {
+ 	struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ 	struct imx_card_data *data = snd_soc_card_get_drvdata(rtd->card);
+ 	const struct imx_card_plat_data *plat_data = data->plat_data;
+ 	struct dai_link_data *link_data = &data->link_data[rtd->num];
+-	unsigned int width = link_data->slots * link_data->slot_width;
++	unsigned int width = slots * slot_width;
+ 	unsigned int rate = params_rate(params);
+ 	int i;
+ 
+@@ -349,7 +350,7 @@ static int imx_aif_hw_params(struct snd_pcm_substream *substream,
+ 
+ 	/* Set MCLK freq */
+ 	if (codec_is_akcodec(plat_data->type))
+-		mclk_freq = akcodec_get_mclk_rate(substream, params);
++		mclk_freq = akcodec_get_mclk_rate(substream, params, slots, slot_width);
+ 	else
+ 		mclk_freq = params_rate(params) * slots * slot_width;
+ 	/* Use the maximum freq from DSD512 (512*44100 = 22579200) */
+@@ -553,8 +554,23 @@ static int imx_card_parse_of(struct imx_card_data *data)
+ 			link_data->cpu_sysclk_id = FSL_SAI_CLK_MAST1;
+ 
+ 			/* sai may support mclk/bclk = 1 */
+-			if (of_find_property(np, "fsl,mclk-equal-bclk", NULL))
++			if (of_find_property(np, "fsl,mclk-equal-bclk", NULL)) {
+ 				link_data->one2one_ratio = true;
++			} else {
++				int i;
++
++				/*
++				 * i.MX8MQ don't support one2one ratio, then
++				 * with ak4497 only 16bit case is supported.
++				 */
++				for (i = 0; i < ARRAY_SIZE(ak4497_fs_mul); i++) {
++					if (ak4497_fs_mul[i].rmin == 705600 &&
++					    ak4497_fs_mul[i].rmax == 768000) {
++						ak4497_fs_mul[i].wmin = 32;
++						ak4497_fs_mul[i].wmax = 32;
++					}
++				}
++			}
+ 		}
+ 
+ 		link->cpus->of_node = args.np;
+diff --git a/sound/soc/fsl/imx-hdmi.c b/sound/soc/fsl/imx-hdmi.c
+index f10359a288005..929f69b758af4 100644
+--- a/sound/soc/fsl/imx-hdmi.c
++++ b/sound/soc/fsl/imx-hdmi.c
+@@ -145,6 +145,8 @@ static int imx_hdmi_probe(struct platform_device *pdev)
+ 	data->dai.capture_only = false;
+ 	data->dai.init = imx_hdmi_init;
+ 
++	put_device(&cpu_pdev->dev);
++
+ 	if (of_node_name_eq(cpu_np, "sai")) {
+ 		data->cpu_priv.sysclk_id[1] = FSL_SAI_CLK_MAST1;
+ 		data->cpu_priv.sysclk_id[0] = FSL_SAI_CLK_MAST1;
+diff --git a/sound/soc/generic/test-component.c b/sound/soc/generic/test-component.c
+index 85385a771d807..8fc97d3ff0110 100644
+--- a/sound/soc/generic/test-component.c
++++ b/sound/soc/generic/test-component.c
+@@ -532,13 +532,16 @@ static int test_driver_probe(struct platform_device *pdev)
+ 	struct device_node *node = dev->of_node;
+ 	struct device_node *ep;
+ 	const struct of_device_id *of_id = of_match_device(test_of_match, &pdev->dev);
+-	const struct test_adata *adata = of_id->data;
++	const struct test_adata *adata;
+ 	struct snd_soc_component_driver *cdriv;
+ 	struct snd_soc_dai_driver *ddriv;
+ 	struct test_dai_name *dname;
+ 	struct test_priv *priv;
+ 	int num, ret, i;
+ 
++	if (!of_id)
++		return -EINVAL;
++	adata = of_id->data;
+ 	num = of_graph_get_endpoint_count(node);
+ 	if (!num) {
+ 		dev_err(dev, "no port exits\n");
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 77219c3f8766c..54eefaff62a7e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -188,7 +188,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+ 					SOF_SDW_PCH_DMIC |
+-					RT711_JD2),
++					RT711_JD1),
+ 	},
+ 	{
+ 		/* NUC15 'Bishop County' LAPBC510 and LAPBC710 skews */
+diff --git a/sound/soc/intel/catpt/dsp.c b/sound/soc/intel/catpt/dsp.c
+index 9c5fd18f2600f..346bec0003066 100644
+--- a/sound/soc/intel/catpt/dsp.c
++++ b/sound/soc/intel/catpt/dsp.c
+@@ -65,6 +65,7 @@ static int catpt_dma_memcpy(struct catpt_dev *cdev, struct dma_chan *chan,
+ {
+ 	struct dma_async_tx_descriptor *desc;
+ 	enum dma_status status;
++	int ret;
+ 
+ 	desc = dmaengine_prep_dma_memcpy(chan, dst_addr, src_addr, size,
+ 					 DMA_CTRL_ACK);
+@@ -77,13 +78,22 @@ static int catpt_dma_memcpy(struct catpt_dev *cdev, struct dma_chan *chan,
+ 	catpt_updatel_shim(cdev, HMDC,
+ 			   CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id),
+ 			   CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id));
+-	dmaengine_submit(desc);
++
++	ret = dma_submit_error(dmaengine_submit(desc));
++	if (ret) {
++		dev_err(cdev->dev, "submit tx failed: %d\n", ret);
++		goto clear_hdda;
++	}
++
+ 	status = dma_wait_for_async_tx(desc);
++	ret = (status == DMA_COMPLETE) ? 0 : -EPROTO;
++
++clear_hdda:
+ 	/* regardless of status, disable access to HOST memory in demand mode */
+ 	catpt_updatel_shim(cdev, HMDC,
+ 			   CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id), 0);
+ 
+-	return (status == DMA_COMPLETE) ? 0 : -EPROTO;
++	return ret;
+ }
+ 
+ int catpt_dma_memcpy_todsp(struct catpt_dev *cdev, struct dma_chan *chan,
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index 9ecaf6a1e8475..e4aa366d356eb 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -1251,7 +1251,6 @@ static int skl_platform_soc_get_time_info(
+ 		snd_pcm_gettime(substream->runtime, system_ts);
+ 
+ 		nsec = timecounter_read(&hstr->tc);
+-		nsec = div_u64(nsec, 3); /* can be optimized */
+ 		if (audio_tstamp_config->report_delay)
+ 			nsec = skl_adjust_codec_delay(substream, nsec);
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+index fc94314bfc02f..3bdd4931316cd 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-max98090.c
++++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+@@ -180,6 +180,9 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(codec_node);
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+index 0f28dc2217c09..390da5bf727eb 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+@@ -218,6 +218,8 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+index 077c6ee067806..c8e4e85e10575 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
+@@ -285,6 +285,8 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650.c b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+index 2cbf679f5c74b..d8cf0802813a0 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
+@@ -323,6 +323,8 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
++
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+index a4d26a6fc8492..bda103211e0bd 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+@@ -781,7 +781,11 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	return devm_snd_soc_register_card(&pdev->dev, card);
++	ret = devm_snd_soc_register_card(&pdev->dev, card);
++
++	of_node_put(platform_node);
++	of_node_put(hdmi_codec);
++	return ret;
+ }
+ 
+ #ifdef CONFIG_OF
+diff --git a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+index aeb1af86047ef..9f0bf15fe465e 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+@@ -780,7 +780,12 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
+ 				 __func__, ret);
+ 	}
+ 
+-	return devm_snd_soc_register_card(&pdev->dev, card);
++	ret = devm_snd_soc_register_card(&pdev->dev, card);
++
++	of_node_put(platform_node);
++	of_node_put(ec_codec);
++	of_node_put(hdmi_codec);
++	return ret;
+ }
+ 
+ #ifdef CONFIG_OF
+diff --git a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+index a606133951b70..24a5d0adec1ba 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
++++ b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+@@ -1172,7 +1172,11 @@ static int mt8192_mt6359_dev_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	return devm_snd_soc_register_card(&pdev->dev, card);
++	ret = devm_snd_soc_register_card(&pdev->dev, card);
++
++	of_node_put(platform_node);
++	of_node_put(hdmi_codec);
++	return ret;
+ }
+ 
+ #ifdef CONFIG_OF
+diff --git a/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c b/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
+index 2bb05a828e8d2..15b4cae2524c1 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
++++ b/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
+@@ -3028,7 +3028,7 @@ static const struct reg_sequence mt8195_afe_reg_defaults[] = {
+ 
+ static const struct reg_sequence mt8195_cg_patch[] = {
+ 	{ AUDIO_TOP_CON0, 0xfffffffb },
+-	{ AUDIO_TOP_CON1, 0xfffffffa },
++	{ AUDIO_TOP_CON1, 0xfffffff8 },
+ };
+ 
+ static int mt8195_afe_init_registers(struct mtk_base_afe *afe)
+diff --git a/sound/soc/mediatek/mt8195/mt8195-dai-pcm.c b/sound/soc/mediatek/mt8195/mt8195-dai-pcm.c
+index 5d10d2c4c991c..151914c873acd 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-dai-pcm.c
++++ b/sound/soc/mediatek/mt8195/mt8195-dai-pcm.c
+@@ -80,8 +80,15 @@ static const struct snd_soc_dapm_widget mtk_dai_pcm_widgets[] = {
+ 			   mtk_dai_pcm_o001_mix,
+ 			   ARRAY_SIZE(mtk_dai_pcm_o001_mix)),
+ 
++	SND_SOC_DAPM_SUPPLY("PCM_EN", PCM_INTF_CON1,
++			    PCM_INTF_CON1_PCM_EN_SHIFT, 0, NULL, 0),
++
+ 	SND_SOC_DAPM_INPUT("PCM1_INPUT"),
+ 	SND_SOC_DAPM_OUTPUT("PCM1_OUTPUT"),
++
++	SND_SOC_DAPM_CLOCK_SUPPLY("aud_asrc11"),
++	SND_SOC_DAPM_CLOCK_SUPPLY("aud_asrc12"),
++	SND_SOC_DAPM_CLOCK_SUPPLY("aud_pcmif"),
+ };
+ 
+ static const struct snd_soc_dapm_route mtk_dai_pcm_routes[] = {
+@@ -97,22 +104,18 @@ static const struct snd_soc_dapm_route mtk_dai_pcm_routes[] = {
+ 	{"PCM1 Playback", NULL, "O000"},
+ 	{"PCM1 Playback", NULL, "O001"},
+ 
++	{"PCM1 Playback", NULL, "PCM_EN"},
++	{"PCM1 Playback", NULL, "aud_asrc12"},
++	{"PCM1 Playback", NULL, "aud_pcmif"},
++
++	{"PCM1 Capture", NULL, "PCM_EN"},
++	{"PCM1 Capture", NULL, "aud_asrc11"},
++	{"PCM1 Capture", NULL, "aud_pcmif"},
++
+ 	{"PCM1_OUTPUT", NULL, "PCM1 Playback"},
+ 	{"PCM1 Capture", NULL, "PCM1_INPUT"},
+ };
+ 
+-static void mtk_dai_pcm_enable(struct mtk_base_afe *afe)
+-{
+-	regmap_update_bits(afe->regmap, PCM_INTF_CON1,
+-			   PCM_INTF_CON1_PCM_EN, PCM_INTF_CON1_PCM_EN);
+-}
+-
+-static void mtk_dai_pcm_disable(struct mtk_base_afe *afe)
+-{
+-	regmap_update_bits(afe->regmap, PCM_INTF_CON1,
+-			   PCM_INTF_CON1_PCM_EN, 0x0);
+-}
+-
+ static int mtk_dai_pcm_configure(struct snd_pcm_substream *substream,
+ 				 struct snd_soc_dai *dai)
+ {
+@@ -207,54 +210,22 @@ static int mtk_dai_pcm_configure(struct snd_pcm_substream *substream,
+ }
+ 
+ /* dai ops */
+-static int mtk_dai_pcm_startup(struct snd_pcm_substream *substream,
+-			       struct snd_soc_dai *dai)
+-{
+-	struct mtk_base_afe *afe = snd_soc_dai_get_drvdata(dai);
+-	struct mt8195_afe_private *afe_priv = afe->platform_priv;
+-
+-	if (dai->component->active)
+-		return 0;
+-
+-	mt8195_afe_enable_clk(afe, afe_priv->clk[MT8195_CLK_AUD_ASRC11]);
+-	mt8195_afe_enable_clk(afe, afe_priv->clk[MT8195_CLK_AUD_ASRC12]);
+-	mt8195_afe_enable_clk(afe, afe_priv->clk[MT8195_CLK_AUD_PCMIF]);
+-
+-	return 0;
+-}
+-
+-static void mtk_dai_pcm_shutdown(struct snd_pcm_substream *substream,
+-				 struct snd_soc_dai *dai)
+-{
+-	struct mtk_base_afe *afe = snd_soc_dai_get_drvdata(dai);
+-	struct mt8195_afe_private *afe_priv = afe->platform_priv;
+-
+-	if (dai->component->active)
+-		return;
+-
+-	mtk_dai_pcm_disable(afe);
+-
+-	mt8195_afe_disable_clk(afe, afe_priv->clk[MT8195_CLK_AUD_PCMIF]);
+-	mt8195_afe_disable_clk(afe, afe_priv->clk[MT8195_CLK_AUD_ASRC12]);
+-	mt8195_afe_disable_clk(afe, afe_priv->clk[MT8195_CLK_AUD_ASRC11]);
+-}
+-
+ static int mtk_dai_pcm_prepare(struct snd_pcm_substream *substream,
+ 			       struct snd_soc_dai *dai)
+ {
+-	struct mtk_base_afe *afe = snd_soc_dai_get_drvdata(dai);
+-	int ret = 0;
++	int ret;
+ 
+-	if (snd_soc_dai_stream_active(dai, SNDRV_PCM_STREAM_PLAYBACK) &&
+-	    snd_soc_dai_stream_active(dai, SNDRV_PCM_STREAM_CAPTURE))
++	dev_dbg(dai->dev, "%s(), id %d, stream %d, widget active p %d, c %d\n",
++		__func__, dai->id, substream->stream,
++		dai->playback_widget->active, dai->capture_widget->active);
++
++	if (dai->playback_widget->active || dai->capture_widget->active)
+ 		return 0;
+ 
+ 	ret = mtk_dai_pcm_configure(substream, dai);
+ 	if (ret)
+ 		return ret;
+ 
+-	mtk_dai_pcm_enable(afe);
+-
+ 	return 0;
+ }
+ 
+@@ -316,8 +287,6 @@ static int mtk_dai_pcm_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ }
+ 
+ static const struct snd_soc_dai_ops mtk_dai_pcm_ops = {
+-	.startup	= mtk_dai_pcm_startup,
+-	.shutdown	= mtk_dai_pcm_shutdown,
+ 	.prepare	= mtk_dai_pcm_prepare,
+ 	.set_fmt	= mtk_dai_pcm_set_fmt,
+ };
+diff --git a/sound/soc/mediatek/mt8195/mt8195-reg.h b/sound/soc/mediatek/mt8195/mt8195-reg.h
+index d06f9cf85a4ec..d3871353db415 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-reg.h
++++ b/sound/soc/mediatek/mt8195/mt8195-reg.h
+@@ -2550,6 +2550,7 @@
+ #define PCM_INTF_CON1_PCM_FMT(x)       (((x) & 0x3) << 1)
+ #define PCM_INTF_CON1_PCM_FMT_MASK     (0x3 << 1)
+ #define PCM_INTF_CON1_PCM_EN           BIT(0)
++#define PCM_INTF_CON1_PCM_EN_SHIFT     0
+ 
+ /* PCM_INTF_CON2 */
+ #define PCM_INTF_CON2_CLK_DOMAIN_SEL(x)   (((x) & 0x3) << 23)
+diff --git a/sound/soc/samsung/idma.c b/sound/soc/samsung/idma.c
+index 66bcc2f97544b..c3f1b054e2389 100644
+--- a/sound/soc/samsung/idma.c
++++ b/sound/soc/samsung/idma.c
+@@ -360,6 +360,8 @@ static int preallocate_idma_buffer(struct snd_pcm *pcm, int stream)
+ 	buf->addr = idma.lp_tx_addr;
+ 	buf->bytes = idma_hardware.buffer_bytes_max;
+ 	buf->area = (unsigned char * __force)ioremap(buf->addr, buf->bytes);
++	if (!buf->area)
++		return -ENOMEM;
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index 13cd96e6724a4..2f3f4a733d9e6 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -20,9 +20,10 @@
+ #include "../../codecs/hdac_hda.h"
+ #endif /* CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC */
+ 
++#define CODEC_PROBE_RETRIES	3
++
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC)
+ #define IDISP_VID_INTEL	0x80860000
+-#define CODEC_PROBE_RETRIES 3
+ 
+ /* load the legacy HDA codec driver */
+ static int request_codec_module(struct hda_codec *codec)
+diff --git a/sound/soc/sof/intel/hda-pcm.c b/sound/soc/sof/intel/hda-pcm.c
+index cc8ddef37f37b..41cb60955f5c1 100644
+--- a/sound/soc/sof/intel/hda-pcm.c
++++ b/sound/soc/sof/intel/hda-pcm.c
+@@ -172,38 +172,74 @@ snd_pcm_uframes_t hda_dsp_pcm_pointer(struct snd_sof_dev *sdev,
+ 		goto found;
+ 	}
+ 
+-	/*
+-	 * DPIB/posbuf position mode:
+-	 * For Playback, Use DPIB register from HDA space which
+-	 * reflects the actual data transferred.
+-	 * For Capture, Use the position buffer for pointer, as DPIB
+-	 * is not accurate enough, its update may be completed
+-	 * earlier than the data written to DDR.
+-	 */
+-	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
++	switch (sof_hda_position_quirk) {
++	case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY:
++		/*
++		 * This legacy code, inherited from the Skylake driver,
++		 * mixes DPIB registers and DPIB DDR updates and
++		 * does not seem to follow any known hardware recommendations.
++		 * It's not clear e.g. why there is a different flow
++		 * for capture and playback, the only information that matters is
++		 * what traffic class is used, and on all SOF-enabled platforms
++		 * only VC0 is supported so the work-around was likely not necessary
++		 * and quite possibly wrong.
++		 */
++
++		/* DPIB/posbuf position mode:
++		 * For Playback, Use DPIB register from HDA space which
++		 * reflects the actual data transferred.
++		 * For Capture, Use the position buffer for pointer, as DPIB
++		 * is not accurate enough, its update may be completed
++		 * earlier than the data written to DDR.
++		 */
++		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
++			pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
++					       AZX_REG_VS_SDXDPIB_XBASE +
++					       (AZX_REG_VS_SDXDPIB_XINTERVAL *
++						hstream->index));
++		} else {
++			/*
++			 * For capture stream, we need more workaround to fix the
++			 * position incorrect issue:
++			 *
++			 * 1. Wait at least 20us before reading position buffer after
++			 * the interrupt generated(IOC), to make sure position update
++			 * happens on frame boundary i.e. 20.833uSec for 48KHz.
++			 * 2. Perform a dummy Read to DPIB register to flush DMA
++			 * position value.
++			 * 3. Read the DMA Position from posbuf. Now the readback
++			 * value should be >= period boundary.
++			 */
++			usleep_range(20, 21);
++			snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
++					 AZX_REG_VS_SDXDPIB_XBASE +
++					 (AZX_REG_VS_SDXDPIB_XINTERVAL *
++					  hstream->index));
++			pos = snd_hdac_stream_get_pos_posbuf(hstream);
++		}
++		break;
++	case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS:
++		/*
++		 * In case VC1 traffic is disabled this is the recommended option
++		 */
+ 		pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+ 				       AZX_REG_VS_SDXDPIB_XBASE +
+ 				       (AZX_REG_VS_SDXDPIB_XINTERVAL *
+ 					hstream->index));
+-	} else {
++		break;
++	case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE:
+ 		/*
+-		 * For capture stream, we need more workaround to fix the
+-		 * position incorrect issue:
+-		 *
+-		 * 1. Wait at least 20us before reading position buffer after
+-		 * the interrupt generated(IOC), to make sure position update
+-		 * happens on frame boundary i.e. 20.833uSec for 48KHz.
+-		 * 2. Perform a dummy Read to DPIB register to flush DMA
+-		 * position value.
+-		 * 3. Read the DMA Position from posbuf. Now the readback
+-		 * value should be >= period boundary.
++		 * This is the recommended option when VC1 is enabled.
++		 * While this isn't needed for SOF platforms it's added for
++		 * consistency and debug.
+ 		 */
+-		usleep_range(20, 21);
+-		snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR,
+-				 AZX_REG_VS_SDXDPIB_XBASE +
+-				 (AZX_REG_VS_SDXDPIB_XINTERVAL *
+-				  hstream->index));
+ 		pos = snd_hdac_stream_get_pos_posbuf(hstream);
++		break;
++	default:
++		dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n",
++			     sof_hda_position_quirk);
++		pos = 0;
++		break;
+ 	}
+ 
+ 	if (pos >= hstream->bufsize)
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 2c0d4d06ab364..25200a0e1dc9d 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -440,6 +440,10 @@ MODULE_PARM_DESC(use_msi, "SOF HDA use PCI MSI mode");
+ #define hda_use_msi	(1)
+ #endif
+ 
++int sof_hda_position_quirk = SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS;
++module_param_named(position_quirk, sof_hda_position_quirk, int, 0444);
++MODULE_PARM_DESC(position_quirk, "SOF HDaudio position quirk");
++
+ static char *hda_model;
+ module_param(hda_model, charp, 0444);
+ MODULE_PARM_DESC(hda_model, "Use the given HDA board model.");
+@@ -618,7 +622,10 @@ static int hda_init(struct snd_sof_dev *sdev)
+ 	/* HDA bus init */
+ 	sof_hda_bus_init(bus, &pci->dev);
+ 
+-	bus->use_posbuf = 1;
++	if (sof_hda_position_quirk == SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS)
++		bus->use_posbuf = 0;
++	else
++		bus->use_posbuf = 1;
+ 	bus->bdl_pos_adj = 0;
+ 	bus->sync_write = 1;
+ 
+diff --git a/sound/soc/sof/intel/hda.h b/sound/soc/sof/intel/hda.h
+index 1195018a1f4f5..dba4733ccf9ae 100644
+--- a/sound/soc/sof/intel/hda.h
++++ b/sound/soc/sof/intel/hda.h
+@@ -738,4 +738,10 @@ struct sof_ipc_dai_config;
+ int hda_ctrl_dai_widget_setup(struct snd_soc_dapm_widget *w);
+ int hda_ctrl_dai_widget_free(struct snd_soc_dapm_widget *w);
+ 
++#define SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY	(0) /* previous implementation */
++#define SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS	(1) /* recommended if VC0 only */
++#define SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE	(2) /* recommended with VC0 or VC1 */
++
++extern int sof_hda_position_quirk;
++
+ #endif
+diff --git a/sound/soc/sof/ipc.c b/sound/soc/sof/ipc.c
+index e6c53c6c470e4..ca30c506a0fd6 100644
+--- a/sound/soc/sof/ipc.c
++++ b/sound/soc/sof/ipc.c
+@@ -547,7 +547,8 @@ static void ipc_period_elapsed(struct snd_sof_dev *sdev, u32 msg_id)
+ 
+ 	if (spcm->pcm.compress)
+ 		snd_sof_compr_fragment_elapsed(stream->cstream);
+-	else if (!stream->substream->runtime->no_period_wakeup)
++	else if (stream->substream->runtime &&
++		 !stream->substream->runtime->no_period_wakeup)
+ 		/* only inform ALSA for period_wakeup mode */
+ 		snd_sof_pcm_period_elapsed(stream->substream);
+ }
+diff --git a/sound/soc/sof/pcm.c b/sound/soc/sof/pcm.c
+index fa0bfcd2474e0..216a1f576fc85 100644
+--- a/sound/soc/sof/pcm.c
++++ b/sound/soc/sof/pcm.c
+@@ -100,9 +100,8 @@ void snd_sof_pcm_period_elapsed(struct snd_pcm_substream *substream)
+ }
+ EXPORT_SYMBOL(snd_sof_pcm_period_elapsed);
+ 
+-static int sof_pcm_dsp_pcm_free(struct snd_pcm_substream *substream,
+-				struct snd_sof_dev *sdev,
+-				struct snd_sof_pcm *spcm)
++int sof_pcm_dsp_pcm_free(struct snd_pcm_substream *substream, struct snd_sof_dev *sdev,
++			 struct snd_sof_pcm *spcm)
+ {
+ 	struct sof_ipc_stream stream;
+ 	struct sof_ipc_reply reply;
+diff --git a/sound/soc/sof/sof-audio.c b/sound/soc/sof/sof-audio.c
+index 7cbe757c1fe29..ddd993351ce14 100644
+--- a/sound/soc/sof/sof-audio.c
++++ b/sound/soc/sof/sof-audio.c
+@@ -122,6 +122,14 @@ int sof_widget_free(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget)
+ 	case snd_soc_dapm_buffer:
+ 		ipc_free.hdr.cmd |= SOF_IPC_TPLG_BUFFER_FREE;
+ 		break;
++	case snd_soc_dapm_dai_in:
++	case snd_soc_dapm_dai_out:
++	{
++		struct snd_sof_dai *dai = swidget->private;
++
++		dai->configured = false;
++		fallthrough;
++	}
+ 	default:
+ 		ipc_free.hdr.cmd |= SOF_IPC_TPLG_COMP_FREE;
+ 		break;
+@@ -203,7 +211,8 @@ int sof_widget_setup(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget)
+ 		break;
+ 	case snd_soc_dapm_scheduler:
+ 		pipeline = swidget->private;
+-		ret = sof_load_pipeline_ipc(sdev, pipeline, &r);
++		ret = sof_ipc_tx_message(sdev->ipc, pipeline->hdr.cmd, pipeline,
++					 sizeof(*pipeline), &r, sizeof(r));
+ 		break;
+ 	default:
+ 		hdr = swidget->private;
+@@ -595,16 +604,25 @@ const struct sof_ipc_pipe_new *snd_sof_pipeline_find(struct snd_sof_dev *sdev,
+ 
+ int sof_set_up_pipelines(struct snd_sof_dev *sdev, bool verify)
+ {
++	struct sof_ipc_fw_version *v = &sdev->fw_ready.version;
+ 	struct snd_sof_widget *swidget;
+ 	struct snd_sof_route *sroute;
+ 	int ret;
+ 
+ 	/* restore pipeline components */
+-	list_for_each_entry_reverse(swidget, &sdev->widget_list, list) {
++	list_for_each_entry(swidget, &sdev->widget_list, list) {
+ 		/* only set up the widgets belonging to static pipelines */
+ 		if (!verify && swidget->dynamic_pipeline_widget)
+ 			continue;
+ 
++		/*
++		 * For older firmware, skip scheduler widgets in this loop,
++		 * sof_widget_setup() will be called in the 'complete pipeline' loop
++		 */
++		if (v->abi_version < SOF_ABI_VER(3, 19, 0) &&
++		    swidget->id == snd_soc_dapm_scheduler)
++			continue;
++
+ 		/* update DAI config. The IPC will be sent in sof_widget_setup() */
+ 		if (WIDGET_IS_DAI(swidget->id)) {
+ 			struct snd_sof_dai *dai = swidget->private;
+@@ -652,6 +670,12 @@ int sof_set_up_pipelines(struct snd_sof_dev *sdev, bool verify)
+ 			if (!verify && swidget->dynamic_pipeline_widget)
+ 				continue;
+ 
++			if (v->abi_version < SOF_ABI_VER(3, 19, 0)) {
++				ret = sof_widget_setup(sdev, swidget);
++				if (ret < 0)
++					return ret;
++			}
++
+ 			swidget->complete =
+ 				snd_sof_complete_pipeline(sdev, swidget);
+ 			break;
+@@ -664,11 +688,61 @@ int sof_set_up_pipelines(struct snd_sof_dev *sdev, bool verify)
+ }
+ 
+ /*
+- * This function doesn't free widgets during suspend. It only resets the set up status for all
+- * routes and use_count for all widgets.
++ * Free the PCM, its associated widgets and set the prepared flag to false for all PCMs that
++ * did not get suspended(ex: paused streams) so the widgets can be set up again during resume.
++ */
++static int sof_tear_down_left_over_pipelines(struct snd_sof_dev *sdev)
++{
++	struct snd_sof_widget *swidget;
++	struct snd_sof_pcm *spcm;
++	int dir, ret;
++
++	/*
++	 * free all PCMs and their associated DAPM widgets if their connected DAPM widget
++	 * list is not NULL. This should only be true for paused streams at this point.
++	 * This is equivalent to the handling of FE DAI suspend trigger for running streams.
++	 */
++	list_for_each_entry(spcm, &sdev->pcm_list, list)
++		for_each_pcm_streams(dir) {
++			struct snd_pcm_substream *substream = spcm->stream[dir].substream;
++
++			if (!substream || !substream->runtime)
++				continue;
++
++			if (spcm->stream[dir].list) {
++				ret = sof_pcm_dsp_pcm_free(substream, sdev, spcm);
++				if (ret < 0)
++					return ret;
++
++				ret = sof_widget_list_free(sdev, spcm, dir);
++				if (ret < 0) {
++					dev_err(sdev->dev, "failed to free widgets during suspend\n");
++					return ret;
++				}
++			}
++		}
++
++	/*
++	 * free any left over DAI widgets. This is equivalent to the handling of suspend trigger
++	 * for the BE DAI for running streams.
++	 */
++	list_for_each_entry(swidget, &sdev->widget_list, list)
++		if (WIDGET_IS_DAI(swidget->id) && swidget->use_count == 1) {
++			ret = sof_widget_free(sdev, swidget);
++			if (ret < 0)
++				return ret;
++		}
++
++	return 0;
++}
++
++/*
++ * For older firmware, this function doesn't free widgets for static pipelines during suspend.
++ * It only resets use_count for all widgets.
+  */
+ int sof_tear_down_pipelines(struct snd_sof_dev *sdev, bool verify)
+ {
++	struct sof_ipc_fw_version *v = &sdev->fw_ready.version;
+ 	struct snd_sof_widget *swidget;
+ 	struct snd_sof_route *sroute;
+ 	int ret;
+@@ -676,12 +750,18 @@ int sof_tear_down_pipelines(struct snd_sof_dev *sdev, bool verify)
+ 	/*
+ 	 * This function is called during suspend and for one-time topology verification during
+ 	 * first boot. In both cases, there is no need to protect swidget->use_count and
+-	 * sroute->setup because during suspend all streams are suspended and during topology
+-	 * loading the sound card unavailable to open PCMs.
++	 * sroute->setup because during suspend all running streams are suspended and during
++	 * topology loading the sound card unavailable to open PCMs.
+ 	 */
+-	list_for_each_entry_reverse(swidget, &sdev->widget_list, list) {
+-		if (!verify) {
++	list_for_each_entry(swidget, &sdev->widget_list, list) {
++		if (swidget->dynamic_pipeline_widget)
++			continue;
++
++		/* Do not free widgets for static pipelines with FW ABI older than 3.19 */
++		if (!verify && !swidget->dynamic_pipeline_widget &&
++		    v->abi_version < SOF_ABI_VER(3, 19, 0)) {
+ 			swidget->use_count = 0;
++			swidget->complete = 0;
+ 			continue;
+ 		}
+ 
+@@ -690,6 +770,19 @@ int sof_tear_down_pipelines(struct snd_sof_dev *sdev, bool verify)
+ 			return ret;
+ 	}
+ 
++	/*
++	 * Tear down all pipelines associated with PCMs that did not get suspended
++	 * and unset the prepare flag so that they can be set up again during resume.
++	 * Skip this step for older firmware.
++	 */
++	if (!verify && v->abi_version >= SOF_ABI_VER(3, 19, 0)) {
++		ret = sof_tear_down_left_over_pipelines(sdev);
++		if (ret < 0) {
++			dev_err(sdev->dev, "failed to tear down paused pipelines\n");
++			return ret;
++		}
++	}
++
+ 	list_for_each_entry(sroute, &sdev->route_list, list)
+ 		sroute->setup = false;
+ 
+diff --git a/sound/soc/sof/sof-audio.h b/sound/soc/sof/sof-audio.h
+index 05e98e231b85d..faa4f36ef0d16 100644
+--- a/sound/soc/sof/sof-audio.h
++++ b/sound/soc/sof/sof-audio.h
+@@ -184,10 +184,6 @@ void snd_sof_control_notify(struct snd_sof_dev *sdev,
+ int snd_sof_load_topology(struct snd_soc_component *scomp, const char *file);
+ int snd_sof_complete_pipeline(struct snd_sof_dev *sdev,
+ 			      struct snd_sof_widget *swidget);
+-
+-int sof_load_pipeline_ipc(struct snd_sof_dev *sdev,
+-			  struct sof_ipc_pipe_new *pipeline,
+-			  struct sof_ipc_comp_reply *r);
+ int sof_pipeline_core_enable(struct snd_sof_dev *sdev,
+ 			     const struct snd_sof_widget *swidget);
+ 
+@@ -271,4 +267,6 @@ int sof_widget_free(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget);
+ /* PCM */
+ int sof_widget_list_setup(struct snd_sof_dev *sdev, struct snd_sof_pcm *spcm, int dir);
+ int sof_widget_list_free(struct snd_sof_dev *sdev, struct snd_sof_pcm *spcm, int dir);
++int sof_pcm_dsp_pcm_free(struct snd_pcm_substream *substream, struct snd_sof_dev *sdev,
++			 struct snd_sof_pcm *spcm);
+ #endif
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index bb9e62bbe5db9..136ffc3b050bd 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1690,23 +1690,6 @@ err:
+ /*
+  * Pipeline Topology
+  */
+-int sof_load_pipeline_ipc(struct snd_sof_dev *sdev,
+-			  struct sof_ipc_pipe_new *pipeline,
+-			  struct sof_ipc_comp_reply *r)
+-{
+-	int ret = sof_core_enable(sdev, pipeline->core);
+-
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = sof_ipc_tx_message(sdev->ipc, pipeline->hdr.cmd, pipeline,
+-				 sizeof(*pipeline), r, sizeof(*r));
+-	if (ret < 0)
+-		dev_err(sdev->dev, "error: load pipeline ipc failure\n");
+-
+-	return ret;
+-}
+-
+ static int sof_widget_load_pipeline(struct snd_soc_component *scomp, int index,
+ 				    struct snd_sof_widget *swidget,
+ 				    struct snd_soc_tplg_dapm_widget *tw)
+diff --git a/sound/soc/uniphier/Kconfig b/sound/soc/uniphier/Kconfig
+index aa3592ee1358b..ddfa6424c656b 100644
+--- a/sound/soc/uniphier/Kconfig
++++ b/sound/soc/uniphier/Kconfig
+@@ -23,7 +23,6 @@ config SND_SOC_UNIPHIER_LD11
+ 	tristate "UniPhier LD11/LD20 Device Driver"
+ 	depends on SND_SOC_UNIPHIER
+ 	select SND_SOC_UNIPHIER_AIO
+-	select SND_SOC_UNIPHIER_AIO_DMA
+ 	help
+ 	  This adds ASoC driver for Socionext UniPhier LD11/LD20
+ 	  input and output that can be used with other codecs.
+@@ -34,7 +33,6 @@ config SND_SOC_UNIPHIER_PXS2
+ 	tristate "UniPhier PXs2 Device Driver"
+ 	depends on SND_SOC_UNIPHIER
+ 	select SND_SOC_UNIPHIER_AIO
+-	select SND_SOC_UNIPHIER_AIO_DMA
+ 	help
+ 	  This adds ASoC driver for Socionext UniPhier PXs2
+ 	  input and output that can be used with other codecs.
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index f5e676a51b30d..405dc0bf6678c 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -375,7 +375,7 @@ static int parse_uac2_sample_rate_range(struct snd_usb_audio *chip,
+ 		for (rate = min; rate <= max; rate += res) {
+ 
+ 			/* Filter out invalid rates on Presonus Studio 1810c */
+-			if (chip->usb_id == USB_ID(0x0194f, 0x010c) &&
++			if (chip->usb_id == USB_ID(0x194f, 0x010c) &&
+ 			    !s1810c_valid_sample_rate(fp, rate))
+ 				goto skip_rate;
+ 
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 823b6b8de942d..d48729e6a3b0a 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3254,7 +3254,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		err = snd_rme_controls_create(mixer);
+ 		break;
+ 
+-	case USB_ID(0x0194f, 0x010c): /* Presonus Studio 1810c */
++	case USB_ID(0x194f, 0x010c): /* Presonus Studio 1810c */
+ 		err = snd_sc1810_init_mixer(mixer);
+ 		break;
+ 	case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 64e1c20311ed4..ab9f3da49941f 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1290,7 +1290,7 @@ int snd_usb_apply_interface_quirk(struct snd_usb_audio *chip,
+ 	if (chip->usb_id == USB_ID(0x0763, 0x2012))
+ 		return fasttrackpro_skip_setting_quirk(chip, iface, altno);
+ 	/* presonus studio 1810c: skip altsets incompatible with device_setup */
+-	if (chip->usb_id == USB_ID(0x0194f, 0x010c))
++	if (chip->usb_id == USB_ID(0x194f, 0x010c))
+ 		return s1810c_skip_setting_quirk(chip, iface, altno);
+ 
+ 
+diff --git a/tools/bpf/bpftool/Documentation/Makefile b/tools/bpf/bpftool/Documentation/Makefile
+index c49487905cebe..f89929c7038d5 100644
+--- a/tools/bpf/bpftool/Documentation/Makefile
++++ b/tools/bpf/bpftool/Documentation/Makefile
+@@ -1,6 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ include ../../../scripts/Makefile.include
+-include ../../../scripts/utilities.mak
+ 
+ INSTALL ?= install
+ RM ?= rm -f
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-btf.rst b/tools/bpf/bpftool/Documentation/bpftool-btf.rst
+index 88b28aa7431f6..4425d942dd39a 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-btf.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-btf.rst
+@@ -13,7 +13,7 @@ SYNOPSIS
+ 	**bpftool** [*OPTIONS*] **btf** *COMMAND*
+ 
+ 	*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] | {**-d** | **--debug** } |
+-		{ **-B** | **--base-btf** } }
++	{ **-B** | **--base-btf** } }
+ 
+ 	*COMMANDS* := { **dump** | **help** }
+ 
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
+index 3e4395eede4f7..13a217a2503d8 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
+@@ -13,7 +13,7 @@ SYNOPSIS
+ 	**bpftool** [*OPTIONS*] **cgroup** *COMMAND*
+ 
+ 	*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-d** | **--debug** } |
+-		{ **-f** | **--bpffs** } }
++	{ **-f** | **--bpffs** } }
+ 
+ 	*COMMANDS* :=
+ 	{ **show** | **list** | **tree** | **attach** | **detach** | **help** }
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-gen.rst b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
+index 2ef2f2df02799..2a137f8a4cea0 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-gen.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
+@@ -13,7 +13,7 @@ SYNOPSIS
+ 	**bpftool** [*OPTIONS*] **gen** *COMMAND*
+ 
+ 	*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-d** | **--debug** } |
+-		{ **-L** | **--use-loader** } }
++	{ **-L** | **--use-loader** } }
+ 
+ 	*COMMAND* := { **object** | **skeleton** | **help** }
+ 
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-link.rst b/tools/bpf/bpftool/Documentation/bpftool-link.rst
+index 0de90f086238c..9434349636a5e 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-link.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-link.rst
+@@ -13,7 +13,7 @@ SYNOPSIS
+ 	**bpftool** [*OPTIONS*] **link** *COMMAND*
+ 
+ 	*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-d** | **--debug** } |
+-		{ **-f** | **--bpffs** } | { **-n** | **--nomount** } }
++	{ **-f** | **--bpffs** } | { **-n** | **--nomount** } }
+ 
+ 	*COMMANDS* := { **show** | **list** | **pin** | **help** }
+ 
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-map.rst b/tools/bpf/bpftool/Documentation/bpftool-map.rst
+index d0c4abe08abab..1445cadc15d4c 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-map.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-map.rst
+@@ -13,11 +13,11 @@ SYNOPSIS
+ 	**bpftool** [*OPTIONS*] **map** *COMMAND*
+ 
+ 	*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-d** | **--debug** } |
+-		{ **-f** | **--bpffs** } | { **-n** | **--nomount** } }
++	{ **-f** | **--bpffs** } | { **-n** | **--nomount** } }
+ 
+ 	*COMMANDS* :=
+-	{ **show** | **list** | **create** | **dump** | **update** | **lookup** | **getnext**
+-	| **delete** | **pin** | **help** }
++	{ **show** | **list** | **create** | **dump** | **update** | **lookup** | **getnext** |
++	**delete** | **pin** | **help** }
+ 
+ MAP COMMANDS
+ =============
+diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+index 91608cb7e44a0..f27265bd589b4 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+@@ -13,12 +13,12 @@ SYNOPSIS
+ 	**bpftool** [*OPTIONS*] **prog** *COMMAND*
+ 
+ 	*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-d** | **--debug** } |
+-		{ **-f** | **--bpffs** } | { **-m** | **--mapcompat** } | { **-n** | **--nomount** } |
+-		{ **-L** | **--use-loader** } }
++	{ **-f** | **--bpffs** } | { **-m** | **--mapcompat** } | { **-n** | **--nomount** } |
++	{ **-L** | **--use-loader** } }
+ 
+ 	*COMMANDS* :=
+-	{ **show** | **list** | **dump xlated** | **dump jited** | **pin** | **load**
+-	| **loadall** | **help** }
++	{ **show** | **list** | **dump xlated** | **dump jited** | **pin** | **load** |
++	**loadall** | **help** }
+ 
+ PROG COMMANDS
+ =============
+diff --git a/tools/bpf/bpftool/Documentation/bpftool.rst b/tools/bpf/bpftool/Documentation/bpftool.rst
+index bb23f55bb05ad..8ac86565c501e 100644
+--- a/tools/bpf/bpftool/Documentation/bpftool.rst
++++ b/tools/bpf/bpftool/Documentation/bpftool.rst
+@@ -19,14 +19,14 @@ SYNOPSIS
+ 	*OBJECT* := { **map** | **program** | **cgroup** | **perf** | **net** | **feature** }
+ 
+ 	*OPTIONS* := { { **-V** | **--version** } |
+-		{ **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-d** | **--debug** } }
++	{ **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-d** | **--debug** } }
+ 
+ 	*MAP-COMMANDS* :=
+ 	{ **show** | **list** | **create** | **dump** | **update** | **lookup** | **getnext** |
+-		**delete** | **pin** | **event_pipe** | **help** }
++	**delete** | **pin** | **event_pipe** | **help** }
+ 
+ 	*PROG-COMMANDS* := { **show** | **list** | **dump jited** | **dump xlated** | **pin** |
+-		**load** | **attach** | **detach** | **help** }
++	**load** | **attach** | **detach** | **help** }
+ 
+ 	*CGROUP-COMMANDS* := { **show** | **list** | **attach** | **detach** | **help** }
+ 
+diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
+index 7cfba11c30146..b83dbd3cb9c7b 100644
+--- a/tools/bpf/bpftool/Makefile
++++ b/tools/bpf/bpftool/Makefile
+@@ -1,6 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ include ../../scripts/Makefile.include
+-include ../../scripts/utilities.mak
+ 
+ ifeq ($(srctree),)
+ srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index 28237d7cef67f..8fbcff9d557d9 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -400,6 +400,8 @@ int main(int argc, char **argv)
+ 	};
+ 	int opt, ret;
+ 
++	setlinebuf(stdout);
++
+ 	last_do_help = do_help;
+ 	pretty_output = false;
+ 	json_output = false;
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 515d229526026..6ccd17b8eb560 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -639,8 +639,8 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ 	char func_sig[1024];
+ 	unsigned char *buf;
+ 	__u32 member_len;
++	int fd, err = -1;
+ 	ssize_t n;
+-	int fd;
+ 
+ 	if (mode == DUMP_JITED) {
+ 		if (info->jited_prog_len == 0 || !info->jited_prog_insns) {
+@@ -679,7 +679,7 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ 		if (fd < 0) {
+ 			p_err("can't open file %s: %s", filepath,
+ 			      strerror(errno));
+-			return -1;
++			goto exit_free;
+ 		}
+ 
+ 		n = write(fd, buf, member_len);
+@@ -687,7 +687,7 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ 		if (n != (ssize_t)member_len) {
+ 			p_err("error writing output file: %s",
+ 			      n < 0 ? strerror(errno) : "short write");
+-			return -1;
++			goto exit_free;
+ 		}
+ 
+ 		if (json_output)
+@@ -701,7 +701,7 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ 						     info->netns_ino,
+ 						     &disasm_opt);
+ 			if (!name)
+-				return -1;
++				goto exit_free;
+ 		}
+ 
+ 		if (info->nr_jited_func_lens && info->jited_func_lens) {
+@@ -796,9 +796,12 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ 		kernel_syms_destroy(&dd);
+ 	}
+ 
+-	btf__free(btf);
++	err = 0;
+ 
+-	return 0;
++exit_free:
++	btf__free(btf);
++	bpf_prog_linfo__free(prog_linfo);
++	return err;
+ }
+ 
+ static int do_dump(int argc, char **argv)
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index 73409e27be01f..5d26f3c6f918e 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -168,7 +168,7 @@ static struct btf_id *btf_id__find(struct rb_root *root, const char *name)
+ 	return NULL;
+ }
+ 
+-static struct btf_id*
++static struct btf_id *
+ btf_id__add(struct rb_root *root, char *name, bool unique)
+ {
+ 	struct rb_node **p = &root->rb_node;
+@@ -732,7 +732,8 @@ int main(int argc, const char **argv)
+ 	if (obj.efile.idlist_shndx == -1 ||
+ 	    obj.efile.symbols_shndx == -1) {
+ 		pr_debug("Cannot find .BTF_ids or symbols sections, nothing to do\n");
+-		return 0;
++		err = 0;
++		goto out;
+ 	}
+ 
+ 	if (symbols_collect(&obj))
+diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h
+index 3430667b0d241..3e2c6f2ed587f 100644
+--- a/tools/include/nolibc/nolibc.h
++++ b/tools/include/nolibc/nolibc.h
+@@ -399,16 +399,22 @@ struct stat {
+ })
+ 
+ /* startup code */
++/*
++ * x86-64 System V ABI mandates:
++ * 1) %rsp must be 16-byte aligned right before the function call.
++ * 2) The deepest stack frame should be zero (the %rbp).
++ *
++ */
+ asm(".section .text\n"
+     ".global _start\n"
+     "_start:\n"
+     "pop %rdi\n"                // argc   (first arg, %rdi)
+     "mov %rsp, %rsi\n"          // argv[] (second arg, %rsi)
+     "lea 8(%rsi,%rdi,8),%rdx\n" // then a NULL then envp (third arg, %rdx)
+-    "and $-16, %rsp\n"          // x86 ABI : esp must be 16-byte aligned when
+-    "sub $8, %rsp\n"            // entering the callee
++    "xor %ebp, %ebp\n"          // zero the stack frame
++    "and $-16, %rsp\n"          // x86 ABI : esp must be 16-byte aligned before call
+     "call main\n"               // main() returns the status code, we'll exit with it.
+-    "movzb %al, %rdi\n"         // retrieve exit code from 8 lower bits
++    "mov %eax, %edi\n"          // retrieve exit code (32 bit)
+     "mov $60, %rax\n"           // NR_exit == 60
+     "syscall\n"                 // really exit
+     "hlt\n"                     // ensure it does not return
+@@ -577,20 +583,28 @@ struct sys_stat_struct {
+ })
+ 
+ /* startup code */
++/*
++ * i386 System V ABI mandates:
++ * 1) last pushed argument must be 16-byte aligned.
++ * 2) The deepest stack frame should be set to zero
++ *
++ */
+ asm(".section .text\n"
+     ".global _start\n"
+     "_start:\n"
+     "pop %eax\n"                // argc   (first arg, %eax)
+     "mov %esp, %ebx\n"          // argv[] (second arg, %ebx)
+     "lea 4(%ebx,%eax,4),%ecx\n" // then a NULL then envp (third arg, %ecx)
+-    "and $-16, %esp\n"          // x86 ABI : esp must be 16-byte aligned when
++    "xor %ebp, %ebp\n"          // zero the stack frame
++    "and $-16, %esp\n"          // x86 ABI : esp must be 16-byte aligned before
++    "sub $4, %esp\n"            // the call instruction (args are aligned)
+     "push %ecx\n"               // push all registers on the stack so that we
+     "push %ebx\n"               // support both regparm and plain stack modes
+     "push %eax\n"
+     "call main\n"               // main() returns the status code in %eax
+-    "movzbl %al, %ebx\n"        // retrieve exit code from lower 8 bits
+-    "movl   $1, %eax\n"         // NR_exit == 1
+-    "int    $0x80\n"            // exit now
++    "mov %eax, %ebx\n"          // retrieve exit code (32-bit int)
++    "movl $1, %eax\n"           // NR_exit == 1
++    "int $0x80\n"               // exit now
+     "hlt\n"                     // ensure it does not
+     "");
+ 
+@@ -774,7 +788,6 @@ asm(".section .text\n"
+     "and %r3, %r1, $-8\n"         // AAPCS : sp must be 8-byte aligned in the
+     "mov %sp, %r3\n"              //         callee, an bl doesn't push (lr=pc)
+     "bl main\n"                   // main() returns the status code, we'll exit with it.
+-    "and %r0, %r0, $0xff\n"       // limit exit code to 8 bits
+     "movs r7, $1\n"               // NR_exit == 1
+     "svc $0x00\n"
+     "");
+@@ -971,7 +984,6 @@ asm(".section .text\n"
+     "add x2, x2, x1\n"            //           + argv
+     "and sp, x1, -16\n"           // sp must be 16-byte aligned in the callee
+     "bl main\n"                   // main() returns the status code, we'll exit with it.
+-    "and x0, x0, 0xff\n"          // limit exit code to 8 bits
+     "mov x8, 93\n"                // NR_exit == 93
+     "svc #0\n"
+     "");
+@@ -1176,7 +1188,7 @@ asm(".section .text\n"
+     "addiu $sp,$sp,-16\n"         // the callee expects to save a0..a3 there!
+     "jal main\n"                  // main() returns the status code, we'll exit with it.
+     "nop\n"                       // delayed slot
+-    "and $a0, $v0, 0xff\n"        // limit exit code to 8 bits
++    "move $a0, $v0\n"             // retrieve 32-bit exit code from v0
+     "li $v0, 4001\n"              // NR_exit == 4001
+     "syscall\n"
+     ".end __start\n"
+@@ -1374,7 +1386,6 @@ asm(".section .text\n"
+     "add   a2,a2,a1\n"           //             + argv
+     "andi  sp,a1,-16\n"          // sp must be 16-byte aligned
+     "call  main\n"               // main() returns the status code, we'll exit with it.
+-    "andi  a0, a0, 0xff\n"       // limit exit code to 8 bits
+     "li a7, 93\n"                // NR_exit == 93
+     "ecall\n"
+     "");
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index ba5af15e25f5c..b12cfceddb6e9 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -1744,7 +1744,7 @@ union bpf_attr {
+  * 		if the maximum number of tail calls has been reached for this
+  * 		chain of programs. This limit is defined in the kernel by the
+  * 		macro **MAX_TAIL_CALL_CNT** (not accessible to user space),
+- * 		which is currently set to 32.
++ *		which is currently set to 33.
+  * 	Return
+  * 		0 on success, or a negative error in case of failure.
+  *
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 7e4c5586bd877..dc86259980231 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -2711,15 +2711,11 @@ void btf_ext__free(struct btf_ext *btf_ext)
+ 	free(btf_ext);
+ }
+ 
+-struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
++struct btf_ext *btf_ext__new(const __u8 *data, __u32 size)
+ {
+ 	struct btf_ext *btf_ext;
+ 	int err;
+ 
+-	err = btf_ext_parse_hdr(data, size);
+-	if (err)
+-		return libbpf_err_ptr(err);
+-
+ 	btf_ext = calloc(1, sizeof(struct btf_ext));
+ 	if (!btf_ext)
+ 		return libbpf_err_ptr(-ENOMEM);
+@@ -2732,6 +2728,10 @@ struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
+ 	}
+ 	memcpy(btf_ext->data, data, size);
+ 
++	err = btf_ext_parse_hdr(btf_ext->data, size);
++	if (err)
++		goto done;
++
+ 	if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, line_info_len)) {
+ 		err = -EINVAL;
+ 		goto done;
+@@ -3443,8 +3443,8 @@ static long btf_hash_struct(struct btf_type *t)
+ }
+ 
+ /*
+- * Check structural compatibility of two FUNC_PROTOs, ignoring referenced type
+- * IDs. This check is performed during type graph equivalence check and
++ * Check structural compatibility of two STRUCTs/UNIONs, ignoring referenced
++ * type IDs. This check is performed during type graph equivalence check and
+  * referenced types equivalence is checked separately.
+  */
+ static bool btf_shallow_equal_struct(struct btf_type *t1, struct btf_type *t2)
+@@ -3817,6 +3817,31 @@ static int btf_dedup_identical_arrays(struct btf_dedup *d, __u32 id1, __u32 id2)
+ 	return btf_equal_array(t1, t2);
+ }
+ 
++/* Check if given two types are identical STRUCT/UNION definitions */
++static bool btf_dedup_identical_structs(struct btf_dedup *d, __u32 id1, __u32 id2)
++{
++	const struct btf_member *m1, *m2;
++	struct btf_type *t1, *t2;
++	int n, i;
++
++	t1 = btf_type_by_id(d->btf, id1);
++	t2 = btf_type_by_id(d->btf, id2);
++
++	if (!btf_is_composite(t1) || btf_kind(t1) != btf_kind(t2))
++		return false;
++
++	if (!btf_shallow_equal_struct(t1, t2))
++		return false;
++
++	m1 = btf_members(t1);
++	m2 = btf_members(t2);
++	for (i = 0, n = btf_vlen(t1); i < n; i++, m1++, m2++) {
++		if (m1->type != m2->type)
++			return false;
++	}
++	return true;
++}
++
+ /*
+  * Check equivalence of BTF type graph formed by candidate struct/union (we'll
+  * call it "candidate graph" in this description for brevity) to a type graph
+@@ -3928,6 +3953,8 @@ static int btf_dedup_is_equiv(struct btf_dedup *d, __u32 cand_id,
+ 
+ 	hypot_type_id = d->hypot_map[canon_id];
+ 	if (hypot_type_id <= BTF_MAX_NR_TYPES) {
++		if (hypot_type_id == cand_id)
++			return 1;
+ 		/* In some cases compiler will generate different DWARF types
+ 		 * for *identical* array type definitions and use them for
+ 		 * different fields within the *same* struct. This breaks type
+@@ -3936,8 +3963,18 @@ static int btf_dedup_is_equiv(struct btf_dedup *d, __u32 cand_id,
+ 		 * types within a single CU. So work around that by explicitly
+ 		 * allowing identical array types here.
+ 		 */
+-		return hypot_type_id == cand_id ||
+-		       btf_dedup_identical_arrays(d, hypot_type_id, cand_id);
++		if (btf_dedup_identical_arrays(d, hypot_type_id, cand_id))
++			return 1;
++		/* It turns out that similar situation can happen with
++		 * struct/union sometimes, sigh... Handle the case where
++		 * structs/unions are exactly the same, down to the referenced
++		 * type IDs. Anything more complicated (e.g., if referenced
++		 * types are different, but equivalent) is *way more*
++		 * complicated and requires a many-to-many equivalence mapping.
++		 */
++		if (btf_dedup_identical_structs(d, hypot_type_id, cand_id))
++			return 1;
++		return 0;
+ 	}
+ 
+ 	if (btf_dedup_hypot_map_add(d, canon_id, cand_id))
+diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
+index bc005ba3ceec3..17c0a46d8cd22 100644
+--- a/tools/lib/bpf/btf.h
++++ b/tools/lib/bpf/btf.h
+@@ -157,7 +157,7 @@ LIBBPF_API int btf__get_map_kv_tids(const struct btf *btf, const char *map_name,
+ 				    __u32 expected_value_size,
+ 				    __u32 *key_type_id, __u32 *value_type_id);
+ 
+-LIBBPF_API struct btf_ext *btf_ext__new(__u8 *data, __u32 size);
++LIBBPF_API struct btf_ext *btf_ext__new(const __u8 *data, __u32 size);
+ LIBBPF_API void btf_ext__free(struct btf_ext *btf_ext);
+ LIBBPF_API const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext,
+ 					     __u32 *size);
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 17db62b5002e8..5cae71600631b 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -2194,7 +2194,7 @@ static int btf_dump_dump_type_data(struct btf_dump *d,
+ 				   __u8 bits_offset,
+ 				   __u8 bit_sz)
+ {
+-	int size, err;
++	int size, err = 0;
+ 
+ 	size = btf_dump_type_data_check_overflow(d, t, id, data, bits_offset);
+ 	if (size < 0)
+diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
+index 9934851ccde76..737e7cbe3e547 100644
+--- a/tools/lib/bpf/gen_loader.c
++++ b/tools/lib/bpf/gen_loader.c
+@@ -371,8 +371,9 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
+ {
+ 	int i;
+ 
+-	if (nr_progs != gen->nr_progs || nr_maps != gen->nr_maps) {
+-		pr_warn("progs/maps mismatch\n");
++	if (nr_progs < gen->nr_progs || nr_maps != gen->nr_maps) {
++		pr_warn("nr_progs %d/%d nr_maps %d/%d mismatch\n",
++			nr_progs, gen->nr_progs, nr_maps, gen->nr_maps);
+ 		gen->error = -EFAULT;
+ 		return gen->error;
+ 	}
+@@ -597,8 +598,9 @@ void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak,
+ static struct ksym_desc *get_ksym_desc(struct bpf_gen *gen, struct ksym_relo_desc *relo)
+ {
+ 	struct ksym_desc *kdesc;
++	int i;
+ 
+-	for (int i = 0; i < gen->nr_ksyms; i++) {
++	for (i = 0; i < gen->nr_ksyms; i++) {
+ 		if (!strcmp(gen->ksyms[i].name, relo->name)) {
+ 			gen->ksyms[i].ref++;
+ 			return &gen->ksyms[i];
+@@ -992,9 +994,11 @@ void bpf_gen__prog_load(struct bpf_gen *gen,
+ 	debug_ret(gen, "prog_load %s insn_cnt %d", attr.prog_name, attr.insn_cnt);
+ 	/* successful or not, close btf module FDs used in extern ksyms and attach_btf_obj_fd */
+ 	cleanup_relos(gen, insns);
+-	if (gen->attach_kind)
++	if (gen->attach_kind) {
+ 		emit_sys_close_blob(gen,
+ 				    attr_field(prog_load_attr, attach_btf_obj_fd));
++		gen->attach_kind = 0;
++	}
+ 	emit_check_err(gen);
+ 	/* remember prog_fd in the stack, if successful */
+ 	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7,
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 7c74342bb6680..c7ba5e6ed9cfe 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -400,6 +400,7 @@ struct bpf_map {
+ 	char *pin_path;
+ 	bool pinned;
+ 	bool reused;
++	bool skipped;
+ 	__u64 map_extra;
+ };
+ 
+@@ -2752,13 +2753,12 @@ static int btf_fixup_datasec(struct bpf_object *obj, struct btf *btf,
+ 
+ 	for (i = 0, vsi = btf_var_secinfos(t); i < vars; i++, vsi++) {
+ 		t_var = btf__type_by_id(btf, vsi->type);
+-		var = btf_var(t_var);
+-
+-		if (!btf_is_var(t_var)) {
++		if (!t_var || !btf_is_var(t_var)) {
+ 			pr_debug("Non-VAR type seen in section %s\n", name);
+ 			return -EINVAL;
+ 		}
+ 
++		var = btf_var(t_var);
+ 		if (var->linkage == BTF_VAR_STATIC)
+ 			continue;
+ 
+@@ -3191,11 +3191,11 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
+ 	Elf_Scn *scn;
+ 	Elf64_Shdr *sh;
+ 
+-	/* ELF section indices are 1-based, so allocate +1 element to keep
+-	 * indexing simple. Also include 0th invalid section into sec_cnt for
+-	 * simpler and more traditional iteration logic.
++	/* ELF section indices are 0-based, but sec #0 is special "invalid"
++	 * section. e_shnum does include sec #0, so e_shnum is the necessary
++	 * size of an array to keep all the sections.
+ 	 */
+-	obj->efile.sec_cnt = 1 + obj->efile.ehdr->e_shnum;
++	obj->efile.sec_cnt = obj->efile.ehdr->e_shnum;
+ 	obj->efile.secs = calloc(obj->efile.sec_cnt, sizeof(*obj->efile.secs));
+ 	if (!obj->efile.secs)
+ 		return -ENOMEM;
+@@ -3555,7 +3555,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
+ 
+ 	scn = elf_sec_by_idx(obj, obj->efile.symbols_shndx);
+ 	sh = elf_sec_hdr(obj, scn);
+-	if (!sh)
++	if (!sh || sh->sh_entsize != sizeof(Elf64_Sym))
+ 		return -LIBBPF_ERRNO__FORMAT;
+ 
+ 	dummy_var_btf_id = add_dummy_ksym_var(obj->btf);
+@@ -4988,6 +4988,26 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 	for (i = 0; i < obj->nr_maps; i++) {
+ 		map = &obj->maps[i];
+ 
++		/* To support old kernels, we skip creating global data maps
++		 * (.rodata, .data, .kconfig, etc); later on, during program
++		 * loading, if we detect that at least one of the to-be-loaded
++		 * programs is referencing any global data map, we'll error
++		 * out with program name and relocation index logged.
++		 * This approach allows to accommodate Clang emitting
++		 * unnecessary .rodata.str1.1 sections for string literals,
++		 * but also it allows to have CO-RE applications that use
++		 * global variables in some of BPF programs, but not others.
++		 * If those global variable-using programs are not loaded at
++		 * runtime due to bpf_program__set_autoload(prog, false),
++		 * bpf_object loading will succeed just fine even on old
++		 * kernels.
++		 */
++		if (bpf_map__is_internal(map) &&
++		    !kernel_supports(obj, FEAT_GLOBAL_DATA)) {
++			map->skipped = true;
++			continue;
++		}
++
+ 		retried = false;
+ retry:
+ 		if (map->pin_path) {
+@@ -5587,6 +5607,13 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
+ 				insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE;
+ 				insn[0].imm = relo->map_idx;
+ 			} else {
++				const struct bpf_map *map = &obj->maps[relo->map_idx];
++
++				if (map->skipped) {
++					pr_warn("prog '%s': relo #%d: kernel doesn't support global data\n",
++						prog->name, i);
++					return -ENOTSUP;
++				}
+ 				insn[0].src_reg = BPF_PSEUDO_MAP_VALUE;
+ 				insn[0].imm = obj->maps[relo->map_idx].fd;
+ 			}
+@@ -6121,6 +6148,8 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
+ 		 */
+ 		if (prog_is_subprog(obj, prog))
+ 			continue;
++		if (!prog->load)
++			continue;
+ 
+ 		err = bpf_object__relocate_calls(obj, prog);
+ 		if (err) {
+@@ -6134,6 +6163,8 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
+ 		prog = &obj->programs[i];
+ 		if (prog_is_subprog(obj, prog))
+ 			continue;
++		if (!prog->load)
++			continue;
+ 		err = bpf_object__relocate_data(obj, prog);
+ 		if (err) {
+ 			pr_warn("prog '%s': failed to relocate data references: %d\n",
+@@ -6915,10 +6946,6 @@ static int bpf_object__sanitize_maps(struct bpf_object *obj)
+ 	bpf_object__for_each_map(m, obj) {
+ 		if (!bpf_map__is_internal(m))
+ 			continue;
+-		if (!kernel_supports(obj, FEAT_GLOBAL_DATA)) {
+-			pr_warn("kernel doesn't support global data\n");
+-			return -ENOTSUP;
+-		}
+ 		if (!kernel_supports(obj, FEAT_ARRAY_MMAP))
+ 			m->def.map_flags ^= BPF_F_MMAPABLE;
+ 	}
+@@ -7707,6 +7734,9 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
+ 		char *pin_path = NULL;
+ 		char buf[PATH_MAX];
+ 
++		if (map->skipped)
++			continue;
++
+ 		if (path) {
+ 			int len;
+ 
+@@ -9028,7 +9058,10 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
+ 		pr_warn("error: inner_map_fd already specified\n");
+ 		return libbpf_err(-EINVAL);
+ 	}
+-	zfree(&map->inner_map);
++	if (map->inner_map) {
++		bpf_map__destroy(map->inner_map);
++		zfree(&map->inner_map);
++	}
+ 	map->inner_map_fd = fd;
+ 	return 0;
+ }
+@@ -9735,7 +9768,7 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
+ 		gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name),
+ 					     func_name, offset);
+ 
+-		legacy_probe = strdup(func_name);
++		legacy_probe = strdup(probe_name);
+ 		if (!legacy_probe)
+ 			return libbpf_err_ptr(-ENOMEM);
+ 
+diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
+index 9de0f299706b1..a28e00ce64d4d 100644
+--- a/tools/lib/bpf/libbpf.h
++++ b/tools/lib/bpf/libbpf.h
+@@ -431,7 +431,6 @@ bpf_program__attach_iter(const struct bpf_program *prog,
+  * one instance. In this case bpf_program__fd(prog) is equal to
+  * bpf_program__nth_fd(prog, 0).
+  */
+-LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_program__insns() for getting bpf_program instructions")
+ struct bpf_prog_prep_result {
+ 	/*
+ 	 * If not NULL, load new instruction array.
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index f677dccdeae44..6923a0ab3b127 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -210,6 +210,7 @@ void bpf_linker__free(struct bpf_linker *linker)
+ 	}
+ 	free(linker->secs);
+ 
++	free(linker->glob_syms);
+ 	free(linker);
+ }
+ 
+@@ -1999,7 +2000,7 @@ add_sym:
+ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *obj)
+ {
+ 	struct src_sec *src_symtab = &obj->secs[obj->symtab_sec_idx];
+-	struct dst_sec *dst_symtab = &linker->secs[linker->symtab_sec_idx];
++	struct dst_sec *dst_symtab;
+ 	int i, err;
+ 
+ 	for (i = 1; i < obj->sec_cnt; i++) {
+@@ -2032,6 +2033,9 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob
+ 			return -1;
+ 		}
+ 
++		/* add_dst_sec() above could have invalidated linker->secs */
++		dst_symtab = &linker->secs[linker->symtab_sec_idx];
++
+ 		/* shdr->sh_link points to SYMTAB */
+ 		dst_sec->shdr->sh_link = linker->symtab_sec_idx;
+ 
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 3df74cf5651af..83c04ebc940c9 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -143,7 +143,10 @@ FEATURE_CHECK_LDFLAGS-libcrypto = -lcrypto
+ ifdef CSINCLUDES
+   LIBOPENCSD_CFLAGS := -I$(CSINCLUDES)
+ endif
+-OPENCSDLIBS := -lopencsd_c_api -lopencsd -lstdc++
++OPENCSDLIBS := -lopencsd_c_api
++ifeq ($(findstring -static,${LDFLAGS}),-static)
++  OPENCSDLIBS += -lopencsd -lstdc++
++endif
+ ifdef CSLIBS
+   LIBOPENCSD_LDFLAGS := -L$(CSLIBS)
+ endif
+diff --git a/tools/perf/tests/shell/stat_all_metricgroups.sh b/tools/perf/tests/shell/stat_all_metricgroups.sh
+index de24d374ce247..cb35e488809ae 100755
+--- a/tools/perf/tests/shell/stat_all_metricgroups.sh
++++ b/tools/perf/tests/shell/stat_all_metricgroups.sh
+@@ -6,7 +6,7 @@ set -e
+ 
+ for m in $(perf list --raw-dump metricgroups); do
+   echo "Testing $m"
+-  perf stat -M "$m" true
++  perf stat -M "$m" -a true
+ done
+ 
+ exit 0
+diff --git a/tools/perf/util/cputopo.c b/tools/perf/util/cputopo.c
+index 51b429c86f980..aad7a9e6e31b3 100644
+--- a/tools/perf/util/cputopo.c
++++ b/tools/perf/util/cputopo.c
+@@ -165,7 +165,8 @@ static bool has_die_topology(void)
+ 	if (uname(&uts) < 0)
+ 		return false;
+ 
+-	if (strncmp(uts.machine, "x86_64", 6))
++	if (strncmp(uts.machine, "x86_64", 6) &&
++	    strncmp(uts.machine, "s390x", 5))
+ 		return false;
+ 
+ 	scnprintf(filename, MAXPATHLEN, DIE_CPUS_FMT,
+diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
+index 2c06abf6dcd26..65e6c22f38e4f 100644
+--- a/tools/perf/util/debug.c
++++ b/tools/perf/util/debug.c
+@@ -179,7 +179,7 @@ static int trace_event_printer(enum binary_printer_ops op,
+ 		break;
+ 	case BINARY_PRINT_CHAR_DATA:
+ 		printed += color_fprintf(fp, color, "%c",
+-			      isprint(ch) ? ch : '.');
++			      isprint(ch) && isascii(ch) ? ch : '.');
+ 		break;
+ 	case BINARY_PRINT_CHAR_PAD:
+ 		printed += color_fprintf(fp, color, " ");
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index ac0127be04593..99dfc82654319 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1064,6 +1064,17 @@ void __weak arch_evsel__fixup_new_cycles(struct perf_event_attr *attr __maybe_un
+ {
+ }
+ 
++static void evsel__set_default_freq_period(struct record_opts *opts,
++					   struct perf_event_attr *attr)
++{
++	if (opts->freq) {
++		attr->freq = 1;
++		attr->sample_freq = opts->freq;
++	} else {
++		attr->sample_period = opts->default_interval;
++	}
++}
++
+ /*
+  * The enable_on_exec/disabled value strategy:
+  *
+@@ -1130,14 +1141,12 @@ void evsel__config(struct evsel *evsel, struct record_opts *opts,
+ 	 * We default some events to have a default interval. But keep
+ 	 * it a weak assumption overridable by the user.
+ 	 */
+-	if (!attr->sample_period) {
+-		if (opts->freq) {
+-			attr->freq		= 1;
+-			attr->sample_freq	= opts->freq;
+-		} else {
+-			attr->sample_period = opts->default_interval;
+-		}
+-	}
++	if ((evsel->is_libpfm_event && !attr->sample_period) ||
++	    (!evsel->is_libpfm_event && (!attr->sample_period ||
++					 opts->user_freq != UINT_MAX ||
++					 opts->user_interval != ULLONG_MAX)))
++		evsel__set_default_freq_period(opts, attr);
++
+ 	/*
+ 	 * If attr->freq was set (here or earlier), ask for period
+ 	 * to be sampled.
+diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
+index fffe02aae3ed1..5c73c0d867b07 100644
+--- a/tools/perf/util/metricgroup.c
++++ b/tools/perf/util/metricgroup.c
+@@ -209,8 +209,8 @@ static struct metric *metric__new(const struct pmu_event *pe,
+ 	m->metric_name = pe->metric_name;
+ 	m->modifier = modifier ? strdup(modifier) : NULL;
+ 	if (modifier && !m->modifier) {
+-		free(m);
+ 		expr__ctx_free(m->pctx);
++		free(m);
+ 		return NULL;
+ 	}
+ 	m->metric_expr = pe->metric_expr;
+@@ -314,7 +314,7 @@ static int setup_metric_events(struct hashmap *ids,
+ 		 */
+ 		metric_id = evsel__metric_id(ev);
+ 		evlist__for_each_entry_continue(metric_evlist, ev) {
+-			if (!strcmp(evsel__metric_id(metric_events[i]), metric_id))
++			if (!strcmp(evsel__metric_id(ev), metric_id))
+ 				ev->metric_leader = metric_events[i];
+ 		}
+ 	}
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index b2a02c9ab8ea9..a834918a0a0d3 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -3083,6 +3083,9 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
+ 	for (j = 0; j < num_matched_functions; j++) {
+ 		sym = syms[j];
+ 
++		if (sym->type != STT_FUNC)
++			continue;
++
+ 		/* There can be duplicated symbols in the map */
+ 		for (i = 0; i < j; i++)
+ 			if (sym->start == syms[i]->start) {
+diff --git a/tools/testing/kunit/kunit_json.py b/tools/testing/kunit/kunit_json.py
+index 746bec72b9ac2..b6e66c5d64d14 100644
+--- a/tools/testing/kunit/kunit_json.py
++++ b/tools/testing/kunit/kunit_json.py
+@@ -30,6 +30,8 @@ def _get_group_json(test: Test, def_config: str,
+ 			test_case = {"name": subtest.name, "status": "FAIL"}
+ 			if subtest.status == TestStatus.SUCCESS:
+ 				test_case["status"] = "PASS"
++			elif subtest.status == TestStatus.SKIPPED:
++				test_case["status"] = "SKIP"
+ 			elif subtest.status == TestStatus.TEST_CRASHED:
+ 				test_case["status"] = "ERROR"
+ 			test_cases.append(test_case)
+diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
+index 9c41267314573..34cb0a12ba180 100755
+--- a/tools/testing/kunit/kunit_tool_test.py
++++ b/tools/testing/kunit/kunit_tool_test.py
+@@ -383,6 +383,12 @@ class KUnitJsonTest(unittest.TestCase):
+ 			{'name': 'example_simple_test', 'status': 'ERROR'},
+ 			result["sub_groups"][1]["test_cases"][0])
+ 
++	def test_skipped_test_json(self):
++		result = self._json_for('test_skip_tests.log')
++		self.assertEqual(
++			{'name': 'example_skip_test', 'status': 'SKIP'},
++			result["sub_groups"][1]["test_cases"][1])
++
+ 	def test_no_tests_json(self):
+ 		result = self._json_for('test_is_test_passed-no_tests_run_with_header.log')
+ 		self.assertEqual(0, len(result['sub_groups']))
+diff --git a/tools/testing/selftests/bpf/btf_helpers.c b/tools/testing/selftests/bpf/btf_helpers.c
+index b5b6b013a245e..3d1a748d09d81 100644
+--- a/tools/testing/selftests/bpf/btf_helpers.c
++++ b/tools/testing/selftests/bpf/btf_helpers.c
+@@ -251,18 +251,23 @@ const char *btf_type_c_dump(const struct btf *btf)
+ 	d = btf_dump__new(btf, NULL, &opts, btf_dump_printf);
+ 	if (libbpf_get_error(d)) {
+ 		fprintf(stderr, "Failed to create btf_dump instance: %ld\n", libbpf_get_error(d));
+-		return NULL;
++		goto err_out;
+ 	}
+ 
+ 	for (i = 1; i < btf__type_cnt(btf); i++) {
+ 		err = btf_dump__dump_type(d, i);
+ 		if (err) {
+ 			fprintf(stderr, "Failed to dump type [%d]: %d\n", i, err);
+-			return NULL;
++			goto err_out;
+ 		}
+ 	}
+ 
++	btf_dump__free(d);
+ 	fflush(buf_file);
+ 	fclose(buf_file);
+ 	return buf;
++err_out:
++	btf_dump__free(d);
++	fclose(buf_file);
++	return NULL;
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+index 9454331aaf85f..ea6823215e9c0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+@@ -1206,13 +1206,14 @@ static void test_task_vma(void)
+ 		goto out;
+ 
+ 	/* Read CMP_BUFFER_SIZE (1kB) from bpf_iter. Read in small chunks
+-	 * to trigger seq_file corner cases. The expected output is much
+-	 * longer than 1kB, so the while loop will terminate.
++	 * to trigger seq_file corner cases.
+ 	 */
+ 	len = 0;
+ 	while (len < CMP_BUFFER_SIZE) {
+ 		err = read_fd_into_buffer(iter_fd, task_vma_output + len,
+ 					  min(read_size, CMP_BUFFER_SIZE - len));
++		if (!err)
++			break;
+ 		if (CHECK(err < 0, "read_iter_fd", "read_iter_fd failed\n"))
+ 			goto out;
+ 		len += err;
+diff --git a/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c b/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
+index 7589c03fd26be..eb2feaac81fe2 100644
+--- a/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
++++ b/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
+@@ -204,8 +204,8 @@ static int pass_ack(struct migrate_reuseport_test_case *test_case)
+ {
+ 	int err;
+ 
+-	err = bpf_link__detach(test_case->link);
+-	if (!ASSERT_OK(err, "bpf_link__detach"))
++	err = bpf_link__destroy(test_case->link);
++	if (!ASSERT_OK(err, "bpf_link__destroy"))
+ 		return -1;
+ 
+ 	test_case->link = NULL;
+diff --git a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
+index c437e6ba8fe20..db4d72563aaeb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
++++ b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
+@@ -111,4 +111,6 @@ void test_skb_ctx(void)
+ 		   "ctx_out_mark",
+ 		   "skb->mark == %u, expected %d\n",
+ 		   skb.mark, 10);
++
++	bpf_object__close(obj);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+index 4b18b73df10b6..c2426df58e172 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+@@ -105,6 +105,13 @@ static int setns_by_fd(int nsfd)
+ 	if (!ASSERT_OK(err, "unshare"))
+ 		return err;
+ 
++	/* Make our /sys mount private, so the following umount won't
++	 * trigger the global umount in case it's shared.
++	 */
++	err = mount("none", "/sys", NULL, MS_PRIVATE, NULL);
++	if (!ASSERT_OK(err, "remount private /sys"))
++		return err;
++
+ 	err = umount2("/sys", MNT_DETACH);
+ 	if (!ASSERT_OK(err, "umount2 /sys"))
+ 		return err;
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
+index 6c7cf8aadc792..621342ec30c48 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.c
++++ b/tools/testing/selftests/bpf/xdpxceiver.c
+@@ -1219,7 +1219,7 @@ static bool hugepages_present(struct ifobject *ifobject)
+ 	void *bufs;
+ 
+ 	bufs = mmap(NULL, mmap_sz, PROT_READ | PROT_WRITE,
+-		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE | MAP_HUGETLB, -1, 0);
++		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
+ 	if (bufs == MAP_FAILED)
+ 		return false;
+ 
+@@ -1366,6 +1366,10 @@ static void run_pkt_test(struct test_spec *test, enum test_mode mode, enum test_
+ 		testapp_invalid_desc(test);
+ 		break;
+ 	case TEST_TYPE_UNALIGNED_INV_DESC:
++		if (!hugepages_present(test->ifobj_tx)) {
++			ksft_test_result_skip("No 2M huge pages present.\n");
++			return;
++		}
+ 		test_spec_set_name(test, "UNALIGNED_INV_DESC");
+ 		test->ifobj_tx->umem->unaligned_mode = true;
+ 		test->ifobj_rx->umem->unaligned_mode = true;
+diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
+index 42be3b9258301..076cf4325f783 100644
+--- a/tools/testing/selftests/clone3/clone3.c
++++ b/tools/testing/selftests/clone3/clone3.c
+@@ -52,6 +52,12 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
+ 		size = sizeof(struct __clone_args);
+ 
+ 	switch (test_mode) {
++	case CLONE3_ARGS_NO_TEST:
++		/*
++		 * Uses default 'flags' and 'SIGCHLD'
++		 * assignment.
++		 */
++		break;
+ 	case CLONE3_ARGS_ALL_0:
+ 		args.flags = 0;
+ 		args.exit_signal = 0;
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc b/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
+index 98166fa3eb91c..34fb89b0c61fa 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
+@@ -1,6 +1,6 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+-# description: Kprobe dynamic event - adding and removing
++# description: Kprobe profile
+ # requires: kprobe_events
+ 
+ ! grep -q 'myevent' kprobe_profile
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index ae0f0f33b2a6e..79a182cfa43ad 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -969,7 +969,7 @@ void __run_test(struct __fixture_metadata *f,
+ 	t->passed = 1;
+ 	t->skip = 0;
+ 	t->trigger = 0;
+-	t->step = 0;
++	t->step = 1;
+ 	t->no_print = 0;
+ 	memset(t->results->reason, 0, sizeof(t->results->reason));
+ 
+diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
+index 3cb5ac5da0875..289d4cc32d2f1 100644
+--- a/tools/testing/selftests/kvm/.gitignore
++++ b/tools/testing/selftests/kvm/.gitignore
+@@ -7,11 +7,11 @@
+ /s390x/memop
+ /s390x/resets
+ /s390x/sync_regs_test
++/x86_64/cpuid_test
+ /x86_64/cr4_cpuid_sync_test
+ /x86_64/debug_regs
+ /x86_64/evmcs_test
+ /x86_64/emulator_error_test
+-/x86_64/get_cpuid_test
+ /x86_64/get_msr_index_features
+ /x86_64/kvm_clock_test
+ /x86_64/kvm_pv_test
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index 17342b575e855..290b1b0552d6e 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -38,11 +38,11 @@ LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x8
+ LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
+ LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
+ 
+-TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
++TEST_GEN_PROGS_x86_64 = x86_64/cpuid_test
++TEST_GEN_PROGS_x86_64 += x86_64/cr4_cpuid_sync_test
+ TEST_GEN_PROGS_x86_64 += x86_64/get_msr_index_features
+ TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
+ TEST_GEN_PROGS_x86_64 += x86_64/emulator_error_test
+-TEST_GEN_PROGS_x86_64 += x86_64/get_cpuid_test
+ TEST_GEN_PROGS_x86_64 += x86_64/hyperv_clock
+ TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
+ TEST_GEN_PROGS_x86_64 += x86_64/hyperv_features
+diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
+index 05e65ca1c30cd..5a3a4809b49ad 100644
+--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
++++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
+@@ -358,6 +358,8 @@ uint64_t kvm_get_feature_msr(uint64_t msr_index);
+ struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
+ 
+ struct kvm_cpuid2 *vcpu_get_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
++int __vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
++		     struct kvm_cpuid2 *cpuid);
+ void vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
+ 		    struct kvm_cpuid2 *cpuid);
+ 
+@@ -401,6 +403,11 @@ uint64_t vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr);
+ void vm_set_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr,
+ 			     uint64_t pte);
+ 
++/*
++ * get_cpuid() - find matching CPUID entry and return pointer to it.
++ */
++struct kvm_cpuid_entry2 *get_cpuid(struct kvm_cpuid2 *cpuid, uint32_t function,
++				   uint32_t index);
+ /*
+  * set_cpuid() - overwrites a matching cpuid entry with the provided value.
+  *		 matches based on ent->function && ent->index. returns true
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+index eef7b34756d5c..6441b03c46a90 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+@@ -847,6 +847,17 @@ kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
+ 	return entry;
+ }
+ 
++
++int __vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
++		     struct kvm_cpuid2 *cpuid)
++{
++	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
++
++	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
++
++	return ioctl(vcpu->fd, KVM_SET_CPUID2, cpuid);
++}
++
+ /*
+  * VM VCPU CPUID Set
+  *
+@@ -864,12 +875,9 @@ kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
+ void vcpu_set_cpuid(struct kvm_vm *vm,
+ 		uint32_t vcpuid, struct kvm_cpuid2 *cpuid)
+ {
+-	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
+ 	int rc;
+ 
+-	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
+-
+-	rc = ioctl(vcpu->fd, KVM_SET_CPUID2, cpuid);
++	rc = __vcpu_set_cpuid(vm, vcpuid, cpuid);
+ 	TEST_ASSERT(rc == 0, "KVM_SET_CPUID2 failed, rc: %i errno: %i",
+ 		    rc, errno);
+ 
+@@ -1337,6 +1345,23 @@ void assert_on_unhandled_exception(struct kvm_vm *vm, uint32_t vcpuid)
+ 	}
+ }
+ 
++struct kvm_cpuid_entry2 *get_cpuid(struct kvm_cpuid2 *cpuid, uint32_t function,
++				   uint32_t index)
++{
++	int i;
++
++	for (i = 0; i < cpuid->nent; i++) {
++		struct kvm_cpuid_entry2 *cur = &cpuid->entries[i];
++
++		if (cur->function == function && cur->index == index)
++			return cur;
++	}
++
++	TEST_FAIL("CPUID function 0x%x index 0x%x not found ", function, index);
++
++	return NULL;
++}
++
+ bool set_cpuid(struct kvm_cpuid2 *cpuid,
+ 	       struct kvm_cpuid_entry2 *ent)
+ {
+diff --git a/tools/testing/selftests/kvm/x86_64/cpuid_test.c b/tools/testing/selftests/kvm/x86_64/cpuid_test.c
+new file mode 100644
+index 0000000000000..16d2465c56342
+--- /dev/null
++++ b/tools/testing/selftests/kvm/x86_64/cpuid_test.c
+@@ -0,0 +1,209 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Copyright (C) 2021, Red Hat Inc.
++ *
++ * Generic tests for KVM CPUID set/get ioctls
++ */
++#include <asm/kvm_para.h>
++#include <linux/kvm_para.h>
++#include <stdint.h>
++
++#include "test_util.h"
++#include "kvm_util.h"
++#include "processor.h"
++
++#define VCPU_ID 0
++
++/* CPUIDs known to differ */
++struct {
++	u32 function;
++	u32 index;
++} mangled_cpuids[] = {
++	/*
++	 * These entries depend on the vCPU's XCR0 register and IA32_XSS MSR,
++	 * which are not controlled for by this test.
++	 */
++	{.function = 0xd, .index = 0},
++	{.function = 0xd, .index = 1},
++};
++
++static void test_guest_cpuids(struct kvm_cpuid2 *guest_cpuid)
++{
++	int i;
++	u32 eax, ebx, ecx, edx;
++
++	for (i = 0; i < guest_cpuid->nent; i++) {
++		eax = guest_cpuid->entries[i].function;
++		ecx = guest_cpuid->entries[i].index;
++
++		cpuid(&eax, &ebx, &ecx, &edx);
++
++		GUEST_ASSERT(eax == guest_cpuid->entries[i].eax &&
++			     ebx == guest_cpuid->entries[i].ebx &&
++			     ecx == guest_cpuid->entries[i].ecx &&
++			     edx == guest_cpuid->entries[i].edx);
++	}
++
++}
++
++static void test_cpuid_40000000(struct kvm_cpuid2 *guest_cpuid)
++{
++	u32 eax = 0x40000000, ebx, ecx = 0, edx;
++
++	cpuid(&eax, &ebx, &ecx, &edx);
++
++	GUEST_ASSERT(eax == 0x40000001);
++}
++
++static void guest_main(struct kvm_cpuid2 *guest_cpuid)
++{
++	GUEST_SYNC(1);
++
++	test_guest_cpuids(guest_cpuid);
++
++	GUEST_SYNC(2);
++
++	test_cpuid_40000000(guest_cpuid);
++
++	GUEST_DONE();
++}
++
++static bool is_cpuid_mangled(struct kvm_cpuid_entry2 *entrie)
++{
++	int i;
++
++	for (i = 0; i < sizeof(mangled_cpuids); i++) {
++		if (mangled_cpuids[i].function == entrie->function &&
++		    mangled_cpuids[i].index == entrie->index)
++			return true;
++	}
++
++	return false;
++}
++
++static void check_cpuid(struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 *entrie)
++{
++	int i;
++
++	for (i = 0; i < cpuid->nent; i++) {
++		if (cpuid->entries[i].function == entrie->function &&
++		    cpuid->entries[i].index == entrie->index) {
++			if (is_cpuid_mangled(entrie))
++				return;
++
++			TEST_ASSERT(cpuid->entries[i].eax == entrie->eax &&
++				    cpuid->entries[i].ebx == entrie->ebx &&
++				    cpuid->entries[i].ecx == entrie->ecx &&
++				    cpuid->entries[i].edx == entrie->edx,
++				    "CPUID 0x%x.%x differ: 0x%x:0x%x:0x%x:0x%x vs 0x%x:0x%x:0x%x:0x%x",
++				    entrie->function, entrie->index,
++				    cpuid->entries[i].eax, cpuid->entries[i].ebx,
++				    cpuid->entries[i].ecx, cpuid->entries[i].edx,
++				    entrie->eax, entrie->ebx, entrie->ecx, entrie->edx);
++			return;
++		}
++	}
++
++	TEST_ASSERT(false, "CPUID 0x%x.%x not found", entrie->function, entrie->index);
++}
++
++static void compare_cpuids(struct kvm_cpuid2 *cpuid1, struct kvm_cpuid2 *cpuid2)
++{
++	int i;
++
++	for (i = 0; i < cpuid1->nent; i++)
++		check_cpuid(cpuid2, &cpuid1->entries[i]);
++
++	for (i = 0; i < cpuid2->nent; i++)
++		check_cpuid(cpuid1, &cpuid2->entries[i]);
++}
++
++static void run_vcpu(struct kvm_vm *vm, uint32_t vcpuid, int stage)
++{
++	struct ucall uc;
++
++	_vcpu_run(vm, vcpuid);
++
++	switch (get_ucall(vm, vcpuid, &uc)) {
++	case UCALL_SYNC:
++		TEST_ASSERT(!strcmp((const char *)uc.args[0], "hello") &&
++			    uc.args[1] == stage + 1,
++			    "Stage %d: Unexpected register values vmexit, got %lx",
++			    stage + 1, (ulong)uc.args[1]);
++		return;
++	case UCALL_DONE:
++		return;
++	case UCALL_ABORT:
++		TEST_ASSERT(false, "%s at %s:%ld\n\tvalues: %#lx, %#lx", (const char *)uc.args[0],
++			    __FILE__, uc.args[1], uc.args[2], uc.args[3]);
++	default:
++		TEST_ASSERT(false, "Unexpected exit: %s",
++			    exit_reason_str(vcpu_state(vm, vcpuid)->exit_reason));
++	}
++}
++
++struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, vm_vaddr_t *p_gva, struct kvm_cpuid2 *cpuid)
++{
++	int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);
++	vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
++	struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);
++
++	memcpy(guest_cpuids, cpuid, size);
++
++	*p_gva = gva;
++	return guest_cpuids;
++}
++
++static void set_cpuid_after_run(struct kvm_vm *vm, struct kvm_cpuid2 *cpuid)
++{
++	struct kvm_cpuid_entry2 *ent;
++	int rc;
++	u32 eax, ebx, x;
++
++	/* Setting unmodified CPUID is allowed */
++	rc = __vcpu_set_cpuid(vm, VCPU_ID, cpuid);
++	TEST_ASSERT(!rc, "Setting unmodified CPUID after KVM_RUN failed: %d", rc);
++
++	/* Changing CPU features is forbidden */
++	ent = get_cpuid(cpuid, 0x7, 0);
++	ebx = ent->ebx;
++	ent->ebx--;
++	rc = __vcpu_set_cpuid(vm, VCPU_ID, cpuid);
++	TEST_ASSERT(rc, "Changing CPU features should fail");
++	ent->ebx = ebx;
++
++	/* Changing MAXPHYADDR is forbidden */
++	ent = get_cpuid(cpuid, 0x80000008, 0);
++	eax = ent->eax;
++	x = eax & 0xff;
++	ent->eax = (eax & ~0xffu) | (x - 1);
++	rc = __vcpu_set_cpuid(vm, VCPU_ID, cpuid);
++	TEST_ASSERT(rc, "Changing MAXPHYADDR should fail");
++	ent->eax = eax;
++}
++
++int main(void)
++{
++	struct kvm_cpuid2 *supp_cpuid, *cpuid2;
++	vm_vaddr_t cpuid_gva;
++	struct kvm_vm *vm;
++	int stage;
++
++	vm = vm_create_default(VCPU_ID, 0, guest_main);
++
++	supp_cpuid = kvm_get_supported_cpuid();
++	cpuid2 = vcpu_get_cpuid(vm, VCPU_ID);
++
++	compare_cpuids(supp_cpuid, cpuid2);
++
++	vcpu_alloc_cpuid(vm, &cpuid_gva, cpuid2);
++
++	vcpu_args_set(vm, VCPU_ID, 1, cpuid_gva);
++
++	for (stage = 0; stage < 3; stage++)
++		run_vcpu(vm, VCPU_ID, stage);
++
++	set_cpuid_after_run(vm, cpuid2);
++
++	kvm_vm_free(vm);
++}
+diff --git a/tools/testing/selftests/kvm/x86_64/get_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/get_cpuid_test.c
+deleted file mode 100644
+index a711f83749eab..0000000000000
+--- a/tools/testing/selftests/kvm/x86_64/get_cpuid_test.c
++++ /dev/null
+@@ -1,179 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Copyright (C) 2021, Red Hat Inc.
+- *
+- * Generic tests for KVM CPUID set/get ioctls
+- */
+-#include <asm/kvm_para.h>
+-#include <linux/kvm_para.h>
+-#include <stdint.h>
+-
+-#include "test_util.h"
+-#include "kvm_util.h"
+-#include "processor.h"
+-
+-#define VCPU_ID 0
+-
+-/* CPUIDs known to differ */
+-struct {
+-	u32 function;
+-	u32 index;
+-} mangled_cpuids[] = {
+-	/*
+-	 * These entries depend on the vCPU's XCR0 register and IA32_XSS MSR,
+-	 * which are not controlled for by this test.
+-	 */
+-	{.function = 0xd, .index = 0},
+-	{.function = 0xd, .index = 1},
+-};
+-
+-static void test_guest_cpuids(struct kvm_cpuid2 *guest_cpuid)
+-{
+-	int i;
+-	u32 eax, ebx, ecx, edx;
+-
+-	for (i = 0; i < guest_cpuid->nent; i++) {
+-		eax = guest_cpuid->entries[i].function;
+-		ecx = guest_cpuid->entries[i].index;
+-
+-		cpuid(&eax, &ebx, &ecx, &edx);
+-
+-		GUEST_ASSERT(eax == guest_cpuid->entries[i].eax &&
+-			     ebx == guest_cpuid->entries[i].ebx &&
+-			     ecx == guest_cpuid->entries[i].ecx &&
+-			     edx == guest_cpuid->entries[i].edx);
+-	}
+-
+-}
+-
+-static void test_cpuid_40000000(struct kvm_cpuid2 *guest_cpuid)
+-{
+-	u32 eax = 0x40000000, ebx, ecx = 0, edx;
+-
+-	cpuid(&eax, &ebx, &ecx, &edx);
+-
+-	GUEST_ASSERT(eax == 0x40000001);
+-}
+-
+-static void guest_main(struct kvm_cpuid2 *guest_cpuid)
+-{
+-	GUEST_SYNC(1);
+-
+-	test_guest_cpuids(guest_cpuid);
+-
+-	GUEST_SYNC(2);
+-
+-	test_cpuid_40000000(guest_cpuid);
+-
+-	GUEST_DONE();
+-}
+-
+-static bool is_cpuid_mangled(struct kvm_cpuid_entry2 *entrie)
+-{
+-	int i;
+-
+-	for (i = 0; i < sizeof(mangled_cpuids); i++) {
+-		if (mangled_cpuids[i].function == entrie->function &&
+-		    mangled_cpuids[i].index == entrie->index)
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+-static void check_cpuid(struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 *entrie)
+-{
+-	int i;
+-
+-	for (i = 0; i < cpuid->nent; i++) {
+-		if (cpuid->entries[i].function == entrie->function &&
+-		    cpuid->entries[i].index == entrie->index) {
+-			if (is_cpuid_mangled(entrie))
+-				return;
+-
+-			TEST_ASSERT(cpuid->entries[i].eax == entrie->eax &&
+-				    cpuid->entries[i].ebx == entrie->ebx &&
+-				    cpuid->entries[i].ecx == entrie->ecx &&
+-				    cpuid->entries[i].edx == entrie->edx,
+-				    "CPUID 0x%x.%x differ: 0x%x:0x%x:0x%x:0x%x vs 0x%x:0x%x:0x%x:0x%x",
+-				    entrie->function, entrie->index,
+-				    cpuid->entries[i].eax, cpuid->entries[i].ebx,
+-				    cpuid->entries[i].ecx, cpuid->entries[i].edx,
+-				    entrie->eax, entrie->ebx, entrie->ecx, entrie->edx);
+-			return;
+-		}
+-	}
+-
+-	TEST_ASSERT(false, "CPUID 0x%x.%x not found", entrie->function, entrie->index);
+-}
+-
+-static void compare_cpuids(struct kvm_cpuid2 *cpuid1, struct kvm_cpuid2 *cpuid2)
+-{
+-	int i;
+-
+-	for (i = 0; i < cpuid1->nent; i++)
+-		check_cpuid(cpuid2, &cpuid1->entries[i]);
+-
+-	for (i = 0; i < cpuid2->nent; i++)
+-		check_cpuid(cpuid1, &cpuid2->entries[i]);
+-}
+-
+-static void run_vcpu(struct kvm_vm *vm, uint32_t vcpuid, int stage)
+-{
+-	struct ucall uc;
+-
+-	_vcpu_run(vm, vcpuid);
+-
+-	switch (get_ucall(vm, vcpuid, &uc)) {
+-	case UCALL_SYNC:
+-		TEST_ASSERT(!strcmp((const char *)uc.args[0], "hello") &&
+-			    uc.args[1] == stage + 1,
+-			    "Stage %d: Unexpected register values vmexit, got %lx",
+-			    stage + 1, (ulong)uc.args[1]);
+-		return;
+-	case UCALL_DONE:
+-		return;
+-	case UCALL_ABORT:
+-		TEST_ASSERT(false, "%s at %s:%ld\n\tvalues: %#lx, %#lx", (const char *)uc.args[0],
+-			    __FILE__, uc.args[1], uc.args[2], uc.args[3]);
+-	default:
+-		TEST_ASSERT(false, "Unexpected exit: %s",
+-			    exit_reason_str(vcpu_state(vm, vcpuid)->exit_reason));
+-	}
+-}
+-
+-struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, vm_vaddr_t *p_gva, struct kvm_cpuid2 *cpuid)
+-{
+-	int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);
+-	vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
+-	struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);
+-
+-	memcpy(guest_cpuids, cpuid, size);
+-
+-	*p_gva = gva;
+-	return guest_cpuids;
+-}
+-
+-int main(void)
+-{
+-	struct kvm_cpuid2 *supp_cpuid, *cpuid2;
+-	vm_vaddr_t cpuid_gva;
+-	struct kvm_vm *vm;
+-	int stage;
+-
+-	vm = vm_create_default(VCPU_ID, 0, guest_main);
+-
+-	supp_cpuid = kvm_get_supported_cpuid();
+-	cpuid2 = vcpu_get_cpuid(vm, VCPU_ID);
+-
+-	compare_cpuids(supp_cpuid, cpuid2);
+-
+-	vcpu_alloc_cpuid(vm, &cpuid_gva, cpuid2);
+-
+-	vcpu_args_set(vm, VCPU_ID, 1, cpuid_gva);
+-
+-	for (stage = 0; stage < 3; stage++)
+-		run_vcpu(vm, VCPU_ID, stage);
+-
+-	kvm_vm_free(vm);
+-}
+diff --git a/tools/testing/selftests/powerpc/security/spectre_v2.c b/tools/testing/selftests/powerpc/security/spectre_v2.c
+index adc2b7294e5fd..83647b8277e7d 100644
+--- a/tools/testing/selftests/powerpc/security/spectre_v2.c
++++ b/tools/testing/selftests/powerpc/security/spectre_v2.c
+@@ -193,7 +193,7 @@ int spectre_v2_test(void)
+ 			 * We are not vulnerable and reporting otherwise, so
+ 			 * missing such a mismatch is safe.
+ 			 */
+-			if (state == VULNERABLE)
++			if (miss_percent > 95)
+ 				return 4;
+ 
+ 			return 1;
+diff --git a/tools/testing/selftests/powerpc/signal/.gitignore b/tools/testing/selftests/powerpc/signal/.gitignore
+index ce3375cd8e73e..8f6c816099a48 100644
+--- a/tools/testing/selftests/powerpc/signal/.gitignore
++++ b/tools/testing/selftests/powerpc/signal/.gitignore
+@@ -4,3 +4,4 @@ signal_tm
+ sigfuz
+ sigreturn_vdso
+ sig_sc_double_restart
++sigreturn_kernel
+diff --git a/tools/testing/selftests/powerpc/signal/Makefile b/tools/testing/selftests/powerpc/signal/Makefile
+index d6ae54663aed7..84e201572466d 100644
+--- a/tools/testing/selftests/powerpc/signal/Makefile
++++ b/tools/testing/selftests/powerpc/signal/Makefile
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ TEST_GEN_PROGS := signal signal_tm sigfuz sigreturn_vdso sig_sc_double_restart
++TEST_GEN_PROGS += sigreturn_kernel
+ 
+ CFLAGS += -maltivec
+ $(OUTPUT)/signal_tm: CFLAGS += -mhtm
+diff --git a/tools/testing/selftests/powerpc/signal/sigreturn_kernel.c b/tools/testing/selftests/powerpc/signal/sigreturn_kernel.c
+new file mode 100644
+index 0000000000000..0a1b6e591eeed
+--- /dev/null
++++ b/tools/testing/selftests/powerpc/signal/sigreturn_kernel.c
+@@ -0,0 +1,132 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Test that we can't sigreturn to kernel addresses, or to kernel mode.
++ */
++
++#define _GNU_SOURCE
++
++#include <stdio.h>
++#include <signal.h>
++#include <stdlib.h>
++#include <sys/types.h>
++#include <sys/wait.h>
++#include <unistd.h>
++
++#include "utils.h"
++
++#define MSR_PR (1ul << 14)
++
++static volatile unsigned long long sigreturn_addr;
++static volatile unsigned long long sigreturn_msr_mask;
++
++static void sigusr1_handler(int signo, siginfo_t *si, void *uc_ptr)
++{
++	ucontext_t *uc = (ucontext_t *)uc_ptr;
++
++	if (sigreturn_addr)
++		UCONTEXT_NIA(uc) = sigreturn_addr;
++
++	if (sigreturn_msr_mask)
++		UCONTEXT_MSR(uc) &= sigreturn_msr_mask;
++}
++
++static pid_t fork_child(void)
++{
++	pid_t pid;
++
++	pid = fork();
++	if (pid == 0) {
++		raise(SIGUSR1);
++		exit(0);
++	}
++
++	return pid;
++}
++
++static int expect_segv(pid_t pid)
++{
++	int child_ret;
++
++	waitpid(pid, &child_ret, 0);
++	FAIL_IF(WIFEXITED(child_ret));
++	FAIL_IF(!WIFSIGNALED(child_ret));
++	FAIL_IF(WTERMSIG(child_ret) != 11);
++
++	return 0;
++}
++
++int test_sigreturn_kernel(void)
++{
++	struct sigaction act;
++	int child_ret, i;
++	pid_t pid;
++
++	act.sa_sigaction = sigusr1_handler;
++	act.sa_flags = SA_SIGINFO;
++	sigemptyset(&act.sa_mask);
++
++	FAIL_IF(sigaction(SIGUSR1, &act, NULL));
++
++	for (i = 0; i < 2; i++) {
++		// Return to kernel
++		sigreturn_addr = 0xcull << 60;
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Return to kernel virtual
++		sigreturn_addr = 0xc008ull << 48;
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Return out of range
++		sigreturn_addr = 0xc010ull << 48;
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Return to no-man's land, just below PAGE_OFFSET
++		sigreturn_addr = (0xcull << 60) - (64 * 1024);
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Return to no-man's land, above TASK_SIZE_4PB
++		sigreturn_addr = 0x1ull << 52;
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Return to 0xd space
++		sigreturn_addr = 0xdull << 60;
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Return to 0xe space
++		sigreturn_addr = 0xeull << 60;
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Return to 0xf space
++		sigreturn_addr = 0xfull << 60;
++		pid = fork_child();
++		expect_segv(pid);
++
++		// Attempt to set PR=0 for 2nd loop (should be blocked by kernel)
++		sigreturn_msr_mask = ~MSR_PR;
++	}
++
++	printf("All children killed as expected\n");
++
++	// Don't change address, just MSR, should return to user as normal
++	sigreturn_addr = 0;
++	sigreturn_msr_mask = ~MSR_PR;
++	pid = fork_child();
++	waitpid(pid, &child_ret, 0);
++	FAIL_IF(!WIFEXITED(child_ret));
++	FAIL_IF(WIFSIGNALED(child_ret));
++	FAIL_IF(WEXITSTATUS(child_ret) != 0);
++
++	return 0;
++}
++
++int main(void)
++{
++	return test_harness(test_sigreturn_kernel, "sigreturn_kernel");
++}
+diff --git a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+index fe8fcfb334e06..a5cb4b09a46c4 100644
+--- a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+@@ -24,19 +24,23 @@ if [[ "$1" == "-cgroup-v2" ]]; then
+   reservation_usage_file=rsvd.current
+ fi
+ 
+-cgroup_path=/dev/cgroup/memory
+-if [[ ! -e $cgroup_path ]]; then
+-  mkdir -p $cgroup_path
+-  if [[ $cgroup2 ]]; then
++if [[ $cgroup2 ]]; then
++  cgroup_path=$(mount -t cgroup2 | head -1 | awk -e '{print $3}')
++  if [[ -z "$cgroup_path" ]]; then
++    cgroup_path=/dev/cgroup/memory
+     mount -t cgroup2 none $cgroup_path
+-  else
++    do_umount=1
++  fi
++  echo "+hugetlb" >$cgroup_path/cgroup.subtree_control
++else
++  cgroup_path=$(mount -t cgroup | grep ",hugetlb" | awk -e '{print $3}')
++  if [[ -z "$cgroup_path" ]]; then
++    cgroup_path=/dev/cgroup/memory
+     mount -t cgroup memory,hugetlb $cgroup_path
++    do_umount=1
+   fi
+ fi
+-
+-if [[ $cgroup2 ]]; then
+-  echo "+hugetlb" >/dev/cgroup/memory/cgroup.subtree_control
+-fi
++export cgroup_path
+ 
+ function cleanup() {
+   if [[ $cgroup2 ]]; then
+@@ -108,7 +112,7 @@ function setup_cgroup() {
+ 
+ function wait_for_hugetlb_memory_to_get_depleted() {
+   local cgroup="$1"
+-  local path="/dev/cgroup/memory/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
++  local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
+   # Wait for hugetlbfs memory to get depleted.
+   while [ $(cat $path) != 0 ]; do
+     echo Waiting for hugetlb memory to get depleted.
+@@ -121,7 +125,7 @@ function wait_for_hugetlb_memory_to_get_reserved() {
+   local cgroup="$1"
+   local size="$2"
+ 
+-  local path="/dev/cgroup/memory/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
++  local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
+   # Wait for hugetlbfs memory to get written.
+   while [ $(cat $path) != $size ]; do
+     echo Waiting for hugetlb memory reservation to reach size $size.
+@@ -134,7 +138,7 @@ function wait_for_hugetlb_memory_to_get_written() {
+   local cgroup="$1"
+   local size="$2"
+ 
+-  local path="/dev/cgroup/memory/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
++  local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
+   # Wait for hugetlbfs memory to get written.
+   while [ $(cat $path) != $size ]; do
+     echo Waiting for hugetlb memory to reach size $size.
+@@ -574,5 +578,7 @@ for populate in "" "-o"; do
+   done     # populate
+ done       # method
+ 
+-umount $cgroup_path
+-rmdir $cgroup_path
++if [[ $do_umount ]]; then
++  umount $cgroup_path
++  rmdir $cgroup_path
++fi
+diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
+index 864f126ffd78f..203323967b507 100644
+--- a/tools/testing/selftests/vm/hmm-tests.c
++++ b/tools/testing/selftests/vm/hmm-tests.c
+@@ -1248,6 +1248,48 @@ TEST_F(hmm, anon_teardown)
+ 	}
+ }
+ 
++/*
++ * Test memory snapshot without faulting in pages accessed by the device.
++ */
++TEST_F(hmm, mixedmap)
++{
++	struct hmm_buffer *buffer;
++	unsigned long npages;
++	unsigned long size;
++	unsigned char *m;
++	int ret;
++
++	npages = 1;
++	size = npages << self->page_shift;
++
++	buffer = malloc(sizeof(*buffer));
++	ASSERT_NE(buffer, NULL);
++
++	buffer->fd = -1;
++	buffer->size = size;
++	buffer->mirror = malloc(npages);
++	ASSERT_NE(buffer->mirror, NULL);
++
++
++	/* Reserve a range of addresses. */
++	buffer->ptr = mmap(NULL, size,
++			   PROT_READ | PROT_WRITE,
++			   MAP_PRIVATE,
++			   self->fd, 0);
++	ASSERT_NE(buffer->ptr, MAP_FAILED);
++
++	/* Simulate a device snapshotting CPU pagetables. */
++	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_SNAPSHOT, buffer, npages);
++	ASSERT_EQ(ret, 0);
++	ASSERT_EQ(buffer->cpages, npages);
++
++	/* Check what the device saw. */
++	m = buffer->mirror;
++	ASSERT_EQ(m[0], HMM_DMIRROR_PROT_READ);
++
++	hmm_buffer_free(buffer);
++}
++
+ /*
+  * Test memory snapshot without faulting in pages accessed by the device.
+  */
+diff --git a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
+index 4a9a3afe9fd4d..bf2d2a684edfd 100644
+--- a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
++++ b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
+@@ -18,19 +18,24 @@ if [[ "$1" == "-cgroup-v2" ]]; then
+   usage_file=current
+ fi
+ 
+-CGROUP_ROOT='/dev/cgroup/memory'
+-MNT='/mnt/huge/'
+ 
+-if [[ ! -e $CGROUP_ROOT ]]; then
+-  mkdir -p $CGROUP_ROOT
+-  if [[ $cgroup2 ]]; then
++if [[ $cgroup2 ]]; then
++  CGROUP_ROOT=$(mount -t cgroup2 | head -1 | awk -e '{print $3}')
++  if [[ -z "$CGROUP_ROOT" ]]; then
++    CGROUP_ROOT=/dev/cgroup/memory
+     mount -t cgroup2 none $CGROUP_ROOT
+-    sleep 1
+-    echo "+hugetlb +memory" >$CGROUP_ROOT/cgroup.subtree_control
+-  else
++    do_umount=1
++  fi
++  echo "+hugetlb +memory" >$CGROUP_ROOT/cgroup.subtree_control
++else
++  CGROUP_ROOT=$(mount -t cgroup | grep ",hugetlb" | awk -e '{print $3}')
++  if [[ -z "$CGROUP_ROOT" ]]; then
++    CGROUP_ROOT=/dev/cgroup/memory
+     mount -t cgroup memory,hugetlb $CGROUP_ROOT
++    do_umount=1
+   fi
+ fi
++MNT='/mnt/huge/'
+ 
+ function get_machine_hugepage_size() {
+   hpz=$(grep -i hugepagesize /proc/meminfo)
+diff --git a/tools/testing/selftests/vm/write_hugetlb_memory.sh b/tools/testing/selftests/vm/write_hugetlb_memory.sh
+index d3d0d108924d4..70a02301f4c27 100644
+--- a/tools/testing/selftests/vm/write_hugetlb_memory.sh
++++ b/tools/testing/selftests/vm/write_hugetlb_memory.sh
+@@ -14,7 +14,7 @@ want_sleep=$8
+ reserve=$9
+ 
+ echo "Putting task in cgroup '$cgroup'"
+-echo $$ > /dev/cgroup/memory/"$cgroup"/cgroup.procs
++echo $$ > ${cgroup_path:-/dev/cgroup/memory}/"$cgroup"/cgroup.procs
+ 
+ echo "Method is $method"
+ 
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 72c4e6b393896..5bd62342c482b 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3153,6 +3153,7 @@ update_halt_poll_stats(struct kvm_vcpu *vcpu, u64 poll_ns, bool waited)
+  */
+ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
+ {
++	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
+ 	ktime_t start, cur, poll_end;
+ 	bool waited = false;
+ 	u64 block_ns;
+@@ -3160,7 +3161,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
+ 	kvm_arch_vcpu_blocking(vcpu);
+ 
+ 	start = cur = poll_end = ktime_get();
+-	if (vcpu->halt_poll_ns && !kvm_arch_no_poll(vcpu)) {
++	if (vcpu->halt_poll_ns && halt_poll_allowed) {
+ 		ktime_t stop = ktime_add_ns(ktime_get(), vcpu->halt_poll_ns);
+ 
+ 		++vcpu->stat.generic.halt_attempted_poll;
+@@ -3215,7 +3216,7 @@ out:
+ 	update_halt_poll_stats(
+ 		vcpu, ktime_to_ns(ktime_sub(poll_end, start)), waited);
+ 
+-	if (!kvm_arch_no_poll(vcpu)) {
++	if (halt_poll_allowed) {
+ 		if (!vcpu_valid_wakeup(vcpu)) {
+ 			shrink_halt_poll_ns(vcpu);
+ 		} else if (vcpu->kvm->max_halt_poll_ns) {



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-01-29 17:40 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-01-29 17:40 UTC (permalink / raw
  To: gentoo-commits

commit:     9a6d05b7a247feb26f16e24eb6a180e52f1bb30e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 29 17:40:11 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan 29 17:40:11 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9a6d05b7

Linux patch 5.16.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1003_linux-5.16.4.patch | 1035 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1039 insertions(+)

diff --git a/0000_README b/0000_README
index 1eb44c73..ff7d994b 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-5.16.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.3
 
+Patch:  1003_linux-5.16.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.16.4.patch b/1003_linux-5.16.4.patch
new file mode 100644
index 00000000..a5939fae
--- /dev/null
+++ b/1003_linux-5.16.4.patch
@@ -0,0 +1,1035 @@
+diff --git a/Makefile b/Makefile
+index acb8ffee65dc5..36ff4ed4763b3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
+index 8b300dd28defd..72b0e71cc3de8 100644
+--- a/arch/arm64/include/asm/extable.h
++++ b/arch/arm64/include/asm/extable.h
+@@ -33,15 +33,6 @@ do {							\
+ 	(b)->data = (tmp).data;				\
+ } while (0)
+ 
+-static inline bool in_bpf_jit(struct pt_regs *regs)
+-{
+-	if (!IS_ENABLED(CONFIG_BPF_JIT))
+-		return false;
+-
+-	return regs->pc >= BPF_JIT_REGION_START &&
+-	       regs->pc < BPF_JIT_REGION_END;
+-}
+-
+ #ifdef CONFIG_BPF_JIT
+ bool ex_handler_bpf(const struct exception_table_entry *ex,
+ 		    struct pt_regs *regs);
+diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
+index 1b9a1e2426127..0af70d9abede3 100644
+--- a/arch/arm64/include/asm/memory.h
++++ b/arch/arm64/include/asm/memory.h
+@@ -44,11 +44,8 @@
+ #define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
+ #define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
+ #define KIMAGE_VADDR		(MODULES_END)
+-#define BPF_JIT_REGION_START	(_PAGE_END(VA_BITS_MIN))
+-#define BPF_JIT_REGION_SIZE	(SZ_128M)
+-#define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
+ #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
+-#define MODULES_VADDR		(BPF_JIT_REGION_END)
++#define MODULES_VADDR		(_PAGE_END(VA_BITS_MIN))
+ #define MODULES_VSIZE		(SZ_128M)
+ #define VMEMMAP_START		(-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
+ #define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 7b21213a570fc..e8986e6067a91 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -994,7 +994,7 @@ static struct break_hook bug_break_hook = {
+ static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
+ {
+ 	pr_err("%s generated an invalid instruction at %pS!\n",
+-		in_bpf_jit(regs) ? "BPF JIT" : "Kernel text patching",
++		"Kernel text patching",
+ 		(void *)instruction_pointer(regs));
+ 
+ 	/* We cannot handle this */
+diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
+index 1c403536c9bb0..9bc4066c5bf33 100644
+--- a/arch/arm64/mm/ptdump.c
++++ b/arch/arm64/mm/ptdump.c
+@@ -41,8 +41,6 @@ static struct addr_marker address_markers[] = {
+ 	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
+ 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
+ #endif
+-	{ BPF_JIT_REGION_START,		"BPF start" },
+-	{ BPF_JIT_REGION_END,		"BPF end" },
+ 	{ MODULES_VADDR,		"Modules start" },
+ 	{ MODULES_END,			"Modules end" },
+ 	{ VMALLOC_START,		"vmalloc() area" },
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 1090a957b3abc..71ef9dcd9b578 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1145,15 +1145,12 @@ out:
+ 
+ u64 bpf_jit_alloc_exec_limit(void)
+ {
+-	return BPF_JIT_REGION_SIZE;
++	return VMALLOC_END - VMALLOC_START;
+ }
+ 
+ void *bpf_jit_alloc_exec(unsigned long size)
+ {
+-	return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
+-				    BPF_JIT_REGION_END, GFP_KERNEL,
+-				    PAGE_KERNEL, 0, NUMA_NO_NODE,
+-				    __builtin_return_address(0));
++	return vmalloc(size);
+ }
+ 
+ void bpf_jit_free_exec(void *addr)
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+index 8c2b77eb94593..162ae71861247 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+@@ -119,6 +119,12 @@ int dcn31_smu_send_msg_with_param(
+ 
+ 	result = dcn31_smu_wait_for_response(clk_mgr, 10, 200000);
+ 
++	if (result == VBIOSSMC_Result_Failed) {
++		ASSERT(0);
++		REG_WRITE(MP1_SMN_C2PMSG_91, VBIOSSMC_Result_OK);
++		return -1;
++	}
++
+ 	if (IS_SMU_TIMEOUT(result)) {
+ 		ASSERT(0);
+ 		dm_helpers_smu_timeout(CTX, msg_id, param, 10 * 200000);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+index da85169006d4f..a0aa6dbe120e2 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+@@ -305,6 +305,7 @@ struct drm_i915_gem_object {
+ #define I915_BO_READONLY          BIT(6)
+ #define I915_TILING_QUIRK_BIT     7 /* unknown swizzling; do not release! */
+ #define I915_BO_PROTECTED         BIT(8)
++#define I915_BO_WAS_BOUND_BIT     9
+ 	/**
+ 	 * @mem_flags - Mutable placement-related flags
+ 	 *
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+index 1d3f40abd0258..9053cea3395a6 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+@@ -10,6 +10,8 @@
+ #include "i915_gem_lmem.h"
+ #include "i915_gem_mman.h"
+ 
++#include "gt/intel_gt.h"
++
+ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
+ 				 struct sg_table *pages,
+ 				 unsigned int sg_page_sizes)
+@@ -217,6 +219,14 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
+ 	__i915_gem_object_reset_page_iter(obj);
+ 	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+ 
++	if (test_and_clear_bit(I915_BO_WAS_BOUND_BIT, &obj->flags)) {
++		struct drm_i915_private *i915 = to_i915(obj->base.dev);
++		intel_wakeref_t wakeref;
++
++		with_intel_runtime_pm_if_active(&i915->runtime_pm, wakeref)
++			intel_gt_invalidate_tlbs(&i915->gt);
++	}
++
+ 	return pages;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index 1cb1948ac9594..7df7bbf5845ee 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -30,6 +30,8 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
+ 
+ 	spin_lock_init(&gt->irq_lock);
+ 
++	mutex_init(&gt->tlb_invalidate_lock);
++
+ 	INIT_LIST_HEAD(&gt->closed_vma);
+ 	spin_lock_init(&gt->closed_lock);
+ 
+@@ -907,3 +909,103 @@ void intel_gt_info_print(const struct intel_gt_info *info,
+ 
+ 	intel_sseu_dump(&info->sseu, p);
+ }
++
++struct reg_and_bit {
++	i915_reg_t reg;
++	u32 bit;
++};
++
++static struct reg_and_bit
++get_reg_and_bit(const struct intel_engine_cs *engine, const bool gen8,
++		const i915_reg_t *regs, const unsigned int num)
++{
++	const unsigned int class = engine->class;
++	struct reg_and_bit rb = { };
++
++	if (drm_WARN_ON_ONCE(&engine->i915->drm,
++			     class >= num || !regs[class].reg))
++		return rb;
++
++	rb.reg = regs[class];
++	if (gen8 && class == VIDEO_DECODE_CLASS)
++		rb.reg.reg += 4 * engine->instance; /* GEN8_M2TCR */
++	else
++		rb.bit = engine->instance;
++
++	rb.bit = BIT(rb.bit);
++
++	return rb;
++}
++
++void intel_gt_invalidate_tlbs(struct intel_gt *gt)
++{
++	static const i915_reg_t gen8_regs[] = {
++		[RENDER_CLASS]			= GEN8_RTCR,
++		[VIDEO_DECODE_CLASS]		= GEN8_M1TCR, /* , GEN8_M2TCR */
++		[VIDEO_ENHANCEMENT_CLASS]	= GEN8_VTCR,
++		[COPY_ENGINE_CLASS]		= GEN8_BTCR,
++	};
++	static const i915_reg_t gen12_regs[] = {
++		[RENDER_CLASS]			= GEN12_GFX_TLB_INV_CR,
++		[VIDEO_DECODE_CLASS]		= GEN12_VD_TLB_INV_CR,
++		[VIDEO_ENHANCEMENT_CLASS]	= GEN12_VE_TLB_INV_CR,
++		[COPY_ENGINE_CLASS]		= GEN12_BLT_TLB_INV_CR,
++	};
++	struct drm_i915_private *i915 = gt->i915;
++	struct intel_uncore *uncore = gt->uncore;
++	struct intel_engine_cs *engine;
++	enum intel_engine_id id;
++	const i915_reg_t *regs;
++	unsigned int num = 0;
++
++	if (I915_SELFTEST_ONLY(gt->awake == -ENODEV))
++		return;
++
++	if (GRAPHICS_VER(i915) == 12) {
++		regs = gen12_regs;
++		num = ARRAY_SIZE(gen12_regs);
++	} else if (GRAPHICS_VER(i915) >= 8 && GRAPHICS_VER(i915) <= 11) {
++		regs = gen8_regs;
++		num = ARRAY_SIZE(gen8_regs);
++	} else if (GRAPHICS_VER(i915) < 8) {
++		return;
++	}
++
++	if (drm_WARN_ONCE(&i915->drm, !num,
++			  "Platform does not implement TLB invalidation!"))
++		return;
++
++	GEM_TRACE("\n");
++
++	assert_rpm_wakelock_held(&i915->runtime_pm);
++
++	mutex_lock(&gt->tlb_invalidate_lock);
++	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
++
++	for_each_engine(engine, gt, id) {
++		/*
++		 * HW architecture suggest typical invalidation time at 40us,
++		 * with pessimistic cases up to 100us and a recommendation to
++		 * cap at 1ms. We go a bit higher just in case.
++		 */
++		const unsigned int timeout_us = 100;
++		const unsigned int timeout_ms = 4;
++		struct reg_and_bit rb;
++
++		rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
++		if (!i915_mmio_reg_offset(rb.reg))
++			continue;
++
++		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
++		if (__intel_wait_for_register_fw(uncore,
++						 rb.reg, rb.bit, 0,
++						 timeout_us, timeout_ms,
++						 NULL))
++			drm_err_ratelimited(&gt->i915->drm,
++					    "%s TLB invalidation did not complete in %ums!\n",
++					    engine->name, timeout_ms);
++	}
++
++	intel_uncore_forcewake_put_delayed(uncore, FORCEWAKE_ALL);
++	mutex_unlock(&gt->tlb_invalidate_lock);
++}
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
+index 74e771871a9bd..c0169d6017c2d 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt.h
+@@ -90,4 +90,6 @@ void intel_gt_info_print(const struct intel_gt_info *info,
+ 
+ void intel_gt_watchdog_work(struct work_struct *work);
+ 
++void intel_gt_invalidate_tlbs(struct intel_gt *gt);
++
+ #endif /* __INTEL_GT_H__ */
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+index 14216cc471b1b..f206877964908 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+@@ -73,6 +73,8 @@ struct intel_gt {
+ 
+ 	struct intel_uc uc;
+ 
++	struct mutex tlb_invalidate_lock;
++
+ 	struct i915_wa_list wa_list;
+ 
+ 	struct intel_gt_timelines {
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index bcee121bec5ad..14ce8809efdd5 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -2697,6 +2697,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ #define   GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING	(1 << 28)
+ #define   GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT	(1 << 24)
+ 
++#define GEN8_RTCR	_MMIO(0x4260)
++#define GEN8_M1TCR	_MMIO(0x4264)
++#define GEN8_M2TCR	_MMIO(0x4268)
++#define GEN8_BTCR	_MMIO(0x426c)
++#define GEN8_VTCR	_MMIO(0x4270)
++
+ #if 0
+ #define PRB0_TAIL	_MMIO(0x2030)
+ #define PRB0_HEAD	_MMIO(0x2034)
+@@ -2792,6 +2798,11 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ #define   FAULT_VA_HIGH_BITS		(0xf << 0)
+ #define   FAULT_GTT_SEL			(1 << 4)
+ 
++#define GEN12_GFX_TLB_INV_CR	_MMIO(0xced8)
++#define GEN12_VD_TLB_INV_CR	_MMIO(0xcedc)
++#define GEN12_VE_TLB_INV_CR	_MMIO(0xcee0)
++#define GEN12_BLT_TLB_INV_CR	_MMIO(0xcee4)
++
+ #define GEN12_AUX_ERR_DBG		_MMIO(0x43f4)
+ 
+ #define FPGA_DBG		_MMIO(0x42300)
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index bef795e265a66..cb288e6bdc020 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -431,6 +431,9 @@ int i915_vma_bind(struct i915_vma *vma,
+ 		vma->ops->bind_vma(vma->vm, NULL, vma, cache_level, bind_flags);
+ 	}
+ 
++	if (vma->obj)
++		set_bit(I915_BO_WAS_BOUND_BIT, &vma->obj->flags);
++
+ 	atomic_or(bind_flags, &vma->flags);
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
+index e072054adac57..e21c779cb487f 100644
+--- a/drivers/gpu/drm/i915/intel_uncore.c
++++ b/drivers/gpu/drm/i915/intel_uncore.c
+@@ -724,7 +724,8 @@ void intel_uncore_forcewake_get__locked(struct intel_uncore *uncore,
+ }
+ 
+ static void __intel_uncore_forcewake_put(struct intel_uncore *uncore,
+-					 enum forcewake_domains fw_domains)
++					 enum forcewake_domains fw_domains,
++					 bool delayed)
+ {
+ 	struct intel_uncore_forcewake_domain *domain;
+ 	unsigned int tmp;
+@@ -739,7 +740,11 @@ static void __intel_uncore_forcewake_put(struct intel_uncore *uncore,
+ 			continue;
+ 		}
+ 
+-		fw_domains_put(uncore, domain->mask);
++		if (delayed &&
++		    !(domain->uncore->fw_domains_timer & domain->mask))
++			fw_domain_arm_timer(domain);
++		else
++			fw_domains_put(uncore, domain->mask);
+ 	}
+ }
+ 
+@@ -760,7 +765,20 @@ void intel_uncore_forcewake_put(struct intel_uncore *uncore,
+ 		return;
+ 
+ 	spin_lock_irqsave(&uncore->lock, irqflags);
+-	__intel_uncore_forcewake_put(uncore, fw_domains);
++	__intel_uncore_forcewake_put(uncore, fw_domains, false);
++	spin_unlock_irqrestore(&uncore->lock, irqflags);
++}
++
++void intel_uncore_forcewake_put_delayed(struct intel_uncore *uncore,
++					enum forcewake_domains fw_domains)
++{
++	unsigned long irqflags;
++
++	if (!uncore->fw_get_funcs)
++		return;
++
++	spin_lock_irqsave(&uncore->lock, irqflags);
++	__intel_uncore_forcewake_put(uncore, fw_domains, true);
+ 	spin_unlock_irqrestore(&uncore->lock, irqflags);
+ }
+ 
+@@ -802,7 +820,7 @@ void intel_uncore_forcewake_put__locked(struct intel_uncore *uncore,
+ 	if (!uncore->fw_get_funcs)
+ 		return;
+ 
+-	__intel_uncore_forcewake_put(uncore, fw_domains);
++	__intel_uncore_forcewake_put(uncore, fw_domains, false);
+ }
+ 
+ void assert_forcewakes_inactive(struct intel_uncore *uncore)
+diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
+index 3248e4e2c540c..d08088fa4c7e9 100644
+--- a/drivers/gpu/drm/i915/intel_uncore.h
++++ b/drivers/gpu/drm/i915/intel_uncore.h
+@@ -243,6 +243,8 @@ void intel_uncore_forcewake_get(struct intel_uncore *uncore,
+ 				enum forcewake_domains domains);
+ void intel_uncore_forcewake_put(struct intel_uncore *uncore,
+ 				enum forcewake_domains domains);
++void intel_uncore_forcewake_put_delayed(struct intel_uncore *uncore,
++					enum forcewake_domains domains);
+ void intel_uncore_forcewake_flush(struct intel_uncore *uncore,
+ 				  enum forcewake_domains fw_domains);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 2a7cec4cb8a89..f9f28516ffb41 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -1112,15 +1112,14 @@ extern int vmw_execbuf_fence_commands(struct drm_file *file_priv,
+ 				      struct vmw_private *dev_priv,
+ 				      struct vmw_fence_obj **p_fence,
+ 				      uint32_t *p_handle);
+-extern void vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
++extern int vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
+ 					struct vmw_fpriv *vmw_fp,
+ 					int ret,
+ 					struct drm_vmw_fence_rep __user
+ 					*user_fence_rep,
+ 					struct vmw_fence_obj *fence,
+ 					uint32_t fence_handle,
+-					int32_t out_fence_fd,
+-					struct sync_file *sync_file);
++					int32_t out_fence_fd);
+ bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd);
+ 
+ /**
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 5f2ffa9de5c8f..9144e8f88c812 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -3823,17 +3823,17 @@ int vmw_execbuf_fence_commands(struct drm_file *file_priv,
+  * Also if copying fails, user-space will be unable to signal the fence object
+  * so we wait for it immediately, and then unreference the user-space reference.
+  */
+-void
++int
+ vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
+ 			    struct vmw_fpriv *vmw_fp, int ret,
+ 			    struct drm_vmw_fence_rep __user *user_fence_rep,
+ 			    struct vmw_fence_obj *fence, uint32_t fence_handle,
+-			    int32_t out_fence_fd, struct sync_file *sync_file)
++			    int32_t out_fence_fd)
+ {
+ 	struct drm_vmw_fence_rep fence_rep;
+ 
+ 	if (user_fence_rep == NULL)
+-		return;
++		return 0;
+ 
+ 	memset(&fence_rep, 0, sizeof(fence_rep));
+ 
+@@ -3861,20 +3861,14 @@ vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
+ 	 * handle.
+ 	 */
+ 	if (unlikely(ret != 0) && (fence_rep.error == 0)) {
+-		if (sync_file)
+-			fput(sync_file->file);
+-
+-		if (fence_rep.fd != -1) {
+-			put_unused_fd(fence_rep.fd);
+-			fence_rep.fd = -1;
+-		}
+-
+ 		ttm_ref_object_base_unref(vmw_fp->tfile, fence_handle,
+ 					  TTM_REF_USAGE);
+ 		VMW_DEBUG_USER("Fence copy error. Syncing.\n");
+ 		(void) vmw_fence_obj_wait(fence, false, false,
+ 					  VMW_FENCE_WAIT_TIMEOUT);
+ 	}
++
++	return ret ? -EFAULT : 0;
+ }
+ 
+ /**
+@@ -4212,16 +4206,23 @@ int vmw_execbuf_process(struct drm_file *file_priv,
+ 
+ 			(void) vmw_fence_obj_wait(fence, false, false,
+ 						  VMW_FENCE_WAIT_TIMEOUT);
++		}
++	}
++
++	ret = vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv), ret,
++				    user_fence_rep, fence, handle, out_fence_fd);
++
++	if (sync_file) {
++		if (ret) {
++			/* usercopy of fence failed, put the file object */
++			fput(sync_file->file);
++			put_unused_fd(out_fence_fd);
+ 		} else {
+ 			/* Link the fence with the FD created earlier */
+ 			fd_install(out_fence_fd, sync_file->file);
+ 		}
+ 	}
+ 
+-	vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv), ret,
+-				    user_fence_rep, fence, handle, out_fence_fd,
+-				    sync_file);
+-
+ 	/* Don't unreference when handing fence out */
+ 	if (unlikely(out_fence != NULL)) {
+ 		*out_fence = fence;
+@@ -4239,7 +4240,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
+ 	 */
+ 	vmw_validation_unref_lists(&val_ctx);
+ 
+-	return 0;
++	return ret;
+ 
+ out_unlock_binding:
+ 	mutex_unlock(&dev_priv->binding_mutex);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+index 9fe12329a4d58..b4d9d7258a546 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+@@ -1159,7 +1159,7 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
+ 	}
+ 
+ 	vmw_execbuf_copy_fence_user(dev_priv, vmw_fp, 0, user_fence_rep, fence,
+-				    handle, -1, NULL);
++				    handle, -1);
+ 	vmw_fence_obj_unreference(&fence);
+ 	return 0;
+ out_no_create:
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 74fa419092138..14e8f665b13be 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -2516,7 +2516,7 @@ void vmw_kms_helper_validation_finish(struct vmw_private *dev_priv,
+ 	if (file_priv)
+ 		vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv),
+ 					    ret, user_fence_rep, fence,
+-					    handle, -1, NULL);
++					    handle, -1);
+ 	if (out_fence)
+ 		*out_fence = fence;
+ 	else
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index 2b06d78baa086..a19dd6797070c 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -1850,6 +1850,14 @@ struct bnx2x {
+ 
+ 	/* Vxlan/Geneve related information */
+ 	u16 udp_tunnel_ports[BNX2X_UDP_PORT_MAX];
++
++#define FW_CAP_INVALIDATE_VF_FP_HSI	BIT(0)
++	u32 fw_cap;
++
++	u32 fw_major;
++	u32 fw_minor;
++	u32 fw_rev;
++	u32 fw_eng;
+ };
+ 
+ /* Tx queues may be less or equal to Rx queues */
+@@ -2525,5 +2533,6 @@ void bnx2x_register_phc(struct bnx2x *bp);
+  * Meant for implicit re-load flows.
+  */
+ int bnx2x_vlan_reconfigure_vid(struct bnx2x *bp);
+-
++int bnx2x_init_firmware(struct bnx2x *bp);
++void bnx2x_release_firmware(struct bnx2x *bp);
+ #endif /* bnx2x.h */
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index e8e8c2d593c55..e57fe0034ce2a 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -2364,10 +2364,8 @@ int bnx2x_compare_fw_ver(struct bnx2x *bp, u32 load_code, bool print_err)
+ 	if (load_code != FW_MSG_CODE_DRV_LOAD_COMMON_CHIP &&
+ 	    load_code != FW_MSG_CODE_DRV_LOAD_COMMON) {
+ 		/* build my FW version dword */
+-		u32 my_fw = (BCM_5710_FW_MAJOR_VERSION) +
+-			(BCM_5710_FW_MINOR_VERSION << 8) +
+-			(BCM_5710_FW_REVISION_VERSION << 16) +
+-			(BCM_5710_FW_ENGINEERING_VERSION << 24);
++		u32 my_fw = (bp->fw_major) + (bp->fw_minor << 8) +
++				(bp->fw_rev << 16) + (bp->fw_eng << 24);
+ 
+ 		/* read loaded FW from chip */
+ 		u32 loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
+index 3f8435208bf49..a84d015da5dfa 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
+@@ -241,6 +241,8 @@
+ 	IRO[221].m2))
+ #define XSTORM_VF_TO_PF_OFFSET(funcId) \
+ 	(IRO[48].base + ((funcId) * IRO[48].m1))
++#define XSTORM_ETH_FUNCTION_INFO_FP_HSI_VALID_E2_OFFSET(fid)	\
++	(IRO[386].base + ((fid) * IRO[386].m1))
+ #define COMMON_ASM_INVALID_ASSERT_OPCODE 0x0
+ 
+ /* eth hsi version */
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
+index 622fadc50316e..611efee758340 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
+@@ -3024,7 +3024,8 @@ struct afex_stats {
+ 
+ #define BCM_5710_FW_MAJOR_VERSION			7
+ #define BCM_5710_FW_MINOR_VERSION			13
+-#define BCM_5710_FW_REVISION_VERSION		15
++#define BCM_5710_FW_REVISION_VERSION		21
++#define BCM_5710_FW_REVISION_VERSION_V15	15
+ #define BCM_5710_FW_ENGINEERING_VERSION		0
+ #define BCM_5710_FW_COMPILE_FLAGS			1
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index aec666e976831..125dafe1db7ee 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -74,9 +74,19 @@
+ 	__stringify(BCM_5710_FW_MINOR_VERSION) "."	\
+ 	__stringify(BCM_5710_FW_REVISION_VERSION) "."	\
+ 	__stringify(BCM_5710_FW_ENGINEERING_VERSION)
++
++#define FW_FILE_VERSION_V15				\
++	__stringify(BCM_5710_FW_MAJOR_VERSION) "."      \
++	__stringify(BCM_5710_FW_MINOR_VERSION) "."	\
++	__stringify(BCM_5710_FW_REVISION_VERSION_V15) "."	\
++	__stringify(BCM_5710_FW_ENGINEERING_VERSION)
++
+ #define FW_FILE_NAME_E1		"bnx2x/bnx2x-e1-" FW_FILE_VERSION ".fw"
+ #define FW_FILE_NAME_E1H	"bnx2x/bnx2x-e1h-" FW_FILE_VERSION ".fw"
+ #define FW_FILE_NAME_E2		"bnx2x/bnx2x-e2-" FW_FILE_VERSION ".fw"
++#define FW_FILE_NAME_E1_V15	"bnx2x/bnx2x-e1-" FW_FILE_VERSION_V15 ".fw"
++#define FW_FILE_NAME_E1H_V15	"bnx2x/bnx2x-e1h-" FW_FILE_VERSION_V15 ".fw"
++#define FW_FILE_NAME_E2_V15	"bnx2x/bnx2x-e2-" FW_FILE_VERSION_V15 ".fw"
+ 
+ /* Time in jiffies before concluding the transmitter is hung */
+ #define TX_TIMEOUT		(5*HZ)
+@@ -747,9 +757,7 @@ static int bnx2x_mc_assert(struct bnx2x *bp)
+ 		  CHIP_IS_E1(bp) ? "everest1" :
+ 		  CHIP_IS_E1H(bp) ? "everest1h" :
+ 		  CHIP_IS_E2(bp) ? "everest2" : "everest3",
+-		  BCM_5710_FW_MAJOR_VERSION,
+-		  BCM_5710_FW_MINOR_VERSION,
+-		  BCM_5710_FW_REVISION_VERSION);
++		  bp->fw_major, bp->fw_minor, bp->fw_rev);
+ 
+ 	return rc;
+ }
+@@ -12308,6 +12316,15 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 	bnx2x_read_fwinfo(bp);
+ 
++	if (IS_PF(bp)) {
++		rc = bnx2x_init_firmware(bp);
++
++		if (rc) {
++			bnx2x_free_mem_bp(bp);
++			return rc;
++		}
++	}
++
+ 	func = BP_FUNC(bp);
+ 
+ 	/* need to reset chip if undi was active */
+@@ -12320,6 +12337,7 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 		rc = bnx2x_prev_unload(bp);
+ 		if (rc) {
++			bnx2x_release_firmware(bp);
+ 			bnx2x_free_mem_bp(bp);
+ 			return rc;
+ 		}
+@@ -13317,16 +13335,11 @@ static int bnx2x_check_firmware(struct bnx2x *bp)
+ 	/* Check FW version */
+ 	offset = be32_to_cpu(fw_hdr->fw_version.offset);
+ 	fw_ver = firmware->data + offset;
+-	if ((fw_ver[0] != BCM_5710_FW_MAJOR_VERSION) ||
+-	    (fw_ver[1] != BCM_5710_FW_MINOR_VERSION) ||
+-	    (fw_ver[2] != BCM_5710_FW_REVISION_VERSION) ||
+-	    (fw_ver[3] != BCM_5710_FW_ENGINEERING_VERSION)) {
++	if (fw_ver[0] != bp->fw_major || fw_ver[1] != bp->fw_minor ||
++	    fw_ver[2] != bp->fw_rev || fw_ver[3] != bp->fw_eng) {
+ 		BNX2X_ERR("Bad FW version:%d.%d.%d.%d. Should be %d.%d.%d.%d\n",
+-		       fw_ver[0], fw_ver[1], fw_ver[2], fw_ver[3],
+-		       BCM_5710_FW_MAJOR_VERSION,
+-		       BCM_5710_FW_MINOR_VERSION,
+-		       BCM_5710_FW_REVISION_VERSION,
+-		       BCM_5710_FW_ENGINEERING_VERSION);
++			  fw_ver[0], fw_ver[1], fw_ver[2], fw_ver[3],
++			  bp->fw_major, bp->fw_minor, bp->fw_rev, bp->fw_eng);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -13404,34 +13417,51 @@ do {									\
+ 	     (u8 *)bp->arr, len);					\
+ } while (0)
+ 
+-static int bnx2x_init_firmware(struct bnx2x *bp)
++int bnx2x_init_firmware(struct bnx2x *bp)
+ {
+-	const char *fw_file_name;
++	const char *fw_file_name, *fw_file_name_v15;
+ 	struct bnx2x_fw_file_hdr *fw_hdr;
+ 	int rc;
+ 
+ 	if (bp->firmware)
+ 		return 0;
+ 
+-	if (CHIP_IS_E1(bp))
++	if (CHIP_IS_E1(bp)) {
+ 		fw_file_name = FW_FILE_NAME_E1;
+-	else if (CHIP_IS_E1H(bp))
++		fw_file_name_v15 = FW_FILE_NAME_E1_V15;
++	} else if (CHIP_IS_E1H(bp)) {
+ 		fw_file_name = FW_FILE_NAME_E1H;
+-	else if (!CHIP_IS_E1x(bp))
++		fw_file_name_v15 = FW_FILE_NAME_E1H_V15;
++	} else if (!CHIP_IS_E1x(bp)) {
+ 		fw_file_name = FW_FILE_NAME_E2;
+-	else {
++		fw_file_name_v15 = FW_FILE_NAME_E2_V15;
++	} else {
+ 		BNX2X_ERR("Unsupported chip revision\n");
+ 		return -EINVAL;
+ 	}
++
+ 	BNX2X_DEV_INFO("Loading %s\n", fw_file_name);
+ 
+ 	rc = request_firmware(&bp->firmware, fw_file_name, &bp->pdev->dev);
+ 	if (rc) {
+-		BNX2X_ERR("Can't load firmware file %s\n",
+-			  fw_file_name);
+-		goto request_firmware_exit;
++		BNX2X_DEV_INFO("Trying to load older fw %s\n", fw_file_name_v15);
++
++		/* try to load prev version */
++		rc = request_firmware(&bp->firmware, fw_file_name_v15, &bp->pdev->dev);
++
++		if (rc)
++			goto request_firmware_exit;
++
++		bp->fw_rev = BCM_5710_FW_REVISION_VERSION_V15;
++	} else {
++		bp->fw_cap |= FW_CAP_INVALIDATE_VF_FP_HSI;
++		bp->fw_rev = BCM_5710_FW_REVISION_VERSION;
+ 	}
+ 
++	bp->fw_major = BCM_5710_FW_MAJOR_VERSION;
++	bp->fw_minor = BCM_5710_FW_MINOR_VERSION;
++	bp->fw_eng = BCM_5710_FW_ENGINEERING_VERSION;
++
+ 	rc = bnx2x_check_firmware(bp);
+ 	if (rc) {
+ 		BNX2X_ERR("Corrupt firmware file %s\n", fw_file_name);
+@@ -13487,7 +13517,7 @@ request_firmware_exit:
+ 	return rc;
+ }
+ 
+-static void bnx2x_release_firmware(struct bnx2x *bp)
++void bnx2x_release_firmware(struct bnx2x *bp)
+ {
+ 	kfree(bp->init_ops_offsets);
+ 	kfree(bp->init_ops);
+@@ -14004,6 +14034,7 @@ static int bnx2x_init_one(struct pci_dev *pdev,
+ 	return 0;
+ 
+ init_one_freemem:
++	bnx2x_release_firmware(bp);
+ 	bnx2x_free_mem_bp(bp);
+ 
+ init_one_exit:
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+index 74a8931ce1d1d..11d15cd036005 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+@@ -758,9 +758,18 @@ static void bnx2x_vf_igu_reset(struct bnx2x *bp, struct bnx2x_virtf *vf)
+ 
+ void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid)
+ {
++	u16 abs_fid;
++
++	abs_fid = FW_VF_HANDLE(abs_vfid);
++
+ 	/* set the VF-PF association in the FW */
+-	storm_memset_vf_to_pf(bp, FW_VF_HANDLE(abs_vfid), BP_FUNC(bp));
+-	storm_memset_func_en(bp, FW_VF_HANDLE(abs_vfid), 1);
++	storm_memset_vf_to_pf(bp, abs_fid, BP_FUNC(bp));
++	storm_memset_func_en(bp, abs_fid, 1);
++
++	/* Invalidate fp_hsi version for vfs */
++	if (bp->fw_cap & FW_CAP_INVALIDATE_VF_FP_HSI)
++		REG_WR8(bp, BAR_XSTRORM_INTMEM +
++			    XSTORM_ETH_FUNCTION_INFO_FP_HSI_VALID_E2_OFFSET(abs_fid), 0);
+ 
+ 	/* clear vf errors*/
+ 	bnx2x_vf_semi_clear_err(bp, abs_vfid);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index e0fbb940fe5c3..15f303180d70c 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1830,6 +1830,18 @@ static inline void io_get_task_refs(int nr)
+ 		io_task_refs_refill(tctx);
+ }
+ 
++static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
++{
++	struct io_uring_task *tctx = task->io_uring;
++	unsigned int refs = tctx->cached_refs;
++
++	if (refs) {
++		tctx->cached_refs = 0;
++		percpu_counter_sub(&tctx->inflight, refs);
++		put_task_struct_many(task, refs);
++	}
++}
++
+ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
+ 				     s32 res, u32 cflags)
+ {
+@@ -2250,6 +2262,10 @@ static void tctx_task_work(struct callback_head *cb)
+ 	}
+ 
+ 	ctx_flush_and_put(ctx, &locked);
++
++	/* relaxed read is enough as only the task itself sets ->in_idle */
++	if (unlikely(atomic_read(&tctx->in_idle)))
++		io_uring_drop_tctx_refs(current);
+ }
+ 
+ static void io_req_task_work_add(struct io_kiocb *req)
+@@ -9818,18 +9834,6 @@ static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
+ 	return percpu_counter_sum(&tctx->inflight);
+ }
+ 
+-static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
+-{
+-	struct io_uring_task *tctx = task->io_uring;
+-	unsigned int refs = tctx->cached_refs;
+-
+-	if (refs) {
+-		tctx->cached_refs = 0;
+-		percpu_counter_sub(&tctx->inflight, refs);
+-		put_task_struct_many(task, refs);
+-	}
+-}
+-
+ /*
+  * Find any io_uring ctx that this task has registered or done IO on, and cancel
+  * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
+@@ -9887,10 +9891,14 @@ static __cold void io_uring_cancel_generic(bool cancel_all,
+ 			schedule();
+ 		finish_wait(&tctx->wait, &wait);
+ 	} while (1);
+-	atomic_dec(&tctx->in_idle);
+ 
+ 	io_uring_clean_tctx(tctx);
+ 	if (cancel_all) {
++		/*
++		 * We shouldn't run task_works after cancel, so just leave
++		 * ->in_idle set for normal exit.
++		 */
++		atomic_dec(&tctx->in_idle);
+ 		/* for exec all current's requests should be gone, kill tctx */
+ 		__io_uring_free(current);
+ 	}
+diff --git a/fs/select.c b/fs/select.c
+index 945896d0ac9e7..5edffee1162c2 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -458,9 +458,11 @@ get_max:
+ 	return max;
+ }
+ 
+-#define POLLIN_SET (EPOLLRDNORM | EPOLLRDBAND | EPOLLIN | EPOLLHUP | EPOLLERR)
+-#define POLLOUT_SET (EPOLLWRBAND | EPOLLWRNORM | EPOLLOUT | EPOLLERR)
+-#define POLLEX_SET (EPOLLPRI)
++#define POLLIN_SET (EPOLLRDNORM | EPOLLRDBAND | EPOLLIN | EPOLLHUP | EPOLLERR |\
++			EPOLLNVAL)
++#define POLLOUT_SET (EPOLLWRBAND | EPOLLWRNORM | EPOLLOUT | EPOLLERR |\
++			 EPOLLNVAL)
++#define POLLEX_SET (EPOLLPRI | EPOLLNVAL)
+ 
+ static inline void wait_key_set(poll_table *wait, unsigned long in,
+ 				unsigned long out, unsigned long bit,
+@@ -527,6 +529,7 @@ static int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
+ 					break;
+ 				if (!(bit & all_bits))
+ 					continue;
++				mask = EPOLLNVAL;
+ 				f = fdget(i);
+ 				if (f.file) {
+ 					wait_key_set(wait, in, out, bit,
+@@ -534,34 +537,34 @@ static int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
+ 					mask = vfs_poll(f.file, wait);
+ 
+ 					fdput(f);
+-					if ((mask & POLLIN_SET) && (in & bit)) {
+-						res_in |= bit;
+-						retval++;
+-						wait->_qproc = NULL;
+-					}
+-					if ((mask & POLLOUT_SET) && (out & bit)) {
+-						res_out |= bit;
+-						retval++;
+-						wait->_qproc = NULL;
+-					}
+-					if ((mask & POLLEX_SET) && (ex & bit)) {
+-						res_ex |= bit;
+-						retval++;
+-						wait->_qproc = NULL;
+-					}
+-					/* got something, stop busy polling */
+-					if (retval) {
+-						can_busy_loop = false;
+-						busy_flag = 0;
+-
+-					/*
+-					 * only remember a returned
+-					 * POLL_BUSY_LOOP if we asked for it
+-					 */
+-					} else if (busy_flag & mask)
+-						can_busy_loop = true;
+-
+ 				}
++				if ((mask & POLLIN_SET) && (in & bit)) {
++					res_in |= bit;
++					retval++;
++					wait->_qproc = NULL;
++				}
++				if ((mask & POLLOUT_SET) && (out & bit)) {
++					res_out |= bit;
++					retval++;
++					wait->_qproc = NULL;
++				}
++				if ((mask & POLLEX_SET) && (ex & bit)) {
++					res_ex |= bit;
++					retval++;
++					wait->_qproc = NULL;
++				}
++				/* got something, stop busy polling */
++				if (retval) {
++					can_busy_loop = false;
++					busy_flag = 0;
++
++				/*
++				 * only remember a returned
++				 * POLL_BUSY_LOOP if we asked for it
++				 */
++				} else if (busy_flag & mask)
++					can_busy_loop = true;
++
+ 			}
+ 			if (res_in)
+ 				*rinp = res_in;
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 906b6887622d3..28fd0cef9b1fb 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1590,10 +1590,11 @@ static void __maybe_unused rcu_advance_cbs_nowake(struct rcu_node *rnp,
+ 						  struct rcu_data *rdp)
+ {
+ 	rcu_lockdep_assert_cblist_protected(rdp);
+-	if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)) ||
+-	    !raw_spin_trylock_rcu_node(rnp))
++	if (!rcu_seq_state(rcu_seq_current(&rnp->gp_seq)) || !raw_spin_trylock_rcu_node(rnp))
+ 		return;
+-	WARN_ON_ONCE(rcu_advance_cbs(rnp, rdp));
++	// The grace period cannot end while we hold the rcu_node lock.
++	if (rcu_seq_state(rcu_seq_current(&rnp->gp_seq)))
++		WARN_ON_ONCE(rcu_advance_cbs(rnp, rdp));
+ 	raw_spin_unlock_rcu_node(rnp);
+ }
+ 
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 2ed5f2a0879d3..fdc952f270c14 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -629,11 +629,17 @@ static DEFINE_SPINLOCK(stats_flush_lock);
+ static DEFINE_PER_CPU(unsigned int, stats_updates);
+ static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
+ 
+-static inline void memcg_rstat_updated(struct mem_cgroup *memcg)
++static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
+ {
++	unsigned int x;
++
+ 	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
+-	if (!(__this_cpu_inc_return(stats_updates) % MEMCG_CHARGE_BATCH))
+-		atomic_inc(&stats_flush_threshold);
++
++	x = __this_cpu_add_return(stats_updates, abs(val));
++	if (x > MEMCG_CHARGE_BATCH) {
++		atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
++		__this_cpu_write(stats_updates, 0);
++	}
+ }
+ 
+ static void __mem_cgroup_flush_stats(void)
+@@ -656,7 +662,7 @@ void mem_cgroup_flush_stats(void)
+ 
+ static void flush_memcg_stats_dwork(struct work_struct *w)
+ {
+-	mem_cgroup_flush_stats();
++	__mem_cgroup_flush_stats();
+ 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
+ }
+ 
+@@ -672,7 +678,7 @@ void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
+ 		return;
+ 
+ 	__this_cpu_add(memcg->vmstats_percpu->state[idx], val);
+-	memcg_rstat_updated(memcg);
++	memcg_rstat_updated(memcg, val);
+ }
+ 
+ /* idx can be of type enum memcg_stat_item or node_stat_item. */
+@@ -705,7 +711,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
+ 	/* Update lruvec */
+ 	__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
+ 
+-	memcg_rstat_updated(memcg);
++	memcg_rstat_updated(memcg, val);
+ }
+ 
+ /**
+@@ -789,7 +795,7 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
+ 		return;
+ 
+ 	__this_cpu_add(memcg->vmstats_percpu->events[idx], count);
+-	memcg_rstat_updated(memcg);
++	memcg_rstat_updated(memcg, count);
+ }
+ 
+ static unsigned long memcg_events(struct mem_cgroup *memcg, int event)


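The mm/memcontrol.c hunk above changes `memcg_rstat_updated()` to accumulate the magnitude of each stat update per CPU, instead of counting update events, before crediting the global flush threshold. A minimal single-threaded sketch of that batching logic follows; it is an illustrative model, not the kernel code — the per-CPU variable, the atomic counter, and the exact `MEMCG_CHARGE_BATCH` value (32 in kernels of this era) are simplified assumptions here.

```python
# Simplified model of the patched memcg_rstat_updated() batching logic.
# The kernel uses a per-CPU counter and an atomic threshold; this model
# collapses both to plain ints for illustration.
MEMCG_CHARGE_BATCH = 32  # assumed batch size; see memcontrol.h in the kernel

class StatsBatcher:
    def __init__(self):
        self.stats_updates = 0    # models the per-CPU 'stats_updates' counter
        self.flush_threshold = 0  # models the global 'stats_flush_threshold'

    def rstat_updated(self, val):
        # The fix accumulates abs(val), so one large update and many small
        # updates contribute proportionally to the flush threshold.
        self.stats_updates += abs(val)
        if self.stats_updates > MEMCG_CHARGE_BATCH:
            self.flush_threshold += self.stats_updates // MEMCG_CHARGE_BATCH
            self.stats_updates = 0

b = StatsBatcher()
for _ in range(33):
    b.rstat_updated(1)       # 33 > 32: one threshold credit, counter reset
print(b.flush_threshold)     # -> 1
b.rstat_updated(1000)        # a single large update credits 1000 // 32 = 31
print(b.flush_threshold)     # -> 32
```

Before the fix, the 1000-unit update above would have counted the same as a 1-unit update, which is the under-accounting the patch addresses.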
^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-01-29 20:46 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-01-29 20:46 UTC (permalink / raw
  To: gentoo-commits

commit:     82a0f8aaa2447a77ba0ad1285f995c750ac4e97d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 29 20:43:23 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan 29 20:46:35 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=82a0f8aa

Select CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y as default

Bug: https://bugs.gentoo.org/832224

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 24b75095..3712fa96 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-12-21 08:57:43.779324794 -0500
-+++ b/distro/Kconfig	2021-12-21 14:12:07.964572417 -0500
-@@ -0,0 +1,283 @@
+--- /dev/null	2022-01-29 13:28:12.679255142 -0500
++++ b/distro/Kconfig	2022-01-29 15:29:29.800465617 -0500
+@@ -0,0 +1,285 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -16,6 +16,8 @@
 +
 +	default y
 +
++	select CPU_FREQ_DEFAULT_GOV_SCHEDUTIL
++
 +	help
 +		In order to boot Gentoo Linux a minimal set of config settings needs to
 +		be enabled in the kernel; to avoid the users from having to enable them

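The commit above uses Kconfig `select` (rather than `default y`) to force `CPU_FREQ_DEFAULT_GOV_SCHEDUTIL` on whenever `GENTOO_LINUX` is enabled. The practical difference can be sketched as follows; this is a deliberately simplified model of Kconfig resolution (real Kconfig also handles tristates, dependencies, and choice groups), and the function and dictionary names are illustrative, not part of any real tool.

```python
# Illustrative model of Kconfig 'select' vs 'default': a 'select' from an
# enabled symbol forces the target on, overriding the user's saved choice;
# a 'default' only applies when the user has not set the symbol at all.

def resolve(symbol, user_config, defaults, selected_by, enabled):
    """Return the effective boolean value of `symbol`."""
    # A 'select' from any enabled symbol wins unconditionally.
    if any(enabled.get(s, False) for s in selected_by.get(symbol, ())):
        return True
    # Otherwise the user's explicit .config setting wins.
    if symbol in user_config:
        return user_config[symbol]
    # Otherwise fall back to the declared default (off if none).
    return defaults.get(symbol, False)

enabled = {"GENTOO_LINUX": True}
selected_by = {"CPU_FREQ_DEFAULT_GOV_SCHEDUTIL": ["GENTOO_LINUX"]}

# Even a .config that left the governor off gets it forced back on:
print(resolve("CPU_FREQ_DEFAULT_GOV_SCHEDUTIL",
              {"CPU_FREQ_DEFAULT_GOV_SCHEDUTIL": False},
              {}, selected_by, enabled))   # -> True
```

This is why `select` suits the bug being fixed here: it guarantees the governor default for existing configs, whereas a `default y` would not override a stale user setting.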


* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-01 17:21 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-02-01 17:21 UTC (permalink / raw
  To: gentoo-commits

commit:     9531e5312113989d7f6c60a965ac5b2357fa6459
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb  1 17:21:07 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb  1 17:21:07 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9531e531

Linux patch 5.16.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1004_linux-5.16.5.patch | 8026 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8030 insertions(+)

diff --git a/0000_README b/0000_README
index ff7d994b..f8c4cea5 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-5.16.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.4
 
+Patch:  1004_linux-5.16.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-5.16.5.patch b/1004_linux-5.16.5.patch
new file mode 100644
index 00000000..282c5f8b
--- /dev/null
+++ b/1004_linux-5.16.5.patch
@@ -0,0 +1,8026 @@
+diff --git a/Documentation/accounting/psi.rst b/Documentation/accounting/psi.rst
+index f2b3439edcc2c..860fe651d6453 100644
+--- a/Documentation/accounting/psi.rst
++++ b/Documentation/accounting/psi.rst
+@@ -92,7 +92,8 @@ Triggers can be set on more than one psi metric and more than one trigger
+ for the same psi metric can be specified. However for each trigger a separate
+ file descriptor is required to be able to poll it separately from others,
+ therefore for each trigger a separate open() syscall should be made even
+-when opening the same psi interface file.
++when opening the same psi interface file. Write operations to a file descriptor
++with an already existing psi trigger will fail with EBUSY.
+ 
+ Monitors activate only when system enters stall state for the monitored
+ psi metric and deactivates upon exit from the stall state. While system is
+diff --git a/Documentation/devicetree/bindings/net/can/tcan4x5x.txt b/Documentation/devicetree/bindings/net/can/tcan4x5x.txt
+index 0968b40aef1e8..e3501bfa22e90 100644
+--- a/Documentation/devicetree/bindings/net/can/tcan4x5x.txt
++++ b/Documentation/devicetree/bindings/net/can/tcan4x5x.txt
+@@ -31,7 +31,7 @@ tcan4x5x: tcan4x5x@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		spi-max-frequency = <10000000>;
+-		bosch,mram-cfg = <0x0 0 0 32 0 0 1 1>;
++		bosch,mram-cfg = <0x0 0 0 16 0 0 1 1>;
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
+ 		device-state-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
+diff --git a/Makefile b/Makefile
+index 36ff4ed4763b3..2f0e5c3d9e2a7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index 7d23d4bb2168b..6fe67963ba5a0 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -288,6 +288,7 @@
+  */
+ #define ALT_UP(instr...)					\
+ 	.pushsection ".alt.smp.init", "a"			;\
++	.align	2						;\
+ 	.long	9998b - .					;\
+ 9997:	instr							;\
+ 	.if . - 9997b == 2					;\
+@@ -299,6 +300,7 @@
+ 	.popsection
+ #define ALT_UP_B(label)					\
+ 	.pushsection ".alt.smp.init", "a"			;\
++	.align	2						;\
+ 	.long	9998b - .					;\
+ 	W(b)	. + (label - 9998b)					;\
+ 	.popsection
+diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
+index 6af68edfa53ab..bdc35c0e8dfb9 100644
+--- a/arch/arm/include/asm/processor.h
++++ b/arch/arm/include/asm/processor.h
+@@ -96,6 +96,7 @@ unsigned long __get_wchan(struct task_struct *p);
+ #define __ALT_SMP_ASM(smp, up)						\
+ 	"9998:	" smp "\n"						\
+ 	"	.pushsection \".alt.smp.init\", \"a\"\n"		\
++	"	.align	2\n"						\
+ 	"	.long	9998b - .\n"					\
+ 	"	" up "\n"						\
+ 	"	.popsection\n"
+diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
+index 36fbc33292526..32dbfd81f42a4 100644
+--- a/arch/arm/include/asm/uaccess.h
++++ b/arch/arm/include/asm/uaccess.h
+@@ -11,6 +11,7 @@
+ #include <linux/string.h>
+ #include <asm/memory.h>
+ #include <asm/domain.h>
++#include <asm/unaligned.h>
+ #include <asm/unified.h>
+ #include <asm/compiler.h>
+ 
+@@ -497,7 +498,10 @@ do {									\
+ 	}								\
+ 	default: __err = __get_user_bad(); break;			\
+ 	}								\
+-	*(type *)(dst) = __val;						\
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))		\
++		put_unaligned(__val, (type *)(dst));			\
++	else								\
++		*(type *)(dst) = __val; /* aligned by caller */		\
+ 	if (__err)							\
+ 		goto err_label;						\
+ } while (0)
+@@ -507,7 +511,9 @@ do {									\
+ 	const type *__pk_ptr = (dst);					\
+ 	unsigned long __dst = (unsigned long)__pk_ptr;			\
+ 	int __err = 0;							\
+-	type __val = *(type *)src;					\
++	type __val = IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)	\
++		     ? get_unaligned((type *)(src))			\
++		     : *(type *)(src);	/* aligned by caller */		\
+ 	switch (sizeof(type)) {						\
+ 	case 1: __put_user_asm_byte(__val, __dst, __err, ""); break;	\
+ 	case 2:	__put_user_asm_half(__val, __dst, __err, ""); break;	\
+diff --git a/arch/arm/probes/kprobes/Makefile b/arch/arm/probes/kprobes/Makefile
+index 14db56f49f0a3..6159010dac4a6 100644
+--- a/arch/arm/probes/kprobes/Makefile
++++ b/arch/arm/probes/kprobes/Makefile
+@@ -1,4 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
++KASAN_SANITIZE_actions-common.o := n
++KASAN_SANITIZE_actions-arm.o := n
++KASAN_SANITIZE_actions-thumb.o := n
+ obj-$(CONFIG_KPROBES)		+= core.o actions-common.o checkers-common.o
+ obj-$(CONFIG_ARM_KPROBES_TEST)	+= test-kprobes.o
+ test-kprobes-objs		:= test-core.o
+diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
+index 0418399e0a201..c5d0097154020 100644
+--- a/arch/arm64/kvm/hyp/exception.c
++++ b/arch/arm64/kvm/hyp/exception.c
+@@ -38,7 +38,10 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
+ 
+ static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
+ {
+-	write_sysreg_el1(val, SYS_SPSR);
++	if (has_vhe())
++		write_sysreg_el1(val, SYS_SPSR);
++	else
++		__vcpu_sys_reg(vcpu, SPSR_EL1) = val;
+ }
+ 
+ static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
+diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
+index f8ceebe4982eb..4c77ff556f0ae 100644
+--- a/arch/arm64/kvm/hyp/pgtable.c
++++ b/arch/arm64/kvm/hyp/pgtable.c
+@@ -921,13 +921,9 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+ 	 */
+ 	stage2_put_pte(ptep, mmu, addr, level, mm_ops);
+ 
+-	if (need_flush) {
+-		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
+-
+-		dcache_clean_inval_poc((unsigned long)pte_follow,
+-				    (unsigned long)pte_follow +
+-					    kvm_granule_size(level));
+-	}
++	if (need_flush && mm_ops->dcache_clean_inval_poc)
++		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(pte, mm_ops),
++					       kvm_granule_size(level));
+ 
+ 	if (childp)
+ 		mm_ops->put_page(childp);
+@@ -1089,15 +1085,13 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+ 	struct kvm_pgtable *pgt = arg;
+ 	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
+ 	kvm_pte_t pte = *ptep;
+-	kvm_pte_t *pte_follow;
+ 
+ 	if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
+ 		return 0;
+ 
+-	pte_follow = kvm_pte_follow(pte, mm_ops);
+-	dcache_clean_inval_poc((unsigned long)pte_follow,
+-			    (unsigned long)pte_follow +
+-				    kvm_granule_size(level));
++	if (mm_ops->dcache_clean_inval_poc)
++		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(pte, mm_ops),
++					       kvm_granule_size(level));
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
+index 20db2f281cf23..4fb419f7b8b61 100644
+--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
++++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
+@@ -983,6 +983,9 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+ 	val = ((vtr >> 29) & 7) << ICC_CTLR_EL1_PRI_BITS_SHIFT;
+ 	/* IDbits */
+ 	val |= ((vtr >> 23) & 7) << ICC_CTLR_EL1_ID_BITS_SHIFT;
++	/* SEIS */
++	if (kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_SEIS_MASK)
++		val |= BIT(ICC_CTLR_EL1_SEIS_SHIFT);
+ 	/* A3V */
+ 	val |= ((vtr >> 21) & 1) << ICC_CTLR_EL1_A3V_SHIFT;
+ 	/* EOImode */
+diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
+index 04f62c4b07fb5..0216201251e2f 100644
+--- a/arch/arm64/kvm/vgic/vgic-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-v3.c
+@@ -609,6 +609,18 @@ static int __init early_gicv4_enable(char *buf)
+ }
+ early_param("kvm-arm.vgic_v4_enable", early_gicv4_enable);
+ 
++static const struct midr_range broken_seis[] = {
++	MIDR_ALL_VERSIONS(MIDR_APPLE_M1_ICESTORM),
++	MIDR_ALL_VERSIONS(MIDR_APPLE_M1_FIRESTORM),
++	{},
++};
++
++static bool vgic_v3_broken_seis(void)
++{
++	return ((kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_SEIS_MASK) &&
++		is_midr_in_range_list(read_cpuid_id(), broken_seis));
++}
++
+ /**
+  * vgic_v3_probe - probe for a VGICv3 compatible interrupt controller
+  * @info:	pointer to the GIC description
+@@ -676,9 +688,10 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
+ 		group1_trap = true;
+ 	}
+ 
+-	if (kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_SEIS_MASK) {
+-		kvm_info("GICv3 with locally generated SEI\n");
++	if (vgic_v3_broken_seis()) {
++		kvm_info("GICv3 with broken locally generated SEI\n");
+ 
++		kvm_vgic_global_state.ich_vtr_el2 &= ~ICH_VTR_SEIS_MASK;
+ 		group0_trap = true;
+ 		group1_trap = true;
+ 		if (ich_vtr_el2 & ICH_VTR_TDS_MASK)
+diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
+index c3d53811a15e1..5d1a3125fd2b0 100644
+--- a/arch/arm64/mm/extable.c
++++ b/arch/arm64/mm/extable.c
+@@ -43,8 +43,8 @@ static bool
+ ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
+ 				  struct pt_regs *regs)
+ {
+-	int reg_data = FIELD_GET(EX_DATA_REG_DATA, ex->type);
+-	int reg_addr = FIELD_GET(EX_DATA_REG_ADDR, ex->type);
++	int reg_data = FIELD_GET(EX_DATA_REG_DATA, ex->data);
++	int reg_addr = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
+ 	unsigned long data, addr, offset;
+ 
+ 	addr = pt_regs_read_reg(regs, reg_addr);
+diff --git a/arch/ia64/pci/fixup.c b/arch/ia64/pci/fixup.c
+index acb55a41260dd..2bcdd7d3a1ada 100644
+--- a/arch/ia64/pci/fixup.c
++++ b/arch/ia64/pci/fixup.c
+@@ -76,5 +76,5 @@ static void pci_fixup_video(struct pci_dev *pdev)
+ 		}
+ 	}
+ }
+-DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
+-				PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
++DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_ANY_ID, PCI_ANY_ID,
++			       PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
+diff --git a/arch/mips/loongson64/vbios_quirk.c b/arch/mips/loongson64/vbios_quirk.c
+index 9a29e94d3db1d..3115d4de982c5 100644
+--- a/arch/mips/loongson64/vbios_quirk.c
++++ b/arch/mips/loongson64/vbios_quirk.c
+@@ -3,7 +3,7 @@
+ #include <linux/pci.h>
+ #include <loongson.h>
+ 
+-static void pci_fixup_radeon(struct pci_dev *pdev)
++static void pci_fixup_video(struct pci_dev *pdev)
+ {
+ 	struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
+ 
+@@ -22,8 +22,7 @@ static void pci_fixup_radeon(struct pci_dev *pdev)
+ 	res->flags = IORESOURCE_MEM | IORESOURCE_ROM_SHADOW |
+ 		     IORESOURCE_PCI_FIXED;
+ 
+-	dev_info(&pdev->dev, "BAR %d: assigned %pR for Radeon ROM\n",
+-		 PCI_ROM_RESOURCE, res);
++	dev_info(&pdev->dev, "Video device with shadowed ROM at %pR\n", res);
+ }
+-DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, 0x9615,
+-				PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_radeon);
++DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_ATI, 0x9615,
++			       PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
+diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+index f5be185cbdf8d..94ad7acfd0565 100644
+--- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h
++++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+@@ -143,6 +143,8 @@ static __always_inline void update_user_segments(u32 val)
+ 	update_user_segment(15, val);
+ }
+ 
++int __init find_free_bat(void);
++unsigned int bat_block_size(unsigned long base, unsigned long top);
+ #endif /* !__ASSEMBLY__ */
+ 
+ /* We happily ignore the smaller BATs on 601, we don't actually use
+diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
+index fff391b9b97bc..5b150cf573618 100644
+--- a/arch/powerpc/include/asm/kvm_book3s_64.h
++++ b/arch/powerpc/include/asm/kvm_book3s_64.h
+@@ -39,7 +39,6 @@ struct kvm_nested_guest {
+ 	pgd_t *shadow_pgtable;		/* our page table for this guest */
+ 	u64 l1_gr_to_hr;		/* L1's addr of part'n-scoped table */
+ 	u64 process_table;		/* process table entry for this guest */
+-	u64 hfscr;			/* HFSCR that the L1 requested for this nested guest */
+ 	long refcnt;			/* number of pointers to this struct */
+ 	struct mutex tlb_lock;		/* serialize page faults and tlbies */
+ 	struct kvm_nested_guest *next;
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index e4d23193eba75..2ec07dacbaa13 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -814,6 +814,7 @@ struct kvm_vcpu_arch {
+ 
+ 	/* For support of nested guests */
+ 	struct kvm_nested_guest *nested;
++	u64 nested_hfscr;	/* HFSCR that the L1 requested for the nested guest */
+ 	u32 nested_vcpu_id;
+ 	gpa_t nested_io_gpr;
+ #endif
+diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
+index baea657bc8687..bca31a61e57f8 100644
+--- a/arch/powerpc/include/asm/ppc-opcode.h
++++ b/arch/powerpc/include/asm/ppc-opcode.h
+@@ -498,6 +498,7 @@
+ #define PPC_RAW_LDX(r, base, b)		(0x7c00002a | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_RAW_LHZ(r, base, i)		(0xa0000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i))
+ #define PPC_RAW_LHBRX(r, base, b)	(0x7c00062c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
++#define PPC_RAW_LWBRX(r, base, b)	(0x7c00042c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_RAW_LDBRX(r, base, b)	(0x7c000428 | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_RAW_STWCX(s, a, b)		(0x7c00012d | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b))
+ #define PPC_RAW_CMPWI(a, i)		(0x2c000000 | ___PPC_RA(a) | IMM_L(i))
+diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h
+index 52d05b465e3ec..25fc8ad9a27af 100644
+--- a/arch/powerpc/include/asm/syscall.h
++++ b/arch/powerpc/include/asm/syscall.h
+@@ -90,7 +90,7 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ 	unsigned long val, mask = -1UL;
+ 	unsigned int n = 6;
+ 
+-	if (is_32bit_task())
++	if (is_tsk_32bit_task(task))
+ 		mask = 0xffffffff;
+ 
+ 	while (n--) {
+@@ -105,7 +105,7 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ 
+ static inline int syscall_get_arch(struct task_struct *task)
+ {
+-	if (is_32bit_task())
++	if (is_tsk_32bit_task(task))
+ 		return AUDIT_ARCH_PPC;
+ 	else if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
+ 		return AUDIT_ARCH_PPC64LE;
+diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
+index 5725029aaa295..d6e649b3c70b6 100644
+--- a/arch/powerpc/include/asm/thread_info.h
++++ b/arch/powerpc/include/asm/thread_info.h
+@@ -168,8 +168,10 @@ static inline bool test_thread_local_flags(unsigned int flags)
+ 
+ #ifdef CONFIG_COMPAT
+ #define is_32bit_task()	(test_thread_flag(TIF_32BIT))
++#define is_tsk_32bit_task(tsk)	(test_tsk_thread_flag(tsk, TIF_32BIT))
+ #else
+ #define is_32bit_task()	(IS_ENABLED(CONFIG_PPC32))
++#define is_tsk_32bit_task(tsk)	(IS_ENABLED(CONFIG_PPC32))
+ #endif
+ 
+ #if defined(CONFIG_PPC64)
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index 5fa68c2ef1f81..36f3f5a8868dd 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -11,6 +11,7 @@ CFLAGS_prom_init.o      += -fPIC
+ CFLAGS_btext.o		+= -fPIC
+ endif
+ 
++CFLAGS_early_32.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
+index 4b1ff94e67eb4..4c6d1a8dcefed 100644
+--- a/arch/powerpc/kernel/interrupt_64.S
++++ b/arch/powerpc/kernel/interrupt_64.S
+@@ -30,6 +30,7 @@ COMPAT_SYS_CALL_TABLE:
+ 	.ifc \srr,srr
+ 	mfspr	r11,SPRN_SRR0
+ 	ld	r12,_NIP(r1)
++	clrrdi  r11,r11,2
+ 	clrrdi  r12,r12,2
+ 100:	tdne	r11,r12
+ 	EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
+@@ -40,6 +41,7 @@ COMPAT_SYS_CALL_TABLE:
+ 	.else
+ 	mfspr	r11,SPRN_HSRR0
+ 	ld	r12,_NIP(r1)
++	clrrdi  r11,r11,2
+ 	clrrdi  r12,r12,2
+ 100:	tdne	r11,r12
+ 	EMIT_WARN_ENTRY 100b,__FILE__,__LINE__,(BUGFLAG_WARNING | BUGFLAG_ONCE)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 94da0d25eb125..a2fd1db29f7e8 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -1731,7 +1731,6 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
+ 
+ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_nested_guest *nested = vcpu->arch.nested;
+ 	int r;
+ 	int srcu_idx;
+ 
+@@ -1831,7 +1830,7 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
+ 		 * it into a HEAI.
+ 		 */
+ 		if (!(vcpu->arch.hfscr_permitted & (1UL << cause)) ||
+-					(nested->hfscr & (1UL << cause))) {
++				(vcpu->arch.nested_hfscr & (1UL << cause))) {
+ 			vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
+ 
+ 			/*
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 89295b52a97c3..6c4e0e93105ff 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -362,7 +362,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
+ 	/* set L1 state to L2 state */
+ 	vcpu->arch.nested = l2;
+ 	vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token;
+-	l2->hfscr = l2_hv.hfscr;
++	vcpu->arch.nested_hfscr = l2_hv.hfscr;
+ 	vcpu->arch.regs = l2_regs;
+ 
+ 	/* Guest must always run with ME enabled, HV disabled. */
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 9e5d0f413b712..0b08e85d38391 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -19,6 +19,9 @@ CFLAGS_code-patching.o += -DDISABLE_BRANCH_PROFILING
+ CFLAGS_feature-fixups.o += -DDISABLE_BRANCH_PROFILING
+ endif
+ 
++CFLAGS_code-patching.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
++CFLAGS_feature-fixups.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
++
+ obj-y += alloc.o code-patching.o feature-fixups.o pmem.o test_code-patching.o
+ 
+ ifndef CONFIG_KASAN
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index 27061583a0107..203735caf6915 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -76,7 +76,7 @@ unsigned long p_block_mapped(phys_addr_t pa)
+ 	return 0;
+ }
+ 
+-static int find_free_bat(void)
++int __init find_free_bat(void)
+ {
+ 	int b;
+ 	int n = mmu_has_feature(MMU_FTR_USE_HIGH_BATS) ? 8 : 4;
+@@ -100,7 +100,7 @@ static int find_free_bat(void)
+  * - block size has to be a power of two. This is calculated by finding the
+  *   highest bit set to 1.
+  */
+-static unsigned int block_size(unsigned long base, unsigned long top)
++unsigned int bat_block_size(unsigned long base, unsigned long top)
+ {
+ 	unsigned int max_size = SZ_256M;
+ 	unsigned int base_shift = (ffs(base) - 1) & 31;
+@@ -145,7 +145,7 @@ static unsigned long __init __mmu_mapin_ram(unsigned long base, unsigned long to
+ 	int idx;
+ 
+ 	while ((idx = find_free_bat()) != -1 && base != top) {
+-		unsigned int size = block_size(base, top);
++		unsigned int size = bat_block_size(base, top);
+ 
+ 		if (size < 128 << 10)
+ 			break;
+@@ -196,18 +196,17 @@ void mmu_mark_initmem_nx(void)
+ 	int nb = mmu_has_feature(MMU_FTR_USE_HIGH_BATS) ? 8 : 4;
+ 	int i;
+ 	unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
+-	unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
++	unsigned long top = ALIGN((unsigned long)_etext - PAGE_OFFSET, SZ_128K);
+ 	unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
+ 	unsigned long size;
+ 
+-	for (i = 0; i < nb - 1 && base < top && top - base > (128 << 10);) {
+-		size = block_size(base, top);
++	for (i = 0; i < nb - 1 && base < top;) {
++		size = bat_block_size(base, top);
+ 		setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
+ 		base += size;
+ 	}
+ 	if (base < top) {
+-		size = block_size(base, top);
+-		size = max(size, 128UL << 10);
++		size = bat_block_size(base, top);
+ 		if ((top - base) > size) {
+ 			size <<= 1;
+ 			if (strict_kernel_rwx_enabled() && base + size > border)
+diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
+index 35b287b0a8da4..450a67ef0bbe1 100644
+--- a/arch/powerpc/mm/kasan/book3s_32.c
++++ b/arch/powerpc/mm/kasan/book3s_32.c
+@@ -10,48 +10,51 @@ int __init kasan_init_region(void *start, size_t size)
+ {
+ 	unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
+ 	unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
+-	unsigned long k_cur = k_start;
+-	int k_size = k_end - k_start;
+-	int k_size_base = 1 << (ffs(k_size) - 1);
++	unsigned long k_nobat = k_start;
++	unsigned long k_cur;
++	phys_addr_t phys;
+ 	int ret;
+-	void *block;
+ 
+-	block = memblock_alloc(k_size, k_size_base);
+-
+-	if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
+-		int shift = ffs(k_size - k_size_base);
+-		int k_size_more = shift ? 1 << (shift - 1) : 0;
+-
+-		setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
+-		if (k_size_more >= SZ_128K)
+-			setbat(-1, k_start + k_size_base, __pa(block) + k_size_base,
+-			       k_size_more, PAGE_KERNEL);
+-		if (v_block_mapped(k_start))
+-			k_cur = k_start + k_size_base;
+-		if (v_block_mapped(k_start + k_size_base))
+-			k_cur = k_start + k_size_base + k_size_more;
+-
+-		update_bats();
++	while (k_nobat < k_end) {
++		unsigned int k_size = bat_block_size(k_nobat, k_end);
++		int idx = find_free_bat();
++
++		if (idx == -1)
++			break;
++		if (k_size < SZ_128K)
++			break;
++		phys = memblock_phys_alloc_range(k_size, k_size, 0,
++						 MEMBLOCK_ALLOC_ANYWHERE);
++		if (!phys)
++			break;
++
++		setbat(idx, k_nobat, phys, k_size, PAGE_KERNEL);
++		k_nobat += k_size;
+ 	}
++	if (k_nobat != k_start)
++		update_bats();
+ 
+-	if (!block)
+-		block = memblock_alloc(k_size, PAGE_SIZE);
+-	if (!block)
+-		return -ENOMEM;
++	if (k_nobat < k_end) {
++		phys = memblock_phys_alloc_range(k_end - k_nobat, PAGE_SIZE, 0,
++						 MEMBLOCK_ALLOC_ANYWHERE);
++		if (!phys)
++			return -ENOMEM;
++	}
+ 
+ 	ret = kasan_init_shadow_page_tables(k_start, k_end);
+ 	if (ret)
+ 		return ret;
+ 
+-	kasan_update_early_region(k_start, k_cur, __pte(0));
++	kasan_update_early_region(k_start, k_nobat, __pte(0));
+ 
+-	for (; k_cur < k_end; k_cur += PAGE_SIZE) {
++	for (k_cur = k_nobat; k_cur < k_end; k_cur += PAGE_SIZE) {
+ 		pmd_t *pmd = pmd_off_k(k_cur);
+-		void *va = block + k_cur - k_start;
+-		pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
++		pte_t pte = pfn_pte(PHYS_PFN(phys + k_cur - k_nobat), PAGE_KERNEL);
+ 
+ 		__set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
+ 	}
+ 	flush_tlb_kernel_range(k_start, k_end);
++	memset(kasan_mem_to_shadow(start), 0, k_end - k_start);
++
+ 	return 0;
+ }
+diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
+index 90ce75f0f1e2a..8acf8a611a265 100644
+--- a/arch/powerpc/net/bpf_jit_comp.c
++++ b/arch/powerpc/net/bpf_jit_comp.c
+@@ -23,15 +23,15 @@ static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
+ 	memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
+ }
+ 
+-/* Fix the branch target addresses for subprog calls */
+-static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
+-				       struct codegen_context *ctx, u32 *addrs)
++/* Fix updated addresses (for subprog calls, ldimm64, et al) during extra pass */
++static int bpf_jit_fixup_addresses(struct bpf_prog *fp, u32 *image,
++				   struct codegen_context *ctx, u32 *addrs)
+ {
+ 	const struct bpf_insn *insn = fp->insnsi;
+ 	bool func_addr_fixed;
+ 	u64 func_addr;
+ 	u32 tmp_idx;
+-	int i, ret;
++	int i, j, ret;
+ 
+ 	for (i = 0; i < fp->len; i++) {
+ 		/*
+@@ -66,6 +66,23 @@ static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
+ 			 * of the JITed sequence remains unchanged.
+ 			 */
+ 			ctx->idx = tmp_idx;
++		} else if (insn[i].code == (BPF_LD | BPF_IMM | BPF_DW)) {
++			tmp_idx = ctx->idx;
++			ctx->idx = addrs[i] / 4;
++#ifdef CONFIG_PPC32
++			PPC_LI32(ctx->b2p[insn[i].dst_reg] - 1, (u32)insn[i + 1].imm);
++			PPC_LI32(ctx->b2p[insn[i].dst_reg], (u32)insn[i].imm);
++			for (j = ctx->idx - addrs[i] / 4; j < 4; j++)
++				EMIT(PPC_RAW_NOP());
++#else
++			func_addr = ((u64)(u32)insn[i].imm) | (((u64)(u32)insn[i + 1].imm) << 32);
++			PPC_LI64(b2p[insn[i].dst_reg], func_addr);
++			/* overwrite rest with nops */
++			for (j = ctx->idx - addrs[i] / 4; j < 5; j++)
++				EMIT(PPC_RAW_NOP());
++#endif
++			ctx->idx = tmp_idx;
++			i++;
+ 		}
+ 	}
+ 
+@@ -193,13 +210,13 @@ skip_init_ctx:
+ 		/*
+ 		 * Do not touch the prologue and epilogue as they will remain
+ 		 * unchanged. Only fix the branch target address for subprog
+-		 * calls in the body.
++		 * calls in the body, and ldimm64 instructions.
+ 		 *
+ 		 * This does not change the offsets and lengths of the subprog
+ 		 * call instruction sequences and hence, the size of the JITed
+ 		 * image as well.
+ 		 */
+-		bpf_jit_fixup_subprog_calls(fp, code_base, &cgctx, addrs);
++		bpf_jit_fixup_addresses(fp, code_base, &cgctx, addrs);
+ 
+ 		/* There is no need to perform the usual passes. */
+ 		goto skip_codegen_passes;
+diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
+index 8a4faa05f9e41..0448b0c008835 100644
+--- a/arch/powerpc/net/bpf_jit_comp32.c
++++ b/arch/powerpc/net/bpf_jit_comp32.c
+@@ -191,6 +191,9 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
+ 
+ 	if (image && rel < 0x2000000 && rel >= -0x2000000) {
+ 		PPC_BL_ABS(func);
++		EMIT(PPC_RAW_NOP());
++		EMIT(PPC_RAW_NOP());
++		EMIT(PPC_RAW_NOP());
+ 	} else {
+ 		/* Load function address into r0 */
+ 		EMIT(PPC_RAW_LIS(_R0, IMM_H(func)));
+@@ -289,6 +292,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 		bool func_addr_fixed;
+ 		u64 func_addr;
+ 		u32 true_cond;
++		u32 tmp_idx;
++		int j;
+ 
+ 		/*
+ 		 * addrs[] maps a BPF bytecode address into a real offset from
+@@ -836,8 +841,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 		 * 16 byte instruction that uses two 'struct bpf_insn'
+ 		 */
+ 		case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
++			tmp_idx = ctx->idx;
+ 			PPC_LI32(dst_reg_h, (u32)insn[i + 1].imm);
+ 			PPC_LI32(dst_reg, (u32)insn[i].imm);
++			/* padding to allow full 4 instructions for later patching */
++			for (j = ctx->idx - tmp_idx; j < 4; j++)
++				EMIT(PPC_RAW_NOP());
+ 			/* Adjust for two bpf instructions */
+ 			addrs[++i] = ctx->idx * 4;
+ 			break;
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 8571aafcc9e1e..a26a782e8b78e 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -318,6 +318,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 		u64 imm64;
+ 		u32 true_cond;
+ 		u32 tmp_idx;
++		int j;
+ 
+ 		/*
+ 		 * addrs[] maps a BPF bytecode address into a real offset from
+@@ -632,17 +633,21 @@ bpf_alu32_trunc:
+ 				EMIT(PPC_RAW_MR(dst_reg, b2p[TMP_REG_1]));
+ 				break;
+ 			case 64:
+-				/*
+-				 * Way easier and faster(?) to store the value
+-				 * into stack and then use ldbrx
+-				 *
+-				 * ctx->seen will be reliable in pass2, but
+-				 * the instructions generated will remain the
+-				 * same across all passes
+-				 */
++				/* Store the value to stack and then use byte-reverse loads */
+ 				PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
+ 				EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx)));
+-				EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
++				if (cpu_has_feature(CPU_FTR_ARCH_206)) {
++					EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
++				} else {
++					EMIT(PPC_RAW_LWBRX(dst_reg, 0, b2p[TMP_REG_1]));
++					if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
++						EMIT(PPC_RAW_SLDI(dst_reg, dst_reg, 32));
++					EMIT(PPC_RAW_LI(b2p[TMP_REG_2], 4));
++					EMIT(PPC_RAW_LWBRX(b2p[TMP_REG_2], b2p[TMP_REG_2], b2p[TMP_REG_1]));
++					if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
++						EMIT(PPC_RAW_SLDI(b2p[TMP_REG_2], b2p[TMP_REG_2], 32));
++					EMIT(PPC_RAW_OR(dst_reg, dst_reg, b2p[TMP_REG_2]));
++				}
+ 				break;
+ 			}
+ 			break;
+@@ -806,9 +811,13 @@ emit_clear:
+ 		case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
+ 			imm64 = ((u64)(u32) insn[i].imm) |
+ 				    (((u64)(u32) insn[i+1].imm) << 32);
++			tmp_idx = ctx->idx;
++			PPC_LI64(dst_reg, imm64);
++			/* padding to allow full 5 instructions for later patching */
++			for (j = ctx->idx - tmp_idx; j < 5; j++)
++				EMIT(PPC_RAW_NOP());
+ 			/* Adjust for two bpf instructions */
+ 			addrs[++i] = ctx->idx * 4;
+-			PPC_LI64(dst_reg, imm64);
+ 			break;
+ 
+ 		/*
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index bef6b1abce702..e78de70509472 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1326,9 +1326,20 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		 * Otherwise provide a warning if there is PMI pending, but
+ 		 * no counter is found overflown.
+ 		 */
+-		if (any_pmc_overflown(cpuhw))
+-			clear_pmi_irq_pending();
+-		else
++		if (any_pmc_overflown(cpuhw)) {
++			/*
++			 * Since power_pmu_disable runs under local_irq_save, it
++			 * could happen that code hits a PMC overflow without PMI
++			 * pending in paca. Hence only clear PMI pending if it was
++			 * set.
++			 *
++			 * If a PMI is pending, then MSR[EE] must be disabled (because
++			 * the masked PMI handler disabling EE). So it is safe to
++			 * call clear_pmi_irq_pending().
++			 */
++			if (pmi_irq_pending())
++				clear_pmi_irq_pending();
++		} else
+ 			WARN_ON(pmi_irq_pending());
+ 
+ 		val = mmcra = cpuhw->mmcr.mmcra;
+diff --git a/arch/s390/hypfs/hypfs_vm.c b/arch/s390/hypfs/hypfs_vm.c
+index 33f973ff97442..e8f15dbb89d02 100644
+--- a/arch/s390/hypfs/hypfs_vm.c
++++ b/arch/s390/hypfs/hypfs_vm.c
+@@ -20,6 +20,7 @@
+ 
+ static char local_guest[] = "        ";
+ static char all_guests[] = "*       ";
++static char *all_groups = all_guests;
+ static char *guest_query;
+ 
+ struct diag2fc_data {
+@@ -62,10 +63,11 @@ static int diag2fc(int size, char* query, void *addr)
+ 
+ 	memcpy(parm_list.userid, query, NAME_LEN);
+ 	ASCEBC(parm_list.userid, NAME_LEN);
+-	parm_list.addr = (unsigned long) addr ;
++	memcpy(parm_list.aci_grp, all_groups, NAME_LEN);
++	ASCEBC(parm_list.aci_grp, NAME_LEN);
++	parm_list.addr = (unsigned long)addr;
+ 	parm_list.size = size;
+ 	parm_list.fmt = 0x02;
+-	memset(parm_list.aci_grp, 0x40, NAME_LEN);
+ 	rc = -1;
+ 
+ 	diag_stat_inc(DIAG_STAT_X2FC);
+diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
+index d52d85367bf73..b032e556eeb71 100644
+--- a/arch/s390/kernel/module.c
++++ b/arch/s390/kernel/module.c
+@@ -33,7 +33,7 @@
+ #define DEBUGP(fmt , ...)
+ #endif
+ 
+-#define PLT_ENTRY_SIZE 20
++#define PLT_ENTRY_SIZE 22
+ 
+ void *module_alloc(unsigned long size)
+ {
+@@ -341,27 +341,26 @@ static int apply_rela(Elf_Rela *rela, Elf_Addr base, Elf_Sym *symtab,
+ 	case R_390_PLTOFF32:	/* 32 bit offset from GOT to PLT. */
+ 	case R_390_PLTOFF64:	/* 16 bit offset from GOT to PLT. */
+ 		if (info->plt_initialized == 0) {
+-			unsigned int insn[5];
+-			unsigned int *ip = me->core_layout.base +
+-					   me->arch.plt_offset +
+-					   info->plt_offset;
+-
+-			insn[0] = 0x0d10e310;	/* basr 1,0  */
+-			insn[1] = 0x100a0004;	/* lg	1,10(1) */
++			unsigned char insn[PLT_ENTRY_SIZE];
++			char *plt_base;
++			char *ip;
++
++			plt_base = me->core_layout.base + me->arch.plt_offset;
++			ip = plt_base + info->plt_offset;
++			*(int *)insn = 0x0d10e310;	/* basr 1,0  */
++			*(int *)&insn[4] = 0x100c0004;	/* lg	1,12(1) */
+ 			if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_disable) {
+-				unsigned int *ij;
+-				ij = me->core_layout.base +
+-					me->arch.plt_offset +
+-					me->arch.plt_size - PLT_ENTRY_SIZE;
+-				insn[2] = 0xa7f40000 +	/* j __jump_r1 */
+-					(unsigned int)(u16)
+-					(((unsigned long) ij - 8 -
+-					  (unsigned long) ip) / 2);
++				char *jump_r1;
++
++				jump_r1 = plt_base + me->arch.plt_size -
++					PLT_ENTRY_SIZE;
++				/* brcl	0xf,__jump_r1 */
++				*(short *)&insn[8] = 0xc0f4;
++				*(int *)&insn[10] = (jump_r1 - (ip + 8)) / 2;
+ 			} else {
+-				insn[2] = 0x07f10000;	/* br %r1 */
++				*(int *)&insn[8] = 0x07f10000;	/* br %r1 */
+ 			}
+-			insn[3] = (unsigned int) (val >> 32);
+-			insn[4] = (unsigned int) val;
++			*(long *)&insn[14] = val;
+ 
+ 			write(ip, insn, sizeof(insn));
+ 			info->plt_initialized = 1;
+diff --git a/arch/s390/kernel/nmi.c b/arch/s390/kernel/nmi.c
+index 20f8e1868853f..a50f2ff1b00e8 100644
+--- a/arch/s390/kernel/nmi.c
++++ b/arch/s390/kernel/nmi.c
+@@ -273,7 +273,14 @@ static int notrace s390_validate_registers(union mci mci, int umode)
+ 		/* Validate vector registers */
+ 		union ctlreg0 cr0;
+ 
+-		if (!mci.vr) {
++		/*
++		 * The vector validity must only be checked if not running a
++		 * KVM guest. For KVM guests the machine check is forwarded by
++		 * KVM and it is the responsibility of the guest to take
++		 * appropriate actions. The host vector or FPU values have been
++		 * saved by KVM and will be restored by KVM.
++		 */
++		if (!mci.vr && !test_cpu_flag(CIF_MCCK_GUEST)) {
+ 			/*
+ 			 * Vector registers can't be restored. If the kernel
+ 			 * currently uses vector registers the system is
+@@ -316,11 +323,21 @@ static int notrace s390_validate_registers(union mci mci, int umode)
+ 	if (cr2.gse) {
+ 		if (!mci.gs) {
+ 			/*
+-			 * Guarded storage register can't be restored and
+-			 * the current processes uses guarded storage.
+-			 * It has to be terminated.
++			 * 2 cases:
++			 * - machine check in kernel or userspace
++			 * - machine check while running SIE (KVM guest)
++			 * For kernel or userspace the userspace values of
++			 * guarded storage control can not be recreated, the
++			 * process must be terminated.
++			 * For SIE the guest values of guarded storage can not
++			 * be recreated. This is either due to a bug or due to
++			 * GS being disabled in the guest. The guest will be
++			 * notified by KVM code and the guests machine check
++			 * handling must take care of this.  The host values
++			 * are saved by KVM and are not affected.
+ 			 */
+-			kill_task = 1;
++			if (!test_cpu_flag(CIF_MCCK_GUEST))
++				kill_task = 1;
+ 		} else {
+ 			load_gs_cb((struct gs_cb *)mcesa->guarded_storage_save_area);
+ 		}
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 1e33c75ffa260..18d825a39032a 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -6242,6 +6242,19 @@ __init int intel_pmu_init(void)
+ 			pmu->num_counters = x86_pmu.num_counters;
+ 			pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
+ 		}
++
++		/*
++		 * Quirk: For some Alder Lake machine, when all E-cores are disabled in
++		 * a BIOS, the leaf 0xA will enumerate all counters of P-cores. However,
++		 * the X86_FEATURE_HYBRID_CPU is still set. The above codes will
++		 * mistakenly add extra counters for P-cores. Correct the number of
++		 * counters here.
++		 */
++		if ((pmu->num_counters > 8) || (pmu->num_counters_fixed > 4)) {
++			pmu->num_counters = x86_pmu.num_counters;
++			pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
++		}
++
+ 		pmu->max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters);
+ 		pmu->unconstrained = (struct event_constraint)
+ 					__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 3660f698fb2aa..ed869443efb21 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -5482,7 +5482,7 @@ static struct intel_uncore_type icx_uncore_imc = {
+ 	.fixed_ctr_bits	= 48,
+ 	.fixed_ctr	= SNR_IMC_MMIO_PMON_FIXED_CTR,
+ 	.fixed_ctl	= SNR_IMC_MMIO_PMON_FIXED_CTL,
+-	.event_descs	= hswep_uncore_imc_events,
++	.event_descs	= snr_uncore_imc_events,
+ 	.perf_ctr	= SNR_IMC_MMIO_PMON_CTR0,
+ 	.event_ctl	= SNR_IMC_MMIO_PMON_CTL0,
+ 	.event_mask	= SNBEP_PMON_RAW_EVENT_MASK,
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 59fc339ba5282..5d3645b325e27 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1497,6 +1497,7 @@ struct kvm_x86_ops {
+ };
+ 
+ struct kvm_x86_nested_ops {
++	void (*leave_nested)(struct kvm_vcpu *vcpu);
+ 	int (*check_events)(struct kvm_vcpu *vcpu);
+ 	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
+ 	void (*triple_fault)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index fc85eb17cb6d0..c757ba7605fbf 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -401,7 +401,7 @@ static void threshold_restart_bank(void *_tr)
+ 	u32 hi, lo;
+ 
+ 	/* sysfs write might race against an offline operation */
+-	if (this_cpu_read(threshold_banks))
++	if (!this_cpu_read(threshold_banks) && !tr->set_lvt_off)
+ 		return;
+ 
+ 	rdmsr(tr->b->address, lo, hi);
+diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
+index bb9a46a804bf2..baafbb37be678 100644
+--- a/arch/x86/kernel/cpu/mce/intel.c
++++ b/arch/x86/kernel/cpu/mce/intel.c
+@@ -486,6 +486,7 @@ static void intel_ppin_init(struct cpuinfo_x86 *c)
+ 	case INTEL_FAM6_BROADWELL_X:
+ 	case INTEL_FAM6_SKYLAKE_X:
+ 	case INTEL_FAM6_ICELAKE_X:
++	case INTEL_FAM6_ICELAKE_D:
+ 	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+ 	case INTEL_FAM6_XEON_PHI_KNL:
+ 	case INTEL_FAM6_XEON_PHI_KNM:
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index e5e597fc3c86f..add8f58d686e3 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -113,6 +113,7 @@ static int kvm_cpuid_check_equal(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2
+ 		orig = &vcpu->arch.cpuid_entries[i];
+ 		if (e2[i].function != orig->function ||
+ 		    e2[i].index != orig->index ||
++		    e2[i].flags != orig->flags ||
+ 		    e2[i].eax != orig->eax || e2[i].ebx != orig->ebx ||
+ 		    e2[i].ecx != orig->ecx || e2[i].edx != orig->edx)
+ 			return -EINVAL;
+@@ -176,10 +177,26 @@ void kvm_update_pv_runtime(struct kvm_vcpu *vcpu)
+ 		vcpu->arch.pv_cpuid.features = best->eax;
+ }
+ 
++/*
++ * Calculate guest's supported XCR0 taking into account guest CPUID data and
++ * supported_xcr0 (comprised of host configuration and KVM_SUPPORTED_XCR0).
++ */
++static u64 cpuid_get_supported_xcr0(struct kvm_cpuid_entry2 *entries, int nent)
++{
++	struct kvm_cpuid_entry2 *best;
++
++	best = cpuid_entry2_find(entries, nent, 0xd, 0);
++	if (!best)
++		return 0;
++
++	return (best->eax | ((u64)best->edx << 32)) & supported_xcr0;
++}
++
+ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *entries,
+ 				       int nent)
+ {
+ 	struct kvm_cpuid_entry2 *best;
++	u64 guest_supported_xcr0 = cpuid_get_supported_xcr0(entries, nent);
+ 
+ 	best = cpuid_entry2_find(entries, nent, 1, 0);
+ 	if (best) {
+@@ -218,6 +235,21 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
+ 					   vcpu->arch.ia32_misc_enable_msr &
+ 					   MSR_IA32_MISC_ENABLE_MWAIT);
+ 	}
++
++	/*
++	 * Bits 127:0 of the allowed SECS.ATTRIBUTES (CPUID.0x12.0x1) enumerate
++	 * the supported XSAVE Feature Request Mask (XFRM), i.e. the enclave's
++	 * requested XCR0 value.  The enclave's XFRM must be a subset of XCRO
++	 * at the time of EENTER, thus adjust the allowed XFRM by the guest's
++	 * supported XCR0.  Similar to XCR0 handling, FP and SSE are forced to
++	 * '1' even on CPUs that don't support XSAVE.
++	 */
++	best = cpuid_entry2_find(entries, nent, 0x12, 0x1);
++	if (best) {
++		best->ecx &= guest_supported_xcr0 & 0xffffffff;
++		best->edx &= guest_supported_xcr0 >> 32;
++		best->ecx |= XFEATURE_MASK_FPSSE;
++	}
+ }
+ 
+ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
+@@ -241,27 +273,8 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+ 		kvm_apic_set_version(vcpu);
+ 	}
+ 
+-	best = kvm_find_cpuid_entry(vcpu, 0xD, 0);
+-	if (!best)
+-		vcpu->arch.guest_supported_xcr0 = 0;
+-	else
+-		vcpu->arch.guest_supported_xcr0 =
+-			(best->eax | ((u64)best->edx << 32)) & supported_xcr0;
+-
+-	/*
+-	 * Bits 127:0 of the allowed SECS.ATTRIBUTES (CPUID.0x12.0x1) enumerate
+-	 * the supported XSAVE Feature Request Mask (XFRM), i.e. the enclave's
+-	 * requested XCR0 value.  The enclave's XFRM must be a subset of XCRO
+-	 * at the time of EENTER, thus adjust the allowed XFRM by the guest's
+-	 * supported XCR0.  Similar to XCR0 handling, FP and SSE are forced to
+-	 * '1' even on CPUs that don't support XSAVE.
+-	 */
+-	best = kvm_find_cpuid_entry(vcpu, 0x12, 0x1);
+-	if (best) {
+-		best->ecx &= vcpu->arch.guest_supported_xcr0 & 0xffffffff;
+-		best->edx &= vcpu->arch.guest_supported_xcr0 >> 32;
+-		best->ecx |= XFEATURE_MASK_FPSSE;
+-	}
++	vcpu->arch.guest_supported_xcr0 =
++		cpuid_get_supported_xcr0(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent);
+ 
+ 	kvm_update_pv_runtime(vcpu);
+ 
+@@ -326,8 +339,14 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
+ 	 * KVM_SET_CPUID{,2} again. To support this legacy behavior, check
+ 	 * whether the supplied CPUID data is equal to what's already set.
+ 	 */
+-	if (vcpu->arch.last_vmentry_cpu != -1)
+-		return kvm_cpuid_check_equal(vcpu, e2, nent);
++	if (vcpu->arch.last_vmentry_cpu != -1) {
++		r = kvm_cpuid_check_equal(vcpu, e2, nent);
++		if (r)
++			return r;
++
++		kvfree(e2);
++		return 0;
++	}
+ 
+ 	r = kvm_check_cpuid(e2, nent);
+ 	if (r)
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 7c009867d6f23..e8e383fbe8868 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2623,7 +2623,7 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
+ 	kvm_apic_set_version(vcpu);
+ 
+ 	apic_update_ppr(apic);
+-	hrtimer_cancel(&apic->lapic_timer.timer);
++	cancel_apic_timer(apic);
+ 	apic->lapic_timer.expired_tscdeadline = 0;
+ 	apic_update_lvtt(apic);
+ 	apic_manage_nmi_watchdog(apic, kvm_lapic_get_reg(apic, APIC_LVT0));
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index f8b7bc04b3e7a..a67f8bee3adc3 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -964,9 +964,9 @@ void svm_free_nested(struct vcpu_svm *svm)
+ /*
+  * Forcibly leave nested mode in order to be able to reset the VCPU later on.
+  */
+-void svm_leave_nested(struct vcpu_svm *svm)
++void svm_leave_nested(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_vcpu *vcpu = &svm->vcpu;
++	struct vcpu_svm *svm = to_svm(vcpu);
+ 
+ 	if (is_guest_mode(vcpu)) {
+ 		svm->nested.nested_run_pending = 0;
+@@ -1345,7 +1345,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 
+ 	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) {
+-		svm_leave_nested(svm);
++		svm_leave_nested(vcpu);
+ 		svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
+ 		return 0;
+ 	}
+@@ -1410,7 +1410,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 	 */
+ 
+ 	if (is_guest_mode(vcpu))
+-		svm_leave_nested(svm);
++		svm_leave_nested(vcpu);
+ 	else
+ 		svm->nested.vmcb02.ptr->save = svm->vmcb01.ptr->save;
+ 
+@@ -1464,6 +1464,7 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
+ }
+ 
+ struct kvm_x86_nested_ops svm_nested_ops = {
++	.leave_nested = svm_leave_nested,
+ 	.check_events = svm_check_nested_events,
+ 	.triple_fault = nested_svm_triple_fault,
+ 	.get_nested_state_pages = svm_get_nested_state_pages,
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 5151efa424acb..3efada37272c0 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -290,7 +290,7 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+ 
+ 	if ((old_efer & EFER_SVME) != (efer & EFER_SVME)) {
+ 		if (!(efer & EFER_SVME)) {
+-			svm_leave_nested(svm);
++			svm_leave_nested(vcpu);
+ 			svm_set_gif(svm, true);
+ 			/* #GP intercept is still needed for vmware backdoor */
+ 			if (!enable_vmware_backdoor)
+@@ -312,7 +312,11 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+ 				return ret;
+ 			}
+ 
+-			if (svm_gp_erratum_intercept)
++			/*
++			 * Never intercept #GP for SEV guests, KVM can't
++			 * decrypt guest memory to workaround the erratum.
++			 */
++			if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
+ 				set_exception_intercept(svm, GP_VECTOR);
+ 		}
+ 	}
+@@ -1238,9 +1242,10 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
+ 	 * Guest access to VMware backdoor ports could legitimately
+ 	 * trigger #GP because of TSS I/O permission bitmap.
+ 	 * We intercept those #GP and allow access to them anyway
+-	 * as VMware does.
++	 * as VMware does.  Don't intercept #GP for SEV guests as KVM can't
++	 * decrypt guest memory to decode the faulting instruction.
+ 	 */
+-	if (enable_vmware_backdoor)
++	if (enable_vmware_backdoor && !sev_guest(vcpu->kvm))
+ 		set_exception_intercept(svm, GP_VECTOR);
+ 
+ 	svm_set_intercept(svm, INTERCEPT_INTR);
+@@ -2301,10 +2306,6 @@ static int gp_interception(struct kvm_vcpu *vcpu)
+ 	if (error_code)
+ 		goto reinject;
+ 
+-	/* All SVM instructions expect page aligned RAX */
+-	if (svm->vmcb->save.rax & ~PAGE_MASK)
+-		goto reinject;
+-
+ 	/* Decode the instruction for usage later */
+ 	if (x86_decode_emulated_instruction(vcpu, 0, NULL, 0) != EMULATION_OK)
+ 		goto reinject;
+@@ -2322,8 +2323,13 @@ static int gp_interception(struct kvm_vcpu *vcpu)
+ 		if (!is_guest_mode(vcpu))
+ 			return kvm_emulate_instruction(vcpu,
+ 				EMULTYPE_VMWARE_GP | EMULTYPE_NO_DECODE);
+-	} else
++	} else {
++		/* All SVM instructions expect page aligned RAX */
++		if (svm->vmcb->save.rax & ~PAGE_MASK)
++			goto reinject;
++
+ 		return emulate_svm_instr(vcpu, opcode);
++	}
+ 
+ reinject:
+ 	kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
+@@ -4464,8 +4470,13 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
+ 	bool smep, smap, is_user;
+ 	unsigned long cr4;
+ 
++	/* Emulation is always possible when KVM has access to all guest state. */
++	if (!sev_guest(vcpu->kvm))
++		return true;
++
+ 	/*
+-	 * When the guest is an SEV-ES guest, emulation is not possible.
++	 * Emulation is impossible for SEV-ES guests as KVM doesn't have access
++	 * to guest register state.
+ 	 */
+ 	if (sev_es_guest(vcpu->kvm))
+ 		return false;
+@@ -4513,21 +4524,11 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
+ 	if (likely(!insn || insn_len))
+ 		return true;
+ 
+-	/*
+-	 * If RIP is invalid, go ahead with emulation which will cause an
+-	 * internal error exit.
+-	 */
+-	if (!kvm_vcpu_gfn_to_memslot(vcpu, kvm_rip_read(vcpu) >> PAGE_SHIFT))
+-		return true;
+-
+ 	cr4 = kvm_read_cr4(vcpu);
+ 	smep = cr4 & X86_CR4_SMEP;
+ 	smap = cr4 & X86_CR4_SMAP;
+ 	is_user = svm_get_cpl(vcpu) == 3;
+ 	if (smap && (!smep || is_user)) {
+-		if (!sev_guest(vcpu->kvm))
+-			return true;
+-
+ 		pr_err_ratelimited("KVM: SEV Guest triggered AMD Erratum 1096\n");
+ 		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
+ 	}
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 1c7306c370fa3..1b460e539926b 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -470,7 +470,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
+ 
+ int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
+ 			 u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun);
+-void svm_leave_nested(struct vcpu_svm *svm);
++void svm_leave_nested(struct kvm_vcpu *vcpu);
+ void svm_free_nested(struct vcpu_svm *svm);
+ int svm_allocate_nested(struct vcpu_svm *svm);
+ int nested_svm_vmrun(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
+index ba6f99f584ac3..09fac0ddac8bd 100644
+--- a/arch/x86/kvm/vmx/evmcs.c
++++ b/arch/x86/kvm/vmx/evmcs.c
+@@ -12,8 +12,6 @@
+ 
+ DEFINE_STATIC_KEY_FALSE(enable_evmcs);
+ 
+-#if IS_ENABLED(CONFIG_HYPERV)
+-
+ #define EVMCS1_OFFSET(x) offsetof(struct hv_enlightened_vmcs, x)
+ #define EVMCS1_FIELD(number, name, clean_field)[ROL16(number, 6)] = \
+ 		{EVMCS1_OFFSET(name), clean_field}
+@@ -296,6 +294,7 @@ const struct evmcs_field vmcs_field_to_evmcs_1[] = {
+ };
+ const unsigned int nr_evmcs_1_fields = ARRAY_SIZE(vmcs_field_to_evmcs_1);
+ 
++#if IS_ENABLED(CONFIG_HYPERV)
+ __init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf)
+ {
+ 	vmcs_conf->pin_based_exec_ctrl &= ~EVMCS1_UNSUPPORTED_PINCTRL;
+diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
+index 16731d2cf231b..6255fa7167720 100644
+--- a/arch/x86/kvm/vmx/evmcs.h
++++ b/arch/x86/kvm/vmx/evmcs.h
+@@ -63,8 +63,6 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);
+ #define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
+ #define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING)
+ 
+-#if IS_ENABLED(CONFIG_HYPERV)
+-
+ struct evmcs_field {
+ 	u16 offset;
+ 	u16 clean_field;
+@@ -73,26 +71,56 @@ struct evmcs_field {
+ extern const struct evmcs_field vmcs_field_to_evmcs_1[];
+ extern const unsigned int nr_evmcs_1_fields;
+ 
+-static __always_inline int get_evmcs_offset(unsigned long field,
+-					    u16 *clean_field)
++static __always_inline int evmcs_field_offset(unsigned long field,
++					      u16 *clean_field)
+ {
+ 	unsigned int index = ROL16(field, 6);
+ 	const struct evmcs_field *evmcs_field;
+ 
+-	if (unlikely(index >= nr_evmcs_1_fields)) {
+-		WARN_ONCE(1, "KVM: accessing unsupported EVMCS field %lx\n",
+-			  field);
++	if (unlikely(index >= nr_evmcs_1_fields))
+ 		return -ENOENT;
+-	}
+ 
+ 	evmcs_field = &vmcs_field_to_evmcs_1[index];
+ 
++	/*
++	 * Use offset=0 to detect holes in eVMCS. This offset belongs to
++	 * 'revision_id' but this field has no encoding and is supposed to
++	 * be accessed directly.
++	 */
++	if (unlikely(!evmcs_field->offset))
++		return -ENOENT;
++
+ 	if (clean_field)
+ 		*clean_field = evmcs_field->clean_field;
+ 
+ 	return evmcs_field->offset;
+ }
+ 
++static inline u64 evmcs_read_any(struct hv_enlightened_vmcs *evmcs,
++				 unsigned long field, u16 offset)
++{
++	/*
++	 * vmcs12_read_any() doesn't care whether the supplied structure
++	 * is 'struct vmcs12' or 'struct hv_enlightened_vmcs' as it takes
++	 * the exact offset of the required field, use it for convenience
++	 * here.
++	 */
++	return vmcs12_read_any((void *)evmcs, field, offset);
++}
++
++#if IS_ENABLED(CONFIG_HYPERV)
++
++static __always_inline int get_evmcs_offset(unsigned long field,
++					    u16 *clean_field)
++{
++	int offset = evmcs_field_offset(field, clean_field);
++
++	WARN_ONCE(offset < 0, "KVM: accessing unsupported EVMCS field %lx\n",
++		  field);
++
++	return offset;
++}
++
+ static __always_inline void evmcs_write64(unsigned long field, u64 value)
+ {
+ 	u16 clean_field;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 9c941535f78c0..c605c2c01394b 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -7,6 +7,7 @@
+ #include <asm/mmu_context.h>
+ 
+ #include "cpuid.h"
++#include "evmcs.h"
+ #include "hyperv.h"
+ #include "mmu.h"
+ #include "nested.h"
+@@ -5074,27 +5075,49 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
+ 	if (!nested_vmx_check_permission(vcpu))
+ 		return 1;
+ 
+-	/*
+-	 * In VMX non-root operation, when the VMCS-link pointer is INVALID_GPA,
+-	 * any VMREAD sets the ALU flags for VMfailInvalid.
+-	 */
+-	if (vmx->nested.current_vmptr == INVALID_GPA ||
+-	    (is_guest_mode(vcpu) &&
+-	     get_vmcs12(vcpu)->vmcs_link_pointer == INVALID_GPA))
+-		return nested_vmx_failInvalid(vcpu);
+-
+ 	/* Decode instruction info and find the field to read */
+ 	field = kvm_register_read(vcpu, (((instr_info) >> 28) & 0xf));
+ 
+-	offset = vmcs_field_to_offset(field);
+-	if (offset < 0)
+-		return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
++	if (!evmptr_is_valid(vmx->nested.hv_evmcs_vmptr)) {
++		/*
++		 * In VMX non-root operation, when the VMCS-link pointer is INVALID_GPA,
++		 * any VMREAD sets the ALU flags for VMfailInvalid.
++		 */
++		if (vmx->nested.current_vmptr == INVALID_GPA ||
++		    (is_guest_mode(vcpu) &&
++		     get_vmcs12(vcpu)->vmcs_link_pointer == INVALID_GPA))
++			return nested_vmx_failInvalid(vcpu);
+ 
+-	if (!is_guest_mode(vcpu) && is_vmcs12_ext_field(field))
+-		copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
++		offset = get_vmcs12_field_offset(field);
++		if (offset < 0)
++			return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
++
++		if (!is_guest_mode(vcpu) && is_vmcs12_ext_field(field))
++			copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
++
++		/* Read the field, zero-extended to a u64 value */
++		value = vmcs12_read_any(vmcs12, field, offset);
++	} else {
++		/*
++		 * Hyper-V TLFS (as of 6.0b) explicitly states, that while an
++		 * enlightened VMCS is active VMREAD/VMWRITE instructions are
++		 * unsupported. Unfortunately, certain versions of Windows 11
++		 * don't comply with this requirement which is not enforced in
++		 * genuine Hyper-V. Allow VMREAD from an enlightened VMCS as a
++		 * workaround, as misbehaving guests will panic on VM-Fail.
++		 * Note, enlightened VMCS is incompatible with shadow VMCS so
++		 * all VMREADs from L2 should go to L1.
++		 */
++		if (WARN_ON_ONCE(is_guest_mode(vcpu)))
++			return nested_vmx_failInvalid(vcpu);
+ 
+-	/* Read the field, zero-extended to a u64 value */
+-	value = vmcs12_read_any(vmcs12, field, offset);
++		offset = evmcs_field_offset(field, NULL);
++		if (offset < 0)
++			return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
++
++		/* Read the field, zero-extended to a u64 value */
++		value = evmcs_read_any(vmx->nested.hv_evmcs, field, offset);
++	}
+ 
+ 	/*
+ 	 * Now copy part of this value to register or memory, as requested.
+@@ -5189,7 +5212,7 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
+ 
+ 	field = kvm_register_read(vcpu, (((instr_info) >> 28) & 0xf));
+ 
+-	offset = vmcs_field_to_offset(field);
++	offset = get_vmcs12_field_offset(field);
+ 	if (offset < 0)
+ 		return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
+ 
+@@ -6435,7 +6458,7 @@ static u64 nested_vmx_calc_vmcs_enum_msr(void)
+ 	max_idx = 0;
+ 	for (i = 0; i < nr_vmcs12_fields; i++) {
+ 		/* The vmcs12 table is very, very sparsely populated. */
+-		if (!vmcs_field_to_offset_table[i])
++		if (!vmcs12_field_offsets[i])
+ 			continue;
+ 
+ 		idx = vmcs_field_index(VMCS12_IDX_TO_ENC(i));
+@@ -6744,6 +6767,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
+ }
+ 
+ struct kvm_x86_nested_ops vmx_nested_ops = {
++	.leave_nested = vmx_leave_nested,
+ 	.check_events = vmx_check_nested_events,
+ 	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+ 	.triple_fault = nested_vmx_triple_fault,
+diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c
+index cab6ba7a5005e..2251b60920f81 100644
+--- a/arch/x86/kvm/vmx/vmcs12.c
++++ b/arch/x86/kvm/vmx/vmcs12.c
+@@ -8,7 +8,7 @@
+ 	FIELD(number, name),						\
+ 	[ROL16(number##_HIGH, 6)] = VMCS12_OFFSET(name) + sizeof(u32)
+ 
+-const unsigned short vmcs_field_to_offset_table[] = {
++const unsigned short vmcs12_field_offsets[] = {
+ 	FIELD(VIRTUAL_PROCESSOR_ID, virtual_processor_id),
+ 	FIELD(POSTED_INTR_NV, posted_intr_nv),
+ 	FIELD(GUEST_ES_SELECTOR, guest_es_selector),
+@@ -151,4 +151,4 @@ const unsigned short vmcs_field_to_offset_table[] = {
+ 	FIELD(HOST_RSP, host_rsp),
+ 	FIELD(HOST_RIP, host_rip),
+ };
+-const unsigned int nr_vmcs12_fields = ARRAY_SIZE(vmcs_field_to_offset_table);
++const unsigned int nr_vmcs12_fields = ARRAY_SIZE(vmcs12_field_offsets);
+diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
+index 2a45f026ee11d..746129ddd5ae0 100644
+--- a/arch/x86/kvm/vmx/vmcs12.h
++++ b/arch/x86/kvm/vmx/vmcs12.h
+@@ -361,10 +361,10 @@ static inline void vmx_check_vmcs12_offsets(void)
+ 	CHECK_OFFSET(guest_pml_index, 996);
+ }
+ 
+-extern const unsigned short vmcs_field_to_offset_table[];
++extern const unsigned short vmcs12_field_offsets[];
+ extern const unsigned int nr_vmcs12_fields;
+ 
+-static inline short vmcs_field_to_offset(unsigned long field)
++static inline short get_vmcs12_field_offset(unsigned long field)
+ {
+ 	unsigned short offset;
+ 	unsigned int index;
+@@ -377,7 +377,7 @@ static inline short vmcs_field_to_offset(unsigned long field)
+ 		return -ENOENT;
+ 
+ 	index = array_index_nospec(index, nr_vmcs12_fields);
+-	offset = vmcs_field_to_offset_table[index];
++	offset = vmcs12_field_offsets[index];
+ 	if (offset == 0)
+ 		return -ENOENT;
+ 	return offset;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 4acf43111e9c2..0714fa0e7ede0 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3508,6 +3508,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		if (data & ~supported_xss)
+ 			return 1;
+ 		vcpu->arch.ia32_xss = data;
++		kvm_update_cpuid_runtime(vcpu);
+ 		break;
+ 	case MSR_SMI_COUNT:
+ 		if (!msr_info->host_initiated)
+@@ -4784,8 +4785,10 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
+ 		vcpu->arch.apic->sipi_vector = events->sipi_vector;
+ 
+ 	if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
+-		if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm)
++		if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm) {
++			kvm_x86_ops.nested_ops->leave_nested(vcpu);
+ 			kvm_smm_changed(vcpu, events->smi.smm);
++		}
+ 
+ 		vcpu->arch.smi_pending = events->smi.pending;
+ 
+@@ -11062,7 +11065,8 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ 
+ 		vcpu->arch.msr_misc_features_enables = 0;
+ 
+-		vcpu->arch.xcr0 = XFEATURE_MASK_FP;
++		__kvm_set_xcr(vcpu, 0, XFEATURE_MASK_FP);
++		__kvm_set_msr(vcpu, MSR_IA32_XSS, 0, true);
+ 	}
+ 
+ 	/* All GPRs except RDX (handled below) are zeroed on RESET/INIT. */
+@@ -11079,8 +11083,6 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ 	cpuid_0x1 = kvm_find_cpuid_entry(vcpu, 1, 0);
+ 	kvm_rdx_write(vcpu, cpuid_0x1 ? cpuid_0x1->eax : 0x600);
+ 
+-	vcpu->arch.ia32_xss = 0;
+-
+ 	static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
+ 
+ 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 2edd86649468f..615a76d700194 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -353,8 +353,8 @@ static void pci_fixup_video(struct pci_dev *pdev)
+ 		}
+ 	}
+ }
+-DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
+-				PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
++DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_ANY_ID, PCI_ANY_ID,
++			       PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);
+ 
+ 
+ static const struct dmi_system_id msi_k8t_dmi_table[] = {
+diff --git a/block/bio.c b/block/bio.c
+index 15ab0d6d1c06e..99cad261ec531 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -569,7 +569,8 @@ static void bio_truncate(struct bio *bio, unsigned new_size)
+ 				offset = new_size - done;
+ 			else
+ 				offset = 0;
+-			zero_user(bv.bv_page, offset, bv.bv_len - offset);
++			zero_user(bv.bv_page, bv.bv_offset + offset,
++				  bv.bv_len - offset);
+ 			truncated = true;
+ 		}
+ 		done += bv.bv_len;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 1378d084c770f..9ebeb9bdf5832 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1258,20 +1258,32 @@ void __blk_account_io_start(struct request *rq)
+ }
+ 
+ static unsigned long __part_start_io_acct(struct block_device *part,
+-					  unsigned int sectors, unsigned int op)
++					  unsigned int sectors, unsigned int op,
++					  unsigned long start_time)
+ {
+ 	const int sgrp = op_stat_group(op);
+-	unsigned long now = READ_ONCE(jiffies);
+ 
+ 	part_stat_lock();
+-	update_io_ticks(part, now, false);
++	update_io_ticks(part, start_time, false);
+ 	part_stat_inc(part, ios[sgrp]);
+ 	part_stat_add(part, sectors[sgrp], sectors);
+ 	part_stat_local_inc(part, in_flight[op_is_write(op)]);
+ 	part_stat_unlock();
+ 
+-	return now;
++	return start_time;
++}
++
++/**
++ * bio_start_io_acct_time - start I/O accounting for bio based drivers
++ * @bio:	bio to start account for
++ * @start_time:	start time that should be passed back to bio_end_io_acct().
++ */
++void bio_start_io_acct_time(struct bio *bio, unsigned long start_time)
++{
++	__part_start_io_acct(bio->bi_bdev, bio_sectors(bio),
++			     bio_op(bio), start_time);
+ }
++EXPORT_SYMBOL_GPL(bio_start_io_acct_time);
+ 
+ /**
+  * bio_start_io_acct - start I/O accounting for bio based drivers
+@@ -1281,14 +1293,15 @@ static unsigned long __part_start_io_acct(struct block_device *part,
+  */
+ unsigned long bio_start_io_acct(struct bio *bio)
+ {
+-	return __part_start_io_acct(bio->bi_bdev, bio_sectors(bio), bio_op(bio));
++	return __part_start_io_acct(bio->bi_bdev, bio_sectors(bio),
++				    bio_op(bio), jiffies);
+ }
+ EXPORT_SYMBOL_GPL(bio_start_io_acct);
+ 
+ unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
+ 				 unsigned int op)
+ {
+-	return __part_start_io_acct(disk->part0, sectors, op);
++	return __part_start_io_acct(disk->part0, sectors, op, jiffies);
+ }
+ EXPORT_SYMBOL(disk_start_io_acct);
+ 
+diff --git a/block/blk-ia-ranges.c b/block/blk-ia-ranges.c
+index b925f3db3ab7a..18c68d8b9138e 100644
+--- a/block/blk-ia-ranges.c
++++ b/block/blk-ia-ranges.c
+@@ -144,7 +144,7 @@ int disk_register_independent_access_ranges(struct gendisk *disk,
+ 				   &q->kobj, "%s", "independent_access_ranges");
+ 	if (ret) {
+ 		q->ia_ranges = NULL;
+-		kfree(iars);
++		kobject_put(&iars->kobj);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index ae79c33001297..7de3f5b6e8d0a 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -722,6 +722,13 @@ void __init efi_systab_report_header(const efi_table_hdr_t *systab_hdr,
+ 		systab_hdr->revision >> 16,
+ 		systab_hdr->revision & 0xffff,
+ 		vendor);
++
++	if (IS_ENABLED(CONFIG_X86_64) &&
++	    systab_hdr->revision > EFI_1_10_SYSTEM_TABLE_REVISION &&
++	    !strcmp(vendor, "Apple")) {
++		pr_info("Apple Mac detected, using EFI v1.10 runtime services only\n");
++		efi.runtime_version = EFI_1_10_SYSTEM_TABLE_REVISION;
++	}
+ }
+ 
+ static __initdata char memory_type_name[][13] = {
+diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
+index 2363fee9211c9..9cc556013d085 100644
+--- a/drivers/firmware/efi/libstub/arm64-stub.c
++++ b/drivers/firmware/efi/libstub/arm64-stub.c
+@@ -119,9 +119,9 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 	if (image->image_base != _text)
+ 		efi_err("FIRMWARE BUG: efi_loaded_image_t::image_base has bogus value\n");
+ 
+-	if (!IS_ALIGNED((u64)_text, EFI_KIMG_ALIGN))
+-		efi_err("FIRMWARE BUG: kernel image not aligned on %ldk boundary\n",
+-			EFI_KIMG_ALIGN >> 10);
++	if (!IS_ALIGNED((u64)_text, SEGMENT_ALIGN))
++		efi_err("FIRMWARE BUG: kernel image not aligned on %dk boundary\n",
++			SEGMENT_ALIGN >> 10);
+ 
+ 	kernel_size = _edata - _text;
+ 	kernel_memsize = kernel_size + (_end - _edata);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index fab8faf345604..f999638a04ed6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1523,6 +1523,87 @@ static const u16 amdgpu_unsupported_pciidlist[] = {
+ 	0x99A0,
+ 	0x99A2,
+ 	0x99A4,
++	/* radeon secondary ids */
++	0x3171,
++	0x3e70,
++	0x4164,
++	0x4165,
++	0x4166,
++	0x4168,
++	0x4170,
++	0x4171,
++	0x4172,
++	0x4173,
++	0x496e,
++	0x4a69,
++	0x4a6a,
++	0x4a6b,
++	0x4a70,
++	0x4a74,
++	0x4b69,
++	0x4b6b,
++	0x4b6c,
++	0x4c6e,
++	0x4e64,
++	0x4e65,
++	0x4e66,
++	0x4e67,
++	0x4e68,
++	0x4e69,
++	0x4e6a,
++	0x4e71,
++	0x4f73,
++	0x5569,
++	0x556b,
++	0x556d,
++	0x556f,
++	0x5571,
++	0x5854,
++	0x5874,
++	0x5940,
++	0x5941,
++	0x5b72,
++	0x5b73,
++	0x5b74,
++	0x5b75,
++	0x5d44,
++	0x5d45,
++	0x5d6d,
++	0x5d6f,
++	0x5d72,
++	0x5d77,
++	0x5e6b,
++	0x5e6d,
++	0x7120,
++	0x7124,
++	0x7129,
++	0x712e,
++	0x712f,
++	0x7162,
++	0x7163,
++	0x7166,
++	0x7167,
++	0x7172,
++	0x7173,
++	0x71a0,
++	0x71a1,
++	0x71a3,
++	0x71a7,
++	0x71bb,
++	0x71e0,
++	0x71e1,
++	0x71e2,
++	0x71e6,
++	0x71e7,
++	0x71f2,
++	0x7269,
++	0x726b,
++	0x726e,
++	0x72a0,
++	0x72a8,
++	0x72b1,
++	0x72b3,
++	0x793f,
+ };
+ 
+ static const struct pci_device_id pciidlist[] = {
+diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+index 6b248cd2a461c..b9c7407563784 100644
+--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+@@ -503,7 +503,6 @@ static void dcn_bw_calc_rq_dlg_ttu(
+ 	//input[in_idx].dout.output_standard;
+ 
+ 	/*todo: soc->sr_enter_plus_exit_time??*/
+-	dlg_sys_param->t_srx_delay_us = dc->dcn_ip->dcfclk_cstate_latency / v->dcf_clk_deep_sleep;
+ 
+ 	dml1_rq_dlg_get_rq_params(dml, rq_param, &input->pipe.src);
+ 	dml1_extract_rq_regs(dml, rq_regs, rq_param);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index 98852b5862956..b52046bb78dc8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -1880,7 +1880,6 @@ noinline bool dcn30_internal_validate_bw(
+ 	dc->res_pool->funcs->update_soc_for_wm_a(dc, context);
+ 	pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);
+ 
+-	DC_FP_START();
+ 	if (!pipe_cnt) {
+ 		out = true;
+ 		goto validate_out;
+@@ -2106,7 +2105,6 @@ validate_fail:
+ 	out = false;
+ 
+ validate_out:
+-	DC_FP_END();
+ 	return out;
+ }
+ 
+@@ -2308,7 +2306,9 @@ bool dcn30_validate_bandwidth(struct dc *dc,
+ 
+ 	BW_VAL_TRACE_COUNT();
+ 
++	DC_FP_START();
+ 	out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
++	DC_FP_END();
+ 
+ 	if (pipe_cnt == 0)
+ 		goto validate_out;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+index e472b729d8690..9254da120e615 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+@@ -1391,6 +1391,17 @@ static void set_wm_ranges(
+ 	pp_smu->nv_funcs.set_wm_ranges(&pp_smu->nv_funcs.pp_smu, &ranges);
+ }
+ 
++static void dcn301_calculate_wm_and_dlg(
++		struct dc *dc, struct dc_state *context,
++		display_e2e_pipe_params_st *pipes,
++		int pipe_cnt,
++		int vlevel)
++{
++	DC_FP_START();
++	dcn301_calculate_wm_and_dlg_fp(dc, context, pipes, pipe_cnt, vlevel);
++	DC_FP_END();
++}
++
+ static struct resource_funcs dcn301_res_pool_funcs = {
+ 	.destroy = dcn301_destroy_resource_pool,
+ 	.link_enc_create = dcn301_link_encoder_create,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+index 246071c72f6bf..548cdef8a8ade 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+@@ -1576,8 +1576,6 @@ void dml20_rq_dlg_get_dlg_reg(struct display_mode_lib *mode_lib,
+ 	dlg_sys_param.total_flip_bytes = get_total_immediate_flip_bytes(mode_lib,
+ 			e2e_pipe_param,
+ 			num_pipes);
+-	dlg_sys_param.t_srx_delay_us = mode_lib->ip.dcfclk_cstate_latency
+-			/ dlg_sys_param.deepsleep_dcfclk_mhz; // TODO: Deprecated
+ 
+ 	print__dlg_sys_params_st(mode_lib, &dlg_sys_param);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+index 015e7f2c0b160..0fc9f3e3ffaef 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+@@ -1577,8 +1577,6 @@ void dml20v2_rq_dlg_get_dlg_reg(struct display_mode_lib *mode_lib,
+ 	dlg_sys_param.total_flip_bytes = get_total_immediate_flip_bytes(mode_lib,
+ 			e2e_pipe_param,
+ 			num_pipes);
+-	dlg_sys_param.t_srx_delay_us = mode_lib->ip.dcfclk_cstate_latency
+-			/ dlg_sys_param.deepsleep_dcfclk_mhz; // TODO: Deprecated
+ 
+ 	print__dlg_sys_params_st(mode_lib, &dlg_sys_param);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+index 46c433c0bcb0f..c2807ab8bf5ab 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+@@ -1688,8 +1688,6 @@ void dml21_rq_dlg_get_dlg_reg(
+ 			mode_lib,
+ 			e2e_pipe_param,
+ 			num_pipes);
+-	dlg_sys_param.t_srx_delay_us = mode_lib->ip.dcfclk_cstate_latency
+-			/ dlg_sys_param.deepsleep_dcfclk_mhz; // TODO: Deprecated
+ 
+ 	print__dlg_sys_params_st(mode_lib, &dlg_sys_param);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
+index aef8542700544..747167083dea6 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
+@@ -1858,8 +1858,6 @@ void dml30_rq_dlg_get_dlg_reg(struct display_mode_lib *mode_lib,
+ 	dlg_sys_param.total_flip_bytes = get_total_immediate_flip_bytes(mode_lib,
+ 		e2e_pipe_param,
+ 		num_pipes);
+-	dlg_sys_param.t_srx_delay_us = mode_lib->ip.dcfclk_cstate_latency
+-		/ dlg_sys_param.deepsleep_dcfclk_mhz; // TODO: Deprecated
+ 
+ 	print__dlg_sys_params_st(mode_lib, &dlg_sys_param);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c
+index 94c32832a0e7b..0a7a338649731 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c
+@@ -327,7 +327,7 @@ void dcn301_fpu_init_soc_bounding_box(struct bp_soc_bb_info bb_info)
+ 		dcn3_01_soc.sr_exit_time_us = bb_info.dram_sr_exit_latency_100ns * 10;
+ }
+ 
+-void dcn301_calculate_wm_and_dlg(struct dc *dc,
++void dcn301_calculate_wm_and_dlg_fp(struct dc *dc,
+ 		struct dc_state *context,
+ 		display_e2e_pipe_params_st *pipes,
+ 		int pipe_cnt,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.h b/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.h
+index fc7065d178422..774b0fdfc80be 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.h
+@@ -34,7 +34,7 @@ void dcn301_fpu_set_wm_ranges(int i,
+ 
+ void dcn301_fpu_init_soc_bounding_box(struct bp_soc_bb_info bb_info);
+ 
+-void dcn301_calculate_wm_and_dlg(struct dc *dc,
++void dcn301_calculate_wm_and_dlg_fp(struct dc *dc,
+ 		struct dc_state *context,
+ 		display_e2e_pipe_params_st *pipes,
+ 		int pipe_cnt,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+index d46a2733024ce..8f9f1d607f7cb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+@@ -546,7 +546,6 @@ struct _vcs_dpi_display_dlg_sys_params_st {
+ 	double t_sr_wm_us;
+ 	double t_extra_us;
+ 	double mem_trip_us;
+-	double t_srx_delay_us;
+ 	double deepsleep_dcfclk_mhz;
+ 	double total_flip_bw;
+ 	unsigned int total_flip_bytes;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.c b/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.c
+index 71ea503cb32ff..412e75eb47041 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.c
+@@ -141,9 +141,6 @@ void print__dlg_sys_params_st(struct display_mode_lib *mode_lib, const struct _v
+ 	dml_print("DML_RQ_DLG_CALC:    t_urg_wm_us          = %3.2f\n", dlg_sys_param->t_urg_wm_us);
+ 	dml_print("DML_RQ_DLG_CALC:    t_sr_wm_us           = %3.2f\n", dlg_sys_param->t_sr_wm_us);
+ 	dml_print("DML_RQ_DLG_CALC:    t_extra_us           = %3.2f\n", dlg_sys_param->t_extra_us);
+-	dml_print(
+-			"DML_RQ_DLG_CALC:    t_srx_delay_us       = %3.2f\n",
+-			dlg_sys_param->t_srx_delay_us);
+ 	dml_print(
+ 			"DML_RQ_DLG_CALC:    deepsleep_dcfclk_mhz = %3.2f\n",
+ 			dlg_sys_param->deepsleep_dcfclk_mhz);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+index 59dc2c5b58dd7..3df559c591f89 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+@@ -1331,10 +1331,6 @@ void dml1_rq_dlg_get_dlg_params(
+ 	if (dual_plane)
+ 		DTRACE("DLG: %s: swath_height_c     = %d", __func__, swath_height_c);
+ 
+-	DTRACE(
+-			"DLG: %s: t_srx_delay_us     = %3.2f",
+-			__func__,
+-			(double) dlg_sys_param->t_srx_delay_us);
+ 	DTRACE("DLG: %s: line_time_in_us    = %3.2f", __func__, (double) line_time_in_us);
+ 	DTRACE("DLG: %s: vupdate_offset     = %d", __func__, vupdate_offset);
+ 	DTRACE("DLG: %s: vupdate_width      = %d", __func__, vupdate_width);
+diff --git a/drivers/gpu/drm/ast/ast_tables.h b/drivers/gpu/drm/ast/ast_tables.h
+index d9eb353a4bf09..dbe1cc620f6e6 100644
+--- a/drivers/gpu/drm/ast/ast_tables.h
++++ b/drivers/gpu/drm/ast/ast_tables.h
+@@ -282,8 +282,6 @@ static const struct ast_vbios_enhtable res_1360x768[] = {
+ };
+ 
+ static const struct ast_vbios_enhtable res_1600x900[] = {
+-	{1800, 1600, 24, 80, 1000,  900, 1, 3, VCLK108,		/* 60Hz */
+-	 (SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 3, 0x3A },
+ 	{1760, 1600, 48, 32, 926, 900, 3, 5, VCLK97_75,		/* 60Hz CVT RB */
+ 	 (SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
+ 	  AST2500PreCatchCRT), 60, 1, 0x3A },
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index ff1416cd609a5..a1e4c7905ebbe 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -1310,8 +1310,10 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
+ 
+ 	DRM_DEBUG_ATOMIC("checking %p\n", state);
+ 
+-	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i)
+-		requested_crtc |= drm_crtc_mask(crtc);
++	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
++		if (new_crtc_state->enable)
++			requested_crtc |= drm_crtc_mask(crtc);
++	}
+ 
+ 	for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
+ 		ret = drm_atomic_plane_check(old_plane_state, new_plane_state);
+@@ -1360,8 +1362,10 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
+ 		}
+ 	}
+ 
+-	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i)
+-		affected_crtc |= drm_crtc_mask(crtc);
++	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
++		if (new_crtc_state->enable)
++			affected_crtc |= drm_crtc_mask(crtc);
++	}
+ 
+ 	/*
+ 	 * For commits that allow modesets drivers can add other CRTCs to the
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+index 225fa5879ebd9..90488ab8c6d8e 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+@@ -469,8 +469,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (args->stream_size > SZ_64K || args->nr_relocs > SZ_64K ||
+-	    args->nr_bos > SZ_64K || args->nr_pmrs > 128) {
++	if (args->stream_size > SZ_128K || args->nr_relocs > SZ_128K ||
++	    args->nr_bos > SZ_128K || args->nr_pmrs > 128) {
+ 		DRM_ERROR("submit arguments out of size limits\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 78aad5216a613..a305ff7e8c6fb 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1557,6 +1557,8 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
+ 		for (i = 0; i < gpu->nr_rings; i++)
+ 			a6xx_gpu->shadow[i] = 0;
+ 
++	gpu->suspend_count++;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
+index a98e964c3b6fa..355894a3b48c3 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
+@@ -26,9 +26,16 @@ static void dpu_setup_dspp_pcc(struct dpu_hw_dspp *ctx,
+ 		struct dpu_hw_pcc_cfg *cfg)
+ {
+ 
+-	u32 base = ctx->cap->sblk->pcc.base;
++	u32 base;
+ 
+-	if (!ctx || !base) {
++	if (!ctx) {
++		DRM_ERROR("invalid ctx %pK\n", ctx);
++		return;
++	}
++
++	base = ctx->cap->sblk->pcc.base;
++
++	if (!base) {
+ 		DRM_ERROR("invalid ctx %pK pcc base 0x%x\n", ctx, base);
+ 		return;
+ 	}
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index fc280cc434943..122fadcf7cc1e 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -40,7 +40,12 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 
+ 	of_node_put(phy_node);
+ 
+-	if (!phy_pdev || !msm_dsi->phy) {
++	if (!phy_pdev) {
++		DRM_DEV_ERROR(&pdev->dev, "%s: phy driver is not ready\n", __func__);
++		return -EPROBE_DEFER;
++	}
++	if (!msm_dsi->phy) {
++		put_device(&phy_pdev->dev);
+ 		DRM_DEV_ERROR(&pdev->dev, "%s: phy driver is not ready\n", __func__);
+ 		return -EPROBE_DEFER;
+ 	}
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index 9842e04b58580..baa6af0c3bccf 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -808,12 +808,14 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
+ 			struct msm_dsi_phy_clk_request *clk_req,
+ 			struct msm_dsi_phy_shared_timings *shared_timings)
+ {
+-	struct device *dev = &phy->pdev->dev;
++	struct device *dev;
+ 	int ret;
+ 
+ 	if (!phy || !phy->cfg->ops.enable)
+ 		return -EINVAL;
+ 
++	dev = &phy->pdev->dev;
++
+ 	ret = dsi_phy_enable_resource(phy);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "%s: resource enable failed, %d\n",
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index 75b64e6ae0350..a439794a32e81 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -95,10 +95,15 @@ static int msm_hdmi_get_phy(struct hdmi *hdmi)
+ 
+ 	of_node_put(phy_node);
+ 
+-	if (!phy_pdev || !hdmi->phy) {
++	if (!phy_pdev) {
+ 		DRM_DEV_ERROR(&pdev->dev, "phy driver is not ready\n");
+ 		return -EPROBE_DEFER;
+ 	}
++	if (!hdmi->phy) {
++		DRM_DEV_ERROR(&pdev->dev, "phy driver is not ready\n");
++		put_device(&phy_pdev->dev);
++		return -EPROBE_DEFER;
++	}
+ 
+ 	hdmi->phy_dev = get_device(&phy_pdev->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 892c04365239b..f04a2337da006 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -466,7 +466,7 @@ static int msm_init_vram(struct drm_device *dev)
+ 		of_node_put(node);
+ 		if (ret)
+ 			return ret;
+-		size = r.end - r.start;
++		size = r.end - r.start + 1;
+ 		DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
+ 
+ 		/* if we have no IOMMU, then we need to use carveout allocator.
+diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
+index ca873a3b98dbe..f2d05bff42453 100644
+--- a/drivers/hv/hv_balloon.c
++++ b/drivers/hv/hv_balloon.c
+@@ -1660,6 +1660,13 @@ static int balloon_connect_vsp(struct hv_device *dev)
+ 	unsigned long t;
+ 	int ret;
+ 
++	/*
++	 * max_pkt_size should be large enough for one vmbus packet header plus
++	 * our receive buffer size. Hyper-V sends messages up to
++	 * HV_HYP_PAGE_SIZE bytes long on balloon channel.
++	 */
++	dev->channel->max_pkt_size = HV_HYP_PAGE_SIZE * 2;
++
+ 	ret = vmbus_open(dev->channel, dm_ring_size, dm_ring_size, NULL, 0,
+ 			 balloon_onchannelcallback, dev);
+ 	if (ret)
+diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c
+index d519aca4a9d64..fb6d14d213a18 100644
+--- a/drivers/hwmon/adt7470.c
++++ b/drivers/hwmon/adt7470.c
+@@ -662,6 +662,9 @@ static int adt7470_fan_write(struct device *dev, u32 attr, int channel, long val
+ 	struct adt7470_data *data = dev_get_drvdata(dev);
+ 	int err;
+ 
++	if (val <= 0)
++		return -EINVAL;
++
+ 	val = FAN_RPM_TO_PERIOD(val);
+ 	val = clamp_val(val, 1, 65534);
+ 
+diff --git a/drivers/hwmon/lm90.c b/drivers/hwmon/lm90.c
+index 74019dff2550e..1c9493c708132 100644
+--- a/drivers/hwmon/lm90.c
++++ b/drivers/hwmon/lm90.c
+@@ -373,7 +373,7 @@ static const struct lm90_params lm90_params[] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+ 		  | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT,
+ 		.alert_alarms = 0x7c,
+-		.max_convrate = 8,
++		.max_convrate = 7,
+ 	},
+ 	[lm86] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
+@@ -394,12 +394,13 @@ static const struct lm90_params lm90_params[] = {
+ 		.max_convrate = 9,
+ 	},
+ 	[max6646] = {
+-		.flags = LM90_HAVE_CRIT,
++		.flags = LM90_HAVE_CRIT | LM90_HAVE_BROKEN_ALERT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 6,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+ 	},
+ 	[max6654] = {
++		.flags = LM90_HAVE_BROKEN_ALERT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 7,
+ 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
+@@ -418,7 +419,7 @@ static const struct lm90_params lm90_params[] = {
+ 	},
+ 	[max6680] = {
+ 		.flags = LM90_HAVE_OFFSET | LM90_HAVE_CRIT
+-		  | LM90_HAVE_CRIT_ALRM_SWP,
++		  | LM90_HAVE_CRIT_ALRM_SWP | LM90_HAVE_BROKEN_ALERT,
+ 		.alert_alarms = 0x7c,
+ 		.max_convrate = 7,
+ 	},
+@@ -848,7 +849,7 @@ static int lm90_update_device(struct device *dev)
+ 		 * Re-enable ALERT# output if it was originally enabled and
+ 		 * relevant alarms are all clear
+ 		 */
+-		if (!(data->config_orig & 0x80) &&
++		if ((client->irq || !(data->config_orig & 0x80)) &&
+ 		    !(data->alarms & data->alert_alarms)) {
+ 			if (data->config & 0x80) {
+ 				dev_dbg(&client->dev, "Re-enabling ALERT#\n");
+@@ -1807,22 +1808,22 @@ static bool lm90_is_tripped(struct i2c_client *client, u16 *status)
+ 
+ 	if (st & LM90_STATUS_LLOW)
+ 		hwmon_notify_event(data->hwmon_dev, hwmon_temp,
+-				   hwmon_temp_min, 0);
++				   hwmon_temp_min_alarm, 0);
+ 	if (st & LM90_STATUS_RLOW)
+ 		hwmon_notify_event(data->hwmon_dev, hwmon_temp,
+-				   hwmon_temp_min, 1);
++				   hwmon_temp_min_alarm, 1);
+ 	if (st2 & MAX6696_STATUS2_R2LOW)
+ 		hwmon_notify_event(data->hwmon_dev, hwmon_temp,
+-				   hwmon_temp_min, 2);
++				   hwmon_temp_min_alarm, 2);
+ 	if (st & LM90_STATUS_LHIGH)
+ 		hwmon_notify_event(data->hwmon_dev, hwmon_temp,
+-				   hwmon_temp_max, 0);
++				   hwmon_temp_max_alarm, 0);
+ 	if (st & LM90_STATUS_RHIGH)
+ 		hwmon_notify_event(data->hwmon_dev, hwmon_temp,
+-				   hwmon_temp_max, 1);
++				   hwmon_temp_max_alarm, 1);
+ 	if (st2 & MAX6696_STATUS2_R2HIGH)
+ 		hwmon_notify_event(data->hwmon_dev, hwmon_temp,
+-				   hwmon_temp_max, 2);
++				   hwmon_temp_max_alarm, 2);
+ 
+ 	return true;
+ }
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index 57ce8633a7256..3496a823e4d4f 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -1175,7 +1175,7 @@ static inline u8 in_to_reg(u32 val, u8 nr)
+ 
+ struct nct6775_data {
+ 	int addr;	/* IO base of hw monitor block */
+-	int sioreg;	/* SIO register address */
++	struct nct6775_sio_data *sio_data;
+ 	enum kinds kind;
+ 	const char *name;
+ 
+@@ -3561,7 +3561,7 @@ clear_caseopen(struct device *dev, struct device_attribute *attr,
+ 	       const char *buf, size_t count)
+ {
+ 	struct nct6775_data *data = dev_get_drvdata(dev);
+-	struct nct6775_sio_data *sio_data = dev_get_platdata(dev);
++	struct nct6775_sio_data *sio_data = data->sio_data;
+ 	int nr = to_sensor_dev_attr(attr)->index - INTRUSION_ALARM_BASE;
+ 	unsigned long val;
+ 	u8 reg;
+@@ -3969,7 +3969,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	data->kind = sio_data->kind;
+-	data->sioreg = sio_data->sioreg;
++	data->sio_data = sio_data;
+ 
+ 	if (sio_data->access == access_direct) {
+ 		data->addr = res->start;
+diff --git a/drivers/irqchip/irq-realtek-rtl.c b/drivers/irqchip/irq-realtek-rtl.c
+index fd9f275592d29..568614edd88f4 100644
+--- a/drivers/irqchip/irq-realtek-rtl.c
++++ b/drivers/irqchip/irq-realtek-rtl.c
+@@ -62,7 +62,7 @@ static struct irq_chip realtek_ictl_irq = {
+ 
+ static int intc_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
+ {
+-	irq_set_chip_and_handler(hw, &realtek_ictl_irq, handle_level_irq);
++	irq_set_chip_and_handler(irq, &realtek_ictl_irq, handle_level_irq);
+ 
+ 	return 0;
+ }
+@@ -95,7 +95,8 @@ out:
+  * SoC interrupts are cascaded to MIPS CPU interrupts according to the
+  * interrupt-map in the device tree. Each SoC interrupt gets 4 bits for
+  * the CPU interrupt in an Interrupt Routing Register. Max 32 SoC interrupts
+- * thus go into 4 IRRs.
++ * thus go into 4 IRRs. A routing value of '0' means the interrupt is left
++ * disconnected. Routing values {1..15} connect to output lines {0..14}.
+  */
+ static int __init map_interrupts(struct device_node *node, struct irq_domain *domain)
+ {
+@@ -134,7 +135,7 @@ static int __init map_interrupts(struct device_node *node, struct irq_domain *do
+ 		of_node_put(cpu_ictl);
+ 
+ 		cpu_int = be32_to_cpup(imap + 2);
+-		if (cpu_int > 7)
++		if (cpu_int > 7 || cpu_int < 2)
+ 			return -EINVAL;
+ 
+ 		if (!(mips_irqs_set & BIT(cpu_int))) {
+@@ -143,7 +144,8 @@ static int __init map_interrupts(struct device_node *node, struct irq_domain *do
+ 			mips_irqs_set |= BIT(cpu_int);
+ 		}
+ 
+-		regs[(soc_int * 4) / 32] |= cpu_int << (soc_int * 4) % 32;
++		/* Use routing values (1..6) for CPU interrupts (2..7) */
++		regs[(soc_int * 4) / 32] |= (cpu_int - 1) << (soc_int * 4) % 32;
+ 		imap += 3;
+ 	}
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index b93fcc91176e5..bd814fa0b7216 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -489,7 +489,7 @@ static void start_io_acct(struct dm_io *io)
+ 	struct mapped_device *md = io->md;
+ 	struct bio *bio = io->orig_bio;
+ 
+-	io->start_time = bio_start_io_acct(bio);
++	bio_start_io_acct_time(bio, io->start_time);
+ 	if (unlikely(dm_stats_used(&md->stats)))
+ 		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+ 				    bio->bi_iter.bi_sector, bio_sectors(bio),
+@@ -535,7 +535,7 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
+ 	io->md = md;
+ 	spin_lock_init(&io->endio_lock);
+ 
+-	start_io_acct(io);
++	io->start_time = jiffies;
+ 
+ 	return io;
+ }
+@@ -1510,9 +1510,6 @@ static void init_clone_info(struct clone_info *ci, struct mapped_device *md,
+ 	ci->sector = bio->bi_iter.bi_sector;
+ }
+ 
+-#define __dm_part_stat_sub(part, field, subnd)	\
+-	(part_stat_get(part, field) -= (subnd))
+-
+ /*
+  * Entry point to split a bio into clones and submit them to the targets.
+  */
+@@ -1548,23 +1545,12 @@ static void __split_and_process_bio(struct mapped_device *md,
+ 						  GFP_NOIO, &md->queue->bio_split);
+ 			ci.io->orig_bio = b;
+ 
+-			/*
+-			 * Adjust IO stats for each split, otherwise upon queue
+-			 * reentry there will be redundant IO accounting.
+-			 * NOTE: this is a stop-gap fix, a proper fix involves
+-			 * significant refactoring of DM core's bio splitting
+-			 * (by eliminating DM's splitting and just using bio_split)
+-			 */
+-			part_stat_lock();
+-			__dm_part_stat_sub(dm_disk(md)->part0,
+-					   sectors[op_stat_group(bio_op(bio))], ci.sector_count);
+-			part_stat_unlock();
+-
+ 			bio_chain(b, bio);
+ 			trace_block_split(b, bio->bi_iter.bi_sector);
+ 			submit_bio_noacct(bio);
+ 		}
+ 	}
++	start_io_acct(ci.io);
+ 
+ 	/* drop the extra reference count */
+ 	dm_io_dec_pending(ci.io, errno_to_blk_status(error));
+diff --git a/drivers/mtd/nand/raw/mpc5121_nfc.c b/drivers/mtd/nand/raw/mpc5121_nfc.c
+index cb293c50acb87..5b9271b9c3265 100644
+--- a/drivers/mtd/nand/raw/mpc5121_nfc.c
++++ b/drivers/mtd/nand/raw/mpc5121_nfc.c
+@@ -291,7 +291,6 @@ static int ads5121_chipselect_init(struct mtd_info *mtd)
+ /* Control chips select signal on ADS5121 board */
+ static void ads5121_select_chip(struct nand_chip *nand, int chip)
+ {
+-	struct mtd_info *mtd = nand_to_mtd(nand);
+ 	struct mpc5121_nfc_prv *prv = nand_get_controller_data(nand);
+ 	u8 v;
+ 
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 30d94cb43113d..755f4a59f784e 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -336,6 +336,9 @@ m_can_fifo_read(struct m_can_classdev *cdev,
+ 	u32 addr_offset = cdev->mcfg[MRAM_RXF0].off + fgi * RXF0_ELEMENT_SIZE +
+ 		offset;
+ 
++	if (val_count == 0)
++		return 0;
++
+ 	return cdev->ops->read_fifo(cdev, addr_offset, val, val_count);
+ }
+ 
+@@ -346,6 +349,9 @@ m_can_fifo_write(struct m_can_classdev *cdev,
+ 	u32 addr_offset = cdev->mcfg[MRAM_TXB].off + fpi * TXB_ELEMENT_SIZE +
+ 		offset;
+ 
++	if (val_count == 0)
++		return 0;
++
+ 	return cdev->ops->write_fifo(cdev, addr_offset, val, val_count);
+ }
+ 
+diff --git a/drivers/net/can/m_can/tcan4x5x-regmap.c b/drivers/net/can/m_can/tcan4x5x-regmap.c
+index ca80dbaf7a3f5..26e212b8ca7a6 100644
+--- a/drivers/net/can/m_can/tcan4x5x-regmap.c
++++ b/drivers/net/can/m_can/tcan4x5x-regmap.c
+@@ -12,7 +12,7 @@
+ #define TCAN4X5X_SPI_INSTRUCTION_WRITE (0x61 << 24)
+ #define TCAN4X5X_SPI_INSTRUCTION_READ (0x41 << 24)
+ 
+-#define TCAN4X5X_MAX_REGISTER 0x8ffc
++#define TCAN4X5X_MAX_REGISTER 0x87fc
+ 
+ static int tcan4x5x_regmap_gather_write(void *context,
+ 					const void *reg, size_t reg_len,
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index b719f72281c44..3d469073fbc54 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -830,7 +830,7 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
+ /* buffers */
+ int gve_alloc_page(struct gve_priv *priv, struct device *dev,
+ 		   struct page **page, dma_addr_t *dma,
+-		   enum dma_data_direction);
++		   enum dma_data_direction, gfp_t gfp_flags);
+ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
+ 		   enum dma_data_direction);
+ /* tx handling */
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 59b66f679e46e..28e2d4d8ed7c6 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -752,9 +752,9 @@ static void gve_free_rings(struct gve_priv *priv)
+ 
+ int gve_alloc_page(struct gve_priv *priv, struct device *dev,
+ 		   struct page **page, dma_addr_t *dma,
+-		   enum dma_data_direction dir)
++		   enum dma_data_direction dir, gfp_t gfp_flags)
+ {
+-	*page = alloc_page(GFP_KERNEL);
++	*page = alloc_page(gfp_flags);
+ 	if (!*page) {
+ 		priv->page_alloc_fail++;
+ 		return -ENOMEM;
+@@ -797,7 +797,7 @@ static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id,
+ 	for (i = 0; i < pages; i++) {
+ 		err = gve_alloc_page(priv, &priv->pdev->dev, &qpl->pages[i],
+ 				     &qpl->page_buses[i],
+-				     gve_qpl_dma_dir(priv, id));
++				     gve_qpl_dma_dir(priv, id), GFP_KERNEL);
+ 		/* caller handles clean up */
+ 		if (err)
+ 			return -ENOMEM;
+diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
+index 3d04b5aff331b..04a08904305a9 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx.c
++++ b/drivers/net/ethernet/google/gve/gve_rx.c
+@@ -86,7 +86,8 @@ static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev,
+ 	dma_addr_t dma;
+ 	int err;
+ 
+-	err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE);
++	err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE,
++			     GFP_ATOMIC);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+index beb8bb079023c..8c939628e2d85 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+@@ -157,7 +157,7 @@ static int gve_alloc_page_dqo(struct gve_priv *priv,
+ 	int err;
+ 
+ 	err = gve_alloc_page(priv, &priv->pdev->dev, &buf_state->page_info.page,
+-			     &buf_state->addr, DMA_FROM_DEVICE);
++			     &buf_state->addr, DMA_FROM_DEVICE, GFP_KERNEL);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 41afaeea881bc..70491e07b0ff6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2496,8 +2496,7 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ 		break;
+ 	}
+ 
+-	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
+-		hclgevf_enable_vector(&hdev->misc_vector, true);
++	hclgevf_enable_vector(&hdev->misc_vector, true);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 0bb3911dd014d..682a440151a87 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2598,6 +2598,7 @@ static void __ibmvnic_reset(struct work_struct *work)
+ 	struct ibmvnic_rwi *rwi;
+ 	unsigned long flags;
+ 	u32 reset_state;
++	int num_fails = 0;
+ 	int rc = 0;
+ 
+ 	adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset);
+@@ -2651,11 +2652,23 @@ static void __ibmvnic_reset(struct work_struct *work)
+ 				rc = do_hard_reset(adapter, rwi, reset_state);
+ 				rtnl_unlock();
+ 			}
+-			if (rc) {
+-				/* give backing device time to settle down */
++			if (rc)
++				num_fails++;
++			else
++				num_fails = 0;
++
++			/* If auto-priority-failover is enabled we can get
++			 * back to back failovers during resets, resulting
++			 * in at least two failed resets (from high-priority
++			 * backing device to low-priority one and then back)
++			 * If resets continue to fail beyond that, give the
++			 * adapter some time to settle down before retrying.
++			 */
++			if (num_fails >= 3) {
+ 				netdev_dbg(adapter->netdev,
+-					   "[S:%s] Hard reset failed, waiting 60 secs\n",
+-					   adapter_state_to_string(adapter->state));
++					   "[S:%s] Hard reset failed %d times, waiting 60 secs\n",
++					   adapter_state_to_string(adapter->state),
++					   num_fails);
+ 				set_current_state(TASK_UNINTERRUPTIBLE);
+ 				schedule_timeout(60 * HZ);
+ 			}
+@@ -3836,11 +3849,25 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
+ 	struct device *dev = &adapter->vdev->dev;
+ 	union ibmvnic_crq crq;
+ 	int max_entries;
++	int cap_reqs;
++
++	/* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on
++	 * the PROMISC flag). Initialize this count upfront. When the tasklet
++	 * receives a response to all of these, it will send the next protocol
++	 * message (QUERY_IP_OFFLOAD).
++	 */
++	if (!(adapter->netdev->flags & IFF_PROMISC) ||
++	    adapter->promisc_supported)
++		cap_reqs = 7;
++	else
++		cap_reqs = 6;
+ 
+ 	if (!retry) {
+ 		/* Sub-CRQ entries are 32 byte long */
+ 		int entries_page = 4 * PAGE_SIZE / (sizeof(u64) * 4);
+ 
++		atomic_set(&adapter->running_cap_crqs, cap_reqs);
++
+ 		if (adapter->min_tx_entries_per_subcrq > entries_page ||
+ 		    adapter->min_rx_add_entries_per_subcrq > entries_page) {
+ 			dev_err(dev, "Fatal, invalid entries per sub-crq\n");
+@@ -3901,44 +3928,45 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
+ 					adapter->opt_rx_comp_queues;
+ 
+ 		adapter->req_rx_add_queues = adapter->max_rx_add_queues;
++	} else {
++		atomic_add(cap_reqs, &adapter->running_cap_crqs);
+ 	}
+-
+ 	memset(&crq, 0, sizeof(crq));
+ 	crq.request_capability.first = IBMVNIC_CRQ_CMD;
+ 	crq.request_capability.cmd = REQUEST_CAPABILITY;
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_TX_QUEUES);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_tx_queues);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_RX_QUEUES);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_rx_queues);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_RX_ADD_QUEUES);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_rx_add_queues);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability =
+ 	    cpu_to_be16(REQ_TX_ENTRIES_PER_SUBCRQ);
+ 	crq.request_capability.number =
+ 	    cpu_to_be64(adapter->req_tx_entries_per_subcrq);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability =
+ 	    cpu_to_be16(REQ_RX_ADD_ENTRIES_PER_SUBCRQ);
+ 	crq.request_capability.number =
+ 	    cpu_to_be64(adapter->req_rx_add_entries_per_subcrq);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	crq.request_capability.capability = cpu_to_be16(REQ_MTU);
+ 	crq.request_capability.number = cpu_to_be64(adapter->req_mtu);
+-	atomic_inc(&adapter->running_cap_crqs);
++	cap_reqs--;
+ 	ibmvnic_send_crq(adapter, &crq);
+ 
+ 	if (adapter->netdev->flags & IFF_PROMISC) {
+@@ -3946,16 +3974,21 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
+ 			crq.request_capability.capability =
+ 			    cpu_to_be16(PROMISC_REQUESTED);
+ 			crq.request_capability.number = cpu_to_be64(1);
+-			atomic_inc(&adapter->running_cap_crqs);
++			cap_reqs--;
+ 			ibmvnic_send_crq(adapter, &crq);
+ 		}
+ 	} else {
+ 		crq.request_capability.capability =
+ 		    cpu_to_be16(PROMISC_REQUESTED);
+ 		crq.request_capability.number = cpu_to_be64(0);
+-		atomic_inc(&adapter->running_cap_crqs);
++		cap_reqs--;
+ 		ibmvnic_send_crq(adapter, &crq);
+ 	}
++
++	/* Keep at end to catch any discrepancy between expected and actual
++	 * CRQs sent.
++	 */
++	WARN_ON(cap_reqs != 0);
+ }
+ 
+ static int pending_scrq(struct ibmvnic_adapter *adapter,
+@@ -4349,118 +4382,132 @@ static void send_query_map(struct ibmvnic_adapter *adapter)
+ static void send_query_cap(struct ibmvnic_adapter *adapter)
+ {
+ 	union ibmvnic_crq crq;
++	int cap_reqs;
++
++	/* We send out 25 QUERY_CAPABILITY CRQs below.  Initialize this count
++	 * upfront. When the tasklet receives a response to all of these, it
++	 * can send out the next protocol messaage (REQUEST_CAPABILITY).
++	 */
++	cap_reqs = 25;
++
++	atomic_set(&adapter->running_cap_crqs, cap_reqs);
+ 
+-	atomic_set(&adapter->running_cap_crqs, 0);
+ 	memset(&crq, 0, sizeof(crq));
+ 	crq.query_capability.first = IBMVNIC_CRQ_CMD;
+ 	crq.query_capability.cmd = QUERY_CAPABILITY;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_TX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_RX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_RX_ADD_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_TX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_RX_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_RX_ADD_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MIN_TX_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MIN_RX_ADD_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MAX_TX_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 	    cpu_to_be16(MAX_RX_ADD_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(TCP_IP_OFFLOAD);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(PROMISC_SUPPORTED);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MIN_MTU);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_MTU);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_MULTICAST_FILTERS);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(VLAN_HEADER_INSERTION);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(RX_VLAN_HEADER_INSERTION);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(MAX_TX_SG_ENTRIES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(RX_SG_SUPPORTED);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(OPT_TX_COMP_SUB_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(OPT_RX_COMP_QUEUES);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 			cpu_to_be16(OPT_RX_BUFADD_Q_PER_RX_COMP_Q);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 			cpu_to_be16(OPT_TX_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability =
+ 			cpu_to_be16(OPT_RXBA_ENTRIES_PER_SUBCRQ);
+-	atomic_inc(&adapter->running_cap_crqs);
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
+ 
+ 	crq.query_capability.capability = cpu_to_be16(TX_RX_DESC_REQ);
+-	atomic_inc(&adapter->running_cap_crqs);
++
+ 	ibmvnic_send_crq(adapter, &crq);
++	cap_reqs--;
++
++	/* Keep at end to catch any discrepancy between expected and actual
++	 * CRQs sent.
++	 */
++	WARN_ON(cap_reqs != 0);
+ }
+ 
+ static void send_query_ip_offload(struct ibmvnic_adapter *adapter)
+@@ -4764,6 +4811,8 @@ static void handle_request_cap_rsp(union ibmvnic_crq *crq,
+ 	char *name;
+ 
+ 	atomic_dec(&adapter->running_cap_crqs);
++	netdev_dbg(adapter->netdev, "Outstanding request-caps: %d\n",
++		   atomic_read(&adapter->running_cap_crqs));
+ 	switch (be16_to_cpu(crq->request_capability_rsp.capability)) {
+ 	case REQ_TX_QUEUES:
+ 		req_value = &adapter->req_tx_queues;
+@@ -5442,12 +5491,6 @@ static void ibmvnic_tasklet(struct tasklet_struct *t)
+ 			ibmvnic_handle_crq(crq, adapter);
+ 			crq->generic.first = 0;
+ 		}
+-
+-		/* remain in tasklet until all
+-		 * capabilities responses are received
+-		 */
+-		if (!adapter->wait_capability)
+-			done = true;
+ 	}
+ 	/* if capabilities CRQ's were sent in this tasklet, the following
+ 	 * tasklet must wait until all responses are received
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 4d939af0a626c..2e02cc68cd3f7 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -174,7 +174,6 @@ enum i40e_interrupt_policy {
+ 
+ struct i40e_lump_tracking {
+ 	u16 num_entries;
+-	u16 search_hint;
+ 	u16 list[0];
+ #define I40E_PILE_VALID_BIT  0x8000
+ #define I40E_IWARP_IRQ_PILE_ID  (I40E_PILE_VALID_BIT - 2)
+@@ -848,12 +847,12 @@ struct i40e_vsi {
+ 	struct rtnl_link_stats64 net_stats_offsets;
+ 	struct i40e_eth_stats eth_stats;
+ 	struct i40e_eth_stats eth_stats_offsets;
+-	u32 tx_restart;
+-	u32 tx_busy;
++	u64 tx_restart;
++	u64 tx_busy;
+ 	u64 tx_linearize;
+ 	u64 tx_force_wb;
+-	u32 rx_buf_failed;
+-	u32 rx_page_failed;
++	u64 rx_buf_failed;
++	u64 rx_page_failed;
+ 
+ 	/* These are containers of ring pointers, allocated at run-time */
+ 	struct i40e_ring **rx_rings;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index 2c1b1da1220ec..1e57cc8c47d7b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -240,7 +240,7 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
+ 		 (unsigned long int)vsi->net_stats_offsets.rx_compressed,
+ 		 (unsigned long int)vsi->net_stats_offsets.tx_compressed);
+ 	dev_info(&pf->pdev->dev,
+-		 "    tx_restart = %d, tx_busy = %d, rx_buf_failed = %d, rx_page_failed = %d\n",
++		 "    tx_restart = %llu, tx_busy = %llu, rx_buf_failed = %llu, rx_page_failed = %llu\n",
+ 		 vsi->tx_restart, vsi->tx_busy,
+ 		 vsi->rx_buf_failed, vsi->rx_page_failed);
+ 	rcu_read_lock();
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 61afc220fc6cd..f605c0205e4e7 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -196,10 +196,6 @@ int i40e_free_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem)
+  * @id: an owner id to stick on the items assigned
+  *
+  * Returns the base item index of the lump, or negative for error
+- *
+- * The search_hint trick and lack of advanced fit-finding only work
+- * because we're highly likely to have all the same size lump requests.
+- * Linear search time and any fragmentation should be minimal.
+  **/
+ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
+ 			 u16 needed, u16 id)
+@@ -214,8 +210,21 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
+ 		return -EINVAL;
+ 	}
+ 
+-	/* start the linear search with an imperfect hint */
+-	i = pile->search_hint;
++	/* Allocate last queue in the pile for FDIR VSI queue
++	 * so it doesn't fragment the qp_pile
++	 */
++	if (pile == pf->qp_pile && pf->vsi[id]->type == I40E_VSI_FDIR) {
++		if (pile->list[pile->num_entries - 1] & I40E_PILE_VALID_BIT) {
++			dev_err(&pf->pdev->dev,
++				"Cannot allocate queue %d for I40E_VSI_FDIR\n",
++				pile->num_entries - 1);
++			return -ENOMEM;
++		}
++		pile->list[pile->num_entries - 1] = id | I40E_PILE_VALID_BIT;
++		return pile->num_entries - 1;
++	}
++
++	i = 0;
+ 	while (i < pile->num_entries) {
+ 		/* skip already allocated entries */
+ 		if (pile->list[i] & I40E_PILE_VALID_BIT) {
+@@ -234,7 +243,6 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
+ 			for (j = 0; j < needed; j++)
+ 				pile->list[i+j] = id | I40E_PILE_VALID_BIT;
+ 			ret = i;
+-			pile->search_hint = i + j;
+ 			break;
+ 		}
+ 
+@@ -257,7 +265,7 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
+ {
+ 	int valid_id = (id | I40E_PILE_VALID_BIT);
+ 	int count = 0;
+-	int i;
++	u16 i;
+ 
+ 	if (!pile || index >= pile->num_entries)
+ 		return -EINVAL;
+@@ -269,8 +277,6 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
+ 		count++;
+ 	}
+ 
+-	if (count && index < pile->search_hint)
+-		pile->search_hint = index;
+ 
+ 	return count;
+ }
+@@ -772,9 +778,9 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
+ 	struct rtnl_link_stats64 *ns;   /* netdev stats */
+ 	struct i40e_eth_stats *oes;
+ 	struct i40e_eth_stats *es;     /* device's eth stats */
+-	u32 tx_restart, tx_busy;
++	u64 tx_restart, tx_busy;
+ 	struct i40e_ring *p;
+-	u32 rx_page, rx_buf;
++	u64 rx_page, rx_buf;
+ 	u64 bytes, packets;
+ 	unsigned int start;
+ 	u64 tx_linearize;
+@@ -10574,15 +10580,9 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ 	}
+ 	i40e_get_oem_version(&pf->hw);
+ 
+-	if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) &&
+-	    ((hw->aq.fw_maj_ver == 4 && hw->aq.fw_min_ver <= 33) ||
+-	     hw->aq.fw_maj_ver < 4) && hw->mac.type == I40E_MAC_XL710) {
+-		/* The following delay is necessary for 4.33 firmware and older
+-		 * to recover after EMP reset. 200 ms should suffice but we
+-		 * put here 300 ms to be sure that FW is ready to operate
+-		 * after reset.
+-		 */
+-		mdelay(300);
++	if (test_and_clear_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state)) {
++		/* The following delay is necessary for firmware update. */
++		mdelay(1000);
+ 	}
+ 
+ 	/* re-verify the eeprom if we just had an EMP reset */
+@@ -11792,7 +11792,6 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
+ 		return -ENOMEM;
+ 
+ 	pf->irq_pile->num_entries = vectors;
+-	pf->irq_pile->search_hint = 0;
+ 
+ 	/* track first vector for misc interrupts, ignore return */
+ 	(void)i40e_get_lump(pf, pf->irq_pile, 1, I40E_PILE_VALID_BIT - 1);
+@@ -12595,7 +12594,6 @@ static int i40e_sw_init(struct i40e_pf *pf)
+ 		goto sw_init_done;
+ 	}
+ 	pf->qp_pile->num_entries = pf->hw.func_caps.num_tx_qp;
+-	pf->qp_pile->search_hint = 0;
+ 
+ 	pf->tx_timeout_recovery_level = 1;
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h
+index 8d0588a27a053..1908eed4fa5ee 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_register.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_register.h
+@@ -413,6 +413,9 @@
+ #define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
+ #define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1
+ #define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT)
++#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30
++#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT)
++#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+ #define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+ #define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0
+ #define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 048f1678ab8ac..c6f643e54c4f7 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1376,6 +1376,32 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf,
+ 	return aq_ret;
+ }
+ 
++/**
++ * i40e_sync_vfr_reset
++ * @hw: pointer to hw struct
++ * @vf_id: VF identifier
++ *
++ * Before trigger hardware reset, we need to know if no other process has
++ * reserved the hardware for any reset operations. This check is done by
++ * examining the status of the RSTAT1 register used to signal the reset.
++ **/
++static int i40e_sync_vfr_reset(struct i40e_hw *hw, int vf_id)
++{
++	u32 reg;
++	int i;
++
++	for (i = 0; i < I40E_VFR_WAIT_COUNT; i++) {
++		reg = rd32(hw, I40E_VFINT_ICR0_ENA(vf_id)) &
++			   I40E_VFINT_ICR0_ADMINQ_MASK;
++		if (reg)
++			return 0;
++
++		usleep_range(100, 200);
++	}
++
++	return -EAGAIN;
++}
++
+ /**
+  * i40e_trigger_vf_reset
+  * @vf: pointer to the VF structure
+@@ -1390,9 +1416,11 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_hw *hw = &pf->hw;
+ 	u32 reg, reg_idx, bit_idx;
++	bool vf_active;
++	u32 radq;
+ 
+ 	/* warn the VF */
+-	clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
++	vf_active = test_and_clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
+ 
+ 	/* Disable VF's configuration API during reset. The flag is re-enabled
+ 	 * in i40e_alloc_vf_res(), when it's safe again to access VF's VSI.
+@@ -1406,7 +1434,19 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
+ 	 * just need to clean up, so don't hit the VFRTRIG register.
+ 	 */
+ 	if (!flr) {
+-		/* reset VF using VPGEN_VFRTRIG reg */
++		/* Sync VFR reset before trigger next one */
++		radq = rd32(hw, I40E_VFINT_ICR0_ENA(vf->vf_id)) &
++			    I40E_VFINT_ICR0_ADMINQ_MASK;
++		if (vf_active && !radq)
++			/* waiting for finish reset by virtual driver */
++			if (i40e_sync_vfr_reset(hw, vf->vf_id))
++				dev_info(&pf->pdev->dev,
++					 "Reset VF %d never finished\n",
++				vf->vf_id);
++
++		/* Reset VF using VPGEN_VFRTRIG reg. It is also setting
++		 * in progress state in rstat1 register.
++		 */
+ 		reg = rd32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id));
+ 		reg |= I40E_VPGEN_VFRTRIG_VFSWR_MASK;
+ 		wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id), reg);
+@@ -2617,6 +2657,59 @@ error_param:
+ 				       aq_ret);
+ }
+ 
++/**
++ * i40e_check_enough_queue - find big enough queue number
++ * @vf: pointer to the VF info
++ * @needed: the number of items needed
++ *
++ * Returns the base item index of the queue, or negative for error
++ **/
++static int i40e_check_enough_queue(struct i40e_vf *vf, u16 needed)
++{
++	unsigned int  i, cur_queues, more, pool_size;
++	struct i40e_lump_tracking *pile;
++	struct i40e_pf *pf = vf->pf;
++	struct i40e_vsi *vsi;
++
++	vsi = pf->vsi[vf->lan_vsi_idx];
++	cur_queues = vsi->alloc_queue_pairs;
++
++	/* if current allocated queues are enough for need */
++	if (cur_queues >= needed)
++		return vsi->base_queue;
++
++	pile = pf->qp_pile;
++	if (cur_queues > 0) {
++		/* if the allocated queues are not zero
++		 * just check if there are enough queues for more
++		 * behind the allocated queues.
++		 */
++		more = needed - cur_queues;
++		for (i = vsi->base_queue + cur_queues;
++			i < pile->num_entries; i++) {
++			if (pile->list[i] & I40E_PILE_VALID_BIT)
++				break;
++
++			if (more-- == 1)
++				/* there is enough */
++				return vsi->base_queue;
++		}
++	}
++
++	pool_size = 0;
++	for (i = 0; i < pile->num_entries; i++) {
++		if (pile->list[i] & I40E_PILE_VALID_BIT) {
++			pool_size = 0;
++			continue;
++		}
++		if (needed <= ++pool_size)
++			/* there is enough */
++			return i;
++	}
++
++	return -ENOMEM;
++}
++
+ /**
+  * i40e_vc_request_queues_msg
+  * @vf: pointer to the VF info
+@@ -2651,6 +2744,12 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 			 req_pairs - cur_pairs,
+ 			 pf->queues_left);
+ 		vfres->num_queue_pairs = pf->queues_left + cur_pairs;
++	} else if (i40e_check_enough_queue(vf, req_pairs) < 0) {
++		dev_warn(&pf->pdev->dev,
++			 "VF %d requested %d more queues, but there is not enough for it.\n",
++			 vf->vf_id,
++			 req_pairs - cur_pairs);
++		vfres->num_queue_pairs = cur_pairs;
+ 	} else {
+ 		/* successful request */
+ 		vf->num_req_queues = req_pairs;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 49575a640a84c..03c42fd0fea19 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -19,6 +19,7 @@
+ #define I40E_MAX_VF_PROMISC_FLAGS	3
+ 
+ #define I40E_VF_STATE_WAIT_COUNT	20
++#define I40E_VFR_WAIT_COUNT		100
+ 
+ /* Various queue ctrls */
+ enum i40e_queue_ctrl {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 186d00a9ab35c..3631d612aaca1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -1570,6 +1570,8 @@ static struct mac_ops	cgx_mac_ops    = {
+ 	.mac_enadis_pause_frm =		cgx_lmac_enadis_pause_frm,
+ 	.mac_pause_frm_config =		cgx_lmac_pause_frm_config,
+ 	.mac_enadis_ptp_config =	cgx_lmac_ptp_config,
++	.mac_rx_tx_enable =		cgx_lmac_rx_tx_enable,
++	.mac_tx_enable =		cgx_lmac_tx_enable,
+ };
+ 
+ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+index fc6e7423cbd81..b33e7d1d0851c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+@@ -107,6 +107,9 @@ struct mac_ops {
+ 	void			(*mac_enadis_ptp_config)(void  *cgxd,
+ 							 int lmac_id,
+ 							 bool enable);
++
++	int			(*mac_rx_tx_enable)(void *cgxd, int lmac_id, bool enable);
++	int			(*mac_tx_enable)(void *cgxd, int lmac_id, bool enable);
+ };
+ 
+ struct cgx {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index 4e79e918a1617..58e2aeebc14f8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -732,6 +732,7 @@ enum nix_af_status {
+ 	NIX_AF_ERR_BANDPROF_INVAL_REQ  = -428,
+ 	NIX_AF_ERR_CQ_CTX_WRITE_ERR  = -429,
+ 	NIX_AF_ERR_AQ_CTX_RETRY_WRITE  = -430,
++	NIX_AF_ERR_LINK_CREDITS  = -431,
+ };
+ 
+ /* For NIX RX vtag action  */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h b/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h
+index 0fe7ad35e36fd..4180376fa6763 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h
+@@ -185,7 +185,6 @@ enum npc_kpu_parser_state {
+ 	NPC_S_KPU2_QINQ,
+ 	NPC_S_KPU2_ETAG,
+ 	NPC_S_KPU2_EXDSA,
+-	NPC_S_KPU2_NGIO,
+ 	NPC_S_KPU2_CPT_CTAG,
+ 	NPC_S_KPU2_CPT_QINQ,
+ 	NPC_S_KPU3_CTAG,
+@@ -212,6 +211,7 @@ enum npc_kpu_parser_state {
+ 	NPC_S_KPU5_NSH,
+ 	NPC_S_KPU5_CPT_IP,
+ 	NPC_S_KPU5_CPT_IP6,
++	NPC_S_KPU5_NGIO,
+ 	NPC_S_KPU6_IP6_EXT,
+ 	NPC_S_KPU6_IP6_HOP_DEST,
+ 	NPC_S_KPU6_IP6_ROUT,
+@@ -1120,15 +1120,6 @@ static struct npc_kpu_profile_cam kpu1_cam_entries[] = {
+ 		0x0000,
+ 		0x0000,
+ 	},
+-	{
+-		NPC_S_KPU1_ETHER, 0xff,
+-		NPC_ETYPE_CTAG,
+-		0xffff,
+-		NPC_ETYPE_NGIO,
+-		0xffff,
+-		0x0000,
+-		0x0000,
+-	},
+ 	{
+ 		NPC_S_KPU1_ETHER, 0xff,
+ 		NPC_ETYPE_CTAG,
+@@ -1966,6 +1957,15 @@ static struct npc_kpu_profile_cam kpu2_cam_entries[] = {
+ 		0x0000,
+ 		0x0000,
+ 	},
++	{
++		NPC_S_KPU2_CTAG, 0xff,
++		NPC_ETYPE_NGIO,
++		0xffff,
++		0x0000,
++		0x0000,
++		0x0000,
++		0x0000,
++	},
+ 	{
+ 		NPC_S_KPU2_CTAG, 0xff,
+ 		NPC_ETYPE_PPPOE,
+@@ -2749,15 +2749,6 @@ static struct npc_kpu_profile_cam kpu2_cam_entries[] = {
+ 		0x0000,
+ 		0x0000,
+ 	},
+-	{
+-		NPC_S_KPU2_NGIO, 0xff,
+-		0x0000,
+-		0x0000,
+-		0x0000,
+-		0x0000,
+-		0x0000,
+-		0x0000,
+-	},
+ 	{
+ 		NPC_S_KPU2_CPT_CTAG, 0xff,
+ 		NPC_ETYPE_IP,
+@@ -5089,6 +5080,15 @@ static struct npc_kpu_profile_cam kpu5_cam_entries[] = {
+ 		0x0000,
+ 		0x0000,
+ 	},
++	{
++		NPC_S_KPU5_NGIO, 0xff,
++		0x0000,
++		0x0000,
++		0x0000,
++		0x0000,
++		0x0000,
++		0x0000,
++	},
+ 	{
+ 		NPC_S_NA, 0X00,
+ 		0x0000,
+@@ -8422,14 +8422,6 @@ static struct npc_kpu_profile_action kpu1_action_entries[] = {
+ 		0,
+ 		0, 0, 0, 0,
+ 	},
+-	{
+-		NPC_ERRLEV_RE, NPC_EC_NOERR,
+-		8, 12, 0, 0, 0,
+-		NPC_S_KPU2_NGIO, 12, 1,
+-		NPC_LID_LA, NPC_LT_LA_ETHER,
+-		0,
+-		0, 0, 0, 0,
+-	},
+ 	{
+ 		NPC_ERRLEV_RE, NPC_EC_NOERR,
+ 		8, 12, 0, 0, 0,
+@@ -9194,6 +9186,14 @@ static struct npc_kpu_profile_action kpu2_action_entries[] = {
+ 		0,
+ 		0, 0, 0, 0,
+ 	},
++	{
++		NPC_ERRLEV_RE, NPC_EC_NOERR,
++		0, 0, 0, 2, 0,
++		NPC_S_KPU5_NGIO, 6, 1,
++		NPC_LID_LB, NPC_LT_LB_CTAG,
++		0,
++		0, 0, 0, 0,
++	},
+ 	{
+ 		NPC_ERRLEV_RE, NPC_EC_NOERR,
+ 		8, 0, 6, 2, 0,
+@@ -9890,14 +9890,6 @@ static struct npc_kpu_profile_action kpu2_action_entries[] = {
+ 		NPC_F_LB_U_UNK_ETYPE | NPC_F_LB_L_EXDSA,
+ 		0, 0, 0, 0,
+ 	},
+-	{
+-		NPC_ERRLEV_RE, NPC_EC_NOERR,
+-		0, 0, 0, 0, 1,
+-		NPC_S_NA, 0, 1,
+-		NPC_LID_LC, NPC_LT_LC_NGIO,
+-		0,
+-		0, 0, 0, 0,
+-	},
+ 	{
+ 		NPC_ERRLEV_RE, NPC_EC_NOERR,
+ 		8, 0, 6, 2, 0,
+@@ -11973,6 +11965,14 @@ static struct npc_kpu_profile_action kpu5_action_entries[] = {
+ 		0,
+ 		0, 0, 0, 0,
+ 	},
++	{
++		NPC_ERRLEV_RE, NPC_EC_NOERR,
++		0, 0, 0, 0, 1,
++		NPC_S_NA, 0, 1,
++		NPC_LID_LC, NPC_LT_LC_NGIO,
++		0,
++		0, 0, 0, 0,
++	},
+ 	{
+ 		NPC_ERRLEV_LC, NPC_EC_UNK,
+ 		0, 0, 0, 0, 1,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+index e695fa0e82a94..9ea2f6ac38ec1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+@@ -30,6 +30,8 @@ static struct mac_ops	rpm_mac_ops   = {
+ 	.mac_enadis_pause_frm =		rpm_lmac_enadis_pause_frm,
+ 	.mac_pause_frm_config =		rpm_lmac_pause_frm_config,
+ 	.mac_enadis_ptp_config =	rpm_lmac_ptp_config,
++	.mac_rx_tx_enable =		rpm_lmac_rx_tx_enable,
++	.mac_tx_enable =		rpm_lmac_tx_enable,
+ };
+ 
+ struct mac_ops *rpm_get_mac_ops(void)
+@@ -54,6 +56,43 @@ int rpm_get_nr_lmacs(void *rpmd)
+ 	return hweight8(rpm_read(rpm, 0, CGXX_CMRX_RX_LMACS) & 0xFULL);
+ }
+ 
++int rpm_lmac_tx_enable(void *rpmd, int lmac_id, bool enable)
++{
++	rpm_t *rpm = rpmd;
++	u64 cfg, last;
++
++	if (!is_lmac_valid(rpm, lmac_id))
++		return -ENODEV;
++
++	cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
++	last = cfg;
++	if (enable)
++		cfg |= RPM_TX_EN;
++	else
++		cfg &= ~(RPM_TX_EN);
++
++	if (cfg != last)
++		rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
++	return !!(last & RPM_TX_EN);
++}
++
++int rpm_lmac_rx_tx_enable(void *rpmd, int lmac_id, bool enable)
++{
++	rpm_t *rpm = rpmd;
++	u64 cfg;
++
++	if (!is_lmac_valid(rpm, lmac_id))
++		return -ENODEV;
++
++	cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
++	if (enable)
++		cfg |= RPM_RX_EN | RPM_TX_EN;
++	else
++		cfg &= ~(RPM_RX_EN | RPM_TX_EN);
++	rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
++	return 0;
++}
++
+ void rpm_lmac_enadis_rx_pause_fwding(void *rpmd, int lmac_id, bool enable)
+ {
+ 	rpm_t *rpm = rpmd;
+@@ -252,23 +291,20 @@ int rpm_lmac_internal_loopback(void *rpmd, int lmac_id, bool enable)
+ 	if (!rpm || lmac_id >= rpm->lmac_count)
+ 		return -ENODEV;
+ 	lmac_type = rpm->mac_ops->get_lmac_type(rpm, lmac_id);
+-	if (lmac_type == LMAC_MODE_100G_R) {
+-		cfg = rpm_read(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1);
+-
+-		if (enable)
+-			cfg |= RPMX_MTI_PCS_LBK;
+-		else
+-			cfg &= ~RPMX_MTI_PCS_LBK;
+-		rpm_write(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1, cfg);
+-	} else {
+-		cfg = rpm_read(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1);
+-		if (enable)
+-			cfg |= RPMX_MTI_PCS_LBK;
+-		else
+-			cfg &= ~RPMX_MTI_PCS_LBK;
+-		rpm_write(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1, cfg);
++
++	if (lmac_type == LMAC_MODE_QSGMII || lmac_type == LMAC_MODE_SGMII) {
++		dev_err(&rpm->pdev->dev, "loopback not supported for LPC mode\n");
++		return 0;
+ 	}
+ 
++	cfg = rpm_read(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1);
++
++	if (enable)
++		cfg |= RPMX_MTI_PCS_LBK;
++	else
++		cfg &= ~RPMX_MTI_PCS_LBK;
++	rpm_write(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1, cfg);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+index 57c8a687b488a..ff580311edd03 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+@@ -43,6 +43,8 @@
+ #define RPMX_MTI_STAT_DATA_HI_CDC            0x10038
+ 
+ #define RPM_LMAC_FWI			0xa
++#define RPM_TX_EN			BIT_ULL(0)
++#define RPM_RX_EN			BIT_ULL(1)
+ 
+ /* Function Declarations */
+ int rpm_get_nr_lmacs(void *rpmd);
+@@ -57,4 +59,6 @@ int rpm_lmac_enadis_pause_frm(void *rpmd, int lmac_id, u8 tx_pause,
+ int rpm_get_tx_stats(void *rpmd, int lmac_id, int idx, u64 *tx_stat);
+ int rpm_get_rx_stats(void *rpmd, int lmac_id, int idx, u64 *rx_stat);
+ void rpm_lmac_ptp_config(void *rpmd, int lmac_id, bool enable);
++int rpm_lmac_rx_tx_enable(void *rpmd, int lmac_id, bool enable);
++int rpm_lmac_tx_enable(void *rpmd, int lmac_id, bool enable);
+ #endif /* RPM_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 3ca6b942ebe25..54e1b27a7dfec 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -520,8 +520,11 @@ static void rvu_block_reset(struct rvu *rvu, int blkaddr, u64 rst_reg)
+ 
+ 	rvu_write64(rvu, blkaddr, rst_reg, BIT_ULL(0));
+ 	err = rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true);
+-	if (err)
+-		dev_err(rvu->dev, "HW block:%d reset failed\n", blkaddr);
++	if (err) {
++		dev_err(rvu->dev, "HW block:%d reset timeout retrying again\n", blkaddr);
++		while (rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true) == -EBUSY)
++			;
++	}
+ }
+ 
+ static void rvu_reset_all_blocks(struct rvu *rvu)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 66e45d733824e..5ed94cfb47d2d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -806,6 +806,7 @@ bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature);
+ u32  rvu_cgx_get_fifolen(struct rvu *rvu);
+ void *rvu_first_cgx_pdata(struct rvu *rvu);
+ int cgxlmac_to_pf(struct rvu *rvu, int cgx_id, int lmac_id);
++int rvu_cgx_config_tx(void *cgxd, int lmac_id, bool enable);
+ 
+ int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
+ 			     int type);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 2ca182a4ce823..8a7ac5a8b821d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -441,16 +441,26 @@ void rvu_cgx_enadis_rx_bp(struct rvu *rvu, int pf, bool enable)
+ int rvu_cgx_config_rxtx(struct rvu *rvu, u16 pcifunc, bool start)
+ {
+ 	int pf = rvu_get_pf(pcifunc);
++	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
++	void *cgxd;
+ 
+ 	if (!is_cgx_config_permitted(rvu, pcifunc))
+ 		return LMAC_AF_ERR_PERM_DENIED;
+ 
+ 	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
++	cgxd = rvu_cgx_pdata(cgx_id, rvu);
++	mac_ops = get_mac_ops(cgxd);
++
++	return mac_ops->mac_rx_tx_enable(cgxd, lmac_id, start);
++}
+ 
+-	cgx_lmac_rx_tx_enable(rvu_cgx_pdata(cgx_id, rvu), lmac_id, start);
++int rvu_cgx_config_tx(void *cgxd, int lmac_id, bool enable)
++{
++	struct mac_ops *mac_ops;
+ 
+-	return 0;
++	mac_ops = get_mac_ops(cgxd);
++	return mac_ops->mac_tx_enable(cgxd, lmac_id, enable);
+ }
+ 
+ void rvu_cgx_disable_dmac_entries(struct rvu *rvu, u16 pcifunc)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index a09a507369ac3..d1eddb769a419 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -1224,6 +1224,8 @@ static void print_nix_cn10k_sq_ctx(struct seq_file *m,
+ 	seq_printf(m, "W3: head_offset\t\t\t%d\nW3: smenq_next_sqb_vld\t\t%d\n\n",
+ 		   sq_ctx->head_offset, sq_ctx->smenq_next_sqb_vld);
+ 
++	seq_printf(m, "W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d\n",
++		   sq_ctx->smq_next_sq_vld, sq_ctx->smq_pend);
+ 	seq_printf(m, "W4: next_sqb \t\t\t%llx\n\n", sq_ctx->next_sqb);
+ 	seq_printf(m, "W5: tail_sqb \t\t\t%llx\n\n", sq_ctx->tail_sqb);
+ 	seq_printf(m, "W6: smenq_sqb \t\t\t%llx\n\n", sq_ctx->smenq_sqb);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index d8b1948aaa0ae..97fb61915379a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -512,11 +512,11 @@ static int rvu_nix_get_bpid(struct rvu *rvu, struct nix_bp_cfg_req *req,
+ 	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
+ 	lmac_chan_cnt = cfg & 0xFF;
+ 
+-	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
+-	sdp_chan_cnt = cfg & 0xFFF;
+-
+ 	cgx_bpid_cnt = hw->cgx_links * lmac_chan_cnt;
+ 	lbk_bpid_cnt = hw->lbk_links * ((cfg >> 16) & 0xFF);
++
++	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
++	sdp_chan_cnt = cfg & 0xFFF;
+ 	sdp_bpid_cnt = hw->sdp_links * sdp_chan_cnt;
+ 
+ 	pfvf = rvu_get_pfvf(rvu, req->hdr.pcifunc);
+@@ -2068,8 +2068,8 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+ 	/* enable cgx tx if disabled */
+ 	if (is_pf_cgxmapped(rvu, pf)) {
+ 		rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+-		restore_tx_en = !cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu),
+-						    lmac_id, true);
++		restore_tx_en = !rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu),
++						   lmac_id, true);
+ 	}
+ 
+ 	cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq));
+@@ -2092,7 +2092,7 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+ 	rvu_cgx_enadis_rx_bp(rvu, pf, true);
+ 	/* restore cgx tx state */
+ 	if (restore_tx_en)
+-		cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
++		rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
+ 	return err;
+ }
+ 
+@@ -3878,7 +3878,7 @@ nix_config_link_credits(struct rvu *rvu, int blkaddr, int link,
+ 	/* Enable cgx tx if disabled for credits to be back */
+ 	if (is_pf_cgxmapped(rvu, pf)) {
+ 		rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+-		restore_tx_en = !cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu),
++		restore_tx_en = !rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu),
+ 						    lmac_id, true);
+ 	}
+ 
+@@ -3891,8 +3891,8 @@ nix_config_link_credits(struct rvu *rvu, int blkaddr, int link,
+ 			    NIX_AF_TL1X_SW_XOFF(schq), BIT_ULL(0));
+ 	}
+ 
+-	rc = -EBUSY;
+-	poll_tmo = jiffies + usecs_to_jiffies(10000);
++	rc = NIX_AF_ERR_LINK_CREDITS;
++	poll_tmo = jiffies + usecs_to_jiffies(200000);
+ 	/* Wait for credits to return */
+ 	do {
+ 		if (time_after(jiffies, poll_tmo))
+@@ -3918,7 +3918,7 @@ exit:
+ 
+ 	/* Restore state of cgx tx */
+ 	if (restore_tx_en)
+-		cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
++		rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
+ 
+ 	mutex_unlock(&rvu->rsrc_lock);
+ 	return rc;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index c0005a1feee69..91f86d77cd41b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -402,6 +402,7 @@ static void npc_fixup_vf_rule(struct rvu *rvu, struct npc_mcam *mcam,
+ 			      int blkaddr, int index, struct mcam_entry *entry,
+ 			      bool *enable)
+ {
++	struct rvu_npc_mcam_rule *rule;
+ 	u16 owner, target_func;
+ 	struct rvu_pfvf *pfvf;
+ 	u64 rx_action;
+@@ -423,6 +424,12 @@ static void npc_fixup_vf_rule(struct rvu *rvu, struct npc_mcam *mcam,
+ 	      test_bit(NIXLF_INITIALIZED, &pfvf->flags)))
+ 		*enable = false;
+ 
++	/* fix up not needed for the rules added by user(ntuple filters) */
++	list_for_each_entry(rule, &mcam->mcam_rules, list) {
++		if (rule->entry == index)
++			return;
++	}
++
+ 	/* copy VF default entry action to the VF mcam entry */
+ 	rx_action = npc_get_default_entry_action(rvu, mcam, blkaddr,
+ 						 target_func);
+@@ -489,8 +496,8 @@ static void npc_config_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 	}
+ 
+ 	/* PF installing VF rule */
+-	if (intf == NIX_INTF_RX && actindex < mcam->bmap_entries)
+-		npc_fixup_vf_rule(rvu, mcam, blkaddr, index, entry, &enable);
++	if (is_npc_intf_rx(intf) && actindex < mcam->bmap_entries)
++		npc_fixup_vf_rule(rvu, mcam, blkaddr, actindex, entry, &enable);
+ 
+ 	/* Set 'action' */
+ 	rvu_write64(rvu, blkaddr,
+@@ -916,7 +923,8 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 				     int blkaddr, u16 pcifunc, u64 rx_action)
+ {
+ 	int actindex, index, bank, entry;
+-	bool enable;
++	struct rvu_npc_mcam_rule *rule;
++	bool enable, update;
+ 
+ 	if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+ 		return;
+@@ -924,6 +932,14 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 	mutex_lock(&mcam->lock);
+ 	for (index = 0; index < mcam->bmap_entries; index++) {
+ 		if (mcam->entry2target_pffunc[index] == pcifunc) {
++			update = true;
++			/* update not needed for the rules added via ntuple filters */
++			list_for_each_entry(rule, &mcam->mcam_rules, list) {
++				if (rule->entry == index)
++					update = false;
++			}
++			if (!update)
++				continue;
+ 			bank = npc_get_bank(mcam, index);
+ 			actindex = index;
+ 			entry = index & (mcam->banksize - 1);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+index ff2b21999f36f..19c53e591d0da 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+@@ -1098,14 +1098,6 @@ find_rule:
+ 		write_req.cntr = rule->cntr;
+ 	}
+ 
+-	err = rvu_mbox_handler_npc_mcam_write_entry(rvu, &write_req,
+-						    &write_rsp);
+-	if (err) {
+-		rvu_mcam_remove_counter_from_rule(rvu, owner, rule);
+-		if (new)
+-			kfree(rule);
+-		return err;
+-	}
+ 	/* update rule */
+ 	memcpy(&rule->packet, &dummy.packet, sizeof(rule->packet));
+ 	memcpy(&rule->mask, &dummy.mask, sizeof(rule->mask));
+@@ -1132,6 +1124,18 @@ find_rule:
+ 	if (req->default_rule)
+ 		pfvf->def_ucast_rule = rule;
+ 
++	/* write to mcam entry registers */
++	err = rvu_mbox_handler_npc_mcam_write_entry(rvu, &write_req,
++						    &write_rsp);
++	if (err) {
++		rvu_mcam_remove_counter_from_rule(rvu, owner, rule);
++		if (new) {
++			list_del(&rule->list);
++			kfree(rule);
++		}
++		return err;
++	}
++
+ 	/* VF's MAC address is being changed via PF  */
+ 	if (pf_set_vfs_mac) {
+ 		ether_addr_copy(pfvf->default_mac, req->packet.dmac);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 61e52812983fa..14509fc64cce9 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -603,6 +603,7 @@ static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura,
+ 			size++;
+ 		tar_addr |=  ((size - 1) & 0x7) << 4;
+ 	}
++	dma_wmb();
+ 	memcpy((u64 *)lmt_info->lmt_addr, ptrs, sizeof(u64) * num_ptrs);
+ 	/* Perform LMTST flush */
+ 	cn10k_lmt_flush(val, tar_addr);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 1e0d0c9c1dac3..ba7f6b295ca55 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -394,7 +394,12 @@ static int otx2_forward_vf_mbox_msgs(struct otx2_nic *pf,
+ 		dst_mdev->msg_size = mbox_hdr->msg_size;
+ 		dst_mdev->num_msgs = num_msgs;
+ 		err = otx2_sync_mbox_msg(dst_mbox);
+-		if (err) {
++		/* Error code -EIO indicate there is a communication failure
++		 * to the AF. Rest of the error codes indicate that AF processed
++		 * VF messages and set the error codes in response messages
++		 * (if any) so simply forward responses to VF.
++		 */
++		if (err == -EIO) {
+ 			dev_warn(pf->dev,
+ 				 "AF not responding to VF%d messages\n", vf);
+ 			/* restore PF mbase and exit */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
+index e2e0f977875d7..dde5b772a5af7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
+@@ -22,21 +22,21 @@
+ #define ETHER_CLK_SEL_RMII_CLK_EN BIT(2)
+ #define ETHER_CLK_SEL_RMII_CLK_RST BIT(3)
+ #define ETHER_CLK_SEL_DIV_SEL_2 BIT(4)
+-#define ETHER_CLK_SEL_DIV_SEL_20 BIT(0)
++#define ETHER_CLK_SEL_DIV_SEL_20 0
+ #define ETHER_CLK_SEL_FREQ_SEL_125M	(BIT(9) | BIT(8))
+ #define ETHER_CLK_SEL_FREQ_SEL_50M	BIT(9)
+ #define ETHER_CLK_SEL_FREQ_SEL_25M	BIT(8)
+ #define ETHER_CLK_SEL_FREQ_SEL_2P5M	0
+-#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN BIT(0)
++#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN 0
+ #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC BIT(10)
+ #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV BIT(11)
+-#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_IN  BIT(0)
++#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_IN  0
+ #define ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC BIT(12)
+ #define ETHER_CLK_SEL_RX_CLK_EXT_SEL_DIV BIT(13)
+-#define ETHER_CLK_SEL_TX_CLK_O_TX_I	 BIT(0)
++#define ETHER_CLK_SEL_TX_CLK_O_TX_I	 0
+ #define ETHER_CLK_SEL_TX_CLK_O_RMII_I	 BIT(14)
+ #define ETHER_CLK_SEL_TX_O_E_N_IN	 BIT(15)
+-#define ETHER_CLK_SEL_RMII_CLK_SEL_IN	 BIT(0)
++#define ETHER_CLK_SEL_RMII_CLK_SEL_IN	 0
+ #define ETHER_CLK_SEL_RMII_CLK_SEL_RX_C	 BIT(16)
+ 
+ #define ETHER_CLK_SEL_RX_TX_CLK_EN (ETHER_CLK_SEL_RX_CLK_EN | ETHER_CLK_SEL_TX_CLK_EN)
+@@ -96,31 +96,41 @@ static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed)
+ 	val |= ETHER_CLK_SEL_TX_O_E_N_IN;
+ 	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+ 
++	/* Set Clock-Mux, Start clock, Set TX_O direction */
+ 	switch (dwmac->phy_intf_sel) {
+ 	case ETHER_CONFIG_INTF_RGMII:
+ 		val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
++
++		val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
++
++		val &= ~ETHER_CLK_SEL_TX_O_E_N_IN;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+ 		break;
+ 	case ETHER_CONFIG_INTF_RMII:
+ 		val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_DIV |
+-			ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC | ETHER_CLK_SEL_TX_O_E_N_IN |
++			ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV | ETHER_CLK_SEL_TX_O_E_N_IN |
+ 			ETHER_CLK_SEL_RMII_CLK_SEL_RX_C;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
++
++		val |= ETHER_CLK_SEL_RMII_CLK_RST;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
++
++		val |= ETHER_CLK_SEL_RMII_CLK_EN | ETHER_CLK_SEL_RX_TX_CLK_EN;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+ 		break;
+ 	case ETHER_CONFIG_INTF_MII:
+ 	default:
+ 		val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC |
+-			ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV | ETHER_CLK_SEL_TX_O_E_N_IN |
+-			ETHER_CLK_SEL_RMII_CLK_EN;
++			ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC | ETHER_CLK_SEL_TX_O_E_N_IN;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
++
++		val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
++		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+ 		break;
+ 	}
+ 
+-	/* Start clock */
+-	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+-	val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
+-	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+-
+-	val &= ~ETHER_CLK_SEL_TX_O_E_N_IN;
+-	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+-
+ 	spin_unlock_irqrestore(&dwmac->lock, flags);
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index e81a79845d425..ac8e3b932bf1e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -899,6 +899,9 @@ static int stmmac_init_ptp(struct stmmac_priv *priv)
+ 	bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
+ 	int ret;
+ 
++	if (priv->plat->ptp_clk_freq_config)
++		priv->plat->ptp_clk_freq_config(priv);
++
+ 	ret = stmmac_init_tstamp_counter(priv, STMMAC_HWTS_ACTIVE);
+ 	if (ret)
+ 		return ret;
+@@ -921,8 +924,6 @@ static int stmmac_init_ptp(struct stmmac_priv *priv)
+ 	priv->hwts_tx_en = 0;
+ 	priv->hwts_rx_en = 0;
+ 
+-	stmmac_ptp_register(priv);
+-
+ 	return 0;
+ }
+ 
+@@ -3245,7 +3246,7 @@ static int stmmac_fpe_start_wq(struct stmmac_priv *priv)
+ /**
+  * stmmac_hw_setup - setup mac in a usable state.
+  *  @dev : pointer to the device structure.
+- *  @init_ptp: initialize PTP if set
++ *  @ptp_register: register PTP if set
+  *  Description:
+  *  this is the main function to setup the HW in a usable state because the
+  *  dma engine is reset, the core registers are configured (e.g. AXI,
+@@ -3255,7 +3256,7 @@ static int stmmac_fpe_start_wq(struct stmmac_priv *priv)
+  *  0 on success and an appropriate (-)ve integer as defined in errno.h
+  *  file on failure.
+  */
+-static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
++static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	u32 rx_cnt = priv->plat->rx_queues_to_use;
+@@ -3312,13 +3313,13 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+ 
+ 	stmmac_mmc_setup(priv);
+ 
+-	if (init_ptp) {
+-		ret = stmmac_init_ptp(priv);
+-		if (ret == -EOPNOTSUPP)
+-			netdev_warn(priv->dev, "PTP not supported by HW\n");
+-		else if (ret)
+-			netdev_warn(priv->dev, "PTP init failed\n");
+-	}
++	ret = stmmac_init_ptp(priv);
++	if (ret == -EOPNOTSUPP)
++		netdev_warn(priv->dev, "PTP not supported by HW\n");
++	else if (ret)
++		netdev_warn(priv->dev, "PTP init failed\n");
++	else if (ptp_register)
++		stmmac_ptp_register(priv);
+ 
+ 	priv->eee_tw_timer = STMMAC_DEFAULT_TWT_LS;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+index be9b58b2abf9b..ac8bc1c8614d3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+@@ -297,9 +297,6 @@ void stmmac_ptp_register(struct stmmac_priv *priv)
+ {
+ 	int i;
+ 
+-	if (priv->plat->ptp_clk_freq_config)
+-		priv->plat->ptp_clk_freq_config(priv);
+-
+ 	for (i = 0; i < priv->dma_cap.pps_out_num; i++) {
+ 		if (i >= STMMAC_PPS_MAX)
+ 			break;
+diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
+index 6bb5ac51d23c3..f8e591d69d2cb 100644
+--- a/drivers/net/ethernet/ti/cpsw_priv.c
++++ b/drivers/net/ethernet/ti/cpsw_priv.c
+@@ -1144,7 +1144,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
+ static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw,
+ 					       int size)
+ {
+-	struct page_pool_params pp_params;
++	struct page_pool_params pp_params = {};
+ 	struct page_pool *pool;
+ 
+ 	pp_params.order = 0;
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 6376b8485976c..980f2be32f05a 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -950,9 +950,7 @@ static int yam_siocdevprivate(struct net_device *dev, struct ifreq *ifr, void __
+ 		ym = memdup_user(data, sizeof(struct yamdrv_ioctl_mcs));
+ 		if (IS_ERR(ym))
+ 			return PTR_ERR(ym);
+-		if (ym->cmd != SIOCYAMSMCS)
+-			return -EINVAL;
+-		if (ym->bitrate > YAM_MAXBITRATE) {
++		if (ym->cmd != SIOCYAMSMCS || ym->bitrate > YAM_MAXBITRATE) {
+ 			kfree(ym);
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index bb5104ae46104..3c683e0e40e9e 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -854,6 +854,7 @@ static struct phy_driver broadcom_drivers[] = {
+ 	.phy_id_mask	= 0xfffffff0,
+ 	.name		= "Broadcom BCM54616S",
+ 	/* PHY_GBIT_FEATURES */
++	.soft_reset     = genphy_soft_reset,
+ 	.config_init	= bcm54xx_config_init,
+ 	.config_aneg	= bcm54616s_config_aneg,
+ 	.config_intr	= bcm_phy_config_intr,
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 74d8e1dc125f8..ce0bb5951b81e 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1746,6 +1746,9 @@ void phy_detach(struct phy_device *phydev)
+ 	    phy_driver_is_genphy_10g(phydev))
+ 		device_release_driver(&phydev->mdio.dev);
+ 
++	/* Assert the reset signal */
++	phy_device_reset(phydev, 1);
++
+ 	/*
+ 	 * The phydev might go away on the put_device() below, so avoid
+ 	 * a use-after-free bug by reading the underlying bus first.
+@@ -1757,9 +1760,6 @@ void phy_detach(struct phy_device *phydev)
+ 		ndev_owner = dev->dev.parent->driver->owner;
+ 	if (ndev_owner != bus->owner)
+ 		module_put(bus->owner);
+-
+-	/* Assert the reset signal */
+-	phy_device_reset(phydev, 1);
+ }
+ EXPORT_SYMBOL(phy_detach);
+ 
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index 0c6c0d1843bce..c1512c9925a66 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -651,6 +651,11 @@ struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode)
+ 	else if (ret < 0)
+ 		return ERR_PTR(ret);
+ 
++	if (!fwnode_device_is_available(ref.fwnode)) {
++		fwnode_handle_put(ref.fwnode);
++		return NULL;
++	}
++
+ 	bus = sfp_bus_get(ref.fwnode);
+ 	fwnode_handle_put(ref.fwnode);
+ 	if (!bus)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index fcbcfc9f5a04f..58be537adb1f1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -145,7 +145,7 @@ void mt7615_mcu_fill_msg(struct mt7615_dev *dev, struct sk_buff *skb,
+ 	mcu_txd->cid = mcu_cmd;
+ 	mcu_txd->ext_cid = FIELD_GET(__MCU_CMD_FIELD_EXT_ID, cmd);
+ 
+-	if (mcu_txd->ext_cid || (cmd & MCU_CE_PREFIX)) {
++	if (mcu_txd->ext_cid || (cmd & __MCU_CMD_FIELD_CE)) {
+ 		if (cmd & __MCU_CMD_FIELD_QUERY)
+ 			mcu_txd->set_query = MCU_Q_QUERY;
+ 		else
+@@ -193,7 +193,7 @@ int mt7615_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 		skb_pull(skb, sizeof(*rxd));
+ 		event = (struct mt7615_mcu_uni_event *)skb->data;
+ 		ret = le32_to_cpu(event->status);
+-	} else if (cmd == MCU_CMD_REG_READ) {
++	} else if (cmd == MCU_CE_QUERY(REG_READ)) {
+ 		struct mt7615_mcu_reg_event *event;
+ 
+ 		skb_pull(skb, sizeof(*rxd));
+@@ -2737,13 +2737,13 @@ int mt7615_mcu_set_bss_pm(struct mt7615_dev *dev, struct ieee80211_vif *vif,
+ 	if (vif->type != NL80211_IFTYPE_STATION)
+ 		return 0;
+ 
+-	err = mt76_mcu_send_msg(&dev->mt76, MCU_CMD_SET_BSS_ABORT, &req_hdr,
+-				sizeof(req_hdr), false);
++	err = mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_BSS_ABORT),
++				&req_hdr, sizeof(req_hdr), false);
+ 	if (err < 0 || !enable)
+ 		return err;
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_CMD_SET_BSS_CONNECTED, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_BSS_CONNECTED),
++				 &req, sizeof(req), false);
+ }
+ 
+ int mt7615_mcu_set_roc(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+@@ -2762,6 +2762,6 @@ int mt7615_mcu_set_roc(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 
+ 	phy->roc_grant = false;
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_CMD_SET_ROC, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_ROC),
++				 &req, sizeof(req), false);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 7733c8fad2413..1fb8432aa27ca 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -160,7 +160,8 @@ int mt76_connac_mcu_set_channel_domain(struct mt76_phy *phy)
+ 
+ 	memcpy(__skb_push(skb, sizeof(hdr)), &hdr, sizeof(hdr));
+ 
+-	return mt76_mcu_skb_send_msg(dev, skb, MCU_CMD_SET_CHAN_DOMAIN, false);
++	return mt76_mcu_skb_send_msg(dev, skb, MCU_CE_CMD(SET_CHAN_DOMAIN),
++				     false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_channel_domain);
+ 
+@@ -198,8 +199,8 @@ int mt76_connac_mcu_set_vif_ps(struct mt76_dev *dev, struct ieee80211_vif *vif)
+ 	if (vif->type != NL80211_IFTYPE_STATION)
+ 		return -EOPNOTSUPP;
+ 
+-	return mt76_mcu_send_msg(dev, MCU_CMD_SET_PS_PROFILE, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(dev, MCU_CE_CMD(SET_PS_PROFILE),
++				 &req, sizeof(req), false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_vif_ps);
+ 
+@@ -1523,7 +1524,8 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		req->scan_func |= SCAN_FUNC_RANDOM_MAC;
+ 	}
+ 
+-	err = mt76_mcu_skb_send_msg(mdev, skb, MCU_CMD_START_HW_SCAN, false);
++	err = mt76_mcu_skb_send_msg(mdev, skb, MCU_CE_CMD(START_HW_SCAN),
++				    false);
+ 	if (err < 0)
+ 		clear_bit(MT76_HW_SCANNING, &phy->state);
+ 
+@@ -1551,8 +1553,8 @@ int mt76_connac_mcu_cancel_hw_scan(struct mt76_phy *phy,
+ 		ieee80211_scan_completed(phy->hw, &info);
+ 	}
+ 
+-	return mt76_mcu_send_msg(phy->dev, MCU_CMD_CANCEL_HW_SCAN, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(phy->dev, MCU_CE_CMD(CANCEL_HW_SCAN),
++				 &req, sizeof(req), false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_cancel_hw_scan);
+ 
+@@ -1638,7 +1640,8 @@ int mt76_connac_mcu_sched_scan_req(struct mt76_phy *phy,
+ 		memcpy(skb_put(skb, sreq->ie_len), sreq->ie, sreq->ie_len);
+ 	}
+ 
+-	return mt76_mcu_skb_send_msg(mdev, skb, MCU_CMD_SCHED_SCAN_REQ, false);
++	return mt76_mcu_skb_send_msg(mdev, skb, MCU_CE_CMD(SCHED_SCAN_REQ),
++				     false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_sched_scan_req);
+ 
+@@ -1658,8 +1661,8 @@ int mt76_connac_mcu_sched_scan_enable(struct mt76_phy *phy,
+ 	else
+ 		clear_bit(MT76_HW_SCHED_SCANNING, &phy->state);
+ 
+-	return mt76_mcu_send_msg(phy->dev, MCU_CMD_SCHED_SCAN_ENABLE, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(phy->dev, MCU_CE_CMD(SCHED_SCAN_ENABLE),
++				 &req, sizeof(req), false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_sched_scan_enable);
+ 
+@@ -1671,8 +1674,8 @@ int mt76_connac_mcu_chip_config(struct mt76_dev *dev)
+ 
+ 	memcpy(req.data, "assert", 7);
+ 
+-	return mt76_mcu_send_msg(dev, MCU_CMD_CHIP_CONFIG, &req, sizeof(req),
+-				 false);
++	return mt76_mcu_send_msg(dev, MCU_CE_CMD(CHIP_CONFIG),
++				 &req, sizeof(req), false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_chip_config);
+ 
+@@ -1684,8 +1687,8 @@ int mt76_connac_mcu_set_deep_sleep(struct mt76_dev *dev, bool enable)
+ 
+ 	snprintf(req.data, sizeof(req.data), "KeepFullPwr %d", !enable);
+ 
+-	return mt76_mcu_send_msg(dev, MCU_CMD_CHIP_CONFIG, &req, sizeof(req),
+-				 false);
++	return mt76_mcu_send_msg(dev, MCU_CE_CMD(CHIP_CONFIG),
++				 &req, sizeof(req), false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_deep_sleep);
+ 
+@@ -1787,8 +1790,8 @@ int mt76_connac_mcu_get_nic_capability(struct mt76_phy *phy)
+ 	struct sk_buff *skb;
+ 	int ret, i;
+ 
+-	ret = mt76_mcu_send_and_get_msg(phy->dev, MCU_CMD_GET_NIC_CAPAB, NULL,
+-					0, true, &skb);
++	ret = mt76_mcu_send_and_get_msg(phy->dev, MCU_CE_CMD(GET_NIC_CAPAB),
++					NULL, 0, true, &skb);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2071,7 +2074,8 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
+ 		memcpy(skb->data, &tx_power_tlv, sizeof(tx_power_tlv));
+ 
+ 		err = mt76_mcu_skb_send_msg(dev, skb,
+-					    MCU_CMD_SET_RATE_TX_POWER, false);
++					    MCU_CE_CMD(SET_RATE_TX_POWER),
++					    false);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -2163,8 +2167,8 @@ int mt76_connac_mcu_set_p2p_oppps(struct ieee80211_hw *hw,
+ 		.bss_idx = mvif->idx,
+ 	};
+ 
+-	return mt76_mcu_send_msg(phy->dev, MCU_CMD_SET_P2P_OPPPS, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(phy->dev, MCU_CE_CMD(SET_P2P_OPPPS),
++				 &req, sizeof(req), false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_p2p_oppps);
+ 
+@@ -2490,8 +2494,8 @@ u32 mt76_connac_mcu_reg_rr(struct mt76_dev *dev, u32 offset)
+ 		.addr = cpu_to_le32(offset),
+ 	};
+ 
+-	return mt76_mcu_send_msg(dev, MCU_CMD_REG_READ, &req, sizeof(req),
+-				 true);
++	return mt76_mcu_send_msg(dev, MCU_CE_QUERY(REG_READ), &req,
++				 sizeof(req), true);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_reg_rr);
+ 
+@@ -2505,7 +2509,8 @@ void mt76_connac_mcu_reg_wr(struct mt76_dev *dev, u32 offset, u32 val)
+ 		.val = cpu_to_le32(val),
+ 	};
+ 
+-	mt76_mcu_send_msg(dev, MCU_CMD_REG_WRITE, &req, sizeof(req), false);
++	mt76_mcu_send_msg(dev, MCU_CE_CMD(REG_WRITE), &req,
++			  sizeof(req), false);
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_mcu_reg_wr);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 5c5fab9154e59..acb9a286d3546 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -496,13 +496,11 @@ enum {
+ #define MCU_CMD_UNI_EXT_ACK			(MCU_CMD_ACK | MCU_CMD_UNI | \
+ 						 MCU_CMD_QUERY)
+ 
+-#define MCU_CE_PREFIX				BIT(29)
+-#define MCU_CMD_MASK				~(MCU_CE_PREFIX)
+-
+ #define __MCU_CMD_FIELD_ID			GENMASK(7, 0)
+ #define __MCU_CMD_FIELD_EXT_ID			GENMASK(15, 8)
+ #define __MCU_CMD_FIELD_QUERY			BIT(16)
+ #define __MCU_CMD_FIELD_UNI			BIT(17)
++#define __MCU_CMD_FIELD_CE			BIT(18)
+ 
+ #define MCU_CMD(_t)				FIELD_PREP(__MCU_CMD_FIELD_ID,		\
+ 							   MCU_CMD_##_t)
+@@ -513,6 +511,10 @@ enum {
+ #define MCU_UNI_CMD(_t)				(__MCU_CMD_FIELD_UNI |			\
+ 						 FIELD_PREP(__MCU_CMD_FIELD_ID,		\
+ 							    MCU_UNI_CMD_##_t))
++#define MCU_CE_CMD(_t)				(__MCU_CMD_FIELD_CE |			\
++						 FIELD_PREP(__MCU_CMD_FIELD_ID,		\
++							   MCU_CE_CMD_##_t))
++#define MCU_CE_QUERY(_t)			(MCU_CE_CMD(_t) | __MCU_CMD_FIELD_QUERY)
+ 
+ enum {
+ 	MCU_EXT_CMD_EFUSE_ACCESS = 0x01,
+@@ -589,26 +591,26 @@ enum {
+ 
+ /* offload mcu commands */
+ enum {
+-	MCU_CMD_TEST_CTRL = MCU_CE_PREFIX | 0x01,
+-	MCU_CMD_START_HW_SCAN = MCU_CE_PREFIX | 0x03,
+-	MCU_CMD_SET_PS_PROFILE = MCU_CE_PREFIX | 0x05,
+-	MCU_CMD_SET_CHAN_DOMAIN = MCU_CE_PREFIX | 0x0f,
+-	MCU_CMD_SET_BSS_CONNECTED = MCU_CE_PREFIX | 0x16,
+-	MCU_CMD_SET_BSS_ABORT = MCU_CE_PREFIX | 0x17,
+-	MCU_CMD_CANCEL_HW_SCAN = MCU_CE_PREFIX | 0x1b,
+-	MCU_CMD_SET_ROC = MCU_CE_PREFIX | 0x1d,
+-	MCU_CMD_SET_P2P_OPPPS = MCU_CE_PREFIX | 0x33,
+-	MCU_CMD_SET_RATE_TX_POWER = MCU_CE_PREFIX | 0x5d,
+-	MCU_CMD_SCHED_SCAN_ENABLE = MCU_CE_PREFIX | 0x61,
+-	MCU_CMD_SCHED_SCAN_REQ = MCU_CE_PREFIX | 0x62,
+-	MCU_CMD_GET_NIC_CAPAB = MCU_CE_PREFIX | 0x8a,
+-	MCU_CMD_SET_MU_EDCA_PARMS = MCU_CE_PREFIX | 0xb0,
+-	MCU_CMD_REG_WRITE = MCU_CE_PREFIX | 0xc0,
+-	MCU_CMD_REG_READ = MCU_CE_PREFIX | __MCU_CMD_FIELD_QUERY | 0xc0,
+-	MCU_CMD_CHIP_CONFIG = MCU_CE_PREFIX | 0xca,
+-	MCU_CMD_FWLOG_2_HOST = MCU_CE_PREFIX | 0xc5,
+-	MCU_CMD_GET_WTBL = MCU_CE_PREFIX | 0xcd,
+-	MCU_CMD_GET_TXPWR = MCU_CE_PREFIX | 0xd0,
++	MCU_CE_CMD_TEST_CTRL = 0x01,
++	MCU_CE_CMD_START_HW_SCAN = 0x03,
++	MCU_CE_CMD_SET_PS_PROFILE = 0x05,
++	MCU_CE_CMD_SET_CHAN_DOMAIN = 0x0f,
++	MCU_CE_CMD_SET_BSS_CONNECTED = 0x16,
++	MCU_CE_CMD_SET_BSS_ABORT = 0x17,
++	MCU_CE_CMD_CANCEL_HW_SCAN = 0x1b,
++	MCU_CE_CMD_SET_ROC = 0x1d,
++	MCU_CE_CMD_SET_P2P_OPPPS = 0x33,
++	MCU_CE_CMD_SET_RATE_TX_POWER = 0x5d,
++	MCU_CE_CMD_SCHED_SCAN_ENABLE = 0x61,
++	MCU_CE_CMD_SCHED_SCAN_REQ = 0x62,
++	MCU_CE_CMD_GET_NIC_CAPAB = 0x8a,
++	MCU_CE_CMD_SET_MU_EDCA_PARMS = 0xb0,
++	MCU_CE_CMD_REG_WRITE = 0xc0,
++	MCU_CE_CMD_REG_READ = 0xc0,
++	MCU_CE_CMD_CHIP_CONFIG = 0xca,
++	MCU_CE_CMD_FWLOG_2_HOST = 0xc5,
++	MCU_CE_CMD_GET_WTBL = 0xcd,
++	MCU_CE_CMD_GET_TXPWR = 0xd0,
+ };
+ 
+ enum {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 484a8c57b862e..4c6adbb969550 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -163,8 +163,8 @@ mt7921_mcu_parse_eeprom(struct mt76_dev *dev, struct sk_buff *skb)
+ int mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 			      struct sk_buff *skb, int seq)
+ {
++	int mcu_cmd = FIELD_GET(__MCU_CMD_FIELD_ID, cmd);
+ 	struct mt7921_mcu_rxd *rxd;
+-	int mcu_cmd = cmd & MCU_CMD_MASK;
+ 	int ret = 0;
+ 
+ 	if (!skb) {
+@@ -201,7 +201,7 @@ int mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 		/* skip invalid event */
+ 		if (mcu_cmd != event->cid)
+ 			ret = -EAGAIN;
+-	} else if (cmd == MCU_CMD_REG_READ) {
++	} else if (cmd == MCU_CE_QUERY(REG_READ)) {
+ 		struct mt7921_mcu_reg_event *event;
+ 
+ 		skb_pull(skb, sizeof(*rxd));
+@@ -274,7 +274,7 @@ int mt7921_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 	mcu_txd->s2d_index = MCU_S2D_H2N;
+ 	mcu_txd->ext_cid = FIELD_GET(__MCU_CMD_FIELD_EXT_ID, cmd);
+ 
+-	if (mcu_txd->ext_cid || (cmd & MCU_CE_PREFIX)) {
++	if (mcu_txd->ext_cid || (cmd & __MCU_CMD_FIELD_CE)) {
+ 		if (cmd & __MCU_CMD_FIELD_QUERY)
+ 			mcu_txd->set_query = MCU_Q_QUERY;
+ 		else
+@@ -883,8 +883,8 @@ int mt7921_mcu_fw_log_2_host(struct mt7921_dev *dev, u8 ctrl)
+ 		.ctrl_val = ctrl
+ 	};
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_CMD_FWLOG_2_HOST, &data,
+-				 sizeof(data), false);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(FWLOG_2_HOST),
++				 &data, sizeof(data), false);
+ }
+ 
+ int mt7921_run_firmware(struct mt7921_dev *dev)
+@@ -1009,8 +1009,8 @@ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ 		e->timer = q->mu_edca_timer;
+ 	}
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_CMD_SET_MU_EDCA_PARMS, &req_mu,
+-				 sizeof(req_mu), false);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_MU_EDCA_PARMS),
++				 &req_mu, sizeof(req_mu), false);
+ }
+ 
+ int mt7921_mcu_set_chan_info(struct mt7921_phy *phy, int cmd)
+@@ -1214,13 +1214,13 @@ mt7921_mcu_set_bss_pm(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ 	if (vif->type != NL80211_IFTYPE_STATION)
+ 		return 0;
+ 
+-	err = mt76_mcu_send_msg(&dev->mt76, MCU_CMD_SET_BSS_ABORT, &req_hdr,
+-				sizeof(req_hdr), false);
++	err = mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_BSS_ABORT),
++				&req_hdr, sizeof(req_hdr), false);
+ 	if (err < 0 || !enable)
+ 		return err;
+ 
+-	return mt76_mcu_send_msg(&dev->mt76, MCU_CMD_SET_BSS_CONNECTED, &req,
+-				 sizeof(req), false);
++	return mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_BSS_CONNECTED),
++				 &req, sizeof(req), false);
+ }
+ 
+ int mt7921_mcu_sta_update(struct mt7921_dev *dev, struct ieee80211_sta *sta,
+@@ -1330,7 +1330,7 @@ int mt7921_get_txpwr_info(struct mt7921_dev *dev, struct mt7921_txpwr *txpwr)
+ 	struct sk_buff *skb;
+ 	int ret;
+ 
+-	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_CMD_GET_TXPWR,
++	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_CE_CMD(GET_TXPWR),
+ 					&req, sizeof(req), true, &skb);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c b/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c
+index 8bd43879dd6fe..bdec8684ce94c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/testmode.c
+@@ -66,7 +66,7 @@ mt7921_tm_set(struct mt7921_dev *dev, struct mt7921_tm_cmd *req)
+ 	if (!mt76_testmode_enabled(phy))
+ 		goto out;
+ 
+-	ret = mt76_mcu_send_msg(&dev->mt76, MCU_CMD_TEST_CTRL, &cmd,
++	ret = mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(TEST_CTRL), &cmd,
+ 				sizeof(cmd), false);
+ 	if (ret)
+ 		goto out;
+@@ -95,7 +95,7 @@ mt7921_tm_query(struct mt7921_dev *dev, struct mt7921_tm_cmd *req,
+ 	struct sk_buff *skb;
+ 	int ret;
+ 
+-	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_CMD_TEST_CTRL,
++	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_CE_CMD(TEST_CTRL),
+ 					&cmd, sizeof(cmd), true, &skb);
+ 	if (ret)
+ 		goto out;
+diff --git a/drivers/pci/controller/pcie-mt7621.c b/drivers/pci/controller/pcie-mt7621.c
+index 73b91315c1656..6ec53aa81b920 100644
+--- a/drivers/pci/controller/pcie-mt7621.c
++++ b/drivers/pci/controller/pcie-mt7621.c
+@@ -109,15 +109,6 @@ static inline void pcie_write(struct mt7621_pcie *pcie, u32 val, u32 reg)
+ 	writel_relaxed(val, pcie->base + reg);
+ }
+ 
+-static inline void pcie_rmw(struct mt7621_pcie *pcie, u32 reg, u32 clr, u32 set)
+-{
+-	u32 val = readl_relaxed(pcie->base + reg);
+-
+-	val &= ~clr;
+-	val |= set;
+-	writel_relaxed(val, pcie->base + reg);
+-}
+-
+ static inline u32 pcie_port_read(struct mt7621_pcie_port *port, u32 reg)
+ {
+ 	return readl_relaxed(port->base + reg);
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index f2e961f998ca2..341156e2a29b9 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -180,6 +180,7 @@ config QCOM_Q6V5_ADSP
+ 	depends on RPMSG_QCOM_GLINK_SMEM || RPMSG_QCOM_GLINK_SMEM=n
+ 	depends on QCOM_SYSMON || QCOM_SYSMON=n
+ 	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
++	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+ 	select MFD_SYSCON
+ 	select QCOM_PIL_INFO
+ 	select QCOM_MDT_LOADER
+@@ -199,6 +200,7 @@ config QCOM_Q6V5_MSS
+ 	depends on RPMSG_QCOM_GLINK_SMEM || RPMSG_QCOM_GLINK_SMEM=n
+ 	depends on QCOM_SYSMON || QCOM_SYSMON=n
+ 	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
++	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+ 	select MFD_SYSCON
+ 	select QCOM_MDT_LOADER
+ 	select QCOM_PIL_INFO
+@@ -218,6 +220,7 @@ config QCOM_Q6V5_PAS
+ 	depends on RPMSG_QCOM_GLINK_SMEM || RPMSG_QCOM_GLINK_SMEM=n
+ 	depends on QCOM_SYSMON || QCOM_SYSMON=n
+ 	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
++	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+ 	select MFD_SYSCON
+ 	select QCOM_PIL_INFO
+ 	select QCOM_MDT_LOADER
+@@ -239,6 +242,7 @@ config QCOM_Q6V5_WCSS
+ 	depends on RPMSG_QCOM_GLINK_SMEM || RPMSG_QCOM_GLINK_SMEM=n
+ 	depends on QCOM_SYSMON || QCOM_SYSMON=n
+ 	depends on RPMSG_QCOM_GLINK || RPMSG_QCOM_GLINK=n
++	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+ 	select MFD_SYSCON
+ 	select QCOM_MDT_LOADER
+ 	select QCOM_PIL_INFO
+diff --git a/drivers/remoteproc/qcom_q6v5.c b/drivers/remoteproc/qcom_q6v5.c
+index eada7e34f3af5..442a388f81028 100644
+--- a/drivers/remoteproc/qcom_q6v5.c
++++ b/drivers/remoteproc/qcom_q6v5.c
+@@ -10,6 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
++#include <linux/soc/qcom/qcom_aoss.h>
+ #include <linux/soc/qcom/smem.h>
+ #include <linux/soc/qcom/smem_state.h>
+ #include <linux/remoteproc.h>
+diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
+index b5907b80727cc..3ec34ad0e893d 100644
+--- a/drivers/rpmsg/rpmsg_char.c
++++ b/drivers/rpmsg/rpmsg_char.c
+@@ -90,7 +90,7 @@ static int rpmsg_eptdev_destroy(struct device *dev, void *data)
+ 	/* wake up any blocked readers */
+ 	wake_up_interruptible(&eptdev->readq);
+ 
+-	device_del(&eptdev->dev);
++	cdev_device_del(&eptdev->cdev, &eptdev->dev);
+ 	put_device(&eptdev->dev);
+ 
+ 	return 0;
+@@ -333,7 +333,6 @@ static void rpmsg_eptdev_release_device(struct device *dev)
+ 
+ 	ida_simple_remove(&rpmsg_ept_ida, dev->id);
+ 	ida_simple_remove(&rpmsg_minor_ida, MINOR(eptdev->dev.devt));
+-	cdev_del(&eptdev->cdev);
+ 	kfree(eptdev);
+ }
+ 
+@@ -378,19 +377,13 @@ static int rpmsg_eptdev_create(struct rpmsg_ctrldev *ctrldev,
+ 	dev->id = ret;
+ 	dev_set_name(dev, "rpmsg%d", ret);
+ 
+-	ret = cdev_add(&eptdev->cdev, dev->devt, 1);
++	ret = cdev_device_add(&eptdev->cdev, &eptdev->dev);
+ 	if (ret)
+ 		goto free_ept_ida;
+ 
+ 	/* We can now rely on the release function for cleanup */
+ 	dev->release = rpmsg_eptdev_release_device;
+ 
+-	ret = device_add(dev);
+-	if (ret) {
+-		dev_err(dev, "device_add failed: %d\n", ret);
+-		put_device(dev);
+-	}
+-
+ 	return ret;
+ 
+ free_ept_ida:
+@@ -459,7 +452,6 @@ static void rpmsg_ctrldev_release_device(struct device *dev)
+ 
+ 	ida_simple_remove(&rpmsg_ctrl_ida, dev->id);
+ 	ida_simple_remove(&rpmsg_minor_ida, MINOR(dev->devt));
+-	cdev_del(&ctrldev->cdev);
+ 	kfree(ctrldev);
+ }
+ 
+@@ -494,19 +486,13 @@ static int rpmsg_chrdev_probe(struct rpmsg_device *rpdev)
+ 	dev->id = ret;
+ 	dev_set_name(&ctrldev->dev, "rpmsg_ctrl%d", ret);
+ 
+-	ret = cdev_add(&ctrldev->cdev, dev->devt, 1);
++	ret = cdev_device_add(&ctrldev->cdev, &ctrldev->dev);
+ 	if (ret)
+ 		goto free_ctrl_ida;
+ 
+ 	/* We can now rely on the release function for cleanup */
+ 	dev->release = rpmsg_ctrldev_release_device;
+ 
+-	ret = device_add(dev);
+-	if (ret) {
+-		dev_err(&rpdev->dev, "device_add failed: %d\n", ret);
+-		put_device(dev);
+-	}
+-
+ 	dev_set_drvdata(&rpdev->dev, ctrldev);
+ 
+ 	return ret;
+@@ -532,7 +518,7 @@ static void rpmsg_chrdev_remove(struct rpmsg_device *rpdev)
+ 	if (ret)
+ 		dev_warn(&rpdev->dev, "failed to nuke endpoints: %d\n", ret);
+ 
+-	device_del(&ctrldev->dev);
++	cdev_device_del(&ctrldev->cdev, &ctrldev->dev);
+ 	put_device(&ctrldev->dev);
+ }
+ 
+diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
+index d24cafe02708f..511bf8e0a436c 100644
+--- a/drivers/s390/scsi/zfcp_fc.c
++++ b/drivers/s390/scsi/zfcp_fc.c
+@@ -521,6 +521,8 @@ static void zfcp_fc_adisc_handler(void *data)
+ 		goto out;
+ 	}
+ 
++	/* re-init to undo drop from zfcp_fc_adisc() */
++	port->d_id = ntoh24(adisc_resp->adisc_port_id);
+ 	/* port is good, unblock rport without going through erp */
+ 	zfcp_scsi_schedule_rport_register(port);
+  out:
+@@ -534,6 +536,7 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
+ 	struct zfcp_fc_req *fc_req;
+ 	struct zfcp_adapter *adapter = port->adapter;
+ 	struct Scsi_Host *shost = adapter->scsi_host;
++	u32 d_id;
+ 	int ret;
+ 
+ 	fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_ATOMIC);
+@@ -558,7 +561,15 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
+ 	fc_req->u.adisc.req.adisc_cmd = ELS_ADISC;
+ 	hton24(fc_req->u.adisc.req.adisc_port_id, fc_host_port_id(shost));
+ 
+-	ret = zfcp_fsf_send_els(adapter, port->d_id, &fc_req->ct_els,
++	d_id = port->d_id; /* remember as destination for send els below */
++	/*
++	 * Force fresh GID_PN lookup on next port recovery.
++	 * Must happen after request setup and before sending request,
++	 * to prevent race with port->d_id re-init in zfcp_fc_adisc_handler().
++	 */
++	port->d_id = 0;
++
++	ret = zfcp_fsf_send_els(adapter, d_id, &fc_req->ct_els,
+ 				ZFCP_FC_CTELS_TMO);
+ 	if (ret)
+ 		kmem_cache_free(zfcp_fc_req_cache, fc_req);
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 71fa62bd30830..9be273c320e21 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -82,7 +82,7 @@ static int bnx2fc_bind_pcidev(struct bnx2fc_hba *hba);
+ static void bnx2fc_unbind_pcidev(struct bnx2fc_hba *hba);
+ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
+ 				  struct device *parent, int npiv);
+-static void bnx2fc_destroy_work(struct work_struct *work);
++static void bnx2fc_port_destroy(struct fcoe_port *port);
+ 
+ static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device *phys_dev);
+ static struct bnx2fc_interface *bnx2fc_interface_lookup(struct net_device
+@@ -907,9 +907,6 @@ static void bnx2fc_indicate_netevent(void *context, unsigned long event,
+ 				__bnx2fc_destroy(interface);
+ 		}
+ 		mutex_unlock(&bnx2fc_dev_lock);
+-
+-		/* Ensure ALL destroy work has been completed before return */
+-		flush_workqueue(bnx2fc_wq);
+ 		return;
+ 
+ 	default:
+@@ -1215,8 +1212,8 @@ static int bnx2fc_vport_destroy(struct fc_vport *vport)
+ 	mutex_unlock(&n_port->lp_mutex);
+ 	bnx2fc_free_vport(interface->hba, port->lport);
+ 	bnx2fc_port_shutdown(port->lport);
++	bnx2fc_port_destroy(port);
+ 	bnx2fc_interface_put(interface);
+-	queue_work(bnx2fc_wq, &port->destroy_work);
+ 	return 0;
+ }
+ 
+@@ -1525,7 +1522,6 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
+ 	port->lport = lport;
+ 	port->priv = interface;
+ 	port->get_netdev = bnx2fc_netdev;
+-	INIT_WORK(&port->destroy_work, bnx2fc_destroy_work);
+ 
+ 	/* Configure fcoe_port */
+ 	rc = bnx2fc_lport_config(lport);
+@@ -1653,8 +1649,8 @@ static void __bnx2fc_destroy(struct bnx2fc_interface *interface)
+ 	bnx2fc_interface_cleanup(interface);
+ 	bnx2fc_stop(interface);
+ 	list_del(&interface->list);
++	bnx2fc_port_destroy(port);
+ 	bnx2fc_interface_put(interface);
+-	queue_work(bnx2fc_wq, &port->destroy_work);
+ }
+ 
+ /**
+@@ -1694,15 +1690,12 @@ netdev_err:
+ 	return rc;
+ }
+ 
+-static void bnx2fc_destroy_work(struct work_struct *work)
++static void bnx2fc_port_destroy(struct fcoe_port *port)
+ {
+-	struct fcoe_port *port;
+ 	struct fc_lport *lport;
+ 
+-	port = container_of(work, struct fcoe_port, destroy_work);
+ 	lport = port->lport;
+-
+-	BNX2FC_HBA_DBG(lport, "Entered bnx2fc_destroy_work\n");
++	BNX2FC_HBA_DBG(lport, "Entered %s, destroying lport %p\n", __func__, lport);
+ 
+ 	bnx2fc_if_destroy(lport);
+ }
+@@ -2556,9 +2549,6 @@ static void bnx2fc_ulp_exit(struct cnic_dev *dev)
+ 			__bnx2fc_destroy(interface);
+ 	mutex_unlock(&bnx2fc_dev_lock);
+ 
+-	/* Ensure ALL destroy work has been completed before return */
+-	flush_workqueue(bnx2fc_wq);
+-
+ 	bnx2fc_ulp_stop(hba);
+ 	/* unregister cnic device */
+ 	if (test_and_clear_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic))
+diff --git a/drivers/scsi/elx/libefc/efc_els.c b/drivers/scsi/elx/libefc/efc_els.c
+index 24db0accb256e..5f690378fe9a9 100644
+--- a/drivers/scsi/elx/libefc/efc_els.c
++++ b/drivers/scsi/elx/libefc/efc_els.c
+@@ -46,18 +46,14 @@ efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen)
+ 
+ 	efc = node->efc;
+ 
+-	spin_lock_irqsave(&node->els_ios_lock, flags);
+-
+ 	if (!node->els_io_enabled) {
+ 		efc_log_err(efc, "els io alloc disabled\n");
+-		spin_unlock_irqrestore(&node->els_ios_lock, flags);
+ 		return NULL;
+ 	}
+ 
+ 	els = mempool_alloc(efc->els_io_pool, GFP_ATOMIC);
+ 	if (!els) {
+ 		atomic_add_return(1, &efc->els_io_alloc_failed_count);
+-		spin_unlock_irqrestore(&node->els_ios_lock, flags);
+ 		return NULL;
+ 	}
+ 
+@@ -74,7 +70,6 @@ efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen)
+ 					      &els->io.req.phys, GFP_DMA);
+ 	if (!els->io.req.virt) {
+ 		mempool_free(els, efc->els_io_pool);
+-		spin_unlock_irqrestore(&node->els_ios_lock, flags);
+ 		return NULL;
+ 	}
+ 
+@@ -94,10 +89,11 @@ efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen)
+ 
+ 		/* add els structure to ELS IO list */
+ 		INIT_LIST_HEAD(&els->list_entry);
++		spin_lock_irqsave(&node->els_ios_lock, flags);
+ 		list_add_tail(&els->list_entry, &node->els_ios_list);
++		spin_unlock_irqrestore(&node->els_ios_lock, flags);
+ 	}
+ 
+-	spin_unlock_irqrestore(&node->els_ios_lock, flags);
+ 	return els;
+ }
+ 
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 0b96b14bbfe11..9c5211f2ea84c 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -322,6 +322,7 @@ static int addr_cnt;
+ #define GSM1_ESCAPE_BITS	0x20
+ #define XON			0x11
+ #define XOFF			0x13
++#define ISO_IEC_646_MASK	0x7F
+ 
+ static const struct tty_port_operations gsm_port_ops;
+ 
+@@ -531,7 +532,8 @@ static int gsm_stuff_frame(const u8 *input, u8 *output, int len)
+ 	int olen = 0;
+ 	while (len--) {
+ 		if (*input == GSM1_SOF || *input == GSM1_ESCAPE
+-		    || *input == XON || *input == XOFF) {
++		    || (*input & ISO_IEC_646_MASK) == XON
++		    || (*input & ISO_IEC_646_MASK) == XOFF) {
+ 			*output++ = GSM1_ESCAPE;
+ 			*output++ = *input++ ^ GSM1_ESCAPE_BITS;
+ 			olen++;
+diff --git a/drivers/tty/rpmsg_tty.c b/drivers/tty/rpmsg_tty.c
+index dae2a4e44f386..29db413bbc030 100644
+--- a/drivers/tty/rpmsg_tty.c
++++ b/drivers/tty/rpmsg_tty.c
+@@ -50,10 +50,17 @@ static int rpmsg_tty_cb(struct rpmsg_device *rpdev, void *data, int len, void *p
+ static int rpmsg_tty_install(struct tty_driver *driver, struct tty_struct *tty)
+ {
+ 	struct rpmsg_tty_port *cport = idr_find(&tty_idr, tty->index);
++	struct tty_port *port;
+ 
+ 	tty->driver_data = cport;
+ 
+-	return tty_port_install(&cport->port, driver, tty);
++	port = tty_port_get(&cport->port);
++	return tty_port_install(port, driver, tty);
++}
++
++static void rpmsg_tty_cleanup(struct tty_struct *tty)
++{
++	tty_port_put(tty->port);
+ }
+ 
+ static int rpmsg_tty_open(struct tty_struct *tty, struct file *filp)
+@@ -106,12 +113,19 @@ static unsigned int rpmsg_tty_write_room(struct tty_struct *tty)
+ 	return size;
+ }
+ 
++static void rpmsg_tty_hangup(struct tty_struct *tty)
++{
++	tty_port_hangup(tty->port);
++}
++
+ static const struct tty_operations rpmsg_tty_ops = {
+ 	.install	= rpmsg_tty_install,
+ 	.open		= rpmsg_tty_open,
+ 	.close		= rpmsg_tty_close,
+ 	.write		= rpmsg_tty_write,
+ 	.write_room	= rpmsg_tty_write_room,
++	.hangup		= rpmsg_tty_hangup,
++	.cleanup	= rpmsg_tty_cleanup,
+ };
+ 
+ static struct rpmsg_tty_port *rpmsg_tty_alloc_cport(void)
+@@ -137,8 +151,10 @@ static struct rpmsg_tty_port *rpmsg_tty_alloc_cport(void)
+ 	return cport;
+ }
+ 
+-static void rpmsg_tty_release_cport(struct rpmsg_tty_port *cport)
++static void rpmsg_tty_destruct_port(struct tty_port *port)
+ {
++	struct rpmsg_tty_port *cport = container_of(port, struct rpmsg_tty_port, port);
++
+ 	mutex_lock(&idr_lock);
+ 	idr_remove(&tty_idr, cport->id);
+ 	mutex_unlock(&idr_lock);
+@@ -146,7 +162,10 @@ static void rpmsg_tty_release_cport(struct rpmsg_tty_port *cport)
+ 	kfree(cport);
+ }
+ 
+-static const struct tty_port_operations rpmsg_tty_port_ops = { };
++static const struct tty_port_operations rpmsg_tty_port_ops = {
++	.destruct = rpmsg_tty_destruct_port,
++};
++
+ 
+ static int rpmsg_tty_probe(struct rpmsg_device *rpdev)
+ {
+@@ -166,7 +185,8 @@ static int rpmsg_tty_probe(struct rpmsg_device *rpdev)
+ 					   cport->id, dev);
+ 	if (IS_ERR(tty_dev)) {
+ 		ret = dev_err_probe(dev, PTR_ERR(tty_dev), "Failed to register tty port\n");
+-		goto err_destroy;
++		tty_port_put(&cport->port);
++		return ret;
+ 	}
+ 
+ 	cport->rpdev = rpdev;
+@@ -177,12 +197,6 @@ static int rpmsg_tty_probe(struct rpmsg_device *rpdev)
+ 		rpdev->src, rpdev->dst, cport->id);
+ 
+ 	return 0;
+-
+-err_destroy:
+-	tty_port_destroy(&cport->port);
+-	rpmsg_tty_release_cport(cport);
+-
+-	return ret;
+ }
+ 
+ static void rpmsg_tty_remove(struct rpmsg_device *rpdev)
+@@ -192,13 +206,11 @@ static void rpmsg_tty_remove(struct rpmsg_device *rpdev)
+ 	dev_dbg(&rpdev->dev, "Removing rpmsg tty device %d\n", cport->id);
+ 
+ 	/* User hang up to release the tty */
+-	if (tty_port_initialized(&cport->port))
+-		tty_port_tty_hangup(&cport->port, false);
++	tty_port_tty_hangup(&cport->port, false);
+ 
+ 	tty_unregister_device(rpmsg_tty_driver, cport->id);
+ 
+-	tty_port_destroy(&cport->port);
+-	rpmsg_tty_release_cport(cport);
++	tty_port_put(&cport->port);
+ }
+ 
+ static struct rpmsg_device_id rpmsg_driver_tty_id_table[] = {
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index bce28729dd7bd..be8626234627e 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -83,8 +83,17 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ 		port->mapsize = resource_size(&resource);
+ 
+ 		/* Check for shifted address mapping */
+-		if (of_property_read_u32(np, "reg-offset", &prop) == 0)
++		if (of_property_read_u32(np, "reg-offset", &prop) == 0) {
++			if (prop >= port->mapsize) {
++				dev_warn(&ofdev->dev, "reg-offset %u exceeds region size %pa\n",
++					 prop, &port->mapsize);
++				ret = -EINVAL;
++				goto err_unprepare;
++			}
++
+ 			port->mapbase += prop;
++			port->mapsize -= prop;
++		}
+ 
+ 		port->iotype = UPIO_MEM;
+ 		if (of_property_read_u32(np, "reg-io-width", &prop) == 0) {
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 60f8fffdfd776..8817f6c912cfd 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -5174,8 +5174,30 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	{	PCI_VENDOR_ID_INTASHIELD, PCI_DEVICE_ID_INTASHIELD_IS400,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,    /* 135a.0dc0 */
+ 		pbn_b2_4_115200 },
++	/* Brainboxes Devices */
+ 	/*
+-	 * BrainBoxes UC-260
++	 * Brainboxes UC-101
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x0BA1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-235/246
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0AA1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_1_115200 },
++	/*
++	 * Brainboxes UC-257
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0861,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-260/271/701/756
+ 	 */
+ 	{	PCI_VENDOR_ID_INTASHIELD, 0x0D21,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+@@ -5183,7 +5205,81 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_b2_4_115200 },
+ 	{	PCI_VENDOR_ID_INTASHIELD, 0x0E34,
+ 		PCI_ANY_ID, PCI_ANY_ID,
+-		 PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
++		PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-268
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x0841,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-275/279
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0881,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_8_115200 },
++	/*
++	 * Brainboxes UC-302
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x08E1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-310
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x08C1,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-313
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x08A3,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-320/324
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0A61,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_1_115200 },
++	/*
++	 * Brainboxes UC-346
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0B02,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-357
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0A81,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0A83,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_2_115200 },
++	/*
++	 * Brainboxes UC-368
++	 */
++	{	PCI_VENDOR_ID_INTASHIELD, 0x0C41,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
++		pbn_b2_4_115200 },
++	/*
++	 * Brainboxes UC-420/431
++	 */
++	{       PCI_VENDOR_ID_INTASHIELD, 0x0921,
++		PCI_ANY_ID, PCI_ANY_ID,
++		0, 0,
+ 		pbn_b2_4_115200 },
+ 	/*
+ 	 * Perle PCI-RAS cards
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 6ec34260d6b18..da54f827c5efc 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1615,8 +1615,12 @@ static void pl011_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ 	    container_of(port, struct uart_amba_port, port);
+ 	unsigned int cr;
+ 
+-	if (port->rs485.flags & SER_RS485_ENABLED)
+-		mctrl &= ~TIOCM_RTS;
++	if (port->rs485.flags & SER_RS485_ENABLED) {
++		if (port->rs485.flags & SER_RS485_RTS_AFTER_SEND)
++			mctrl &= ~TIOCM_RTS;
++		else
++			mctrl |= TIOCM_RTS;
++	}
+ 
+ 	cr = pl011_read(uap, REG_CR);
+ 
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 6cfc3bec67492..2d3fbcbfaf108 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -696,7 +696,7 @@ static void stm32_usart_start_tx(struct uart_port *port)
+ 	struct serial_rs485 *rs485conf = &port->rs485;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 
+-	if (uart_circ_empty(xmit))
++	if (uart_circ_empty(xmit) && !port->x_char)
+ 		return;
+ 
+ 	if (rs485conf->flags & SER_RS485_ENABLED) {
+diff --git a/drivers/usb/cdns3/drd.c b/drivers/usb/cdns3/drd.c
+index 55c73b1d87047..d00ff98dffabf 100644
+--- a/drivers/usb/cdns3/drd.c
++++ b/drivers/usb/cdns3/drd.c
+@@ -483,11 +483,11 @@ int cdns_drd_exit(struct cdns *cdns)
+ /* Indicate the cdns3 core was power lost before */
+ bool cdns_power_is_lost(struct cdns *cdns)
+ {
+-	if (cdns->version == CDNS3_CONTROLLER_V1) {
+-		if (!(readl(&cdns->otg_v1_regs->simulate) & BIT(0)))
++	if (cdns->version == CDNS3_CONTROLLER_V0) {
++		if (!(readl(&cdns->otg_v0_regs->simulate) & BIT(0)))
+ 			return true;
+ 	} else {
+-		if (!(readl(&cdns->otg_v0_regs->simulate) & BIT(0)))
++		if (!(readl(&cdns->otg_v1_regs->simulate) & BIT(0)))
+ 			return true;
+ 	}
+ 	return false;
+diff --git a/drivers/usb/common/ulpi.c b/drivers/usb/common/ulpi.c
+index 4169cf40a03b5..8f8405b0d6080 100644
+--- a/drivers/usb/common/ulpi.c
++++ b/drivers/usb/common/ulpi.c
+@@ -39,8 +39,11 @@ static int ulpi_match(struct device *dev, struct device_driver *driver)
+ 	struct ulpi *ulpi = to_ulpi_dev(dev);
+ 	const struct ulpi_device_id *id;
+ 
+-	/* Some ULPI devices don't have a vendor id so rely on OF match */
+-	if (ulpi->id.vendor == 0)
++	/*
++	 * Some ULPI devices don't have a vendor id
++	 * or provide an id_table so rely on OF match.
++	 */
++	if (ulpi->id.vendor == 0 || !drv->id_table)
+ 		return of_driver_match_device(dev, driver);
+ 
+ 	for (id = drv->id_table; id->vendor; id++)
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 8948b3bf7622b..95d13b9adc139 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -1563,6 +1563,13 @@ int usb_hcd_submit_urb (struct urb *urb, gfp_t mem_flags)
+ 		urb->hcpriv = NULL;
+ 		INIT_LIST_HEAD(&urb->urb_list);
+ 		atomic_dec(&urb->use_count);
++		/*
++		 * Order the write of urb->use_count above before the read
++		 * of urb->reject below.  Pairs with the memory barriers in
++		 * usb_kill_urb() and usb_poison_urb().
++		 */
++		smp_mb__after_atomic();
++
+ 		atomic_dec(&urb->dev->urbnum);
+ 		if (atomic_read(&urb->reject))
+ 			wake_up(&usb_kill_urb_queue);
+@@ -1665,6 +1672,13 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
+ 
+ 	usb_anchor_resume_wakeups(anchor);
+ 	atomic_dec(&urb->use_count);
++	/*
++	 * Order the write of urb->use_count above before the read
++	 * of urb->reject below.  Pairs with the memory barriers in
++	 * usb_kill_urb() and usb_poison_urb().
++	 */
++	smp_mb__after_atomic();
++
+ 	if (unlikely(atomic_read(&urb->reject)))
+ 		wake_up(&usb_kill_urb_queue);
+ 	usb_put_urb(urb);
+diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
+index 30727729a44cc..33d62d7e3929f 100644
+--- a/drivers/usb/core/urb.c
++++ b/drivers/usb/core/urb.c
+@@ -715,6 +715,12 @@ void usb_kill_urb(struct urb *urb)
+ 	if (!(urb && urb->dev && urb->ep))
+ 		return;
+ 	atomic_inc(&urb->reject);
++	/*
++	 * Order the write of urb->reject above before the read
++	 * of urb->use_count below.  Pairs with the barriers in
++	 * __usb_hcd_giveback_urb() and usb_hcd_submit_urb().
++	 */
++	smp_mb__after_atomic();
+ 
+ 	usb_hcd_unlink_urb(urb, -ENOENT);
+ 	wait_event(usb_kill_urb_queue, atomic_read(&urb->use_count) == 0);
+@@ -756,6 +762,12 @@ void usb_poison_urb(struct urb *urb)
+ 	if (!urb)
+ 		return;
+ 	atomic_inc(&urb->reject);
++	/*
++	 * Order the write of urb->reject above before the read
++	 * of urb->use_count below.  Pairs with the barriers in
++	 * __usb_hcd_giveback_urb() and usb_hcd_submit_urb().
++	 */
++	smp_mb__after_atomic();
+ 
+ 	if (!urb->dev || !urb->ep)
+ 		return;
+diff --git a/drivers/usb/dwc3/dwc3-xilinx.c b/drivers/usb/dwc3/dwc3-xilinx.c
+index 9cc3ad701a295..a6f3a9b38789e 100644
+--- a/drivers/usb/dwc3/dwc3-xilinx.c
++++ b/drivers/usb/dwc3/dwc3-xilinx.c
+@@ -99,17 +99,29 @@ static int dwc3_xlnx_init_zynqmp(struct dwc3_xlnx *priv_data)
+ 	struct device		*dev = priv_data->dev;
+ 	struct reset_control	*crst, *hibrst, *apbrst;
+ 	struct phy		*usb3_phy;
+-	int			ret;
++	int			ret = 0;
+ 	u32			reg;
+ 
+-	usb3_phy = devm_phy_get(dev, "usb3-phy");
+-	if (PTR_ERR(usb3_phy) == -EPROBE_DEFER) {
+-		ret = -EPROBE_DEFER;
++	usb3_phy = devm_phy_optional_get(dev, "usb3-phy");
++	if (IS_ERR(usb3_phy)) {
++		ret = PTR_ERR(usb3_phy);
++		dev_err_probe(dev, ret,
++			      "failed to get USB3 PHY\n");
+ 		goto err;
+-	} else if (IS_ERR(usb3_phy)) {
+-		usb3_phy = NULL;
+ 	}
+ 
++	/*
++	 * The following core resets are not required unless a USB3 PHY
++	 * is used, and the subsequent register settings are not required
++	 * unless a core reset is performed (they should be set properly
++	 * by the first-stage boot loader, but may be reverted by a core
++	 * reset). They may also break the configuration if USB3 is actually
++	 * in use but the usb3-phy entry is missing from the device tree.
++	 * Therefore, skip these operations in this case.
++	 */
++	if (!usb3_phy)
++		goto skip_usb3_phy;
++
+ 	crst = devm_reset_control_get_exclusive(dev, "usb_crst");
+ 	if (IS_ERR(crst)) {
+ 		ret = PTR_ERR(crst);
+@@ -188,6 +200,7 @@ static int dwc3_xlnx_init_zynqmp(struct dwc3_xlnx *priv_data)
+ 		goto err;
+ 	}
+ 
++skip_usb3_phy:
+ 	/*
+ 	 * This routes the USB DMA traffic to go through FPD path instead
+ 	 * of reaching DDR directly. This traffic routing is needed to
+diff --git a/drivers/usb/gadget/function/f_sourcesink.c b/drivers/usb/gadget/function/f_sourcesink.c
+index 1abf08e5164af..6803cd60cc6dc 100644
+--- a/drivers/usb/gadget/function/f_sourcesink.c
++++ b/drivers/usb/gadget/function/f_sourcesink.c
+@@ -584,6 +584,7 @@ static int source_sink_start_ep(struct f_sourcesink *ss, bool is_in,
+ 
+ 	if (is_iso) {
+ 		switch (speed) {
++		case USB_SPEED_SUPER_PLUS:
+ 		case USB_SPEED_SUPER:
+ 			size = ss->isoc_maxpacket *
+ 					(ss->isoc_mult + 1) *
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index c1edcc9b13cec..dc570ce4e8319 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -437,6 +437,9 @@ static int __maybe_unused xhci_plat_suspend(struct device *dev)
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+ 	int ret;
+ 
++	if (pm_runtime_suspended(dev))
++		pm_runtime_resume(dev);
++
+ 	ret = xhci_priv_suspend_quirk(hcd);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 29191d33c0e3e..1a05e3dcfec8a 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2301,6 +2301,16 @@ UNUSUAL_DEV(  0x2027, 0xa001, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ 		US_FL_SCM_MULT_TARG ),
+ 
++/*
++ * Reported by DocMAX <mail@vacharakis.de>
++ * and Thomas Weißschuh <linux@weissschuh.net>
++ */
++UNUSUAL_DEV( 0x2109, 0x0715, 0x9999, 0x9999,
++		"VIA Labs, Inc.",
++		"VL817 SATA Bridge",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ UNUSUAL_DEV( 0x2116, 0x0320, 0x0001, 0x0001,
+ 		"ST",
+ 		"2A",
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index 35a1307349a20..e07d26a3cd8e1 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -75,9 +75,25 @@ static int tcpci_write16(struct tcpci *tcpci, unsigned int reg, u16 val)
+ static int tcpci_set_cc(struct tcpc_dev *tcpc, enum typec_cc_status cc)
+ {
+ 	struct tcpci *tcpci = tcpc_to_tcpci(tcpc);
++	bool vconn_pres;
++	enum typec_cc_polarity polarity = TYPEC_POLARITY_CC1;
+ 	unsigned int reg;
+ 	int ret;
+ 
++	ret = regmap_read(tcpci->regmap, TCPC_POWER_STATUS, &reg);
++	if (ret < 0)
++		return ret;
++
++	vconn_pres = !!(reg & TCPC_POWER_STATUS_VCONN_PRES);
++	if (vconn_pres) {
++		ret = regmap_read(tcpci->regmap, TCPC_TCPC_CTRL, &reg);
++		if (ret < 0)
++			return ret;
++
++		if (reg & TCPC_TCPC_CTRL_ORIENTATION)
++			polarity = TYPEC_POLARITY_CC2;
++	}
++
+ 	switch (cc) {
+ 	case TYPEC_CC_RA:
+ 		reg = (TCPC_ROLE_CTRL_CC_RA << TCPC_ROLE_CTRL_CC1_SHIFT) |
+@@ -112,6 +128,16 @@ static int tcpci_set_cc(struct tcpc_dev *tcpc, enum typec_cc_status cc)
+ 		break;
+ 	}
+ 
++	if (vconn_pres) {
++		if (polarity == TYPEC_POLARITY_CC2) {
++			reg &= ~(TCPC_ROLE_CTRL_CC1_MASK << TCPC_ROLE_CTRL_CC1_SHIFT);
++			reg |= (TCPC_ROLE_CTRL_CC_OPEN << TCPC_ROLE_CTRL_CC1_SHIFT);
++		} else {
++			reg &= ~(TCPC_ROLE_CTRL_CC2_MASK << TCPC_ROLE_CTRL_CC2_SHIFT);
++			reg |= (TCPC_ROLE_CTRL_CC_OPEN << TCPC_ROLE_CTRL_CC2_SHIFT);
++		}
++	}
++
+ 	ret = regmap_write(tcpci->regmap, TCPC_ROLE_CTRL, reg);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/usb/typec/tcpm/tcpci.h b/drivers/usb/typec/tcpm/tcpci.h
+index 2be7a77d400ef..b2edd45f13c68 100644
+--- a/drivers/usb/typec/tcpm/tcpci.h
++++ b/drivers/usb/typec/tcpm/tcpci.h
+@@ -98,6 +98,7 @@
+ #define TCPC_POWER_STATUS_SOURCING_VBUS	BIT(4)
+ #define TCPC_POWER_STATUS_VBUS_DET	BIT(3)
+ #define TCPC_POWER_STATUS_VBUS_PRES	BIT(2)
++#define TCPC_POWER_STATUS_VCONN_PRES	BIT(1)
+ #define TCPC_POWER_STATUS_SINKING_VBUS	BIT(0)
+ 
+ #define TCPC_FAULT_STATUS		0x1f
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 59d4fa2443f2b..5fce795b69c7f 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -5156,7 +5156,8 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
+ 	case SNK_TRYWAIT_DEBOUNCE:
+ 		break;
+ 	case SNK_ATTACH_WAIT:
+-		tcpm_set_state(port, SNK_UNATTACHED, 0);
++	case SNK_DEBOUNCED:
++		/* Do nothing, as TCPM is still waiting for vbus to reach VSAFE5V to connect */
+ 		break;
+ 
+ 	case SNK_NEGOTIATE_CAPABILITIES:
+@@ -5263,6 +5264,10 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
+ 	case PR_SWAP_SNK_SRC_SOURCE_ON:
+ 		/* Do nothing, vsafe0v is expected during transition */
+ 		break;
++	case SNK_ATTACH_WAIT:
++	case SNK_DEBOUNCED:
++	/* Do nothing, still waiting for VSAFE5V to connect */
++		break;
+ 	default:
+ 		if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
+ 			tcpm_set_state(port, SNK_UNATTACHED, 0);
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index bff96d64dddff..6db7c8ddd51cd 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -325,7 +325,7 @@ static int ucsi_ccg_init(struct ucsi_ccg *uc)
+ 		if (status < 0)
+ 			return status;
+ 
+-		if (!data)
++		if (!(data & DEV_INT))
+ 			return 0;
+ 
+ 		status = ccg_write(uc, CCGX_RAB_INTR_REG, &data, sizeof(data));
+diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
+index 23999df527393..c8e0ea27caf1d 100644
+--- a/drivers/video/fbdev/hyperv_fb.c
++++ b/drivers/video/fbdev/hyperv_fb.c
+@@ -287,8 +287,6 @@ struct hvfb_par {
+ 
+ static uint screen_width = HVFB_WIDTH;
+ static uint screen_height = HVFB_HEIGHT;
+-static uint screen_width_max = HVFB_WIDTH;
+-static uint screen_height_max = HVFB_HEIGHT;
+ static uint screen_depth;
+ static uint screen_fb_size;
+ static uint dio_fb_size; /* FB size for deferred IO */
+@@ -582,7 +580,6 @@ static int synthvid_get_supported_resolution(struct hv_device *hdev)
+ 	int ret = 0;
+ 	unsigned long t;
+ 	u8 index;
+-	int i;
+ 
+ 	memset(msg, 0, sizeof(struct synthvid_msg));
+ 	msg->vid_hdr.type = SYNTHVID_RESOLUTION_REQUEST;
+@@ -613,13 +610,6 @@ static int synthvid_get_supported_resolution(struct hv_device *hdev)
+ 		goto out;
+ 	}
+ 
+-	for (i = 0; i < msg->resolution_resp.resolution_count; i++) {
+-		screen_width_max = max_t(unsigned int, screen_width_max,
+-		    msg->resolution_resp.supported_resolution[i].width);
+-		screen_height_max = max_t(unsigned int, screen_height_max,
+-		    msg->resolution_resp.supported_resolution[i].height);
+-	}
+-
+ 	screen_width =
+ 		msg->resolution_resp.supported_resolution[index].width;
+ 	screen_height =
+@@ -941,7 +931,7 @@ static void hvfb_get_option(struct fb_info *info)
+ 
+ 	if (x < HVFB_WIDTH_MIN || y < HVFB_HEIGHT_MIN ||
+ 	    (synthvid_ver_ge(par->synthvid_version, SYNTHVID_VERSION_WIN10) &&
+-	    (x > screen_width_max || y > screen_height_max)) ||
++	    (x * y * screen_depth / 8 > screen_fb_size)) ||
+ 	    (par->synthvid_version == SYNTHVID_VERSION_WIN8 &&
+ 	     x * y * screen_depth / 8 > SYNTHVID_FB_SIZE_WIN8) ||
+ 	    (par->synthvid_version == SYNTHVID_VERSION_WIN7 &&
+@@ -1194,8 +1184,8 @@ static int hvfb_probe(struct hv_device *hdev,
+ 	}
+ 
+ 	hvfb_get_option(info);
+-	pr_info("Screen resolution: %dx%d, Color depth: %d\n",
+-		screen_width, screen_height, screen_depth);
++	pr_info("Screen resolution: %dx%d, Color depth: %d, Frame buffer size: %d\n",
++		screen_width, screen_height, screen_depth, screen_fb_size);
+ 
+ 	ret = hvfb_getmem(hdev, info);
+ 	if (ret) {
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index edfecfe62b4b6..124b9e6815e5f 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1187,6 +1187,35 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 		if (em->generation < newer_than)
+ 			goto next;
+ 
++		/*
++		 * Our start offset might be in the middle of an existing extent
++		 * map, so take that into account.
++		 */
++		range_len = em->len - (cur - em->start);
++		/*
++		 * If this range of the extent map is already flagged for delalloc,
++		 * skip it, because:
++		 *
++		 * 1) We could deadlock later, when trying to reserve space for
++		 *    delalloc, because in case we can't immediately reserve space
++		 *    the flusher can start delalloc and wait for the respective
++		 *    ordered extents to complete. The deadlock would happen
++		 *    because we do the space reservation while holding the range
++		 *    locked, and starting writeback, or finishing an ordered
++		 *    extent, requires locking the range;
++		 *
++		 * 2) If there's delalloc there, it means there's dirty pages for
++		 *    which writeback has not started yet (we clean the delalloc
++		 *    flag when starting writeback and after creating an ordered
++		 *    extent). If we mark pages in an adjacent range for defrag,
++		 *    then we will have a larger contiguous range for delalloc,
++		 *    very likely resulting in a larger extent after writeback is
++		 *    triggered (except in a case of free space fragmentation).
++		 */
++		if (test_range_bit(&inode->io_tree, cur, cur + range_len - 1,
++				   EXTENT_DELALLOC, 0, NULL))
++			goto next;
++
+ 		/*
+ 		 * For do_compress case, we want to compress all valid file
+ 		 * extents, thus no @extent_thresh or mergeable check.
+@@ -1195,7 +1224,7 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 			goto add;
+ 
+ 		/* Skip too large extent */
+-		if (em->len >= extent_thresh)
++		if (range_len >= extent_thresh)
+ 			goto next;
+ 
+ 		next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em,
+@@ -1416,9 +1445,11 @@ static int defrag_one_cluster(struct btrfs_inode *inode,
+ 	list_for_each_entry(entry, &target_list, list) {
+ 		u32 range_len = entry->len;
+ 
+-		/* Reached the limit */
+-		if (max_sectors && max_sectors == *sectors_defragged)
++		/* Reached or beyond the limit */
++		if (max_sectors && *sectors_defragged >= max_sectors) {
++			ret = 1;
+ 			break;
++		}
+ 
+ 		if (max_sectors)
+ 			range_len = min_t(u32, range_len,
+@@ -1439,7 +1470,8 @@ static int defrag_one_cluster(struct btrfs_inode *inode,
+ 				       extent_thresh, newer_than, do_compress);
+ 		if (ret < 0)
+ 			break;
+-		*sectors_defragged += range_len;
++		*sectors_defragged += range_len >>
++				      inode->root->fs_info->sectorsize_bits;
+ 	}
+ out:
+ 	list_for_each_entry_safe(entry, tmp, &target_list, list) {
+@@ -1458,6 +1490,12 @@ out:
+  * @newer_than:	   minimum transid to defrag
+  * @max_to_defrag: max number of sectors to be defragged, if 0, the whole inode
+  *		   will be defragged.
++ *
++ * Return <0 for error.
++ * Return >=0 for the number of sectors defragged, and range->start will be updated
++ * to indicate the file offset where the next defrag should start.
++ * (Mostly for autodefrag, which sets @max_to_defrag, so we may exit early without
++ *  defragging the whole range.)
+  */
+ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 		      struct btrfs_ioctl_defrag_range_args *range,
+@@ -1473,6 +1511,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 	int compress_type = BTRFS_COMPRESS_ZLIB;
+ 	int ret = 0;
+ 	u32 extent_thresh = range->extent_thresh;
++	pgoff_t start_index;
+ 
+ 	if (isize == 0)
+ 		return 0;
+@@ -1492,12 +1531,16 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 
+ 	if (range->start + range->len > range->start) {
+ 		/* Got a specific range */
+-		last_byte = min(isize, range->start + range->len) - 1;
++		last_byte = min(isize, range->start + range->len);
+ 	} else {
+ 		/* Defrag until file end */
+-		last_byte = isize - 1;
++		last_byte = isize;
+ 	}
+ 
++	/* Align the range */
++	cur = round_down(range->start, fs_info->sectorsize);
++	last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
++
+ 	/*
+ 	 * If we were not given a ra, allocate a readahead context. As
+ 	 * readahead is just an optimization, defrag will work without it so
+@@ -1510,16 +1553,26 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 			file_ra_state_init(ra, inode->i_mapping);
+ 	}
+ 
+-	/* Align the range */
+-	cur = round_down(range->start, fs_info->sectorsize);
+-	last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
++	/*
++	 * Make writeback start from the beginning of the range, so that the
++	 * defrag range can be written sequentially.
++	 */
++	start_index = cur >> PAGE_SHIFT;
++	if (start_index < inode->i_mapping->writeback_index)
++		inode->i_mapping->writeback_index = start_index;
+ 
+ 	while (cur < last_byte) {
++		const unsigned long prev_sectors_defragged = sectors_defragged;
+ 		u64 cluster_end;
+ 
+ 		/* The cluster size 256K should always be page aligned */
+ 		BUILD_BUG_ON(!IS_ALIGNED(CLUSTER_SIZE, PAGE_SIZE));
+ 
++		if (btrfs_defrag_cancelled(fs_info)) {
++			ret = -EAGAIN;
++			break;
++		}
++
+ 		/* We want the cluster end at page boundary when possible */
+ 		cluster_end = (((cur >> PAGE_SHIFT) +
+ 			       (SZ_256K >> PAGE_SHIFT)) << PAGE_SHIFT) - 1;
+@@ -1541,14 +1594,27 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 				cluster_end + 1 - cur, extent_thresh,
+ 				newer_than, do_compress,
+ 				&sectors_defragged, max_to_defrag);
++
++		if (sectors_defragged > prev_sectors_defragged)
++			balance_dirty_pages_ratelimited(inode->i_mapping);
++
+ 		btrfs_inode_unlock(inode, 0);
+ 		if (ret < 0)
+ 			break;
+ 		cur = cluster_end + 1;
++		if (ret > 0) {
++			ret = 0;
++			break;
++		}
+ 	}
+ 
+ 	if (ra_allocated)
+ 		kfree(ra);
++	/*
++	 * Update range->start for autodefrag; this will indicate where to start
++	 * in the next run.
++	 */
++	range->start = cur;
+ 	if (sectors_defragged) {
+ 		/*
+ 		 * We have defragged some sectors, for compression case they
+@@ -3060,10 +3126,8 @@ static noinline int btrfs_ioctl_snap_destroy(struct file *file,
+ 	btrfs_inode_lock(inode, 0);
+ 	err = btrfs_delete_subvolume(dir, dentry);
+ 	btrfs_inode_unlock(inode, 0);
+-	if (!err) {
+-		fsnotify_rmdir(dir, dentry);
+-		d_delete(dentry);
+-	}
++	if (!err)
++		d_delete_notify(dir, dentry);
+ 
+ out_dput:
+ 	dput(dentry);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index c447fa2e2d1fe..2f8696f3b925d 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2218,6 +2218,7 @@ static int unsafe_request_wait(struct inode *inode)
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc;
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	struct ceph_mds_request *req1 = NULL, *req2 = NULL;
++	unsigned int max_sessions;
+ 	int ret, err = 0;
+ 
+ 	spin_lock(&ci->i_unsafe_lock);
+@@ -2235,37 +2236,45 @@ static int unsafe_request_wait(struct inode *inode)
+ 	}
+ 	spin_unlock(&ci->i_unsafe_lock);
+ 
++	/*
++	 * The mdsc->max_sessions is unlikely to be changed
++	 * mostly, here we will retry it by reallocating the
++	 * sessions array memory to get rid of the mdsc->mutex
++	 * lock.
++	 */
++retry:
++	max_sessions = mdsc->max_sessions;
++
+ 	/*
+ 	 * Trigger to flush the journal logs in all the relevant MDSes
+ 	 * manually, or in the worst case we must wait at most 5 seconds
+ 	 * to wait the journal logs to be flushed by the MDSes periodically.
+ 	 */
+-	if (req1 || req2) {
++	if ((req1 || req2) && likely(max_sessions)) {
+ 		struct ceph_mds_session **sessions = NULL;
+ 		struct ceph_mds_session *s;
+ 		struct ceph_mds_request *req;
+-		unsigned int max;
+ 		int i;
+ 
+-		/*
+-		 * The mdsc->max_sessions is unlikely to be changed
+-		 * mostly, here we will retry it by reallocating the
+-		 * sessions arrary memory to get rid of the mdsc->mutex
+-		 * lock.
+-		 */
+-retry:
+-		max = mdsc->max_sessions;
+-		sessions = krealloc(sessions, max * sizeof(s), __GFP_ZERO);
+-		if (!sessions)
+-			return -ENOMEM;
++		sessions = kzalloc(max_sessions * sizeof(s), GFP_KERNEL);
++		if (!sessions) {
++			err = -ENOMEM;
++			goto out;
++		}
+ 
+ 		spin_lock(&ci->i_unsafe_lock);
+ 		if (req1) {
+ 			list_for_each_entry(req, &ci->i_unsafe_dirops,
+ 					    r_unsafe_dir_item) {
+ 				s = req->r_session;
+-				if (unlikely(s->s_mds >= max)) {
++				if (unlikely(s->s_mds >= max_sessions)) {
+ 					spin_unlock(&ci->i_unsafe_lock);
++					for (i = 0; i < max_sessions; i++) {
++						s = sessions[i];
++						if (s)
++							ceph_put_mds_session(s);
++					}
++					kfree(sessions);
+ 					goto retry;
+ 				}
+ 				if (!sessions[s->s_mds]) {
+@@ -2278,8 +2287,14 @@ retry:
+ 			list_for_each_entry(req, &ci->i_unsafe_iops,
+ 					    r_unsafe_target_item) {
+ 				s = req->r_session;
+-				if (unlikely(s->s_mds >= max)) {
++				if (unlikely(s->s_mds >= max_sessions)) {
+ 					spin_unlock(&ci->i_unsafe_lock);
++					for (i = 0; i < max_sessions; i++) {
++						s = sessions[i];
++						if (s)
++							ceph_put_mds_session(s);
++					}
++					kfree(sessions);
+ 					goto retry;
+ 				}
+ 				if (!sessions[s->s_mds]) {
+@@ -2300,7 +2315,7 @@ retry:
+ 		spin_unlock(&ci->i_ceph_lock);
+ 
+ 		/* send flush mdlog request to MDSes */
+-		for (i = 0; i < max; i++) {
++		for (i = 0; i < max_sessions; i++) {
+ 			s = sessions[i];
+ 			if (s) {
+ 				send_flush_mdlog(s);
+@@ -2317,15 +2332,19 @@ retry:
+ 					ceph_timeout_jiffies(req1->r_timeout));
+ 		if (ret)
+ 			err = -EIO;
+-		ceph_mdsc_put_request(req1);
+ 	}
+ 	if (req2) {
+ 		ret = !wait_for_completion_timeout(&req2->r_safe_completion,
+ 					ceph_timeout_jiffies(req2->r_timeout));
+ 		if (ret)
+ 			err = -EIO;
+-		ceph_mdsc_put_request(req2);
+ 	}
++
++out:
++	if (req1)
++		ceph_mdsc_put_request(req1);
++	if (req2)
++		ceph_mdsc_put_request(req2);
+ 	return err;
+ }
+ 
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index c138e8126286c..7f3291e027b06 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -579,6 +579,7 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 	struct ceph_inode_info *ci = ceph_inode(dir);
+ 	struct inode *inode;
+ 	struct timespec64 now;
++	struct ceph_string *pool_ns;
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(dir->i_sb);
+ 	struct ceph_vino vino = { .ino = req->r_deleg_ino,
+ 				  .snap = CEPH_NOSNAP };
+@@ -628,6 +629,12 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ 	in.max_size = cpu_to_le64(lo->stripe_unit);
+ 
+ 	ceph_file_layout_to_legacy(lo, &in.layout);
++	/* lo is private, so pool_ns can't change */
++	pool_ns = rcu_dereference_raw(lo->pool_ns);
++	if (pool_ns) {
++		iinfo.pool_ns_len = pool_ns->len;
++		iinfo.pool_ns_data = pool_ns->str;
++	}
+ 
+ 	down_read(&mdsc->snap_rwsem);
+ 	ret = ceph_fill_inode(inode, NULL, &iinfo, NULL, req->r_session,
+@@ -746,8 +753,10 @@ retry:
+ 				restore_deleg_ino(dir, req->r_deleg_ino);
+ 				ceph_mdsc_put_request(req);
+ 				try_async = false;
++				ceph_put_string(rcu_dereference_raw(lo.pool_ns));
+ 				goto retry;
+ 			}
++			ceph_put_string(rcu_dereference_raw(lo.pool_ns));
+ 			goto out_req;
+ 		}
+ 	}
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 1466b5d01cbb9..d3cd2a94d1e8c 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -1780,8 +1780,8 @@ void configfs_unregister_group(struct config_group *group)
+ 	configfs_detach_group(&group->cg_item);
+ 	d_inode(dentry)->i_flags |= S_DEAD;
+ 	dont_mount(dentry);
++	d_drop(dentry);
+ 	fsnotify_rmdir(d_inode(parent), dentry);
+-	d_delete(dentry);
+ 	inode_unlock(d_inode(parent));
+ 
+ 	dput(dentry);
+@@ -1922,10 +1922,10 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
+ 	configfs_detach_group(&group->cg_item);
+ 	d_inode(dentry)->i_flags |= S_DEAD;
+ 	dont_mount(dentry);
+-	fsnotify_rmdir(d_inode(root), dentry);
+ 	inode_unlock(d_inode(dentry));
+ 
+-	d_delete(dentry);
++	d_drop(dentry);
++	fsnotify_rmdir(d_inode(root), dentry);
+ 
+ 	inode_unlock(d_inode(root));
+ 
+diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
+index 42e5a766d33c7..4f25015aa5342 100644
+--- a/fs/devpts/inode.c
++++ b/fs/devpts/inode.c
+@@ -621,8 +621,8 @@ void devpts_pty_kill(struct dentry *dentry)
+ 
+ 	dentry->d_fsdata = NULL;
+ 	drop_nlink(dentry->d_inode);
+-	fsnotify_unlink(d_inode(dentry->d_parent), dentry);
+ 	d_drop(dentry);
++	fsnotify_unlink(d_inode(dentry->d_parent), dentry);
+ 	dput(dentry);	/* d_alloc_name() in devpts_pty_new() */
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 15f303180d70c..698db7fb62e06 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -7761,10 +7761,15 @@ static __cold void io_rsrc_node_ref_zero(struct percpu_ref *ref)
+ 	struct io_ring_ctx *ctx = node->rsrc_data->ctx;
+ 	unsigned long flags;
+ 	bool first_add = false;
++	unsigned long delay = HZ;
+ 
+ 	spin_lock_irqsave(&ctx->rsrc_ref_lock, flags);
+ 	node->done = true;
+ 
++	/* if we are mid-quiesce then do not delay */
++	if (node->rsrc_data->quiesce)
++		delay = 0;
++
+ 	while (!list_empty(&ctx->rsrc_ref_list)) {
+ 		node = list_first_entry(&ctx->rsrc_ref_list,
+ 					    struct io_rsrc_node, node);
+@@ -7777,7 +7782,7 @@ static __cold void io_rsrc_node_ref_zero(struct percpu_ref *ref)
+ 	spin_unlock_irqrestore(&ctx->rsrc_ref_lock, flags);
+ 
+ 	if (first_add)
+-		mod_delayed_work(system_wq, &ctx->rsrc_put_work, HZ);
++		mod_delayed_work(system_wq, &ctx->rsrc_put_work, delay);
+ }
+ 
+ static struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx)
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 35302bc192eb9..bd9ac98916043 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -2970,6 +2970,7 @@ struct journal_head *jbd2_journal_grab_journal_head(struct buffer_head *bh)
+ 	jbd_unlock_bh_journal_head(bh);
+ 	return jh;
+ }
++EXPORT_SYMBOL(jbd2_journal_grab_journal_head);
+ 
+ static void __journal_remove_journal_head(struct buffer_head *bh)
+ {
+@@ -3022,6 +3023,7 @@ void jbd2_journal_put_journal_head(struct journal_head *jh)
+ 		jbd_unlock_bh_journal_head(bh);
+ 	}
+ }
++EXPORT_SYMBOL(jbd2_journal_put_journal_head);
+ 
+ /*
+  * Initialize jbd inode head
+diff --git a/fs/namei.c b/fs/namei.c
+index 1f9d2187c7655..3c0568d3155be 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3973,13 +3973,12 @@ int vfs_rmdir(struct user_namespace *mnt_userns, struct inode *dir,
+ 	dentry->d_inode->i_flags |= S_DEAD;
+ 	dont_mount(dentry);
+ 	detach_mounts(dentry);
+-	fsnotify_rmdir(dir, dentry);
+ 
+ out:
+ 	inode_unlock(dentry->d_inode);
+ 	dput(dentry);
+ 	if (!error)
+-		d_delete(dentry);
++		d_delete_notify(dir, dentry);
+ 	return error;
+ }
+ EXPORT_SYMBOL(vfs_rmdir);
+@@ -4101,7 +4100,6 @@ int vfs_unlink(struct user_namespace *mnt_userns, struct inode *dir,
+ 			if (!error) {
+ 				dont_mount(dentry);
+ 				detach_mounts(dentry);
+-				fsnotify_unlink(dir, dentry);
+ 			}
+ 		}
+ 	}
+@@ -4109,9 +4107,11 @@ out:
+ 	inode_unlock(target);
+ 
+ 	/* We don't d_delete() NFS sillyrenamed files--they still exist. */
+-	if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) {
++	if (!error && dentry->d_flags & DCACHE_NFSFS_RENAMED) {
++		fsnotify_unlink(dir, dentry);
++	} else if (!error) {
+ 		fsnotify_link_count(target);
+-		d_delete(dentry);
++		d_delete_notify(dir, dentry);
+ 	}
+ 
+ 	return error;
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 731d31015b6aa..24ce5652d9be8 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1967,6 +1967,24 @@ out:
+ 
+ no_open:
+ 	res = nfs_lookup(dir, dentry, lookup_flags);
++	if (!res) {
++		inode = d_inode(dentry);
++		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
++		    !S_ISDIR(inode->i_mode))
++			res = ERR_PTR(-ENOTDIR);
++		else if (inode && S_ISREG(inode->i_mode))
++			res = ERR_PTR(-EOPENSTALE);
++	} else if (!IS_ERR(res)) {
++		inode = d_inode(res);
++		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
++		    !S_ISDIR(inode->i_mode)) {
++			dput(res);
++			res = ERR_PTR(-ENOTDIR);
++		} else if (inode && S_ISREG(inode->i_mode)) {
++			dput(res);
++			res = ERR_PTR(-EOPENSTALE);
++		}
++	}
+ 	if (switched) {
+ 		d_lookup_done(dentry);
+ 		if (!res)
+@@ -2379,6 +2397,8 @@ nfs_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry)
+ 
+ 	trace_nfs_link_enter(inode, dir, dentry);
+ 	d_drop(dentry);
++	if (S_ISREG(inode->i_mode))
++		nfs_sync_inode(inode);
+ 	error = NFS_PROTO(dir)->link(inode, dir, &dentry->d_name);
+ 	if (error == 0) {
+ 		nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
+@@ -2468,6 +2488,8 @@ int nfs_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
+ 		}
+ 	}
+ 
++	if (S_ISREG(old_inode->i_mode))
++		nfs_sync_inode(old_inode);
+ 	task = nfs_async_rename(old_dir, new_dir, old_dentry, new_dentry, NULL);
+ 	if (IS_ERR(task)) {
+ 		error = PTR_ERR(task);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 51a49e0cfe376..d0761ca8cb542 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1249,7 +1249,8 @@ static void nfsdfs_remove_file(struct inode *dir, struct dentry *dentry)
+ 	clear_ncl(d_inode(dentry));
+ 	dget(dentry);
+ 	ret = simple_unlink(dir, dentry);
+-	d_delete(dentry);
++	d_drop(dentry);
++	fsnotify_unlink(dir, dentry);
+ 	dput(dentry);
+ 	WARN_ON_ONCE(ret);
+ }
+@@ -1340,8 +1341,8 @@ void nfsd_client_rmdir(struct dentry *dentry)
+ 	dget(dentry);
+ 	ret = simple_rmdir(dir, dentry);
+ 	WARN_ON_ONCE(ret);
++	d_drop(dentry);
+ 	fsnotify_rmdir(dir, dentry);
+-	d_delete(dentry);
+ 	dput(dentry);
+ 	inode_unlock(dir);
+ }
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 481017e1dac5a..166c8918c825a 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -1251,26 +1251,23 @@ static int ocfs2_test_bg_bit_allocatable(struct buffer_head *bg_bh,
+ {
+ 	struct ocfs2_group_desc *bg = (struct ocfs2_group_desc *) bg_bh->b_data;
+ 	struct journal_head *jh;
+-	int ret = 1;
++	int ret;
+ 
+ 	if (ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap))
+ 		return 0;
+ 
+-	if (!buffer_jbd(bg_bh))
++	jh = jbd2_journal_grab_journal_head(bg_bh);
++	if (!jh)
+ 		return 1;
+ 
+-	jbd_lock_bh_journal_head(bg_bh);
+-	if (buffer_jbd(bg_bh)) {
+-		jh = bh2jh(bg_bh);
+-		spin_lock(&jh->b_state_lock);
+-		bg = (struct ocfs2_group_desc *) jh->b_committed_data;
+-		if (bg)
+-			ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
+-		else
+-			ret = 1;
+-		spin_unlock(&jh->b_state_lock);
+-	}
+-	jbd_unlock_bh_journal_head(bg_bh);
++	spin_lock(&jh->b_state_lock);
++	bg = (struct ocfs2_group_desc *) jh->b_committed_data;
++	if (bg)
++		ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
++	else
++		ret = 1;
++	spin_unlock(&jh->b_state_lock);
++	jbd2_journal_put_journal_head(jh);
+ 
+ 	return ret;
+ }
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 1d6b7a50736ba..ea8f6cd01f501 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -258,10 +258,6 @@ int udf_expand_file_adinicb(struct inode *inode)
+ 	char *kaddr;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
+ 	int err;
+-	struct writeback_control udf_wbc = {
+-		.sync_mode = WB_SYNC_NONE,
+-		.nr_to_write = 1,
+-	};
+ 
+ 	WARN_ON_ONCE(!inode_is_locked(inode));
+ 	if (!iinfo->i_lenAlloc) {
+@@ -305,8 +301,10 @@ int udf_expand_file_adinicb(struct inode *inode)
+ 		iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG;
+ 	/* from now on we have normal address_space methods */
+ 	inode->i_data.a_ops = &udf_aops;
++	set_page_dirty(page);
++	unlock_page(page);
+ 	up_write(&iinfo->i_data_sem);
+-	err = inode->i_data.a_ops->writepage(page, &udf_wbc);
++	err = filemap_fdatawrite(inode->i_mapping);
+ 	if (err) {
+ 		/* Restore everything back so that we don't lose data... */
+ 		lock_page(page);
+@@ -317,6 +315,7 @@ int udf_expand_file_adinicb(struct inode *inode)
+ 		unlock_page(page);
+ 		iinfo->i_alloc_type = ICBTAG_FLAG_AD_IN_ICB;
+ 		inode->i_data.a_ops = &udf_adinicb_aops;
++		iinfo->i_lenAlloc = inode->i_size;
+ 		up_write(&iinfo->i_data_sem);
+ 	}
+ 	put_page(page);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index bd4370baccca3..d73887c805e05 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1254,6 +1254,7 @@ unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
+ void disk_end_io_acct(struct gendisk *disk, unsigned int op,
+ 		unsigned long start_time);
+ 
++void bio_start_io_acct_time(struct bio *bio, unsigned long start_time);
+ unsigned long bio_start_io_acct(struct bio *bio);
+ void bio_end_io_acct_remapped(struct bio *bio, unsigned long start_time,
+ 		struct block_device *orig_bdev);
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index 845a0ffc16ee8..d8f07baf272ad 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -95,7 +95,7 @@ struct ethtool_link_ext_state_info {
+ 		enum ethtool_link_ext_substate_bad_signal_integrity bad_signal_integrity;
+ 		enum ethtool_link_ext_substate_cable_issue cable_issue;
+ 		enum ethtool_link_ext_substate_module module;
+-		u8 __link_ext_substate;
++		u32 __link_ext_substate;
+ 	};
+ };
+ 
+diff --git a/include/linux/fsnotify.h b/include/linux/fsnotify.h
+index 787545e87eeb0..bec1e23ecf787 100644
+--- a/include/linux/fsnotify.h
++++ b/include/linux/fsnotify.h
+@@ -221,6 +221,43 @@ static inline void fsnotify_link(struct inode *dir, struct inode *inode,
+ 		      dir, &new_dentry->d_name, 0);
+ }
+ 
++/*
++ * fsnotify_delete - @dentry was unlinked and unhashed
++ *
++ * Caller must make sure that dentry->d_name is stable.
++ *
++ * Note: unlike fsnotify_unlink(), we have to pass also the unlinked inode
++ * as this may be called after d_delete() and old_dentry may be negative.
++ */
++static inline void fsnotify_delete(struct inode *dir, struct inode *inode,
++				   struct dentry *dentry)
++{
++	__u32 mask = FS_DELETE;
++
++	if (S_ISDIR(inode->i_mode))
++		mask |= FS_ISDIR;
++
++	fsnotify_name(mask, inode, FSNOTIFY_EVENT_INODE, dir, &dentry->d_name,
++		      0);
++}
++
++/**
++ * d_delete_notify - delete a dentry and call fsnotify_delete()
++ * @dentry: The dentry to delete
++ *
++ * This helper is used to guaranty that the unlinked inode cannot be found
++ * by lookup of this name after fsnotify_delete() event has been delivered.
++ */
++static inline void d_delete_notify(struct inode *dir, struct dentry *dentry)
++{
++	struct inode *inode = d_inode(dentry);
++
++	ihold(inode);
++	d_delete(dentry);
++	fsnotify_delete(dir, inode, dentry);
++	iput(inode);
++}
++
+ /*
+  * fsnotify_unlink - 'name' was unlinked
+  *
+@@ -228,10 +265,10 @@ static inline void fsnotify_link(struct inode *dir, struct inode *inode,
+  */
+ static inline void fsnotify_unlink(struct inode *dir, struct dentry *dentry)
+ {
+-	/* Expected to be called before d_delete() */
+-	WARN_ON_ONCE(d_is_negative(dentry));
++	if (WARN_ON_ONCE(d_is_negative(dentry)))
++		return;
+ 
+-	fsnotify_dirent(dir, dentry, FS_DELETE);
++	fsnotify_delete(dir, d_inode(dentry), dentry);
+ }
+ 
+ /*
+@@ -255,10 +292,10 @@ static inline void fsnotify_mkdir(struct inode *dir, struct dentry *dentry)
+  */
+ static inline void fsnotify_rmdir(struct inode *dir, struct dentry *dentry)
+ {
+-	/* Expected to be called before d_delete() */
+-	WARN_ON_ONCE(d_is_negative(dentry));
++	if (WARN_ON_ONCE(d_is_negative(dentry)))
++		return;
+ 
+-	fsnotify_dirent(dir, dentry, FS_DELETE | FS_ISDIR);
++	fsnotify_delete(dir, d_inode(dentry), dentry);
+ }
+ 
+ /*
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index df8de62f4710f..f0c7b352340a4 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -82,7 +82,7 @@ LSM_HOOK(int, 0, sb_add_mnt_opt, const char *option, const char *val,
+ 	 int len, void **mnt_opts)
+ LSM_HOOK(int, 0, move_mount, const struct path *from_path,
+ 	 const struct path *to_path)
+-LSM_HOOK(int, 0, dentry_init_security, struct dentry *dentry,
++LSM_HOOK(int, -EOPNOTSUPP, dentry_init_security, struct dentry *dentry,
+ 	 int mode, const struct qstr *name, const char **xattr_name,
+ 	 void **ctx, u32 *ctxlen)
+ LSM_HOOK(int, 0, dentry_create_files_as, struct dentry *dentry, int mode,
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index a7e4a9e7d807a..73210623bda7a 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1524,11 +1524,18 @@ static inline u8 page_kasan_tag(const struct page *page)
+ 
+ static inline void page_kasan_tag_set(struct page *page, u8 tag)
+ {
+-	if (kasan_enabled()) {
+-		tag ^= 0xff;
+-		page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+-		page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+-	}
++	unsigned long old_flags, flags;
++
++	if (!kasan_enabled())
++		return;
++
++	tag ^= 0xff;
++	old_flags = READ_ONCE(page->flags);
++	do {
++		flags = old_flags;
++		flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
++		flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
++	} while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
+ }
+ 
+ static inline void page_kasan_tag_reset(struct page *page)
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 6aadcc0ecb5b0..6cbefb660fa3b 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2636,6 +2636,7 @@ struct packet_type {
+ 					      struct net_device *);
+ 	bool			(*id_match)(struct packet_type *ptype,
+ 					    struct sock *sk);
++	struct net		*af_packet_net;
+ 	void			*af_packet_priv;
+ 	struct list_head	list;
+ };
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 318c489b735bc..d7f927f8c335d 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -681,18 +681,6 @@ struct perf_event {
+ 	u64				total_time_running;
+ 	u64				tstamp;
+ 
+-	/*
+-	 * timestamp shadows the actual context timing but it can
+-	 * be safely used in NMI interrupt context. It reflects the
+-	 * context time as it was when the event was last scheduled in,
+-	 * or when ctx_sched_in failed to schedule the event because we
+-	 * run out of PMC.
+-	 *
+-	 * ctx_time already accounts for ctx->timestamp. Therefore to
+-	 * compute ctx_time for a sample, simply add perf_clock().
+-	 */
+-	u64				shadow_ctx_time;
+-
+ 	struct perf_event_attr		attr;
+ 	u16				header_size;
+ 	u16				id_header_size;
+@@ -839,6 +827,7 @@ struct perf_event_context {
+ 	 */
+ 	u64				time;
+ 	u64				timestamp;
++	u64				timeoffset;
+ 
+ 	/*
+ 	 * These fields let us detect when two contexts have both
+@@ -921,6 +910,8 @@ struct bpf_perf_event_data_kern {
+ struct perf_cgroup_info {
+ 	u64				time;
+ 	u64				timestamp;
++	u64				timeoffset;
++	int				active;
+ };
+ 
+ struct perf_cgroup {
+diff --git a/include/linux/psi.h b/include/linux/psi.h
+index 65eb1476ac705..57823b30c2d3d 100644
+--- a/include/linux/psi.h
++++ b/include/linux/psi.h
+@@ -24,18 +24,17 @@ void psi_memstall_enter(unsigned long *flags);
+ void psi_memstall_leave(unsigned long *flags);
+ 
+ int psi_show(struct seq_file *s, struct psi_group *group, enum psi_res res);
+-
+-#ifdef CONFIG_CGROUPS
+-int psi_cgroup_alloc(struct cgroup *cgrp);
+-void psi_cgroup_free(struct cgroup *cgrp);
+-void cgroup_move_task(struct task_struct *p, struct css_set *to);
+-
+ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 			char *buf, size_t nbytes, enum psi_res res);
+-void psi_trigger_replace(void **trigger_ptr, struct psi_trigger *t);
++void psi_trigger_destroy(struct psi_trigger *t);
+ 
+ __poll_t psi_trigger_poll(void **trigger_ptr, struct file *file,
+ 			poll_table *wait);
++
++#ifdef CONFIG_CGROUPS
++int psi_cgroup_alloc(struct cgroup *cgrp);
++void psi_cgroup_free(struct cgroup *cgrp);
++void cgroup_move_task(struct task_struct *p, struct css_set *to);
+ #endif
+ 
+ #else /* CONFIG_PSI */
+diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
+index 0819c82dba920..6f190002a2022 100644
+--- a/include/linux/psi_types.h
++++ b/include/linux/psi_types.h
+@@ -140,9 +140,6 @@ struct psi_trigger {
+ 	 * events to one per window
+ 	 */
+ 	u64 last_event_time;
+-
+-	/* Refcounting to prevent premature destruction */
+-	struct kref refcount;
+ };
+ 
+ struct psi_group {
+diff --git a/include/linux/usb/role.h b/include/linux/usb/role.h
+index 031f148ab3734..b5deafd91f67b 100644
+--- a/include/linux/usb/role.h
++++ b/include/linux/usb/role.h
+@@ -91,6 +91,12 @@ fwnode_usb_role_switch_get(struct fwnode_handle *node)
+ 
+ static inline void usb_role_switch_put(struct usb_role_switch *sw) { }
+ 
++static inline struct usb_role_switch *
++usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
++{
++	return NULL;
++}
++
+ static inline struct usb_role_switch *
+ usb_role_switch_register(struct device *parent,
+ 			 const struct usb_role_switch_desc *desc)
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index 78ea3e332688f..e7ce719838b5e 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -6,6 +6,8 @@
+ #define RTR_SOLICITATION_INTERVAL	(4*HZ)
+ #define RTR_SOLICITATION_MAX_INTERVAL	(3600*HZ)	/* 1 hour */
+ 
++#define MIN_VALID_LIFETIME		(2*3600)	/* 2 hours */
++
+ #define TEMP_VALID_LIFETIME		(7*86400)
+ #define TEMP_PREFERRED_LIFETIME		(86400)
+ #define REGEN_MAX_RETRY			(3)
+diff --git a/include/net/ip.h b/include/net/ip.h
+index b71e88507c4a0..ff68af118020a 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -525,19 +525,18 @@ static inline void ip_select_ident_segs(struct net *net, struct sk_buff *skb,
+ {
+ 	struct iphdr *iph = ip_hdr(skb);
+ 
++	/* We had many attacks based on IPID, use the private
++	 * generator as much as we can.
++	 */
++	if (sk && inet_sk(sk)->inet_daddr) {
++		iph->id = htons(inet_sk(sk)->inet_id);
++		inet_sk(sk)->inet_id += segs;
++		return;
++	}
+ 	if ((iph->frag_off & htons(IP_DF)) && !skb->ignore_df) {
+-		/* This is only to work around buggy Windows95/2000
+-		 * VJ compression implementations.  If the ID field
+-		 * does not change, they drop every other packet in
+-		 * a TCP stream using header compression.
+-		 */
+-		if (sk && inet_sk(sk)->inet_daddr) {
+-			iph->id = htons(inet_sk(sk)->inet_id);
+-			inet_sk(sk)->inet_id += segs;
+-		} else {
+-			iph->id = 0;
+-		}
++		iph->id = 0;
+ 	} else {
++		/* Unfortunately we need the big hammer to get a suitable IPID */
+ 		__ip_select_ident(net, iph, segs);
+ 	}
+ }
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index 83b8070d1cc93..c85b040728d7e 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -281,7 +281,7 @@ static inline bool fib6_get_cookie_safe(const struct fib6_info *f6i,
+ 	fn = rcu_dereference(f6i->fib6_node);
+ 
+ 	if (fn) {
+-		*cookie = fn->fn_sernum;
++		*cookie = READ_ONCE(fn->fn_sernum);
+ 		/* pairs with smp_wmb() in __fib6_update_sernum_upto_root() */
+ 		smp_rmb();
+ 		status = true;
+diff --git a/include/net/route.h b/include/net/route.h
+index 2e6c0e153e3a5..2551f3f03b37e 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -369,7 +369,7 @@ static inline struct neighbour *ip_neigh_gw4(struct net_device *dev,
+ {
+ 	struct neighbour *neigh;
+ 
+-	neigh = __ipv4_neigh_lookup_noref(dev, daddr);
++	neigh = __ipv4_neigh_lookup_noref(dev, (__force u32)daddr);
+ 	if (unlikely(!neigh))
+ 		neigh = __neigh_create(&arp_tbl, &daddr, dev, false);
+ 
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 7b5dcff84cf27..54eb1654d9ecc 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -953,7 +953,8 @@ TRACE_EVENT(rpc_socket_nospace,
+ 		{ BIT(XPRT_REMOVE),		"REMOVE" },		\
+ 		{ BIT(XPRT_CONGESTED),		"CONGESTED" },		\
+ 		{ BIT(XPRT_CWND_WAIT),		"CWND_WAIT" },		\
+-		{ BIT(XPRT_WRITE_SPACE),	"WRITE_SPACE" })
++		{ BIT(XPRT_WRITE_SPACE),	"WRITE_SPACE" },	\
++		{ BIT(XPRT_SND_IS_COOKIE),	"SND_IS_COOKIE" })
+ 
+ DECLARE_EVENT_CLASS(rpc_xprt_lifetime_class,
+ 	TP_PROTO(
+@@ -1150,8 +1151,11 @@ DECLARE_EVENT_CLASS(xprt_writelock_event,
+ 			__entry->task_id = -1;
+ 			__entry->client_id = -1;
+ 		}
+-		__entry->snd_task_id = xprt->snd_task ?
+-					xprt->snd_task->tk_pid : -1;
++		if (xprt->snd_task &&
++		    !test_bit(XPRT_SND_IS_COOKIE, &xprt->state))
++			__entry->snd_task_id = xprt->snd_task->tk_pid;
++		else
++			__entry->snd_task_id = -1;
+ 	),
+ 
+ 	TP_printk(SUNRPC_TRACE_TASK_SPECIFIER
+@@ -1196,8 +1200,12 @@ DECLARE_EVENT_CLASS(xprt_cong_event,
+ 			__entry->task_id = -1;
+ 			__entry->client_id = -1;
+ 		}
+-		__entry->snd_task_id = xprt->snd_task ?
+-					xprt->snd_task->tk_pid : -1;
++		if (xprt->snd_task &&
++		    !test_bit(XPRT_SND_IS_COOKIE, &xprt->state))
++			__entry->snd_task_id = xprt->snd_task->tk_pid;
++		else
++			__entry->snd_task_id = -1;
++
+ 		__entry->cong = xprt->cong;
+ 		__entry->cwnd = xprt->cwnd;
+ 		__entry->wait = test_bit(XPRT_CWND_WAIT, &xprt->state);
+diff --git a/include/uapi/linux/cyclades.h b/include/uapi/linux/cyclades.h
+new file mode 100644
+index 0000000000000..6225c5aebe06a
+--- /dev/null
++++ b/include/uapi/linux/cyclades.h
+@@ -0,0 +1,35 @@
++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
++
++#ifndef _UAPI_LINUX_CYCLADES_H
++#define _UAPI_LINUX_CYCLADES_H
++
++#warning "Support for features provided by this header has been removed"
++#warning "Please consider updating your code"
++
++struct cyclades_monitor {
++	unsigned long int_count;
++	unsigned long char_count;
++	unsigned long char_max;
++	unsigned long char_last;
++};
++
++#define CYGETMON		0x435901
++#define CYGETTHRESH		0x435902
++#define CYSETTHRESH		0x435903
++#define CYGETDEFTHRESH		0x435904
++#define CYSETDEFTHRESH		0x435905
++#define CYGETTIMEOUT		0x435906
++#define CYSETTIMEOUT		0x435907
++#define CYGETDEFTIMEOUT		0x435908
++#define CYSETDEFTIMEOUT		0x435909
++#define CYSETRFLOW		0x43590a
++#define CYGETRFLOW		0x43590b
++#define CYSETRTSDTR_INV		0x43590c
++#define CYGETRTSDTR_INV		0x43590d
++#define CYZSETPOLLCYCLE		0x43590e
++#define CYZGETPOLLCYCLE		0x43590f
++#define CYGETCD1400VER		0x435910
++#define CYSETWAIT		0x435912
++#define CYGETWAIT		0x435913
++
++#endif /* _UAPI_LINUX_CYCLADES_H */
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 6e75bbee39f0b..0dcaed4d3f4ce 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -525,13 +525,14 @@ BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf,
+ 	   u32, size, u64, flags)
+ {
+ 	struct pt_regs *regs;
+-	long res;
++	long res = -EINVAL;
+ 
+ 	if (!try_get_task_stack(task))
+ 		return -EFAULT;
+ 
+ 	regs = task_pt_regs(task);
+-	res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
++	if (regs)
++		res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
+ 	put_task_stack(task);
+ 
+ 	return res;
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index cafb8c114a21c..d18c2ef3180ed 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -3642,6 +3642,12 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 	cgroup_get(cgrp);
+ 	cgroup_kn_unlock(of->kn);
+ 
++	/* Allow only one trigger per file descriptor */
++	if (ctx->psi.trigger) {
++		cgroup_put(cgrp);
++		return -EBUSY;
++	}
++
+ 	psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;
+ 	new = psi_trigger_create(psi, buf, nbytes, res);
+ 	if (IS_ERR(new)) {
+@@ -3649,8 +3655,7 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
+ 		return PTR_ERR(new);
+ 	}
+ 
+-	psi_trigger_replace(&ctx->psi.trigger, new);
+-
++	smp_store_release(&ctx->psi.trigger, new);
+ 	cgroup_put(cgrp);
+ 
+ 	return nbytes;
+@@ -3689,7 +3694,7 @@ static void cgroup_pressure_release(struct kernfs_open_file *of)
+ {
+ 	struct cgroup_file_ctx *ctx = of->priv;
+ 
+-	psi_trigger_replace(&ctx->psi.trigger, NULL);
++	psi_trigger_destroy(ctx->psi.trigger);
+ }
+ 
+ bool cgroup_psi_enabled(void)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 63f0414666438..6ed890480c4aa 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -674,6 +674,23 @@ perf_event_set_state(struct perf_event *event, enum perf_event_state state)
+ 	WRITE_ONCE(event->state, state);
+ }
+ 
++/*
++ * UP store-release, load-acquire
++ */
++
++#define __store_release(ptr, val)					\
++do {									\
++	barrier();							\
++	WRITE_ONCE(*(ptr), (val));					\
++} while (0)
++
++#define __load_acquire(ptr)						\
++({									\
++	__unqual_scalar_typeof(*(ptr)) ___p = READ_ONCE(*(ptr));	\
++	barrier();							\
++	___p;								\
++})
++
+ #ifdef CONFIG_CGROUP_PERF
+ 
+ static inline bool
+@@ -719,34 +736,51 @@ static inline u64 perf_cgroup_event_time(struct perf_event *event)
+ 	return t->time;
+ }
+ 
+-static inline void __update_cgrp_time(struct perf_cgroup *cgrp)
++static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
+ {
+-	struct perf_cgroup_info *info;
+-	u64 now;
+-
+-	now = perf_clock();
++	struct perf_cgroup_info *t;
+ 
+-	info = this_cpu_ptr(cgrp->info);
++	t = per_cpu_ptr(event->cgrp->info, event->cpu);
++	if (!__load_acquire(&t->active))
++		return t->time;
++	now += READ_ONCE(t->timeoffset);
++	return now;
++}
+ 
+-	info->time += now - info->timestamp;
++static inline void __update_cgrp_time(struct perf_cgroup_info *info, u64 now, bool adv)
++{
++	if (adv)
++		info->time += now - info->timestamp;
+ 	info->timestamp = now;
++	/*
++	 * see update_context_time()
++	 */
++	WRITE_ONCE(info->timeoffset, info->time - info->timestamp);
+ }
+ 
+-static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx)
++static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, bool final)
+ {
+ 	struct perf_cgroup *cgrp = cpuctx->cgrp;
+ 	struct cgroup_subsys_state *css;
++	struct perf_cgroup_info *info;
+ 
+ 	if (cgrp) {
++		u64 now = perf_clock();
++
+ 		for (css = &cgrp->css; css; css = css->parent) {
+ 			cgrp = container_of(css, struct perf_cgroup, css);
+-			__update_cgrp_time(cgrp);
++			info = this_cpu_ptr(cgrp->info);
++
++			__update_cgrp_time(info, now, true);
++			if (final)
++				__store_release(&info->active, 0);
+ 		}
+ 	}
+ }
+ 
+ static inline void update_cgrp_time_from_event(struct perf_event *event)
+ {
++	struct perf_cgroup_info *info;
+ 	struct perf_cgroup *cgrp;
+ 
+ 	/*
+@@ -760,8 +794,10 @@ static inline void update_cgrp_time_from_event(struct perf_event *event)
+ 	/*
+ 	 * Do not update time when cgroup is not active
+ 	 */
+-	if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup))
+-		__update_cgrp_time(event->cgrp);
++	if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup)) {
++		info = this_cpu_ptr(event->cgrp->info);
++		__update_cgrp_time(info, perf_clock(), true);
++	}
+ }
+ 
+ static inline void
+@@ -785,7 +821,8 @@ perf_cgroup_set_timestamp(struct task_struct *task,
+ 	for (css = &cgrp->css; css; css = css->parent) {
+ 		cgrp = container_of(css, struct perf_cgroup, css);
+ 		info = this_cpu_ptr(cgrp->info);
+-		info->timestamp = ctx->timestamp;
++		__update_cgrp_time(info, ctx->timestamp, false);
++		__store_release(&info->active, 1);
+ 	}
+ }
+ 
+@@ -981,14 +1018,6 @@ out:
+ 	return ret;
+ }
+ 
+-static inline void
+-perf_cgroup_set_shadow_time(struct perf_event *event, u64 now)
+-{
+-	struct perf_cgroup_info *t;
+-	t = per_cpu_ptr(event->cgrp->info, event->cpu);
+-	event->shadow_ctx_time = now - t->timestamp;
+-}
+-
+ static inline void
+ perf_cgroup_event_enable(struct perf_event *event, struct perf_event_context *ctx)
+ {
+@@ -1066,7 +1095,8 @@ static inline void update_cgrp_time_from_event(struct perf_event *event)
+ {
+ }
+ 
+-static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx)
++static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx,
++						bool final)
+ {
+ }
+ 
+@@ -1098,12 +1128,12 @@ perf_cgroup_switch(struct task_struct *task, struct task_struct *next)
+ {
+ }
+ 
+-static inline void
+-perf_cgroup_set_shadow_time(struct perf_event *event, u64 now)
++static inline u64 perf_cgroup_event_time(struct perf_event *event)
+ {
++	return 0;
+ }
+ 
+-static inline u64 perf_cgroup_event_time(struct perf_event *event)
++static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
+ {
+ 	return 0;
+ }
+@@ -1525,22 +1555,59 @@ static void perf_unpin_context(struct perf_event_context *ctx)
+ /*
+  * Update the record of the current time in a context.
+  */
+-static void update_context_time(struct perf_event_context *ctx)
++static void __update_context_time(struct perf_event_context *ctx, bool adv)
+ {
+ 	u64 now = perf_clock();
+ 
+-	ctx->time += now - ctx->timestamp;
++	if (adv)
++		ctx->time += now - ctx->timestamp;
+ 	ctx->timestamp = now;
++
++	/*
++	 * The above: time' = time + (now - timestamp), can be re-arranged
++	 * into: time` = now + (time - timestamp), which gives a single value
++	 * offset to compute future time without locks on.
++	 *
++	 * See perf_event_time_now(), which can be used from NMI context where
++	 * it's (obviously) not possible to acquire ctx->lock in order to read
++	 * both the above values in a consistent manner.
++	 */
++	WRITE_ONCE(ctx->timeoffset, ctx->time - ctx->timestamp);
++}
++
++static void update_context_time(struct perf_event_context *ctx)
++{
++	__update_context_time(ctx, true);
+ }
+ 
+ static u64 perf_event_time(struct perf_event *event)
+ {
+ 	struct perf_event_context *ctx = event->ctx;
+ 
++	if (unlikely(!ctx))
++		return 0;
++
+ 	if (is_cgroup_event(event))
+ 		return perf_cgroup_event_time(event);
+ 
+-	return ctx ? ctx->time : 0;
++	return ctx->time;
++}
++
++static u64 perf_event_time_now(struct perf_event *event, u64 now)
++{
++	struct perf_event_context *ctx = event->ctx;
++
++	if (unlikely(!ctx))
++		return 0;
++
++	if (is_cgroup_event(event))
++		return perf_cgroup_event_time_now(event, now);
++
++	if (!(__load_acquire(&ctx->is_active) & EVENT_TIME))
++		return ctx->time;
++
++	now += READ_ONCE(ctx->timeoffset);
++	return now;
+ }
+ 
+ static enum event_type_t get_event_type(struct perf_event *event)
+@@ -2346,7 +2413,7 @@ __perf_remove_from_context(struct perf_event *event,
+ 
+ 	if (ctx->is_active & EVENT_TIME) {
+ 		update_context_time(ctx);
+-		update_cgrp_time_from_cpuctx(cpuctx);
++		update_cgrp_time_from_cpuctx(cpuctx, false);
+ 	}
+ 
+ 	event_sched_out(event, cpuctx, ctx);
+@@ -2357,6 +2424,9 @@ __perf_remove_from_context(struct perf_event *event,
+ 	list_del_event(event, ctx);
+ 
+ 	if (!ctx->nr_events && ctx->is_active) {
++		if (ctx == &cpuctx->ctx)
++			update_cgrp_time_from_cpuctx(cpuctx, true);
++
+ 		ctx->is_active = 0;
+ 		ctx->rotate_necessary = 0;
+ 		if (ctx->task) {
+@@ -2388,7 +2458,11 @@ static void perf_remove_from_context(struct perf_event *event, unsigned long fla
+ 	 * event_function_call() user.
+ 	 */
+ 	raw_spin_lock_irq(&ctx->lock);
+-	if (!ctx->is_active) {
++	/*
++	 * Cgroup events are per-cpu events, and must IPI because of
++	 * cgrp_cpuctx_list.
++	 */
++	if (!ctx->is_active && !is_cgroup_event(event)) {
+ 		__perf_remove_from_context(event, __get_cpu_context(ctx),
+ 					   ctx, (void *)flags);
+ 		raw_spin_unlock_irq(&ctx->lock);
+@@ -2478,40 +2552,6 @@ void perf_event_disable_inatomic(struct perf_event *event)
+ 	irq_work_queue(&event->pending);
+ }
+ 
+-static void perf_set_shadow_time(struct perf_event *event,
+-				 struct perf_event_context *ctx)
+-{
+-	/*
+-	 * use the correct time source for the time snapshot
+-	 *
+-	 * We could get by without this by leveraging the
+-	 * fact that to get to this function, the caller
+-	 * has most likely already called update_context_time()
+-	 * and update_cgrp_time_xx() and thus both timestamp
+-	 * are identical (or very close). Given that tstamp is,
+-	 * already adjusted for cgroup, we could say that:
+-	 *    tstamp - ctx->timestamp
+-	 * is equivalent to
+-	 *    tstamp - cgrp->timestamp.
+-	 *
+-	 * Then, in perf_output_read(), the calculation would
+-	 * work with no changes because:
+-	 * - event is guaranteed scheduled in
+-	 * - no scheduled out in between
+-	 * - thus the timestamp would be the same
+-	 *
+-	 * But this is a bit hairy.
+-	 *
+-	 * So instead, we have an explicit cgroup call to remain
+-	 * within the time source all along. We believe it
+-	 * is cleaner and simpler to understand.
+-	 */
+-	if (is_cgroup_event(event))
+-		perf_cgroup_set_shadow_time(event, event->tstamp);
+-	else
+-		event->shadow_ctx_time = event->tstamp - ctx->timestamp;
+-}
+-
+ #define MAX_INTERRUPTS (~0ULL)
+ 
+ static void perf_log_throttle(struct perf_event *event, int enable);
+@@ -2552,8 +2592,6 @@ event_sched_in(struct perf_event *event,
+ 
+ 	perf_pmu_disable(event->pmu);
+ 
+-	perf_set_shadow_time(event, ctx);
+-
+ 	perf_log_itrace_start(event);
+ 
+ 	if (event->pmu->add(event, PERF_EF_START)) {
+@@ -2857,11 +2895,14 @@ perf_install_in_context(struct perf_event_context *ctx,
+ 	 * perf_event_attr::disabled events will not run and can be initialized
+ 	 * without IPI. Except when this is the first event for the context, in
+ 	 * that case we need the magic of the IPI to set ctx->is_active.
++	 * Similarly, cgroup events for the context also needs the IPI to
++	 * manipulate the cgrp_cpuctx_list.
+ 	 *
+ 	 * The IOC_ENABLE that is sure to follow the creation of a disabled
+ 	 * event will issue the IPI and reprogram the hardware.
+ 	 */
+-	if (__perf_effective_state(event) == PERF_EVENT_STATE_OFF && ctx->nr_events) {
++	if (__perf_effective_state(event) == PERF_EVENT_STATE_OFF &&
++	    ctx->nr_events && !is_cgroup_event(event)) {
+ 		raw_spin_lock_irq(&ctx->lock);
+ 		if (ctx->task == TASK_TOMBSTONE) {
+ 			raw_spin_unlock_irq(&ctx->lock);
+@@ -3247,16 +3288,6 @@ static void ctx_sched_out(struct perf_event_context *ctx,
+ 		return;
+ 	}
+ 
+-	ctx->is_active &= ~event_type;
+-	if (!(ctx->is_active & EVENT_ALL))
+-		ctx->is_active = 0;
+-
+-	if (ctx->task) {
+-		WARN_ON_ONCE(cpuctx->task_ctx != ctx);
+-		if (!ctx->is_active)
+-			cpuctx->task_ctx = NULL;
+-	}
+-
+ 	/*
+ 	 * Always update time if it was set; not only when it changes.
+ 	 * Otherwise we can 'forget' to update time for any but the last
+@@ -3270,7 +3301,22 @@ static void ctx_sched_out(struct perf_event_context *ctx,
+ 	if (is_active & EVENT_TIME) {
+ 		/* update (and stop) ctx time */
+ 		update_context_time(ctx);
+-		update_cgrp_time_from_cpuctx(cpuctx);
++		update_cgrp_time_from_cpuctx(cpuctx, ctx == &cpuctx->ctx);
++		/*
++		 * CPU-release for the below ->is_active store,
++		 * see __load_acquire() in perf_event_time_now()
++		 */
++		barrier();
++	}
++
++	ctx->is_active &= ~event_type;
++	if (!(ctx->is_active & EVENT_ALL))
++		ctx->is_active = 0;
++
++	if (ctx->task) {
++		WARN_ON_ONCE(cpuctx->task_ctx != ctx);
++		if (!ctx->is_active)
++			cpuctx->task_ctx = NULL;
+ 	}
+ 
+ 	is_active ^= ctx->is_active; /* changed bits */
+@@ -3707,13 +3753,19 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
+ 	return 0;
+ }
+ 
++/*
++ * Because the userpage is strictly per-event (there is no concept of context,
++ * so there cannot be a context indirection), every userpage must be updated
++ * when context time starts :-(
++ *
++ * IOW, we must not miss EVENT_TIME edges.
++ */
+ static inline bool event_update_userpage(struct perf_event *event)
+ {
+ 	if (likely(!atomic_read(&event->mmap_count)))
+ 		return false;
+ 
+ 	perf_event_update_time(event);
+-	perf_set_shadow_time(event, event->ctx);
+ 	perf_event_update_userpage(event);
+ 
+ 	return true;
+@@ -3797,13 +3849,23 @@ ctx_sched_in(struct perf_event_context *ctx,
+ 	     struct task_struct *task)
+ {
+ 	int is_active = ctx->is_active;
+-	u64 now;
+ 
+ 	lockdep_assert_held(&ctx->lock);
+ 
+ 	if (likely(!ctx->nr_events))
+ 		return;
+ 
++	if (is_active ^ EVENT_TIME) {
++		/* start ctx time */
++		__update_context_time(ctx, false);
++		perf_cgroup_set_timestamp(task, ctx);
++		/*
++		 * CPU-release for the below ->is_active store,
++		 * see __load_acquire() in perf_event_time_now()
++		 */
++		barrier();
++	}
++
+ 	ctx->is_active |= (event_type | EVENT_TIME);
+ 	if (ctx->task) {
+ 		if (!is_active)
+@@ -3814,13 +3876,6 @@ ctx_sched_in(struct perf_event_context *ctx,
+ 
+ 	is_active ^= ctx->is_active; /* changed bits */
+ 
+-	if (is_active & EVENT_TIME) {
+-		/* start ctx time */
+-		now = perf_clock();
+-		ctx->timestamp = now;
+-		perf_cgroup_set_timestamp(task, ctx);
+-	}
+-
+ 	/*
+ 	 * First go through the list and put on any pinned groups
+ 	 * in order to give them the best chance of going on.
+@@ -4414,6 +4469,18 @@ static inline u64 perf_event_count(struct perf_event *event)
+ 	return local64_read(&event->count) + atomic64_read(&event->child_count);
+ }
+ 
++static void calc_timer_values(struct perf_event *event,
++				u64 *now,
++				u64 *enabled,
++				u64 *running)
++{
++	u64 ctx_time;
++
++	*now = perf_clock();
++	ctx_time = perf_event_time_now(event, *now);
++	__perf_update_times(event, ctx_time, enabled, running);
++}
++
+ /*
+  * NMI-safe method to read a local event, that is an event that
+  * is:
+@@ -4473,10 +4540,9 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
+ 
+ 	*value = local64_read(&event->count);
+ 	if (enabled || running) {
+-		u64 now = event->shadow_ctx_time + perf_clock();
+-		u64 __enabled, __running;
++		u64 __enabled, __running, __now;;
+ 
+-		__perf_update_times(event, now, &__enabled, &__running);
++		calc_timer_values(event, &__now, &__enabled, &__running);
+ 		if (enabled)
+ 			*enabled = __enabled;
+ 		if (running)
+@@ -5798,18 +5864,6 @@ static int perf_event_index(struct perf_event *event)
+ 	return event->pmu->event_idx(event);
+ }
+ 
+-static void calc_timer_values(struct perf_event *event,
+-				u64 *now,
+-				u64 *enabled,
+-				u64 *running)
+-{
+-	u64 ctx_time;
+-
+-	*now = perf_clock();
+-	ctx_time = event->shadow_ctx_time + *now;
+-	__perf_update_times(event, ctx_time, enabled, running);
+-}
+-
+ static void perf_event_init_userpage(struct perf_event *event)
+ {
+ 	struct perf_event_mmap_page *userpg;
+@@ -6349,7 +6403,6 @@ accounting:
+ 		ring_buffer_attach(event, rb);
+ 
+ 		perf_event_update_time(event);
+-		perf_set_shadow_time(event, event->ctx);
+ 		perf_event_init_userpage(event);
+ 		perf_event_update_userpage(event);
+ 	} else {
+diff --git a/kernel/power/wakelock.c b/kernel/power/wakelock.c
+index 105df4dfc7839..52571dcad768b 100644
+--- a/kernel/power/wakelock.c
++++ b/kernel/power/wakelock.c
+@@ -39,23 +39,20 @@ ssize_t pm_show_wakelocks(char *buf, bool show_active)
+ {
+ 	struct rb_node *node;
+ 	struct wakelock *wl;
+-	char *str = buf;
+-	char *end = buf + PAGE_SIZE;
++	int len = 0;
+ 
+ 	mutex_lock(&wakelocks_lock);
+ 
+ 	for (node = rb_first(&wakelocks_tree); node; node = rb_next(node)) {
+ 		wl = rb_entry(node, struct wakelock, node);
+ 		if (wl->ws->active == show_active)
+-			str += scnprintf(str, end - str, "%s ", wl->name);
++			len += sysfs_emit_at(buf, len, "%s ", wl->name);
+ 	}
+-	if (str > buf)
+-		str--;
+ 
+-	str += scnprintf(str, end - str, "\n");
++	len += sysfs_emit_at(buf, len, "\n");
+ 
+ 	mutex_unlock(&wakelocks_lock);
+-	return (str - buf);
++	return len;
+ }
+ 
+ #if CONFIG_PM_WAKELOCKS_LIMIT > 0
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index f2cf047b25e56..069e01772d922 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3382,7 +3382,6 @@ void set_task_rq_fair(struct sched_entity *se,
+ 	se->avg.last_update_time = n_last_update_time;
+ }
+ 
+-
+ /*
+  * When on migration a sched_entity joins/leaves the PELT hierarchy, we need to
+  * propagate its contribution. The key to this propagation is the invariant
+@@ -3450,7 +3449,6 @@ void set_task_rq_fair(struct sched_entity *se,
+  * XXX: only do this for the part of runnable > running ?
+  *
+  */
+-
+ static inline void
+ update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
+ {
+@@ -3682,7 +3680,19 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ 
+ 		r = removed_util;
+ 		sub_positive(&sa->util_avg, r);
+-		sa->util_sum = sa->util_avg * divider;
++		sub_positive(&sa->util_sum, r * divider);
++		/*
++		 * Because of rounding, se->util_sum might ends up being +1 more than
++		 * cfs->util_sum. Although this is not a problem by itself, detaching
++		 * a lot of tasks with the rounding problem between 2 updates of
++		 * util_avg (~1ms) can make cfs->util_sum becoming null whereas
++		 * cfs_util_avg is not.
++		 * Check that util_sum is still above its lower bound for the new
++		 * util_avg. Given that period_contrib might have moved since the last
++		 * sync, we are only sure that util_sum must be above or equal to
++		 *    util_avg * minimum possible divider
++		 */
++		sa->util_sum = max_t(u32, sa->util_sum, sa->util_avg * PELT_MIN_DIVIDER);
+ 
+ 		r = removed_runnable;
+ 		sub_positive(&sa->runnable_avg, r);
+diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
+index b5add64d9698c..3d2825408e3a2 100644
+--- a/kernel/sched/membarrier.c
++++ b/kernel/sched/membarrier.c
+@@ -147,11 +147,11 @@
+ #endif
+ 
+ #ifdef CONFIG_RSEQ
+-#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK		\
++#define MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK		\
+ 	(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ			\
+-	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ_BITMASK)
++	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ)
+ #else
+-#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK	0
++#define MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK	0
+ #endif
+ 
+ #define MEMBARRIER_CMD_BITMASK						\
+@@ -159,7 +159,8 @@
+ 	| MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED			\
+ 	| MEMBARRIER_CMD_PRIVATE_EXPEDITED				\
+ 	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED			\
+-	| MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK)
++	| MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK		\
++	| MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK)
+ 
+ static void ipi_mb(void *info)
+ {
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index e06071bf3472c..c336f5f481bca 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -37,9 +37,11 @@ update_irq_load_avg(struct rq *rq, u64 running)
+ }
+ #endif
+ 
++#define PELT_MIN_DIVIDER	(LOAD_AVG_MAX - 1024)
++
+ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ {
+-	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
++	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 69b19d3af690f..422f3b0445cf1 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -1082,44 +1082,6 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
+ 	return 0;
+ }
+ 
+-static int psi_io_show(struct seq_file *m, void *v)
+-{
+-	return psi_show(m, &psi_system, PSI_IO);
+-}
+-
+-static int psi_memory_show(struct seq_file *m, void *v)
+-{
+-	return psi_show(m, &psi_system, PSI_MEM);
+-}
+-
+-static int psi_cpu_show(struct seq_file *m, void *v)
+-{
+-	return psi_show(m, &psi_system, PSI_CPU);
+-}
+-
+-static int psi_open(struct file *file, int (*psi_show)(struct seq_file *, void *))
+-{
+-	if (file->f_mode & FMODE_WRITE && !capable(CAP_SYS_RESOURCE))
+-		return -EPERM;
+-
+-	return single_open(file, psi_show, NULL);
+-}
+-
+-static int psi_io_open(struct inode *inode, struct file *file)
+-{
+-	return psi_open(file, psi_io_show);
+-}
+-
+-static int psi_memory_open(struct inode *inode, struct file *file)
+-{
+-	return psi_open(file, psi_memory_show);
+-}
+-
+-static int psi_cpu_open(struct inode *inode, struct file *file)
+-{
+-	return psi_open(file, psi_cpu_show);
+-}
+-
+ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 			char *buf, size_t nbytes, enum psi_res res)
+ {
+@@ -1162,7 +1124,6 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 	t->event = 0;
+ 	t->last_event_time = 0;
+ 	init_waitqueue_head(&t->event_wait);
+-	kref_init(&t->refcount);
+ 
+ 	mutex_lock(&group->trigger_lock);
+ 
+@@ -1191,15 +1152,19 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 	return t;
+ }
+ 
+-static void psi_trigger_destroy(struct kref *ref)
++void psi_trigger_destroy(struct psi_trigger *t)
+ {
+-	struct psi_trigger *t = container_of(ref, struct psi_trigger, refcount);
+-	struct psi_group *group = t->group;
++	struct psi_group *group;
+ 	struct task_struct *task_to_destroy = NULL;
+ 
+-	if (static_branch_likely(&psi_disabled))
++	/*
++	 * We do not check psi_disabled since it might have been disabled after
++	 * the trigger got created.
++	 */
++	if (!t)
+ 		return;
+ 
++	group = t->group;
+ 	/*
+ 	 * Wakeup waiters to stop polling. Can happen if cgroup is deleted
+ 	 * from under a polling process.
+@@ -1235,9 +1200,9 @@ static void psi_trigger_destroy(struct kref *ref)
+ 	mutex_unlock(&group->trigger_lock);
+ 
+ 	/*
+-	 * Wait for both *trigger_ptr from psi_trigger_replace and
+-	 * poll_task RCUs to complete their read-side critical sections
+-	 * before destroying the trigger and optionally the poll_task
++	 * Wait for psi_schedule_poll_work RCU to complete its read-side
++	 * critical section before destroying the trigger and optionally the
++	 * poll_task.
+ 	 */
+ 	synchronize_rcu();
+ 	/*
+@@ -1254,18 +1219,6 @@ static void psi_trigger_destroy(struct kref *ref)
+ 	kfree(t);
+ }
+ 
+-void psi_trigger_replace(void **trigger_ptr, struct psi_trigger *new)
+-{
+-	struct psi_trigger *old = *trigger_ptr;
+-
+-	if (static_branch_likely(&psi_disabled))
+-		return;
+-
+-	rcu_assign_pointer(*trigger_ptr, new);
+-	if (old)
+-		kref_put(&old->refcount, psi_trigger_destroy);
+-}
+-
+ __poll_t psi_trigger_poll(void **trigger_ptr,
+ 				struct file *file, poll_table *wait)
+ {
+@@ -1275,27 +1228,57 @@ __poll_t psi_trigger_poll(void **trigger_ptr,
+ 	if (static_branch_likely(&psi_disabled))
+ 		return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI;
+ 
+-	rcu_read_lock();
+-
+-	t = rcu_dereference(*(void __rcu __force **)trigger_ptr);
+-	if (!t) {
+-		rcu_read_unlock();
++	t = smp_load_acquire(trigger_ptr);
++	if (!t)
+ 		return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI;
+-	}
+-	kref_get(&t->refcount);
+-
+-	rcu_read_unlock();
+ 
+ 	poll_wait(file, &t->event_wait, wait);
+ 
+ 	if (cmpxchg(&t->event, 1, 0) == 1)
+ 		ret |= EPOLLPRI;
+ 
+-	kref_put(&t->refcount, psi_trigger_destroy);
+-
+ 	return ret;
+ }
+ 
++#ifdef CONFIG_PROC_FS
++static int psi_io_show(struct seq_file *m, void *v)
++{
++	return psi_show(m, &psi_system, PSI_IO);
++}
++
++static int psi_memory_show(struct seq_file *m, void *v)
++{
++	return psi_show(m, &psi_system, PSI_MEM);
++}
++
++static int psi_cpu_show(struct seq_file *m, void *v)
++{
++	return psi_show(m, &psi_system, PSI_CPU);
++}
++
++static int psi_open(struct file *file, int (*psi_show)(struct seq_file *, void *))
++{
++	if (file->f_mode & FMODE_WRITE && !capable(CAP_SYS_RESOURCE))
++		return -EPERM;
++
++	return single_open(file, psi_show, NULL);
++}
++
++static int psi_io_open(struct inode *inode, struct file *file)
++{
++	return psi_open(file, psi_io_show);
++}
++
++static int psi_memory_open(struct inode *inode, struct file *file)
++{
++	return psi_open(file, psi_memory_show);
++}
++
++static int psi_cpu_open(struct inode *inode, struct file *file)
++{
++	return psi_open(file, psi_cpu_show);
++}
++
+ static ssize_t psi_write(struct file *file, const char __user *user_buf,
+ 			 size_t nbytes, enum psi_res res)
+ {
+@@ -1316,14 +1299,24 @@ static ssize_t psi_write(struct file *file, const char __user *user_buf,
+ 
+ 	buf[buf_size - 1] = '\0';
+ 
+-	new = psi_trigger_create(&psi_system, buf, nbytes, res);
+-	if (IS_ERR(new))
+-		return PTR_ERR(new);
+-
+ 	seq = file->private_data;
++
+ 	/* Take seq->lock to protect seq->private from concurrent writes */
+ 	mutex_lock(&seq->lock);
+-	psi_trigger_replace(&seq->private, new);
++
++	/* Allow only one trigger per file descriptor */
++	if (seq->private) {
++		mutex_unlock(&seq->lock);
++		return -EBUSY;
++	}
++
++	new = psi_trigger_create(&psi_system, buf, nbytes, res);
++	if (IS_ERR(new)) {
++		mutex_unlock(&seq->lock);
++		return PTR_ERR(new);
++	}
++
++	smp_store_release(&seq->private, new);
+ 	mutex_unlock(&seq->lock);
+ 
+ 	return nbytes;
+@@ -1358,7 +1351,7 @@ static int psi_fop_release(struct inode *inode, struct file *file)
+ {
+ 	struct seq_file *seq = file->private_data;
+ 
+-	psi_trigger_replace(&seq->private, NULL);
++	psi_trigger_destroy(seq->private);
+ 	return single_release(inode, file);
+ }
+ 
+@@ -1400,3 +1393,5 @@ static int __init psi_proc_init(void)
+ 	return 0;
+ }
+ module_init(psi_proc_init);
++
++#endif /* CONFIG_PROC_FS */
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 78ea542ce3bc2..ae9f9e4af9314 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7734,7 +7734,8 @@ static struct tracing_log_err *get_tracing_log_err(struct trace_array *tr)
+ 		err = kzalloc(sizeof(*err), GFP_KERNEL);
+ 		if (!err)
+ 			err = ERR_PTR(-ENOMEM);
+-		tr->n_err_log_entries++;
++		else
++			tr->n_err_log_entries++;
+ 
+ 		return err;
+ 	}
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 319f9c8ca7e7d..e697bfedac2f5 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -2487,6 +2487,8 @@ static struct hist_field *parse_unary(struct hist_trigger_data *hist_data,
+ 		(HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS);
+ 	expr->fn = hist_field_unary_minus;
+ 	expr->operands[0] = operand1;
++	expr->size = operand1->size;
++	expr->is_signed = operand1->is_signed;
+ 	expr->operator = FIELD_OP_UNARY_MINUS;
+ 	expr->name = expr_str(expr, 0);
+ 	expr->type = kstrdup_const(operand1->type, GFP_KERNEL);
+@@ -2703,6 +2705,7 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 
+ 		/* The operand sizes should be the same, so just pick one */
+ 		expr->size = operand1->size;
++		expr->is_signed = operand1->is_signed;
+ 
+ 		expr->operator = field_op;
+ 		expr->type = kstrdup_const(operand1->type, GFP_KERNEL);
+@@ -3919,6 +3922,7 @@ static int trace_action_create(struct hist_trigger_data *hist_data,
+ 
+ 			var_ref_idx = find_var_ref_idx(hist_data, var_ref);
+ 			if (WARN_ON(var_ref_idx < 0)) {
++				kfree(p);
+ 				ret = var_ref_idx;
+ 				goto err;
+ 			}
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 7b32c356ebc5c..65b597431c861 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -190,6 +190,7 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
+ 			kfree(new);
+ 		} else {
+ 			hlist_add_head(&new->node, hashent);
++			get_user_ns(new->ns);
+ 			spin_unlock_irq(&ucounts_lock);
+ 			return new;
+ 		}
+@@ -210,6 +211,7 @@ void put_ucounts(struct ucounts *ucounts)
+ 	if (atomic_dec_and_lock_irqsave(&ucounts->count, &ucounts_lock, flags)) {
+ 		hlist_del_init(&ucounts->node);
+ 		spin_unlock_irqrestore(&ucounts_lock, flags);
++		put_user_ns(ucounts->ns);
+ 		kfree(ucounts);
+ 	}
+ }
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 8882c6dfb48f4..816a0f6823a3a 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5822,6 +5822,11 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		struct hci_ev_le_advertising_info *ev = ptr;
+ 		s8 rssi;
+ 
++		if (ptr > (void *)skb_tail_pointer(skb) - sizeof(*ev)) {
++			bt_dev_err(hdev, "Malicious advertising data.");
++			break;
++		}
++
+ 		if (ev->length <= HCI_MAX_AD_LENGTH &&
+ 		    ev->data + ev->length <= skb_tail_pointer(skb)) {
+ 			rssi = ev->data[ev->length];
+@@ -5833,11 +5838,6 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		}
+ 
+ 		ptr += sizeof(*ev) + ev->length + 1;
+-
+-		if (ptr > (void *) skb_tail_pointer(skb) - sizeof(*ev)) {
+-			bt_dev_err(hdev, "Malicious advertising data. Stopping processing");
+-			break;
+-		}
+ 	}
+ 
+ 	hci_dev_unlock(hdev);
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index 49e105e0a4479..f02351b4acaca 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -560,10 +560,10 @@ static bool __allowed_ingress(const struct net_bridge *br,
+ 		    !br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) {
+ 			if (*state == BR_STATE_FORWARDING) {
+ 				*state = br_vlan_get_pvid_state(vg);
+-				return br_vlan_state_allowed(*state, true);
+-			} else {
+-				return true;
++				if (!br_vlan_state_allowed(*state, true))
++					goto drop;
+ 			}
++			return true;
+ 		}
+ 	}
+ 	v = br_vlan_find(vg, *vid);
+@@ -2020,7 +2020,8 @@ static int br_vlan_rtm_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 			goto out_err;
+ 		}
+ 		err = br_vlan_dump_dev(dev, skb, cb, dump_flags);
+-		if (err && err != -EMSGSIZE)
++		/* if the dump completed without an error we return 0 here */
++		if (err != -EMSGSIZE)
+ 			goto out_err;
+ 	} else {
+ 		for_each_netdev_rcu(net, dev) {
+diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
+index d8b9dbabd4a43..88cc0ad7d386e 100644
+--- a/net/core/net-procfs.c
++++ b/net/core/net-procfs.c
+@@ -190,12 +190,23 @@ static const struct seq_operations softnet_seq_ops = {
+ 	.show  = softnet_seq_show,
+ };
+ 
+-static void *ptype_get_idx(loff_t pos)
++static void *ptype_get_idx(struct seq_file *seq, loff_t pos)
+ {
++	struct list_head *ptype_list = NULL;
+ 	struct packet_type *pt = NULL;
++	struct net_device *dev;
+ 	loff_t i = 0;
+ 	int t;
+ 
++	for_each_netdev_rcu(seq_file_net(seq), dev) {
++		ptype_list = &dev->ptype_all;
++		list_for_each_entry_rcu(pt, ptype_list, list) {
++			if (i == pos)
++				return pt;
++			++i;
++		}
++	}
++
+ 	list_for_each_entry_rcu(pt, &ptype_all, list) {
+ 		if (i == pos)
+ 			return pt;
+@@ -216,22 +227,40 @@ static void *ptype_seq_start(struct seq_file *seq, loff_t *pos)
+ 	__acquires(RCU)
+ {
+ 	rcu_read_lock();
+-	return *pos ? ptype_get_idx(*pos - 1) : SEQ_START_TOKEN;
++	return *pos ? ptype_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
+ }
+ 
+ static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
++	struct net_device *dev;
+ 	struct packet_type *pt;
+ 	struct list_head *nxt;
+ 	int hash;
+ 
+ 	++*pos;
+ 	if (v == SEQ_START_TOKEN)
+-		return ptype_get_idx(0);
++		return ptype_get_idx(seq, 0);
+ 
+ 	pt = v;
+ 	nxt = pt->list.next;
++	if (pt->dev) {
++		if (nxt != &pt->dev->ptype_all)
++			goto found;
++
++		dev = pt->dev;
++		for_each_netdev_continue_rcu(seq_file_net(seq), dev) {
++			if (!list_empty(&dev->ptype_all)) {
++				nxt = dev->ptype_all.next;
++				goto found;
++			}
++		}
++
++		nxt = ptype_all.next;
++		goto ptype_all;
++	}
++
+ 	if (pt->type == htons(ETH_P_ALL)) {
++ptype_all:
+ 		if (nxt != &ptype_all)
+ 			goto found;
+ 		hash = 0;
+@@ -260,7 +289,8 @@ static int ptype_seq_show(struct seq_file *seq, void *v)
+ 
+ 	if (v == SEQ_START_TOKEN)
+ 		seq_puts(seq, "Type Device      Function\n");
+-	else if (pt->dev == NULL || dev_net(pt->dev) == seq_file_net(seq)) {
++	else if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) &&
++		 (!pt->dev || net_eq(dev_net(pt->dev), seq_file_net(seq)))) {
+ 		if (pt->type == htons(ETH_P_ALL))
+ 			seq_puts(seq, "ALL ");
+ 		else
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 9bca57ef8b838..a4d2eb691cbc1 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -162,12 +162,19 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
+ 	iph->daddr    = (opt && opt->opt.srr ? opt->opt.faddr : daddr);
+ 	iph->saddr    = saddr;
+ 	iph->protocol = sk->sk_protocol;
+-	if (ip_dont_fragment(sk, &rt->dst)) {
++	/* Do not bother generating IPID for small packets (eg SYNACK) */
++	if (skb->len <= IPV4_MIN_MTU || ip_dont_fragment(sk, &rt->dst)) {
+ 		iph->frag_off = htons(IP_DF);
+ 		iph->id = 0;
+ 	} else {
+ 		iph->frag_off = 0;
+-		__ip_select_ident(net, iph, 1);
++		/* TCP packets here are SYNACK with fat IPv4/TCP options.
++		 * Avoid using the hashed IP ident generator.
++		 */
++		if (sk->sk_protocol == IPPROTO_TCP)
++			iph->id = (__force __be16)prandom_u32();
++		else
++			__ip_select_ident(net, iph, 1);
+ 	}
+ 
+ 	if (opt && opt->opt.optlen) {
+@@ -826,15 +833,24 @@ int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		/* Everything is OK. Generate! */
+ 		ip_fraglist_init(skb, iph, hlen, &iter);
+ 
+-		if (iter.frag)
+-			ip_options_fragment(iter.frag);
+-
+ 		for (;;) {
+ 			/* Prepare header of the next frame,
+ 			 * before previous one went down. */
+ 			if (iter.frag) {
++				bool first_frag = (iter.offset == 0);
++
+ 				IPCB(iter.frag)->flags = IPCB(skb)->flags;
+ 				ip_fraglist_prepare(skb, &iter);
++				if (first_frag && IPCB(skb)->opt.optlen) {
++					/* ipcb->opt is not populated for frags
++					 * coming from __ip_make_skb(),
++					 * ip_options_fragment() needs optlen
++					 */
++					IPCB(iter.frag)->opt.optlen =
++						IPCB(skb)->opt.optlen;
++					ip_options_fragment(iter.frag);
++					ip_send_check(iter.iph);
++				}
+ 			}
+ 
+ 			skb->tstamp = tstamp;
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 1e44a43acfe2d..086822cb1cc96 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -220,7 +220,8 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 			continue;
+ 		}
+ 
+-		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)
++		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif &&
++		    sk->sk_bound_dev_if != inet_sdif(skb))
+ 			continue;
+ 
+ 		sock_hold(sk);
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index bb446e60cf580..b8689052079cd 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -721,6 +721,7 @@ static int raw_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	int ret = -EINVAL;
+ 	int chk_addr_ret;
+ 
++	lock_sock(sk);
+ 	if (sk->sk_state != TCP_CLOSE || addr_len < sizeof(struct sockaddr_in))
+ 		goto out;
+ 
+@@ -740,7 +741,9 @@ static int raw_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 		inet->inet_saddr = 0;  /* Use device */
+ 	sk_dst_reset(sk);
+ 	ret = 0;
+-out:	return ret;
++out:
++	release_sock(sk);
++	return ret;
+ }
+ 
+ /*
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 3445f8017430f..87961f1d9959b 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2589,7 +2589,7 @@ int addrconf_prefix_rcv_add_addr(struct net *net, struct net_device *dev,
+ 				 __u32 valid_lft, u32 prefered_lft)
+ {
+ 	struct inet6_ifaddr *ifp = ipv6_get_ifaddr(net, addr, dev, 1);
+-	int create = 0;
++	int create = 0, update_lft = 0;
+ 
+ 	if (!ifp && valid_lft) {
+ 		int max_addresses = in6_dev->cnf.max_addresses;
+@@ -2633,19 +2633,32 @@ int addrconf_prefix_rcv_add_addr(struct net *net, struct net_device *dev,
+ 		unsigned long now;
+ 		u32 stored_lft;
+ 
+-		/* Update lifetime (RFC4862 5.5.3 e)
+-		 * We deviate from RFC4862 by honoring all Valid Lifetimes to
+-		 * improve the reaction of SLAAC to renumbering events
+-		 * (draft-gont-6man-slaac-renum-06, Section 4.2)
+-		 */
++		/* update lifetime (RFC2462 5.5.3 e) */
+ 		spin_lock_bh(&ifp->lock);
+ 		now = jiffies;
+ 		if (ifp->valid_lft > (now - ifp->tstamp) / HZ)
+ 			stored_lft = ifp->valid_lft - (now - ifp->tstamp) / HZ;
+ 		else
+ 			stored_lft = 0;
+-
+ 		if (!create && stored_lft) {
++			const u32 minimum_lft = min_t(u32,
++				stored_lft, MIN_VALID_LIFETIME);
++			valid_lft = max(valid_lft, minimum_lft);
++
++			/* RFC4862 Section 5.5.3e:
++			 * "Note that the preferred lifetime of the
++			 *  corresponding address is always reset to
++			 *  the Preferred Lifetime in the received
++			 *  Prefix Information option, regardless of
++			 *  whether the valid lifetime is also reset or
++			 *  ignored."
++			 *
++			 * So we should always update prefered_lft here.
++			 */
++			update_lft = 1;
++		}
++
++		if (update_lft) {
+ 			ifp->valid_lft = valid_lft;
+ 			ifp->prefered_lft = prefered_lft;
+ 			ifp->tstamp = now;
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 0371d2c141455..a506e57c4032a 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -111,7 +111,7 @@ void fib6_update_sernum(struct net *net, struct fib6_info *f6i)
+ 	fn = rcu_dereference_protected(f6i->fib6_node,
+ 			lockdep_is_held(&f6i->fib6_table->tb6_lock));
+ 	if (fn)
+-		fn->fn_sernum = fib6_new_sernum(net);
++		WRITE_ONCE(fn->fn_sernum, fib6_new_sernum(net));
+ }
+ 
+ /*
+@@ -589,12 +589,13 @@ static int fib6_dump_table(struct fib6_table *table, struct sk_buff *skb,
+ 		spin_unlock_bh(&table->tb6_lock);
+ 		if (res > 0) {
+ 			cb->args[4] = 1;
+-			cb->args[5] = w->root->fn_sernum;
++			cb->args[5] = READ_ONCE(w->root->fn_sernum);
+ 		}
+ 	} else {
+-		if (cb->args[5] != w->root->fn_sernum) {
++		int sernum = READ_ONCE(w->root->fn_sernum);
++		if (cb->args[5] != sernum) {
+ 			/* Begin at the root if the tree changed */
+-			cb->args[5] = w->root->fn_sernum;
++			cb->args[5] = sernum;
+ 			w->state = FWS_INIT;
+ 			w->node = w->root;
+ 			w->skip = w->count;
+@@ -1344,7 +1345,7 @@ static void __fib6_update_sernum_upto_root(struct fib6_info *rt,
+ 	/* paired with smp_rmb() in fib6_get_cookie_safe() */
+ 	smp_wmb();
+ 	while (fn) {
+-		fn->fn_sernum = sernum;
++		WRITE_ONCE(fn->fn_sernum, sernum);
+ 		fn = rcu_dereference_protected(fn->parent,
+ 				lockdep_is_held(&rt->fib6_table->tb6_lock));
+ 	}
+@@ -2173,8 +2174,8 @@ static int fib6_clean_node(struct fib6_walker *w)
+ 	};
+ 
+ 	if (c->sernum != FIB6_NO_SERNUM_CHANGE &&
+-	    w->node->fn_sernum != c->sernum)
+-		w->node->fn_sernum = c->sernum;
++	    READ_ONCE(w->node->fn_sernum) != c->sernum)
++		WRITE_ONCE(w->node->fn_sernum, c->sernum);
+ 
+ 	if (!c->func) {
+ 		WARN_ON_ONCE(c->sernum == FIB6_NO_SERNUM_CHANGE);
+@@ -2542,7 +2543,7 @@ static void ipv6_route_seq_setup_walk(struct ipv6_route_iter *iter,
+ 	iter->w.state = FWS_INIT;
+ 	iter->w.node = iter->w.root;
+ 	iter->w.args = iter;
+-	iter->sernum = iter->w.root->fn_sernum;
++	iter->sernum = READ_ONCE(iter->w.root->fn_sernum);
+ 	INIT_LIST_HEAD(&iter->w.lh);
+ 	fib6_walker_link(net, &iter->w);
+ }
+@@ -2570,8 +2571,10 @@ static struct fib6_table *ipv6_route_seq_next_table(struct fib6_table *tbl,
+ 
+ static void ipv6_route_check_sernum(struct ipv6_route_iter *iter)
+ {
+-	if (iter->sernum != iter->w.root->fn_sernum) {
+-		iter->sernum = iter->w.root->fn_sernum;
++	int sernum = READ_ONCE(iter->w.root->fn_sernum);
++
++	if (iter->sernum != sernum) {
++		iter->sernum = sernum;
+ 		iter->w.state = FWS_INIT;
+ 		iter->w.node = iter->w.root;
+ 		WARN_ON(iter->w.skip);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 484aca492cc06..7c6a0bdb0e1ba 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1036,14 +1036,14 @@ int ip6_tnl_xmit_ctl(struct ip6_tnl *t,
+ 
+ 		if (unlikely(!ipv6_chk_addr_and_flags(net, laddr, ldev, false,
+ 						      0, IFA_F_TENTATIVE)))
+-			pr_warn("%s xmit: Local address not yet configured!\n",
+-				p->name);
++			pr_warn_ratelimited("%s xmit: Local address not yet configured!\n",
++					    p->name);
+ 		else if (!(p->flags & IP6_TNL_F_ALLOW_LOCAL_REMOTE) &&
+ 			 !ipv6_addr_is_multicast(raddr) &&
+ 			 unlikely(ipv6_chk_addr_and_flags(net, raddr, ldev,
+ 							  true, 0, IFA_F_TENTATIVE)))
+-			pr_warn("%s xmit: Routing loop! Remote address found on this node!\n",
+-				p->name);
++			pr_warn_ratelimited("%s xmit: Routing loop! Remote address found on this node!\n",
++					    p->name);
+ 		else
+ 			ret = 1;
+ 		rcu_read_unlock();
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 1deb6297aab66..49fee1f1951c2 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2802,7 +2802,7 @@ static void ip6_link_failure(struct sk_buff *skb)
+ 			if (from) {
+ 				fn = rcu_dereference(from->fib6_node);
+ 				if (fn && (rt->rt6i_flags & RTF_DEFAULT))
+-					fn->fn_sernum = -1;
++					WRITE_ONCE(fn->fn_sernum, -1);
+ 			}
+ 		}
+ 		rcu_read_unlock();
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 4712a90a1820c..7f79974607643 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1922,15 +1922,17 @@ repeat:
+ 		pr_debug("nf_conntrack_in: Can't track with proto module\n");
+ 		nf_conntrack_put(&ct->ct_general);
+ 		skb->_nfct = 0;
+-		NF_CT_STAT_INC_ATOMIC(state->net, invalid);
+-		if (ret == -NF_DROP)
+-			NF_CT_STAT_INC_ATOMIC(state->net, drop);
+ 		/* Special case: TCP tracker reports an attempt to reopen a
+ 		 * closed/aborted connection. We have to go back and create a
+ 		 * fresh conntrack.
+ 		 */
+ 		if (ret == -NF_REPEAT)
+ 			goto repeat;
++
++		NF_CT_STAT_INC_ATOMIC(state->net, invalid);
++		if (ret == -NF_DROP)
++			NF_CT_STAT_INC_ATOMIC(state->net, drop);
++
+ 		ret = -ret;
+ 		goto out;
+ 	}
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 76c2dca7f0a59..43eef5c712c1e 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1773,6 +1773,7 @@ static int fanout_add(struct sock *sk, struct fanout_args *args)
+ 		match->prot_hook.dev = po->prot_hook.dev;
+ 		match->prot_hook.func = packet_rcv_fanout;
+ 		match->prot_hook.af_packet_priv = match;
++		match->prot_hook.af_packet_net = read_pnet(&match->net);
+ 		match->prot_hook.id_match = match_fanout_group;
+ 		match->max_num_members = args->max_num_members;
+ 		list_add(&match->list, &fanout_list);
+@@ -3358,6 +3359,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ 		po->prot_hook.func = packet_rcv_spkt;
+ 
+ 	po->prot_hook.af_packet_priv = sk;
++	po->prot_hook.af_packet_net = sock_net(sk);
+ 
+ 	if (proto) {
+ 		po->prot_hook.type = proto;
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index 6be2672a65eab..df864e6922679 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -157,7 +157,7 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call)
+ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ {
+ 	struct sk_buff *skb;
+-	unsigned long resend_at, rto_j;
++	unsigned long resend_at;
+ 	rxrpc_seq_t cursor, seq, top;
+ 	ktime_t now, max_age, oldest, ack_ts;
+ 	int ix;
+@@ -165,10 +165,8 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ 
+ 	_enter("{%d,%d}", call->tx_hard_ack, call->tx_top);
+ 
+-	rto_j = call->peer->rto_j;
+-
+ 	now = ktime_get_real();
+-	max_age = ktime_sub(now, jiffies_to_usecs(rto_j));
++	max_age = ktime_sub(now, jiffies_to_usecs(call->peer->rto_j));
+ 
+ 	spin_lock_bh(&call->lock);
+ 
+@@ -213,7 +211,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ 	}
+ 
+ 	resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest)));
+-	resend_at += jiffies + rto_j;
++	resend_at += jiffies + rxrpc_get_rto_backoff(call->peer, retrans);
+ 	WRITE_ONCE(call->resend_at, resend_at);
+ 
+ 	if (unacked)
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 10f2bf2e9068a..a45c83f22236e 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -468,7 +468,7 @@ done:
+ 			if (call->peer->rtt_count > 1) {
+ 				unsigned long nowj = jiffies, ack_lost_at;
+ 
+-				ack_lost_at = rxrpc_get_rto_backoff(call->peer, retrans);
++				ack_lost_at = rxrpc_get_rto_backoff(call->peer, false);
+ 				ack_lost_at += nowj;
+ 				WRITE_ONCE(call->ack_lost_at, ack_lost_at);
+ 				rxrpc_reduce_call_timer(call, ack_lost_at, nowj,
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index 9267922ea9c37..23a9d6242429f 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -1810,6 +1810,26 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
+ 	if (!hopt->rate.rate || !hopt->ceil.rate)
+ 		goto failure;
+ 
++	if (q->offload) {
++		/* Options not supported by the offload. */
++		if (hopt->rate.overhead || hopt->ceil.overhead) {
++			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the overhead parameter");
++			goto failure;
++		}
++		if (hopt->rate.mpu || hopt->ceil.mpu) {
++			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the mpu parameter");
++			goto failure;
++		}
++		if (hopt->quantum) {
++			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the quantum parameter");
++			goto failure;
++		}
++		if (hopt->prio) {
++			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the prio parameter");
++			goto failure;
++		}
++	}
++
+ 	/* Keeping backward compatible with rate_table based iproute2 tc */
+ 	if (hopt->rate.linklayer == TC_LINKLAYER_UNAWARE)
+ 		qdisc_put_rtab(qdisc_get_rtab(&hopt->rate, tb[TCA_HTB_RTAB],
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 211cd91b6c408..85e077a69c67d 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -566,12 +566,17 @@ static void smc_stat_fallback(struct smc_sock *smc)
+ 	mutex_unlock(&net->smc.mutex_fback_rsn);
+ }
+ 
+-static void smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
++static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
+ {
+ 	wait_queue_head_t *smc_wait = sk_sleep(&smc->sk);
+-	wait_queue_head_t *clc_wait = sk_sleep(smc->clcsock->sk);
++	wait_queue_head_t *clc_wait;
+ 	unsigned long flags;
+ 
++	mutex_lock(&smc->clcsock_release_lock);
++	if (!smc->clcsock) {
++		mutex_unlock(&smc->clcsock_release_lock);
++		return -EBADF;
++	}
+ 	smc->use_fallback = true;
+ 	smc->fallback_rsn = reason_code;
+ 	smc_stat_fallback(smc);
+@@ -586,18 +591,30 @@ static void smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
+ 		 * smc socket->wq, which should be removed
+ 		 * to clcsocket->wq during the fallback.
+ 		 */
++		clc_wait = sk_sleep(smc->clcsock->sk);
+ 		spin_lock_irqsave(&smc_wait->lock, flags);
+ 		spin_lock_nested(&clc_wait->lock, SINGLE_DEPTH_NESTING);
+ 		list_splice_init(&smc_wait->head, &clc_wait->head);
+ 		spin_unlock(&clc_wait->lock);
+ 		spin_unlock_irqrestore(&smc_wait->lock, flags);
+ 	}
++	mutex_unlock(&smc->clcsock_release_lock);
++	return 0;
+ }
+ 
+ /* fall back during connect */
+ static int smc_connect_fallback(struct smc_sock *smc, int reason_code)
+ {
+-	smc_switch_to_fallback(smc, reason_code);
++	struct net *net = sock_net(&smc->sk);
++	int rc = 0;
++
++	rc = smc_switch_to_fallback(smc, reason_code);
++	if (rc) { /* fallback fails */
++		this_cpu_inc(net->smc.smc_stats->clnt_hshake_err_cnt);
++		if (smc->sk.sk_state == SMC_INIT)
++			sock_put(&smc->sk); /* passive closing */
++		return rc;
++	}
+ 	smc_copy_sock_settings_to_clc(smc);
+ 	smc->connect_nonblock = 0;
+ 	if (smc->sk.sk_state == SMC_INIT)
+@@ -1514,11 +1531,12 @@ static void smc_listen_decline(struct smc_sock *new_smc, int reason_code,
+ {
+ 	/* RDMA setup failed, switch back to TCP */
+ 	smc_conn_abort(new_smc, local_first);
+-	if (reason_code < 0) { /* error, no fallback possible */
++	if (reason_code < 0 ||
++	    smc_switch_to_fallback(new_smc, reason_code)) {
++		/* error, no fallback possible */
+ 		smc_listen_out_err(new_smc);
+ 		return;
+ 	}
+-	smc_switch_to_fallback(new_smc, reason_code);
+ 	if (reason_code && reason_code != SMC_CLC_DECL_PEERDECL) {
+ 		if (smc_clc_send_decline(new_smc, reason_code, version) < 0) {
+ 			smc_listen_out_err(new_smc);
+@@ -1960,8 +1978,11 @@ static void smc_listen_work(struct work_struct *work)
+ 
+ 	/* check if peer is smc capable */
+ 	if (!tcp_sk(newclcsock->sk)->syn_smc) {
+-		smc_switch_to_fallback(new_smc, SMC_CLC_DECL_PEERNOSMC);
+-		smc_listen_out_connected(new_smc);
++		rc = smc_switch_to_fallback(new_smc, SMC_CLC_DECL_PEERNOSMC);
++		if (rc)
++			smc_listen_out_err(new_smc);
++		else
++			smc_listen_out_connected(new_smc);
+ 		return;
+ 	}
+ 
+@@ -2250,7 +2271,9 @@ static int smc_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 
+ 	if (msg->msg_flags & MSG_FASTOPEN) {
+ 		if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
+-			smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
++			rc = smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
++			if (rc)
++				goto out;
+ 		} else {
+ 			rc = -EINVAL;
+ 			goto out;
+@@ -2443,6 +2466,11 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ 	/* generic setsockopts reaching us here always apply to the
+ 	 * CLC socket
+ 	 */
++	mutex_lock(&smc->clcsock_release_lock);
++	if (!smc->clcsock) {
++		mutex_unlock(&smc->clcsock_release_lock);
++		return -EBADF;
++	}
+ 	if (unlikely(!smc->clcsock->ops->setsockopt))
+ 		rc = -EOPNOTSUPP;
+ 	else
+@@ -2452,6 +2480,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ 		sk->sk_err = smc->clcsock->sk->sk_err;
+ 		sk_error_report(sk);
+ 	}
++	mutex_unlock(&smc->clcsock_release_lock);
+ 
+ 	if (optlen < sizeof(int))
+ 		return -EINVAL;
+@@ -2468,7 +2497,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ 	case TCP_FASTOPEN_NO_COOKIE:
+ 		/* option not supported by SMC */
+ 		if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
+-			smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
++			rc = smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
+ 		} else {
+ 			rc = -EINVAL;
+ 		}
+@@ -2511,13 +2540,23 @@ static int smc_getsockopt(struct socket *sock, int level, int optname,
+ 			  char __user *optval, int __user *optlen)
+ {
+ 	struct smc_sock *smc;
++	int rc;
+ 
+ 	smc = smc_sk(sock->sk);
++	mutex_lock(&smc->clcsock_release_lock);
++	if (!smc->clcsock) {
++		mutex_unlock(&smc->clcsock_release_lock);
++		return -EBADF;
++	}
+ 	/* socket options apply to the CLC socket */
+-	if (unlikely(!smc->clcsock->ops->getsockopt))
++	if (unlikely(!smc->clcsock->ops->getsockopt)) {
++		mutex_unlock(&smc->clcsock_release_lock);
+ 		return -EOPNOTSUPP;
+-	return smc->clcsock->ops->getsockopt(smc->clcsock, level, optname,
+-					     optval, optlen);
++	}
++	rc = smc->clcsock->ops->getsockopt(smc->clcsock, level, optname,
++					   optval, optlen);
++	mutex_unlock(&smc->clcsock_release_lock);
++	return rc;
+ }
+ 
+ static int smc_ioctl(struct socket *sock, unsigned int cmd,
+diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
+index ee5336d73fddc..35588f0afa864 100644
+--- a/net/sunrpc/rpc_pipe.c
++++ b/net/sunrpc/rpc_pipe.c
+@@ -600,9 +600,9 @@ static int __rpc_rmdir(struct inode *dir, struct dentry *dentry)
+ 
+ 	dget(dentry);
+ 	ret = simple_rmdir(dir, dentry);
++	d_drop(dentry);
+ 	if (!ret)
+ 		fsnotify_rmdir(dir, dentry);
+-	d_delete(dentry);
+ 	dput(dentry);
+ 	return ret;
+ }
+@@ -613,9 +613,9 @@ static int __rpc_unlink(struct inode *dir, struct dentry *dentry)
+ 
+ 	dget(dentry);
+ 	ret = simple_unlink(dir, dentry);
++	d_drop(dentry);
+ 	if (!ret)
+ 		fsnotify_unlink(dir, dentry);
+-	d_delete(dentry);
+ 	dput(dentry);
+ 	return ret;
+ }
+diff --git a/security/security.c b/security/security.c
+index c88167a414b41..64abdfb20bc2c 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -1056,8 +1056,19 @@ int security_dentry_init_security(struct dentry *dentry, int mode,
+ 				  const char **xattr_name, void **ctx,
+ 				  u32 *ctxlen)
+ {
+-	return call_int_hook(dentry_init_security, -EOPNOTSUPP, dentry, mode,
+-				name, xattr_name, ctx, ctxlen);
++	struct security_hook_list *hp;
++	int rc;
++
++	/*
++	 * Only one module will provide a security context.
++	 */
++	hlist_for_each_entry(hp, &security_hook_heads.dentry_init_security, list) {
++		rc = hp->hook.dentry_init_security(dentry, mode, name,
++						   xattr_name, ctx, ctxlen);
++		if (rc != LSM_RET_DEFAULT(dentry_init_security))
++			return rc;
++	}
++	return LSM_RET_DEFAULT(dentry_init_security);
+ }
+ EXPORT_SYMBOL(security_dentry_init_security);
+ 
+diff --git a/tools/testing/scatterlist/linux/mm.h b/tools/testing/scatterlist/linux/mm.h
+index 16ec895bbe5ff..5bd9e6e806254 100644
+--- a/tools/testing/scatterlist/linux/mm.h
++++ b/tools/testing/scatterlist/linux/mm.h
+@@ -74,7 +74,7 @@ static inline unsigned long page_to_phys(struct page *page)
+ 	      __UNIQUE_ID(min1_), __UNIQUE_ID(min2_),   \
+ 	      x, y)
+ 
+-#define preemptible() (1)
++#define pagefault_disabled() (0)
+ 
+ static inline void *kmap(struct page *page)
+ {
+@@ -127,6 +127,7 @@ kmalloc_array(unsigned int n, unsigned int size, unsigned int flags)
+ #define kmemleak_free(a)
+ 
+ #define PageSlab(p) (0)
++#define flush_dcache_page(p)
+ 
+ #define MAX_ERRNO	4095
+ 
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index 290b1b0552d6e..4fdfb42aeddba 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -77,6 +77,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
+ TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
+ TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
++TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
+ TEST_GEN_PROGS_x86_64 += demand_paging_test
+ TEST_GEN_PROGS_x86_64 += dirty_log_test
+ TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
+diff --git a/tools/testing/selftests/kvm/x86_64/smm_test.c b/tools/testing/selftests/kvm/x86_64/smm_test.c
+index d0fe2fdce58c4..db2a17559c3d5 100644
+--- a/tools/testing/selftests/kvm/x86_64/smm_test.c
++++ b/tools/testing/selftests/kvm/x86_64/smm_test.c
+@@ -105,7 +105,6 @@ static void guest_code(void *arg)
+ 
+ 		if (cpu_has_svm()) {
+ 			run_guest(svm->vmcb, svm->vmcb_gpa);
+-			svm->vmcb->save.rip += 3;
+ 			run_guest(svm->vmcb, svm->vmcb_gpa);
+ 		} else {
+ 			vmlaunch();
+diff --git a/usr/include/Makefile b/usr/include/Makefile
+index 1c2ae1368079d..adc6cb2587369 100644
+--- a/usr/include/Makefile
++++ b/usr/include/Makefile
+@@ -28,13 +28,13 @@ no-header-test += linux/am437x-vpfe.h
+ no-header-test += linux/android/binder.h
+ no-header-test += linux/android/binderfs.h
+ no-header-test += linux/coda.h
++no-header-test += linux/cyclades.h
+ no-header-test += linux/errqueue.h
+ no-header-test += linux/fsmap.h
+ no-header-test += linux/hdlc/ioctl.h
+ no-header-test += linux/ivtv.h
+ no-header-test += linux/kexec.h
+ no-header-test += linux/matroxfb.h
+-no-header-test += linux/nfc.h
+ no-header-test += linux/omap3isp.h
+ no-header-test += linux/omapfb.h
+ no-header-test += linux/patchkey.h
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 5bd62342c482b..71ddc7a8bc302 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2112,7 +2112,6 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
+ 
+ bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
+ {


* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-05 12:11 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-02-05 12:11 UTC (permalink / raw
  To: gentoo-commits

commit:     7df4e6b2c2e1f2f4a94b864a5c72fefd406995e2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb  5 12:11:40 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb  5 12:11:40 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7df4e6b2

Linux patch 5.16.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-5.16.6.patch | 1425 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1429 insertions(+)

diff --git a/0000_README b/0000_README
index f8c4cea5..0310fc8f 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-5.16.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.5
 
+Patch:  1005_linux-5.16.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-5.16.6.patch b/1005_linux-5.16.6.patch
new file mode 100644
index 00000000..4092e52b
--- /dev/null
+++ b/1005_linux-5.16.6.patch
@@ -0,0 +1,1425 @@
+diff --git a/Makefile b/Makefile
+index 2f0e5c3d9e2a7..2d7b7fe5cbad6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 8465914892fad..e6aad838065b9 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1739,18 +1739,18 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	u32 val;
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+-	if (ret)
+-		return ret;
++	if (enable) {
++		ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++		if (ret)
++			return ret;
+ 
+-	val = HDMI_READ(HDMI_CEC_CNTRL_5);
+-	val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
+-		 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
+-		 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
+-	val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
+-	       ((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
++		val = HDMI_READ(HDMI_CEC_CNTRL_5);
++		val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
++			 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
++			 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
++		val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
++			((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
+ 
+-	if (enable) {
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val);
+@@ -1778,7 +1778,10 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 			HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, VC4_HDMI_CPU_CEC);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
++
++		pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 	}
++
+ 	return 0;
+ }
+ 
+@@ -1889,8 +1892,6 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ 	if (ret < 0)
+ 		goto err_remove_handlers;
+ 
+-	pm_runtime_put(&vc4_hdmi->pdev->dev);
+-
+ 	return 0;
+ 
+ err_remove_handlers:
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index 30d24d19f40d1..7086d0e1e4558 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -721,7 +721,9 @@ static void xgbe_stop_timers(struct xgbe_prv_data *pdata)
+ 		if (!channel->tx_ring)
+ 			break;
+ 
++		/* Deactivate the Tx timer */
+ 		del_timer_sync(&channel->tx_timer);
++		channel->tx_timer_active = 0;
+ 	}
+ }
+ 
+@@ -2553,6 +2555,14 @@ read_again:
+ 			buf2_len = xgbe_rx_buf2_len(rdata, packet, len);
+ 			len += buf2_len;
+ 
++			if (buf2_len > rdata->rx.buf.dma_len) {
++				/* Hardware inconsistency within the descriptors
++				 * that has resulted in a length underflow.
++				 */
++				error = 1;
++				goto skip_data;
++			}
++
+ 			if (!skb) {
+ 				skb = xgbe_create_skb(pdata, napi, rdata,
+ 						      buf1_len);
+@@ -2582,8 +2592,10 @@ skip_data:
+ 		if (!last || context_next)
+ 			goto read_again;
+ 
+-		if (!skb)
++		if (!skb || error) {
++			dev_kfree_skb(skb);
+ 			goto next_packet;
++		}
+ 
+ 		/* Be sure we don't exceed the configured MTU */
+ 		max_len = netdev->mtu + ETH_HLEN;
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 44e2dc8328a22..85391ebf8714e 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -6345,7 +6345,8 @@ static void e1000e_s0ix_entry_flow(struct e1000_adapter *adapter)
+ 	u32 mac_data;
+ 	u16 phy_data;
+ 
+-	if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) {
++	if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
++	    hw->mac.type >= e1000_pch_adp) {
+ 		/* Request ME configure the device for S0ix */
+ 		mac_data = er32(H2ME);
+ 		mac_data |= E1000_H2ME_START_DPG;
+@@ -6494,7 +6495,8 @@ static void e1000e_s0ix_exit_flow(struct e1000_adapter *adapter)
+ 	u16 phy_data;
+ 	u32 i = 0;
+ 
+-	if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) {
++	if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
++	    hw->mac.type >= e1000_pch_adp) {
+ 		/* Request ME unconfigure the device from S0ix */
+ 		mac_data = er32(H2ME);
+ 		mac_data &= ~E1000_H2ME_START_DPG;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 2e02cc68cd3f7..80c5cecaf2b56 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -144,6 +144,7 @@ enum i40e_state_t {
+ 	__I40E_VIRTCHNL_OP_PENDING,
+ 	__I40E_RECOVERY_MODE,
+ 	__I40E_VF_RESETS_DISABLED,	/* disable resets during i40e_remove */
++	__I40E_IN_REMOVE,
+ 	__I40E_VFS_RELEASING,
+ 	/* This must be last as it determines the size of the BITMAP */
+ 	__I40E_STATE_SIZE__,
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f605c0205e4e7..d3af1457fa0dc 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5372,7 +5372,15 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 	/* There is no need to reset BW when mqprio mode is on.  */
+ 	if (pf->flags & I40E_FLAG_TC_MQPRIO)
+ 		return 0;
+-	if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
++
++	if (!vsi->mqprio_qopt.qopt.hw) {
++		if (pf->flags & I40E_FLAG_DCB_ENABLED)
++			goto skip_reset;
++
++		if (IS_ENABLED(CONFIG_I40E_DCB) &&
++		    i40e_dcb_hw_get_num_tc(&pf->hw) == 1)
++			goto skip_reset;
++
+ 		ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
+ 		if (ret)
+ 			dev_info(&pf->pdev->dev,
+@@ -5380,6 +5388,8 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 				 vsi->seid);
+ 		return ret;
+ 	}
++
++skip_reset:
+ 	memset(&bw_data, 0, sizeof(bw_data));
+ 	bw_data.tc_valid_bits = enabled_tc;
+ 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+@@ -10853,6 +10863,9 @@ static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit,
+ 				   bool lock_acquired)
+ {
+ 	int ret;
++
++	if (test_bit(__I40E_IN_REMOVE, pf->state))
++		return;
+ 	/* Now we wait for GRST to settle out.
+ 	 * We don't have to delete the VEBs or VSIs from the hw switch
+ 	 * because the reset will make them disappear.
+@@ -12212,6 +12225,8 @@ int i40e_reconfig_rss_queues(struct i40e_pf *pf, int queue_count)
+ 
+ 		vsi->req_queue_pairs = queue_count;
+ 		i40e_prep_for_reset(pf);
++		if (test_bit(__I40E_IN_REMOVE, pf->state))
++			return pf->alloc_rss_size;
+ 
+ 		pf->alloc_rss_size = new_rss_size;
+ 
+@@ -13038,6 +13053,10 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
+ 	if (need_reset)
+ 		i40e_prep_for_reset(pf);
+ 
++	/* VSI shall be deleted in a moment, just return EINVAL */
++	if (test_bit(__I40E_IN_REMOVE, pf->state))
++		return -EINVAL;
++
+ 	old_prog = xchg(&vsi->xdp_prog, prog);
+ 
+ 	if (need_reset) {
+@@ -15928,8 +15947,13 @@ static void i40e_remove(struct pci_dev *pdev)
+ 	i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), 0);
+ 	i40e_write_rx_ctl(hw, I40E_PFQF_HENA(1), 0);
+ 
+-	while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))
++	/* Grab __I40E_RESET_RECOVERY_PENDING and set __I40E_IN_REMOVE
++	 * flags, once they are set, i40e_rebuild should not be called as
++	 * i40e_prep_for_reset always returns early.
++	 */
++	while (test_and_set_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))
+ 		usleep_range(1000, 2000);
++	set_bit(__I40E_IN_REMOVE, pf->state);
+ 
+ 	if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
+ 		set_bit(__I40E_VF_RESETS_DISABLED, pf->state);
+@@ -16128,6 +16152,9 @@ static void i40e_pci_error_reset_done(struct pci_dev *pdev)
+ {
+ 	struct i40e_pf *pf = pci_get_drvdata(pdev);
+ 
++	if (test_bit(__I40E_IN_REMOVE, pf->state))
++		return;
++
+ 	i40e_reset_and_rebuild(pf, false, false);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index b47a0d3ef22fb..0952a58adad1f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -225,7 +225,7 @@ static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
+ struct mlx5e_tx_wqe {
+ 	struct mlx5_wqe_ctrl_seg ctrl;
+ 	struct mlx5_wqe_eth_seg  eth;
+-	struct mlx5_wqe_data_seg data[0];
++	struct mlx5_wqe_data_seg data[];
+ };
+ 
+ struct mlx5e_rx_wqe_ll {
+@@ -242,8 +242,8 @@ struct mlx5e_umr_wqe {
+ 	struct mlx5_wqe_umr_ctrl_seg   uctrl;
+ 	struct mlx5_mkey_seg           mkc;
+ 	union {
+-		struct mlx5_mtt inline_mtts[0];
+-		struct mlx5_klm inline_klms[0];
++		DECLARE_FLEX_ARRAY(struct mlx5_mtt, inline_mtts);
++		DECLARE_FLEX_ARRAY(struct mlx5_klm, inline_klms);
+ 	};
+ };
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+index 50977f01a0503..2c2a4ca4da307 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+@@ -569,7 +569,8 @@ static int mlx5e_htb_convert_rate(struct mlx5e_priv *priv, u64 rate,
+ 
+ static void mlx5e_htb_convert_ceil(struct mlx5e_priv *priv, u64 ceil, u32 *max_average_bw)
+ {
+-	*max_average_bw = div_u64(ceil, BYTES_IN_MBIT);
++	/* Hardware treats 0 as "unlimited", set at least 1. */
++	*max_average_bw = max_t(u32, div_u64(ceil, BYTES_IN_MBIT), 1);
+ 
+ 	qos_dbg(priv->mdev, "Convert: ceil %llu -> max_average_bw %u\n",
+ 		ceil, *max_average_bw);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+index 9c076aa20306a..b6f5c1bcdbcd4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+@@ -183,18 +183,7 @@ void mlx5e_rep_bond_unslave(struct mlx5_eswitch *esw,
+ 
+ static bool mlx5e_rep_is_lag_netdev(struct net_device *netdev)
+ {
+-	struct mlx5e_rep_priv *rpriv;
+-	struct mlx5e_priv *priv;
+-
+-	/* A given netdev is not a representor or not a slave of LAG configuration */
+-	if (!mlx5e_eswitch_rep(netdev) || !netif_is_lag_port(netdev))
+-		return false;
+-
+-	priv = netdev_priv(netdev);
+-	rpriv = priv->ppriv;
+-
+-	/* Egress acl forward to vport is supported only non-uplink representor */
+-	return rpriv->rep->vport != MLX5_VPORT_UPLINK;
++	return netif_is_lag_port(netdev) && mlx5e_eswitch_vf_rep(netdev);
+ }
+ 
+ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *ptr)
+@@ -210,9 +199,6 @@ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *pt
+ 	u16 fwd_vport_num;
+ 	int err;
+ 
+-	if (!mlx5e_rep_is_lag_netdev(netdev))
+-		return;
+-
+ 	info = ptr;
+ 	lag_info = info->lower_state_info;
+ 	/* This is not an event of a representor becoming active slave */
+@@ -266,9 +252,6 @@ static void mlx5e_rep_changeupper_event(struct net_device *netdev, void *ptr)
+ 	struct net_device *lag_dev;
+ 	struct mlx5e_priv *priv;
+ 
+-	if (!mlx5e_rep_is_lag_netdev(netdev))
+-		return;
+-
+ 	priv = netdev_priv(netdev);
+ 	rpriv = priv->ppriv;
+ 	lag_dev = info->upper_dev;
+@@ -293,6 +276,19 @@ static int mlx5e_rep_esw_bond_netevent(struct notifier_block *nb,
+ 				       unsigned long event, void *ptr)
+ {
+ 	struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
++	struct mlx5e_rep_priv *rpriv;
++	struct mlx5e_rep_bond *bond;
++	struct mlx5e_priv *priv;
++
++	if (!mlx5e_rep_is_lag_netdev(netdev))
++		return NOTIFY_DONE;
++
++	bond = container_of(nb, struct mlx5e_rep_bond, nb);
++	priv = netdev_priv(netdev);
++	rpriv = mlx5_eswitch_get_uplink_priv(priv->mdev->priv.eswitch, REP_ETH);
++	/* Verify VF representor is on the same device of the bond handling the netevent. */
++	if (rpriv->uplink_priv.bond != bond)
++		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+ 	case NETDEV_CHANGELOWERSTATE:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
+index c6d2f8c78db71..48dc121b2cb4c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
+@@ -491,7 +491,7 @@ void mlx5e_rep_bridge_init(struct mlx5e_priv *priv)
+ 	}
+ 
+ 	br_offloads->netdev_nb.notifier_call = mlx5_esw_bridge_switchdev_port_event;
+-	err = register_netdevice_notifier(&br_offloads->netdev_nb);
++	err = register_netdevice_notifier_net(&init_net, &br_offloads->netdev_nb);
+ 	if (err) {
+ 		esw_warn(mdev, "Failed to register bridge offloads netdevice notifier (err=%d)\n",
+ 			 err);
+@@ -509,7 +509,9 @@ err_register_swdev_blk:
+ err_register_swdev:
+ 	destroy_workqueue(br_offloads->wq);
+ err_alloc_wq:
++	rtnl_lock();
+ 	mlx5_esw_bridge_cleanup(esw);
++	rtnl_unlock();
+ }
+ 
+ void mlx5e_rep_bridge_cleanup(struct mlx5e_priv *priv)
+@@ -524,7 +526,7 @@ void mlx5e_rep_bridge_cleanup(struct mlx5e_priv *priv)
+ 		return;
+ 
+ 	cancel_delayed_work_sync(&br_offloads->update_work);
+-	unregister_netdevice_notifier(&br_offloads->netdev_nb);
++	unregister_netdevice_notifier_net(&init_net, &br_offloads->netdev_nb);
+ 	unregister_switchdev_blocking_notifier(&br_offloads->nb_blk);
+ 	unregister_switchdev_notifier(&br_offloads->nb);
+ 	destroy_workqueue(br_offloads->wq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+index 4cdf8e5b24c22..b789af07829c0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+@@ -167,6 +167,11 @@ static inline u16 mlx5e_txqsq_get_next_pi(struct mlx5e_txqsq *sq, u16 size)
+ 	return pi;
+ }
+ 
++static inline u16 mlx5e_shampo_get_cqe_header_index(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
++{
++	return be16_to_cpu(cqe->shampo.header_entry_index) & (rq->mpwqe.shampo->hd_per_wq - 1);
++}
++
+ struct mlx5e_shampo_umr {
+ 	u16 len;
+ };
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+index 2f0df5cc1a2d9..efae2444c26f1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+@@ -341,8 +341,10 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
+ 
+ 	/* copy the inline part if required */
+ 	if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) {
+-		memcpy(eseg->inline_hdr.start, xdptxd->data, MLX5E_XDP_MIN_INLINE);
++		memcpy(eseg->inline_hdr.start, xdptxd->data, sizeof(eseg->inline_hdr.start));
+ 		eseg->inline_hdr.sz = cpu_to_be16(MLX5E_XDP_MIN_INLINE);
++		memcpy(dseg, xdptxd->data + sizeof(eseg->inline_hdr.start),
++		       MLX5E_XDP_MIN_INLINE - sizeof(eseg->inline_hdr.start));
+ 		dma_len  -= MLX5E_XDP_MIN_INLINE;
+ 		dma_addr += MLX5E_XDP_MIN_INLINE;
+ 		dseg++;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+index 2db9573a3fe69..b56fea142c246 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+@@ -157,11 +157,20 @@ static void mlx5e_ipsec_set_swp(struct sk_buff *skb,
+ 	/* Tunnel mode */
+ 	if (mode == XFRM_MODE_TUNNEL) {
+ 		eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
+-		eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
+ 		if (xo->proto == IPPROTO_IPV6)
+ 			eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
+-		if (inner_ip_hdr(skb)->protocol == IPPROTO_UDP)
++
++		switch (xo->inner_ipproto) {
++		case IPPROTO_UDP:
+ 			eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
++			fallthrough;
++		case IPPROTO_TCP:
++			/* IP | ESP | IP | [TCP | UDP] */
++			eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
++			break;
++		default:
++			break;
++		}
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
+index b98db50c3418d..428881e0adcbe 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
+@@ -131,14 +131,17 @@ static inline bool
+ mlx5e_ipsec_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 				  struct mlx5_wqe_eth_seg *eseg)
+ {
+-	struct xfrm_offload *xo = xfrm_offload(skb);
++	u8 inner_ipproto;
+ 
+ 	if (!mlx5e_ipsec_eseg_meta(eseg))
+ 		return false;
+ 
+ 	eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
+-	if (xo->inner_ipproto) {
+-		eseg->cs_flags |= MLX5_ETH_WQE_L4_INNER_CSUM | MLX5_ETH_WQE_L3_INNER_CSUM;
++	inner_ipproto = xfrm_offload(skb)->inner_ipproto;
++	if (inner_ipproto) {
++		eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM;
++		if (inner_ipproto == IPPROTO_TCP || inner_ipproto == IPPROTO_UDP)
++			eseg->cs_flags |= MLX5_ETH_WQE_L4_INNER_CSUM;
+ 	} else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ 		eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
+ 		sq->stats->csum_partial_inner++;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index dfc6604b9538b..bf25d0aa74c3b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1116,7 +1116,7 @@ static void mlx5e_shampo_update_ipv6_udp_hdr(struct mlx5e_rq *rq, struct ipv6hdr
+ static void mlx5e_shampo_update_fin_psh_flags(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
+ 					      struct tcphdr *skb_tcp_hd)
+ {
+-	u16 header_index = be16_to_cpu(cqe->shampo.header_entry_index);
++	u16 header_index = mlx5e_shampo_get_cqe_header_index(rq, cqe);
+ 	struct tcphdr *last_tcp_hd;
+ 	void *last_hd_addr;
+ 
+@@ -1866,7 +1866,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
+ 	return skb;
+ }
+ 
+-static void
++static struct sk_buff *
+ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
+ 			  struct mlx5_cqe64 *cqe, u16 header_index)
+ {
+@@ -1890,7 +1890,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
+ 		skb = mlx5e_build_linear_skb(rq, hdr, frag_size, rx_headroom, head_size);
+ 
+ 		if (unlikely(!skb))
+-			return;
++			return NULL;
+ 
+ 		/* queue up for recycling/reuse */
+ 		page_ref_inc(head->page);
+@@ -1902,7 +1902,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
+ 				     ALIGN(head_size, sizeof(long)));
+ 		if (unlikely(!skb)) {
+ 			rq->stats->buff_alloc_err++;
+-			return;
++			return NULL;
+ 		}
+ 
+ 		prefetchw(skb->data);
+@@ -1913,9 +1913,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
+ 		skb->tail += head_size;
+ 		skb->len  += head_size;
+ 	}
+-	rq->hw_gro_data->skb = skb;
+-	NAPI_GRO_CB(skb)->count = 1;
+-	skb_shinfo(skb)->gso_size = mpwrq_get_cqe_byte_cnt(cqe) - head_size;
++	return skb;
+ }
+ 
+ static void
+@@ -1968,13 +1966,14 @@ mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index)
+ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ {
+ 	u16 data_bcnt		= mpwrq_get_cqe_byte_cnt(cqe) - cqe->shampo.header_size;
+-	u16 header_index	= be16_to_cpu(cqe->shampo.header_entry_index);
++	u16 header_index	= mlx5e_shampo_get_cqe_header_index(rq, cqe);
+ 	u32 wqe_offset		= be32_to_cpu(cqe->shampo.data_offset);
+ 	u16 cstrides		= mpwrq_get_cqe_consumed_strides(cqe);
+ 	u32 data_offset		= wqe_offset & (PAGE_SIZE - 1);
+ 	u32 cqe_bcnt		= mpwrq_get_cqe_byte_cnt(cqe);
+ 	u16 wqe_id		= be16_to_cpu(cqe->wqe_id);
+ 	u32 page_idx		= wqe_offset >> PAGE_SHIFT;
++	u16 head_size		= cqe->shampo.header_size;
+ 	struct sk_buff **skb	= &rq->hw_gro_data->skb;
+ 	bool flush		= cqe->shampo.flush;
+ 	bool match		= cqe->shampo.match;
+@@ -2007,9 +2006,16 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
+ 	}
+ 
+ 	if (!*skb) {
+-		mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index);
++		if (likely(head_size))
++			*skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index);
++		else
++			*skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe_bcnt, data_offset,
++								  page_idx);
+ 		if (unlikely(!*skb))
+ 			goto free_hd_entry;
++
++		NAPI_GRO_CB(*skb)->count = 1;
++		skb_shinfo(*skb)->gso_size = cqe_bcnt - head_size;
+ 	} else {
+ 		NAPI_GRO_CB(*skb)->count++;
+ 		if (NAPI_GRO_CB(*skb)->count == 2 &&
+@@ -2023,8 +2029,10 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
+ 		}
+ 	}
+ 
+-	di = &wi->umr.dma_info[page_idx];
+-	mlx5e_fill_skb_data(*skb, rq, di, data_bcnt, data_offset);
++	if (likely(head_size)) {
++		di = &wi->umr.dma_info[page_idx];
++		mlx5e_fill_skb_data(*skb, rq, di, data_bcnt, data_offset);
++	}
+ 
+ 	mlx5e_shampo_complete_rx_cqe(rq, cqe, cqe_bcnt, *skb);
+ 	if (flush)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 9b3adaccc9beb..eae37934cdf70 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1425,7 +1425,8 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+ 		if (err)
+ 			goto err_out;
+ 
+-		if (!attr->chain && esw_attr->int_port) {
++		if (!attr->chain && esw_attr->int_port &&
++		    attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
+ 			/* If decap route device is internal port, change the
+ 			 * source vport value in reg_c0 back to uplink just in
+ 			 * case the rule performs goto chain > 0. If we have a miss
+@@ -3420,6 +3421,18 @@ actions_match_supported(struct mlx5e_priv *priv,
+ 		return false;
+ 	}
+ 
++	if (!(~actions &
++	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
++		NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action");
++		return false;
++	}
++
++	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
++	    actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {
++		NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
++		return false;
++	}
++
+ 	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
+ 	    !modify_header_match_supported(priv, &parse_attr->spec, flow_action,
+ 					   actions, ct_flow, ct_clear, extack))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
+index f690f430f40f8..05e08cec5a8cf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
+@@ -1574,6 +1574,8 @@ struct mlx5_esw_bridge_offloads *mlx5_esw_bridge_init(struct mlx5_eswitch *esw)
+ {
+ 	struct mlx5_esw_bridge_offloads *br_offloads;
+ 
++	ASSERT_RTNL();
++
+ 	br_offloads = kvzalloc(sizeof(*br_offloads), GFP_KERNEL);
+ 	if (!br_offloads)
+ 		return ERR_PTR(-ENOMEM);
+@@ -1590,6 +1592,8 @@ void mlx5_esw_bridge_cleanup(struct mlx5_eswitch *esw)
+ {
+ 	struct mlx5_esw_bridge_offloads *br_offloads = esw->br_offloads;
+ 
++	ASSERT_RTNL();
++
+ 	if (!br_offloads)
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h
+index 3401188e0a602..51ac24e6ec3c3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h
+@@ -21,7 +21,7 @@ DECLARE_EVENT_CLASS(mlx5_esw_bridge_fdb_template,
+ 			    __field(unsigned int, used)
+ 			    ),
+ 		    TP_fast_assign(
+-			    strncpy(__entry->dev_name,
++			    strscpy(__entry->dev_name,
+ 				    netdev_name(fdb->dev),
+ 				    IFNAMSIZ);
+ 			    memcpy(__entry->addr, fdb->key.addr, ETH_ALEN);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 0b0234f9d694c..84dbe46d5ede6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -132,7 +132,7 @@ static void mlx5_stop_sync_reset_poll(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 
+-	del_timer(&fw_reset->timer);
++	del_timer_sync(&fw_reset->timer);
+ }
+ 
+ static void mlx5_sync_reset_clear_reset_requested(struct mlx5_core_dev *dev, bool poll_health)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+index d5e47630e2849..df58cba37930a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+@@ -121,12 +121,13 @@ u32 mlx5_chains_get_nf_ft_chain(struct mlx5_fs_chains *chains)
+ 
+ u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains)
+ {
+-	if (!mlx5_chains_prios_supported(chains))
+-		return 1;
+-
+ 	if (mlx5_chains_ignore_flow_level_supported(chains))
+ 		return UINT_MAX;
+ 
++	if (!chains->dev->priv.eswitch ||
++	    chains->dev->priv.eswitch->mode != MLX5_ESWITCH_OFFLOADS)
++		return 1;
++
+ 	/* We should get here only for eswitch case */
+ 	return FDB_TC_MAX_PRIO;
+ }
+@@ -211,7 +212,7 @@ static int
+ create_chain_restore(struct fs_chain *chain)
+ {
+ 	struct mlx5_eswitch *esw = chain->chains->dev->priv.eswitch;
+-	char modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)];
++	u8 modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {};
+ 	struct mlx5_fs_chains *chains = chain->chains;
+ 	enum mlx5e_tc_attr_to_reg chain_to_reg;
+ 	struct mlx5_modify_hdr *mod_hdr;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index 1ef2b6a848c10..7b16a1188aabb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -406,23 +406,24 @@ int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
+ 
+ 	switch (module_id) {
+ 	case MLX5_MODULE_ID_SFP:
+-		mlx5_sfp_eeprom_params_set(&query.i2c_address, &query.page, &query.offset);
++		mlx5_sfp_eeprom_params_set(&query.i2c_address, &query.page, &offset);
+ 		break;
+ 	case MLX5_MODULE_ID_QSFP:
+ 	case MLX5_MODULE_ID_QSFP_PLUS:
+ 	case MLX5_MODULE_ID_QSFP28:
+-		mlx5_qsfp_eeprom_params_set(&query.i2c_address, &query.page, &query.offset);
++		mlx5_qsfp_eeprom_params_set(&query.i2c_address, &query.page, &offset);
+ 		break;
+ 	default:
+ 		mlx5_core_err(dev, "Module ID not recognized: 0x%x\n", module_id);
+ 		return -EINVAL;
+ 	}
+ 
+-	if (query.offset + size > MLX5_EEPROM_PAGE_LENGTH)
++	if (offset + size > MLX5_EEPROM_PAGE_LENGTH)
+ 		/* Cross pages read, read until offset 256 in low page */
+-		size -= offset + size - MLX5_EEPROM_PAGE_LENGTH;
++		size = MLX5_EEPROM_PAGE_LENGTH - offset;
+ 
+ 	query.size = size;
++	query.offset = offset;
+ 
+ 	return mlx5_query_mcia(dev, &query, data);
+ }
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index c8f90cb1ee8f3..87e42db1b61e6 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -1069,21 +1069,33 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
+ 	u32 backlog;
+ 	int delta;
+ 
+-	if (!endpoint->replenish_enabled) {
++	if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags)) {
+ 		if (add_one)
+ 			atomic_inc(&endpoint->replenish_saved);
+ 		return;
+ 	}
+ 
++	/* If already active, just update the backlog */
++	if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags)) {
++		if (add_one)
++			atomic_inc(&endpoint->replenish_backlog);
++		return;
++	}
++
+ 	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
+ 		if (ipa_endpoint_replenish_one(endpoint))
+ 			goto try_again_later;
++
++	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
++
+ 	if (add_one)
+ 		atomic_inc(&endpoint->replenish_backlog);
+ 
+ 	return;
+ 
+ try_again_later:
++	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
++
+ 	/* The last one didn't succeed, so fix the backlog */
+ 	delta = add_one ? 2 : 1;
+ 	backlog = atomic_add_return(delta, &endpoint->replenish_backlog);
+@@ -1106,7 +1118,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
+ 	u32 max_backlog;
+ 	u32 saved;
+ 
+-	endpoint->replenish_enabled = true;
++	set_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
+ 	while ((saved = atomic_xchg(&endpoint->replenish_saved, 0)))
+ 		atomic_add(saved, &endpoint->replenish_backlog);
+ 
+@@ -1120,7 +1132,7 @@ static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint)
+ {
+ 	u32 backlog;
+ 
+-	endpoint->replenish_enabled = false;
++	clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
+ 	while ((backlog = atomic_xchg(&endpoint->replenish_backlog, 0)))
+ 		atomic_add(backlog, &endpoint->replenish_saved);
+ }
+@@ -1665,7 +1677,8 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
+ 		/* RX transactions require a single TRE, so the maximum
+ 		 * backlog is the same as the maximum outstanding TREs.
+ 		 */
+-		endpoint->replenish_enabled = false;
++		clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
++		clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
+ 		atomic_set(&endpoint->replenish_saved,
+ 			   gsi_channel_tre_max(gsi, endpoint->channel_id));
+ 		atomic_set(&endpoint->replenish_backlog, 0);
+diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
+index 0a859d10312dc..0313cdc607de3 100644
+--- a/drivers/net/ipa/ipa_endpoint.h
++++ b/drivers/net/ipa/ipa_endpoint.h
+@@ -40,6 +40,19 @@ enum ipa_endpoint_name {
+ 
+ #define IPA_ENDPOINT_MAX		32	/* Max supported by driver */
+ 
++/**
++ * enum ipa_replenish_flag:	RX buffer replenish flags
++ *
++ * @IPA_REPLENISH_ENABLED:	Whether receive buffer replenishing is enabled
++ * @IPA_REPLENISH_ACTIVE:	Whether replenishing is underway
++ * @IPA_REPLENISH_COUNT:	Number of defined replenish flags
++ */
++enum ipa_replenish_flag {
++	IPA_REPLENISH_ENABLED,
++	IPA_REPLENISH_ACTIVE,
++	IPA_REPLENISH_COUNT,	/* Number of flags (must be last) */
++};
++
+ /**
+  * struct ipa_endpoint - IPA endpoint information
+  * @ipa:		IPA pointer
+@@ -51,7 +64,7 @@ enum ipa_endpoint_name {
+  * @trans_tre_max:	Maximum number of TRE descriptors per transaction
+  * @evt_ring_id:	GSI event ring used by the endpoint
+  * @netdev:		Network device pointer, if endpoint uses one
+- * @replenish_enabled:	Whether receive buffer replenishing is enabled
++ * @replenish_flags:	Replenishing state flags
+  * @replenish_ready:	Number of replenish transactions without doorbell
+  * @replenish_saved:	Replenish requests held while disabled
+  * @replenish_backlog:	Number of buffers needed to fill hardware queue
+@@ -72,7 +85,7 @@ struct ipa_endpoint {
+ 	struct net_device *netdev;
+ 
+ 	/* Receive buffer replenishing for RX endpoints */
+-	bool replenish_enabled;
++	DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT);
+ 	u32 replenish_ready;
+ 	atomic_t replenish_saved;
+ 	atomic_t replenish_backlog;
+diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
+index b1c6c0fcb654f..f2989aac47a62 100644
+--- a/drivers/net/ipa/ipa_power.c
++++ b/drivers/net/ipa/ipa_power.c
+@@ -11,6 +11,8 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/bitops.h>
+ 
++#include "linux/soc/qcom/qcom_aoss.h"
++
+ #include "ipa.h"
+ #include "ipa_power.h"
+ #include "ipa_endpoint.h"
+@@ -64,6 +66,7 @@ enum ipa_power_flag {
+  * struct ipa_power - IPA power management information
+  * @dev:		IPA device pointer
+  * @core:		IPA core clock
++ * @qmp:		QMP handle for AOSS communication
+  * @spinlock:		Protects modem TX queue enable/disable
+  * @flags:		Boolean state flags
+  * @interconnect_count:	Number of elements in interconnect[]
+@@ -72,6 +75,7 @@ enum ipa_power_flag {
+ struct ipa_power {
+ 	struct device *dev;
+ 	struct clk *core;
++	struct qmp *qmp;
+ 	spinlock_t spinlock;	/* used with STOPPED/STARTED power flags */
+ 	DECLARE_BITMAP(flags, IPA_POWER_FLAG_COUNT);
+ 	u32 interconnect_count;
+@@ -382,6 +386,47 @@ void ipa_power_modem_queue_active(struct ipa *ipa)
+ 	clear_bit(IPA_POWER_FLAG_STARTED, ipa->power->flags);
+ }
+ 
++static int ipa_power_retention_init(struct ipa_power *power)
++{
++	struct qmp *qmp = qmp_get(power->dev);
++
++	if (IS_ERR(qmp)) {
++		if (PTR_ERR(qmp) == -EPROBE_DEFER)
++			return -EPROBE_DEFER;
++
++		/* We assume any other error means it's not defined/needed */
++		qmp = NULL;
++	}
++	power->qmp = qmp;
++
++	return 0;
++}
++
++static void ipa_power_retention_exit(struct ipa_power *power)
++{
++	qmp_put(power->qmp);
++	power->qmp = NULL;
++}
++
++/* Control register retention on power collapse */
++void ipa_power_retention(struct ipa *ipa, bool enable)
++{
++	static const char fmt[] = "{ class: bcm, res: ipa_pc, val: %c }";
++	struct ipa_power *power = ipa->power;
++	char buf[36];	/* Exactly enough for fmt[]; size a multiple of 4 */
++	int ret;
++
++	if (!power->qmp)
++		return;		/* Not needed on this platform */
++
++	(void)snprintf(buf, sizeof(buf), fmt, enable ? '1' : '0');
++
++	ret = qmp_send(power->qmp, buf, sizeof(buf));
++	if (ret)
++		dev_err(power->dev, "error %d sending QMP %sable request\n",
++			ret, enable ? "en" : "dis");
++}
++
+ int ipa_power_setup(struct ipa *ipa)
+ {
+ 	int ret;
+@@ -438,12 +483,18 @@ ipa_power_init(struct device *dev, const struct ipa_power_data *data)
+ 	if (ret)
+ 		goto err_kfree;
+ 
++	ret = ipa_power_retention_init(power);
++	if (ret)
++		goto err_interconnect_exit;
++
+ 	pm_runtime_set_autosuspend_delay(dev, IPA_AUTOSUSPEND_DELAY);
+ 	pm_runtime_use_autosuspend(dev);
+ 	pm_runtime_enable(dev);
+ 
+ 	return power;
+ 
++err_interconnect_exit:
++	ipa_interconnect_exit(power);
+ err_kfree:
+ 	kfree(power);
+ err_clk_put:
+@@ -460,6 +511,7 @@ void ipa_power_exit(struct ipa_power *power)
+ 
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_dont_use_autosuspend(dev);
++	ipa_power_retention_exit(power);
+ 	ipa_interconnect_exit(power);
+ 	kfree(power);
+ 	clk_put(clk);
+diff --git a/drivers/net/ipa/ipa_power.h b/drivers/net/ipa/ipa_power.h
+index 2151805d7fbb0..6f84f057a2095 100644
+--- a/drivers/net/ipa/ipa_power.h
++++ b/drivers/net/ipa/ipa_power.h
+@@ -40,6 +40,13 @@ void ipa_power_modem_queue_wake(struct ipa *ipa);
+  */
+ void ipa_power_modem_queue_active(struct ipa *ipa);
+ 
++/**
++ * ipa_power_retention() - Control register retention on power collapse
++ * @ipa:	IPA pointer
++ * @enable:	Whether retention should be enabled or disabled
++ */
++void ipa_power_retention(struct ipa *ipa, bool enable);
++
+ /**
+  * ipa_power_setup() - Set up IPA power management
+  * @ipa:	IPA pointer
+diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c
+index 856e55a080a7f..fe11910518d95 100644
+--- a/drivers/net/ipa/ipa_uc.c
++++ b/drivers/net/ipa/ipa_uc.c
+@@ -11,6 +11,7 @@
+ 
+ #include "ipa.h"
+ #include "ipa_uc.h"
++#include "ipa_power.h"
+ 
+ /**
+  * DOC:  The IPA embedded microcontroller
+@@ -154,6 +155,7 @@ static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
+ 	case IPA_UC_RESPONSE_INIT_COMPLETED:
+ 		if (ipa->uc_powered) {
+ 			ipa->uc_loaded = true;
++			ipa_power_retention(ipa, true);
+ 			pm_runtime_mark_last_busy(dev);
+ 			(void)pm_runtime_put_autosuspend(dev);
+ 			ipa->uc_powered = false;
+@@ -184,6 +186,9 @@ void ipa_uc_deconfig(struct ipa *ipa)
+ 
+ 	ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_1);
+ 	ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_0);
++	if (ipa->uc_loaded)
++		ipa_power_retention(ipa, false);
++
+ 	if (!ipa->uc_powered)
+ 		return;
+ 
+diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c
+index dae95d9a07e88..32eeed6728619 100644
+--- a/drivers/net/phy/at803x.c
++++ b/drivers/net/phy/at803x.c
+@@ -1688,19 +1688,19 @@ static int qca808x_read_status(struct phy_device *phydev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (phydev->link && phydev->speed == SPEED_2500)
+-		phydev->interface = PHY_INTERFACE_MODE_2500BASEX;
+-	else
+-		phydev->interface = PHY_INTERFACE_MODE_SMII;
+-
+-	/* generate seed as a lower random value to make PHY linked as SLAVE easily,
+-	 * except for master/slave configuration fault detected.
+-	 * the reason for not putting this code into the function link_change_notify is
+-	 * the corner case where the link partner is also the qca8081 PHY and the seed
+-	 * value is configured as the same value, the link can't be up and no link change
+-	 * occurs.
+-	 */
+-	if (!phydev->link) {
++	if (phydev->link) {
++		if (phydev->speed == SPEED_2500)
++			phydev->interface = PHY_INTERFACE_MODE_2500BASEX;
++		else
++			phydev->interface = PHY_INTERFACE_MODE_SGMII;
++	} else {
++		/* generate seed as a lower random value to make PHY linked as SLAVE easily,
++		 * except for master/slave configuration fault detected.
++		 * the reason for not putting this code into the function link_change_notify is
++		 * the corner case where the link partner is also the qca8081 PHY and the seed
++		 * value is configured as the same value, the link can't be up and no link change
++		 * occurs.
++		 */
+ 		if (phydev->master_slave_state == MASTER_SLAVE_STATE_ERR) {
+ 			qca808x_phy_ms_seed_enable(phydev, false);
+ 		} else {
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index cd33955df0b65..6a769df0b4213 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -121,7 +121,7 @@ static int ipheth_alloc_urbs(struct ipheth_device *iphone)
+ 	if (tx_buf == NULL)
+ 		goto free_rx_urb;
+ 
+-	rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE,
++	rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN,
+ 				    GFP_KERNEL, &rx_urb->transfer_dma);
+ 	if (rx_buf == NULL)
+ 		goto free_tx_buf;
+@@ -146,7 +146,7 @@ error_nomem:
+ 
+ static void ipheth_free_urbs(struct ipheth_device *iphone)
+ {
+-	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->rx_buf,
++	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN, iphone->rx_buf,
+ 			  iphone->rx_urb->transfer_dma);
+ 	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->tx_buf,
+ 			  iphone->tx_urb->transfer_dma);
+@@ -317,7 +317,7 @@ static int ipheth_rx_submit(struct ipheth_device *dev, gfp_t mem_flags)
+ 
+ 	usb_fill_bulk_urb(dev->rx_urb, udev,
+ 			  usb_rcvbulkpipe(udev, dev->bulk_in),
+-			  dev->rx_buf, IPHETH_BUF_SIZE,
++			  dev->rx_buf, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN,
+ 			  ipheth_rcvbulk_callback,
+ 			  dev);
+ 	dev->rx_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 963fb50528da1..1d3108e6c1284 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -642,6 +642,8 @@ read_status:
+ 	 */
+ 	if (ctrl->power_fault_detected)
+ 		status &= ~PCI_EXP_SLTSTA_PFD;
++	else if (status & PCI_EXP_SLTSTA_PFD)
++		ctrl->power_fault_detected = true;
+ 
+ 	events |= status;
+ 	if (!events) {
+@@ -651,7 +653,7 @@ read_status:
+ 	}
+ 
+ 	if (status) {
+-		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
++		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, status);
+ 
+ 		/*
+ 		 * In MSI mode, all event bits must be zero before the port
+@@ -725,8 +727,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 	}
+ 
+ 	/* Check Power Fault Detected */
+-	if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
+-		ctrl->power_fault_detected = 1;
++	if (events & PCI_EXP_SLTSTA_PFD) {
+ 		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
+ 		pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+ 				      PCI_EXP_SLTCTL_ATTN_IND_ON);
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index cb3a7512c33ec..0a22a2faf5522 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -179,19 +179,21 @@ nlm_delete_file(struct nlm_file *file)
+ static int nlm_unlock_files(struct nlm_file *file)
+ {
+ 	struct file_lock lock;
+-	struct file *f;
+ 
++	locks_init_lock(&lock);
+ 	lock.fl_type  = F_UNLCK;
+ 	lock.fl_start = 0;
+ 	lock.fl_end   = OFFSET_MAX;
+-	for (f = file->f_file[0]; f <= file->f_file[1]; f++) {
+-		if (f && vfs_lock_file(f, F_SETLK, &lock, NULL) < 0) {
+-			pr_warn("lockd: unlock failure in %s:%d\n",
+-				__FILE__, __LINE__);
+-			return 1;
+-		}
+-	}
++	if (file->f_file[O_RDONLY] &&
++	    vfs_lock_file(file->f_file[O_RDONLY], F_SETLK, &lock, NULL))
++		goto out_err;
++	if (file->f_file[O_WRONLY] &&
++	    vfs_lock_file(file->f_file[O_WRONLY], F_SETLK, &lock, NULL))
++		goto out_err;
+ 	return 0;
++out_err:
++	pr_warn("lockd: unlock failure in %s:%d\n", __FILE__, __LINE__);
++	return 1;
+ }
+ 
+ /*
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 559bc1e9926d6..f98a737ee8636 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -656,9 +656,6 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 	if (fanotify_is_perm_event(event->mask))
+ 		FANOTIFY_PERM(event)->fd = fd;
+ 
+-	if (f)
+-		fd_install(fd, f);
+-
+ 	if (info_mode) {
+ 		ret = copy_info_records_to_user(event, info, info_mode, pidfd,
+ 						buf, count);
+@@ -666,6 +663,9 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ 			goto out_close_fd;
+ 	}
+ 
++	if (f)
++		fd_install(fd, f);
++
+ 	return metadata.event_len;
+ 
+ out_close_fd:
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index b193d08a3dc36..e040970408d4f 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -145,7 +145,7 @@ static int ovl_copy_fileattr(struct inode *inode, struct path *old,
+ 		if (err == -ENOTTY || err == -EINVAL)
+ 			return 0;
+ 		pr_warn("failed to retrieve lower fileattr (%pd2, err=%i)\n",
+-			old, err);
++			old->dentry, err);
+ 		return err;
+ 	}
+ 
+@@ -157,7 +157,9 @@ static int ovl_copy_fileattr(struct inode *inode, struct path *old,
+ 	 */
+ 	if (oldfa.flags & OVL_PROT_FS_FLAGS_MASK) {
+ 		err = ovl_set_protattr(inode, new->dentry, &oldfa);
+-		if (err)
++		if (err == -EPERM)
++			pr_warn_once("copying fileattr: no xattr on upper\n");
++		else if (err)
+ 			return err;
+ 	}
+ 
+@@ -167,8 +169,16 @@ static int ovl_copy_fileattr(struct inode *inode, struct path *old,
+ 
+ 	err = ovl_real_fileattr_get(new, &newfa);
+ 	if (err) {
++		/*
++		 * Returning an error if upper doesn't support fileattr will
++		 * result in a regression, so revert to the old behavior.
++		 */
++		if (err == -ENOTTY || err == -EINVAL) {
++			pr_warn_once("copying fileattr: no support on upper\n");
++			return 0;
++		}
+ 		pr_warn("failed to retrieve upper fileattr (%pd2, err=%i)\n",
+-			new, err);
++			new->dentry, err);
+ 		return err;
+ 	}
+ 
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index e98de5e73ba59..507d065905792 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -542,11 +542,12 @@ static __always_inline u64 notrace bpf_prog_start_time(void)
+ static void notrace inc_misses_counter(struct bpf_prog *prog)
+ {
+ 	struct bpf_prog_stats *stats;
++	unsigned int flags;
+ 
+ 	stats = this_cpu_ptr(prog->stats);
+-	u64_stats_update_begin(&stats->syncp);
++	flags = u64_stats_update_begin_irqsave(&stats->syncp);
+ 	u64_stats_inc(&stats->misses);
+-	u64_stats_update_end(&stats->syncp);
++	u64_stats_update_end_irqrestore(&stats->syncp, flags);
+ }
+ 
+ /* The logic is similar to bpf_prog_run(), but with an explicit
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 41e0837a5a0bd..0e877dbcfeea9 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -549,6 +549,14 @@ static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
+ 
+ 	BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
+ 
++	/*
++	 * Release agent gets called with all capabilities,
++	 * require capabilities to set release agent.
++	 */
++	if ((of->file->f_cred->user_ns != &init_user_ns) ||
++	    !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
+ 	cgrp = cgroup_kn_lock_live(of->kn, false);
+ 	if (!cgrp)
+ 		return -ENODEV;
+@@ -954,6 +962,12 @@ int cgroup1_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 		/* Specifying two release agents is forbidden */
+ 		if (ctx->release_agent)
+ 			return invalfc(fc, "release_agent respecified");
++		/*
++		 * Release agent gets called with all capabilities,
++		 * require capabilities to set release agent.
++		 */
++		if ((fc->user_ns != &init_user_ns) || !capable(CAP_SYS_ADMIN))
++			return invalfc(fc, "Setting release_agent not allowed");
+ 		ctx->release_agent = param->string;
+ 		param->string = NULL;
+ 		break;
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index d0e163a020997..ff8f2f522eb55 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1615,8 +1615,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ 	 * Make sure that subparts_cpus is a subset of cpus_allowed.
+ 	 */
+ 	if (cs->nr_subparts_cpus) {
+-		cpumask_andnot(cs->subparts_cpus, cs->subparts_cpus,
+-			       cs->cpus_allowed);
++		cpumask_and(cs->subparts_cpus, cs->subparts_cpus, cs->cpus_allowed);
+ 		cs->nr_subparts_cpus = cpumask_weight(cs->subparts_cpus);
+ 	}
+ 	spin_unlock_irq(&callback_lock);
+diff --git a/mm/gup.c b/mm/gup.c
+index 2c51e9748a6a5..37087529bb954 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -124,8 +124,8 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
+  * considered failure, and furthermore, a likely bug in the caller, so a warning
+  * is also emitted.
+  */
+-struct page *try_grab_compound_head(struct page *page,
+-				    int refs, unsigned int flags)
++__maybe_unused struct page *try_grab_compound_head(struct page *page,
++						   int refs, unsigned int flags)
+ {
+ 	if (flags & FOLL_GET)
+ 		return try_get_compound_head(page, refs);
+@@ -208,10 +208,35 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
+  */
+ bool __must_check try_grab_page(struct page *page, unsigned int flags)
+ {
+-	if (!(flags & (FOLL_GET | FOLL_PIN)))
+-		return true;
++	WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == (FOLL_GET | FOLL_PIN));
+ 
+-	return try_grab_compound_head(page, 1, flags);
++	if (flags & FOLL_GET)
++		return try_get_page(page);
++	else if (flags & FOLL_PIN) {
++		int refs = 1;
++
++		page = compound_head(page);
++
++		if (WARN_ON_ONCE(page_ref_count(page) <= 0))
++			return false;
++
++		if (hpage_pincount_available(page))
++			hpage_pincount_add(page, 1);
++		else
++			refs = GUP_PIN_COUNTING_BIAS;
++
++		/*
++		 * Similar to try_grab_compound_head(): even if using the
++		 * hpage_pincount_add/_sub() routines, be sure to
++		 * *also* increment the normal page refcount field at least
++		 * once, so that the page really is pinned.
++		 */
++		page_ref_add(page, refs);
++
++		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED, 1);
++	}
++
++	return true;
+ }
+ 
+ /**
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 2af8aeeadadf0..abab13633f845 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3254,8 +3254,8 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	struct nlattr *slave_attr[RTNL_SLAVE_MAX_TYPE + 1];
+ 	unsigned char name_assign_type = NET_NAME_USER;
+ 	struct nlattr *linkinfo[IFLA_INFO_MAX + 1];
+-	const struct rtnl_link_ops *m_ops = NULL;
+-	struct net_device *master_dev = NULL;
++	const struct rtnl_link_ops *m_ops;
++	struct net_device *master_dev;
+ 	struct net *net = sock_net(skb->sk);
+ 	const struct rtnl_link_ops *ops;
+ 	struct nlattr *tb[IFLA_MAX + 1];
+@@ -3293,6 +3293,8 @@ replay:
+ 	else
+ 		dev = NULL;
+ 
++	master_dev = NULL;
++	m_ops = NULL;
+ 	if (dev) {
+ 		master_dev = netdev_master_upper_dev_get(dev);
+ 		if (master_dev)
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 2bb28bfd83bf6..94cbba9fb12b1 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1321,10 +1321,13 @@ new_segment:
+ 
+ 			/* skb changing from pure zc to mixed, must charge zc */
+ 			if (unlikely(skb_zcopy_pure(skb))) {
+-				if (!sk_wmem_schedule(sk, skb->data_len))
++				u32 extra = skb->truesize -
++					    SKB_TRUESIZE(skb_end_offset(skb));
++
++				if (!sk_wmem_schedule(sk, extra))
+ 					goto wait_for_space;
+ 
+-				sk_mem_charge(sk, skb->data_len);
++				sk_mem_charge(sk, extra);
+ 				skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY;
+ 			}
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 0ce46849ec3d4..2b8e84d246bdc 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1660,6 +1660,8 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
+ 	    (mss != tcp_skb_seglen(skb)))
+ 		goto out;
+ 
++	if (!tcp_skb_can_collapse(prev, skb))
++		goto out;
+ 	len = skb->len;
+ 	pcount = tcp_skb_pcount(skb);
+ 	if (tcp_skb_shift(prev, skb, pcount, len))
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 43eef5c712c1e..fe9b4c04744a2 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1788,7 +1788,10 @@ static int fanout_add(struct sock *sk, struct fanout_args *args)
+ 		err = -ENOSPC;
+ 		if (refcount_read(&match->sk_ref) < match->max_num_members) {
+ 			__dev_remove_pack(&po->prot_hook);
+-			po->fanout = match;
++
++			/* Paired with packet_setsockopt(PACKET_FANOUT_DATA) */
++			WRITE_ONCE(po->fanout, match);
++
+ 			po->rollover = rollover;
+ 			rollover = NULL;
+ 			refcount_set(&match->sk_ref, refcount_read(&match->sk_ref) + 1);
+@@ -3941,7 +3944,8 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 	}
+ 	case PACKET_FANOUT_DATA:
+ 	{
+-		if (!po->fanout)
++		/* Paired with the WRITE_ONCE() in fanout_add() */
++		if (!READ_ONCE(po->fanout))
+ 			return -EINVAL;
+ 
+ 		return fanout_set_data(po, optval, optlen);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index cc9409aa755eb..56dba8519d7c3 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1945,9 +1945,9 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	bool prio_allocate;
+ 	u32 parent;
+ 	u32 chain_index;
+-	struct Qdisc *q = NULL;
++	struct Qdisc *q;
+ 	struct tcf_chain_info chain_info;
+-	struct tcf_chain *chain = NULL;
++	struct tcf_chain *chain;
+ 	struct tcf_block *block;
+ 	struct tcf_proto *tp;
+ 	unsigned long cl;
+@@ -1976,6 +1976,8 @@ replay:
+ 	tp = NULL;
+ 	cl = 0;
+ 	block = NULL;
++	q = NULL;
++	chain = NULL;
+ 	flags = 0;
+ 
+ 	if (prio == 0) {
+@@ -2798,8 +2800,8 @@ static int tc_ctl_chain(struct sk_buff *skb, struct nlmsghdr *n,
+ 	struct tcmsg *t;
+ 	u32 parent;
+ 	u32 chain_index;
+-	struct Qdisc *q = NULL;
+-	struct tcf_chain *chain = NULL;
++	struct Qdisc *q;
++	struct tcf_chain *chain;
+ 	struct tcf_block *block;
+ 	unsigned long cl;
+ 	int err;
+@@ -2809,6 +2811,7 @@ static int tc_ctl_chain(struct sk_buff *skb, struct nlmsghdr *n,
+ 		return -EPERM;
+ 
+ replay:
++	q = NULL;
+ 	err = nlmsg_parse_deprecated(n, sizeof(*t), tca, TCA_MAX,
+ 				     rtm_tca_policy, extack);
+ 	if (err < 0)
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 7ef639a9d4a6f..f06dc9dfe15eb 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -75,6 +75,7 @@ init()
+ 
+ 		# let $ns2 reach any $ns1 address from any interface
+ 		ip -net "$ns2" route add default via 10.0.$i.1 dev ns2eth$i metric 10$i
++		ip -net "$ns2" route add default via dead:beef:$i::1 dev ns2eth$i metric 10$i
+ 	done
+ }
+ 
+@@ -1386,7 +1387,7 @@ ipv6_tests()
+ 	reset
+ 	ip netns exec $ns1 ./pm_nl_ctl limits 0 1
+ 	ip netns exec $ns2 ./pm_nl_ctl limits 0 1
+-	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 flags subflow
++	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 dev ns2eth3 flags subflow
+ 	run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
+ 	chk_join_nr "single subflow IPv6" 1 1 1
+ 
+@@ -1421,7 +1422,7 @@ ipv6_tests()
+ 	ip netns exec $ns1 ./pm_nl_ctl limits 0 2
+ 	ip netns exec $ns1 ./pm_nl_ctl add dead:beef:2::1 flags signal
+ 	ip netns exec $ns2 ./pm_nl_ctl limits 1 2
+-	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 flags subflow
++	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 dev ns2eth3 flags subflow
+ 	run_tests $ns1 $ns2 dead:beef:1::1 0 -1 -1 slow
+ 	chk_join_nr "remove subflow and signal IPv6" 2 2 2
+ 	chk_add_nr 1 1



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-05 19:02 Mike Pagano
From: Mike Pagano @ 2022-02-05 19:02 UTC
  To: gentoo-commits

commit:     d31cb26d558943c7677b45bbe9f5d7787ac7f9ab
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb  5 19:02:14 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb  5 19:02:14 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d31cb26d

Linux patch 5.16.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |  4 ++++
 1006_linux-5.16.7.patch | 57 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/0000_README b/0000_README
index 0310fc8f..2af50adb 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-5.16.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.6
 
+Patch:  1006_linux-5.16.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-5.16.7.patch b/1006_linux-5.16.7.patch
new file mode 100644
index 00000000..2b9c7f5f
--- /dev/null
+++ b/1006_linux-5.16.7.patch
@@ -0,0 +1,57 @@
+diff --git a/Makefile b/Makefile
+index 2d7b7fe5cbad6..b642e5650c0b1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index e6aad838065b9..c000946996edb 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1736,21 +1736,15 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	struct vc4_hdmi *vc4_hdmi = cec_get_drvdata(adap);
+ 	/* clock period in microseconds */
+ 	const u32 usecs = 1000000 / CEC_CLOCK_FREQ;
+-	u32 val;
+-	int ret;
+-
+-	if (enable) {
+-		ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+-		if (ret)
+-			return ret;
++	u32 val = HDMI_READ(HDMI_CEC_CNTRL_5);
+ 
+-		val = HDMI_READ(HDMI_CEC_CNTRL_5);
+-		val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
+-			 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
+-			 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
+-		val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
+-			((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
++	val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
++		 VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
++		 VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
++	val |= ((4700 / usecs) << VC4_HDMI_CEC_CNT_TO_4700_US_SHIFT) |
++	       ((4500 / usecs) << VC4_HDMI_CEC_CNT_TO_4500_US_SHIFT);
+ 
++	if (enable) {
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val);
+@@ -1778,10 +1772,7 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 			HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, VC4_HDMI_CPU_CEC);
+ 		HDMI_WRITE(HDMI_CEC_CNTRL_5, val |
+ 			   VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET);
+-
+-		pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 	}
+-
+ 	return 0;
+ }
+ 



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-08 13:34 Mike Pagano
From: Mike Pagano @ 2022-02-08 13:34 UTC
  To: gentoo-commits

commit:     a8586fe04190d60e515a4a0a05e58a3a3ef5d46b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb  8 13:33:42 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb  8 13:33:42 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a8586fe0

iwlwifi: fix use-after-free

Bug: https://bugs.gentoo.org/832795

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                           |  4 ++++
 2410_iwlwifi-fix-use-after-free.patch | 37 +++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/0000_README b/0000_README
index 2af50adb..735f80d8 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
 From:   https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
 Desc:   mt76: mt7921e: fix possible probe failure after reboot
 
+Patch:  2410_iwlwifi-fix-use-after-free.patch
+From:   https://marc.info/?l=linux-wireless&m=164431994900440&w=2
+Desc:   iwlwifi: fix use-after-free
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2410_iwlwifi-fix-use-after-free.patch b/2410_iwlwifi-fix-use-after-free.patch
new file mode 100644
index 00000000..4c94467b
--- /dev/null
+++ b/2410_iwlwifi-fix-use-after-free.patch
@@ -0,0 +1,37 @@
+If no firmware was present at all (or, presumably, all of the
+firmware files failed to parse), we end up unbinding by calling
+device_release_driver(), which calls remove(), which then in
+iwlwifi calls iwl_drv_stop(), freeing the 'drv' struct. However
+the new code I added will still erroneously access it after it
+was freed.
+
+Set 'failure=false' in this case to avoid the access, all data
+was already freed anyway.
+
+Cc: stable@vger.kernel.org
+Reported-by: Stefan Agner <stefan@agner.ch>
+Reported-by: Wolfgang Walter <linux@stwm.de>
+Reported-by: Jason Self <jason@bluehome.net>
+Reported-by: Dominik Behr <dominik@dominikbehr.com>
+Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
+Fixes: ab07506b0454 ("iwlwifi: fix leaks/bad data after failed firmware load")
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+---
+ drivers/net/wireless/intel/iwlwifi/iwl-drv.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index 83e3b731ad29..6651e78b39ec 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1707,6 +1707,8 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+  out_unbind:
+ 	complete(&drv->request_firmware_complete);
+ 	device_release_driver(drv->trans->dev);
++	/* drv has just been freed by the release */
++	failure = false;
+  free:
+ 	if (failure)
+ 		iwl_dealloc_ucode(drv);
+-- 
+2.34.1



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-08 18:19 Mike Pagano
From: Mike Pagano @ 2022-02-08 18:19 UTC
  To: gentoo-commits

commit:     2f51148c7195ea86716547fd62f0f85cd9b4ce44
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb  8 18:18:59 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb  8 18:18:59 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2f51148c

Linux patch 5.16.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1007_linux-5.16.8.patch | 5970 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5974 insertions(+)

diff --git a/0000_README b/0000_README
index 735f80d8..13f566fe 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-5.16.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.7
 
+Patch:  1007_linux-5.16.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-5.16.8.patch b/1007_linux-5.16.8.patch
new file mode 100644
index 00000000..2a4a9628
--- /dev/null
+++ b/1007_linux-5.16.8.patch
@@ -0,0 +1,5970 @@
+diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
+index 60d1d7ee07197..01fb97dd2dd06 100644
+--- a/Documentation/gpu/todo.rst
++++ b/Documentation/gpu/todo.rst
+@@ -311,30 +311,6 @@ Contact: Daniel Vetter, Noralf Tronnes
+ 
+ Level: Advanced
+ 
+-Garbage collect fbdev scrolling acceleration
+---------------------------------------------
+-
+-Scroll acceleration has been disabled in fbcon. Now it works as the old
+-SCROLL_REDRAW mode. A ton of code was removed in fbcon.c and the hook bmove was
+-removed from fbcon_ops.
+-Remaining tasks:
+-
+-- a bunch of the hooks in fbcon_ops could be removed or simplified by calling
+-  directly instead of the function table (with a switch on p->rotate)
+-
+-- fb_copyarea is unused after this, and can be deleted from all drivers
+-
+-- after that, fb_copyarea can be deleted from fb_ops in include/linux/fb.h as
+-  well as cfb_copyarea
+-
+-Note that not all acceleration code can be deleted, since clearing and cursor
+-support is still accelerated, which might be good candidates for further
+-deletion projects.
+-
+-Contact: Daniel Vetter
+-
+-Level: Intermediate
+-
+ idr_init_base()
+ ---------------
+ 
+diff --git a/Makefile b/Makefile
+index b642e5650c0b1..0cbab4df51b92 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 19b8441aa8f26..e8fdc10395b6a 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -73,6 +73,7 @@
+ #define ARM_CPU_PART_CORTEX_A76		0xD0B
+ #define ARM_CPU_PART_NEOVERSE_N1	0xD0C
+ #define ARM_CPU_PART_CORTEX_A77		0xD0D
++#define ARM_CPU_PART_CORTEX_A510	0xD46
+ #define ARM_CPU_PART_CORTEX_A710	0xD47
+ #define ARM_CPU_PART_NEOVERSE_N2	0xD49
+ 
+@@ -115,6 +116,7 @@
+ #define MIDR_CORTEX_A76	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
+ #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
+ #define MIDR_CORTEX_A77	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
++#define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+ #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
+ #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index e4727dc771bf3..b2222d8eb0b55 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -764,6 +764,24 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
+ 			xfer_to_guest_mode_work_pending();
+ }
+ 
++/*
++ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
++ * the vCPU is running.
++ *
++ * This must be noinstr as instrumentation may make use of RCU, and this is not
++ * safe during the EQS.
++ */
++static int noinstr kvm_arm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
++{
++	int ret;
++
++	guest_state_enter_irqoff();
++	ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
++	guest_state_exit_irqoff();
++
++	return ret;
++}
++
+ /**
+  * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
+  * @vcpu:	The VCPU pointer
+@@ -854,9 +872,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ 		 * Enter the guest
+ 		 */
+ 		trace_kvm_entry(*vcpu_pc(vcpu));
+-		guest_enter_irqoff();
++		guest_timing_enter_irqoff();
+ 
+-		ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
++		ret = kvm_arm_vcpu_enter_exit(vcpu);
+ 
+ 		vcpu->mode = OUTSIDE_GUEST_MODE;
+ 		vcpu->stat.exits++;
+@@ -891,26 +909,23 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ 		kvm_arch_vcpu_ctxsync_fp(vcpu);
+ 
+ 		/*
+-		 * We may have taken a host interrupt in HYP mode (ie
+-		 * while executing the guest). This interrupt is still
+-		 * pending, as we haven't serviced it yet!
++		 * We must ensure that any pending interrupts are taken before
++		 * we exit guest timing so that timer ticks are accounted as
++		 * guest time. Transiently unmask interrupts so that any
++		 * pending interrupts are taken.
+ 		 *
+-		 * We're now back in SVC mode, with interrupts
+-		 * disabled.  Enabling the interrupts now will have
+-		 * the effect of taking the interrupt again, in SVC
+-		 * mode this time.
++		 * Per ARM DDI 0487G.b section D1.13.4, an ISB (or other
++		 * context synchronization event) is necessary to ensure that
++		 * pending interrupts are taken.
+ 		 */
+ 		local_irq_enable();
++		isb();
++		local_irq_disable();
++
++		guest_timing_exit_irqoff();
++
++		local_irq_enable();
+ 
+-		/*
+-		 * We do local_irq_enable() before calling guest_exit() so
+-		 * that if a timer interrupt hits while running the guest we
+-		 * account that tick as being spent in the guest.  We enable
+-		 * preemption after calling guest_exit() so that if we get
+-		 * preempted we make sure ticks after that is not counted as
+-		 * guest time.
+-		 */
+-		guest_exit();
+ 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
+ 
+ 		/* Exit types that need handling before we can be preempted */
+diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
+index 275a27368a04c..a5ab5215094ee 100644
+--- a/arch/arm64/kvm/handle_exit.c
++++ b/arch/arm64/kvm/handle_exit.c
+@@ -226,6 +226,14 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
+ {
+ 	struct kvm_run *run = vcpu->run;
+ 
++	if (ARM_SERROR_PENDING(exception_index)) {
++		/*
++		 * The SError is handled by handle_exit_early(). If the guest
++		 * survives it will re-execute the original instruction.
++		 */
++		return 1;
++	}
++
+ 	exception_index = ARM_EXCEPTION_CODE(exception_index);
+ 
+ 	switch (exception_index) {
+diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
+index 96c5f3fb78389..adb67f8c9d7d3 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
++++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
+@@ -446,7 +446,8 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
+ 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
+ 
+-	if (ARM_SERROR_PENDING(*exit_code)) {
++	if (ARM_SERROR_PENDING(*exit_code) &&
++	    ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) {
+ 		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
+ 
+ 		/*
+diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
+index fb84619df0127..6078d7e1dd762 100644
+--- a/arch/riscv/kvm/vcpu.c
++++ b/arch/riscv/kvm/vcpu.c
+@@ -74,6 +74,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
+ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_cpu_context *cntx;
++	struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
+ 
+ 	/* Mark this VCPU never ran */
+ 	vcpu->arch.ran_atleast_once = false;
+@@ -89,6 +90,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ 	cntx->hstatus |= HSTATUS_SPVP;
+ 	cntx->hstatus |= HSTATUS_SPV;
+ 
++	/* By default, make CY, TM, and IR counters accessible in VU mode */
++	reset_csr->scounteren = 0x7;
++
+ 	/* Setup VCPU timer */
+ 	kvm_riscv_vcpu_timer_init(vcpu);
+ 
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 18d825a39032a..5434030e851e4 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4709,6 +4709,19 @@ static __initconst const struct x86_pmu intel_pmu = {
+ 	.lbr_read		= intel_pmu_lbr_read_64,
+ 	.lbr_save		= intel_pmu_lbr_save,
+ 	.lbr_restore		= intel_pmu_lbr_restore,
++
++	/*
++	 * SMM has access to all 4 rings and while traditionally SMM code only
++	 * ran in CPL0, 2021-era firmware is starting to make use of CPL3 in SMM.
++	 *
++	 * Since the EVENTSEL.{USR,OS} CPL filtering makes no distinction
++	 * between SMM or not, this results in what should be pure userspace
++	 * counters including SMM data.
++	 *
++	 * This is a clear privilege issue, therefore globally disable
++	 * counting SMM by default.
++	 */
++	.attr_freeze_on_smi	= 1,
+ };
+ 
+ static __init void intel_clovertown_quirk(void)
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 7f406c14715fd..2d33bba9a1440 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -897,8 +897,9 @@ static void pt_handle_status(struct pt *pt)
+ 		 * means we are already losing data; need to let the decoder
+ 		 * know.
+ 		 */
+-		if (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries) ||
+-		    buf->output_off == pt_buffer_region_size(buf)) {
++		if (!buf->single &&
++		    (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries) ||
++		     buf->output_off == pt_buffer_region_size(buf))) {
+ 			perf_aux_output_flag(&pt->handle,
+ 			                     PERF_AUX_FLAG_TRUNCATED);
+ 			advance++;
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index d251147154592..0827b19820c52 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -373,7 +373,7 @@ void bio_integrity_advance(struct bio *bio, unsigned int bytes_done)
+ 	struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
+ 	unsigned bytes = bio_integrity_bytes(bi, bytes_done >> 9);
+ 
+-	bip->bip_iter.bi_sector += bytes_done >> 9;
++	bip->bip_iter.bi_sector += bio_integrity_intervals(bi, bytes_done >> 9);
+ 	bvec_iter_advance(bip->bip_vec, &bip->bip_iter, bytes);
+ }
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index aba0c67d1bd65..1cdf8cfcc31b3 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2043,6 +2043,9 @@ static bool ata_log_supported(struct ata_device *dev, u8 log)
+ {
+ 	struct ata_port *ap = dev->link->ap;
+ 
++	if (dev->horkage & ATA_HORKAGE_NO_LOG_DIR)
++		return false;
++
+ 	if (ata_read_log_page(dev, ATA_LOG_DIRECTORY, 0, ap->sector_buf, 1))
+ 		return false;
+ 	return get_unaligned_le16(&ap->sector_buf[log * 2]) ? true : false;
+@@ -4123,6 +4126,13 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "WDC WD3000JD-*",		NULL,	ATA_HORKAGE_WD_BROKEN_LPM },
+ 	{ "WDC WD3200JD-*",		NULL,	ATA_HORKAGE_WD_BROKEN_LPM },
+ 
++	/*
++	 * This sata dom device goes on a walkabout when the ATA_LOG_DIRECTORY
++	 * log page is accessed. Ensure we never ask for this log page with
++	 * these devices.
++	 */
++	{ "SATADOM-ML 3ME",		NULL,	ATA_HORKAGE_NO_LOG_DIR },
++
+ 	/* End Marker */
+ 	{ }
+ };
+diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
+index 56bf5ad01ad54..8f5848aa144fe 100644
+--- a/drivers/dma-buf/dma-heap.c
++++ b/drivers/dma-buf/dma-heap.c
+@@ -14,6 +14,7 @@
+ #include <linux/xarray.h>
+ #include <linux/list.h>
+ #include <linux/slab.h>
++#include <linux/nospec.h>
+ #include <linux/uaccess.h>
+ #include <linux/syscalls.h>
+ #include <linux/dma-heap.h>
+@@ -135,6 +136,7 @@ static long dma_heap_ioctl(struct file *file, unsigned int ucmd,
+ 	if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds))
+ 		return -EINVAL;
+ 
++	nr = array_index_nospec(nr, ARRAY_SIZE(dma_heap_ioctl_cmds));
+ 	/* Get the kernel ioctl cmd that matches */
+ 	kcmd = dma_heap_ioctl_cmds[nr];
+ 
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index 3a6d2416cb0f6..5dd29789f97d3 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -350,7 +350,7 @@ static int altr_sdram_probe(struct platform_device *pdev)
+ 	if (irq < 0) {
+ 		edac_printk(KERN_ERR, EDAC_MC,
+ 			    "No irq %d in DT\n", irq);
+-		return -ENODEV;
++		return irq;
+ 	}
+ 
+ 	/* Arria10 has a 2nd IRQ */
+diff --git a/drivers/edac/xgene_edac.c b/drivers/edac/xgene_edac.c
+index 2ccd1db5e98ff..7197f9fa02457 100644
+--- a/drivers/edac/xgene_edac.c
++++ b/drivers/edac/xgene_edac.c
+@@ -1919,7 +1919,7 @@ static int xgene_edac_probe(struct platform_device *pdev)
+ 			irq = platform_get_irq_optional(pdev, i);
+ 			if (irq < 0) {
+ 				dev_err(&pdev->dev, "No IRQ resource\n");
+-				rc = -EINVAL;
++				rc = irq;
+ 				goto out_err;
+ 			}
+ 			rc = devm_request_irq(&pdev->dev, irq,
+diff --git a/drivers/gpio/gpio-idt3243x.c b/drivers/gpio/gpio-idt3243x.c
+index 08493b05be2da..52b8b72ded77f 100644
+--- a/drivers/gpio/gpio-idt3243x.c
++++ b/drivers/gpio/gpio-idt3243x.c
+@@ -132,7 +132,7 @@ static int idt_gpio_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct gpio_irq_chip *girq;
+ 	struct idt_gpio_ctrl *ctrl;
+-	unsigned int parent_irq;
++	int parent_irq;
+ 	int ngpios;
+ 	int ret;
+ 
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 01634c8d27b38..a964e25ea6206 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -47,7 +47,7 @@ struct mpc8xxx_gpio_chip {
+ 				unsigned offset, int value);
+ 
+ 	struct irq_domain *irq;
+-	unsigned int irqn;
++	int irqn;
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index f999638a04ed6..c811161ce9f09 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2241,8 +2241,7 @@ static int amdgpu_pmops_prepare(struct device *dev)
+ 	 * DPM_FLAG_SMART_SUSPEND works properly
+ 	 */
+ 	if (amdgpu_device_supports_boco(drm_dev))
+-		return pm_runtime_suspended(dev) &&
+-			pm_suspend_via_firmware();
++		return pm_runtime_suspended(dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index 61ec6145bbb16..614c1362a21d2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -1147,6 +1147,9 @@ static void gmc_v10_0_get_clockgating_state(void *handle, u32 *flags)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 3))
++		return;
++
+ 	adev->mmhub.funcs->get_clockgating(adev, flags);
+ 
+ 	if (adev->ip_versions[ATHUB_HWIP][0] >= IP_VERSION(2, 1, 0))
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
+index 3eee32faa208c..329ce4e84b83c 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
+@@ -582,32 +582,32 @@ static struct wm_table lpddr5_wm_table = {
+ 			.wm_inst = WM_A,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 7.95,
+-			.sr_enter_plus_exit_time_us = 9,
++			.sr_exit_time_us = 13.5,
++			.sr_enter_plus_exit_time_us = 16.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_B,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 9.82,
+-			.sr_enter_plus_exit_time_us = 11.196,
++			.sr_exit_time_us = 13.5,
++			.sr_enter_plus_exit_time_us = 16.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_C,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 9.89,
+-			.sr_enter_plus_exit_time_us = 11.24,
++			.sr_exit_time_us = 13.5,
++			.sr_enter_plus_exit_time_us = 16.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_D,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 9.748,
+-			.sr_enter_plus_exit_time_us = 11.102,
++			.sr_exit_time_us = 13.5,
++			.sr_enter_plus_exit_time_us = 16.5,
+ 			.valid = true,
+ 		},
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+index 9df38e2ee4f40..ed53dcead839e 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+@@ -329,38 +329,38 @@ static struct clk_bw_params dcn31_bw_params = {
+ 
+ };
+ 
+-static struct wm_table ddr4_wm_table = {
++static struct wm_table ddr5_wm_table = {
+ 	.entries = {
+ 		{
+ 			.wm_inst = WM_A,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 6.09,
+-			.sr_enter_plus_exit_time_us = 7.14,
++			.sr_exit_time_us = 9,
++			.sr_enter_plus_exit_time_us = 11,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_B,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 10.12,
+-			.sr_enter_plus_exit_time_us = 11.48,
++			.sr_exit_time_us = 9,
++			.sr_enter_plus_exit_time_us = 11,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_C,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 10.12,
+-			.sr_enter_plus_exit_time_us = 11.48,
++			.sr_exit_time_us = 9,
++			.sr_enter_plus_exit_time_us = 11,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_D,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 10.12,
+-			.sr_enter_plus_exit_time_us = 11.48,
++			.sr_exit_time_us = 9,
++			.sr_enter_plus_exit_time_us = 11,
+ 			.valid = true,
+ 		},
+ 	}
+@@ -688,7 +688,7 @@ void dcn31_clk_mgr_construct(
+ 		if (ctx->dc_bios->integrated_info->memory_type == LpDdr5MemType) {
+ 			dcn31_bw_params.wm_table = lpddr5_wm_table;
+ 		} else {
+-			dcn31_bw_params.wm_table = ddr4_wm_table;
++			dcn31_bw_params.wm_table = ddr5_wm_table;
+ 		}
+ 		/* Saved clocks configured at boot for debug purposes */
+ 		 dcn31_dump_clk_registers(&clk_mgr->base.base.boot_snapshot, &clk_mgr->base.base, &log_info);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 13bc69d6b6791..ccd6cdbe46f43 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -4730,6 +4730,26 @@ static bool retrieve_link_cap(struct dc_link *link)
+ 		dp_hw_fw_revision.ieee_fw_rev,
+ 		sizeof(dp_hw_fw_revision.ieee_fw_rev));
+ 
++	/* Quirk for Apple MBP 2018 15" Retina panels: wrong DP_MAX_LINK_RATE */
++	{
++		uint8_t str_mbp_2018[] = { 101, 68, 21, 103, 98, 97 };
++		uint8_t fwrev_mbp_2018[] = { 7, 4 };
++		uint8_t fwrev_mbp_2018_vega[] = { 8, 4 };
++
++		/* We also check for the firmware revision as 16,1 models have an
++		 * identical device id and are incorrectly quirked otherwise.
++		 */
++		if ((link->dpcd_caps.sink_dev_id == 0x0010fa) &&
++		    !memcmp(link->dpcd_caps.sink_dev_id_str, str_mbp_2018,
++			     sizeof(str_mbp_2018)) &&
++		    (!memcmp(link->dpcd_caps.sink_fw_revision, fwrev_mbp_2018,
++			     sizeof(fwrev_mbp_2018)) ||
++		    !memcmp(link->dpcd_caps.sink_fw_revision, fwrev_mbp_2018_vega,
++			     sizeof(fwrev_mbp_2018_vega)))) {
++			link->reported_link_cap.link_rate = LINK_RATE_RBR2;
++		}
++	}
++
+ 	memset(&link->dpcd_caps.dsc_caps, '\0',
+ 			sizeof(link->dpcd_caps.dsc_caps));
+ 	memset(&link->dpcd_caps.fec_cap, '\0', sizeof(link->dpcd_caps.fec_cap));
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index a4108025fe299..446d37320b948 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -3681,14 +3681,14 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu,
+ 
+ static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu)
+ {
+-	struct smu_table_context *table_context = &smu->smu_table;
+-	PPTable_t *smc_pptable = table_context->driver_pptable;
++	uint16_t *mgpu_fan_boost_limit_rpm;
+ 
++	GET_PPTABLE_MEMBER(MGpuFanBoostLimitRpm, &mgpu_fan_boost_limit_rpm);
+ 	/*
+ 	 * Skip the MGpuFanBoost setting for those ASICs
+ 	 * which do not support it
+ 	 */
+-	if (!smc_pptable->MGpuFanBoostLimitRpm)
++	if (*mgpu_fan_boost_limit_rpm == 0)
+ 		return 0;
+ 
+ 	return smu_cmn_send_smc_msg_with_param(smu,
+diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
+index 7e3f5c6ca4846..dfa5f18171e3b 100644
+--- a/drivers/gpu/drm/i915/display/intel_overlay.c
++++ b/drivers/gpu/drm/i915/display/intel_overlay.c
+@@ -959,6 +959,9 @@ static int check_overlay_dst(struct intel_overlay *overlay,
+ 	const struct intel_crtc_state *pipe_config =
+ 		overlay->crtc->config;
+ 
++	if (rec->dst_height == 0 || rec->dst_width == 0)
++		return -EINVAL;
++
+ 	if (rec->dst_x < pipe_config->pipe_src_w &&
+ 	    rec->dst_x + rec->dst_width <= pipe_config->pipe_src_w &&
+ 	    rec->dst_y < pipe_config->pipe_src_h &&
+diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
+index 40faa18947c99..dbd7d0d83a141 100644
+--- a/drivers/gpu/drm/i915/display/intel_tc.c
++++ b/drivers/gpu/drm/i915/display/intel_tc.c
+@@ -345,10 +345,11 @@ static bool icl_tc_phy_status_complete(struct intel_digital_port *dig_port)
+ static bool adl_tc_phy_status_complete(struct intel_digital_port *dig_port)
+ {
+ 	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
++	enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
+ 	struct intel_uncore *uncore = &i915->uncore;
+ 	u32 val;
+ 
+-	val = intel_uncore_read(uncore, TCSS_DDI_STATUS(dig_port->tc_phy_fia_idx));
++	val = intel_uncore_read(uncore, TCSS_DDI_STATUS(tc_port));
+ 	if (val == 0xffffffff) {
+ 		drm_dbg_kms(&i915->drm,
+ 			    "Port %s: PHY in TCCOLD, assuming not complete\n",
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index cb0bf6ffd0e38..cf8d17128439f 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -2372,9 +2372,14 @@ static int eb_pin_timeline(struct i915_execbuffer *eb, struct intel_context *ce,
+ 				      timeout) < 0) {
+ 			i915_request_put(rq);
+ 
+-			tl = intel_context_timeline_lock(ce);
++			/*
++			 * Error path, cannot use intel_context_timeline_lock as
++			 * that is user interruptable and this clean up step
++			 * must be done.
++			 */
++			mutex_lock(&ce->timeline->mutex);
+ 			intel_context_exit(ce);
+-			intel_context_timeline_unlock(tl);
++			mutex_unlock(&ce->timeline->mutex);
+ 
+ 			if (nonblock)
+ 				return -EWOULDBLOCK;
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index 169837de395d3..8a4b8ff4cde10 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -866,7 +866,7 @@ static const struct intel_device_info jsl_info = {
+ 	TGL_CURSOR_OFFSETS, \
+ 	.has_global_mocs = 1, \
+ 	.has_pxp = 1, \
+-	.display.has_dsb = 1
++	.display.has_dsb = 0 /* FIXME: LUT load is broken with DSB */
+ 
+ static const struct intel_device_info tgl_info = {
+ 	GEN12_FEATURES,
+diff --git a/drivers/gpu/drm/kmb/kmb_plane.c b/drivers/gpu/drm/kmb/kmb_plane.c
+index 00404ba4126dd..2735b8eb35376 100644
+--- a/drivers/gpu/drm/kmb/kmb_plane.c
++++ b/drivers/gpu/drm/kmb/kmb_plane.c
+@@ -158,12 +158,6 @@ static void kmb_plane_atomic_disable(struct drm_plane *plane,
+ 	case LAYER_1:
+ 		kmb->plane_status[plane_id].ctrl = LCD_CTRL_VL2_ENABLE;
+ 		break;
+-	case LAYER_2:
+-		kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL1_ENABLE;
+-		break;
+-	case LAYER_3:
+-		kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL2_ENABLE;
+-		break;
+ 	}
+ 
+ 	kmb->plane_status[plane_id].disable = true;
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+index 0655582ae8ed6..4cfb6c0016799 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+@@ -361,7 +361,11 @@ static void mxsfb_crtc_atomic_enable(struct drm_crtc *crtc,
+ 		bridge_state =
+ 			drm_atomic_get_new_bridge_state(state,
+ 							mxsfb->bridge);
+-		bus_format = bridge_state->input_bus_cfg.format;
++		if (!bridge_state)
++			bus_format = MEDIA_BUS_FMT_FIXED;
++		else
++			bus_format = bridge_state->input_bus_cfg.format;
++
+ 		if (bus_format == MEDIA_BUS_FMT_FIXED) {
+ 			dev_warn_once(drm->dev,
+ 				      "Bridge does not provide bus format, assuming MEDIA_BUS_FMT_RGB888_1X24.\n"
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+index d0f52d59fc2f9..64e423dddd9e7 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.c
+@@ -38,7 +38,7 @@ nvbios_addr(struct nvkm_bios *bios, u32 *addr, u8 size)
+ 		*addr += bios->imaged_addr;
+ 	}
+ 
+-	if (unlikely(*addr + size >= bios->size)) {
++	if (unlikely(*addr + size > bios->size)) {
+ 		nvkm_error(&bios->subdev, "OOB %d %08x %08x\n", size, p, *addr);
+ 		return false;
+ 	}
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index c903b74f46a46..35f0d5e7533d6 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -3322,7 +3322,7 @@ static int cm_lap_handler(struct cm_work *work)
+ 	ret = cm_init_av_by_path(param->alternate_path, NULL, &alt_av);
+ 	if (ret) {
+ 		rdma_destroy_ah_attr(&ah_attr);
+-		return -EINVAL;
++		goto deref;
+ 	}
+ 
+ 	spin_lock_irq(&cm_id_priv->lock);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index a3834ef691910..a8da4291e7e3b 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -67,8 +67,8 @@ static const char * const cma_events[] = {
+ 	[RDMA_CM_EVENT_TIMEWAIT_EXIT]	 = "timewait exit",
+ };
+ 
+-static void cma_set_mgid(struct rdma_id_private *id_priv, struct sockaddr *addr,
+-			 union ib_gid *mgid);
++static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
++			      enum ib_gid_type gid_type);
+ 
+ const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
+ {
+@@ -1846,17 +1846,19 @@ static void destroy_mc(struct rdma_id_private *id_priv,
+ 		if (dev_addr->bound_dev_if)
+ 			ndev = dev_get_by_index(dev_addr->net,
+ 						dev_addr->bound_dev_if);
+-		if (ndev) {
++		if (ndev && !send_only) {
++			enum ib_gid_type gid_type;
+ 			union ib_gid mgid;
+ 
+-			cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr,
+-				     &mgid);
+-
+-			if (!send_only)
+-				cma_igmp_send(ndev, &mgid, false);
+-
+-			dev_put(ndev);
++			gid_type = id_priv->cma_dev->default_gid_type
++					   [id_priv->id.port_num -
++					    rdma_start_port(
++						    id_priv->cma_dev->device)];
++			cma_iboe_set_mgid((struct sockaddr *)&mc->addr, &mgid,
++					  gid_type);
++			cma_igmp_send(ndev, &mgid, false);
+ 		}
++		dev_put(ndev);
+ 
+ 		cancel_work_sync(&mc->iboe_join.work);
+ 	}
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 2b72c4fa95506..9d6ac9dff39a2 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -95,6 +95,7 @@ struct ucma_context {
+ 	u64			uid;
+ 
+ 	struct list_head	list;
++	struct list_head	mc_list;
+ 	struct work_struct	close_work;
+ };
+ 
+@@ -105,6 +106,7 @@ struct ucma_multicast {
+ 
+ 	u64			uid;
+ 	u8			join_state;
++	struct list_head	list;
+ 	struct sockaddr_storage	addr;
+ };
+ 
+@@ -198,6 +200,7 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
+ 
+ 	INIT_WORK(&ctx->close_work, ucma_close_id);
+ 	init_completion(&ctx->comp);
++	INIT_LIST_HEAD(&ctx->mc_list);
+ 	/* So list_del() will work if we don't do ucma_finish_ctx() */
+ 	INIT_LIST_HEAD(&ctx->list);
+ 	ctx->file = file;
+@@ -484,19 +487,19 @@ err1:
+ 
+ static void ucma_cleanup_multicast(struct ucma_context *ctx)
+ {
+-	struct ucma_multicast *mc;
+-	unsigned long index;
++	struct ucma_multicast *mc, *tmp;
+ 
+-	xa_for_each(&multicast_table, index, mc) {
+-		if (mc->ctx != ctx)
+-			continue;
++	xa_lock(&multicast_table);
++	list_for_each_entry_safe(mc, tmp, &ctx->mc_list, list) {
++		list_del(&mc->list);
+ 		/*
+ 		 * At this point mc->ctx->ref is 0 so the mc cannot leave the
+ 		 * lock on the reader and this is enough serialization
+ 		 */
+-		xa_erase(&multicast_table, index);
++		__xa_erase(&multicast_table, mc->id);
+ 		kfree(mc);
+ 	}
++	xa_unlock(&multicast_table);
+ }
+ 
+ static void ucma_cleanup_mc_events(struct ucma_multicast *mc)
+@@ -1469,12 +1472,16 @@ static ssize_t ucma_process_join(struct ucma_file *file,
+ 	mc->uid = cmd->uid;
+ 	memcpy(&mc->addr, addr, cmd->addr_size);
+ 
+-	if (xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b,
++	xa_lock(&multicast_table);
++	if (__xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b,
+ 		     GFP_KERNEL)) {
+ 		ret = -ENOMEM;
+ 		goto err_free_mc;
+ 	}
+ 
++	list_add_tail(&mc->list, &ctx->mc_list);
++	xa_unlock(&multicast_table);
++
+ 	mutex_lock(&ctx->mutex);
+ 	ret = rdma_join_multicast(ctx->cm_id, (struct sockaddr *)&mc->addr,
+ 				  join_state, mc);
+@@ -1500,8 +1507,11 @@ err_leave_multicast:
+ 	mutex_unlock(&ctx->mutex);
+ 	ucma_cleanup_mc_events(mc);
+ err_xa_erase:
+-	xa_erase(&multicast_table, mc->id);
++	xa_lock(&multicast_table);
++	list_del(&mc->list);
++	__xa_erase(&multicast_table, mc->id);
+ err_free_mc:
++	xa_unlock(&multicast_table);
+ 	kfree(mc);
+ err_put_ctx:
+ 	ucma_put_ctx(ctx);
+@@ -1569,15 +1579,17 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file,
+ 		mc = ERR_PTR(-EINVAL);
+ 	else if (!refcount_inc_not_zero(&mc->ctx->ref))
+ 		mc = ERR_PTR(-ENXIO);
+-	else
+-		__xa_erase(&multicast_table, mc->id);
+-	xa_unlock(&multicast_table);
+ 
+ 	if (IS_ERR(mc)) {
++		xa_unlock(&multicast_table);
+ 		ret = PTR_ERR(mc);
+ 		goto out;
+ 	}
+ 
++	list_del(&mc->list);
++	__xa_erase(&multicast_table, mc->id);
++	xa_unlock(&multicast_table);
++
+ 	mutex_lock(&mc->ctx->mutex);
+ 	rdma_leave_multicast(mc->ctx->cm_id, (struct sockaddr *) &mc->addr);
+ 	mutex_unlock(&mc->ctx->mutex);
+diff --git a/drivers/infiniband/hw/hfi1/ipoib.h b/drivers/infiniband/hw/hfi1/ipoib.h
+index 9091229342464..aec60d4888eb0 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib.h
++++ b/drivers/infiniband/hw/hfi1/ipoib.h
+@@ -55,7 +55,7 @@ union hfi1_ipoib_flow {
+  */
+ struct ipoib_txreq {
+ 	struct sdma_txreq           txreq;
+-	struct hfi1_sdma_header     sdma_hdr;
++	struct hfi1_sdma_header     *sdma_hdr;
+ 	int                         sdma_status;
+ 	int                         complete;
+ 	struct hfi1_ipoib_dev_priv *priv;
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_main.c b/drivers/infiniband/hw/hfi1/ipoib_main.c
+index e1a2b02bbd91b..5d814afdf7f39 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_main.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_main.c
+@@ -22,26 +22,35 @@ static int hfi1_ipoib_dev_init(struct net_device *dev)
+ 	int ret;
+ 
+ 	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
++	if (!dev->tstats)
++		return -ENOMEM;
+ 
+ 	ret = priv->netdev_ops->ndo_init(dev);
+ 	if (ret)
+-		return ret;
++		goto out_ret;
+ 
+ 	ret = hfi1_netdev_add_data(priv->dd,
+ 				   qpn_from_mac(priv->netdev->dev_addr),
+ 				   dev);
+ 	if (ret < 0) {
+ 		priv->netdev_ops->ndo_uninit(dev);
+-		return ret;
++		goto out_ret;
+ 	}
+ 
+ 	return 0;
++out_ret:
++	free_percpu(dev->tstats);
++	dev->tstats = NULL;
++	return ret;
+ }
+ 
+ static void hfi1_ipoib_dev_uninit(struct net_device *dev)
+ {
+ 	struct hfi1_ipoib_dev_priv *priv = hfi1_ipoib_priv(dev);
+ 
++	free_percpu(dev->tstats);
++	dev->tstats = NULL;
++
+ 	hfi1_netdev_remove_data(priv->dd, qpn_from_mac(priv->netdev->dev_addr));
+ 
+ 	priv->netdev_ops->ndo_uninit(dev);
+@@ -166,12 +175,7 @@ static void hfi1_ipoib_netdev_dtor(struct net_device *dev)
+ 	hfi1_ipoib_rxq_deinit(priv->netdev);
+ 
+ 	free_percpu(dev->tstats);
+-}
+-
+-static void hfi1_ipoib_free_rdma_netdev(struct net_device *dev)
+-{
+-	hfi1_ipoib_netdev_dtor(dev);
+-	free_netdev(dev);
++	dev->tstats = NULL;
+ }
+ 
+ static void hfi1_ipoib_set_id(struct net_device *dev, int id)
+@@ -211,24 +215,23 @@ static int hfi1_ipoib_setup_rn(struct ib_device *device,
+ 	priv->port_num = port_num;
+ 	priv->netdev_ops = netdev->netdev_ops;
+ 
+-	netdev->netdev_ops = &hfi1_ipoib_netdev_ops;
+-
+ 	ib_query_pkey(device, port_num, priv->pkey_index, &priv->pkey);
+ 
+ 	rc = hfi1_ipoib_txreq_init(priv);
+ 	if (rc) {
+ 		dd_dev_err(dd, "IPoIB netdev TX init - failed(%d)\n", rc);
+-		hfi1_ipoib_free_rdma_netdev(netdev);
+ 		return rc;
+ 	}
+ 
+ 	rc = hfi1_ipoib_rxq_init(netdev);
+ 	if (rc) {
+ 		dd_dev_err(dd, "IPoIB netdev RX init - failed(%d)\n", rc);
+-		hfi1_ipoib_free_rdma_netdev(netdev);
++		hfi1_ipoib_txreq_deinit(priv);
+ 		return rc;
+ 	}
+ 
++	netdev->netdev_ops = &hfi1_ipoib_netdev_ops;
++
+ 	netdev->priv_destructor = hfi1_ipoib_netdev_dtor;
+ 	netdev->needs_free_netdev = true;
+ 
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index f4010890309f7..d6bbdb8fcb50a 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -122,7 +122,7 @@ static void hfi1_ipoib_free_tx(struct ipoib_txreq *tx, int budget)
+ 		dd_dev_warn(priv->dd,
+ 			    "%s: Status = 0x%x pbc 0x%llx txq = %d sde = %d\n",
+ 			    __func__, tx->sdma_status,
+-			    le64_to_cpu(tx->sdma_hdr.pbc), tx->txq->q_idx,
++			    le64_to_cpu(tx->sdma_hdr->pbc), tx->txq->q_idx,
+ 			    tx->txq->sde->this_idx);
+ 	}
+ 
+@@ -231,7 +231,7 @@ static int hfi1_ipoib_build_tx_desc(struct ipoib_txreq *tx,
+ {
+ 	struct hfi1_devdata *dd = txp->dd;
+ 	struct sdma_txreq *txreq = &tx->txreq;
+-	struct hfi1_sdma_header *sdma_hdr = &tx->sdma_hdr;
++	struct hfi1_sdma_header *sdma_hdr = tx->sdma_hdr;
+ 	u16 pkt_bytes =
+ 		sizeof(sdma_hdr->pbc) + (txp->hdr_dwords << 2) + tx->skb->len;
+ 	int ret;
+@@ -256,7 +256,7 @@ static void hfi1_ipoib_build_ib_tx_headers(struct ipoib_txreq *tx,
+ 					   struct ipoib_txparms *txp)
+ {
+ 	struct hfi1_ipoib_dev_priv *priv = tx->txq->priv;
+-	struct hfi1_sdma_header *sdma_hdr = &tx->sdma_hdr;
++	struct hfi1_sdma_header *sdma_hdr = tx->sdma_hdr;
+ 	struct sk_buff *skb = tx->skb;
+ 	struct hfi1_pportdata *ppd = ppd_from_ibp(txp->ibp);
+ 	struct rdma_ah_attr *ah_attr = txp->ah_attr;
+@@ -483,7 +483,7 @@ static int hfi1_ipoib_send_dma_single(struct net_device *dev,
+ 	if (likely(!ret)) {
+ tx_ok:
+ 		trace_sdma_output_ibhdr(txq->priv->dd,
+-					&tx->sdma_hdr.hdr,
++					&tx->sdma_hdr->hdr,
+ 					ib_is_sc5(txp->flow.sc5));
+ 		hfi1_ipoib_check_queue_depth(txq);
+ 		return NETDEV_TX_OK;
+@@ -547,7 +547,7 @@ static int hfi1_ipoib_send_dma_list(struct net_device *dev,
+ 	hfi1_ipoib_check_queue_depth(txq);
+ 
+ 	trace_sdma_output_ibhdr(txq->priv->dd,
+-				&tx->sdma_hdr.hdr,
++				&tx->sdma_hdr->hdr,
+ 				ib_is_sc5(txp->flow.sc5));
+ 
+ 	if (!netdev_xmit_more())
+@@ -683,7 +683,8 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+ {
+ 	struct net_device *dev = priv->netdev;
+ 	u32 tx_ring_size, tx_item_size;
+-	int i;
++	struct hfi1_ipoib_circ_buf *tx_ring;
++	int i, j;
+ 
+ 	/*
+ 	 * Ring holds 1 less than tx_ring_size
+@@ -701,7 +702,9 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+ 
+ 	for (i = 0; i < dev->num_tx_queues; i++) {
+ 		struct hfi1_ipoib_txq *txq = &priv->txqs[i];
++		struct ipoib_txreq *tx;
+ 
++		tx_ring = &txq->tx_ring;
+ 		iowait_init(&txq->wait,
+ 			    0,
+ 			    hfi1_ipoib_flush_txq,
+@@ -725,14 +728,19 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+ 					     priv->dd->node);
+ 
+ 		txq->tx_ring.items =
+-			kcalloc_node(tx_ring_size, tx_item_size,
+-				     GFP_KERNEL, priv->dd->node);
++			kvzalloc_node(array_size(tx_ring_size, tx_item_size),
++				      GFP_KERNEL, priv->dd->node);
+ 		if (!txq->tx_ring.items)
+ 			goto free_txqs;
+ 
+ 		txq->tx_ring.max_items = tx_ring_size;
+-		txq->tx_ring.shift = ilog2(tx_ring_size);
++		txq->tx_ring.shift = ilog2(tx_item_size);
+ 		txq->tx_ring.avail = hfi1_ipoib_ring_hwat(txq);
++		tx_ring = &txq->tx_ring;
++		for (j = 0; j < tx_ring_size; j++)
++			hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr =
++				kzalloc_node(sizeof(*tx->sdma_hdr),
++					     GFP_KERNEL, priv->dd->node);
+ 
+ 		netif_tx_napi_add(dev, &txq->napi,
+ 				  hfi1_ipoib_poll_tx_ring,
+@@ -746,7 +754,10 @@ free_txqs:
+ 		struct hfi1_ipoib_txq *txq = &priv->txqs[i];
+ 
+ 		netif_napi_del(&txq->napi);
+-		kfree(txq->tx_ring.items);
++		tx_ring = &txq->tx_ring;
++		for (j = 0; j < tx_ring_size; j++)
++			kfree(hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr);
++		kvfree(tx_ring->items);
+ 	}
+ 
+ 	kfree(priv->txqs);
+@@ -780,17 +791,20 @@ static void hfi1_ipoib_drain_tx_list(struct hfi1_ipoib_txq *txq)
+ 
+ void hfi1_ipoib_txreq_deinit(struct hfi1_ipoib_dev_priv *priv)
+ {
+-	int i;
++	int i, j;
+ 
+ 	for (i = 0; i < priv->netdev->num_tx_queues; i++) {
+ 		struct hfi1_ipoib_txq *txq = &priv->txqs[i];
++		struct hfi1_ipoib_circ_buf *tx_ring = &txq->tx_ring;
+ 
+ 		iowait_cancel_work(&txq->wait);
+ 		iowait_sdma_drain(&txq->wait);
+ 		hfi1_ipoib_drain_tx_list(txq);
+ 		netif_napi_del(&txq->napi);
+ 		hfi1_ipoib_drain_tx_ring(txq);
+-		kfree(txq->tx_ring.items);
++		for (j = 0; j < tx_ring->max_items; j++)
++			kfree(hfi1_txreq_from_idx(tx_ring, j)->sdma_hdr);
++		kvfree(tx_ring->items);
+ 	}
+ 
+ 	kfree(priv->txqs);
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 0d2fa3338784e..accf65df627a2 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -3247,7 +3247,7 @@ static void mlx4_ib_event(struct mlx4_dev *dev, void *ibdev_ptr,
+ 	case MLX4_DEV_EVENT_PORT_MGMT_CHANGE:
+ 		ew = kmalloc(sizeof *ew, GFP_ATOMIC);
+ 		if (!ew)
+-			break;
++			return;
+ 
+ 		INIT_WORK(&ew->work, handle_port_mgmt_change_event);
+ 		memcpy(&ew->ib_eqe, eqe, sizeof *eqe);
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 3305f2744bfaa..ae50b56e89132 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -3073,6 +3073,8 @@ do_write:
+ 	case IB_WR_ATOMIC_FETCH_AND_ADD:
+ 		if (unlikely(!(qp->qp_access_flags & IB_ACCESS_REMOTE_ATOMIC)))
+ 			goto inv_err;
++		if (unlikely(wqe->atomic_wr.remote_addr & (sizeof(u64) - 1)))
++			goto inv_err;
+ 		if (unlikely(!rvt_rkey_ok(qp, &qp->r_sge.sge, sizeof(u64),
+ 					  wqe->atomic_wr.remote_addr,
+ 					  wqe->atomic_wr.rkey,
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index 368959ae9a8cc..df03d84c6868a 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -644,14 +644,9 @@ static inline struct siw_sqe *orq_get_current(struct siw_qp *qp)
+ 	return &qp->orq[qp->orq_get % qp->attrs.orq_size];
+ }
+ 
+-static inline struct siw_sqe *orq_get_tail(struct siw_qp *qp)
+-{
+-	return &qp->orq[qp->orq_put % qp->attrs.orq_size];
+-}
+-
+ static inline struct siw_sqe *orq_get_free(struct siw_qp *qp)
+ {
+-	struct siw_sqe *orq_e = orq_get_tail(qp);
++	struct siw_sqe *orq_e = &qp->orq[qp->orq_put % qp->attrs.orq_size];
+ 
+ 	if (READ_ONCE(orq_e->flags) == 0)
+ 		return orq_e;
+diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
+index 60116f20653c7..875ea6f1b04a2 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
+@@ -1153,11 +1153,12 @@ static int siw_check_tx_fence(struct siw_qp *qp)
+ 
+ 	spin_lock_irqsave(&qp->orq_lock, flags);
+ 
+-	rreq = orq_get_current(qp);
+-
+ 	/* free current orq entry */
++	rreq = orq_get_current(qp);
+ 	WRITE_ONCE(rreq->flags, 0);
+ 
++	qp->orq_get++;
++
+ 	if (qp->tx_ctx.orq_fence) {
+ 		if (unlikely(tx_waiting->wr_status != SIW_WR_QUEUED)) {
+ 			pr_warn("siw: [QP %u]: fence resume: bad status %d\n",
+@@ -1165,10 +1166,12 @@ static int siw_check_tx_fence(struct siw_qp *qp)
+ 			rv = -EPROTO;
+ 			goto out;
+ 		}
+-		/* resume SQ processing */
++		/* resume SQ processing, if possible */
+ 		if (tx_waiting->sqe.opcode == SIW_OP_READ ||
+ 		    tx_waiting->sqe.opcode == SIW_OP_READ_LOCAL_INV) {
+-			rreq = orq_get_tail(qp);
++
++			/* SQ processing was stopped because of a full ORQ */
++			rreq = orq_get_free(qp);
+ 			if (unlikely(!rreq)) {
+ 				pr_warn("siw: [QP %u]: no ORQE\n", qp_id(qp));
+ 				rv = -EPROTO;
+@@ -1181,15 +1184,14 @@ static int siw_check_tx_fence(struct siw_qp *qp)
+ 			resume_tx = 1;
+ 
+ 		} else if (siw_orq_empty(qp)) {
++			/*
++			 * SQ processing was stopped by fenced work request.
++			 * Resume since all previous Read's are now completed.
++			 */
+ 			qp->tx_ctx.orq_fence = 0;
+ 			resume_tx = 1;
+-		} else {
+-			pr_warn("siw: [QP %u]: fence resume: orq idx: %d:%d\n",
+-				qp_id(qp), qp->orq_get, qp->orq_put);
+-			rv = -EPROTO;
+ 		}
+ 	}
+-	qp->orq_get++;
+ out:
+ 	spin_unlock_irqrestore(&qp->orq_lock, flags);
+ 
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 1b36350601faa..aa3f60d54a70f 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -311,7 +311,8 @@ int siw_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
+ 
+ 	if (atomic_inc_return(&sdev->num_qp) > SIW_MAX_QP) {
+ 		siw_dbg(base_dev, "too many QP's\n");
+-		return -ENOMEM;
++		rv = -ENOMEM;
++		goto err_atomic;
+ 	}
+ 	if (attrs->qp_type != IB_QPT_RC) {
+ 		siw_dbg(base_dev, "only RC QP's supported\n");
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index b94822fc2c9f7..ce952b0afbfd9 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -21,6 +21,7 @@
+ #include <linux/export.h>
+ #include <linux/kmemleak.h>
+ #include <linux/cc_platform.h>
++#include <linux/iopoll.h>
+ #include <asm/pci-direct.h>
+ #include <asm/iommu.h>
+ #include <asm/apic.h>
+@@ -834,6 +835,7 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
+ 		status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
+ 		if (status & (MMIO_STATUS_GALOG_RUN_MASK))
+ 			break;
++		udelay(10);
+ 	}
+ 
+ 	if (WARN_ON(i >= LOOP_TIMEOUT))
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index f912fe45bea2c..a673195978843 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -569,9 +569,8 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
+ 					    fn, &intel_ir_domain_ops,
+ 					    iommu);
+ 	if (!iommu->ir_domain) {
+-		irq_domain_free_fwnode(fn);
+ 		pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id);
+-		goto out_free_bitmap;
++		goto out_free_fwnode;
+ 	}
+ 	iommu->ir_msi_domain =
+ 		arch_create_remap_msi_irq_domain(iommu->ir_domain,
+@@ -595,7 +594,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
+ 
+ 		if (dmar_enable_qi(iommu)) {
+ 			pr_err("Failed to enable queued invalidation\n");
+-			goto out_free_bitmap;
++			goto out_free_ir_domain;
+ 		}
+ 	}
+ 
+@@ -619,6 +618,14 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
+ 
+ 	return 0;
+ 
++out_free_ir_domain:
++	if (iommu->ir_msi_domain)
++		irq_domain_remove(iommu->ir_msi_domain);
++	iommu->ir_msi_domain = NULL;
++	irq_domain_remove(iommu->ir_domain);
++	iommu->ir_domain = NULL;
++out_free_fwnode:
++	irq_domain_free_fwnode(fn);
+ out_free_bitmap:
+ 	bitmap_free(bitmap);
+ out_free_pages:
+diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
+index 7b1457a6e327b..c0c91440340ae 100644
+--- a/drivers/net/dsa/Kconfig
++++ b/drivers/net/dsa/Kconfig
+@@ -36,6 +36,7 @@ config NET_DSA_LANTIQ_GSWIP
+ config NET_DSA_MT7530
+ 	tristate "MediaTek MT753x and MT7621 Ethernet switch support"
+ 	select NET_DSA_TAG_MTK
++	select MEDIATEK_GE_PHY
+ 	help
+ 	  This enables support for the MediaTek MT7530, MT7531, and MT7621
+ 	  Ethernet switch chips.
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 326b56b49216a..89b026924196c 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -301,7 +301,7 @@ static int gve_adminq_parse_err(struct gve_priv *priv, u32 status)
+  */
+ static int gve_adminq_kick_and_wait(struct gve_priv *priv)
+ {
+-	u32 tail, head;
++	int tail, head;
+ 	int i;
+ 
+ 	tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
+index c3def0ee77884..8d06c9d8ff8b1 100644
+--- a/drivers/net/ethernet/intel/e1000e/e1000.h
++++ b/drivers/net/ethernet/intel/e1000e/e1000.h
+@@ -115,7 +115,8 @@ enum e1000_boards {
+ 	board_pch_lpt,
+ 	board_pch_spt,
+ 	board_pch_cnp,
+-	board_pch_tgp
++	board_pch_tgp,
++	board_pch_adp
+ };
+ 
+ struct e1000_ps_page {
+@@ -502,6 +503,7 @@ extern const struct e1000_info e1000_pch_lpt_info;
+ extern const struct e1000_info e1000_pch_spt_info;
+ extern const struct e1000_info e1000_pch_cnp_info;
+ extern const struct e1000_info e1000_pch_tgp_info;
++extern const struct e1000_info e1000_pch_adp_info;
+ extern const struct e1000_info e1000_es2_info;
+ 
+ void e1000e_ptp_init(struct e1000_adapter *adapter);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 5e4fc9b4e2adb..c908c84b86d22 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -6021,3 +6021,23 @@ const struct e1000_info e1000_pch_tgp_info = {
+ 	.phy_ops		= &ich8_phy_ops,
+ 	.nvm_ops		= &spt_nvm_ops,
+ };
++
++const struct e1000_info e1000_pch_adp_info = {
++	.mac			= e1000_pch_adp,
++	.flags			= FLAG_IS_ICH
++				  | FLAG_HAS_WOL
++				  | FLAG_HAS_HW_TIMESTAMP
++				  | FLAG_HAS_CTRLEXT_ON_LOAD
++				  | FLAG_HAS_AMT
++				  | FLAG_HAS_FLASH
++				  | FLAG_HAS_JUMBO_FRAMES
++				  | FLAG_APME_IN_WUC,
++	.flags2			= FLAG2_HAS_PHY_STATS
++				  | FLAG2_HAS_EEE,
++	.pba			= 26,
++	.max_hw_frame_size	= 9022,
++	.get_variants		= e1000_get_variants_ich8lan,
++	.mac_ops		= &ich8_mac_ops,
++	.phy_ops		= &ich8_phy_ops,
++	.nvm_ops		= &spt_nvm_ops,
++};
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 85391ebf8714e..c063128ed1df3 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -52,6 +52,7 @@ static const struct e1000_info *e1000_info_tbl[] = {
+ 	[board_pch_spt]		= &e1000_pch_spt_info,
+ 	[board_pch_cnp]		= &e1000_pch_cnp_info,
+ 	[board_pch_tgp]		= &e1000_pch_tgp_info,
++	[board_pch_adp]		= &e1000_pch_adp_info,
+ };
+ 
+ struct e1000_reg_info {
+@@ -7904,22 +7905,22 @@ static const struct pci_device_id e1000_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_tgp },
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_tgp },
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_tgp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_adp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_adp },
+ 
+ 	{ 0, 0, 0, 0, 0, 0, 0 }	/* terminate list */
+ };
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
+index dde5b772a5af7..c3f10a92b62b8 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
+@@ -49,13 +49,15 @@ struct visconti_eth {
+ 	void __iomem *reg;
+ 	u32 phy_intf_sel;
+ 	struct clk *phy_ref_clk;
++	struct device *dev;
+ 	spinlock_t lock; /* lock to protect register update */
+ };
+ 
+ static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed)
+ {
+ 	struct visconti_eth *dwmac = priv;
+-	unsigned int val, clk_sel_val;
++	struct net_device *netdev = dev_get_drvdata(dwmac->dev);
++	unsigned int val, clk_sel_val = 0;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&dwmac->lock, flags);
+@@ -85,7 +87,9 @@ static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed)
+ 		break;
+ 	default:
+ 		/* No bit control */
+-		break;
++		netdev_err(netdev, "Unsupported speed request (%d)", speed);
++		spin_unlock_irqrestore(&dwmac->lock, flags);
++		return;
+ 	}
+ 
+ 	writel(val, dwmac->reg + MAC_CTRL_REG);
+@@ -229,6 +233,7 @@ static int visconti_eth_dwmac_probe(struct platform_device *pdev)
+ 
+ 	spin_lock_init(&dwmac->lock);
+ 	dwmac->reg = stmmac_res.addr;
++	dwmac->dev = &pdev->dev;
+ 	plat_dat->bsp_priv = dwmac;
+ 	plat_dat->fix_mac_speed = visconti_eth_fix_mac_speed;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h b/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
+index 1914ad698cab2..acd70b9a3173c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
+@@ -150,6 +150,7 @@
+ 
+ #define NUM_DWMAC100_DMA_REGS	9
+ #define NUM_DWMAC1000_DMA_REGS	23
++#define NUM_DWMAC4_DMA_REGS	27
+ 
+ void dwmac_enable_dma_transmission(void __iomem *ioaddr);
+ void dwmac_enable_dma_irq(void __iomem *ioaddr, u32 chan, bool rx, bool tx);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index d89455803beda..8f563b446d5ca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -21,10 +21,18 @@
+ #include "dwxgmac2.h"
+ 
+ #define REG_SPACE_SIZE	0x1060
++#define GMAC4_REG_SPACE_SIZE	0x116C
+ #define MAC100_ETHTOOL_NAME	"st_mac100"
+ #define GMAC_ETHTOOL_NAME	"st_gmac"
+ #define XGMAC_ETHTOOL_NAME	"st_xgmac"
+ 
++/* Same as DMA_CHAN_BASE_ADDR defined in dwmac4_dma.h
++ *
++ * It is here because dwmac_dma.h and dwmac4_dam.h can not be included at the
++ * same time due to the conflicting macro names.
++ */
++#define GMAC4_DMA_CHAN_BASE_ADDR  0x00001100
++
+ #define ETHTOOL_DMA_OFFSET	55
+ 
+ struct stmmac_stats {
+@@ -435,6 +443,8 @@ static int stmmac_ethtool_get_regs_len(struct net_device *dev)
+ 
+ 	if (priv->plat->has_xgmac)
+ 		return XGMAC_REGSIZE * 4;
++	else if (priv->plat->has_gmac4)
++		return GMAC4_REG_SPACE_SIZE;
+ 	return REG_SPACE_SIZE;
+ }
+ 
+@@ -447,8 +457,13 @@ static void stmmac_ethtool_gregs(struct net_device *dev,
+ 	stmmac_dump_mac_regs(priv, priv->hw, reg_space);
+ 	stmmac_dump_dma_regs(priv, priv->ioaddr, reg_space);
+ 
+-	if (!priv->plat->has_xgmac) {
+-		/* Copy DMA registers to where ethtool expects them */
++	/* Copy DMA registers to where ethtool expects them */
++	if (priv->plat->has_gmac4) {
++		/* GMAC4 dumps its DMA registers at its DMA_CHAN_BASE_ADDR */
++		memcpy(&reg_space[ETHTOOL_DMA_OFFSET],
++		       &reg_space[GMAC4_DMA_CHAN_BASE_ADDR / 4],
++		       NUM_DWMAC4_DMA_REGS * 4);
++	} else if (!priv->plat->has_xgmac) {
+ 		memcpy(&reg_space[ETHTOOL_DMA_OFFSET],
+ 		       &reg_space[DMA_BUS_MODE / 4],
+ 		       NUM_DWMAC1000_DMA_REGS * 4);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index 074e2cdfb0fa6..a7ec9f4d46ced 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -145,15 +145,20 @@ static int adjust_systime(void __iomem *ioaddr, u32 sec, u32 nsec,
+ 
+ static void get_systime(void __iomem *ioaddr, u64 *systime)
+ {
+-	u64 ns;
+-
+-	/* Get the TSSS value */
+-	ns = readl(ioaddr + PTP_STNSR);
+-	/* Get the TSS and convert sec time value to nanosecond */
+-	ns += readl(ioaddr + PTP_STSR) * 1000000000ULL;
++	u64 ns, sec0, sec1;
++
++	/* Get the TSS value */
++	sec1 = readl_relaxed(ioaddr + PTP_STSR);
++	do {
++		sec0 = sec1;
++		/* Get the TSSS value */
++		ns = readl_relaxed(ioaddr + PTP_STNSR);
++		/* Get the TSS value */
++		sec1 = readl_relaxed(ioaddr + PTP_STSR);
++	} while (sec0 != sec1);
+ 
+ 	if (systime)
+-		*systime = ns;
++		*systime = ns + (sec1 * 1000000000ULL);
+ }
+ 
+ static void get_ptptime(void __iomem *ptpaddr, u64 *ptp_time)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index ac8e3b932bf1e..c5ad28e543e43 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7137,6 +7137,10 @@ int stmmac_dvr_remove(struct device *dev)
+ 
+ 	netdev_info(priv->dev, "%s: removing driver", __func__);
+ 
++	pm_runtime_get_sync(dev);
++	pm_runtime_disable(dev);
++	pm_runtime_put_noidle(dev);
++
+ 	stmmac_stop_all_dma(priv);
+ 	stmmac_mac_set(priv, priv->ioaddr, false);
+ 	netif_carrier_off(ndev);
+@@ -7155,8 +7159,6 @@ int stmmac_dvr_remove(struct device *dev)
+ 	if (priv->plat->stmmac_rst)
+ 		reset_control_assert(priv->plat->stmmac_rst);
+ 	reset_control_assert(priv->plat->stmmac_ahb_rst);
+-	pm_runtime_put(dev);
+-	pm_runtime_disable(dev);
+ 	if (priv->hw->pcs != STMMAC_PCS_TBI &&
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI)
+ 		stmmac_mdio_unregister(ndev);
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index ece6ff6049f66..f3438d3e104ac 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -1771,6 +1771,7 @@ static int ca8210_async_xmit_complete(
+ 			status
+ 		);
+ 		if (status != MAC_TRANSACTION_OVERFLOW) {
++			dev_kfree_skb_any(priv->tx_skb);
+ 			ieee802154_wake_queue(priv->hw);
+ 			return 0;
+ 		}
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index 8caa61ec718f5..36f1c5aa98fc6 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -786,6 +786,7 @@ static int hwsim_add_one(struct genl_info *info, struct device *dev,
+ 		goto err_pib;
+ 	}
+ 
++	pib->channel = 13;
+ 	rcu_assign_pointer(phy->pib, pib);
+ 	phy->idx = idx;
+ 	INIT_LIST_HEAD(&phy->edges);
+diff --git a/drivers/net/ieee802154/mcr20a.c b/drivers/net/ieee802154/mcr20a.c
+index 8dc04e2590b18..383231b854642 100644
+--- a/drivers/net/ieee802154/mcr20a.c
++++ b/drivers/net/ieee802154/mcr20a.c
+@@ -976,8 +976,8 @@ static void mcr20a_hw_setup(struct mcr20a_local *lp)
+ 	dev_dbg(printdev(lp), "%s\n", __func__);
+ 
+ 	phy->symbol_duration = 16;
+-	phy->lifs_period = 40;
+-	phy->sifs_period = 12;
++	phy->lifs_period = 40 * phy->symbol_duration;
++	phy->sifs_period = 12 * phy->symbol_duration;
+ 
+ 	hw->flags = IEEE802154_HW_TX_OMIT_CKSUM |
+ 			IEEE802154_HW_AFILT |
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 16aa3a478e9e8..3d08743317634 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3870,6 +3870,18 @@ static void macsec_common_dellink(struct net_device *dev, struct list_head *head
+ 	struct macsec_dev *macsec = macsec_priv(dev);
+ 	struct net_device *real_dev = macsec->real_dev;
+ 
++	/* If h/w offloading is available, propagate to the device */
++	if (macsec_is_offloaded(macsec)) {
++		const struct macsec_ops *ops;
++		struct macsec_context ctx;
++
++		ops = macsec_get_ops(netdev_priv(dev), &ctx);
++		if (ops) {
++			ctx.secy = &macsec->secy;
++			macsec_offload(ops->mdo_del_secy, &ctx);
++		}
++	}
++
+ 	unregister_netdevice_queue(dev, head);
+ 	list_del_rcu(&macsec->secys);
+ 	macsec_del_dev(macsec);
+@@ -3884,18 +3896,6 @@ static void macsec_dellink(struct net_device *dev, struct list_head *head)
+ 	struct net_device *real_dev = macsec->real_dev;
+ 	struct macsec_rxh_data *rxd = macsec_data_rtnl(real_dev);
+ 
+-	/* If h/w offloading is available, propagate to the device */
+-	if (macsec_is_offloaded(macsec)) {
+-		const struct macsec_ops *ops;
+-		struct macsec_context ctx;
+-
+-		ops = macsec_get_ops(netdev_priv(dev), &ctx);
+-		if (ops) {
+-			ctx.secy = &macsec->secy;
+-			macsec_offload(ops->mdo_del_secy, &ctx);
+-		}
+-	}
+-
+ 	macsec_common_dellink(dev, head);
+ 
+ 	if (list_empty(&rxd->secys)) {
+@@ -4018,6 +4018,15 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
+ 	    !macsec_check_offload(macsec->offload, macsec))
+ 		return -EOPNOTSUPP;
+ 
++	/* send_sci must be set to true when transmit sci explicitly is set */
++	if ((data && data[IFLA_MACSEC_SCI]) &&
++	    (data && data[IFLA_MACSEC_INC_SCI])) {
++		u8 send_sci = !!nla_get_u8(data[IFLA_MACSEC_INC_SCI]);
++
++		if (!send_sci)
++			return -EINVAL;
++	}
++
+ 	if (data && data[IFLA_MACSEC_ICV_LEN])
+ 		icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
+ 	mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index c3203ff1c654c..1e3a09cad9611 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -170,6 +170,7 @@ nvmf_ctlr_matches_baseopts(struct nvme_ctrl *ctrl,
+ 			struct nvmf_ctrl_options *opts)
+ {
+ 	if (ctrl->state == NVME_CTRL_DELETING ||
++	    ctrl->state == NVME_CTRL_DELETING_NOIO ||
+ 	    ctrl->state == NVME_CTRL_DEAD ||
+ 	    strcmp(opts->subsysnqn, ctrl->opts->subsysnqn) ||
+ 	    strcmp(opts->host->nqn, ctrl->opts->host->nqn) ||
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index b607d10e4cbd8..e9e6b90c6e7ca 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -1264,16 +1264,18 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 				     sizeof(*girq->parents),
+ 				     GFP_KERNEL);
+ 	if (!girq->parents) {
+-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto out_remove;
+ 	}
+ 
+ 	if (is_7211) {
+ 		pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS,
+ 					    sizeof(*pc->wake_irq),
+ 					    GFP_KERNEL);
+-		if (!pc->wake_irq)
+-			return -ENOMEM;
++		if (!pc->wake_irq) {
++			err = -ENOMEM;
++			goto out_remove;
++		}
+ 	}
+ 
+ 	/*
+@@ -1301,8 +1303,10 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 
+ 		len = strlen(dev_name(pc->dev)) + 16;
+ 		name = devm_kzalloc(pc->dev, len, GFP_KERNEL);
+-		if (!name)
+-			return -ENOMEM;
++		if (!name) {
++			err = -ENOMEM;
++			goto out_remove;
++		}
+ 
+ 		snprintf(name, len, "%s:bank%d", dev_name(pc->dev), i);
+ 
+@@ -1321,11 +1325,14 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
+ 	err = gpiochip_add_data(&pc->gpio_chip, pc);
+ 	if (err) {
+ 		dev_err(dev, "could not add GPIO chip\n");
+-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+-		return err;
++		goto out_remove;
+ 	}
+ 
+ 	return 0;
++
++out_remove:
++	pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
++	return err;
+ }
+ 
+ static struct platform_driver bcm2835_pinctrl_driver = {
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 85750974d1825..826d494f3cc66 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -451,8 +451,8 @@ static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)
+ 	value &= ~PADCFG0_PMODE_MASK;
+ 	value |= PADCFG0_PMODE_GPIO;
+ 
+-	/* Disable input and output buffers */
+-	value |= PADCFG0_GPIORXDIS;
++	/* Disable TX buffer and enable RX (this will be input) */
++	value &= ~PADCFG0_GPIORXDIS;
+ 	value |= PADCFG0_GPIOTXDIS;
+ 
+ 	/* Disable SCI/SMI/NMI generation */
+@@ -497,9 +497,6 @@ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
+ 
+ 	intel_gpio_set_gpio_mode(padcfg0);
+ 
+-	/* Disable TX buffer and enable RX (this will be input) */
+-	__intel_gpio_set_direction(padcfg0, true);
+-
+ 	raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+ 
+ 	return 0;
+@@ -1115,9 +1112,6 @@ static int intel_gpio_irq_type(struct irq_data *d, unsigned int type)
+ 
+ 	intel_gpio_set_gpio_mode(reg);
+ 
+-	/* Disable TX buffer and enable RX (this will be input) */
+-	__intel_gpio_set_direction(reg, true);
+-
+ 	value = readl(reg);
+ 
+ 	value &= ~(PADCFG0_RXEVCFG_MASK | PADCFG0_RXINV);
+@@ -1216,6 +1210,39 @@ static irqreturn_t intel_gpio_irq(int irq, void *data)
+ 	return IRQ_RETVAL(ret);
+ }
+ 
++static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
++{
++	int i;
++
++	for (i = 0; i < pctrl->ncommunities; i++) {
++		const struct intel_community *community;
++		void __iomem *base;
++		unsigned int gpp;
++
++		community = &pctrl->communities[i];
++		base = community->regs;
++
++		for (gpp = 0; gpp < community->ngpps; gpp++) {
++			/* Mask and clear all interrupts */
++			writel(0, base + community->ie_offset + gpp * 4);
++			writel(0xffff, base + community->is_offset + gpp * 4);
++		}
++	}
++}
++
++static int intel_gpio_irq_init_hw(struct gpio_chip *gc)
++{
++	struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
++
++	/*
++	 * Make sure the interrupt lines are in a proper state before
++	 * further configuration.
++	 */
++	intel_gpio_irq_init(pctrl);
++
++	return 0;
++}
++
+ static int intel_gpio_add_community_ranges(struct intel_pinctrl *pctrl,
+ 				const struct intel_community *community)
+ {
+@@ -1320,6 +1347,7 @@ static int intel_gpio_probe(struct intel_pinctrl *pctrl, int irq)
+ 	girq->num_parents = 0;
+ 	girq->default_type = IRQ_TYPE_NONE;
+ 	girq->handler = handle_bad_irq;
++	girq->init_hw = intel_gpio_irq_init_hw;
+ 
+ 	ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl);
+ 	if (ret) {
+@@ -1695,26 +1723,6 @@ int intel_pinctrl_suspend_noirq(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(intel_pinctrl_suspend_noirq);
+ 
+-static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
+-{
+-	size_t i;
+-
+-	for (i = 0; i < pctrl->ncommunities; i++) {
+-		const struct intel_community *community;
+-		void __iomem *base;
+-		unsigned int gpp;
+-
+-		community = &pctrl->communities[i];
+-		base = community->regs;
+-
+-		for (gpp = 0; gpp < community->ngpps; gpp++) {
+-			/* Mask and clear all interrupts */
+-			writel(0, base + community->ie_offset + gpp * 4);
+-			writel(0xffff, base + community->is_offset + gpp * 4);
+-		}
+-	}
+-}
+-
+ static bool intel_gpio_update_reg(void __iomem *reg, u32 mask, u32 value)
+ {
+ 	u32 curr, updated;
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun50i-h616.c b/drivers/pinctrl/sunxi/pinctrl-sun50i-h616.c
+index ce1917e230f41..152b71226a807 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun50i-h616.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun50i-h616.c
+@@ -363,16 +363,16 @@ static const struct sunxi_desc_pin h616_pins[] = {
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+ 		  SUNXI_FUNCTION(0x2, "uart2"),		/* CTS */
+-		  SUNXI_FUNCTION(0x3, "i2s3"),	/* DO0 */
++		  SUNXI_FUNCTION(0x3, "i2s3_dout0"),	/* DO0 */
+ 		  SUNXI_FUNCTION(0x4, "spi1"),		/* MISO */
+-		  SUNXI_FUNCTION(0x5, "i2s3"),	/* DI1 */
++		  SUNXI_FUNCTION(0x5, "i2s3_din1"),	/* DI1 */
+ 		  SUNXI_FUNCTION_IRQ_BANK(0x6, 6, 8)),	/* PH_EINT8 */
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 9),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+ 		  SUNXI_FUNCTION(0x1, "gpio_out"),
+-		  SUNXI_FUNCTION(0x3, "i2s3"),	/* DI0 */
++		  SUNXI_FUNCTION(0x3, "i2s3_din0"),	/* DI0 */
+ 		  SUNXI_FUNCTION(0x4, "spi1"),		/* CS1 */
+-		  SUNXI_FUNCTION(0x3, "i2s3"),	/* DO1 */
++		  SUNXI_FUNCTION(0x5, "i2s3_dout1"),	/* DO1 */
+ 		  SUNXI_FUNCTION_IRQ_BANK(0x6, 6, 9)),	/* PH_EINT9 */
+ 	SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 10),
+ 		  SUNXI_FUNCTION(0x0, "gpio_in"),
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index dcfaf09946ee3..2065842f775d6 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -104,7 +104,7 @@ again:
+ 	time->tm_year += real_year - 72;
+ #endif
+ 
+-	if (century > 20)
++	if (century > 19)
+ 		time->tm_year += (century - 19) * 100;
+ 
+ 	/*
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 9be273c320e21..a826456c60759 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -508,7 +508,8 @@ static int bnx2fc_l2_rcv_thread(void *arg)
+ 
+ static void bnx2fc_recv_frame(struct sk_buff *skb)
+ {
+-	u32 fr_len;
++	u64 crc_err;
++	u32 fr_len, fr_crc;
+ 	struct fc_lport *lport;
+ 	struct fcoe_rcv_info *fr;
+ 	struct fc_stats *stats;
+@@ -542,6 +543,11 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
+ 	skb_pull(skb, sizeof(struct fcoe_hdr));
+ 	fr_len = skb->len - sizeof(struct fcoe_crc_eof);
+ 
++	stats = per_cpu_ptr(lport->stats, get_cpu());
++	stats->RxFrames++;
++	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
++	put_cpu();
++
+ 	fp = (struct fc_frame *)skb;
+ 	fc_frame_init(fp);
+ 	fr_dev(fp) = lport;
+@@ -624,16 +630,15 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
+ 		return;
+ 	}
+ 
+-	stats = per_cpu_ptr(lport->stats, smp_processor_id());
+-	stats->RxFrames++;
+-	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
++	fr_crc = le32_to_cpu(fr_crc(fp));
+ 
+-	if (le32_to_cpu(fr_crc(fp)) !=
+-			~crc32(~0, skb->data, fr_len)) {
+-		if (stats->InvalidCRCCount < 5)
++	if (unlikely(fr_crc != ~crc32(~0, skb->data, fr_len))) {
++		stats = per_cpu_ptr(lport->stats, get_cpu());
++		crc_err = (stats->InvalidCRCCount++);
++		put_cpu();
++		if (crc_err < 5)
+ 			printk(KERN_WARNING PFX "dropping frame with "
+ 			       "CRC error\n");
+-		stats->InvalidCRCCount++;
+ 		kfree_skb(skb);
+ 		return;
+ 	}
+diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
+index 670cc82d17dc2..ca75b14931ec9 100644
+--- a/drivers/soc/mediatek/mtk-scpsys.c
++++ b/drivers/soc/mediatek/mtk-scpsys.c
+@@ -411,17 +411,12 @@ out:
+ 	return ret;
+ }
+ 
+-static int init_clks(struct platform_device *pdev, struct clk **clk)
++static void init_clks(struct platform_device *pdev, struct clk **clk)
+ {
+ 	int i;
+ 
+-	for (i = CLK_NONE + 1; i < CLK_MAX; i++) {
++	for (i = CLK_NONE + 1; i < CLK_MAX; i++)
+ 		clk[i] = devm_clk_get(&pdev->dev, clk_names[i]);
+-		if (IS_ERR(clk[i]))
+-			return PTR_ERR(clk[i]);
+-	}
+-
+-	return 0;
+ }
+ 
+ static struct scp *init_scp(struct platform_device *pdev,
+@@ -431,7 +426,7 @@ static struct scp *init_scp(struct platform_device *pdev,
+ {
+ 	struct genpd_onecell_data *pd_data;
+ 	struct resource *res;
+-	int i, j, ret;
++	int i, j;
+ 	struct scp *scp;
+ 	struct clk *clk[CLK_MAX];
+ 
+@@ -486,9 +481,7 @@ static struct scp *init_scp(struct platform_device *pdev,
+ 
+ 	pd_data->num_domains = num;
+ 
+-	ret = init_clks(pdev, clk);
+-	if (ret)
+-		return ERR_PTR(ret);
++	init_clks(pdev, clk);
+ 
+ 	for (i = 0; i < num; i++) {
+ 		struct scp_domain *scpd = &scp->domains[i];
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index f3de3305d0f59..44744c55304e1 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -573,7 +573,7 @@ static void bcm_qspi_chip_select(struct bcm_qspi *qspi, int cs)
+ 	u32 rd = 0;
+ 	u32 wr = 0;
+ 
+-	if (qspi->base[CHIP_SELECT]) {
++	if (cs >= 0 && qspi->base[CHIP_SELECT]) {
+ 		rd = bcm_qspi_read(qspi, CHIP_SELECT, 0);
+ 		wr = (rd & ~0xff) | (1 << cs);
+ 		if (rd == wr)
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index c208efeadd184..0bc7daa7afc83 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -693,6 +693,11 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	writel_relaxed(0, spicc->base + SPICC_INTREG);
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto out_master;
++	}
++
+ 	ret = devm_request_irq(&pdev->dev, irq, meson_spicc_irq,
+ 			       0, NULL, spicc);
+ 	if (ret) {
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index a15de10ee286a..753bd313e6fda 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -624,7 +624,7 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
+ 	else
+ 		mdata->state = MTK_SPI_IDLE;
+ 
+-	if (!master->can_dma(master, master->cur_msg->spi, trans)) {
++	if (!master->can_dma(master, NULL, trans)) {
+ 		if (trans->rx_buf) {
+ 			cnt = mdata->xfer_len / 4;
+ 			ioread32_rep(mdata->base + SPI_RX_DATA_REG,
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 514337c86d2c3..ffdc55f87e821 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -688,7 +688,7 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int ret, irq;
+ 
+-	ctrl = spi_alloc_master(dev, sizeof(*qspi));
++	ctrl = devm_spi_alloc_master(dev, sizeof(*qspi));
+ 	if (!ctrl)
+ 		return -ENOMEM;
+ 
+@@ -697,58 +697,46 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi");
+ 	qspi->io_base = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(qspi->io_base)) {
+-		ret = PTR_ERR(qspi->io_base);
+-		goto err_master_put;
+-	}
++	if (IS_ERR(qspi->io_base))
++		return PTR_ERR(qspi->io_base);
+ 
+ 	qspi->phys_base = res->start;
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_mm");
+ 	qspi->mm_base = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(qspi->mm_base)) {
+-		ret = PTR_ERR(qspi->mm_base);
+-		goto err_master_put;
+-	}
++	if (IS_ERR(qspi->mm_base))
++		return PTR_ERR(qspi->mm_base);
+ 
+ 	qspi->mm_size = resource_size(res);
+-	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ) {
+-		ret = -EINVAL;
+-		goto err_master_put;
+-	}
++	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ)
++		return -EINVAL;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0) {
+-		ret = irq;
+-		goto err_master_put;
+-	}
++	if (irq < 0)
++		return irq;
+ 
+ 	ret = devm_request_irq(dev, irq, stm32_qspi_irq, 0,
+ 			       dev_name(dev), qspi);
+ 	if (ret) {
+ 		dev_err(dev, "failed to request irq\n");
+-		goto err_master_put;
++		return ret;
+ 	}
+ 
+ 	init_completion(&qspi->data_completion);
+ 	init_completion(&qspi->match_completion);
+ 
+ 	qspi->clk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(qspi->clk)) {
+-		ret = PTR_ERR(qspi->clk);
+-		goto err_master_put;
+-	}
++	if (IS_ERR(qspi->clk))
++		return PTR_ERR(qspi->clk);
+ 
+ 	qspi->clk_rate = clk_get_rate(qspi->clk);
+-	if (!qspi->clk_rate) {
+-		ret = -EINVAL;
+-		goto err_master_put;
+-	}
++	if (!qspi->clk_rate)
++		return -EINVAL;
+ 
+ 	ret = clk_prepare_enable(qspi->clk);
+ 	if (ret) {
+ 		dev_err(dev, "can not enable the clock\n");
+-		goto err_master_put;
++		return ret;
+ 	}
+ 
+ 	rstc = devm_reset_control_get_exclusive(dev, NULL);
+@@ -784,7 +772,7 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	pm_runtime_get_noresume(dev);
+ 
+-	ret = devm_spi_register_master(dev, ctrl);
++	ret = spi_register_master(ctrl);
+ 	if (ret)
+ 		goto err_pm_runtime_free;
+ 
+@@ -806,8 +794,6 @@ err_dma_free:
+ 	stm32_qspi_dma_free(qspi);
+ err_clk_disable:
+ 	clk_disable_unprepare(qspi->clk);
+-err_master_put:
+-	spi_master_put(qspi->ctrl);
+ 
+ 	return ret;
+ }
+@@ -817,6 +803,7 @@ static int stm32_qspi_remove(struct platform_device *pdev)
+ 	struct stm32_qspi *qspi = platform_get_drvdata(pdev);
+ 
+ 	pm_runtime_get_sync(qspi->dev);
++	spi_unregister_master(qspi->ctrl);
+ 	/* disable qspi */
+ 	writel_relaxed(0, qspi->io_base + QSPI_CR);
+ 	stm32_qspi_dma_free(qspi);
+diff --git a/drivers/spi/spi-uniphier.c b/drivers/spi/spi-uniphier.c
+index 342ee8d2c4761..cc0da48222311 100644
+--- a/drivers/spi/spi-uniphier.c
++++ b/drivers/spi/spi-uniphier.c
+@@ -726,7 +726,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to get TX DMA capacities: %d\n",
+ 				ret);
+-			goto out_disable_clk;
++			goto out_release_dma;
+ 		}
+ 		dma_tx_burst = caps.max_burst;
+ 	}
+@@ -735,7 +735,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 	if (IS_ERR_OR_NULL(master->dma_rx)) {
+ 		if (PTR_ERR(master->dma_rx) == -EPROBE_DEFER) {
+ 			ret = -EPROBE_DEFER;
+-			goto out_disable_clk;
++			goto out_release_dma;
+ 		}
+ 		master->dma_rx = NULL;
+ 		dma_rx_burst = INT_MAX;
+@@ -744,7 +744,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to get RX DMA capacities: %d\n",
+ 				ret);
+-			goto out_disable_clk;
++			goto out_release_dma;
+ 		}
+ 		dma_rx_burst = caps.max_burst;
+ 	}
+@@ -753,10 +753,20 @@ static int uniphier_spi_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+ 	if (ret)
+-		goto out_disable_clk;
++		goto out_release_dma;
+ 
+ 	return 0;
+ 
++out_release_dma:
++	if (!IS_ERR_OR_NULL(master->dma_rx)) {
++		dma_release_channel(master->dma_rx);
++		master->dma_rx = NULL;
++	}
++	if (!IS_ERR_OR_NULL(master->dma_tx)) {
++		dma_release_channel(master->dma_tx);
++		master->dma_tx = NULL;
++	}
++
+ out_disable_clk:
+ 	clk_disable_unprepare(priv->clk);
+ 
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index 840d9813b0bc6..fcc46380e7c91 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -78,6 +78,26 @@ config FRAMEBUFFER_CONSOLE
+ 	help
+ 	  Low-level framebuffer-based console driver.
+ 
++config FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	bool "Enable legacy fbcon hardware acceleration code"
++	depends on FRAMEBUFFER_CONSOLE
++	default y if PARISC
++	default n
++	help
++	  This option enables the fbcon (framebuffer text-based) hardware
++	  acceleration for graphics drivers which were written for the fbdev
++	  graphics interface.
++
++	  On modern machines, on mainstream machines (like x86-64) or when
++	  using a modern Linux distribution those fbdev drivers usually aren't used.
++	  So enabling this option wouldn't have any effect, which is why you want
++	  to disable this option on such newer machines.
++
++	  If you compile this kernel for older machines which still require the
++	  fbdev drivers, you may want to say Y.
++
++	  If unsure, select n.
++
+ config FRAMEBUFFER_CONSOLE_DETECT_PRIMARY
+        bool "Map the console to the primary display device"
+        depends on FRAMEBUFFER_CONSOLE
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index 01fae2c96965c..f98e8f298bc19 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -43,6 +43,21 @@ static void update_attr(u8 *dst, u8 *src, int attribute,
+ 	}
+ }
+ 
++static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
++		      int sx, int dy, int dx, int height, int width)
++{
++	struct fb_copyarea area;
++
++	area.sx = sx * vc->vc_font.width;
++	area.sy = sy * vc->vc_font.height;
++	area.dx = dx * vc->vc_font.width;
++	area.dy = dy * vc->vc_font.height;
++	area.height = height * vc->vc_font.height;
++	area.width = width * vc->vc_font.width;
++
++	info->fbops->fb_copyarea(info, &area);
++}
++
+ static void bit_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 		      int sx, int height, int width)
+ {
+@@ -378,6 +393,7 @@ static int bit_update_start(struct fb_info *info)
+ 
+ void fbcon_set_bitops(struct fbcon_ops *ops)
+ {
++	ops->bmove = bit_bmove;
+ 	ops->clear = bit_clear;
+ 	ops->putcs = bit_putcs;
+ 	ops->clear_margins = bit_clear_margins;
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 99ecd9a6d844a..f36829eeb5a93 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -173,6 +173,8 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ 			int count, int ypos, int xpos);
+ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only);
+ static void fbcon_cursor(struct vc_data *vc, int mode);
++static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx,
++			int height, int width);
+ static int fbcon_switch(struct vc_data *vc);
+ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch);
+ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table);
+@@ -180,8 +182,16 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table);
+ /*
+  *  Internal routines
+  */
++static __inline__ void ywrap_up(struct vc_data *vc, int count);
++static __inline__ void ywrap_down(struct vc_data *vc, int count);
++static __inline__ void ypan_up(struct vc_data *vc, int count);
++static __inline__ void ypan_down(struct vc_data *vc, int count);
++static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx,
++			    int dy, int dx, int height, int width, u_int y_break);
+ static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
+ 			   int unit);
++static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p,
++			      int line, int count, int dy);
+ static void fbcon_modechanged(struct fb_info *info);
+ static void fbcon_set_all_vcs(struct fb_info *info);
+ static void fbcon_start(void);
+@@ -1015,7 +1025,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	struct vc_data *svc = *default_mode;
+ 	struct fbcon_display *t, *p = &fb_display[vc->vc_num];
+ 	int logo = 1, new_rows, new_cols, rows, cols;
+-	int ret;
++	int cap, ret;
+ 
+ 	if (WARN_ON(info_idx == -1))
+ 	    return;
+@@ -1024,6 +1034,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 		con2fb_map[vc->vc_num] = info_idx;
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
++	cap = info->flags;
+ 
+ 	if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
+ 		logo_shown = FBCON_LOGO_DONTSHOW;
+@@ -1125,6 +1136,14 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 
+ 	ops->graphics = 0;
+ 
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	if ((cap & FBINFO_HWACCEL_COPYAREA) &&
++	    !(cap & FBINFO_HWACCEL_DISABLED))
++		p->scrollmode = SCROLL_MOVE;
++	else /* default to something safe */
++		p->scrollmode = SCROLL_REDRAW;
++#endif
++
+ 	/*
+ 	 *  ++guenther: console.c:vc_allocate() relies on initializing
+ 	 *  vc_{cols,rows}, but we must not set those if we are only
+@@ -1211,13 +1230,14 @@ finished:
+  *  This system is now divided into two levels because of complications
+  *  caused by hardware scrolling. Top level functions:
+  *
+- *	fbcon_clear(), fbcon_putc(), fbcon_clear_margins()
++ *	fbcon_bmove(), fbcon_clear(), fbcon_putc(), fbcon_clear_margins()
+  *
+  *  handles y values in range [0, scr_height-1] that correspond to real
+  *  screen positions. y_wrap shift means that first line of bitmap may be
+  *  anywhere on this display. These functions convert lineoffsets to
+  *  bitmap offsets and deal with the wrap-around case by splitting blits.
+  *
++ *	fbcon_bmove_physical_8()    -- These functions fast implementations
+  *	fbcon_clear_physical_8()    -- of original fbcon_XXX fns.
+  *	fbcon_putc_physical_8()	    -- (font width != 8) may be added later
+  *
+@@ -1390,6 +1410,224 @@ static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
+ 	}
+ }
+ 
++static __inline__ void ywrap_up(struct vc_data *vc, int count)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct fbcon_display *p = &fb_display[vc->vc_num];
++
++	p->yscroll += count;
++	if (p->yscroll >= p->vrows)	/* Deal with wrap */
++		p->yscroll -= p->vrows;
++	ops->var.xoffset = 0;
++	ops->var.yoffset = p->yscroll * vc->vc_font.height;
++	ops->var.vmode |= FB_VMODE_YWRAP;
++	ops->update_start(info);
++	scrollback_max += count;
++	if (scrollback_max > scrollback_phys_max)
++		scrollback_max = scrollback_phys_max;
++	scrollback_current = 0;
++}
++
++static __inline__ void ywrap_down(struct vc_data *vc, int count)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct fbcon_display *p = &fb_display[vc->vc_num];
++
++	p->yscroll -= count;
++	if (p->yscroll < 0)	/* Deal with wrap */
++		p->yscroll += p->vrows;
++	ops->var.xoffset = 0;
++	ops->var.yoffset = p->yscroll * vc->vc_font.height;
++	ops->var.vmode |= FB_VMODE_YWRAP;
++	ops->update_start(info);
++	scrollback_max -= count;
++	if (scrollback_max < 0)
++		scrollback_max = 0;
++	scrollback_current = 0;
++}
++
++static __inline__ void ypan_up(struct vc_data *vc, int count)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_display *p = &fb_display[vc->vc_num];
++	struct fbcon_ops *ops = info->fbcon_par;
++
++	p->yscroll += count;
++	if (p->yscroll > p->vrows - vc->vc_rows) {
++		ops->bmove(vc, info, p->vrows - vc->vc_rows,
++			    0, 0, 0, vc->vc_rows, vc->vc_cols);
++		p->yscroll -= p->vrows - vc->vc_rows;
++	}
++
++	ops->var.xoffset = 0;
++	ops->var.yoffset = p->yscroll * vc->vc_font.height;
++	ops->var.vmode &= ~FB_VMODE_YWRAP;
++	ops->update_start(info);
++	fbcon_clear_margins(vc, 1);
++	scrollback_max += count;
++	if (scrollback_max > scrollback_phys_max)
++		scrollback_max = scrollback_phys_max;
++	scrollback_current = 0;
++}
++
++static __inline__ void ypan_up_redraw(struct vc_data *vc, int t, int count)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct fbcon_display *p = &fb_display[vc->vc_num];
++
++	p->yscroll += count;
++
++	if (p->yscroll > p->vrows - vc->vc_rows) {
++		p->yscroll -= p->vrows - vc->vc_rows;
++		fbcon_redraw_move(vc, p, t + count, vc->vc_rows - count, t);
++	}
++
++	ops->var.xoffset = 0;
++	ops->var.yoffset = p->yscroll * vc->vc_font.height;
++	ops->var.vmode &= ~FB_VMODE_YWRAP;
++	ops->update_start(info);
++	fbcon_clear_margins(vc, 1);
++	scrollback_max += count;
++	if (scrollback_max > scrollback_phys_max)
++		scrollback_max = scrollback_phys_max;
++	scrollback_current = 0;
++}
++
++static __inline__ void ypan_down(struct vc_data *vc, int count)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_display *p = &fb_display[vc->vc_num];
++	struct fbcon_ops *ops = info->fbcon_par;
++
++	p->yscroll -= count;
++	if (p->yscroll < 0) {
++		ops->bmove(vc, info, 0, 0, p->vrows - vc->vc_rows,
++			    0, vc->vc_rows, vc->vc_cols);
++		p->yscroll += p->vrows - vc->vc_rows;
++	}
++
++	ops->var.xoffset = 0;
++	ops->var.yoffset = p->yscroll * vc->vc_font.height;
++	ops->var.vmode &= ~FB_VMODE_YWRAP;
++	ops->update_start(info);
++	fbcon_clear_margins(vc, 1);
++	scrollback_max -= count;
++	if (scrollback_max < 0)
++		scrollback_max = 0;
++	scrollback_current = 0;
++}
++
++static __inline__ void ypan_down_redraw(struct vc_data *vc, int t, int count)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct fbcon_display *p = &fb_display[vc->vc_num];
++
++	p->yscroll -= count;
++
++	if (p->yscroll < 0) {
++		p->yscroll += p->vrows - vc->vc_rows;
++		fbcon_redraw_move(vc, p, t, vc->vc_rows - count, t + count);
++	}
++
++	ops->var.xoffset = 0;
++	ops->var.yoffset = p->yscroll * vc->vc_font.height;
++	ops->var.vmode &= ~FB_VMODE_YWRAP;
++	ops->update_start(info);
++	fbcon_clear_margins(vc, 1);
++	scrollback_max -= count;
++	if (scrollback_max < 0)
++		scrollback_max = 0;
++	scrollback_current = 0;
++}
++
++static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p,
++			      int line, int count, int dy)
++{
++	unsigned short *s = (unsigned short *)
++		(vc->vc_origin + vc->vc_size_row * line);
++
++	while (count--) {
++		unsigned short *start = s;
++		unsigned short *le = advance_row(s, 1);
++		unsigned short c;
++		int x = 0;
++		unsigned short attr = 1;
++
++		do {
++			c = scr_readw(s);
++			if (attr != (c & 0xff00)) {
++				attr = c & 0xff00;
++				if (s > start) {
++					fbcon_putcs(vc, start, s - start,
++						    dy, x);
++					x += s - start;
++					start = s;
++				}
++			}
++			console_conditional_schedule();
++			s++;
++		} while (s < le);
++		if (s > start)
++			fbcon_putcs(vc, start, s - start, dy, x);
++		console_conditional_schedule();
++		dy++;
++	}
++}
++
++static void fbcon_redraw_blit(struct vc_data *vc, struct fb_info *info,
++			struct fbcon_display *p, int line, int count, int ycount)
++{
++	int offset = ycount * vc->vc_cols;
++	unsigned short *d = (unsigned short *)
++	    (vc->vc_origin + vc->vc_size_row * line);
++	unsigned short *s = d + offset;
++	struct fbcon_ops *ops = info->fbcon_par;
++
++	while (count--) {
++		unsigned short *start = s;
++		unsigned short *le = advance_row(s, 1);
++		unsigned short c;
++		int x = 0;
++
++		do {
++			c = scr_readw(s);
++
++			if (c == scr_readw(d)) {
++				if (s > start) {
++					ops->bmove(vc, info, line + ycount, x,
++						   line, x, 1, s-start);
++					x += s - start + 1;
++					start = s + 1;
++				} else {
++					x++;
++					start++;
++				}
++			}
++
++			scr_writew(c, d);
++			console_conditional_schedule();
++			s++;
++			d++;
++		} while (s < le);
++		if (s > start)
++			ops->bmove(vc, info, line + ycount, x, line, x, 1,
++				   s-start);
++		console_conditional_schedule();
++		if (ycount > 0)
++			line++;
++		else {
++			line--;
++			/* NOTE: We subtract two lines from these pointers */
++			s -= vc->vc_size_row;
++			d -= vc->vc_size_row;
++		}
++	}
++}
++
+ static void fbcon_redraw(struct vc_data *vc, struct fbcon_display *p,
+ 			 int line, int count, int offset)
+ {
+@@ -1450,6 +1688,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ {
+ 	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ 	struct fbcon_display *p = &fb_display[vc->vc_num];
++	int scroll_partial = info->flags & FBINFO_PARTIAL_PAN_OK;
+ 
+ 	if (fbcon_is_inactive(vc, info))
+ 		return true;
+@@ -1466,32 +1705,291 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ 	case SM_UP:
+ 		if (count > vc->vc_rows)	/* Maximum realistic size */
+ 			count = vc->vc_rows;
+-		fbcon_redraw(vc, p, t, b - t - count,
+-			     count * vc->vc_cols);
+-		fbcon_clear(vc, b - count, 0, count, vc->vc_cols);
+-		scr_memsetw((unsigned short *) (vc->vc_origin +
+-						vc->vc_size_row *
+-						(b - count)),
+-			    vc->vc_video_erase_char,
+-			    vc->vc_size_row * count);
+-		return true;
++		if (logo_shown >= 0)
++			goto redraw_up;
++		switch (fb_scrollmode(p)) {
++		case SCROLL_MOVE:
++			fbcon_redraw_blit(vc, info, p, t, b - t - count,
++				     count);
++			fbcon_clear(vc, b - count, 0, count, vc->vc_cols);
++			scr_memsetw((unsigned short *) (vc->vc_origin +
++							vc->vc_size_row *
++							(b - count)),
++				    vc->vc_video_erase_char,
++				    vc->vc_size_row * count);
++			return true;
++
++		case SCROLL_WRAP_MOVE:
++			if (b - t - count > 3 * vc->vc_rows >> 2) {
++				if (t > 0)
++					fbcon_bmove(vc, 0, 0, count, 0, t,
++						    vc->vc_cols);
++				ywrap_up(vc, count);
++				if (vc->vc_rows - b > 0)
++					fbcon_bmove(vc, b - count, 0, b, 0,
++						    vc->vc_rows - b,
++						    vc->vc_cols);
++			} else if (info->flags & FBINFO_READS_FAST)
++				fbcon_bmove(vc, t + count, 0, t, 0,
++					    b - t - count, vc->vc_cols);
++			else
++				goto redraw_up;
++			fbcon_clear(vc, b - count, 0, count, vc->vc_cols);
++			break;
++
++		case SCROLL_PAN_REDRAW:
++			if ((p->yscroll + count <=
++			     2 * (p->vrows - vc->vc_rows))
++			    && ((!scroll_partial && (b - t == vc->vc_rows))
++				|| (scroll_partial
++				    && (b - t - count >
++					3 * vc->vc_rows >> 2)))) {
++				if (t > 0)
++					fbcon_redraw_move(vc, p, 0, t, count);
++				ypan_up_redraw(vc, t, count);
++				if (vc->vc_rows - b > 0)
++					fbcon_redraw_move(vc, p, b,
++							  vc->vc_rows - b, b);
++			} else
++				fbcon_redraw_move(vc, p, t + count, b - t - count, t);
++			fbcon_clear(vc, b - count, 0, count, vc->vc_cols);
++			break;
++
++		case SCROLL_PAN_MOVE:
++			if ((p->yscroll + count <=
++			     2 * (p->vrows - vc->vc_rows))
++			    && ((!scroll_partial && (b - t == vc->vc_rows))
++				|| (scroll_partial
++				    && (b - t - count >
++					3 * vc->vc_rows >> 2)))) {
++				if (t > 0)
++					fbcon_bmove(vc, 0, 0, count, 0, t,
++						    vc->vc_cols);
++				ypan_up(vc, count);
++				if (vc->vc_rows - b > 0)
++					fbcon_bmove(vc, b - count, 0, b, 0,
++						    vc->vc_rows - b,
++						    vc->vc_cols);
++			} else if (info->flags & FBINFO_READS_FAST)
++				fbcon_bmove(vc, t + count, 0, t, 0,
++					    b - t - count, vc->vc_cols);
++			else
++				goto redraw_up;
++			fbcon_clear(vc, b - count, 0, count, vc->vc_cols);
++			break;
++
++		case SCROLL_REDRAW:
++		      redraw_up:
++			fbcon_redraw(vc, p, t, b - t - count,
++				     count * vc->vc_cols);
++			fbcon_clear(vc, b - count, 0, count, vc->vc_cols);
++			scr_memsetw((unsigned short *) (vc->vc_origin +
++							vc->vc_size_row *
++							(b - count)),
++				    vc->vc_video_erase_char,
++				    vc->vc_size_row * count);
++			return true;
++		}
++		break;
+ 
+ 	case SM_DOWN:
+ 		if (count > vc->vc_rows)	/* Maximum realistic size */
+ 			count = vc->vc_rows;
+-		fbcon_redraw(vc, p, b - 1, b - t - count,
+-			     -count * vc->vc_cols);
+-		fbcon_clear(vc, t, 0, count, vc->vc_cols);
+-		scr_memsetw((unsigned short *) (vc->vc_origin +
+-						vc->vc_size_row *
+-						t),
+-			    vc->vc_video_erase_char,
+-			    vc->vc_size_row * count);
+-		return true;
++		if (logo_shown >= 0)
++			goto redraw_down;
++		switch (fb_scrollmode(p)) {
++		case SCROLL_MOVE:
++			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
++				     -count);
++			fbcon_clear(vc, t, 0, count, vc->vc_cols);
++			scr_memsetw((unsigned short *) (vc->vc_origin +
++							vc->vc_size_row *
++							t),
++				    vc->vc_video_erase_char,
++				    vc->vc_size_row * count);
++			return true;
++
++		case SCROLL_WRAP_MOVE:
++			if (b - t - count > 3 * vc->vc_rows >> 2) {
++				if (vc->vc_rows - b > 0)
++					fbcon_bmove(vc, b, 0, b - count, 0,
++						    vc->vc_rows - b,
++						    vc->vc_cols);
++				ywrap_down(vc, count);
++				if (t > 0)
++					fbcon_bmove(vc, count, 0, 0, 0, t,
++						    vc->vc_cols);
++			} else if (info->flags & FBINFO_READS_FAST)
++				fbcon_bmove(vc, t, 0, t + count, 0,
++					    b - t - count, vc->vc_cols);
++			else
++				goto redraw_down;
++			fbcon_clear(vc, t, 0, count, vc->vc_cols);
++			break;
++
++		case SCROLL_PAN_MOVE:
++			if ((count - p->yscroll <= p->vrows - vc->vc_rows)
++			    && ((!scroll_partial && (b - t == vc->vc_rows))
++				|| (scroll_partial
++				    && (b - t - count >
++					3 * vc->vc_rows >> 2)))) {
++				if (vc->vc_rows - b > 0)
++					fbcon_bmove(vc, b, 0, b - count, 0,
++						    vc->vc_rows - b,
++						    vc->vc_cols);
++				ypan_down(vc, count);
++				if (t > 0)
++					fbcon_bmove(vc, count, 0, 0, 0, t,
++						    vc->vc_cols);
++			} else if (info->flags & FBINFO_READS_FAST)
++				fbcon_bmove(vc, t, 0, t + count, 0,
++					    b - t - count, vc->vc_cols);
++			else
++				goto redraw_down;
++			fbcon_clear(vc, t, 0, count, vc->vc_cols);
++			break;
++
++		case SCROLL_PAN_REDRAW:
++			if ((count - p->yscroll <= p->vrows - vc->vc_rows)
++			    && ((!scroll_partial && (b - t == vc->vc_rows))
++				|| (scroll_partial
++				    && (b - t - count >
++					3 * vc->vc_rows >> 2)))) {
++				if (vc->vc_rows - b > 0)
++					fbcon_redraw_move(vc, p, b, vc->vc_rows - b,
++							  b - count);
++				ypan_down_redraw(vc, t, count);
++				if (t > 0)
++					fbcon_redraw_move(vc, p, count, t, 0);
++			} else
++				fbcon_redraw_move(vc, p, t, b - t - count, t + count);
++			fbcon_clear(vc, t, 0, count, vc->vc_cols);
++			break;
++
++		case SCROLL_REDRAW:
++		      redraw_down:
++			fbcon_redraw(vc, p, b - 1, b - t - count,
++				     -count * vc->vc_cols);
++			fbcon_clear(vc, t, 0, count, vc->vc_cols);
++			scr_memsetw((unsigned short *) (vc->vc_origin +
++							vc->vc_size_row *
++							t),
++				    vc->vc_video_erase_char,
++				    vc->vc_size_row * count);
++			return true;
++		}
+ 	}
+ 	return false;
+ }
+ 
++
++static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx,
++			int height, int width)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_display *p = &fb_display[vc->vc_num];
++
++	if (fbcon_is_inactive(vc, info))
++		return;
++
++	if (!width || !height)
++		return;
++
++	/*  Split blits that cross physical y_wrap case.
++	 *  Pathological case involves 4 blits, better to use recursive
++	 *  code rather than unrolled case
++	 *
++	 *  Recursive invocations don't need to erase the cursor over and
++	 *  over again, so we use fbcon_bmove_rec()
++	 */
++	fbcon_bmove_rec(vc, p, sy, sx, dy, dx, height, width,
++			p->vrows - p->yscroll);
++}
++
++static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx,
++			    int dy, int dx, int height, int width, u_int y_break)
++{
++	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++	struct fbcon_ops *ops = info->fbcon_par;
++	u_int b;
++
++	if (sy < y_break && sy + height > y_break) {
++		b = y_break - sy;
++		if (dy < sy) {	/* Avoid trashing self */
++			fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,
++					y_break);
++			fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,
++					height - b, width, y_break);
++		} else {
++			fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,
++					height - b, width, y_break);
++			fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,
++					y_break);
++		}
++		return;
++	}
++
++	if (dy < y_break && dy + height > y_break) {
++		b = y_break - dy;
++		if (dy < sy) {	/* Avoid trashing self */
++			fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,
++					y_break);
++			fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,
++					height - b, width, y_break);
++		} else {
++			fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,
++					height - b, width, y_break);
++			fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,
++					y_break);
++		}
++		return;
++	}
++	ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
++		   height, width);
++}
++
++static void updatescrollmode_accel(struct fbcon_display *p,
++					struct fb_info *info,
++					struct vc_data *vc)
++{
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	struct fbcon_ops *ops = info->fbcon_par;
++	int cap = info->flags;
++	u16 t = 0;
++	int ypan = FBCON_SWAP(ops->rotate, info->fix.ypanstep,
++				  info->fix.xpanstep);
++	int ywrap = FBCON_SWAP(ops->rotate, info->fix.ywrapstep, t);
++	int yres = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++	int vyres = FBCON_SWAP(ops->rotate, info->var.yres_virtual,
++				   info->var.xres_virtual);
++	int good_pan = (cap & FBINFO_HWACCEL_YPAN) &&
++		divides(ypan, vc->vc_font.height) && vyres > yres;
++	int good_wrap = (cap & FBINFO_HWACCEL_YWRAP) &&
++		divides(ywrap, vc->vc_font.height) &&
++		divides(vc->vc_font.height, vyres) &&
++		divides(vc->vc_font.height, yres);
++	int reading_fast = cap & FBINFO_READS_FAST;
++	int fast_copyarea = (cap & FBINFO_HWACCEL_COPYAREA) &&
++		!(cap & FBINFO_HWACCEL_DISABLED);
++	int fast_imageblit = (cap & FBINFO_HWACCEL_IMAGEBLIT) &&
++		!(cap & FBINFO_HWACCEL_DISABLED);
++
++	if (good_wrap || good_pan) {
++		if (reading_fast || fast_copyarea)
++			p->scrollmode = good_wrap ?
++				SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE;
++		else
++			p->scrollmode = good_wrap ? SCROLL_REDRAW :
++				SCROLL_PAN_REDRAW;
++	} else {
++		if (reading_fast || (fast_copyarea && !fast_imageblit))
++			p->scrollmode = SCROLL_MOVE;
++		else
++			p->scrollmode = SCROLL_REDRAW;
++	}
++#endif
++}
++
+ static void updatescrollmode(struct fbcon_display *p,
+ 					struct fb_info *info,
+ 					struct vc_data *vc)
+@@ -1507,6 +2005,9 @@ static void updatescrollmode(struct fbcon_display *p,
+ 		p->vrows -= (yres - (fh * vc->vc_rows)) / fh;
+ 	if ((yres % fh) && (vyres % fh < yres % fh))
+ 		p->vrows--;
++
++	/* update scrollmode in case hardware acceleration is used */
++	updatescrollmode_accel(p, info, vc);
+ }
+ 
+ #define PITCH(w) (((w) + 7) >> 3)
+@@ -1664,7 +2165,21 @@ static int fbcon_switch(struct vc_data *vc)
+ 
+ 	updatescrollmode(p, info, vc);
+ 
+-	scrollback_phys_max = 0;
++	switch (fb_scrollmode(p)) {
++	case SCROLL_WRAP_MOVE:
++		scrollback_phys_max = p->vrows - vc->vc_rows;
++		break;
++	case SCROLL_PAN_MOVE:
++	case SCROLL_PAN_REDRAW:
++		scrollback_phys_max = p->vrows - 2 * vc->vc_rows;
++		if (scrollback_phys_max < 0)
++			scrollback_phys_max = 0;
++		break;
++	default:
++		scrollback_phys_max = 0;
++		break;
++	}
++
+ 	scrollback_max = 0;
+ 	scrollback_current = 0;
+ 
+diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h
+index a00603b4451ae..969d41ecede5f 100644
+--- a/drivers/video/fbdev/core/fbcon.h
++++ b/drivers/video/fbdev/core/fbcon.h
+@@ -29,6 +29,9 @@ struct fbcon_display {
+     /* Filled in by the low-level console driver */
+     const u_char *fontdata;
+     int userfont;                   /* != 0 if fontdata kmalloc()ed */
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++    u_short scrollmode;             /* Scroll Method, use fb_scrollmode() */
++#endif
+     u_short inverse;                /* != 0 text black on white as default */
+     short yscroll;                  /* Hardware scrolling */
+     int vrows;                      /* number of virtual rows */
+@@ -51,6 +54,8 @@ struct fbcon_display {
+ };
+ 
+ struct fbcon_ops {
++	void (*bmove)(struct vc_data *vc, struct fb_info *info, int sy,
++		      int sx, int dy, int dx, int height, int width);
+ 	void (*clear)(struct vc_data *vc, struct fb_info *info, int sy,
+ 		      int sx, int height, int width);
+ 	void (*putcs)(struct vc_data *vc, struct fb_info *info,
+@@ -149,6 +154,73 @@ static inline int attr_col_ec(int shift, struct vc_data *vc,
+ #define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0)
+ #define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1)
+ 
++    /*
++     *  Scroll Method
++     */
++
++/* There are several methods fbcon can use to move text around the screen:
++ *
++ *                     Operation   Pan    Wrap
++ *---------------------------------------------
++ * SCROLL_MOVE         copyarea    No     No
++ * SCROLL_PAN_MOVE     copyarea    Yes    No
++ * SCROLL_WRAP_MOVE    copyarea    No     Yes
++ * SCROLL_REDRAW       imageblit   No     No
++ * SCROLL_PAN_REDRAW   imageblit   Yes    No
++ * SCROLL_WRAP_REDRAW  imageblit   No     Yes
++ *
++ * (SCROLL_WRAP_REDRAW is not implemented yet)
++ *
++ * In general, fbcon will choose the best scrolling
++ * method based on the rule below:
++ *
++ * Pan/Wrap > accel imageblit > accel copyarea >
++ * soft imageblit > (soft copyarea)
++ *
++ * Exception to the rule: Pan + accel copyarea is
++ * preferred over Pan + accel imageblit.
++ *
++ * The above is typical for PCI/AGP cards. Unless
++ * overridden, fbcon will never use soft copyarea.
++ *
++ * If you need to override the above rule, set the
++ * appropriate flags in fb_info->flags.  For example,
++ * to prefer copyarea over imageblit, set
++ * FBINFO_READS_FAST.
++ *
++ * Other notes:
++ * + use the hardware engine to move the text
++ *    (hw-accelerated copyarea() and fillrect())
++ * + use hardware-supported panning on a large virtual screen
++ * + amifb can not only pan, but also wrap the display by N lines
++ *    (i.e. visible line i = physical line (i+N) % yres).
++ * + read what's already rendered on the screen and
++ *     write it in a different place (this is cfb_copyarea())
++ * + re-render the text to the screen
++ *
++ * Whether to use wrapping or panning can only be figured out at
++ * runtime (when we know whether our font height is a multiple
++ * of the pan/wrap step)
++ *
++ */
++
++#define SCROLL_MOVE	   0x001
++#define SCROLL_PAN_MOVE	   0x002
++#define SCROLL_WRAP_MOVE   0x003
++#define SCROLL_REDRAW	   0x004
++#define SCROLL_PAN_REDRAW  0x005
++
++static inline u_short fb_scrollmode(struct fbcon_display *fb)
++{
++#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
++	return fb->scrollmode;
++#else
++	/* hardcoded to SCROLL_REDRAW if acceleration was disabled. */
++	return SCROLL_REDRAW;
++#endif
++}
++
++
+ #ifdef CONFIG_FB_TILEBLITTING
+ extern void fbcon_set_tileops(struct vc_data *vc, struct fb_info *info);
+ #endif
+diff --git a/drivers/video/fbdev/core/fbcon_ccw.c b/drivers/video/fbdev/core/fbcon_ccw.c
+index ffa78936eaab0..2789ace796342 100644
+--- a/drivers/video/fbdev/core/fbcon_ccw.c
++++ b/drivers/video/fbdev/core/fbcon_ccw.c
+@@ -59,12 +59,31 @@ static void ccw_update_attr(u8 *dst, u8 *src, int attribute,
+ 	}
+ }
+ 
++
++static void ccw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
++		     int sx, int dy, int dx, int height, int width)
++{
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct fb_copyarea area;
++	u32 vyres = GETVYRES(ops->p, info);
++
++	area.sx = sy * vc->vc_font.height;
++	area.sy = vyres - ((sx + width) * vc->vc_font.width);
++	area.dx = dy * vc->vc_font.height;
++	area.dy = vyres - ((dx + width) * vc->vc_font.width);
++	area.width = height * vc->vc_font.height;
++	area.height  = width * vc->vc_font.width;
++
++	info->fbops->fb_copyarea(info, &area);
++}
++
+ static void ccw_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 		     int sx, int height, int width)
+ {
++	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_fillrect region;
+ 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+-	u32 vyres = info->var.yres;
++	u32 vyres = GETVYRES(ops->p, info);
+ 
+ 	region.color = attr_bgcol_ec(bgshift,vc,info);
+ 	region.dx = sy * vc->vc_font.height;
+@@ -121,7 +140,7 @@ static void ccw_putcs(struct vc_data *vc, struct fb_info *info,
+ 	u32 cnt, pitch, size;
+ 	u32 attribute = get_attribute(info, scr_readw(s));
+ 	u8 *dst, *buf = NULL;
+-	u32 vyres = info->var.yres;
++	u32 vyres = GETVYRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -210,7 +229,7 @@ static void ccw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
+ 	int err = 1, dx, dy;
+ 	char *src;
+-	u32 vyres = info->var.yres;
++	u32 vyres = GETVYRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -368,7 +387,7 @@ static int ccw_update_start(struct fb_info *info)
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	u32 yoffset;
+-	u32 vyres = info->var.yres;
++	u32 vyres = GETVYRES(ops->p, info);
+ 	int err;
+ 
+ 	yoffset = (vyres - info->var.yres) - ops->var.xoffset;
+@@ -383,6 +402,7 @@ static int ccw_update_start(struct fb_info *info)
+ 
+ void fbcon_rotate_ccw(struct fbcon_ops *ops)
+ {
++	ops->bmove = ccw_bmove;
+ 	ops->clear = ccw_clear;
+ 	ops->putcs = ccw_putcs;
+ 	ops->clear_margins = ccw_clear_margins;
+diff --git a/drivers/video/fbdev/core/fbcon_cw.c b/drivers/video/fbdev/core/fbcon_cw.c
+index 92e5b7fb51ee2..86a254c1b2b7b 100644
+--- a/drivers/video/fbdev/core/fbcon_cw.c
++++ b/drivers/video/fbdev/core/fbcon_cw.c
+@@ -44,12 +44,31 @@ static void cw_update_attr(u8 *dst, u8 *src, int attribute,
+ 	}
+ }
+ 
++
++static void cw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
++		     int sx, int dy, int dx, int height, int width)
++{
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct fb_copyarea area;
++	u32 vxres = GETVXRES(ops->p, info);
++
++	area.sx = vxres - ((sy + height) * vc->vc_font.height);
++	area.sy = sx * vc->vc_font.width;
++	area.dx = vxres - ((dy + height) * vc->vc_font.height);
++	area.dy = dx * vc->vc_font.width;
++	area.width = height * vc->vc_font.height;
++	area.height  = width * vc->vc_font.width;
++
++	info->fbops->fb_copyarea(info, &area);
++}
++
+ static void cw_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 		     int sx, int height, int width)
+ {
++	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_fillrect region;
+ 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+-	u32 vxres = info->var.xres;
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	region.color = attr_bgcol_ec(bgshift,vc,info);
+ 	region.dx = vxres - ((sy + height) * vc->vc_font.height);
+@@ -106,7 +125,7 @@ static void cw_putcs(struct vc_data *vc, struct fb_info *info,
+ 	u32 cnt, pitch, size;
+ 	u32 attribute = get_attribute(info, scr_readw(s));
+ 	u8 *dst, *buf = NULL;
+-	u32 vxres = info->var.xres;
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -193,7 +212,7 @@ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
+ 	int err = 1, dx, dy;
+ 	char *src;
+-	u32 vxres = info->var.xres;
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -350,7 +369,7 @@ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ static int cw_update_start(struct fb_info *info)
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+-	u32 vxres = info->var.xres;
++	u32 vxres = GETVXRES(ops->p, info);
+ 	u32 xoffset;
+ 	int err;
+ 
+@@ -366,6 +385,7 @@ static int cw_update_start(struct fb_info *info)
+ 
+ void fbcon_rotate_cw(struct fbcon_ops *ops)
+ {
++	ops->bmove = cw_bmove;
+ 	ops->clear = cw_clear;
+ 	ops->putcs = cw_putcs;
+ 	ops->clear_margins = cw_clear_margins;
+diff --git a/drivers/video/fbdev/core/fbcon_rotate.h b/drivers/video/fbdev/core/fbcon_rotate.h
+index b528b2e54283a..01cbe303b8a29 100644
+--- a/drivers/video/fbdev/core/fbcon_rotate.h
++++ b/drivers/video/fbdev/core/fbcon_rotate.h
+@@ -11,6 +11,15 @@
+ #ifndef _FBCON_ROTATE_H
+ #define _FBCON_ROTATE_H
+ 
++#define GETVYRES(s,i) ({                           \
++        (fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE) ? \
++        (i)->var.yres : (i)->var.yres_virtual; })
++
++#define GETVXRES(s,i) ({                           \
++        (fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE || !(i)->fix.xpanstep) ? \
++        (i)->var.xres : (i)->var.xres_virtual; })
++
++
+ static inline int pattern_test_bit(u32 x, u32 y, u32 pitch, const char *pat)
+ {
+ 	u32 tmp = (y * pitch) + x, index = tmp / 8,  bit = tmp % 8;
+diff --git a/drivers/video/fbdev/core/fbcon_ud.c b/drivers/video/fbdev/core/fbcon_ud.c
+index 09619bd8e0217..23bc045769d08 100644
+--- a/drivers/video/fbdev/core/fbcon_ud.c
++++ b/drivers/video/fbdev/core/fbcon_ud.c
+@@ -44,13 +44,33 @@ static void ud_update_attr(u8 *dst, u8 *src, int attribute,
+ 	}
+ }
+ 
++
++static void ud_bmove(struct vc_data *vc, struct fb_info *info, int sy,
++		     int sx, int dy, int dx, int height, int width)
++{
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct fb_copyarea area;
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
++
++	area.sy = vyres - ((sy + height) * vc->vc_font.height);
++	area.sx = vxres - ((sx + width) * vc->vc_font.width);
++	area.dy = vyres - ((dy + height) * vc->vc_font.height);
++	area.dx = vxres - ((dx + width) * vc->vc_font.width);
++	area.height = height * vc->vc_font.height;
++	area.width  = width * vc->vc_font.width;
++
++	info->fbops->fb_copyarea(info, &area);
++}
++
+ static void ud_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 		     int sx, int height, int width)
+ {
++	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fb_fillrect region;
+ 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+-	u32 vyres = info->var.yres;
+-	u32 vxres = info->var.xres;
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	region.color = attr_bgcol_ec(bgshift,vc,info);
+ 	region.dy = vyres - ((sy + height) * vc->vc_font.height);
+@@ -142,8 +162,8 @@ static void ud_putcs(struct vc_data *vc, struct fb_info *info,
+ 	u32 mod = vc->vc_font.width % 8, cnt, pitch, size;
+ 	u32 attribute = get_attribute(info, scr_readw(s));
+ 	u8 *dst, *buf = NULL;
+-	u32 vyres = info->var.yres;
+-	u32 vxres = info->var.xres;
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -239,8 +259,8 @@ static void ud_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
+ 	int err = 1, dx, dy;
+ 	char *src;
+-	u32 vyres = info->var.yres;
+-	u32 vxres = info->var.xres;
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 
+ 	if (!ops->fontbuffer)
+ 		return;
+@@ -390,8 +410,8 @@ static int ud_update_start(struct fb_info *info)
+ {
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	int xoffset, yoffset;
+-	u32 vyres = info->var.yres;
+-	u32 vxres = info->var.xres;
++	u32 vyres = GETVYRES(ops->p, info);
++	u32 vxres = GETVXRES(ops->p, info);
+ 	int err;
+ 
+ 	xoffset = vxres - info->var.xres - ops->var.xoffset;
+@@ -409,6 +429,7 @@ static int ud_update_start(struct fb_info *info)
+ 
+ void fbcon_rotate_ud(struct fbcon_ops *ops)
+ {
++	ops->bmove = ud_bmove;
+ 	ops->clear = ud_clear;
+ 	ops->putcs = ud_putcs;
+ 	ops->clear_margins = ud_clear_margins;
+diff --git a/drivers/video/fbdev/core/tileblit.c b/drivers/video/fbdev/core/tileblit.c
+index 72af95053bcb5..2768eff247ba4 100644
+--- a/drivers/video/fbdev/core/tileblit.c
++++ b/drivers/video/fbdev/core/tileblit.c
+@@ -16,6 +16,21 @@
+ #include <asm/types.h>
+ #include "fbcon.h"
+ 
++static void tile_bmove(struct vc_data *vc, struct fb_info *info, int sy,
++		       int sx, int dy, int dx, int height, int width)
++{
++	struct fb_tilearea area;
++
++	area.sx = sx;
++	area.sy = sy;
++	area.dx = dx;
++	area.dy = dy;
++	area.height = height;
++	area.width = width;
++
++	info->tileops->fb_tilecopy(info, &area);
++}
++
+ static void tile_clear(struct vc_data *vc, struct fb_info *info, int sy,
+ 		       int sx, int height, int width)
+ {
+@@ -118,6 +133,7 @@ void fbcon_set_tileops(struct vc_data *vc, struct fb_info *info)
+ 	struct fb_tilemap map;
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
++	ops->bmove = tile_bmove;
+ 	ops->clear = tile_clear;
+ 	ops->putcs = tile_putcs;
+ 	ops->clear_margins = tile_clear_margins;
+diff --git a/drivers/video/fbdev/skeletonfb.c b/drivers/video/fbdev/skeletonfb.c
+index 0fe922f726e98..bcacfb6934fa9 100644
+--- a/drivers/video/fbdev/skeletonfb.c
++++ b/drivers/video/fbdev/skeletonfb.c
+@@ -505,15 +505,15 @@ void xxxfb_fillrect(struct fb_info *p, const struct fb_fillrect *region)
+ }
+ 
+ /**
+- *      xxxfb_copyarea - OBSOLETE function.
+ *      xxxfb_copyarea - REQUIRED function. Can use generic routines if
+ *                       non accelerated hardware and packed pixel based.
+  *                       Copies one area of the screen to another area.
+- *                       Will be deleted in a future version
+  *
+  *      @info: frame buffer structure that represents a single frame buffer
+  *      @area: Structure providing the data to copy the framebuffer contents
+  *	       from one region to another.
+  *
+- *      This drawing operation copied a rectangular area from one area of the
++ *      This drawing operation copies a rectangular area from one area of the
+  *	screen to another area.
+  */
+ void xxxfb_copyarea(struct fb_info *p, const struct fb_copyarea *area) 
+@@ -645,9 +645,9 @@ static const struct fb_ops xxxfb_ops = {
+ 	.fb_setcolreg	= xxxfb_setcolreg,
+ 	.fb_blank	= xxxfb_blank,
+ 	.fb_pan_display	= xxxfb_pan_display,
+-	.fb_fillrect	= xxxfb_fillrect,	/* Needed !!!   */
+-	.fb_copyarea	= xxxfb_copyarea,	/* Obsolete     */
+-	.fb_imageblit	= xxxfb_imageblit,	/* Needed !!!   */
++	.fb_fillrect	= xxxfb_fillrect, 	/* Needed !!! */
++	.fb_copyarea	= xxxfb_copyarea,	/* Needed !!! */
++	.fb_imageblit	= xxxfb_imageblit,	/* Needed !!! */
+ 	.fb_cursor	= xxxfb_cursor,		/* Optional !!! */
+ 	.fb_sync	= xxxfb_sync,
+ 	.fb_ioctl	= xxxfb_ioctl,
+diff --git a/fs/9p/fid.c b/fs/9p/fid.c
+index 6aab046c98e29..79df61fe0e596 100644
+--- a/fs/9p/fid.c
++++ b/fs/9p/fid.c
+@@ -96,12 +96,8 @@ static struct p9_fid *v9fs_fid_find(struct dentry *dentry, kuid_t uid, int any)
+ 		 dentry, dentry, from_kuid(&init_user_ns, uid),
+ 		 any);
+ 	ret = NULL;
+-
+-	if (d_inode(dentry))
+-		ret = v9fs_fid_find_inode(d_inode(dentry), uid);
+-
+ 	/* we'll recheck under lock if there's anything to look in */
+-	if (!ret && dentry->d_fsdata) {
++	if (dentry->d_fsdata) {
+ 		struct hlist_head *h = (struct hlist_head *)&dentry->d_fsdata;
+ 
+ 		spin_lock(&dentry->d_lock);
+@@ -113,6 +109,9 @@ static struct p9_fid *v9fs_fid_find(struct dentry *dentry, kuid_t uid, int any)
+ 			}
+ 		}
+ 		spin_unlock(&dentry->d_lock);
++	} else {
++		if (dentry->d_inode)
++			ret = v9fs_fid_find_inode(dentry->d_inode, uid);
+ 	}
+ 
+ 	return ret;
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 444e9c89ff3e9..b67c965725ea6 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2547,6 +2547,19 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
+ 	int ret;
+ 	bool dirty_bg_running;
+ 
++	/*
++	 * This can only happen when we are doing read-only scrub on read-only
++	 * mount.
++	 * In that case we should not start a new transaction on read-only fs.
++	 * Thus here we skip all chunk allocations.
++	 */
++	if (sb_rdonly(fs_info->sb)) {
++		mutex_lock(&fs_info->ro_block_group_mutex);
++		ret = inc_block_group_ro(cache, 0);
++		mutex_unlock(&fs_info->ro_block_group_mutex);
++		return ret;
++	}
++
+ 	do {
+ 		trans = btrfs_join_transaction(fs_info->extent_root);
+ 		if (IS_ERR(trans))
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 124b9e6815e5f..48e03e176f319 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -779,10 +779,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 		goto fail;
+ 	}
+ 
+-	spin_lock(&fs_info->trans_lock);
+-	list_add(&pending_snapshot->list,
+-		 &trans->transaction->pending_snapshots);
+-	spin_unlock(&fs_info->trans_lock);
++	trans->pending_snapshot = pending_snapshot;
+ 
+ 	ret = btrfs_commit_transaction(trans);
+ 	if (ret)
+@@ -3313,7 +3310,7 @@ static long btrfs_ioctl_rm_dev(struct file *file, void __user *arg)
+ 	struct block_device *bdev = NULL;
+ 	fmode_t mode;
+ 	int ret;
+-	bool cancel;
++	bool cancel = false;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 071f7334f8189..26134b7476a2f 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1185,9 +1185,24 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	struct btrfs_trans_handle *trans = NULL;
+ 	int ret = 0;
+ 
++	/*
++	 * We need to have subvol_sem write locked, to prevent races between
++	 * concurrent tasks trying to disable quotas, because we will unlock
++	 * and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes.
++	 */
++	lockdep_assert_held_write(&fs_info->subvol_sem);
++
+ 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (!fs_info->quota_root)
+ 		goto out;
++
++	/*
++	 * Request qgroup rescan worker to complete and wait for it. This wait
++	 * must be done before transaction start for quota disable since it may
++	 * deadlock with transaction by the qgroup rescan worker.
++	 */
++	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
++	btrfs_qgroup_wait_for_completion(fs_info, false);
+ 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+ 
+ 	/*
+@@ -1205,14 +1220,13 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+ 		trans = NULL;
++		set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+ 		goto out;
+ 	}
+ 
+ 	if (!fs_info->quota_root)
+ 		goto out;
+ 
+-	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+-	btrfs_qgroup_wait_for_completion(fs_info, false);
+ 	spin_lock(&fs_info->qgroup_lock);
+ 	quota_root = fs_info->quota_root;
+ 	fs_info->quota_root = NULL;
+@@ -3380,6 +3394,9 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid,
+ 			btrfs_warn(fs_info,
+ 			"qgroup rescan init failed, qgroup is not enabled");
+ 			ret = -EINVAL;
++		} else if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
++			/* Quota disable is in progress */
++			ret = -EBUSY;
+ 		}
+ 
+ 		if (ret) {
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 1c3a1189c0bdf..27b93a6c41bb4 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -2032,6 +2032,27 @@ static inline void btrfs_wait_delalloc_flush(struct btrfs_fs_info *fs_info)
+ 		btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
+ }
+ 
++/*
++ * Add a pending snapshot associated with the given transaction handle to the
++ * respective handle. This must be called after the transaction commit started
++ * and while holding fs_info->trans_lock.
++ * This serves to guarantee a caller of btrfs_commit_transaction() that it can
++ * safely free the pending snapshot pointer in case btrfs_commit_transaction()
++ * returns an error.
++ */
++static void add_pending_snapshot(struct btrfs_trans_handle *trans)
++{
++	struct btrfs_transaction *cur_trans = trans->transaction;
++
++	if (!trans->pending_snapshot)
++		return;
++
++	lockdep_assert_held(&trans->fs_info->trans_lock);
++	ASSERT(cur_trans->state >= TRANS_STATE_COMMIT_START);
++
++	list_add(&trans->pending_snapshot->list, &cur_trans->pending_snapshots);
++}
++
+ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+@@ -2105,6 +2126,8 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 	if (cur_trans->state >= TRANS_STATE_COMMIT_START) {
+ 		enum btrfs_trans_state want_state = TRANS_STATE_COMPLETED;
+ 
++		add_pending_snapshot(trans);
++
+ 		spin_unlock(&fs_info->trans_lock);
+ 		refcount_inc(&cur_trans->use_count);
+ 
+@@ -2195,6 +2218,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 	 * COMMIT_DOING so make sure to wait for num_writers to == 1 again.
+ 	 */
+ 	spin_lock(&fs_info->trans_lock);
++	add_pending_snapshot(trans);
+ 	cur_trans->state = TRANS_STATE_COMMIT_DOING;
+ 	spin_unlock(&fs_info->trans_lock);
+ 	wait_event(cur_trans->writer_wait,
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index ba45065f94511..eba07b8119bbd 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -123,6 +123,8 @@ struct btrfs_trans_handle {
+ 	struct btrfs_transaction *transaction;
+ 	struct btrfs_block_rsv *block_rsv;
+ 	struct btrfs_block_rsv *orig_rsv;
++	/* Set by a task that wants to create a snapshot. */
++	struct btrfs_pending_snapshot *pending_snapshot;
+ 	refcount_t use_count;
+ 	unsigned int type;
+ 	/*
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 1060164b984a7..cefd0e9623ba9 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1945,6 +1945,19 @@ cifs_set_cifscreds(struct smb3_fs_context *ctx, struct cifs_ses *ses)
+ 		}
+ 	}
+ 
++	ctx->workstation_name = kstrdup(ses->workstation_name, GFP_KERNEL);
++	if (!ctx->workstation_name) {
++		cifs_dbg(FYI, "Unable to allocate memory for workstation_name\n");
++		rc = -ENOMEM;
++		kfree(ctx->username);
++		ctx->username = NULL;
++		kfree_sensitive(ctx->password);
++		ctx->password = NULL;
++		kfree(ctx->domainname);
++		ctx->domainname = NULL;
++		goto out_key_put;
++	}
++
+ out_key_put:
+ 	up_read(&key->sem);
+ 	key_put(key);
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 3eee806fd29f0..38574fc70117e 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -675,7 +675,11 @@ static int size_of_ntlmssp_blob(struct cifs_ses *ses, int base_size)
+ 	else
+ 		sz += sizeof(__le16);
+ 
+-	sz += sizeof(__le16) * strnlen(ses->workstation_name, CIFS_MAX_WORKSTATION_LEN);
++	if (ses->workstation_name)
++		sz += sizeof(__le16) * strnlen(ses->workstation_name,
++			CIFS_MAX_WORKSTATION_LEN);
++	else
++		sz += sizeof(__le16);
+ 
+ 	return sz;
+ }
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index af7088085d4e4..d248a01132c3b 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2936,6 +2936,9 @@ void ext4_fc_replay_cleanup(struct super_block *sb);
+ int ext4_fc_commit(journal_t *journal, tid_t commit_tid);
+ int __init ext4_fc_init_dentry_cache(void);
+ void ext4_fc_destroy_dentry_cache(void);
++int ext4_fc_record_regions(struct super_block *sb, int ino,
++			   ext4_lblk_t lblk, ext4_fsblk_t pblk,
++			   int len, int replay);
+ 
+ /* mballoc.c */
+ extern const struct seq_operations ext4_mb_seq_groups_ops;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index fac884dbb42e2..9b37d16b24ffd 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -6101,11 +6101,15 @@ int ext4_ext_clear_bb(struct inode *inode)
+ 
+ 					ext4_mb_mark_bb(inode->i_sb,
+ 							path[j].p_block, 1, 0);
++					ext4_fc_record_regions(inode->i_sb, inode->i_ino,
++							0, path[j].p_block, 1, 1);
+ 				}
+ 				ext4_ext_drop_refs(path);
+ 				kfree(path);
+ 			}
+ 			ext4_mb_mark_bb(inode->i_sb, map.m_pblk, map.m_len, 0);
++			ext4_fc_record_regions(inode->i_sb, inode->i_ino,
++					map.m_lblk, map.m_pblk, map.m_len, 1);
+ 		}
+ 		cur = cur + map.m_len;
+ 	}
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index ace68ca90b01e..3b79fb063c07a 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1435,14 +1435,15 @@ static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
+ 		if (state->fc_modified_inodes[i] == ino)
+ 			return 0;
+ 	if (state->fc_modified_inodes_used == state->fc_modified_inodes_size) {
+-		state->fc_modified_inodes_size +=
+-			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+ 		state->fc_modified_inodes = krealloc(
+-					state->fc_modified_inodes, sizeof(int) *
+-					state->fc_modified_inodes_size,
+-					GFP_KERNEL);
++				state->fc_modified_inodes,
++				sizeof(int) * (state->fc_modified_inodes_size +
++				EXT4_FC_REPLAY_REALLOC_INCREMENT),
++				GFP_KERNEL);
+ 		if (!state->fc_modified_inodes)
+ 			return -ENOMEM;
++		state->fc_modified_inodes_size +=
++			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+ 	}
+ 	state->fc_modified_inodes[state->fc_modified_inodes_used++] = ino;
+ 	return 0;
+@@ -1474,7 +1475,9 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
+ 	}
+ 	inode = NULL;
+ 
+-	ext4_fc_record_modified_inode(sb, ino);
++	ret = ext4_fc_record_modified_inode(sb, ino);
++	if (ret)
++		goto out;
+ 
+ 	raw_fc_inode = (struct ext4_inode *)
+ 		(val + offsetof(struct ext4_fc_inode, fc_raw_inode));
+@@ -1606,16 +1609,23 @@ out:
+ }
+ 
+ /*
+- * Record physical disk regions which are in use as per fast commit area. Our
+- * simple replay phase allocator excludes these regions from allocation.
++ * Record physical disk regions which are in use as per fast commit area,
++ * and used by inodes during replay phase. Our simple replay phase
++ * allocator excludes these regions from allocation.
+  */
+-static int ext4_fc_record_regions(struct super_block *sb, int ino,
+-		ext4_lblk_t lblk, ext4_fsblk_t pblk, int len)
++int ext4_fc_record_regions(struct super_block *sb, int ino,
++		ext4_lblk_t lblk, ext4_fsblk_t pblk, int len, int replay)
+ {
+ 	struct ext4_fc_replay_state *state;
+ 	struct ext4_fc_alloc_region *region;
+ 
+ 	state = &EXT4_SB(sb)->s_fc_replay_state;
++	/*
++	 * during replay phase, the fc_regions_valid may not same as
++	 * fc_regions_used, update it when do new additions.
++	 */
++	if (replay && state->fc_regions_used != state->fc_regions_valid)
++		state->fc_regions_used = state->fc_regions_valid;
+ 	if (state->fc_regions_used == state->fc_regions_size) {
+ 		state->fc_regions_size +=
+ 			EXT4_FC_REPLAY_REALLOC_INCREMENT;
+@@ -1633,6 +1643,9 @@ static int ext4_fc_record_regions(struct super_block *sb, int ino,
+ 	region->pblk = pblk;
+ 	region->len = len;
+ 
++	if (replay)
++		state->fc_regions_valid++;
++
+ 	return 0;
+ }
+ 
+@@ -1664,6 +1677,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 	}
+ 
+ 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
++	if (ret)
++		goto out;
+ 
+ 	start = le32_to_cpu(ex->ee_block);
+ 	start_pblk = ext4_ext_pblock(ex);
+@@ -1681,18 +1696,14 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 		map.m_pblk = 0;
+ 		ret = ext4_map_blocks(NULL, inode, &map, 0);
+ 
+-		if (ret < 0) {
+-			iput(inode);
+-			return 0;
+-		}
++		if (ret < 0)
++			goto out;
+ 
+ 		if (ret == 0) {
+ 			/* Range is not mapped */
+ 			path = ext4_find_extent(inode, cur, NULL, 0);
+-			if (IS_ERR(path)) {
+-				iput(inode);
+-				return 0;
+-			}
++			if (IS_ERR(path))
++				goto out;
+ 			memset(&newex, 0, sizeof(newex));
+ 			newex.ee_block = cpu_to_le32(cur);
+ 			ext4_ext_store_pblock(
+@@ -1706,10 +1717,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 			up_write((&EXT4_I(inode)->i_data_sem));
+ 			ext4_ext_drop_refs(path);
+ 			kfree(path);
+-			if (ret) {
+-				iput(inode);
+-				return 0;
+-			}
++			if (ret)
++				goto out;
+ 			goto next;
+ 		}
+ 
+@@ -1722,10 +1731,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 			ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
+ 					ext4_ext_is_unwritten(ex),
+ 					start_pblk + cur - start);
+-			if (ret) {
+-				iput(inode);
+-				return 0;
+-			}
++			if (ret)
++				goto out;
+ 			/*
+ 			 * Mark the old blocks as free since they aren't used
+ 			 * anymore. We maintain an array of all the modified
+@@ -1745,10 +1752,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ 			ext4_ext_is_unwritten(ex), map.m_pblk);
+ 		ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
+ 					ext4_ext_is_unwritten(ex), map.m_pblk);
+-		if (ret) {
+-			iput(inode);
+-			return 0;
+-		}
++		if (ret)
++			goto out;
+ 		/*
+ 		 * We may have split the extent tree while toggling the state.
+ 		 * Try to shrink the extent tree now.
+@@ -1760,6 +1765,7 @@ next:
+ 	}
+ 	ext4_ext_replay_shrink_inode(inode, i_size_read(inode) >>
+ 					sb->s_blocksize_bits);
++out:
+ 	iput(inode);
+ 	return 0;
+ }
+@@ -1789,6 +1795,8 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 	}
+ 
+ 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
++	if (ret)
++		goto out;
+ 
+ 	jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",
+ 			inode->i_ino, le32_to_cpu(lrange.fc_lblk),
+@@ -1798,10 +1806,8 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 		map.m_len = remaining;
+ 
+ 		ret = ext4_map_blocks(NULL, inode, &map, 0);
+-		if (ret < 0) {
+-			iput(inode);
+-			return 0;
+-		}
++		if (ret < 0)
++			goto out;
+ 		if (ret > 0) {
+ 			remaining -= ret;
+ 			cur += ret;
+@@ -1813,18 +1819,17 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
+ 	}
+ 
+ 	down_write(&EXT4_I(inode)->i_data_sem);
+-	ret = ext4_ext_remove_space(inode, lrange.fc_lblk,
+-				lrange.fc_lblk + lrange.fc_len - 1);
++	ret = ext4_ext_remove_space(inode, le32_to_cpu(lrange.fc_lblk),
++				le32_to_cpu(lrange.fc_lblk) +
++				le32_to_cpu(lrange.fc_len) - 1);
+ 	up_write(&EXT4_I(inode)->i_data_sem);
+-	if (ret) {
+-		iput(inode);
+-		return 0;
+-	}
++	if (ret)
++		goto out;
+ 	ext4_ext_replay_shrink_inode(inode,
+ 		i_size_read(inode) >> sb->s_blocksize_bits);
+ 	ext4_mark_inode_dirty(NULL, inode);
++out:
+ 	iput(inode);
+-
+ 	return 0;
+ }
+ 
+@@ -1980,7 +1985,7 @@ static int ext4_fc_replay_scan(journal_t *journal,
+ 			ret = ext4_fc_record_regions(sb,
+ 				le32_to_cpu(ext.fc_ino),
+ 				le32_to_cpu(ex->ee_block), ext4_ext_pblock(ex),
+-				ext4_ext_get_actual_len(ex));
++				ext4_ext_get_actual_len(ex), 0);
+ 			if (ret < 0)
+ 				break;
+ 			ret = JBD2_FC_REPLAY_CONTINUE;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 39a1ab129fdc9..d091133a4b460 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1133,7 +1133,15 @@ static void ext4_restore_inline_data(handle_t *handle, struct inode *inode,
+ 				     struct ext4_iloc *iloc,
+ 				     void *buf, int inline_size)
+ {
+-	ext4_create_inline_data(handle, inode, inline_size);
++	int ret;
++
++	ret = ext4_create_inline_data(handle, inode, inline_size);
++	if (ret) {
++		ext4_msg(inode->i_sb, KERN_EMERG,
++			"error restoring inline_data for inode -- potential data loss! (inode %lu, error %d)",
++			inode->i_ino, ret);
++		return;
++	}
+ 	ext4_write_inline_data(inode, iloc, buf, 0, inline_size);
+ 	ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ }
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index ea764137462ef..c849fd845d9b8 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5753,7 +5753,8 @@ static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
+ 	struct super_block *sb = ar->inode->i_sb;
+ 	ext4_group_t group;
+ 	ext4_grpblk_t blkoff;
+-	int i = sb->s_blocksize;
++	ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);
++	ext4_grpblk_t i = 0;
+ 	ext4_fsblk_t goal, block;
+ 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ 
+@@ -5775,19 +5776,26 @@ static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
+ 		ext4_get_group_no_and_offset(sb,
+ 			max(ext4_group_first_block_no(sb, group), goal),
+ 			NULL, &blkoff);
+-		i = mb_find_next_zero_bit(bitmap_bh->b_data, sb->s_blocksize,
++		while (1) {
++			i = mb_find_next_zero_bit(bitmap_bh->b_data, max,
+ 						blkoff);
++			if (i >= max)
++				break;
++			if (ext4_fc_replay_check_excluded(sb,
++				ext4_group_first_block_no(sb, group) + i)) {
++				blkoff = i + 1;
++			} else
++				break;
++		}
+ 		brelse(bitmap_bh);
+-		if (i >= sb->s_blocksize)
+-			continue;
+-		if (ext4_fc_replay_check_excluded(sb,
+-			ext4_group_first_block_no(sb, group) + i))
+-			continue;
+-		break;
++		if (i < max)
++			break;
+ 	}
+ 
+-	if (group >= ext4_get_groups_count(sb) && i >= sb->s_blocksize)
++	if (group >= ext4_get_groups_count(sb) || i >= max) {
++		*errp = -ENOSPC;
+ 		return 0;
++	}
+ 
+ 	block = ext4_group_first_block_no(sb, group) + i;
+ 	ext4_mb_mark_bb(sb, block, 1, 1);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index b94b3bb2b8a6e..ac0ddde8beef2 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4112,8 +4112,10 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 			status = nfserr_clid_inuse;
+ 			if (client_has_state(old)
+ 					&& !same_creds(&unconf->cl_cred,
+-							&old->cl_cred))
++							&old->cl_cred)) {
++				old = NULL;
+ 				goto out;
++			}
+ 			status = mark_client_expired_locked(old);
+ 			if (status) {
+ 				old = NULL;
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index 3da95842b2075..02f362c661c80 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -262,7 +262,7 @@ struct fb_ops {
+ 
+ 	/* Draws a rectangle */
+ 	void (*fb_fillrect) (struct fb_info *info, const struct fb_fillrect *rect);
+-	/* Copy data from area to another. Obsolete. */
++	/* Copy data from area to another */
+ 	void (*fb_copyarea) (struct fb_info *info, const struct fb_copyarea *region);
+ 	/* Draws a image to the display */
+ 	void (*fb_imageblit) (struct fb_info *info, const struct fb_image *image);
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index c310648cc8f1a..05e7fd79f1acc 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -15,6 +15,8 @@
+ #include <linux/minmax.h>
+ #include <linux/mm.h>
+ #include <linux/mmu_notifier.h>
++#include <linux/ftrace.h>
++#include <linux/instrumentation.h>
+ #include <linux/preempt.h>
+ #include <linux/msi.h>
+ #include <linux/slab.h>
+@@ -362,8 +364,11 @@ struct kvm_vcpu {
+ 	int last_used_slot;
+ };
+ 
+-/* must be called with irqs disabled */
+-static __always_inline void guest_enter_irqoff(void)
++/*
++ * Start accounting time towards a guest.
++ * Must be called before entering guest context.
++ */
++static __always_inline void guest_timing_enter_irqoff(void)
+ {
+ 	/*
+ 	 * This is running in ioctl context so its safe to assume that it's the
+@@ -372,7 +377,18 @@ static __always_inline void guest_enter_irqoff(void)
+ 	instrumentation_begin();
+ 	vtime_account_guest_enter();
+ 	instrumentation_end();
++}
+ 
++/*
++ * Enter guest context and enter an RCU extended quiescent state.
++ *
++ * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is
++ * unsafe to use any code which may directly or indirectly use RCU, tracing
++ * (including IRQ flag tracing), or lockdep. All code in this period must be
++ * non-instrumentable.
++ */
++static __always_inline void guest_context_enter_irqoff(void)
++{
+ 	/*
+ 	 * KVM does not hold any references to rcu protected data when it
+ 	 * switches CPU into a guest mode. In fact switching to a guest mode
+@@ -388,16 +404,79 @@ static __always_inline void guest_enter_irqoff(void)
+ 	}
+ }
+ 
+-static __always_inline void guest_exit_irqoff(void)
++/*
++ * Deprecated. Architectures should move to guest_timing_enter_irqoff() and
++ * guest_state_enter_irqoff().
++ */
++static __always_inline void guest_enter_irqoff(void)
++{
++	guest_timing_enter_irqoff();
++	guest_context_enter_irqoff();
++}
++
++/**
++ * guest_state_enter_irqoff - Fixup state when entering a guest
++ *
++ * Entry to a guest will enable interrupts, but the kernel state is interrupts
++ * disabled when this is invoked. Also tell RCU about it.
++ *
++ * 1) Trace interrupts on state
++ * 2) Invoke context tracking if enabled to adjust RCU state
++ * 3) Tell lockdep that interrupts are enabled
++ *
++ * Invoked from architecture specific code before entering a guest.
++ * Must be called with interrupts disabled and the caller must be
++ * non-instrumentable.
++ * The caller has to invoke guest_timing_enter_irqoff() before this.
++ *
++ * Note: this is analogous to exit_to_user_mode().
++ */
++static __always_inline void guest_state_enter_irqoff(void)
++{
++	instrumentation_begin();
++	trace_hardirqs_on_prepare();
++	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
++	instrumentation_end();
++
++	guest_context_enter_irqoff();
++	lockdep_hardirqs_on(CALLER_ADDR0);
++}
++
++/*
++ * Exit guest context and exit an RCU extended quiescent state.
++ *
++ * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is
++ * unsafe to use any code which may directly or indirectly use RCU, tracing
++ * (including IRQ flag tracing), or lockdep. All code in this period must be
++ * non-instrumentable.
++ */
++static __always_inline void guest_context_exit_irqoff(void)
+ {
+ 	context_tracking_guest_exit();
++}
+ 
++/*
++ * Stop accounting time towards a guest.
++ * Must be called after exiting guest context.
++ */
++static __always_inline void guest_timing_exit_irqoff(void)
++{
+ 	instrumentation_begin();
+ 	/* Flush the guest cputime we spent on the guest */
+ 	vtime_account_guest_exit();
+ 	instrumentation_end();
+ }
+ 
++/*
++ * Deprecated. Architectures should move to guest_state_exit_irqoff() and
++ * guest_timing_exit_irqoff().
++ */
++static __always_inline void guest_exit_irqoff(void)
++{
++	guest_context_exit_irqoff();
++	guest_timing_exit_irqoff();
++}
++
+ static inline void guest_exit(void)
+ {
+ 	unsigned long flags;
+@@ -407,6 +486,33 @@ static inline void guest_exit(void)
+ 	local_irq_restore(flags);
+ }
+ 
++/**
++ * guest_state_exit_irqoff - Establish state when returning from guest mode
++ *
++ * Entry from a guest disables interrupts, but guest mode is traced as
++ * interrupts enabled. Also with NO_HZ_FULL RCU might be idle.
++ *
++ * 1) Tell lockdep that interrupts are disabled
++ * 2) Invoke context tracking if enabled to reactivate RCU
++ * 3) Trace interrupts off state
++ *
++ * Invoked from architecture specific code after exiting a guest.
++ * Must be invoked with interrupts disabled and the caller must be
++ * non-instrumentable.
++ * The caller has to invoke guest_timing_exit_irqoff() after this.
++ *
++ * Note: this is analogous to enter_from_user_mode().
++ */
++static __always_inline void guest_state_exit_irqoff(void)
++{
++	lockdep_hardirqs_off(CALLER_ADDR0);
++	guest_context_exit_irqoff();
++
++	instrumentation_begin();
++	trace_hardirqs_off_finish();
++	instrumentation_end();
++}
++
+ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
+ {
+ 	/*
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 2a8404b26083c..430b460221082 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -428,6 +428,7 @@ enum {
+ 	ATA_HORKAGE_MAX_TRIM_128M = (1 << 26),	/* Limit max trim size to 128M */
+ 	ATA_HORKAGE_NO_NCQ_ON_ATI = (1 << 27),	/* Disable NCQ on ATI chipset */
+ 	ATA_HORKAGE_NO_ID_DEV_LOG = (1 << 28),	/* Identify device log missing */
++	ATA_HORKAGE_NO_LOG_DIR	= (1 << 29),	/* Do not read log directory */
+ 
+ 	 /* DMA mask for user DMA control: User visible values; DO NOT
+ 	    renumber */
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index e24d2c992b112..d468efcf48f45 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -62,6 +62,7 @@ static inline unsigned long pte_index(unsigned long address)
+ {
+ 	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+ }
++#define pte_index pte_index
+ 
+ #ifndef pmd_index
+ static inline unsigned long pmd_index(unsigned long address)
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index 38a0c1d245708..85a10c251542e 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -336,7 +336,8 @@ static inline struct neighbour *neigh_create(struct neigh_table *tbl,
+ 	return __neigh_create(tbl, pkey, dev, true);
+ }
+ void neigh_destroy(struct neighbour *neigh);
+-int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb);
++int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb,
++		       const bool immediate_ok);
+ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new, u32 flags,
+ 		 u32 nlmsg_pid);
+ void __neigh_set_probe_once(struct neighbour *neigh);
+@@ -446,17 +447,24 @@ static inline struct neighbour * neigh_clone(struct neighbour *neigh)
+ 
+ #define neigh_hold(n)	refcount_inc(&(n)->refcnt)
+ 
+-static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
++static __always_inline int neigh_event_send_probe(struct neighbour *neigh,
++						  struct sk_buff *skb,
++						  const bool immediate_ok)
+ {
+ 	unsigned long now = jiffies;
+-	
++
+ 	if (READ_ONCE(neigh->used) != now)
+ 		WRITE_ONCE(neigh->used, now);
+-	if (!(neigh->nud_state&(NUD_CONNECTED|NUD_DELAY|NUD_PROBE)))
+-		return __neigh_event_send(neigh, skb);
++	if (!(neigh->nud_state & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE)))
++		return __neigh_event_send(neigh, skb, immediate_ok);
+ 	return 0;
+ }
+ 
++static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
++{
++	return neigh_event_send_probe(neigh, skb, true);
++}
++
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+ static inline int neigh_hh_bridge(struct hh_cache *hh, struct sk_buff *skb)
+ {
+diff --git a/include/uapi/sound/asound.h b/include/uapi/sound/asound.h
+index 5fbb79e30819a..c245ad2ca5d49 100644
+--- a/include/uapi/sound/asound.h
++++ b/include/uapi/sound/asound.h
+@@ -56,8 +56,10 @@
+  *                                                                          *
+  ****************************************************************************/
+ 
++#define AES_IEC958_STATUS_SIZE		24
++
+ struct snd_aes_iec958 {
+-	unsigned char status[24];	/* AES/IEC958 channel status bits */
++	unsigned char status[AES_IEC958_STATUS_SIZE]; /* AES/IEC958 channel status bits */
+ 	unsigned char subcode[147];	/* AES/IEC958 subcode bits */
+ 	unsigned char pad;		/* nothing */
+ 	unsigned char dig_subframe[4];	/* AES/IEC958 subframe bits */
+diff --git a/ipc/sem.c b/ipc/sem.c
+index 6693daf4fe112..0dbdb98fdf2d9 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -1964,6 +1964,7 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
+ 	 */
+ 	un = lookup_undo(ulp, semid);
+ 	if (un) {
++		spin_unlock(&ulp->lock);
+ 		kvfree(new);
+ 		goto success;
+ 	}
+@@ -1976,9 +1977,8 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
+ 	ipc_assert_locked_object(&sma->sem_perm);
+ 	list_add(&new->list_id, &sma->list_id);
+ 	un = new;
+-
+-success:
+ 	spin_unlock(&ulp->lock);
++success:
+ 	sem_unlock(sma, -1);
+ out:
+ 	return un;
+diff --git a/kernel/audit.c b/kernel/audit.c
+index eab7282668ab9..94ded5de91317 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -541,20 +541,22 @@ static void kauditd_printk_skb(struct sk_buff *skb)
+ /**
+  * kauditd_rehold_skb - Handle a audit record send failure in the hold queue
+  * @skb: audit record
++ * @error: error code (unused)
+  *
+  * Description:
+  * This should only be used by the kauditd_thread when it fails to flush the
+  * hold queue.
+  */
+-static void kauditd_rehold_skb(struct sk_buff *skb)
++static void kauditd_rehold_skb(struct sk_buff *skb, __always_unused int error)
+ {
+-	/* put the record back in the queue at the same place */
+-	skb_queue_head(&audit_hold_queue, skb);
++	/* put the record back in the queue */
++	skb_queue_tail(&audit_hold_queue, skb);
+ }
+ 
+ /**
+  * kauditd_hold_skb - Queue an audit record, waiting for auditd
+  * @skb: audit record
++ * @error: error code
+  *
+  * Description:
+  * Queue the audit record, waiting for an instance of auditd.  When this
+@@ -564,19 +566,31 @@ static void kauditd_rehold_skb(struct sk_buff *skb)
+  * and queue it, if we have room.  If we want to hold on to the record, but we
+  * don't have room, record a record lost message.
+  */
+-static void kauditd_hold_skb(struct sk_buff *skb)
++static void kauditd_hold_skb(struct sk_buff *skb, int error)
+ {
+ 	/* at this point it is uncertain if we will ever send this to auditd so
+ 	 * try to send the message via printk before we go any further */
+ 	kauditd_printk_skb(skb);
+ 
+ 	/* can we just silently drop the message? */
+-	if (!audit_default) {
+-		kfree_skb(skb);
+-		return;
++	if (!audit_default)
++		goto drop;
++
++	/* the hold queue is only for when the daemon goes away completely,
++	 * not -EAGAIN failures; if we are in a -EAGAIN state requeue the
++	 * record on the retry queue unless it's full, in which case drop it
++	 */
++	if (error == -EAGAIN) {
++		if (!audit_backlog_limit ||
++		    skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {
++			skb_queue_tail(&audit_retry_queue, skb);
++			return;
++		}
++		audit_log_lost("kauditd retry queue overflow");
++		goto drop;
+ 	}
+ 
+-	/* if we have room, queue the message */
++	/* if we have room in the hold queue, queue the message */
+ 	if (!audit_backlog_limit ||
+ 	    skb_queue_len(&audit_hold_queue) < audit_backlog_limit) {
+ 		skb_queue_tail(&audit_hold_queue, skb);
+@@ -585,24 +599,32 @@ static void kauditd_hold_skb(struct sk_buff *skb)
+ 
+ 	/* we have no other options - drop the message */
+ 	audit_log_lost("kauditd hold queue overflow");
++drop:
+ 	kfree_skb(skb);
+ }
+ 
+ /**
+  * kauditd_retry_skb - Queue an audit record, attempt to send again to auditd
+  * @skb: audit record
++ * @error: error code (unused)
+  *
+  * Description:
+  * Not as serious as kauditd_hold_skb() as we still have a connected auditd,
+  * but for some reason we are having problems sending it audit records so
+  * queue the given record and attempt to resend.
+  */
+-static void kauditd_retry_skb(struct sk_buff *skb)
++static void kauditd_retry_skb(struct sk_buff *skb, __always_unused int error)
+ {
+-	/* NOTE: because records should only live in the retry queue for a
+-	 * short period of time, before either being sent or moved to the hold
+-	 * queue, we don't currently enforce a limit on this queue */
+-	skb_queue_tail(&audit_retry_queue, skb);
++	if (!audit_backlog_limit ||
++	    skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {
++		skb_queue_tail(&audit_retry_queue, skb);
++		return;
++	}
++
++	/* we have to drop the record, send it via printk as a last effort */
++	kauditd_printk_skb(skb);
++	audit_log_lost("kauditd retry queue overflow");
++	kfree_skb(skb);
+ }
+ 
+ /**
+@@ -640,7 +662,7 @@ static void auditd_reset(const struct auditd_connection *ac)
+ 	/* flush the retry queue to the hold queue, but don't touch the main
+ 	 * queue since we need to process that normally for multicast */
+ 	while ((skb = skb_dequeue(&audit_retry_queue)))
+-		kauditd_hold_skb(skb);
++		kauditd_hold_skb(skb, -ECONNREFUSED);
+ }
+ 
+ /**
+@@ -714,16 +736,18 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ 			      struct sk_buff_head *queue,
+ 			      unsigned int retry_limit,
+ 			      void (*skb_hook)(struct sk_buff *skb),
+-			      void (*err_hook)(struct sk_buff *skb))
++			      void (*err_hook)(struct sk_buff *skb, int error))
+ {
+ 	int rc = 0;
+-	struct sk_buff *skb;
++	struct sk_buff *skb = NULL;
++	struct sk_buff *skb_tail;
+ 	unsigned int failed = 0;
+ 
+ 	/* NOTE: kauditd_thread takes care of all our locking, we just use
+ 	 *       the netlink info passed to us (e.g. sk and portid) */
+ 
+-	while ((skb = skb_dequeue(queue))) {
++	skb_tail = skb_peek_tail(queue);
++	while ((skb != skb_tail) && (skb = skb_dequeue(queue))) {
+ 		/* call the skb_hook for each skb we touch */
+ 		if (skb_hook)
+ 			(*skb_hook)(skb);
+@@ -731,7 +755,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ 		/* can we send to anyone via unicast? */
+ 		if (!sk) {
+ 			if (err_hook)
+-				(*err_hook)(skb);
++				(*err_hook)(skb, -ECONNREFUSED);
+ 			continue;
+ 		}
+ 
+@@ -745,7 +769,7 @@ retry:
+ 			    rc == -ECONNREFUSED || rc == -EPERM) {
+ 				sk = NULL;
+ 				if (err_hook)
+-					(*err_hook)(skb);
++					(*err_hook)(skb, rc);
+ 				if (rc == -EAGAIN)
+ 					rc = 0;
+ 				/* continue to drain the queue */
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index 9e0c10c6892ad..f1c51c45667d3 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -104,7 +104,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
+ 	}
+ 
+ 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
+-		  VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
++		  VM_MAP | VM_USERMAP, PAGE_KERNEL);
+ 	if (rb) {
+ 		kmemleak_not_leak(pages);
+ 		rb->pages = pages;
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index ff8f2f522eb55..d729cbd2445af 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1530,10 +1530,15 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ 	struct cpuset *sibling;
+ 	struct cgroup_subsys_state *pos_css;
+ 
++	percpu_rwsem_assert_held(&cpuset_rwsem);
++
+ 	/*
+ 	 * Check all its siblings and call update_cpumasks_hier()
+ 	 * if their use_parent_ecpus flag is set in order for them
+ 	 * to use the right effective_cpus value.
++	 *
++	 * The update_cpumasks_hier() function may sleep. So we have to
++	 * release the RCU read lock before calling it.
+ 	 */
+ 	rcu_read_lock();
+ 	cpuset_for_each_child(sibling, pos_css, parent) {
+@@ -1541,8 +1546,13 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ 			continue;
+ 		if (!sibling->use_parent_ecpus)
+ 			continue;
++		if (!css_tryget_online(&sibling->css))
++			continue;
+ 
++		rcu_read_unlock();
+ 		update_cpumasks_hier(sibling, tmp);
++		rcu_read_lock();
++		css_put(&sibling->css);
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 6ed890480c4aa..04e6e2dae60e4 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3234,6 +3234,15 @@ static int perf_event_modify_breakpoint(struct perf_event *bp,
+ 	return err;
+ }
+ 
++/*
++ * Copy event-type-independent attributes that may be modified.
++ */
++static void perf_event_modify_copy_attr(struct perf_event_attr *to,
++					const struct perf_event_attr *from)
++{
++	to->sig_data = from->sig_data;
++}
++
+ static int perf_event_modify_attr(struct perf_event *event,
+ 				  struct perf_event_attr *attr)
+ {
+@@ -3256,10 +3265,17 @@ static int perf_event_modify_attr(struct perf_event *event,
+ 	WARN_ON_ONCE(event->ctx->parent_ctx);
+ 
+ 	mutex_lock(&event->child_mutex);
++	/*
++	 * Event-type-independent attributes must be copied before event-type
++	 * modification, which will validate that final attributes match the
++	 * source attributes after all relevant attributes have been copied.
++	 */
++	perf_event_modify_copy_attr(&event->attr, attr);
+ 	err = func(event, attr);
+ 	if (err)
+ 		goto out;
+ 	list_for_each_entry(child, &event->child_list, child_list) {
++		perf_event_modify_copy_attr(&child->attr, attr);
+ 		err = func(child, attr);
+ 		if (err)
+ 			goto out;
+diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
+index 228e3954b90c1..983ec5974aaf8 100644
+--- a/mm/debug_vm_pgtable.c
++++ b/mm/debug_vm_pgtable.c
+@@ -171,6 +171,8 @@ static void __init pte_advanced_tests(struct pgtable_debug_args *args)
+ 	ptep_test_and_clear_young(args->vma, args->vaddr, args->ptep);
+ 	pte = ptep_get(args->ptep);
+ 	WARN_ON(pte_young(pte));
++
++	ptep_get_and_clear_full(args->mm, args->vaddr, args->ptep, 1);
+ }
+ 
+ static void __init pte_savedwrite_tests(struct pgtable_debug_args *args)
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index b57383c17cf60..adbe5aa011848 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1403,7 +1403,8 @@ static void kmemleak_scan(void)
+ {
+ 	unsigned long flags;
+ 	struct kmemleak_object *object;
+-	int i;
++	struct zone *zone;
++	int __maybe_unused i;
+ 	int new_leaks = 0;
+ 
+ 	jiffies_last_scan = jiffies;
+@@ -1443,9 +1444,9 @@ static void kmemleak_scan(void)
+ 	 * Struct page scanning for each node.
+ 	 */
+ 	get_online_mems();
+-	for_each_online_node(i) {
+-		unsigned long start_pfn = node_start_pfn(i);
+-		unsigned long end_pfn = node_end_pfn(i);
++	for_each_populated_zone(zone) {
++		unsigned long start_pfn = zone->zone_start_pfn;
++		unsigned long end_pfn = zone_end_pfn(zone);
+ 		unsigned long pfn;
+ 
+ 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+@@ -1454,8 +1455,8 @@ static void kmemleak_scan(void)
+ 			if (!page)
+ 				continue;
+ 
+-			/* only scan pages belonging to this node */
+-			if (page_to_nid(page) != i)
++			/* only scan pages belonging to this zone */
++			if (page_zone(page) != zone)
+ 				continue;
+ 			/* only scan if page is in use */
+ 			if (page_count(page) == 0)
+diff --git a/net/bridge/netfilter/nft_reject_bridge.c b/net/bridge/netfilter/nft_reject_bridge.c
+index eba0efe64d05a..fbf858ddec352 100644
+--- a/net/bridge/netfilter/nft_reject_bridge.c
++++ b/net/bridge/netfilter/nft_reject_bridge.c
+@@ -49,7 +49,7 @@ static void nft_reject_br_send_v4_tcp_reset(struct net *net,
+ {
+ 	struct sk_buff *nskb;
+ 
+-	nskb = nf_reject_skb_v4_tcp_reset(net, oldskb, dev, hook);
++	nskb = nf_reject_skb_v4_tcp_reset(net, oldskb, NULL, hook);
+ 	if (!nskb)
+ 		return;
+ 
+@@ -65,7 +65,7 @@ static void nft_reject_br_send_v4_unreach(struct net *net,
+ {
+ 	struct sk_buff *nskb;
+ 
+-	nskb = nf_reject_skb_v4_unreach(net, oldskb, dev, hook, code);
++	nskb = nf_reject_skb_v4_unreach(net, oldskb, NULL, hook, code);
+ 	if (!nskb)
+ 		return;
+ 
+@@ -81,7 +81,7 @@ static void nft_reject_br_send_v6_tcp_reset(struct net *net,
+ {
+ 	struct sk_buff *nskb;
+ 
+-	nskb = nf_reject_skb_v6_tcp_reset(net, oldskb, dev, hook);
++	nskb = nf_reject_skb_v6_tcp_reset(net, oldskb, NULL, hook);
+ 	if (!nskb)
+ 		return;
+ 
+@@ -98,7 +98,7 @@ static void nft_reject_br_send_v6_unreach(struct net *net,
+ {
+ 	struct sk_buff *nskb;
+ 
+-	nskb = nf_reject_skb_v6_unreach(net, oldskb, dev, hook, code);
++	nskb = nf_reject_skb_v6_unreach(net, oldskb, NULL, hook, code);
+ 	if (!nskb)
+ 		return;
+ 
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index dda12fbd177ba..63a1142d85685 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1133,7 +1133,8 @@ out:
+ 	neigh_release(neigh);
+ }
+ 
+-int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
++int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb,
++		       const bool immediate_ok)
+ {
+ 	int rc;
+ 	bool immediate_probe = false;
+@@ -1154,12 +1155,17 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
+ 			atomic_set(&neigh->probes,
+ 				   NEIGH_VAR(neigh->parms, UCAST_PROBES));
+ 			neigh_del_timer(neigh);
+-			neigh->nud_state     = NUD_INCOMPLETE;
++			neigh->nud_state = NUD_INCOMPLETE;
+ 			neigh->updated = now;
+-			next = now + max(NEIGH_VAR(neigh->parms, RETRANS_TIME),
+-					 HZ/100);
++			if (!immediate_ok) {
++				next = now + 1;
++			} else {
++				immediate_probe = true;
++				next = now + max(NEIGH_VAR(neigh->parms,
++							   RETRANS_TIME),
++						 HZ / 100);
++			}
+ 			neigh_add_timer(neigh, next);
+-			immediate_probe = true;
+ 		} else {
+ 			neigh->nud_state = NUD_FAILED;
+ 			neigh->updated = jiffies;
+@@ -1571,7 +1577,7 @@ static void neigh_managed_work(struct work_struct *work)
+ 
+ 	write_lock_bh(&tbl->lock);
+ 	list_for_each_entry(neigh, &tbl->managed_list, managed_list)
+-		neigh_event_send(neigh, NULL);
++		neigh_event_send_probe(neigh, NULL, false);
+ 	queue_delayed_work(system_power_efficient_wq, &tbl->managed_work,
+ 			   NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME));
+ 	write_unlock_bh(&tbl->lock);
+diff --git a/net/ieee802154/nl802154.c b/net/ieee802154/nl802154.c
+index 277124f206e06..e0b072aecf0f3 100644
+--- a/net/ieee802154/nl802154.c
++++ b/net/ieee802154/nl802154.c
+@@ -1441,7 +1441,7 @@ static int nl802154_send_key(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+@@ -1634,7 +1634,7 @@ static int nl802154_send_device(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+@@ -1812,7 +1812,7 @@ static int nl802154_send_devkey(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+@@ -1988,7 +1988,7 @@ static int nl802154_send_seclevel(struct sk_buff *msg, u32 cmd, u32 portid,
+ 
+ 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
+ 	if (!hdr)
+-		return -1;
++		return -ENOBUFS;
+ 
+ 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
+ 		goto nla_put_failure;
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 65764c8171b37..5d305fafd0e99 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -459,6 +459,18 @@ static unsigned int fill_remote_addresses_vec(struct mptcp_sock *msk, bool fullm
+ 	return i;
+ }
+ 
++static struct mptcp_pm_addr_entry *
++__lookup_addr(struct pm_nl_pernet *pernet, struct mptcp_addr_info *info)
++{
++	struct mptcp_pm_addr_entry *entry;
++
++	list_for_each_entry(entry, &pernet->local_addr_list, list) {
++		if (addresses_equal(&entry->addr, info, true))
++			return entry;
++	}
++	return NULL;
++}
++
+ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ {
+ 	struct sock *sk = (struct sock *)msk;
+@@ -1725,17 +1737,21 @@ static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
+ 	if (addr.flags & MPTCP_PM_ADDR_FLAG_BACKUP)
+ 		bkup = 1;
+ 
+-	list_for_each_entry(entry, &pernet->local_addr_list, list) {
+-		if (addresses_equal(&entry->addr, &addr.addr, true)) {
+-			mptcp_nl_addr_backup(net, &entry->addr, bkup);
+-
+-			if (bkup)
+-				entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
+-			else
+-				entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP;
+-		}
++	spin_lock_bh(&pernet->lock);
++	entry = __lookup_addr(pernet, &addr.addr);
++	if (!entry) {
++		spin_unlock_bh(&pernet->lock);
++		return -EINVAL;
+ 	}
+ 
++	if (bkup)
++		entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
++	else
++		entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP;
++	addr = *entry;
++	spin_unlock_bh(&pernet->lock);
++
++	mptcp_nl_addr_backup(net, &addr.addr, bkup);
+ 	return 0;
+ }
+ 
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 85e077a69c67d..0f5c305c55c5a 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -566,17 +566,115 @@ static void smc_stat_fallback(struct smc_sock *smc)
+ 	mutex_unlock(&net->smc.mutex_fback_rsn);
+ }
+ 
++/* must be called under rcu read lock */
++static void smc_fback_wakeup_waitqueue(struct smc_sock *smc, void *key)
++{
++	struct socket_wq *wq;
++	__poll_t flags;
++
++	wq = rcu_dereference(smc->sk.sk_wq);
++	if (!skwq_has_sleeper(wq))
++		return;
++
++	/* wake up smc sk->sk_wq */
++	if (!key) {
++		/* sk_state_change */
++		wake_up_interruptible_all(&wq->wait);
++	} else {
++		flags = key_to_poll(key);
++		if (flags & (EPOLLIN | EPOLLOUT))
++			/* sk_data_ready or sk_write_space */
++			wake_up_interruptible_sync_poll(&wq->wait, flags);
++		else if (flags & EPOLLERR)
++			/* sk_error_report */
++			wake_up_interruptible_poll(&wq->wait, flags);
++	}
++}
++
++static int smc_fback_mark_woken(wait_queue_entry_t *wait,
++				unsigned int mode, int sync, void *key)
++{
++	struct smc_mark_woken *mark =
++		container_of(wait, struct smc_mark_woken, wait_entry);
++
++	mark->woken = true;
++	mark->key = key;
++	return 0;
++}
++
++static void smc_fback_forward_wakeup(struct smc_sock *smc, struct sock *clcsk,
++				     void (*clcsock_callback)(struct sock *sk))
++{
++	struct smc_mark_woken mark = { .woken = false };
++	struct socket_wq *wq;
++
++	init_waitqueue_func_entry(&mark.wait_entry,
++				  smc_fback_mark_woken);
++	rcu_read_lock();
++	wq = rcu_dereference(clcsk->sk_wq);
++	if (!wq)
++		goto out;
++	add_wait_queue(sk_sleep(clcsk), &mark.wait_entry);
++	clcsock_callback(clcsk);
++	remove_wait_queue(sk_sleep(clcsk), &mark.wait_entry);
++
++	if (mark.woken)
++		smc_fback_wakeup_waitqueue(smc, mark.key);
++out:
++	rcu_read_unlock();
++}
++
++static void smc_fback_state_change(struct sock *clcsk)
++{
++	struct smc_sock *smc =
++		smc_clcsock_user_data(clcsk);
++
++	if (!smc)
++		return;
++	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_state_change);
++}
++
++static void smc_fback_data_ready(struct sock *clcsk)
++{
++	struct smc_sock *smc =
++		smc_clcsock_user_data(clcsk);
++
++	if (!smc)
++		return;
++	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_data_ready);
++}
++
++static void smc_fback_write_space(struct sock *clcsk)
++{
++	struct smc_sock *smc =
++		smc_clcsock_user_data(clcsk);
++
++	if (!smc)
++		return;
++	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_write_space);
++}
++
++static void smc_fback_error_report(struct sock *clcsk)
++{
++	struct smc_sock *smc =
++		smc_clcsock_user_data(clcsk);
++
++	if (!smc)
++		return;
++	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_error_report);
++}
++
+ static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
+ {
+-	wait_queue_head_t *smc_wait = sk_sleep(&smc->sk);
+-	wait_queue_head_t *clc_wait;
+-	unsigned long flags;
++	struct sock *clcsk;
+ 
+ 	mutex_lock(&smc->clcsock_release_lock);
+ 	if (!smc->clcsock) {
+ 		mutex_unlock(&smc->clcsock_release_lock);
+ 		return -EBADF;
+ 	}
++	clcsk = smc->clcsock->sk;
++
+ 	smc->use_fallback = true;
+ 	smc->fallback_rsn = reason_code;
+ 	smc_stat_fallback(smc);
+@@ -587,16 +685,22 @@ static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
+ 		smc->clcsock->wq.fasync_list =
+ 			smc->sk.sk_socket->wq.fasync_list;
+ 
+-		/* There may be some entries remaining in
+-		 * smc socket->wq, which should be removed
+-		 * to clcsocket->wq during the fallback.
++		/* There might be some wait entries remaining
++		 * in smc sk->sk_wq and they should be woken up
++		 * as clcsock's wait queue is woken up.
+ 		 */
+-		clc_wait = sk_sleep(smc->clcsock->sk);
+-		spin_lock_irqsave(&smc_wait->lock, flags);
+-		spin_lock_nested(&clc_wait->lock, SINGLE_DEPTH_NESTING);
+-		list_splice_init(&smc_wait->head, &clc_wait->head);
+-		spin_unlock(&clc_wait->lock);
+-		spin_unlock_irqrestore(&smc_wait->lock, flags);
++		smc->clcsk_state_change = clcsk->sk_state_change;
++		smc->clcsk_data_ready = clcsk->sk_data_ready;
++		smc->clcsk_write_space = clcsk->sk_write_space;
++		smc->clcsk_error_report = clcsk->sk_error_report;
++
++		clcsk->sk_state_change = smc_fback_state_change;
++		clcsk->sk_data_ready = smc_fback_data_ready;
++		clcsk->sk_write_space = smc_fback_write_space;
++		clcsk->sk_error_report = smc_fback_error_report;
++
++		smc->clcsock->sk->sk_user_data =
++			(void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY);
+ 	}
+ 	mutex_unlock(&smc->clcsock_release_lock);
+ 	return 0;
+@@ -2111,10 +2215,9 @@ out:
+ 
+ static void smc_clcsock_data_ready(struct sock *listen_clcsock)
+ {
+-	struct smc_sock *lsmc;
++	struct smc_sock *lsmc =
++		smc_clcsock_user_data(listen_clcsock);
+ 
+-	lsmc = (struct smc_sock *)
+-	       ((uintptr_t)listen_clcsock->sk_user_data & ~SK_USER_DATA_NOCOPY);
+ 	if (!lsmc)
+ 		return;
+ 	lsmc->clcsk_data_ready(listen_clcsock);
+diff --git a/net/smc/smc.h b/net/smc/smc.h
+index 1a4fc1c6c4ab6..b2e1d9e03dcf6 100644
+--- a/net/smc/smc.h
++++ b/net/smc/smc.h
+@@ -139,6 +139,12 @@ enum smc_urg_state {
+ 	SMC_URG_READ	= 3,			/* data was already read */
+ };
+ 
++struct smc_mark_woken {
++	bool woken;
++	void *key;
++	wait_queue_entry_t wait_entry;
++};
++
+ struct smc_connection {
+ 	struct rb_node		alert_node;
+ 	struct smc_link_group	*lgr;		/* link group of connection */
+@@ -227,8 +233,14 @@ struct smc_connection {
+ struct smc_sock {				/* smc sock container */
+ 	struct sock		sk;
+ 	struct socket		*clcsock;	/* internal tcp socket */
++	void			(*clcsk_state_change)(struct sock *sk);
++						/* original stat_change fct. */
+ 	void			(*clcsk_data_ready)(struct sock *sk);
+-						/* original data_ready fct. **/
++						/* original data_ready fct. */
++	void			(*clcsk_write_space)(struct sock *sk);
++						/* original write_space fct. */
++	void			(*clcsk_error_report)(struct sock *sk);
++						/* original error_report fct. */
+ 	struct smc_connection	conn;		/* smc connection */
+ 	struct smc_sock		*listen_smc;	/* listen parent */
+ 	struct work_struct	connect_work;	/* handle non-blocking connect*/
+@@ -263,6 +275,12 @@ static inline struct smc_sock *smc_sk(const struct sock *sk)
+ 	return (struct smc_sock *)sk;
+ }
+ 
++static inline struct smc_sock *smc_clcsock_user_data(struct sock *clcsk)
++{
++	return (struct smc_sock *)
++	       ((uintptr_t)clcsk->sk_user_data & ~SK_USER_DATA_NOCOPY);
++}
++
+ extern struct workqueue_struct	*smc_hs_wq;	/* wq for handshake work */
+ extern struct workqueue_struct	*smc_close_wq;	/* wq for close work */
+ 
+diff --git a/security/selinux/ss/conditional.c b/security/selinux/ss/conditional.c
+index 2ec6e5cd25d9b..feb206f3acb4a 100644
+--- a/security/selinux/ss/conditional.c
++++ b/security/selinux/ss/conditional.c
+@@ -152,6 +152,8 @@ static void cond_list_destroy(struct policydb *p)
+ 	for (i = 0; i < p->cond_list_len; i++)
+ 		cond_node_destroy(&p->cond_list[i]);
+ 	kfree(p->cond_list);
++	p->cond_list = NULL;
++	p->cond_list_len = 0;
+ }
+ 
+ void cond_policydb_destroy(struct policydb *p)
+@@ -441,7 +443,6 @@ int cond_read_list(struct policydb *p, void *fp)
+ 	return 0;
+ err:
+ 	cond_list_destroy(p);
+-	p->cond_list = NULL;
+ 	return rc;
+ }
+ 
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 4a854475a0e60..500d0d474d27b 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -985,7 +985,7 @@ void snd_hda_pick_fixup(struct hda_codec *codec,
+ 	int id = HDA_FIXUP_ID_NOT_SET;
+ 	const char *name = NULL;
+ 	const char *type = NULL;
+-	int vendor, device;
++	unsigned int vendor, device;
+ 
+ 	if (codec->fixup_id != HDA_FIXUP_ID_NOT_SET)
+ 		return;
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 7016b48227bf2..f552785d301e0 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3000,6 +3000,10 @@ void snd_hda_codec_shutdown(struct hda_codec *codec)
+ {
+ 	struct hda_pcm *cpcm;
+ 
++	/* Skip the shutdown if codec is not registered */
++	if (!codec->registered)
++		return;
++
+ 	list_for_each_entry(cpcm, &codec->pcm_list_head, list)
+ 		snd_pcm_suspend_all(cpcm->pcm);
+ 
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 3bf5e34107038..fc114e5224806 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -91,6 +91,12 @@ static void snd_hda_gen_spec_free(struct hda_gen_spec *spec)
+ 	free_kctls(spec);
+ 	snd_array_free(&spec->paths);
+ 	snd_array_free(&spec->loopback_list);
++#ifdef CONFIG_SND_HDA_GENERIC_LEDS
++	if (spec->led_cdevs[LED_AUDIO_MUTE])
++		led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MUTE]);
++	if (spec->led_cdevs[LED_AUDIO_MICMUTE])
++		led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MICMUTE]);
++#endif
+ }
+ 
+ /*
+@@ -3922,7 +3928,10 @@ static int create_mute_led_cdev(struct hda_codec *codec,
+ 						enum led_brightness),
+ 				bool micmute)
+ {
++	struct hda_gen_spec *spec = codec->spec;
+ 	struct led_classdev *cdev;
++	int idx = micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE;
++	int err;
+ 
+ 	cdev = devm_kzalloc(&codec->core.dev, sizeof(*cdev), GFP_KERNEL);
+ 	if (!cdev)
+@@ -3932,10 +3941,14 @@ static int create_mute_led_cdev(struct hda_codec *codec,
+ 	cdev->max_brightness = 1;
+ 	cdev->default_trigger = micmute ? "audio-micmute" : "audio-mute";
+ 	cdev->brightness_set_blocking = callback;
+-	cdev->brightness = ledtrig_audio_get(micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE);
++	cdev->brightness = ledtrig_audio_get(idx);
+ 	cdev->flags = LED_CORE_SUSPENDRESUME;
+ 
+-	return devm_led_classdev_register(&codec->core.dev, cdev);
++	err = led_classdev_register(&codec->core.dev, cdev);
++	if (err < 0)
++		return err;
++	spec->led_cdevs[idx] = cdev;
++	return 0;
+ }
+ 
+ /**
+diff --git a/sound/pci/hda/hda_generic.h b/sound/pci/hda/hda_generic.h
+index c43bd0f0338ea..362ddcaea15b3 100644
+--- a/sound/pci/hda/hda_generic.h
++++ b/sound/pci/hda/hda_generic.h
+@@ -294,6 +294,9 @@ struct hda_gen_spec {
+ 				   struct hda_jack_callback *cb);
+ 	void (*mic_autoswitch_hook)(struct hda_codec *codec,
+ 				    struct hda_jack_callback *cb);
++
++	/* leds */
++	struct led_classdev *led_cdevs[NUM_AUDIO_LEDS];
+ };
+ 
+ /* values for add_stereo_mix_input flag */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index fa80a79e9f966..18f04137f61cf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -97,6 +97,7 @@ struct alc_spec {
+ 	unsigned int gpio_mic_led_mask;
+ 	struct alc_coef_led mute_led_coef;
+ 	struct alc_coef_led mic_led_coef;
++	struct mutex coef_mutex;
+ 
+ 	hda_nid_t headset_mic_pin;
+ 	hda_nid_t headphone_mic_pin;
+@@ -132,8 +133,8 @@ struct alc_spec {
+  * COEF access helper functions
+  */
+ 
+-static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+-			       unsigned int coef_idx)
++static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				 unsigned int coef_idx)
+ {
+ 	unsigned int val;
+ 
+@@ -142,28 +143,61 @@ static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 	return val;
+ }
+ 
++static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++			       unsigned int coef_idx)
++{
++	struct alc_spec *spec = codec->spec;
++	unsigned int val;
++
++	mutex_lock(&spec->coef_mutex);
++	val = __alc_read_coefex_idx(codec, nid, coef_idx);
++	mutex_unlock(&spec->coef_mutex);
++	return val;
++}
++
+ #define alc_read_coef_idx(codec, coef_idx) \
+ 	alc_read_coefex_idx(codec, 0x20, coef_idx)
+ 
+-static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+-				 unsigned int coef_idx, unsigned int coef_val)
++static void __alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				   unsigned int coef_idx, unsigned int coef_val)
+ {
+ 	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_COEF_INDEX, coef_idx);
+ 	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_PROC_COEF, coef_val);
+ }
+ 
++static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				 unsigned int coef_idx, unsigned int coef_val)
++{
++	struct alc_spec *spec = codec->spec;
++
++	mutex_lock(&spec->coef_mutex);
++	__alc_write_coefex_idx(codec, nid, coef_idx, coef_val);
++	mutex_unlock(&spec->coef_mutex);
++}
++
+ #define alc_write_coef_idx(codec, coef_idx, coef_val) \
+ 	alc_write_coefex_idx(codec, 0x20, coef_idx, coef_val)
+ 
++static void __alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
++				    unsigned int coef_idx, unsigned int mask,
++				    unsigned int bits_set)
++{
++	unsigned int val = __alc_read_coefex_idx(codec, nid, coef_idx);
++
++	if (val != -1)
++		__alc_write_coefex_idx(codec, nid, coef_idx,
++				       (val & ~mask) | bits_set);
++}
++
+ static void alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				  unsigned int coef_idx, unsigned int mask,
+ 				  unsigned int bits_set)
+ {
+-	unsigned int val = alc_read_coefex_idx(codec, nid, coef_idx);
++	struct alc_spec *spec = codec->spec;
+ 
+-	if (val != -1)
+-		alc_write_coefex_idx(codec, nid, coef_idx,
+-				     (val & ~mask) | bits_set);
++	mutex_lock(&spec->coef_mutex);
++	__alc_update_coefex_idx(codec, nid, coef_idx, mask, bits_set);
++	mutex_unlock(&spec->coef_mutex);
+ }
+ 
+ #define alc_update_coef_idx(codec, coef_idx, mask, bits_set)	\
+@@ -196,13 +230,17 @@ struct coef_fw {
+ static void alc_process_coef_fw(struct hda_codec *codec,
+ 				const struct coef_fw *fw)
+ {
++	struct alc_spec *spec = codec->spec;
++
++	mutex_lock(&spec->coef_mutex);
+ 	for (; fw->nid; fw++) {
+ 		if (fw->mask == (unsigned short)-1)
+-			alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
++			__alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
+ 		else
+-			alc_update_coefex_idx(codec, fw->nid, fw->idx,
+-					      fw->mask, fw->val);
++			__alc_update_coefex_idx(codec, fw->nid, fw->idx,
++						fw->mask, fw->val);
+ 	}
++	mutex_unlock(&spec->coef_mutex);
+ }
+ 
+ /*
+@@ -1148,6 +1186,7 @@ static int alc_alloc_spec(struct hda_codec *codec, hda_nid_t mixer_nid)
+ 	codec->spdif_status_reset = 1;
+ 	codec->forced_resume = 1;
+ 	codec->patch_ops = alc_patch_ops;
++	mutex_init(&spec->coef_mutex);
+ 
+ 	err = alc_codec_rename_from_preset(codec);
+ 	if (err < 0) {
+@@ -2120,6 +2159,7 @@ static void alc1220_fixup_gb_x570(struct hda_codec *codec,
+ {
+ 	static const hda_nid_t conn1[] = { 0x0c };
+ 	static const struct coef_fw gb_x570_coefs[] = {
++		WRITE_COEF(0x07, 0x03c0),
+ 		WRITE_COEF(0x1a, 0x01c1),
+ 		WRITE_COEF(0x1b, 0x0202),
+ 		WRITE_COEF(0x43, 0x3005),
+@@ -2546,7 +2586,8 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_GB_X570),
+-	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_GB_X570),
++	SND_PCI_QUIRK(0x1458, 0xa0d5, "Gigabyte X570S Aorus Master", ALC1220_FIXUP_GB_X570),
+ 	SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1229, "MSI-GP73", ALC1220_FIXUP_CLEVO_P950),
+@@ -2621,6 +2662,7 @@ static const struct hda_model_fixup alc882_fixup_models[] = {
+ 	{.id = ALC882_FIXUP_NO_PRIMARY_HP, .name = "no-primary-hp"},
+ 	{.id = ALC887_FIXUP_ASUS_BASS, .name = "asus-bass"},
+ 	{.id = ALC1220_FIXUP_GB_DUAL_CODECS, .name = "dual-codecs"},
++	{.id = ALC1220_FIXUP_GB_X570, .name = "gb-x570"},
+ 	{.id = ALC1220_FIXUP_CLEVO_P950, .name = "clevo-p950"},
+ 	{}
+ };
+@@ -8815,6 +8857,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+diff --git a/sound/soc/codecs/cpcap.c b/sound/soc/codecs/cpcap.c
+index 598e09024e239..ffdf8b615efaa 100644
+--- a/sound/soc/codecs/cpcap.c
++++ b/sound/soc/codecs/cpcap.c
+@@ -1667,6 +1667,8 @@ static int cpcap_codec_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *codec_node =
+ 		of_get_child_by_name(pdev->dev.parent->of_node, "audio-codec");
++	if (!codec_node)
++		return -ENODEV;
+ 
+ 	pdev->dev.of_node = codec_node;
+ 
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index b61f980cabdc0..b07607a9ecea4 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -277,7 +277,7 @@ struct hdmi_codec_priv {
+ 	bool busy;
+ 	struct snd_soc_jack *jack;
+ 	unsigned int jack_status;
+-	u8 iec_status[5];
++	u8 iec_status[AES_IEC958_STATUS_SIZE];
+ };
+ 
+ static const struct snd_soc_dapm_widget hdmi_widgets[] = {
+diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c
+index aec5127260fd4..6ffe88345de5f 100644
+--- a/sound/soc/codecs/lpass-rx-macro.c
++++ b/sound/soc/codecs/lpass-rx-macro.c
+@@ -2688,8 +2688,8 @@ static uint32_t get_iir_band_coeff(struct snd_soc_component *component,
+ 	int reg, b2_reg;
+ 
+ 	/* Address does not automatically update if reading */
+-	reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 16 * iir_idx;
+-	b2_reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 16 * iir_idx;
++	reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 0x80 * iir_idx;
++	b2_reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 0x80 * iir_idx;
+ 
+ 	snd_soc_component_write(component, reg,
+ 				((band_idx * BAND_MAX + coeff_idx) *
+@@ -2718,7 +2718,7 @@ static uint32_t get_iir_band_coeff(struct snd_soc_component *component,
+ static void set_iir_band_coeff(struct snd_soc_component *component,
+ 			       int iir_idx, int band_idx, uint32_t value)
+ {
+-	int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 16 * iir_idx;
++	int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B2_CTL + 0x80 * iir_idx;
+ 
+ 	snd_soc_component_write(component, reg, (value & 0xFF));
+ 	snd_soc_component_write(component, reg, (value >> 8) & 0xFF);
+@@ -2739,7 +2739,7 @@ static int rx_macro_put_iir_band_audio_mixer(
+ 	int iir_idx = ctl->iir_idx;
+ 	int band_idx = ctl->band_idx;
+ 	u32 coeff[BAND_MAX];
+-	int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 16 * iir_idx;
++	int reg = CDC_RX_SIDETONE_IIR0_IIR_COEF_B1_CTL + 0x80 * iir_idx;
+ 
+ 	memcpy(&coeff[0], ucontrol->value.bytes.data, params->max);
+ 
+diff --git a/sound/soc/codecs/max9759.c b/sound/soc/codecs/max9759.c
+index 00e9d4fd1651f..0c261335c8a16 100644
+--- a/sound/soc/codecs/max9759.c
++++ b/sound/soc/codecs/max9759.c
+@@ -64,7 +64,8 @@ static int speaker_gain_control_put(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *c = snd_soc_kcontrol_component(kcontrol);
+ 	struct max9759 *priv = snd_soc_component_get_drvdata(c);
+ 
+-	if (ucontrol->value.integer.value[0] > 3)
++	if (ucontrol->value.integer.value[0] < 0 ||
++	    ucontrol->value.integer.value[0] > 3)
+ 		return -EINVAL;
+ 
+ 	priv->gain = ucontrol->value.integer.value[0];
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index 20e0f90ea4986..20fc0f3766ded 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -59,18 +59,12 @@ static void rt5682_jd_check_handler(struct work_struct *work)
+ 	struct rt5682_priv *rt5682 = container_of(work, struct rt5682_priv,
+ 		jd_check_work.work);
+ 
+-	if (snd_soc_component_read(rt5682->component, RT5682_AJD1_CTRL)
+-		& RT5682_JDH_RS_MASK) {
++	if (snd_soc_component_read(rt5682->component, RT5682_AJD1_CTRL) & RT5682_JDH_RS_MASK)
+ 		/* jack out */
+-		rt5682->jack_type = rt5682_headset_detect(rt5682->component, 0);
+-
+-		snd_soc_jack_report(rt5682->hs_jack, rt5682->jack_type,
+-			SND_JACK_HEADSET |
+-			SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+-			SND_JACK_BTN_2 | SND_JACK_BTN_3);
+-	} else {
++		mod_delayed_work(system_power_efficient_wq,
++				 &rt5682->jack_detect_work, 0);
++	else
+ 		schedule_delayed_work(&rt5682->jd_check_work, 500);
+-	}
+ }
+ 
+ static irqreturn_t rt5682_irq(int irq, void *data)
+@@ -198,7 +192,6 @@ static int rt5682_i2c_probe(struct i2c_client *i2c,
+ 	}
+ 
+ 	mutex_init(&rt5682->calibrate_mutex);
+-	mutex_init(&rt5682->jdet_mutex);
+ 	rt5682_calibrate(rt5682);
+ 
+ 	rt5682_apply_patch_list(rt5682, &i2c->dev);
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index b34a8542077dc..7e6e2f26accd0 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -922,15 +922,13 @@ static void rt5682_enable_push_button_irq(struct snd_soc_component *component,
+  *
+  * Returns detect status.
+  */
+-int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
++static int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
+ {
+ 	struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component);
+ 	struct snd_soc_dapm_context *dapm = &component->dapm;
+ 	unsigned int val, count;
+ 
+ 	if (jack_insert) {
+-		snd_soc_dapm_mutex_lock(dapm);
+-
+ 		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1,
+ 			RT5682_PWR_VREF2 | RT5682_PWR_MB,
+ 			RT5682_PWR_VREF2 | RT5682_PWR_MB);
+@@ -981,8 +979,6 @@ int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
+ 		snd_soc_component_update_bits(component, RT5682_MICBIAS_2,
+ 			RT5682_PWR_CLK25M_MASK | RT5682_PWR_CLK1M_MASK,
+ 			RT5682_PWR_CLK25M_PU | RT5682_PWR_CLK1M_PU);
+-
+-		snd_soc_dapm_mutex_unlock(dapm);
+ 	} else {
+ 		rt5682_enable_push_button_irq(component, false);
+ 		snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+@@ -1011,7 +1007,6 @@ int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
+ 	dev_dbg(component->dev, "jack_type = %d\n", rt5682->jack_type);
+ 	return rt5682->jack_type;
+ }
+-EXPORT_SYMBOL_GPL(rt5682_headset_detect);
+ 
+ static int rt5682_set_jack_detect(struct snd_soc_component *component,
+ 		struct snd_soc_jack *hs_jack, void *data)
+@@ -1094,6 +1089,7 @@ void rt5682_jack_detect_handler(struct work_struct *work)
+ {
+ 	struct rt5682_priv *rt5682 =
+ 		container_of(work, struct rt5682_priv, jack_detect_work.work);
++	struct snd_soc_dapm_context *dapm;
+ 	int val, btn_type;
+ 
+ 	while (!rt5682->component)
+@@ -1102,7 +1098,9 @@ void rt5682_jack_detect_handler(struct work_struct *work)
+ 	while (!rt5682->component->card->instantiated)
+ 		usleep_range(10000, 15000);
+ 
+-	mutex_lock(&rt5682->jdet_mutex);
++	dapm = snd_soc_component_get_dapm(rt5682->component);
++
++	snd_soc_dapm_mutex_lock(dapm);
+ 	mutex_lock(&rt5682->calibrate_mutex);
+ 
+ 	val = snd_soc_component_read(rt5682->component, RT5682_AJD1_CTRL)
+@@ -1162,6 +1160,9 @@ void rt5682_jack_detect_handler(struct work_struct *work)
+ 		rt5682->irq_work_delay_time = 50;
+ 	}
+ 
++	mutex_unlock(&rt5682->calibrate_mutex);
++	snd_soc_dapm_mutex_unlock(dapm);
++
+ 	snd_soc_jack_report(rt5682->hs_jack, rt5682->jack_type,
+ 		SND_JACK_HEADSET |
+ 		SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+@@ -1174,9 +1175,6 @@ void rt5682_jack_detect_handler(struct work_struct *work)
+ 		else
+ 			cancel_delayed_work_sync(&rt5682->jd_check_work);
+ 	}
+-
+-	mutex_unlock(&rt5682->calibrate_mutex);
+-	mutex_unlock(&rt5682->jdet_mutex);
+ }
+ EXPORT_SYMBOL_GPL(rt5682_jack_detect_handler);
+ 
+@@ -1526,7 +1524,6 @@ static int rt5682_hp_event(struct snd_soc_dapm_widget *w,
+ {
+ 	struct snd_soc_component *component =
+ 		snd_soc_dapm_to_component(w->dapm);
+-	struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component);
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_PRE_PMU:
+@@ -1538,17 +1535,12 @@ static int rt5682_hp_event(struct snd_soc_dapm_widget *w,
+ 			RT5682_DEPOP_1, 0x60, 0x60);
+ 		snd_soc_component_update_bits(component,
+ 			RT5682_DAC_ADC_DIG_VOL1, 0x00c0, 0x0080);
+-
+-		mutex_lock(&rt5682->jdet_mutex);
+-
+ 		snd_soc_component_update_bits(component, RT5682_HP_CTRL_2,
+ 			RT5682_HP_C2_DAC_L_EN | RT5682_HP_C2_DAC_R_EN,
+ 			RT5682_HP_C2_DAC_L_EN | RT5682_HP_C2_DAC_R_EN);
+ 		usleep_range(5000, 10000);
+ 		snd_soc_component_update_bits(component, RT5682_CHARGE_PUMP_1,
+ 			RT5682_CP_SW_SIZE_MASK, RT5682_CP_SW_SIZE_L);
+-
+-		mutex_unlock(&rt5682->jdet_mutex);
+ 		break;
+ 
+ 	case SND_SOC_DAPM_POST_PMD:
+diff --git a/sound/soc/codecs/rt5682.h b/sound/soc/codecs/rt5682.h
+index c917c76200ea2..52ff0d9c36c58 100644
+--- a/sound/soc/codecs/rt5682.h
++++ b/sound/soc/codecs/rt5682.h
+@@ -1463,7 +1463,6 @@ struct rt5682_priv {
+ 
+ 	int jack_type;
+ 	int irq_work_delay_time;
+-	struct mutex jdet_mutex;
+ };
+ 
+ extern const char *rt5682_supply_names[RT5682_NUM_SUPPLIES];
+@@ -1473,7 +1472,6 @@ int rt5682_sel_asrc_clk_src(struct snd_soc_component *component,
+ 
+ void rt5682_apply_patch_list(struct rt5682_priv *rt5682, struct device *dev);
+ 
+-int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert);
+ void rt5682_jack_detect_handler(struct work_struct *work);
+ 
+ bool rt5682_volatile_register(struct device *dev, unsigned int reg);
+diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
+index 67151c7770c65..bbc261ab2025b 100644
+--- a/sound/soc/codecs/wcd938x.c
++++ b/sound/soc/codecs/wcd938x.c
+@@ -1432,14 +1432,10 @@ static int wcd938x_sdw_connect_port(struct wcd938x_sdw_ch_info *ch_info,
+ 	return 0;
+ }
+ 
+-static int wcd938x_connect_port(struct wcd938x_sdw_priv *wcd, u8 ch_id, u8 enable)
++static int wcd938x_connect_port(struct wcd938x_sdw_priv *wcd, u8 port_num, u8 ch_id, u8 enable)
+ {
+-	u8 port_num;
+-
+-	port_num = wcd->ch_info[ch_id].port_num;
+-
+ 	return wcd938x_sdw_connect_port(&wcd->ch_info[ch_id],
+-					&wcd->port_config[port_num],
++					&wcd->port_config[port_num - 1],
+ 					enable);
+ }
+ 
+@@ -2563,7 +2559,7 @@ static int wcd938x_ear_pa_put_gain(struct snd_kcontrol *kcontrol,
+ 				      WCD938X_EAR_GAIN_MASK,
+ 				      ucontrol->value.integer.value[0]);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static int wcd938x_get_compander(struct snd_kcontrol *kcontrol,
+@@ -2593,6 +2589,7 @@ static int wcd938x_set_compander(struct snd_kcontrol *kcontrol,
+ 	struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
+ 	struct wcd938x_sdw_priv *wcd;
+ 	int value = ucontrol->value.integer.value[0];
++	int portidx;
+ 	struct soc_mixer_control *mc;
+ 	bool hphr;
+ 
+@@ -2606,12 +2603,14 @@ static int wcd938x_set_compander(struct snd_kcontrol *kcontrol,
+ 	else
+ 		wcd938x->comp1_enable = value;
+ 
++	portidx = wcd->ch_info[mc->reg].port_num;
++
+ 	if (value)
+-		wcd938x_connect_port(wcd, mc->reg, true);
++		wcd938x_connect_port(wcd, portidx, mc->reg, true);
+ 	else
+-		wcd938x_connect_port(wcd, mc->reg, false);
++		wcd938x_connect_port(wcd, portidx, mc->reg, false);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static int wcd938x_ldoh_get(struct snd_kcontrol *kcontrol,
+@@ -2882,9 +2881,11 @@ static int wcd938x_get_swr_port(struct snd_kcontrol *kcontrol,
+ 	struct wcd938x_sdw_priv *wcd;
+ 	struct soc_mixer_control *mixer = (struct soc_mixer_control *)kcontrol->private_value;
+ 	int dai_id = mixer->shift;
+-	int portidx = mixer->reg;
++	int portidx, ch_idx = mixer->reg;
++
+ 
+ 	wcd = wcd938x->sdw_priv[dai_id];
++	portidx = wcd->ch_info[ch_idx].port_num;
+ 
+ 	ucontrol->value.integer.value[0] = wcd->port_enable[portidx];
+ 
+@@ -2899,12 +2900,14 @@ static int wcd938x_set_swr_port(struct snd_kcontrol *kcontrol,
+ 	struct wcd938x_sdw_priv *wcd;
+ 	struct soc_mixer_control *mixer =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
+-	int portidx = mixer->reg;
++	int ch_idx = mixer->reg;
++	int portidx;
+ 	int dai_id = mixer->shift;
+ 	bool enable;
+ 
+ 	wcd = wcd938x->sdw_priv[dai_id];
+ 
++	portidx = wcd->ch_info[ch_idx].port_num;
+ 	if (ucontrol->value.integer.value[0])
+ 		enable = true;
+ 	else
+@@ -2912,9 +2915,9 @@ static int wcd938x_set_swr_port(struct snd_kcontrol *kcontrol,
+ 
+ 	wcd->port_enable[portidx] = enable;
+ 
+-	wcd938x_connect_port(wcd, portidx, enable);
++	wcd938x_connect_port(wcd, portidx, ch_idx, enable);
+ 
+-	return 0;
++	return 1;
+ 
+ }
+ 
+diff --git a/sound/soc/fsl/pcm030-audio-fabric.c b/sound/soc/fsl/pcm030-audio-fabric.c
+index af3c3b90c0aca..83b4a22bf15ac 100644
+--- a/sound/soc/fsl/pcm030-audio-fabric.c
++++ b/sound/soc/fsl/pcm030-audio-fabric.c
+@@ -93,16 +93,21 @@ static int pcm030_fabric_probe(struct platform_device *op)
+ 		dev_err(&op->dev, "platform_device_alloc() failed\n");
+ 
+ 	ret = platform_device_add(pdata->codec_device);
+-	if (ret)
++	if (ret) {
+ 		dev_err(&op->dev, "platform_device_add() failed: %d\n", ret);
++		platform_device_put(pdata->codec_device);
++	}
+ 
+ 	ret = snd_soc_register_card(card);
+-	if (ret)
++	if (ret) {
+ 		dev_err(&op->dev, "snd_soc_register_card() failed: %d\n", ret);
++		platform_device_del(pdata->codec_device);
++		platform_device_put(pdata->codec_device);
++	}
+ 
+ 	platform_set_drvdata(op, pdata);
+-
+ 	return ret;
++
+ }
+ 
+ static int pcm030_fabric_remove(struct platform_device *op)
+diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
+index a3a7990b5cb66..bc3e24c6a28a8 100644
+--- a/sound/soc/generic/simple-card.c
++++ b/sound/soc/generic/simple-card.c
+@@ -28,6 +28,30 @@ static const struct snd_soc_ops simple_ops = {
+ 	.hw_params	= asoc_simple_hw_params,
+ };
+ 
++static int asoc_simple_parse_platform(struct device_node *node,
++				      struct snd_soc_dai_link_component *dlc)
++{
++	struct of_phandle_args args;
++	int ret;
++
++	if (!node)
++		return 0;
++
++	/*
++	 * Get node via "sound-dai = <&phandle port>"
++	 * it will be used as xxx_of_node on soc_bind_dai_link()
++	 */
++	ret = of_parse_phandle_with_args(node, DAI, CELL, 0, &args);
++	if (ret)
++		return ret;
++
++	/* dai_name is not required and may not exist for plat component */
++
++	dlc->of_node = args.np;
++
++	return 0;
++}
++
+ static int asoc_simple_parse_dai(struct device_node *node,
+ 				 struct snd_soc_dai_link_component *dlc,
+ 				 int *is_single_link)
+@@ -289,7 +313,7 @@ static int simple_dai_link_of(struct asoc_simple_priv *priv,
+ 	if (ret < 0)
+ 		goto dai_link_of_err;
+ 
+-	ret = asoc_simple_parse_dai(plat, platforms, NULL);
++	ret = asoc_simple_parse_platform(plat, platforms);
+ 	if (ret < 0)
+ 		goto dai_link_of_err;
+ 
+diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
+index eb1c3aec479ba..19c4a90ec1ea9 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
+@@ -308,8 +308,11 @@ static int q6apm_dai_close(struct snd_soc_component *component,
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct q6apm_dai_rtd *prtd = runtime->private_data;
+ 
+-	q6apm_graph_stop(prtd->graph);
+-	q6apm_unmap_memory_regions(prtd->graph, substream->stream);
++	if (prtd->state) { /* only stop graph that is started */
++		q6apm_graph_stop(prtd->graph);
++		q6apm_unmap_memory_regions(prtd->graph, substream->stream);
++	}
++
+ 	q6apm_graph_close(prtd->graph);
+ 	prtd->graph = NULL;
+ 	kfree(prtd);
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 08eaa9ddf191e..dc0e7c8d31f37 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -316,13 +316,27 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	if (sign_bit)
+ 		mask = BIT(sign_bit + 1) - 1;
+ 
+-	val = ((ucontrol->value.integer.value[0] + min) & mask);
++	val = ucontrol->value.integer.value[0];
++	if (mc->platform_max && val > mc->platform_max)
++		return -EINVAL;
++	if (val > max - min)
++		return -EINVAL;
++	if (val < 0)
++		return -EINVAL;
++	val = (val + min) & mask;
+ 	if (invert)
+ 		val = max - val;
+ 	val_mask = mask << shift;
+ 	val = val << shift;
+ 	if (snd_soc_volsw_is_stereo(mc)) {
+-		val2 = ((ucontrol->value.integer.value[1] + min) & mask);
++		val2 = ucontrol->value.integer.value[1];
++		if (mc->platform_max && val2 > mc->platform_max)
++			return -EINVAL;
++		if (val2 > max - min)
++			return -EINVAL;
++		if (val2 < 0)
++			return -EINVAL;
++		val2 = (val2 + min) & mask;
+ 		if (invert)
+ 			val2 = max - val2;
+ 		if (reg == reg2) {
+@@ -409,8 +423,15 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 	int err = 0;
+ 	unsigned int val, val_mask;
+ 
++	val = ucontrol->value.integer.value[0];
++	if (mc->platform_max && val > mc->platform_max)
++		return -EINVAL;
++	if (val > max - min)
++		return -EINVAL;
++	if (val < 0)
++		return -EINVAL;
+ 	val_mask = mask << shift;
+-	val = (ucontrol->value.integer.value[0] + min) & mask;
++	val = (val + min) & mask;
+ 	val = val << shift;
+ 
+ 	err = snd_soc_component_update_bits(component, reg, val_mask, val);
+@@ -858,6 +879,8 @@ int snd_soc_put_xr_sx(struct snd_kcontrol *kcontrol,
+ 	long val = ucontrol->value.integer.value[0];
+ 	unsigned int i;
+ 
++	if (val < mc->min || val > mc->max)
++		return -EINVAL;
+ 	if (invert)
+ 		val = max - val;
+ 	val &= mask;
+diff --git a/sound/soc/xilinx/xlnx_formatter_pcm.c b/sound/soc/xilinx/xlnx_formatter_pcm.c
+index 91afea9d5de67..ce19a6058b279 100644
+--- a/sound/soc/xilinx/xlnx_formatter_pcm.c
++++ b/sound/soc/xilinx/xlnx_formatter_pcm.c
+@@ -37,6 +37,7 @@
+ #define XLNX_AUD_XFER_COUNT	0x28
+ #define XLNX_AUD_CH_STS_START	0x2C
+ #define XLNX_BYTES_PER_CH	0x44
++#define XLNX_AUD_ALIGN_BYTES	64
+ 
+ #define AUD_STS_IOC_IRQ_MASK	BIT(31)
+ #define AUD_STS_CH_STS_MASK	BIT(29)
+@@ -368,12 +369,32 @@ static int xlnx_formatter_pcm_open(struct snd_soc_component *component,
+ 	snd_soc_set_runtime_hwparams(substream, &xlnx_pcm_hardware);
+ 	runtime->private_data = stream_data;
+ 
+-	/* Resize the period size divisible by 64 */
++	/* Resize the period bytes as divisible by 64 */
+ 	err = snd_pcm_hw_constraint_step(runtime, 0,
+-					 SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64);
++					 SNDRV_PCM_HW_PARAM_PERIOD_BYTES,
++					 XLNX_AUD_ALIGN_BYTES);
+ 	if (err) {
+ 		dev_err(component->dev,
+-			"unable to set constraint on period bytes\n");
++			"Unable to set constraint on period bytes\n");
++		return err;
++	}
++
++	/* Resize the buffer bytes as divisible by 64 */
++	err = snd_pcm_hw_constraint_step(runtime, 0,
++					 SNDRV_PCM_HW_PARAM_BUFFER_BYTES,
++					 XLNX_AUD_ALIGN_BYTES);
++	if (err) {
++		dev_err(component->dev,
++			"Unable to set constraint on buffer bytes\n");
++		return err;
++	}
++
++	/* Set periods as integer multiple */
++	err = snd_pcm_hw_constraint_integer(runtime,
++					    SNDRV_PCM_HW_PARAM_PERIODS);
++	if (err < 0) {
++		dev_err(component->dev,
++			"Unable to set constraint on periods to be integer\n");
+ 		return err;
+ 	}
+ 
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 6e7bac8203baa..2bd28474e8fae 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1526,6 +1526,10 @@ error:
+ 		usb_audio_err(chip,
+ 			"cannot get connectors status: req = %#x, wValue = %#x, wIndex = %#x, type = %d\n",
+ 			UAC_GET_CUR, validx, idx, cval->val_type);
++
++		if (val)
++			*val = 0;
++
+ 		return filter_error(cval, ret);
+ 	}
+ 
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index b1522e43173e1..0ea39565e6232 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -84,7 +84,7 @@
+  * combination.
+  */
+ {
+-	USB_DEVICE(0x041e, 0x4095),
++	USB_AUDIO_DEVICE(0x041e, 0x4095),
+ 	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ 		.ifnum = QUIRK_ANY_INTERFACE,
+ 		.type = QUIRK_COMPOSITE,
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index 751643f860b29..0292fe94fc706 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -9,7 +9,11 @@ ifeq ($(V),1)
+   msg =
+ else
+   Q = @
+-  msg = @printf '  %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))";
++  ifeq ($(silent),1)
++    msg =
++  else
++    msg = @printf '  %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))";
++  endif
+   MAKEFLAGS=--no-print-directory
+ endif
+ 
+diff --git a/tools/include/uapi/sound/asound.h b/tools/include/uapi/sound/asound.h
+index 5fbb79e30819a..c245ad2ca5d49 100644
+--- a/tools/include/uapi/sound/asound.h
++++ b/tools/include/uapi/sound/asound.h
+@@ -56,8 +56,10 @@
+  *                                                                          *
+  ****************************************************************************/
+ 
++#define AES_IEC958_STATUS_SIZE		24
++
+ struct snd_aes_iec958 {
+-	unsigned char status[24];	/* AES/IEC958 channel status bits */
++	unsigned char status[AES_IEC958_STATUS_SIZE]; /* AES/IEC958 channel status bits */
+ 	unsigned char subcode[147];	/* AES/IEC958 subcode bits */
+ 	unsigned char pad;		/* nothing */
+ 	unsigned char dig_subframe[4];	/* AES/IEC958 subframe bits */
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 21735829b860c..750ef1c446c8a 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2823,7 +2823,7 @@ static inline bool func_uaccess_safe(struct symbol *func)
+ 
+ static inline const char *call_dest_name(struct instruction *insn)
+ {
+-	static char pvname[16];
++	static char pvname[19];
+ 	struct reloc *rel;
+ 	int idx;
+ 
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 588601000f3f9..db00ca6a67deb 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -584,15 +584,16 @@ static void collect_all_aliases(struct perf_stat_config *config, struct evsel *c
+ 
+ 	alias = list_prepare_entry(counter, &(evlist->core.entries), core.node);
+ 	list_for_each_entry_continue (alias, &evlist->core.entries, core.node) {
+-		if (strcmp(evsel__name(alias), evsel__name(counter)) ||
+-		    alias->scale != counter->scale ||
+-		    alias->cgrp != counter->cgrp ||
+-		    strcmp(alias->unit, counter->unit) ||
+-		    evsel__is_clock(alias) != evsel__is_clock(counter) ||
+-		    !strcmp(alias->pmu_name, counter->pmu_name))
+-			break;
+-		alias->merged_stat = true;
+-		cb(config, alias, data, false);
++		/* Merge events with the same name, etc. but on different PMUs. */
++		if (!strcmp(evsel__name(alias), evsel__name(counter)) &&
++			alias->scale == counter->scale &&
++			alias->cgrp == counter->cgrp &&
++			!strcmp(alias->unit, counter->unit) &&
++			evsel__is_clock(alias) == evsel__is_clock(counter) &&
++			strcmp(alias->pmu_name, counter->pmu_name)) {
++			alias->merged_stat = true;
++			cb(config, alias, data, false);
++		}
+ 	}
+ }
+ 
+diff --git a/tools/testing/selftests/exec/Makefile b/tools/testing/selftests/exec/Makefile
+index dd61118df66ed..12c5e27d32c16 100644
+--- a/tools/testing/selftests/exec/Makefile
++++ b/tools/testing/selftests/exec/Makefile
+@@ -5,7 +5,7 @@ CFLAGS += -D_GNU_SOURCE
+ 
+ TEST_PROGS := binfmt_script non-regular
+ TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_16777216
+-TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir pipe
++TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir
+ # Makefile is a run-time dependency, since it's accessed by the execveat test
+ TEST_FILES := Makefile
+ 
+diff --git a/tools/testing/selftests/futex/Makefile b/tools/testing/selftests/futex/Makefile
+index 12631f0076a10..11e157d7533b8 100644
+--- a/tools/testing/selftests/futex/Makefile
++++ b/tools/testing/selftests/futex/Makefile
+@@ -11,7 +11,7 @@ all:
+ 	@for DIR in $(SUBDIRS); do		\
+ 		BUILD_TARGET=$(OUTPUT)/$$DIR;	\
+ 		mkdir $$BUILD_TARGET  -p;	\
+-		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
++		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
+ 		if [ -e $$DIR/$(TEST_PROGS) ]; then \
+ 			rsync -a $$DIR/$(TEST_PROGS) $$BUILD_TARGET/; \
+ 		fi \
+@@ -32,6 +32,6 @@ override define CLEAN
+ 	@for DIR in $(SUBDIRS); do		\
+ 		BUILD_TARGET=$(OUTPUT)/$$DIR;	\
+ 		mkdir $$BUILD_TARGET  -p;	\
+-		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
++		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\
+ 	done
+ endef
+diff --git a/tools/testing/selftests/netfilter/nft_concat_range.sh b/tools/testing/selftests/netfilter/nft_concat_range.sh
+index ed61f6cab60f4..df322e47a54fb 100755
+--- a/tools/testing/selftests/netfilter/nft_concat_range.sh
++++ b/tools/testing/selftests/netfilter/nft_concat_range.sh
+@@ -27,7 +27,7 @@ TYPES="net_port port_net net6_port port_proto net6_port_mac net6_port_mac_proto
+        net6_port_net6_port net_port_mac_proto_net"
+ 
+ # Reported bugs, also described by TYPE_ variables below
+-BUGS="flush_remove_add"
++BUGS="flush_remove_add reload"
+ 
+ # List of possible paths to pktgen script from kernel tree for performance tests
+ PKTGEN_SCRIPT_PATHS="
+@@ -354,6 +354,23 @@ TYPE_flush_remove_add="
+ display		Add two elements, flush, re-add
+ "
+ 
++TYPE_reload="
++display		net,mac with reload
++type_spec	ipv4_addr . ether_addr
++chain_spec	ip daddr . ether saddr
++dst		addr4
++src		mac
++start		1
++count		1
++src_delta	2000
++tools		sendip nc bash
++proto		udp
++
++race_repeat	0
++
++perf_duration	0
++"
++
+ # Set template for all tests, types and rules are filled in depending on test
+ set_template='
+ flush ruleset
+@@ -1473,6 +1490,59 @@ test_bug_flush_remove_add() {
+ 	nft flush ruleset
+ }
+ 
++# - add ranged element, check that packets match it
++# - reload the set, check packets still match
++test_bug_reload() {
++	setup veth send_"${proto}" set || return ${KSELFTEST_SKIP}
++	rstart=${start}
++
++	range_size=1
++	for i in $(seq "${start}" $((start + count))); do
++		end=$((start + range_size))
++
++		# Avoid negative or zero-sized port ranges
++		if [ $((end / 65534)) -gt $((start / 65534)) ]; then
++			start=${end}
++			end=$((end + 1))
++		fi
++		srcstart=$((start + src_delta))
++		srcend=$((end + src_delta))
++
++		add "$(format)" || return 1
++		range_size=$((range_size + 1))
++		start=$((end + range_size))
++	done
++
++	# check kernel does allocate pcpu scratch map
++	# for reload with no element add/delete
++	( echo flush set inet filter test ;
++	  nft list set inet filter test ) | nft -f -
++
++	start=${rstart}
++	range_size=1
++
++	for i in $(seq "${start}" $((start + count))); do
++		end=$((start + range_size))
++
++		# Avoid negative or zero-sized port ranges
++		if [ $((end / 65534)) -gt $((start / 65534)) ]; then
++			start=${end}
++			end=$((end + 1))
++		fi
++		srcstart=$((start + src_delta))
++		srcend=$((end + src_delta))
++
++		for j in $(seq ${start} $((range_size / 2 + 1)) ${end}); do
++			send_match "${j}" $((j + src_delta)) || return 1
++		done
++
++		range_size=$((range_size + 1))
++		start=$((end + range_size))
++	done
++
++	nft flush ruleset
++}
++
+ test_reported_issues() {
+ 	eval test_bug_"${subtest}"
+ }
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index d88867d2fed75..eb8543b9a5c40 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -898,6 +898,144 @@ EOF
+ 	ip netns exec "$ns0" nft delete table $family nat
+ }
+ 
++test_stateless_nat_ip()
++{
++	local lret=0
++
++	ip netns exec "$ns0" sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
++	ip netns exec "$ns0" sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
++
++	ip netns exec "$ns2" ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1
++	if [ $? -ne 0 ] ; then
++		echo "ERROR: cannot ping $ns1 from $ns2 before loading stateless rules"
++		return 1
++	fi
++
++ip netns exec "$ns0" nft -f /dev/stdin <<EOF
++table ip stateless {
++	map xlate_in {
++		typeof meta iifname . ip saddr . ip daddr : ip daddr
++		elements = {
++			"veth1" . 10.0.2.99 . 10.0.1.99 : 10.0.2.2,
++		}
++	}
++	map xlate_out {
++		typeof meta iifname . ip saddr . ip daddr : ip daddr
++		elements = {
++			"veth0" . 10.0.1.99 . 10.0.2.2 : 10.0.2.99
++		}
++	}
++
++	chain prerouting {
++		type filter hook prerouting priority -400; policy accept;
++		ip saddr set meta iifname . ip saddr . ip daddr map @xlate_in
++		ip daddr set meta iifname . ip saddr . ip daddr map @xlate_out
++	}
++}
++EOF
++	if [ $? -ne 0 ]; then
++		echo "SKIP: Could not add ip stateless rules"
++		return $ksft_skip
++	fi
++
++	reset_counters
++
++	ip netns exec "$ns2" ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1
++	if [ $? -ne 0 ] ; then
++		echo "ERROR: cannot ping $ns1 from $ns2 with stateless rules"
++		lret=1
++	fi
++
++	# ns1 should have seen packets from .2.2, due to stateless rewrite.
++	expect="packets 1 bytes 84"
++	cnt=$(ip netns exec "$ns1" nft list counter inet filter ns0insl | grep -q "$expect")
++	if [ $? -ne 0 ]; then
++		bad_counter "$ns1" ns0insl "$expect" "test_stateless 1"
++		lret=1
++	fi
++
++	for dir in "in" "out" ; do
++		cnt=$(ip netns exec "$ns2" nft list counter inet filter ns1${dir} | grep -q "$expect")
++		if [ $? -ne 0 ]; then
++			bad_counter "$ns2" ns1$dir "$expect" "test_stateless 2"
++			lret=1
++		fi
++	done
++
++	# ns1 should not have seen packets from ns2, due to masquerade
++	expect="packets 0 bytes 0"
++	for dir in "in" "out" ; do
++		cnt=$(ip netns exec "$ns1" nft list counter inet filter ns2${dir} | grep -q "$expect")
++		if [ $? -ne 0 ]; then
++			bad_counter "$ns1" ns0$dir "$expect" "test_stateless 3"
++			lret=1
++		fi
++
++		cnt=$(ip netns exec "$ns0" nft list counter inet filter ns1${dir} | grep -q "$expect")
++		if [ $? -ne 0 ]; then
++			bad_counter "$ns0" ns1$dir "$expect" "test_stateless 4"
++			lret=1
++		fi
++	done
++
++	reset_counters
++
++	socat -h > /dev/null 2>&1
++	if [ $? -ne 0 ];then
++		echo "SKIP: Could not run stateless nat frag test without socat tool"
++		if [ $lret -eq 0 ]; then
++			return $ksft_skip
++		fi
++
++		ip netns exec "$ns0" nft delete table ip stateless
++		return $lret
++	fi
++
++	local tmpfile=$(mktemp)
++	dd if=/dev/urandom of=$tmpfile bs=4096 count=1 2>/dev/null
++
++	local outfile=$(mktemp)
++	ip netns exec "$ns1" timeout 3 socat -u UDP4-RECV:4233 OPEN:$outfile < /dev/null &
++	sc_r=$!
++
++	sleep 1
++	# re-do with large ping -> ip fragmentation
++	ip netns exec "$ns2" timeout 3 socat - UDP4-SENDTO:"10.0.1.99:4233" < "$tmpfile" > /dev/null
++	if [ $? -ne 0 ] ; then
++		echo "ERROR: failed to test udp $ns1 to $ns2 with stateless ip nat" 1>&2
++		lret=1
++	fi
++
++	wait
++
++	cmp "$tmpfile" "$outfile"
++	if [ $? -ne 0 ]; then
++		ls -l "$tmpfile" "$outfile"
++		echo "ERROR: input and output file mismatch when checking udp with stateless nat" 1>&2
++		lret=1
++	fi
++
++	rm -f "$tmpfile" "$outfile"
++
++	# ns1 should have seen packets from 2.2, due to stateless rewrite.
++	expect="packets 3 bytes 4164"
++	cnt=$(ip netns exec "$ns1" nft list counter inet filter ns0insl | grep -q "$expect")
++	if [ $? -ne 0 ]; then
++		bad_counter "$ns1" ns0insl "$expect" "test_stateless 5"
++		lret=1
++	fi
++
++	ip netns exec "$ns0" nft delete table ip stateless
++	if [ $? -ne 0 ]; then
++		echo "ERROR: Could not delete table ip stateless" 1>&2
++		lret=1
++	fi
++
++	test $lret -eq 0 && echo "PASS: IP stateless for $ns2"
++
++	return $lret
++}
++
+ # ip netns exec "$ns0" ping -c 1 -q 10.0.$i.99
+ for i in 0 1 2; do
+ ip netns exec ns$i-$sfx nft -f /dev/stdin <<EOF
+@@ -964,6 +1102,19 @@ table inet filter {
+ EOF
+ done
+ 
++# special case for stateless nat check, counter needs to
++# be done before (input) ip defragmentation
++ip netns exec ns1-$sfx nft -f /dev/stdin <<EOF
++table inet filter {
++	counter ns0insl {}
++
++	chain pre {
++		type filter hook prerouting priority -400; policy accept;
++		ip saddr 10.0.2.2 counter name "ns0insl"
++	}
++}
++EOF
++
+ sleep 3
+ # test basic connectivity
+ for i in 1 2; do
+@@ -1018,6 +1169,7 @@ $test_inet_nat && test_redirect inet
+ $test_inet_nat && test_redirect6 inet
+ 
+ test_port_shadowing
++test_stateless_nat_ip
+ 
+ if [ $ret -ne 0 ];then
+ 	echo -n "FAIL: "


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-11 12:33 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-02-11 12:33 UTC (permalink / raw
  To: gentoo-commits

commit:     6cf5ebe727d6049ac15e0b369590e6f4124c4791
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 11 12:33:31 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 11 12:33:31 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6cf5ebe7

Linux patch 5.16.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 ++
 1008_linux-5.16.9.patch | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 178 insertions(+)

diff --git a/0000_README b/0000_README
index 13f566fe..9fe7df04 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-5.16.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.8
 
+Patch:  1008_linux-5.16.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-5.16.9.patch b/1008_linux-5.16.9.patch
new file mode 100644
index 00000000..c798eacb
--- /dev/null
+++ b/1008_linux-5.16.9.patch
@@ -0,0 +1,174 @@
+diff --git a/Makefile b/Makefile
+index 0cbab4df51b92..1f32bb42f3288 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 8fc9c79c899b5..41a6fe086fdc3 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4711,6 +4711,8 @@ static long kvm_s390_guest_sida_op(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 	if (mop->size + mop->sida_offset > sida_size(vcpu->arch.sie_block))
+ 		return -E2BIG;
++	if (!kvm_s390_pv_cpu_is_protected(vcpu))
++		return -EINVAL;
+ 
+ 	switch (mop->op) {
+ 	case KVM_S390_MEMOP_SIDA_READ:
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index a366cb3e8aa18..76fdaa16bd4a0 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -1324,3 +1324,4 @@ module_exit(crypto_algapi_exit);
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Cryptographic algorithms API");
++MODULE_SOFTDEP("pre: cryptomgr");
+diff --git a/crypto/api.c b/crypto/api.c
+index cf0869dd130b3..7ddfe946dd56b 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -643,4 +643,3 @@ EXPORT_SYMBOL_GPL(crypto_req_done);
+ 
+ MODULE_DESCRIPTION("Cryptographic core API");
+ MODULE_LICENSE("GPL");
+-MODULE_SOFTDEP("pre: cryptomgr");
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 1cdf8cfcc31b3..94bc5dbb31e1e 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2486,23 +2486,21 @@ static void ata_dev_config_cpr(struct ata_device *dev)
+ 	struct ata_cpr_log *cpr_log = NULL;
+ 	u8 *desc, *buf = NULL;
+ 
+-	if (!ata_identify_page_supported(dev,
+-				 ATA_LOG_CONCURRENT_POSITIONING_RANGES))
++	if (ata_id_major_version(dev->id) < 11 ||
++	    !ata_log_supported(dev, ATA_LOG_CONCURRENT_POSITIONING_RANGES))
+ 		goto out;
+ 
+ 	/*
+-	 * Read IDENTIFY DEVICE data log, page 0x47
+-	 * (concurrent positioning ranges). We can have at most 255 32B range
+-	 * descriptors plus a 64B header.
++	 * Read the concurrent positioning ranges log (0x47). We can have at
++	 * most 255 32B range descriptors plus a 64B header.
+ 	 */
+ 	buf_len = (64 + 255 * 32 + 511) & ~511;
+ 	buf = kzalloc(buf_len, GFP_KERNEL);
+ 	if (!buf)
+ 		goto out;
+ 
+-	err_mask = ata_read_log_page(dev, ATA_LOG_IDENTIFY_DEVICE,
+-				     ATA_LOG_CONCURRENT_POSITIONING_RANGES,
+-				     buf, buf_len >> 9);
++	err_mask = ata_read_log_page(dev, ATA_LOG_CONCURRENT_POSITIONING_RANGES,
++				     0, buf, buf_len >> 9);
+ 	if (err_mask)
+ 		goto out;
+ 
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index 16d1c7a43d331..b6eb75f4bbfc6 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -705,12 +705,12 @@ static int moxart_remove(struct platform_device *pdev)
+ 	if (!IS_ERR_OR_NULL(host->dma_chan_rx))
+ 		dma_release_channel(host->dma_chan_rx);
+ 	mmc_remove_host(mmc);
+-	mmc_free_host(mmc);
+ 
+ 	writel(0, host->base + REG_INTERRUPT_MASK);
+ 	writel(0, host->base + REG_POWER_CONTROL);
+ 	writel(readl(host->base + REG_CLOCK_CONTROL) | CLK_OFF,
+ 	       host->base + REG_CLOCK_CONTROL);
++	mmc_free_host(mmc);
+ 
+ 	return 0;
+ }
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 32300bd6af7ab..1ff1e52f398fc 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -2688,7 +2688,7 @@ int smb2_open(struct ksmbd_work *work)
+ 					(struct create_posix *)context;
+ 				if (le16_to_cpu(context->DataOffset) +
+ 				    le32_to_cpu(context->DataLength) <
+-				    sizeof(struct create_posix)) {
++				    sizeof(struct create_posix) - 4) {
+ 					rc = -EINVAL;
+ 					goto err_out1;
+ 				}
+diff --git a/include/linux/ata.h b/include/linux/ata.h
+index 199e47e97d645..21292b5bbb550 100644
+--- a/include/linux/ata.h
++++ b/include/linux/ata.h
+@@ -324,12 +324,12 @@ enum {
+ 	ATA_LOG_NCQ_NON_DATA	= 0x12,
+ 	ATA_LOG_NCQ_SEND_RECV	= 0x13,
+ 	ATA_LOG_IDENTIFY_DEVICE	= 0x30,
++	ATA_LOG_CONCURRENT_POSITIONING_RANGES = 0x47,
+ 
+ 	/* Identify device log pages: */
+ 	ATA_LOG_SECURITY	  = 0x06,
+ 	ATA_LOG_SATA_SETTINGS	  = 0x08,
+ 	ATA_LOG_ZONED_INFORMATION = 0x09,
+-	ATA_LOG_CONCURRENT_POSITIONING_RANGES = 0x47,
+ 
+ 	/* Identify device SATA settings log:*/
+ 	ATA_LOG_DEVSLP_OFFSET	  = 0x30,
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 09ae8448f394f..4e7936d9b4424 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -2199,7 +2199,7 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	struct tipc_msg *hdr = buf_msg(skb);
+ 	struct tipc_gap_ack_blks *ga = NULL;
+ 	bool reply = msg_probe(hdr), retransmitted = false;
+-	u16 dlen = msg_data_sz(hdr), glen = 0;
++	u32 dlen = msg_data_sz(hdr), glen = 0;
+ 	u16 peers_snd_nxt =  msg_next_sent(hdr);
+ 	u16 peers_tol = msg_link_tolerance(hdr);
+ 	u16 peers_prio = msg_linkprio(hdr);
+@@ -2213,6 +2213,10 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 	void *data;
+ 
+ 	trace_tipc_proto_rcv(skb, false, l->name);
++
++	if (dlen > U16_MAX)
++		goto exit;
++
+ 	if (tipc_link_is_blocked(l) || !xmitq)
+ 		goto exit;
+ 
+@@ -2308,7 +2312,8 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 
+ 		/* Receive Gap ACK blocks from peer if any */
+ 		glen = tipc_get_gap_ack_blks(&ga, l, hdr, true);
+-
++		if (glen > dlen)
++			break;
+ 		tipc_mon_rcv(l->net, data + glen, dlen - glen, l->addr,
+ 			     &l->mon_state, l->bearer_id);
+ 
+diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c
+index 407619697292f..2f4d23238a7e3 100644
+--- a/net/tipc/monitor.c
++++ b/net/tipc/monitor.c
+@@ -496,6 +496,8 @@ void tipc_mon_rcv(struct net *net, void *data, u16 dlen, u32 addr,
+ 	state->probing = false;
+ 
+ 	/* Sanity check received domain record */
++	if (new_member_cnt > MAX_MON_DOMAIN)
++		return;
+ 	if (dlen < dom_rec_len(arrv_dom, 0))
+ 		return;
+ 	if (dlen != dom_rec_len(arrv_dom, new_member_cnt))

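The TIPC changes above all follow one defensive pattern: hold the attacker-controlled payload length in a type wider than the on-wire field (`u32` instead of `u16`), reject values that overflow that field, and ensure a nested length (`glen`) cannot exceed the enclosing one (`dlen`) before computing `dlen - glen`. A minimal C sketch of that pattern, with illustrative names and error convention (not the kernel's actual API):

```c
#include <stdint.h>

/*
 * Sketch of the validation done in the tipc_link_proto_rcv() fix:
 * dlen arrives in a 32-bit variable even though the protocol field
 * is 16 bits wide, so an oversized value can be detected instead of
 * silently truncated; glen is checked against dlen so the later
 * subtraction cannot underflow.
 */
int validate_record(uint32_t dlen, uint32_t glen)
{
	if (dlen > UINT16_MAX)	/* on-wire length field is only 16 bits */
		return -1;
	if (glen > dlen)	/* nested block must fit inside the payload */
		return -1;
	return (int)(dlen - glen);	/* safe: cannot underflow */
}
```

The same idea appears in `tipc_mon_rcv()`, where `new_member_cnt` is bounded by `MAX_MON_DOMAIN` before the domain record length is trusted.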


* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-16 12:44 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-02-16 12:44 UTC (permalink / raw
  To: gentoo-commits

commit:     5f82687731c29102df75c39f5f1cdf95a07e1e82
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 16 12:44:04 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Feb 16 12:44:04 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5f826877

Linux patch 5.16.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1009_linux-5.16.10.patch | 7402 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7406 insertions(+)

diff --git a/0000_README b/0000_README
index 9fe7df04..fd9081b3 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-5.16.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.9
 
+Patch:  1009_linux-5.16.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-5.16.10.patch b/1009_linux-5.16.10.patch
new file mode 100644
index 00000000..c0ef78b4
--- /dev/null
+++ b/1009_linux-5.16.10.patch
@@ -0,0 +1,7402 @@
+diff --git a/Documentation/ABI/testing/sysfs-driver-aspeed-uart-routing b/Documentation/ABI/testing/sysfs-driver-aspeed-uart-routing
+index b363827da4375..910df0e5815ab 100644
+--- a/Documentation/ABI/testing/sysfs-driver-aspeed-uart-routing
++++ b/Documentation/ABI/testing/sysfs-driver-aspeed-uart-routing
+@@ -1,4 +1,4 @@
+-What:		/sys/bus/platform/drivers/aspeed-uart-routing/*/uart*
++What:		/sys/bus/platform/drivers/aspeed-uart-routing/\*/uart\*
+ Date:		September 2021
+ Contact:	Oskar Senft <osk@google.com>
+ 		Chia-Wei Wang <chiawei_wang@aspeedtech.com>
+@@ -9,7 +9,7 @@ Description:	Selects the RX source of the UARTx device.
+ 		depends on the selected file.
+ 
+ 		e.g.
+-		cat /sys/bus/platform/drivers/aspeed-uart-routing/*.uart_routing/uart1
++		cat /sys/bus/platform/drivers/aspeed-uart-routing/\*.uart_routing/uart1
+ 		[io1] io2 io3 io4 uart2 uart3 uart4 io6
+ 
+ 		In this case, UART1 gets its input from IO1 (physical serial port 1).
+@@ -17,7 +17,7 @@ Description:	Selects the RX source of the UARTx device.
+ Users:		OpenBMC.  Proposed changes should be mailed to
+ 		openbmc@lists.ozlabs.org
+ 
+-What:		/sys/bus/platform/drivers/aspeed-uart-routing/*/io*
++What:		/sys/bus/platform/drivers/aspeed-uart-routing/\*/io\*
+ Date:		September 2021
+ Contact:	Oskar Senft <osk@google.com>
+ 		Chia-Wei Wang <chiawei_wang@aspeedtech.com>
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 5342e895fb604..0ec7b7f1524b1 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -52,6 +52,12 @@ stable kernels.
+ | Allwinner      | A64/R18         | UNKNOWN1        | SUN50I_ERRATUM_UNKNOWN1     |
+ +----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A510     | #2064142        | ARM64_ERRATUM_2064142       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A510     | #2038923        | ARM64_ERRATUM_2038923       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A510     | #1902691        | ARM64_ERRATUM_1902691       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A53      | #826319         | ARM64_ERRATUM_826319        |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319        |
+@@ -92,12 +98,18 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A77      | #1508412        | ARM64_ERRATUM_1508412       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A510     | #2051678        | ARM64_ERRATUM_2051678       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2119858        | ARM64_ERRATUM_2119858       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2054223        | ARM64_ERRATUM_2054223       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2224489        | ARM64_ERRATUM_2224489       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X2       | #2119858        | ARM64_ERRATUM_2119858       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X2       | #2224489        | ARM64_ERRATUM_2224489       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1188873,1418040| ARM64_ERRATUM_1418040       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1349291        | N/A                         |
+diff --git a/Documentation/devicetree/bindings/arm/omap/omap.txt b/Documentation/devicetree/bindings/arm/omap/omap.txt
+index e77635c5422c6..fa8b31660cadd 100644
+--- a/Documentation/devicetree/bindings/arm/omap/omap.txt
++++ b/Documentation/devicetree/bindings/arm/omap/omap.txt
+@@ -119,6 +119,9 @@ Boards (incomplete list of examples):
+ - OMAP3 BeagleBoard : Low cost community board
+   compatible = "ti,omap3-beagle", "ti,omap3430", "ti,omap3"
+ 
++- OMAP3 BeagleBoard A to B4 : Early BeagleBoard revisions A to B4 with a timer quirk
++  compatible = "ti,omap3-beagle-ab4", "ti,omap3-beagle", "ti,omap3430", "ti,omap3"
++
+ - OMAP3 Tobi with Overo : Commercial expansion board with daughter board
+   compatible = "gumstix,omap3-overo-tobi", "gumstix,omap3-overo", "ti,omap3430", "ti,omap3"
+ 
+diff --git a/Makefile b/Makefile
+index 1f32bb42f3288..36bbff16530ba 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
+index 0de64f237cd87..a387ebe8919b1 100644
+--- a/arch/arm/boot/dts/Makefile
++++ b/arch/arm/boot/dts/Makefile
+@@ -794,6 +794,7 @@ dtb-$(CONFIG_ARCH_OMAP3) += \
+ 	logicpd-som-lv-37xx-devkit.dtb \
+ 	omap3430-sdp.dtb \
+ 	omap3-beagle.dtb \
++	omap3-beagle-ab4.dtb \
+ 	omap3-beagle-xm.dtb \
+ 	omap3-beagle-xm-ab.dtb \
+ 	omap3-cm-t3517.dtb \
+diff --git a/arch/arm/boot/dts/imx23-evk.dts b/arch/arm/boot/dts/imx23-evk.dts
+index 8cbaf1c811745..3b609d987d883 100644
+--- a/arch/arm/boot/dts/imx23-evk.dts
++++ b/arch/arm/boot/dts/imx23-evk.dts
+@@ -79,7 +79,6 @@
+ 						MX23_PAD_LCD_RESET__GPIO_1_18
+ 						MX23_PAD_PWM3__GPIO_1_29
+ 						MX23_PAD_PWM4__GPIO_1_30
+-						MX23_PAD_SSP1_DETECT__SSP1_DETECT
+ 					>;
+ 					fsl,drive-strength = <MXS_DRIVE_4mA>;
+ 					fsl,voltage = <MXS_VOLTAGE_HIGH>;
+diff --git a/arch/arm/boot/dts/imx6qdl-udoo.dtsi b/arch/arm/boot/dts/imx6qdl-udoo.dtsi
+index d07d8f83456d2..ccfa8e320be62 100644
+--- a/arch/arm/boot/dts/imx6qdl-udoo.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-udoo.dtsi
+@@ -5,6 +5,8 @@
+  * Author: Fabio Estevam <fabio.estevam@freescale.com>
+  */
+ 
++#include <dt-bindings/gpio/gpio.h>
++
+ / {
+ 	aliases {
+ 		backlight = &backlight;
+@@ -226,6 +228,7 @@
+ 				MX6QDL_PAD_SD3_DAT1__SD3_DATA1		0x17059
+ 				MX6QDL_PAD_SD3_DAT2__SD3_DATA2		0x17059
+ 				MX6QDL_PAD_SD3_DAT3__SD3_DATA3		0x17059
++				MX6QDL_PAD_SD3_DAT5__GPIO7_IO00		0x1b0b0
+ 			>;
+ 		};
+ 
+@@ -304,7 +307,7 @@
+ &usdhc3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_usdhc3>;
+-	non-removable;
++	cd-gpios = <&gpio7 0 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/imx7ulp.dtsi b/arch/arm/boot/dts/imx7ulp.dtsi
+index b7ea37ad4e55c..bcec98b964114 100644
+--- a/arch/arm/boot/dts/imx7ulp.dtsi
++++ b/arch/arm/boot/dts/imx7ulp.dtsi
+@@ -259,7 +259,7 @@
+ 			interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&pcc2 IMX7ULP_CLK_WDG1>;
+ 			assigned-clocks = <&pcc2 IMX7ULP_CLK_WDG1>;
+-			assigned-clocks-parents = <&scg1 IMX7ULP_CLK_FIRC_BUS_CLK>;
++			assigned-clock-parents = <&scg1 IMX7ULP_CLK_FIRC_BUS_CLK>;
+ 			timeout-sec = <40>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/meson.dtsi b/arch/arm/boot/dts/meson.dtsi
+index 3be7cba603d5a..26eaba3fa96f3 100644
+--- a/arch/arm/boot/dts/meson.dtsi
++++ b/arch/arm/boot/dts/meson.dtsi
+@@ -59,7 +59,7 @@
+ 			};
+ 
+ 			uart_A: serial@84c0 {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart";
+ 				reg = <0x84c0 0x18>;
+ 				interrupts = <GIC_SPI 26 IRQ_TYPE_EDGE_RISING>;
+ 				fifo-size = <128>;
+@@ -67,7 +67,7 @@
+ 			};
+ 
+ 			uart_B: serial@84dc {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart";
+ 				reg = <0x84dc 0x18>;
+ 				interrupts = <GIC_SPI 75 IRQ_TYPE_EDGE_RISING>;
+ 				status = "disabled";
+@@ -105,7 +105,7 @@
+ 			};
+ 
+ 			uart_C: serial@8700 {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart";
+ 				reg = <0x8700 0x18>;
+ 				interrupts = <GIC_SPI 93 IRQ_TYPE_EDGE_RISING>;
+ 				status = "disabled";
+@@ -228,7 +228,7 @@
+ 			};
+ 
+ 			uart_AO: serial@4c0 {
+-				compatible = "amlogic,meson6-uart", "amlogic,meson-ao-uart", "amlogic,meson-uart";
++				compatible = "amlogic,meson6-uart", "amlogic,meson-ao-uart";
+ 				reg = <0x4c0 0x18>;
+ 				interrupts = <GIC_SPI 90 IRQ_TYPE_EDGE_RISING>;
+ 				status = "disabled";
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index f80ddc98d3a2b..9997a5d0333a3 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -736,27 +736,27 @@
+ };
+ 
+ &uart_AO {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_CLK81>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart", "amlogic,meson-ao-uart";
++	clocks = <&xtal>, <&clkc CLKID_CLK81>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_A {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART0>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_B {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART1>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+-	compatible = "amlogic,meson8-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART2>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &usb0 {
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index b49b7cbaed4ee..94f1c03deccef 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -724,27 +724,27 @@
+ };
+ 
+ &uart_AO {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_CLK81>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart", "amlogic,meson-ao-uart";
++	clocks = <&xtal>, <&clkc CLKID_CLK81>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_A {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART0>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_B {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART1>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+-	compatible = "amlogic,meson8b-uart", "amlogic,meson-uart";
+-	clocks = <&clkc CLKID_CLK81>, <&xtal>, <&clkc CLKID_UART2>;
+-	clock-names = "baud", "xtal", "pclk";
++	compatible = "amlogic,meson8b-uart";
++	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &usb0 {
+diff --git a/arch/arm/boot/dts/omap3-beagle-ab4.dts b/arch/arm/boot/dts/omap3-beagle-ab4.dts
+new file mode 100644
+index 0000000000000..990ff2d846868
+--- /dev/null
++++ b/arch/arm/boot/dts/omap3-beagle-ab4.dts
+@@ -0,0 +1,47 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/dts-v1/;
++
++#include "omap3-beagle.dts"
++
++/ {
++	model = "TI OMAP3 BeagleBoard A to B4";
++	compatible = "ti,omap3-beagle-ab4", "ti,omap3-beagle", "ti,omap3430", "ti,omap3";
++};
++
++/*
++ * Workaround for capacitor C70 issue, see "Boards revision A and < B5"
++ * section at https://elinux.org/BeagleBoard_Community
++ */
++
++/* Unusable as clocksource because of unreliable oscillator */
++&counter32k {
++	status = "disabled";
++};
++
++/* Unusable as clockevent because of unreliable oscillator, allow to idle */
++&timer1_target {
++	/delete-property/ti,no-reset-on-init;
++	/delete-property/ti,no-idle;
++	timer@0 {
++		/delete-property/ti,timer-alwon;
++	};
++};
++
++/* Preferred always-on timer for clocksource */
++&timer12_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		/* Always clocked by secure_32k_fck */
++	};
++};
++
++/* Preferred timer for clockevent */
++&timer2_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		assigned-clocks = <&gpt2_fck>;
++		assigned-clock-parents = <&sys_ck>;
++	};
++};
+diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts
+index f9f34b8458e91..0548b391334fd 100644
+--- a/arch/arm/boot/dts/omap3-beagle.dts
++++ b/arch/arm/boot/dts/omap3-beagle.dts
+@@ -304,39 +304,6 @@
+ 	phys = <0 &hsusb2_phy>;
+ };
+ 
+-/* Unusable as clocksource because of unreliable oscillator */
+-&counter32k {
+-	status = "disabled";
+-};
+-
+-/* Unusable as clockevent because if unreliable oscillator, allow to idle */
+-&timer1_target {
+-	/delete-property/ti,no-reset-on-init;
+-	/delete-property/ti,no-idle;
+-	timer@0 {
+-		/delete-property/ti,timer-alwon;
+-	};
+-};
+-
+-/* Preferred always-on timer for clocksource */
+-&timer12_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		/* Always clocked by secure_32k_fck */
+-	};
+-};
+-
+-/* Preferred timer for clockevent */
+-&timer2_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		assigned-clocks = <&gpt2_fck>;
+-		assigned-clock-parents = <&sys_ck>;
+-	};
+-};
+-
+ &twl_gpio {
+ 	ti,use-leds;
+ 	/* pullups: BIT(1) */
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+index 580ca497f3121..f8c5899fbdba0 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+@@ -185,10 +185,6 @@
+ 			cap-sd-highspeed;
+ 			cap-mmc-highspeed;
+ 			/* All direction control is used */
+-			st,sig-dir-cmd;
+-			st,sig-dir-dat0;
+-			st,sig-dir-dat2;
+-			st,sig-dir-dat31;
+ 			st,sig-pin-fbclk;
+ 			full-pwr-cycle;
+ 			vmmc-supply = <&ab8500_ldo_aux3_reg>;
+diff --git a/arch/arm/mach-socfpga/Kconfig b/arch/arm/mach-socfpga/Kconfig
+index 43ddec677c0b3..594edf9bbea44 100644
+--- a/arch/arm/mach-socfpga/Kconfig
++++ b/arch/arm/mach-socfpga/Kconfig
+@@ -2,6 +2,7 @@
+ menuconfig ARCH_INTEL_SOCFPGA
+ 	bool "Altera SOCFPGA family"
+ 	depends on ARCH_MULTI_V7
++	select ARCH_HAS_RESET_CONTROLLER
+ 	select ARCH_SUPPORTS_BIG_ENDIAN
+ 	select ARM_AMBA
+ 	select ARM_GIC
+@@ -18,6 +19,7 @@ menuconfig ARCH_INTEL_SOCFPGA
+ 	select PL310_ERRATA_727915
+ 	select PL310_ERRATA_753970 if PL310
+ 	select PL310_ERRATA_769419
++	select RESET_CONTROLLER
+ 
+ if ARCH_INTEL_SOCFPGA
+ config SOCFPGA_SUSPEND
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index c4207cf9bb17f..ae0e93871ee5f 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -670,15 +670,26 @@ config ARM64_ERRATUM_1508412
+ config ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE
+ 	bool
+ 
++config ARM64_ERRATUM_2051678
++	bool "Cortex-A510: 2051678: disable Hardware Update of the page table dirty bit"
++	default y
++	help
++	  This option adds the workaround for ARM Cortex-A510 erratum 2051678.
++	  Affected Cortex-A510 cores might not respect the ordering rules for
++	  hardware update of the page table's dirty bit. The workaround
++	  is to not enable the feature on affected CPUs.
++
++	  If unsure, say Y.
++
+ config ARM64_ERRATUM_2119858
+-	bool "Cortex-A710: 2119858: workaround TRBE overwriting trace data in FILL mode"
++	bool "Cortex-A710/X2: 2119858: workaround TRBE overwriting trace data in FILL mode"
+ 	default y
+ 	depends on CORESIGHT_TRBE
+ 	select ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE
+ 	help
+-	  This option adds the workaround for ARM Cortex-A710 erratum 2119858.
++	  This option adds the workaround for ARM Cortex-A710/X2 erratum 2119858.
+ 
+-	  Affected Cortex-A710 cores could overwrite up to 3 cache lines of trace
++	  Affected Cortex-A710/X2 cores could overwrite up to 3 cache lines of trace
+ 	  data at the base of the buffer (pointed to by TRBASER_EL1) in FILL mode in
+ 	  the event of a WRAP event.
+ 
+@@ -761,14 +772,14 @@ config ARM64_ERRATUM_2253138
+ 	  If unsure, say Y.
+ 
+ config ARM64_ERRATUM_2224489
+-	bool "Cortex-A710: 2224489: workaround TRBE writing to address out-of-range"
++	bool "Cortex-A710/X2: 2224489: workaround TRBE writing to address out-of-range"
+ 	depends on CORESIGHT_TRBE
+ 	default y
+ 	select ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
+ 	help
+-	  This option adds the workaround for ARM Cortex-A710 erratum 2224489.
++	  This option adds the workaround for ARM Cortex-A710/X2 erratum 2224489.
+ 
+-	  Affected Cortex-A710 cores might write to an out-of-range address, not reserved
++	  Affected Cortex-A710/X2 cores might write to an out-of-range address, not reserved
+ 	  for TRBE. Under some conditions, the TRBE might generate a write to the next
+ 	  virtually addressed page following the last page of the TRBE address space
+ 	  (i.e., the TRBLIMITR_EL1.LIMIT), instead of wrapping around to the base.
+@@ -778,6 +789,65 @@ config ARM64_ERRATUM_2224489
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_2064142
++	bool "Cortex-A510: 2064142: workaround TRBE register writes while disabled"
++	depends on COMPILE_TEST # Until the CoreSight TRBE driver changes are in
++	default y
++	help
++	  This option adds the workaround for ARM Cortex-A510 erratum 2064142.
++
++	  Affected Cortex-A510 cores might fail to write into system registers after the
++	  TRBE has been disabled. Under some conditions after the TRBE has been disabled,
++	  writes into TRBE registers TRBLIMITR_EL1, TRBPTR_EL1, TRBBASER_EL1, TRBSR_EL1,
++	  and TRBTRG_EL1 will be ignored and will not take effect.
++
++	  Work around this in the driver by executing TSB CSYNC and DSB after collection
++	  is stopped and before performing a system register write to one of the affected
++	  registers.
++
++	  If unsure, say Y.
++
++config ARM64_ERRATUM_2038923
++	bool "Cortex-A510: 2038923: workaround TRBE corruption with enable"
++	depends on COMPILE_TEST # Until the CoreSight TRBE driver changes are in
++	default y
++	help
++	  This option adds the workaround for ARM Cortex-A510 erratum 2038923.
++
++	  Affected Cortex-A510 cores might cause an inconsistent view on whether trace is
++	  prohibited within the CPU. As a result, the trace buffer or trace buffer state
++	  might be corrupted. This happens after the TRBE buffer has been enabled by setting
++	  TRBLIMITR_EL1.E, followed by just a single context synchronization event before
++	  execution changes from a context in which trace is prohibited to one where it
++	  isn't, or vice versa. Under these conditions, the view of whether trace
++	  is prohibited is inconsistent between parts of the CPU, and the trace buffer or
++	  the trace buffer state might be corrupted.
++
++	  Work around this in the driver by preventing an inconsistent view of whether the
++	  trace is prohibited or not based on TRBLIMITR_EL1.E by immediately following a
++	  change to TRBLIMITR_EL1.E with at least one ISB instruction before an ERET, or
++	  two ISB instructions if no ERET is to take place.
++
++	  If unsure, say Y.
++
++config ARM64_ERRATUM_1902691
++	bool "Cortex-A510: 1902691: workaround TRBE trace corruption"
++	depends on COMPILE_TEST # Until the CoreSight TRBE driver changes are in
++	default y
++	help
++	  This option adds the workaround for ARM Cortex-A510 erratum 1902691.
++
++	  Affected Cortex-A510 cores might corrupt trace data while it is being written
++	  into memory. Effectively TRBE is broken and hence cannot be used to capture
++	  trace data.
++
++	  Work around this problem in the driver by preventing TRBE initialization on
++	  affected CPUs. The firmware must have disabled access to TRBE for the kernel
++	  on such implementations. This will cover the kernel for any firmware that doesn't
++	  do this already.
++
++	  If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ 	bool "Cavium erratum 22375, 24313"
+ 	default y
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index 3e968b2441918..fd3fa82e4c330 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -17,7 +17,7 @@
+ 		rtc1 = &vrtc;
+ 	};
+ 
+-	dioo2133: audio-amplifier-0 {
++	dio2133: audio-amplifier-0 {
+ 		compatible = "simple-audio-amplifier";
+ 		enable-gpios = <&gpio_ao GPIOAO_2 GPIO_ACTIVE_HIGH>;
+ 		VCC-supply = <&vcc_5v>;
+@@ -219,7 +219,7 @@
+ 		audio-widgets = "Line", "Lineout";
+ 		audio-aux-devs = <&tdmout_b>, <&tdmout_c>, <&tdmin_a>,
+ 				 <&tdmin_b>, <&tdmin_c>, <&tdmin_lb>,
+-				 <&dioo2133>;
++				 <&dio2133>;
+ 		audio-routing = "TDMOUT_B IN 0", "FRDDR_A OUT 1",
+ 				"TDMOUT_B IN 1", "FRDDR_B OUT 1",
+ 				"TDMOUT_B IN 2", "FRDDR_C OUT 1",
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+index 212c6aa5a3b86..5751c48620edf 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+@@ -123,7 +123,7 @@
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <3300000>;
+ 
+-		enable-gpio = <&gpio GPIOE_2 GPIO_ACTIVE_HIGH>;
++		enable-gpio = <&gpio_ao GPIOE_2 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 		regulator-always-on;
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi
+index 5779e70caccd3..76ad052fbf0c9 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi
+@@ -48,7 +48,7 @@
+ 		regulator-max-microvolt = <3300000>;
+ 		vin-supply = <&vcc_5v>;
+ 
+-		enable-gpio = <&gpio GPIOE_2 GPIO_ACTIVE_HIGH>;
++		enable-gpio = <&gpio_ao GPIOE_2 GPIO_OPEN_DRAIN>;
+ 		enable-active-high;
+ 		regulator-always-on;
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 71bf497f99c25..58e43eb4151ec 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -526,7 +526,7 @@
+ 				assigned-clock-rates = <0>, <0>, <0>, <594000000>;
+ 				status = "disabled";
+ 
+-				port@0 {
++				port {
+ 					lcdif_mipi_dsi: endpoint {
+ 						remote-endpoint = <&mipi_dsi_lcdif_in>;
+ 					};
+@@ -1123,8 +1123,8 @@
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 
+-					port@0 {
+-						reg = <0>;
++					port@1 {
++						reg = <1>;
+ 
+ 						csi1_mipi_ep: endpoint {
+ 							remote-endpoint = <&csi1_ep>;
+@@ -1175,8 +1175,8 @@
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 
+-					port@0 {
+-						reg = <0>;
++					port@1 {
++						reg = <1>;
+ 
+ 						csi2_mipi_ep: endpoint {
+ 							remote-endpoint = <&csi2_ep>;
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index e8fdc10395b6a..999b9149f8568 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -75,6 +75,7 @@
+ #define ARM_CPU_PART_CORTEX_A77		0xD0D
+ #define ARM_CPU_PART_CORTEX_A510	0xD46
+ #define ARM_CPU_PART_CORTEX_A710	0xD47
++#define ARM_CPU_PART_CORTEX_X2		0xD48
+ #define ARM_CPU_PART_NEOVERSE_N2	0xD49
+ 
+ #define APM_CPU_PART_POTENZA		0x000
+@@ -118,6 +119,7 @@
+ #define MIDR_CORTEX_A77	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
+ #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
++#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
+ #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
+ #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 9e1c1aef9ebd6..066098198c248 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -347,6 +347,7 @@ static const struct midr_range trbe_overwrite_fill_mode_cpus[] = {
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_2119858
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++	MIDR_RANGE(MIDR_CORTEX_X2, 0, 0, 2, 0),
+ #endif
+ 	{},
+ };
+@@ -371,6 +372,7 @@ static struct midr_range trbe_write_out_of_range_cpus[] = {
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_2224489
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++	MIDR_RANGE(MIDR_CORTEX_X2, 0, 0, 2, 0),
+ #endif
+ 	{},
+ };
+@@ -597,6 +599,33 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+ 		CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus),
+ 	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_2064142
++	{
++		.desc = "ARM erratum 2064142",
++		.capability = ARM64_WORKAROUND_2064142,
++
++		/* Cortex-A510 r0p0 - r0p2 */
++		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2)
++	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_2038923
++	{
++		.desc = "ARM erratum 2038923",
++		.capability = ARM64_WORKAROUND_2038923,
++
++		/* Cortex-A510 r0p0 - r0p2 */
++		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2)
++	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_1902691
++	{
++		.desc = "ARM erratum 1902691",
++		.capability = ARM64_WORKAROUND_1902691,
++
++		/* Cortex-A510 r0p0 - r0p1 */
++		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 1)
++	},
+ #endif
+ 	{
+ 	}
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 6f3e677d88f15..d18b953c078db 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1634,6 +1634,9 @@ static bool cpu_has_broken_dbm(void)
+ 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+ 		/* Kryo4xx Silver (rdpe => r1p0) */
+ 		MIDR_REV(MIDR_QCOM_KRYO_4XX_SILVER, 0xd, 0xe),
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_2051678
++		MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2),
+ #endif
+ 		{},
+ 	};
+diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
+index 870c39537dd09..e7719e8f18def 100644
+--- a/arch/arm64/tools/cpucaps
++++ b/arch/arm64/tools/cpucaps
+@@ -55,6 +55,9 @@ WORKAROUND_1418040
+ WORKAROUND_1463225
+ WORKAROUND_1508412
+ WORKAROUND_1542419
++WORKAROUND_2064142
++WORKAROUND_2038923
++WORKAROUND_1902691
+ WORKAROUND_TRBE_OVERWRITE_FILL_MODE
+ WORKAROUND_TSB_FLUSH_FAILURE
+ WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
+diff --git a/arch/mips/cavium-octeon/octeon-memcpy.S b/arch/mips/cavium-octeon/octeon-memcpy.S
+index 0a515cde1c183..25860fba6218d 100644
+--- a/arch/mips/cavium-octeon/octeon-memcpy.S
++++ b/arch/mips/cavium-octeon/octeon-memcpy.S
+@@ -74,7 +74,7 @@
+ #define EXC(inst_reg,addr,handler)		\
+ 9:	inst_reg, addr;				\
+ 	.section __ex_table,"a";		\
+-	PTR	9b, handler;			\
++	PTR_WD	9b, handler;			\
+ 	.previous
+ 
+ /*
+diff --git a/arch/mips/include/asm/asm.h b/arch/mips/include/asm/asm.h
+index 2f8ce94ebaafe..cc69f1deb1ca8 100644
+--- a/arch/mips/include/asm/asm.h
++++ b/arch/mips/include/asm/asm.h
+@@ -276,7 +276,7 @@ symbol		=	value
+ 
+ #define PTR_SCALESHIFT	2
+ 
+-#define PTR		.word
++#define PTR_WD		.word
+ #define PTRSIZE		4
+ #define PTRLOG		2
+ #endif
+@@ -301,7 +301,7 @@ symbol		=	value
+ 
+ #define PTR_SCALESHIFT	3
+ 
+-#define PTR		.dword
++#define PTR_WD		.dword
+ #define PTRSIZE		8
+ #define PTRLOG		3
+ #endif
+diff --git a/arch/mips/include/asm/ftrace.h b/arch/mips/include/asm/ftrace.h
+index b463f2aa5a613..db497a8167da2 100644
+--- a/arch/mips/include/asm/ftrace.h
++++ b/arch/mips/include/asm/ftrace.h
+@@ -32,7 +32,7 @@ do {							\
+ 		".previous\n"				\
+ 							\
+ 		".section\t__ex_table,\"a\"\n\t"	\
+-		STR(PTR) "\t1b, 3b\n\t"			\
++		STR(PTR_WD) "\t1b, 3b\n\t"		\
+ 		".previous\n"				\
+ 							\
+ 		: [tmp_dst] "=&r" (dst), [tmp_err] "=r" (error)\
+@@ -54,7 +54,7 @@ do {						\
+ 		".previous\n"			\
+ 						\
+ 		".section\t__ex_table,\"a\"\n\t"\
+-		STR(PTR) "\t1b, 3b\n\t"		\
++		STR(PTR_WD) "\t1b, 3b\n\t"	\
+ 		".previous\n"			\
+ 						\
+ 		: [tmp_err] "=r" (error)	\
+diff --git a/arch/mips/include/asm/r4kcache.h b/arch/mips/include/asm/r4kcache.h
+index af3788589ee6d..431a1c9d53fc7 100644
+--- a/arch/mips/include/asm/r4kcache.h
++++ b/arch/mips/include/asm/r4kcache.h
+@@ -119,7 +119,7 @@ static inline void flush_scache_line(unsigned long addr)
+ 	"	j	2b			\n"		\
+ 	"	.previous			\n"		\
+ 	"	.section __ex_table,\"a\"	\n"		\
+-	"	"STR(PTR)" 1b, 3b		\n"		\
++	"	"STR(PTR_WD)" 1b, 3b		\n"		\
+ 	"	.previous"					\
+ 	: "+r" (__err)						\
+ 	: "i" (op), "r" (addr), "i" (-EFAULT));			\
+@@ -142,7 +142,7 @@ static inline void flush_scache_line(unsigned long addr)
+ 	"	j	2b			\n"		\
+ 	"	.previous			\n"		\
+ 	"	.section __ex_table,\"a\"	\n"		\
+-	"	"STR(PTR)" 1b, 3b		\n"		\
++	"	"STR(PTR_WD)" 1b, 3b		\n"		\
+ 	"	.previous"					\
+ 	: "+r" (__err)						\
+ 	: "i" (op), "r" (addr), "i" (-EFAULT));			\
+diff --git a/arch/mips/include/asm/unaligned-emul.h b/arch/mips/include/asm/unaligned-emul.h
+index 2022b18944b97..9af0f4d3d288c 100644
+--- a/arch/mips/include/asm/unaligned-emul.h
++++ b/arch/mips/include/asm/unaligned-emul.h
+@@ -20,8 +20,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -41,8 +41,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -74,10 +74,10 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (value), "=r" (res)	    \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -102,8 +102,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -125,8 +125,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -145,8 +145,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -178,10 +178,10 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (value), "=r" (res)	    \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -223,14 +223,14 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
+-		STR(PTR)"\t5b, 11b\n\t"		    \
+-		STR(PTR)"\t6b, 11b\n\t"		    \
+-		STR(PTR)"\t7b, 11b\n\t"		    \
+-		STR(PTR)"\t8b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
++		STR(PTR_WD)"\t5b, 11b\n\t"	    \
++		STR(PTR_WD)"\t6b, 11b\n\t"	    \
++		STR(PTR_WD)"\t7b, 11b\n\t"	    \
++		STR(PTR_WD)"\t8b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (value), "=r" (res)	    \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -255,8 +255,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"              \
+-		STR(PTR)"\t2b, 4b\n\t"              \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=r" (res)                        \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));\
+@@ -276,8 +276,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=r" (res)                                \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));  \
+@@ -296,8 +296,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=r" (res)                                \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));  \
+@@ -325,10 +325,10 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (res)				    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+@@ -365,14 +365,14 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
+-		STR(PTR)"\t5b, 11b\n\t"		    \
+-		STR(PTR)"\t6b, 11b\n\t"		    \
+-		STR(PTR)"\t7b, 11b\n\t"		    \
+-		STR(PTR)"\t8b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
++		STR(PTR_WD)"\t5b, 11b\n\t"	    \
++		STR(PTR_WD)"\t6b, 11b\n\t"	    \
++		STR(PTR_WD)"\t7b, 11b\n\t"	    \
++		STR(PTR_WD)"\t8b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (res)				    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+@@ -398,8 +398,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -419,8 +419,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -452,10 +452,10 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (value), "=r" (res)	    \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -481,8 +481,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -504,8 +504,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -524,8 +524,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=&r" (value), "=r" (res)         \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -557,10 +557,10 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (value), "=r" (res)	    \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -602,14 +602,14 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
+-		STR(PTR)"\t5b, 11b\n\t"		    \
+-		STR(PTR)"\t6b, 11b\n\t"		    \
+-		STR(PTR)"\t7b, 11b\n\t"		    \
+-		STR(PTR)"\t8b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
++		STR(PTR_WD)"\t5b, 11b\n\t"	    \
++		STR(PTR_WD)"\t6b, 11b\n\t"	    \
++		STR(PTR_WD)"\t7b, 11b\n\t"	    \
++		STR(PTR_WD)"\t8b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (value), "=r" (res)	    \
+ 		: "r" (addr), "i" (-EFAULT));       \
+@@ -632,8 +632,8 @@ do {                                                 \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=r" (res)                        \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));\
+@@ -653,8 +653,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=r" (res)                                \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));  \
+@@ -673,8 +673,8 @@ do {                                                \
+ 		"j\t3b\n\t"                         \
+ 		".previous\n\t"                     \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 4b\n\t"               \
+-		STR(PTR)"\t2b, 4b\n\t"               \
++		STR(PTR_WD)"\t1b, 4b\n\t"           \
++		STR(PTR_WD)"\t2b, 4b\n\t"           \
+ 		".previous"                         \
+ 		: "=r" (res)                                \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));  \
+@@ -703,10 +703,10 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (res)				    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+@@ -743,14 +743,14 @@ do {                                                \
+ 		"j\t10b\n\t"			    \
+ 		".previous\n\t"			    \
+ 		".section\t__ex_table,\"a\"\n\t"    \
+-		STR(PTR)"\t1b, 11b\n\t"		    \
+-		STR(PTR)"\t2b, 11b\n\t"		    \
+-		STR(PTR)"\t3b, 11b\n\t"		    \
+-		STR(PTR)"\t4b, 11b\n\t"		    \
+-		STR(PTR)"\t5b, 11b\n\t"		    \
+-		STR(PTR)"\t6b, 11b\n\t"		    \
+-		STR(PTR)"\t7b, 11b\n\t"		    \
+-		STR(PTR)"\t8b, 11b\n\t"		    \
++		STR(PTR_WD)"\t1b, 11b\n\t"	    \
++		STR(PTR_WD)"\t2b, 11b\n\t"	    \
++		STR(PTR_WD)"\t3b, 11b\n\t"	    \
++		STR(PTR_WD)"\t4b, 11b\n\t"	    \
++		STR(PTR_WD)"\t5b, 11b\n\t"	    \
++		STR(PTR_WD)"\t6b, 11b\n\t"	    \
++		STR(PTR_WD)"\t7b, 11b\n\t"	    \
++		STR(PTR_WD)"\t8b, 11b\n\t"	    \
+ 		".previous"			    \
+ 		: "=&r" (res)				    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+diff --git a/arch/mips/kernel/mips-r2-to-r6-emul.c b/arch/mips/kernel/mips-r2-to-r6-emul.c
+index a39ec755e4c24..750fe569862b6 100644
+--- a/arch/mips/kernel/mips-r2-to-r6-emul.c
++++ b/arch/mips/kernel/mips-r2-to-r6-emul.c
+@@ -1258,10 +1258,10 @@ fpu_emul:
+ 			"	j	10b\n"
+ 			"	.previous\n"
+ 			"	.section	__ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
+ 			"	.previous\n"
+ 			"	.set	pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -1333,10 +1333,10 @@ fpu_emul:
+ 			"	j	10b\n"
+ 			"       .previous\n"
+ 			"	.section	__ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
+ 			"	.previous\n"
+ 			"	.set	pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -1404,10 +1404,10 @@ fpu_emul:
+ 			"	j	9b\n"
+ 			"	.previous\n"
+ 			"	.section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
+ 			"	.previous\n"
+ 			"	.set	pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -1474,10 +1474,10 @@ fpu_emul:
+ 			"	j	9b\n"
+ 			"	.previous\n"
+ 			"	.section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
+ 			"	.previous\n"
+ 			"	.set	pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -1589,14 +1589,14 @@ fpu_emul:
+ 			"	j	9b\n"
+ 			"	.previous\n"
+ 			"	.section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
+-			STR(PTR) " 5b,8b\n"
+-			STR(PTR) " 6b,8b\n"
+-			STR(PTR) " 7b,8b\n"
+-			STR(PTR) " 0b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
++			STR(PTR_WD) " 5b,8b\n"
++			STR(PTR_WD) " 6b,8b\n"
++			STR(PTR_WD) " 7b,8b\n"
++			STR(PTR_WD) " 0b,8b\n"
+ 			"	.previous\n"
+ 			"	.set	pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -1708,14 +1708,14 @@ fpu_emul:
+ 			"	j      9b\n"
+ 			"	.previous\n"
+ 			"	.section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
+-			STR(PTR) " 5b,8b\n"
+-			STR(PTR) " 6b,8b\n"
+-			STR(PTR) " 7b,8b\n"
+-			STR(PTR) " 0b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
++			STR(PTR_WD) " 5b,8b\n"
++			STR(PTR_WD) " 6b,8b\n"
++			STR(PTR_WD) " 7b,8b\n"
++			STR(PTR_WD) " 0b,8b\n"
+ 			"	.previous\n"
+ 			"	.set    pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -1827,14 +1827,14 @@ fpu_emul:
+ 			"	j	9b\n"
+ 			"	.previous\n"
+ 			"	.section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
+-			STR(PTR) " 5b,8b\n"
+-			STR(PTR) " 6b,8b\n"
+-			STR(PTR) " 7b,8b\n"
+-			STR(PTR) " 0b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
++			STR(PTR_WD) " 5b,8b\n"
++			STR(PTR_WD) " 6b,8b\n"
++			STR(PTR_WD) " 7b,8b\n"
++			STR(PTR_WD) " 0b,8b\n"
+ 			"	.previous\n"
+ 			"	.set	pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -1945,14 +1945,14 @@ fpu_emul:
+ 			"       j	9b\n"
+ 			"       .previous\n"
+ 			"       .section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,8b\n"
+-			STR(PTR) " 2b,8b\n"
+-			STR(PTR) " 3b,8b\n"
+-			STR(PTR) " 4b,8b\n"
+-			STR(PTR) " 5b,8b\n"
+-			STR(PTR) " 6b,8b\n"
+-			STR(PTR) " 7b,8b\n"
+-			STR(PTR) " 0b,8b\n"
++			STR(PTR_WD) " 1b,8b\n"
++			STR(PTR_WD) " 2b,8b\n"
++			STR(PTR_WD) " 3b,8b\n"
++			STR(PTR_WD) " 4b,8b\n"
++			STR(PTR_WD) " 5b,8b\n"
++			STR(PTR_WD) " 6b,8b\n"
++			STR(PTR_WD) " 7b,8b\n"
++			STR(PTR_WD) " 0b,8b\n"
+ 			"       .previous\n"
+ 			"       .set	pop\n"
+ 			: "+&r"(rt), "=&r"(rs),
+@@ -2007,7 +2007,7 @@ fpu_emul:
+ 			"j	2b\n"
+ 			".previous\n"
+ 			".section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,3b\n"
++			STR(PTR_WD) " 1b,3b\n"
+ 			".previous\n"
+ 			: "=&r"(res), "+&r"(err)
+ 			: "r"(vaddr), "i"(SIGSEGV)
+@@ -2065,7 +2065,7 @@ fpu_emul:
+ 			"j	2b\n"
+ 			".previous\n"
+ 			".section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,3b\n"
++			STR(PTR_WD) " 1b,3b\n"
+ 			".previous\n"
+ 			: "+&r"(res), "+&r"(err)
+ 			: "r"(vaddr), "i"(SIGSEGV));
+@@ -2126,7 +2126,7 @@ fpu_emul:
+ 			"j	2b\n"
+ 			".previous\n"
+ 			".section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,3b\n"
++			STR(PTR_WD) " 1b,3b\n"
+ 			".previous\n"
+ 			: "=&r"(res), "+&r"(err)
+ 			: "r"(vaddr), "i"(SIGSEGV)
+@@ -2189,7 +2189,7 @@ fpu_emul:
+ 			"j	2b\n"
+ 			".previous\n"
+ 			".section        __ex_table,\"a\"\n"
+-			STR(PTR) " 1b,3b\n"
++			STR(PTR_WD) " 1b,3b\n"
+ 			".previous\n"
+ 			: "+&r"(res), "+&r"(err)
+ 			: "r"(vaddr), "i"(SIGSEGV));
+diff --git a/arch/mips/kernel/r2300_fpu.S b/arch/mips/kernel/r2300_fpu.S
+index cbf6db98cfb38..2748c55820c24 100644
+--- a/arch/mips/kernel/r2300_fpu.S
++++ b/arch/mips/kernel/r2300_fpu.S
+@@ -23,14 +23,14 @@
+ #define EX(a,b)							\
+ 9:	a,##b;							\
+ 	.section __ex_table,"a";				\
+-	PTR	9b,fault;					\
++	PTR_WD	9b,fault;					\
+ 	.previous
+ 
+ #define EX2(a,b)						\
+ 9:	a,##b;							\
+ 	.section __ex_table,"a";				\
+-	PTR	9b,fault;					\
+-	PTR	9b+4,fault;					\
++	PTR_WD	9b,fault;					\
++	PTR_WD	9b+4,fault;					\
+ 	.previous
+ 
+ 	.set	mips1
+diff --git a/arch/mips/kernel/r4k_fpu.S b/arch/mips/kernel/r4k_fpu.S
+index b91e911064756..2e687c60bc4f1 100644
+--- a/arch/mips/kernel/r4k_fpu.S
++++ b/arch/mips/kernel/r4k_fpu.S
+@@ -31,7 +31,7 @@
+ .ex\@:	\insn	\reg, \src
+ 	.set	pop
+ 	.section __ex_table,"a"
+-	PTR	.ex\@, fault
++	PTR_WD	.ex\@, fault
+ 	.previous
+ 	.endm
+ 
+diff --git a/arch/mips/kernel/relocate_kernel.S b/arch/mips/kernel/relocate_kernel.S
+index f3c908abdbb80..cfde14b48fd8d 100644
+--- a/arch/mips/kernel/relocate_kernel.S
++++ b/arch/mips/kernel/relocate_kernel.S
+@@ -147,10 +147,10 @@ LEAF(kexec_smp_wait)
+ 
+ kexec_args:
+ 	EXPORT(kexec_args)
+-arg0:	PTR		0x0
+-arg1:	PTR		0x0
+-arg2:	PTR		0x0
+-arg3:	PTR		0x0
++arg0:	PTR_WD		0x0
++arg1:	PTR_WD		0x0
++arg2:	PTR_WD		0x0
++arg3:	PTR_WD		0x0
+ 	.size	kexec_args,PTRSIZE*4
+ 
+ #ifdef CONFIG_SMP
+@@ -161,10 +161,10 @@ arg3:	PTR		0x0
+  */
+ secondary_kexec_args:
+ 	EXPORT(secondary_kexec_args)
+-s_arg0: PTR		0x0
+-s_arg1: PTR		0x0
+-s_arg2: PTR		0x0
+-s_arg3: PTR		0x0
++s_arg0: PTR_WD		0x0
++s_arg1: PTR_WD		0x0
++s_arg2: PTR_WD		0x0
++s_arg3: PTR_WD		0x0
+ 	.size	secondary_kexec_args,PTRSIZE*4
+ kexec_flag:
+ 	LONG		0x1
+@@ -173,17 +173,17 @@ kexec_flag:
+ 
+ kexec_start_address:
+ 	EXPORT(kexec_start_address)
+-	PTR		0x0
++	PTR_WD		0x0
+ 	.size		kexec_start_address, PTRSIZE
+ 
+ kexec_indirection_page:
+ 	EXPORT(kexec_indirection_page)
+-	PTR		0
++	PTR_WD		0
+ 	.size		kexec_indirection_page, PTRSIZE
+ 
+ relocate_new_kernel_end:
+ 
+ relocate_new_kernel_size:
+ 	EXPORT(relocate_new_kernel_size)
+-	PTR		relocate_new_kernel_end - relocate_new_kernel
++	PTR_WD		relocate_new_kernel_end - relocate_new_kernel
+ 	.size		relocate_new_kernel_size, PTRSIZE
+diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S
+index b1b2e106f7118..9bfce5f75f601 100644
+--- a/arch/mips/kernel/scall32-o32.S
++++ b/arch/mips/kernel/scall32-o32.S
+@@ -72,10 +72,10 @@ loads_done:
+ 	.set	pop
+ 
+ 	.section __ex_table,"a"
+-	PTR	load_a4, bad_stack_a4
+-	PTR	load_a5, bad_stack_a5
+-	PTR	load_a6, bad_stack_a6
+-	PTR	load_a7, bad_stack_a7
++	PTR_WD	load_a4, bad_stack_a4
++	PTR_WD	load_a5, bad_stack_a5
++	PTR_WD	load_a6, bad_stack_a6
++	PTR_WD	load_a7, bad_stack_a7
+ 	.previous
+ 
+ 	lw	t0, TI_FLAGS($28)	# syscall tracing enabled?
+@@ -216,7 +216,7 @@ einval: li	v0, -ENOSYS
+ #endif /* CONFIG_MIPS_MT_FPAFF */
+ 
+ #define __SYSCALL_WITH_COMPAT(nr, native, compat)	__SYSCALL(nr, native)
+-#define __SYSCALL(nr, entry) 	PTR entry
++#define __SYSCALL(nr, entry) 	PTR_WD entry
+ 	.align	2
+ 	.type	sys_call_table, @object
+ EXPORT(sys_call_table)
+diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S
+index f650c55a17dc5..97456b2ca7dc3 100644
+--- a/arch/mips/kernel/scall64-n32.S
++++ b/arch/mips/kernel/scall64-n32.S
+@@ -101,7 +101,7 @@ not_n32_scall:
+ 
+ 	END(handle_sysn32)
+ 
+-#define __SYSCALL(nr, entry)	PTR entry
++#define __SYSCALL(nr, entry)	PTR_WD entry
+ 	.type	sysn32_call_table, @object
+ EXPORT(sysn32_call_table)
+ #include <asm/syscall_table_n32.h>
+diff --git a/arch/mips/kernel/scall64-n64.S b/arch/mips/kernel/scall64-n64.S
+index 5d7bfc65e4d0b..5f6ed4b4c3993 100644
+--- a/arch/mips/kernel/scall64-n64.S
++++ b/arch/mips/kernel/scall64-n64.S
+@@ -109,7 +109,7 @@ illegal_syscall:
+ 	j	n64_syscall_exit
+ 	END(handle_sys64)
+ 
+-#define __SYSCALL(nr, entry)	PTR entry
++#define __SYSCALL(nr, entry)	PTR_WD entry
+ 	.align	3
+ 	.type	sys_call_table, @object
+ EXPORT(sys_call_table)
+diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
+index cedc8bd888046..d3c2616cba226 100644
+--- a/arch/mips/kernel/scall64-o32.S
++++ b/arch/mips/kernel/scall64-o32.S
+@@ -73,10 +73,10 @@ load_a7: lw	a7, 28(t0)		# argument #8 from usp
+ loads_done:
+ 
+ 	.section __ex_table,"a"
+-	PTR	load_a4, bad_stack_a4
+-	PTR	load_a5, bad_stack_a5
+-	PTR	load_a6, bad_stack_a6
+-	PTR	load_a7, bad_stack_a7
++	PTR_WD	load_a4, bad_stack_a4
++	PTR_WD	load_a5, bad_stack_a5
++	PTR_WD	load_a6, bad_stack_a6
++	PTR_WD	load_a7, bad_stack_a7
+ 	.previous
+ 
+ 	li	t1, _TIF_WORK_SYSCALL_ENTRY
+@@ -214,7 +214,7 @@ einval: li	v0, -ENOSYS
+ 	END(sys32_syscall)
+ 
+ #define __SYSCALL_WITH_COMPAT(nr, native, compat)	__SYSCALL(nr, compat)
+-#define __SYSCALL(nr, entry)	PTR entry
++#define __SYSCALL(nr, entry)	PTR_WD entry
+ 	.align	3
+ 	.type	sys32_call_table,@object
+ EXPORT(sys32_call_table)
+diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c
+index 5512cd586e6e8..ae93a607ddf7e 100644
+--- a/arch/mips/kernel/syscall.c
++++ b/arch/mips/kernel/syscall.c
+@@ -122,8 +122,8 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
+ 		"	j	3b					\n"
+ 		"	.previous					\n"
+ 		"	.section __ex_table,\"a\"			\n"
+-		"	"STR(PTR)"	1b, 4b				\n"
+-		"	"STR(PTR)"	2b, 4b				\n"
++		"	"STR(PTR_WD)"	1b, 4b				\n"
++		"	"STR(PTR_WD)"	2b, 4b				\n"
+ 		"	.previous					\n"
+ 		"	.set	pop					\n"
+ 		: [old] "=&r" (old),
+@@ -152,8 +152,8 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
+ 		"	j	3b					\n"
+ 		"	.previous					\n"
+ 		"	.section __ex_table,\"a\"			\n"
+-		"	"STR(PTR)"	1b, 5b				\n"
+-		"	"STR(PTR)"	2b, 5b				\n"
++		"	"STR(PTR_WD)"	1b, 5b				\n"
++		"	"STR(PTR_WD)"	2b, 5b				\n"
+ 		"	.previous					\n"
+ 		"	.set	pop					\n"
+ 		: [old] "=&r" (old),
+diff --git a/arch/mips/lib/csum_partial.S b/arch/mips/lib/csum_partial.S
+index a46db08071953..7767137c3e49a 100644
+--- a/arch/mips/lib/csum_partial.S
++++ b/arch/mips/lib/csum_partial.S
+@@ -347,7 +347,7 @@ EXPORT_SYMBOL(csum_partial)
+ 	.if \mode == LEGACY_MODE;		\
+ 9:		insn reg, addr;			\
+ 		.section __ex_table,"a";	\
+-		PTR	9b, .L_exc;		\
++		PTR_WD	9b, .L_exc;		\
+ 		.previous;			\
+ 	/* This is enabled in EVA mode */	\
+ 	.else;					\
+@@ -356,7 +356,7 @@ EXPORT_SYMBOL(csum_partial)
+ 		    ((\to == USEROP) && (type == ST_INSN));	\
+ 9:			__BUILD_EVA_INSN(insn##e, reg, addr);	\
+ 			.section __ex_table,"a";		\
+-			PTR	9b, .L_exc;			\
++			PTR_WD	9b, .L_exc;			\
+ 			.previous;				\
+ 		.else;						\
+ 			/* EVA without exception */		\
+diff --git a/arch/mips/lib/memcpy.S b/arch/mips/lib/memcpy.S
+index 277c32296636d..18a43f2e29c81 100644
+--- a/arch/mips/lib/memcpy.S
++++ b/arch/mips/lib/memcpy.S
+@@ -116,7 +116,7 @@
+ 	.if \mode == LEGACY_MODE;				\
+ 9:		insn reg, addr;					\
+ 		.section __ex_table,"a";			\
+-		PTR	9b, handler;				\
++		PTR_WD	9b, handler;				\
+ 		.previous;					\
+ 	/* This is assembled in EVA mode */			\
+ 	.else;							\
+@@ -125,7 +125,7 @@
+ 		    ((\to == USEROP) && (type == ST_INSN));	\
+ 9:			__BUILD_EVA_INSN(insn##e, reg, addr);	\
+ 			.section __ex_table,"a";		\
+-			PTR	9b, handler;			\
++			PTR_WD	9b, handler;			\
+ 			.previous;				\
+ 		.else;						\
+ 			/*					\
+diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
+index b0baa3c79fad0..0b342bae9a98c 100644
+--- a/arch/mips/lib/memset.S
++++ b/arch/mips/lib/memset.S
+@@ -52,7 +52,7 @@
+ 9:		___BUILD_EVA_INSN(insn, reg, addr);	\
+ 	.endif;						\
+ 	.section __ex_table,"a";			\
+-	PTR	9b, handler;				\
++	PTR_WD	9b, handler;				\
+ 	.previous
+ 
+ 	.macro	f_fill64 dst, offset, val, fixup, mode
+diff --git a/arch/mips/lib/strncpy_user.S b/arch/mips/lib/strncpy_user.S
+index 556acf684d7be..13aaa9927ad12 100644
+--- a/arch/mips/lib/strncpy_user.S
++++ b/arch/mips/lib/strncpy_user.S
+@@ -15,7 +15,7 @@
+ #define EX(insn,reg,addr,handler)			\
+ 9:	insn	reg, addr;				\
+ 	.section __ex_table,"a";			\
+-	PTR	9b, handler;				\
++	PTR_WD	9b, handler;				\
+ 	.previous
+ 
+ /*
+@@ -59,7 +59,7 @@ LEAF(__strncpy_from_user_asm)
+ 	jr		ra
+ 
+ 	.section	__ex_table,"a"
+-	PTR		1b, .Lfault
++	PTR_WD		1b, .Lfault
+ 	.previous
+ 
+ 	EXPORT_SYMBOL(__strncpy_from_user_asm)
+diff --git a/arch/mips/lib/strnlen_user.S b/arch/mips/lib/strnlen_user.S
+index 92b63f20ec05f..6de31b616f9c1 100644
+--- a/arch/mips/lib/strnlen_user.S
++++ b/arch/mips/lib/strnlen_user.S
+@@ -14,7 +14,7 @@
+ #define EX(insn,reg,addr,handler)			\
+ 9:	insn	reg, addr;				\
+ 	.section __ex_table,"a";			\
+-	PTR	9b, handler;				\
++	PTR_WD	9b, handler;				\
+ 	.previous
+ 
+ /*
+diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
+index 609c80f671943..f8b94f78403f1 100644
+--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
+@@ -178,6 +178,7 @@ static inline bool pte_user(pte_t pte)
+ #ifndef __ASSEMBLY__
+ 
+ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
++void unmap_kernel_page(unsigned long va);
+ 
+ #endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 33e073d6b0c41..875730d5af408 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -1082,6 +1082,8 @@ static inline int map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t p
+ 	return hash__map_kernel_page(ea, pa, prot);
+ }
+ 
++void unmap_kernel_page(unsigned long va);
++
+ static inline int __meminit vmemmap_create_mapping(unsigned long start,
+ 						   unsigned long page_size,
+ 						   unsigned long phys)
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 947b5b9c44241..a832aeafe5601 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -111,8 +111,10 @@ static inline void __set_fixmap(enum fixed_addresses idx,
+ 		BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
+ 	else if (WARN_ON(idx >= __end_of_fixed_addresses))
+ 		return;
+-
+-	map_kernel_page(__fix_to_virt(idx), phys, flags);
++	if (pgprot_val(flags))
++		map_kernel_page(__fix_to_virt(idx), phys, flags);
++	else
++		unmap_kernel_page(__fix_to_virt(idx));
+ }
+ 
+ #define __early_set_fixmap	__set_fixmap
+diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
+index b67742e2a9b22..d959c2a73fbf4 100644
+--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
+@@ -64,6 +64,7 @@ extern int icache_44x_need_flush;
+ #ifndef __ASSEMBLY__
+ 
+ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
++void unmap_kernel_page(unsigned long va);
+ 
+ #endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
+index 9d2905a474103..2225991c69b55 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
+@@ -308,6 +308,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
+ #define __swp_entry_to_pte(x)		__pte((x).val)
+ 
+ int map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot);
++void unmap_kernel_page(unsigned long va);
+ extern int __meminit vmemmap_create_mapping(unsigned long start,
+ 					    unsigned long page_size,
+ 					    unsigned long phys);
+diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
+index ce94823831442..b7385e637e3e3 100644
+--- a/arch/powerpc/mm/pgtable.c
++++ b/arch/powerpc/mm/pgtable.c
+@@ -203,6 +203,15 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+ 	__set_pte_at(mm, addr, ptep, pte, 0);
+ }
+ 
++void unmap_kernel_page(unsigned long va)
++{
++	pmd_t *pmdp = pmd_off_k(va);
++	pte_t *ptep = pte_offset_kernel(pmdp, va);
++
++	pte_clear(&init_mm, va, ptep);
++	flush_tlb_kernel_range(va, va + PAGE_SIZE);
++}
++
+ /*
+  * This is called when relaxing access to a PTE. It's also called in the page
+  * fault path when we don't hit any of the major fault cases, ie, a minor
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 8a107ed18b0dc..7d81102cffd48 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -50,6 +50,12 @@ riscv-march-$(CONFIG_ARCH_RV32I)	:= rv32ima
+ riscv-march-$(CONFIG_ARCH_RV64I)	:= rv64ima
+ riscv-march-$(CONFIG_FPU)		:= $(riscv-march-y)fd
+ riscv-march-$(CONFIG_RISCV_ISA_C)	:= $(riscv-march-y)c
++
++# Newer binutils versions default to ISA spec version 20191213 which moves some
++# instructions from the I extension to the Zicsr and Zifencei extensions.
++toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zicsr_zifencei)
++riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei
++
+ KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
+ KBUILD_AFLAGS += -march=$(riscv-march-y)
+ 
+diff --git a/arch/riscv/kernel/cpu-hotplug.c b/arch/riscv/kernel/cpu-hotplug.c
+index df84e0c13db18..66ddfba1cfbef 100644
+--- a/arch/riscv/kernel/cpu-hotplug.c
++++ b/arch/riscv/kernel/cpu-hotplug.c
+@@ -12,6 +12,7 @@
+ #include <linux/sched/hotplug.h>
+ #include <asm/irq.h>
+ #include <asm/cpu_ops.h>
++#include <asm/numa.h>
+ #include <asm/sbi.h>
+ 
+ void cpu_stop(void);
+@@ -46,6 +47,7 @@ int __cpu_disable(void)
+ 		return ret;
+ 
+ 	remove_cpu_topology(cpu);
++	numa_remove_cpu(cpu);
+ 	set_cpu_online(cpu, false);
+ 	irq_migrate_all_off_this_cpu();
+ 
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index f52f01ecbeea0..f11877b9fdf03 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -21,14 +21,13 @@
+ 	add \reg, \reg, t0
+ .endm
+ .macro XIP_FIXUP_FLASH_OFFSET reg
+-	la t1, __data_loc
+-	li t0, XIP_OFFSET_MASK
+-	and t1, t1, t0
+-	li t1, XIP_OFFSET
+-	sub t0, t0, t1
+-	sub \reg, \reg, t0
++	la t0, __data_loc
++	REG_L t1, _xip_phys_offset
++	sub \reg, \reg, t1
++	add \reg, \reg, t0
+ .endm
+ _xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
++_xip_phys_offset: .dword CONFIG_XIP_PHYS_ADDR + XIP_OFFSET
+ #else
+ .macro XIP_FIXUP_OFFSET reg
+ .endm
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 0fcdc0233faca..2e675d6ba324f 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -22,15 +22,16 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
+ 			     bool (*fn)(void *, unsigned long), void *arg)
+ {
+ 	unsigned long fp, sp, pc;
++	int level = 0;
+ 
+ 	if (regs) {
+ 		fp = frame_pointer(regs);
+ 		sp = user_stack_pointer(regs);
+ 		pc = instruction_pointer(regs);
+ 	} else if (task == NULL || task == current) {
+-		fp = (unsigned long)__builtin_frame_address(1);
+-		sp = (unsigned long)__builtin_frame_address(0);
+-		pc = (unsigned long)__builtin_return_address(0);
++		fp = (unsigned long)__builtin_frame_address(0);
++		sp = sp_in_global;
++		pc = (unsigned long)walk_stackframe;
+ 	} else {
+ 		/* task blocked in __switch_to */
+ 		fp = task->thread.s[0];
+@@ -42,7 +43,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
+ 		unsigned long low, high;
+ 		struct stackframe *frame;
+ 
+-		if (unlikely(!__kernel_text_address(pc) || !fn(arg, pc)))
++		if (unlikely(!__kernel_text_address(pc) || (level++ >= 1 && !fn(arg, pc))))
+ 			break;
+ 
+ 		/* Validate frame pointer */
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 6fd7532aeab94..5c063ea48277b 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -41,6 +41,7 @@ phys_addr_t phys_ram_base __ro_after_init;
+ EXPORT_SYMBOL(phys_ram_base);
+ 
+ #ifdef CONFIG_XIP_KERNEL
++#define phys_ram_base  (*(phys_addr_t *)XIP_FIXUP(&phys_ram_base))
+ extern char _xiprom[], _exiprom[], __data_loc;
+ #endif
+ 
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 2a5bb4f29cfed..0344c68f3ffde 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -947,6 +947,9 @@ config S390_GUEST
+ 
+ endmenu
+ 
++config S390_MODULES_SANITY_TEST_HELPERS
++	def_bool n
++
+ menu "Selftests"
+ 
+ config S390_UNWIND_SELFTEST
+@@ -973,4 +976,16 @@ config S390_KPROBES_SANITY_TEST
+ 
+ 	  Say N if you are unsure.
+ 
++config S390_MODULES_SANITY_TEST
++	def_tristate n
++	depends on KUNIT
++	default KUNIT_ALL_TESTS
++	prompt "Enable s390 specific modules tests"
++	select S390_MODULES_SANITY_TEST_HELPERS
++	help
++	  This option enables an s390 specific modules test. This option is
++	  not useful for distributions or general kernels, but only for
++	  kernel developers working on architecture code.
++
++	  Say N if you are unsure.
+ endmenu
+diff --git a/arch/s390/lib/Makefile b/arch/s390/lib/Makefile
+index 707cd4622c132..69feb8ed3312d 100644
+--- a/arch/s390/lib/Makefile
++++ b/arch/s390/lib/Makefile
+@@ -17,4 +17,7 @@ KASAN_SANITIZE_uaccess.o := n
+ obj-$(CONFIG_S390_UNWIND_SELFTEST) += test_unwind.o
+ CFLAGS_test_unwind.o += -fno-optimize-sibling-calls
+ 
++obj-$(CONFIG_S390_MODULES_SANITY_TEST) += test_modules.o
++obj-$(CONFIG_S390_MODULES_SANITY_TEST_HELPERS) += test_modules_helpers.o
++
+ lib-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
+diff --git a/arch/s390/lib/test_modules.c b/arch/s390/lib/test_modules.c
+new file mode 100644
+index 0000000000000..9894009fc1f25
+--- /dev/null
++++ b/arch/s390/lib/test_modules.c
+@@ -0,0 +1,32 @@
++// SPDX-License-Identifier: GPL-2.0+
++
++#include <kunit/test.h>
++#include <linux/module.h>
++
++#include "test_modules.h"
++
++/*
++ * Test that modules with many relocations are loaded properly.
++ */
++static void test_modules_many_vmlinux_relocs(struct kunit *test)
++{
++	int result = 0;
++
++#define CALL_RETURN(i) result += test_modules_return_ ## i()
++	REPEAT_10000(CALL_RETURN);
++	KUNIT_ASSERT_EQ(test, result, 49995000);
++}
++
++static struct kunit_case modules_testcases[] = {
++	KUNIT_CASE(test_modules_many_vmlinux_relocs),
++	{}
++};
++
++static struct kunit_suite modules_test_suite = {
++	.name = "modules_test_s390",
++	.test_cases = modules_testcases,
++};
++
++kunit_test_suites(&modules_test_suite);
++
++MODULE_LICENSE("GPL");
+diff --git a/arch/s390/lib/test_modules.h b/arch/s390/lib/test_modules.h
+new file mode 100644
+index 0000000000000..6371fcf176845
+--- /dev/null
++++ b/arch/s390/lib/test_modules.h
+@@ -0,0 +1,53 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++#ifndef TEST_MODULES_H
++#define TEST_MODULES_H
++
++#define __REPEAT_10000_3(f, x) \
++	f(x ## 0); \
++	f(x ## 1); \
++	f(x ## 2); \
++	f(x ## 3); \
++	f(x ## 4); \
++	f(x ## 5); \
++	f(x ## 6); \
++	f(x ## 7); \
++	f(x ## 8); \
++	f(x ## 9)
++#define __REPEAT_10000_2(f, x) \
++	__REPEAT_10000_3(f, x ## 0); \
++	__REPEAT_10000_3(f, x ## 1); \
++	__REPEAT_10000_3(f, x ## 2); \
++	__REPEAT_10000_3(f, x ## 3); \
++	__REPEAT_10000_3(f, x ## 4); \
++	__REPEAT_10000_3(f, x ## 5); \
++	__REPEAT_10000_3(f, x ## 6); \
++	__REPEAT_10000_3(f, x ## 7); \
++	__REPEAT_10000_3(f, x ## 8); \
++	__REPEAT_10000_3(f, x ## 9)
++#define __REPEAT_10000_1(f, x) \
++	__REPEAT_10000_2(f, x ## 0); \
++	__REPEAT_10000_2(f, x ## 1); \
++	__REPEAT_10000_2(f, x ## 2); \
++	__REPEAT_10000_2(f, x ## 3); \
++	__REPEAT_10000_2(f, x ## 4); \
++	__REPEAT_10000_2(f, x ## 5); \
++	__REPEAT_10000_2(f, x ## 6); \
++	__REPEAT_10000_2(f, x ## 7); \
++	__REPEAT_10000_2(f, x ## 8); \
++	__REPEAT_10000_2(f, x ## 9)
++#define REPEAT_10000(f) \
++	__REPEAT_10000_1(f, 0); \
++	__REPEAT_10000_1(f, 1); \
++	__REPEAT_10000_1(f, 2); \
++	__REPEAT_10000_1(f, 3); \
++	__REPEAT_10000_1(f, 4); \
++	__REPEAT_10000_1(f, 5); \
++	__REPEAT_10000_1(f, 6); \
++	__REPEAT_10000_1(f, 7); \
++	__REPEAT_10000_1(f, 8); \
++	__REPEAT_10000_1(f, 9)
++
++#define DECLARE_RETURN(i) int test_modules_return_ ## i(void)
++REPEAT_10000(DECLARE_RETURN);
++
++#endif
+diff --git a/arch/s390/lib/test_modules_helpers.c b/arch/s390/lib/test_modules_helpers.c
+new file mode 100644
+index 0000000000000..1670349a03eba
+--- /dev/null
++++ b/arch/s390/lib/test_modules_helpers.c
+@@ -0,0 +1,13 @@
++// SPDX-License-Identifier: GPL-2.0+
++
++#include <linux/export.h>
++
++#include "test_modules.h"
++
++#define DEFINE_RETURN(i) \
++	int test_modules_return_ ## i(void) \
++	{ \
++		return 1 ## i - 10000; \
++	} \
++	EXPORT_SYMBOL_GPL(test_modules_return_ ## i)
++REPEAT_10000(DEFINE_RETURN);
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index 8043213b75a52..fa947c4fbd1f8 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1726,6 +1726,9 @@ static bool is_arch_lbr_xsave_available(void)
+ 	 * Check the LBR state with the corresponding software structure.
+ 	 * Disable LBR XSAVES support if the size doesn't match.
+ 	 */
++	if (xfeature_size(XFEATURE_LBR) == 0)
++		return false;
++
+ 	if (WARN_ON(xfeature_size(XFEATURE_LBR) != get_lbr_state_size()))
+ 		return false;
+ 
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+index 85feafacc445d..77e3a47af5ad5 100644
+--- a/arch/x86/events/rapl.c
++++ b/arch/x86/events/rapl.c
+@@ -536,11 +536,14 @@ static struct perf_msr intel_rapl_spr_msrs[] = {
+  * - perf_msr_probe(PERF_RAPL_MAX)
+  * - want to use same event codes across both architectures
+  */
+-static struct perf_msr amd_rapl_msrs[PERF_RAPL_MAX] = {
+-	[PERF_RAPL_PKG]  = { MSR_AMD_PKG_ENERGY_STATUS,  &rapl_events_pkg_group,   test_msr },
++static struct perf_msr amd_rapl_msrs[] = {
++	[PERF_RAPL_PP0]  = { 0, &rapl_events_cores_group, 0, false, 0 },
++	[PERF_RAPL_PKG]  = { MSR_AMD_PKG_ENERGY_STATUS,  &rapl_events_pkg_group,   test_msr, false, RAPL_MSR_MASK },
++	[PERF_RAPL_RAM]  = { 0, &rapl_events_ram_group,   0, false, 0 },
++	[PERF_RAPL_PP1]  = { 0, &rapl_events_gpu_group,   0, false, 0 },
++	[PERF_RAPL_PSYS] = { 0, &rapl_events_psys_group,  0, false, 0 },
+ };
+ 
+-
+ static int rapl_cpu_offline(unsigned int cpu)
+ {
+ 	struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
+diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
+index 001808e3901cc..48afe96ae0f0f 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.c
++++ b/arch/x86/kernel/cpu/sgx/encl.c
+@@ -410,6 +410,8 @@ void sgx_encl_release(struct kref *ref)
+ 		}
+ 
+ 		kfree(entry);
++		/* Invoke scheduler to prevent soft lockups. */
++		cond_resched();
+ 	}
+ 
+ 	xa_destroy(&encl->page_array);
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index add8f58d686e3..bf18679757c70 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -532,12 +532,13 @@ void kvm_set_cpu_caps(void)
+ 	);
+ 
+ 	kvm_cpu_cap_mask(CPUID_7_0_EBX,
+-		F(FSGSBASE) | F(SGX) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
+-		F(BMI2) | F(ERMS) | F(INVPCID) | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
+-		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
+-		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
+-		F(SHA_NI) | F(AVX512BW) | F(AVX512VL) | 0 /*INTEL_PT*/
+-	);
++		F(FSGSBASE) | F(SGX) | F(BMI1) | F(HLE) | F(AVX2) |
++		F(FDP_EXCPTN_ONLY) | F(SMEP) | F(BMI2) | F(ERMS) | F(INVPCID) |
++		F(RTM) | F(ZERO_FCS_FDS) | 0 /*MPX*/ | F(AVX512F) |
++		F(AVX512DQ) | F(RDSEED) | F(ADX) | F(SMAP) | F(AVX512IFMA) |
++		F(CLFLUSHOPT) | F(CLWB) | 0 /*INTEL_PT*/ | F(AVX512PF) |
++		F(AVX512ER) | F(AVX512CD) | F(SHA_NI) | F(AVX512BW) |
++		F(AVX512VL));
+ 
+ 	kvm_cpu_cap_mask(CPUID_7_ECX,
+ 		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 3efada37272c0..d6a4acaa65742 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4530,7 +4530,21 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int i
+ 	is_user = svm_get_cpl(vcpu) == 3;
+ 	if (smap && (!smep || is_user)) {
+ 		pr_err_ratelimited("KVM: SEV Guest triggered AMD Erratum 1096\n");
+-		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
++
++		/*
++		 * If the fault occurred in userspace, arbitrarily inject #GP
++		 * to avoid killing the guest and to hopefully avoid confusing
++		 * the guest kernel too much, e.g. injecting #PF would not be
++		 * coherent with respect to the guest's page tables.  Request
++		 * triple fault if the fault occurred in the kernel as there's
++		 * no fault that KVM can inject without confusing the guest.
++		 * In practice, the triple fault is moot as no sane SEV kernel
++		 * will execute from user memory while also running with SMAP=1.
++		 */
++		if (is_user)
++			kvm_inject_gp(vcpu, 0);
++		else
++			kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
+ 	}
+ 
+ 	return false;
+diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
+index 09fac0ddac8bd..87e3dc10edf40 100644
+--- a/arch/x86/kvm/vmx/evmcs.c
++++ b/arch/x86/kvm/vmx/evmcs.c
+@@ -361,6 +361,7 @@ void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata)
+ 	case MSR_IA32_VMX_PROCBASED_CTLS2:
+ 		ctl_high &= ~EVMCS1_UNSUPPORTED_2NDEXEC;
+ 		break;
++	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+ 	case MSR_IA32_VMX_PINBASED_CTLS:
+ 		ctl_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
+ 		break;
+diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
+index 6255fa7167720..8d70f9aea94bc 100644
+--- a/arch/x86/kvm/vmx/evmcs.h
++++ b/arch/x86/kvm/vmx/evmcs.h
+@@ -59,7 +59,9 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);
+ 	 SECONDARY_EXEC_SHADOW_VMCS |					\
+ 	 SECONDARY_EXEC_TSC_SCALING |					\
+ 	 SECONDARY_EXEC_PAUSE_LOOP_EXITING)
+-#define EVMCS1_UNSUPPORTED_VMEXIT_CTRL (VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)
++#define EVMCS1_UNSUPPORTED_VMEXIT_CTRL					\
++	(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |				\
++	 VM_EXIT_SAVE_VMX_PREEMPTION_TIMER)
+ #define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
+ #define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING)
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 7f4e6f625abcf..fe4a36c984460 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4811,8 +4811,33 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
+ 		dr6 = vmx_get_exit_qual(vcpu);
+ 		if (!(vcpu->guest_debug &
+ 		      (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))) {
++			/*
++			 * If the #DB was due to ICEBP, a.k.a. INT1, skip the
++			 * instruction.  ICEBP generates a trap-like #DB, but
++			 * despite its interception control being tied to #DB,
++			 * is an instruction intercept, i.e. the VM-Exit occurs
++			 * on the ICEBP itself.  Note, skipping ICEBP also
++			 * clears STI and MOVSS blocking.
++			 *
++			 * For all other #DBs, set vmcs.PENDING_DBG_EXCEPTIONS.BS
++			 * if single-step is enabled in RFLAGS and STI or MOVSS
++			 * blocking is active, as the CPU doesn't set the bit
++			 * on VM-Exit due to #DB interception.  VM-Entry has a
++			 * consistency check that a single-step #DB is pending
++			 * in this scenario as the previous instruction cannot
++			 * have toggled RFLAGS.TF 0=>1 (because STI and POP/MOV
++			 * don't modify RFLAGS), therefore the one instruction
++			 * delay when activating single-step breakpoints must
++			 * have already expired.  Note, the CPU sets/clears BS
++			 * as appropriate for all other VM-Exits types.
++			 */
+ 			if (is_icebp(intr_info))
+ 				WARN_ON(!skip_emulated_instruction(vcpu));
++			else if ((vmx_get_rflags(vcpu) & X86_EFLAGS_TF) &&
++				 (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
++				  (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)))
++				vmcs_writel(GUEST_PENDING_DBG_EXCEPTIONS,
++					    vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS) | DR6_BS);
+ 
+ 			kvm_queue_exception_p(vcpu, DB_VECTOR, dr6);
+ 			return 1;
+diff --git a/drivers/accessibility/speakup/speakup_dectlk.c b/drivers/accessibility/speakup/speakup_dectlk.c
+index 580ec796816bc..78ca4987e619e 100644
+--- a/drivers/accessibility/speakup/speakup_dectlk.c
++++ b/drivers/accessibility/speakup/speakup_dectlk.c
+@@ -44,6 +44,7 @@ static struct var_t vars[] = {
+ 	{ CAPS_START, .u.s = {"[:dv ap 160] " } },
+ 	{ CAPS_STOP, .u.s = {"[:dv ap 100 ] " } },
+ 	{ RATE, .u.n = {"[:ra %d] ", 180, 75, 650, 0, 0, NULL } },
++	{ PITCH, .u.n = {"[:dv ap %d] ", 122, 50, 350, 0, 0, NULL } },
+ 	{ INFLECTION, .u.n = {"[:dv pr %d] ", 100, 0, 10000, 0, 0, NULL } },
+ 	{ VOL, .u.n = {"[:dv g5 %d] ", 86, 60, 86, 0, 0, NULL } },
+ 	{ PUNCT, .u.n = {"[:pu %c] ", 0, 0, 2, 0, 0, "nsa" } },
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 3b23fb775ac45..f2f8f05662deb 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1361,9 +1361,17 @@ static void __init arm_smmu_v3_pmcg_init_resources(struct resource *res,
+ 	res[0].start = pmcg->page0_base_address;
+ 	res[0].end = pmcg->page0_base_address + SZ_4K - 1;
+ 	res[0].flags = IORESOURCE_MEM;
+-	res[1].start = pmcg->page1_base_address;
+-	res[1].end = pmcg->page1_base_address + SZ_4K - 1;
+-	res[1].flags = IORESOURCE_MEM;
++	/*
++	 * The initial version in DEN0049C lacked a way to describe register
++	 * page 1, which makes it broken for most PMCG implementations; in
++	 * that case, just let the driver fail gracefully if it expects to
++	 * find a second memory resource.
++	 */
++	if (node->revision > 0) {
++		res[1].start = pmcg->page1_base_address;
++		res[1].end = pmcg->page1_base_address + SZ_4K - 1;
++		res[1].flags = IORESOURCE_MEM;
++	}
+ 
+ 	if (pmcg->overflow_gsiv)
+ 		acpi_iort_register_irq(pmcg->overflow_gsiv, "overflow",
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index b9c44e6c5e400..1712990bf2ad8 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2051,6 +2051,16 @@ bool acpi_ec_dispatch_gpe(void)
+ 	if (acpi_any_gpe_status_set(first_ec->gpe))
+ 		return true;
+ 
++	/*
++	 * Cancel the SCI wakeup and process all pending events in case there
++	 * are any wakeup ones in there.
++	 *
++	 * Note that if any non-EC GPEs are active at this point, the SCI will
++	 * retrigger after the rearming in acpi_s2idle_wake(), so no events
++	 * should be missed by canceling the wakeup here.
++	 */
++	pm_system_cancel_wakeup();
++
+ 	/*
+ 	 * Dispatch the EC GPE in-band, but do not report wakeup in any case
+ 	 * to allow the caller to process events properly after that.
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index eaa47753b7584..8513410ca2fc2 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -739,21 +739,15 @@ bool acpi_s2idle_wake(void)
+ 			return true;
+ 		}
+ 
+-		/* Check non-EC GPE wakeups and dispatch the EC GPE. */
++		/*
++		 * Check non-EC GPE wakeups and if there are none, cancel the
++		 * SCI-related wakeup and dispatch the EC GPE.
++		 */
+ 		if (acpi_ec_dispatch_gpe()) {
+ 			pm_pr_dbg("ACPI non-EC GPE wakeup\n");
+ 			return true;
+ 		}
+ 
+-		/*
+-		 * Cancel the SCI wakeup and process all pending events in case
+-		 * there are any wakeup ones in there.
+-		 *
+-		 * Note that if any non-EC GPEs are active at this point, the
+-		 * SCI will retrigger after the rearming below, so no events
+-		 * should be missed by canceling the wakeup here.
+-		 */
+-		pm_system_cancel_wakeup();
+ 		acpi_os_wait_events_complete();
+ 
+ 		/*
+@@ -767,6 +761,7 @@ bool acpi_s2idle_wake(void)
+ 			return true;
+ 		}
+ 
++		pm_wakeup_clear(acpi_sci_irq);
+ 		rearm_wake_irq(acpi_sci_irq);
+ 	}
+ 
+diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
+index 99bda0da23a82..8666590201c9a 100644
+--- a/drivers/base/power/wakeup.c
++++ b/drivers/base/power/wakeup.c
+@@ -34,7 +34,8 @@ suspend_state_t pm_suspend_target_state;
+ bool events_check_enabled __read_mostly;
+ 
+ /* First wakeup IRQ seen by the kernel in the last cycle. */
+-unsigned int pm_wakeup_irq __read_mostly;
++static unsigned int wakeup_irq[2] __read_mostly;
++static DEFINE_RAW_SPINLOCK(wakeup_irq_lock);
+ 
+ /* If greater than 0 and the system is suspending, terminate the suspend. */
+ static atomic_t pm_abort_suspend __read_mostly;
+@@ -942,19 +943,45 @@ void pm_system_cancel_wakeup(void)
+ 	atomic_dec_if_positive(&pm_abort_suspend);
+ }
+ 
+-void pm_wakeup_clear(bool reset)
++void pm_wakeup_clear(unsigned int irq_number)
+ {
+-	pm_wakeup_irq = 0;
+-	if (reset)
++	raw_spin_lock_irq(&wakeup_irq_lock);
++
++	if (irq_number && wakeup_irq[0] == irq_number)
++		wakeup_irq[0] = wakeup_irq[1];
++	else
++		wakeup_irq[0] = 0;
++
++	wakeup_irq[1] = 0;
++
++	raw_spin_unlock_irq(&wakeup_irq_lock);
++
++	if (!irq_number)
+ 		atomic_set(&pm_abort_suspend, 0);
+ }
+ 
+ void pm_system_irq_wakeup(unsigned int irq_number)
+ {
+-	if (pm_wakeup_irq == 0) {
+-		pm_wakeup_irq = irq_number;
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&wakeup_irq_lock, flags);
++
++	if (wakeup_irq[0] == 0)
++		wakeup_irq[0] = irq_number;
++	else if (wakeup_irq[1] == 0)
++		wakeup_irq[1] = irq_number;
++	else
++		irq_number = 0;
++
++	raw_spin_unlock_irqrestore(&wakeup_irq_lock, flags);
++
++	if (irq_number)
+ 		pm_system_wakeup();
+-	}
++}
++
++unsigned int pm_wakeup_irq(void)
++{
++	return wakeup_irq[0];
+ }
+ 
+ /**
+diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
+index b8b2811c7c0a9..d340d6864e13a 100644
+--- a/drivers/bus/mhi/pci_generic.c
++++ b/drivers/bus/mhi/pci_generic.c
+@@ -366,6 +366,7 @@ static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
+ 	.config = &modem_foxconn_sdx55_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ 	.dma_data_width = 32,
++	.mru_default = 32768,
+ 	.sideband_wake = false,
+ };
+ 
+@@ -401,6 +402,7 @@ static const struct mhi_pci_dev_info mhi_mv31_info = {
+ 	.config = &modem_mv31_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ 	.dma_data_width = 32,
++	.mru_default = 32768,
+ };
+ 
+ static const struct pci_device_id mhi_pci_id_table[] = {
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index b6f97960d8ee0..5c40ca1d4740e 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -241,7 +241,7 @@ static void __init dmtimer_systimer_assign_alwon(void)
+ 	bool quirk_unreliable_oscillator = false;
+ 
+ 	/* Quirk unreliable 32 KiHz oscillator with incomplete dts */
+-	if (of_machine_is_compatible("ti,omap3-beagle") ||
++	if (of_machine_is_compatible("ti,omap3-beagle-ab4") ||
+ 	    of_machine_is_compatible("timll,omap3-devkit8000")) {
+ 		quirk_unreliable_oscillator = true;
+ 		counter_32k = -ENODEV;
+diff --git a/drivers/gpio/gpio-aggregator.c b/drivers/gpio/gpio-aggregator.c
+index e9671d1660ef4..ac20f9bd1be61 100644
+--- a/drivers/gpio/gpio-aggregator.c
++++ b/drivers/gpio/gpio-aggregator.c
+@@ -278,7 +278,8 @@ static int gpio_fwd_get(struct gpio_chip *chip, unsigned int offset)
+ {
+ 	struct gpiochip_fwd *fwd = gpiochip_get_data(chip);
+ 
+-	return gpiod_get_value(fwd->descs[offset]);
++	return chip->can_sleep ? gpiod_get_value_cansleep(fwd->descs[offset])
++			       : gpiod_get_value(fwd->descs[offset]);
+ }
+ 
+ static int gpio_fwd_get_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+@@ -293,7 +294,10 @@ static int gpio_fwd_get_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+ 	for_each_set_bit(i, mask, fwd->chip.ngpio)
+ 		descs[j++] = fwd->descs[i];
+ 
+-	error = gpiod_get_array_value(j, descs, NULL, values);
++	if (fwd->chip.can_sleep)
++		error = gpiod_get_array_value_cansleep(j, descs, NULL, values);
++	else
++		error = gpiod_get_array_value(j, descs, NULL, values);
+ 	if (error)
+ 		return error;
+ 
+@@ -328,7 +332,10 @@ static void gpio_fwd_set(struct gpio_chip *chip, unsigned int offset, int value)
+ {
+ 	struct gpiochip_fwd *fwd = gpiochip_get_data(chip);
+ 
+-	gpiod_set_value(fwd->descs[offset], value);
++	if (chip->can_sleep)
++		gpiod_set_value_cansleep(fwd->descs[offset], value);
++	else
++		gpiod_set_value(fwd->descs[offset], value);
+ }
+ 
+ static void gpio_fwd_set_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+@@ -343,7 +350,10 @@ static void gpio_fwd_set_multiple(struct gpiochip_fwd *fwd, unsigned long *mask,
+ 		descs[j++] = fwd->descs[i];
+ 	}
+ 
+-	gpiod_set_array_value(j, descs, NULL, values);
++	if (fwd->chip.can_sleep)
++		gpiod_set_array_value_cansleep(j, descs, NULL, values);
++	else
++		gpiod_set_array_value(j, descs, NULL, values);
+ }
+ 
+ static void gpio_fwd_set_multiple_locked(struct gpio_chip *chip,
+diff --git a/drivers/gpio/gpio-sifive.c b/drivers/gpio/gpio-sifive.c
+index 403f9e833d6a3..7d82388b4ab7c 100644
+--- a/drivers/gpio/gpio-sifive.c
++++ b/drivers/gpio/gpio-sifive.c
+@@ -223,7 +223,7 @@ static int sifive_gpio_probe(struct platform_device *pdev)
+ 			 NULL,
+ 			 chip->base + SIFIVE_GPIO_OUTPUT_EN,
+ 			 chip->base + SIFIVE_GPIO_INPUT_EN,
+-			 0);
++			 BGPIOF_READ_OUTPUT_REG_SET);
+ 	if (ret) {
+ 		dev_err(dev, "unable to init generic GPIO\n");
+ 		return ret;
+diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
+index c7b5446d01fd2..ffa0256cad5a0 100644
+--- a/drivers/gpio/gpiolib-cdev.c
++++ b/drivers/gpio/gpiolib-cdev.c
+@@ -330,7 +330,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ 			goto out_free_lh;
+ 		}
+ 
+-		ret = gpiod_request(desc, lh->label);
++		ret = gpiod_request_user(desc, lh->label);
+ 		if (ret)
+ 			goto out_free_lh;
+ 		lh->descs[i] = desc;
+@@ -1378,7 +1378,7 @@ static int linereq_create(struct gpio_device *gdev, void __user *ip)
+ 			goto out_free_linereq;
+ 		}
+ 
+-		ret = gpiod_request(desc, lr->label);
++		ret = gpiod_request_user(desc, lr->label);
+ 		if (ret)
+ 			goto out_free_linereq;
+ 
+@@ -1764,7 +1764,7 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ 		}
+ 	}
+ 
+-	ret = gpiod_request(desc, le->label);
++	ret = gpiod_request_user(desc, le->label);
+ 	if (ret)
+ 		goto out_free_le;
+ 	le->desc = desc;
+diff --git a/drivers/gpio/gpiolib-sysfs.c b/drivers/gpio/gpiolib-sysfs.c
+index 4098bc7f88b7e..44c1ad51b3fe9 100644
+--- a/drivers/gpio/gpiolib-sysfs.c
++++ b/drivers/gpio/gpiolib-sysfs.c
+@@ -475,12 +475,9 @@ static ssize_t export_store(struct class *class,
+ 	 * they may be undone on its behalf too.
+ 	 */
+ 
+-	status = gpiod_request(desc, "sysfs");
+-	if (status) {
+-		if (status == -EPROBE_DEFER)
+-			status = -ENODEV;
++	status = gpiod_request_user(desc, "sysfs");
++	if (status)
+ 		goto done;
+-	}
+ 
+ 	status = gpiod_set_transitory(desc, false);
+ 	if (!status) {
+diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
+index 30bc3f80f83e6..c31f4626915de 100644
+--- a/drivers/gpio/gpiolib.h
++++ b/drivers/gpio/gpiolib.h
+@@ -135,6 +135,18 @@ struct gpio_desc {
+ 
+ int gpiod_request(struct gpio_desc *desc, const char *label);
+ void gpiod_free(struct gpio_desc *desc);
++
++static inline int gpiod_request_user(struct gpio_desc *desc, const char *label)
++{
++	int ret;
++
++	ret = gpiod_request(desc, label);
++	if (ret == -EPROBE_DEFER)
++		ret = -ENODEV;
++
++	return ret;
++}
++
+ int gpiod_configure_flags(struct gpio_desc *desc, const char *con_id,
+ 		unsigned long lflags, enum gpiod_flags dflags);
+ int gpio_set_debounce_timeout(struct gpio_desc *desc, unsigned int debounce);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index ccd6cdbe46f43..94e75199d9428 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -201,7 +201,7 @@ void dp_wait_for_training_aux_rd_interval(
+ 	uint32_t wait_in_micro_secs)
+ {
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+-	if (wait_in_micro_secs > 16000)
++	if (wait_in_micro_secs > 1000)
+ 		msleep(wait_in_micro_secs/1000);
+ 	else
+ 		udelay(wait_in_micro_secs);
+@@ -6058,7 +6058,7 @@ bool dpcd_write_128b_132b_sst_payload_allocation_table(
+ 			}
+ 		}
+ 		retries++;
+-		udelay(5000);
++		msleep(5);
+ 	}
+ 
+ 	if (!result && retries == max_retries) {
+@@ -6110,7 +6110,7 @@ bool dpcd_poll_for_allocation_change_trigger(struct dc_link *link)
+ 			break;
+ 		}
+ 
+-		udelay(5000);
++		msleep(5);
+ 	}
+ 
+ 	if (result == ACT_FAILED) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 83f5d9aaffcb6..3883f918b3bb2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -1069,7 +1069,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 		.timing_trace = false,
+ 		.clock_trace = true,
+ 		.disable_pplib_clock_request = true,
+-		.pipe_split_policy = MPC_SPLIT_DYNAMIC,
++		.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
+ 		.force_single_disp_pipe_split = false,
+ 		.disable_dcc = DCC_ENABLE,
+ 		.vsr_support = true,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+index 9254da120e615..36814d44b19cf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+@@ -686,7 +686,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 	.disable_clock_gate = true,
+ 	.disable_pplib_clock_request = true,
+ 	.disable_pplib_wm_range = true,
+-	.pipe_split_policy = MPC_SPLIT_DYNAMIC,
++	.pipe_split_policy = MPC_SPLIT_AVOID,
+ 	.force_single_disp_pipe_split = false,
+ 	.disable_dcc = DCC_ENABLE,
+ 	.vsr_support = true,
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index f8370d54100e8..d31719b3418fa 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -3451,8 +3451,7 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
+ 	     attr == &sensor_dev_attr_power2_cap_min.dev_attr.attr ||
+ 		 attr == &sensor_dev_attr_power2_cap.dev_attr.attr ||
+ 		 attr == &sensor_dev_attr_power2_cap_default.dev_attr.attr ||
+-		 attr == &sensor_dev_attr_power2_label.dev_attr.attr ||
+-		 attr == &sensor_dev_attr_power1_label.dev_attr.attr))
++		 attr == &sensor_dev_attr_power2_label.dev_attr.attr))
+ 		return 0;
+ 
+ 	return effective_mode;
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index a7389a0facfb4..af07eeb47ca02 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -7,6 +7,7 @@
+  */
+ 
+ #include <linux/bitfield.h>
++#include <linux/bits.h>
+ #include <linux/clk.h>
+ #include <linux/irq.h>
+ #include <linux/math64.h>
+@@ -196,12 +197,9 @@ static u32 ps2bc(struct nwl_dsi *dsi, unsigned long long ps)
+ /*
+  * ui2bc - UI time periods to byte clock cycles
+  */
+-static u32 ui2bc(struct nwl_dsi *dsi, unsigned long long ui)
++static u32 ui2bc(unsigned int ui)
+ {
+-	u32 bpp = mipi_dsi_pixel_format_to_bpp(dsi->format);
+-
+-	return DIV64_U64_ROUND_UP(ui * dsi->lanes,
+-				  dsi->mode.clock * 1000 * bpp);
++	return DIV_ROUND_UP(ui, BITS_PER_BYTE);
+ }
+ 
+ /*
+@@ -232,12 +230,12 @@ static int nwl_dsi_config_host(struct nwl_dsi *dsi)
+ 	}
+ 
+ 	/* values in byte clock cycles */
+-	cycles = ui2bc(dsi, cfg->clk_pre);
++	cycles = ui2bc(cfg->clk_pre);
+ 	DRM_DEV_DEBUG_DRIVER(dsi->dev, "cfg_t_pre: 0x%x\n", cycles);
+ 	nwl_dsi_write(dsi, NWL_DSI_CFG_T_PRE, cycles);
+ 	cycles = ps2bc(dsi, cfg->lpx + cfg->clk_prepare + cfg->clk_zero);
+ 	DRM_DEV_DEBUG_DRIVER(dsi->dev, "cfg_tx_gap (pre): 0x%x\n", cycles);
+-	cycles += ui2bc(dsi, cfg->clk_pre);
++	cycles += ui2bc(cfg->clk_pre);
+ 	DRM_DEV_DEBUG_DRIVER(dsi->dev, "cfg_t_post: 0x%x\n", cycles);
+ 	nwl_dsi_write(dsi, NWL_DSI_CFG_T_POST, cycles);
+ 	cycles = ps2bc(dsi, cfg->hs_exit);
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 042bb80383c93..b910978d3e480 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -115,6 +115,12 @@ static const struct drm_dmi_panel_orientation_data lcd1280x1920_rightside_up = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data lcd1600x2560_leftside_up = {
++	.width = 1600,
++	.height = 2560,
++	.orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct dmi_system_id orientation_data[] = {
+ 	{	/* Acer One 10 (S1003) */
+ 		.matches = {
+@@ -275,6 +281,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Default string"),
+ 		},
+ 		.driver_data = (void *)&onegx1_pro,
++	}, {	/* OneXPlayer */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ONE-NETBOOK TECHNOLOGY CO., LTD."),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
++		},
++		.driver_data = (void *)&lcd1600x2560_leftside_up,
+ 	}, {	/* Samsung GalaxyBook 10.6 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index ec403e46a3288..8c178939e82b0 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -11901,6 +11901,7 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
+ 		vlv_wm_sanitize(dev_priv);
+ 	} else if (DISPLAY_VER(dev_priv) >= 9) {
+ 		skl_wm_get_hw_state(dev_priv);
++		skl_wm_sanitize(dev_priv);
+ 	} else if (HAS_PCH_SPLIT(dev_priv)) {
+ 		ilk_wm_get_hw_state(dev_priv);
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_drrs.c b/drivers/gpu/drm/i915/display/intel_drrs.c
+index c1439fcb5a959..3ff149df4a77a 100644
+--- a/drivers/gpu/drm/i915/display/intel_drrs.c
++++ b/drivers/gpu/drm/i915/display/intel_drrs.c
+@@ -405,6 +405,7 @@ intel_drrs_init(struct intel_connector *connector,
+ 		struct drm_display_mode *fixed_mode)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
++	struct intel_encoder *encoder = connector->encoder;
+ 	struct drm_display_mode *downclock_mode = NULL;
+ 
+ 	INIT_DELAYED_WORK(&dev_priv->drrs.work, intel_drrs_downclock_work);
+@@ -416,6 +417,13 @@ intel_drrs_init(struct intel_connector *connector,
+ 		return NULL;
+ 	}
+ 
++	if ((DISPLAY_VER(dev_priv) < 8 && !HAS_GMCH(dev_priv)) &&
++	    encoder->port != PORT_A) {
++		drm_dbg_kms(&dev_priv->drm,
++			    "DRRS only supported on eDP port A\n");
++		return NULL;
++	}
++
+ 	if (dev_priv->vbt.drrs_type != SEAMLESS_DRRS_SUPPORT) {
+ 		drm_dbg_kms(&dev_priv->drm, "VBT doesn't support DRRS\n");
+ 		return NULL;
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 201477ca408a5..d4ca29755e647 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -4707,6 +4707,10 @@ static const struct dbuf_slice_conf_entry dg2_allowed_dbufs[] = {
+ };
+ 
+ static const struct dbuf_slice_conf_entry adlp_allowed_dbufs[] = {
++	/*
++	 * Keep the join_mbus cases first so check_mbus_joined()
++	 * will prefer them over the !join_mbus cases.
++	 */
+ 	{
+ 		.active_pipes = BIT(PIPE_A),
+ 		.dbuf_mask = {
+@@ -4721,6 +4725,20 @@ static const struct dbuf_slice_conf_entry adlp_allowed_dbufs[] = {
+ 		},
+ 		.join_mbus = true,
+ 	},
++	{
++		.active_pipes = BIT(PIPE_A),
++		.dbuf_mask = {
++			[PIPE_A] = BIT(DBUF_S1) | BIT(DBUF_S2),
++		},
++		.join_mbus = false,
++	},
++	{
++		.active_pipes = BIT(PIPE_B),
++		.dbuf_mask = {
++			[PIPE_B] = BIT(DBUF_S3) | BIT(DBUF_S4),
++		},
++		.join_mbus = false,
++	},
+ 	{
+ 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
+ 		.dbuf_mask = {
+@@ -4837,13 +4855,14 @@ static bool adlp_check_mbus_joined(u8 active_pipes)
+ 	return check_mbus_joined(active_pipes, adlp_allowed_dbufs);
+ }
+ 
+-static u8 compute_dbuf_slices(enum pipe pipe, u8 active_pipes,
++static u8 compute_dbuf_slices(enum pipe pipe, u8 active_pipes, bool join_mbus,
+ 			      const struct dbuf_slice_conf_entry *dbuf_slices)
+ {
+ 	int i;
+ 
+ 	for (i = 0; i < dbuf_slices[i].active_pipes; i++) {
+-		if (dbuf_slices[i].active_pipes == active_pipes)
++		if (dbuf_slices[i].active_pipes == active_pipes &&
++		    dbuf_slices[i].join_mbus == join_mbus)
+ 			return dbuf_slices[i].dbuf_mask[pipe];
+ 	}
+ 	return 0;
+@@ -4854,7 +4873,7 @@ static u8 compute_dbuf_slices(enum pipe pipe, u8 active_pipes,
+  * returns correspondent DBuf slice mask as stated in BSpec for particular
+  * platform.
+  */
+-static u8 icl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
++static u8 icl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes, bool join_mbus)
+ {
+ 	/*
+ 	 * FIXME: For ICL this is still a bit unclear as prev BSpec revision
+@@ -4868,37 +4887,41 @@ static u8 icl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
+ 	 * still here - we will need it once those additional constraints
+ 	 * pop up.
+ 	 */
+-	return compute_dbuf_slices(pipe, active_pipes, icl_allowed_dbufs);
++	return compute_dbuf_slices(pipe, active_pipes, join_mbus,
++				   icl_allowed_dbufs);
+ }
+ 
+-static u8 tgl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
++static u8 tgl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes, bool join_mbus)
+ {
+-	return compute_dbuf_slices(pipe, active_pipes, tgl_allowed_dbufs);
++	return compute_dbuf_slices(pipe, active_pipes, join_mbus,
++				   tgl_allowed_dbufs);
+ }
+ 
+-static u32 adlp_compute_dbuf_slices(enum pipe pipe, u32 active_pipes)
++static u8 adlp_compute_dbuf_slices(enum pipe pipe, u8 active_pipes, bool join_mbus)
+ {
+-	return compute_dbuf_slices(pipe, active_pipes, adlp_allowed_dbufs);
++	return compute_dbuf_slices(pipe, active_pipes, join_mbus,
++				   adlp_allowed_dbufs);
+ }
+ 
+-static u32 dg2_compute_dbuf_slices(enum pipe pipe, u32 active_pipes)
++static u8 dg2_compute_dbuf_slices(enum pipe pipe, u8 active_pipes, bool join_mbus)
+ {
+-	return compute_dbuf_slices(pipe, active_pipes, dg2_allowed_dbufs);
++	return compute_dbuf_slices(pipe, active_pipes, join_mbus,
++				   dg2_allowed_dbufs);
+ }
+ 
+-static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc, u8 active_pipes)
++static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc, u8 active_pipes, bool join_mbus)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ 	enum pipe pipe = crtc->pipe;
+ 
+ 	if (IS_DG2(dev_priv))
+-		return dg2_compute_dbuf_slices(pipe, active_pipes);
++		return dg2_compute_dbuf_slices(pipe, active_pipes, join_mbus);
+ 	else if (IS_ALDERLAKE_P(dev_priv))
+-		return adlp_compute_dbuf_slices(pipe, active_pipes);
++		return adlp_compute_dbuf_slices(pipe, active_pipes, join_mbus);
+ 	else if (DISPLAY_VER(dev_priv) == 12)
+-		return tgl_compute_dbuf_slices(pipe, active_pipes);
++		return tgl_compute_dbuf_slices(pipe, active_pipes, join_mbus);
+ 	else if (DISPLAY_VER(dev_priv) == 11)
+-		return icl_compute_dbuf_slices(pipe, active_pipes);
++		return icl_compute_dbuf_slices(pipe, active_pipes, join_mbus);
+ 	/*
+ 	 * For anything else just return one slice yet.
+ 	 * Should be extended for other platforms.
+@@ -6109,11 +6132,16 @@ skl_compute_ddb(struct intel_atomic_state *state)
+ 			return ret;
+ 	}
+ 
++	if (IS_ALDERLAKE_P(dev_priv))
++		new_dbuf_state->joined_mbus =
++			adlp_check_mbus_joined(new_dbuf_state->active_pipes);
++
+ 	for_each_intel_crtc(&dev_priv->drm, crtc) {
+ 		enum pipe pipe = crtc->pipe;
+ 
+ 		new_dbuf_state->slices[pipe] =
+-			skl_compute_dbuf_slices(crtc, new_dbuf_state->active_pipes);
++			skl_compute_dbuf_slices(crtc, new_dbuf_state->active_pipes,
++						new_dbuf_state->joined_mbus);
+ 
+ 		if (old_dbuf_state->slices[pipe] == new_dbuf_state->slices[pipe])
+ 			continue;
+@@ -6125,9 +6153,6 @@ skl_compute_ddb(struct intel_atomic_state *state)
+ 
+ 	new_dbuf_state->enabled_slices = intel_dbuf_enabled_slices(new_dbuf_state);
+ 
+-	if (IS_ALDERLAKE_P(dev_priv))
+-		new_dbuf_state->joined_mbus = adlp_check_mbus_joined(new_dbuf_state->active_pipes);
+-
+ 	if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices ||
+ 	    old_dbuf_state->joined_mbus != new_dbuf_state->joined_mbus) {
+ 		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
+@@ -6608,6 +6633,7 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
+ 		enum pipe pipe = crtc->pipe;
+ 		unsigned int mbus_offset;
+ 		enum plane_id plane_id;
++		u8 slices;
+ 
+ 		skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
+ 		crtc_state->wm.skl.raw = crtc_state->wm.skl.optimal;
+@@ -6627,19 +6653,22 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
+ 			skl_ddb_entry_union(&dbuf_state->ddb[pipe], ddb_uv);
+ 		}
+ 
+-		dbuf_state->slices[pipe] =
+-			skl_compute_dbuf_slices(crtc, dbuf_state->active_pipes);
+-
+ 		dbuf_state->weight[pipe] = intel_crtc_ddb_weight(crtc_state);
+ 
+ 		/*
+ 		 * Used for checking overlaps, so we need absolute
+ 		 * offsets instead of MBUS relative offsets.
+ 		 */
+-		mbus_offset = mbus_ddb_offset(dev_priv, dbuf_state->slices[pipe]);
++		slices = skl_compute_dbuf_slices(crtc, dbuf_state->active_pipes,
++						 dbuf_state->joined_mbus);
++		mbus_offset = mbus_ddb_offset(dev_priv, slices);
+ 		crtc_state->wm.skl.ddb.start = mbus_offset + dbuf_state->ddb[pipe].start;
+ 		crtc_state->wm.skl.ddb.end = mbus_offset + dbuf_state->ddb[pipe].end;
+ 
++		/* The slices actually used by the planes on the pipe */
++		dbuf_state->slices[pipe] =
++			skl_ddb_dbuf_slice_mask(dev_priv, &crtc_state->wm.skl.ddb);
++
+ 		drm_dbg_kms(&dev_priv->drm,
+ 			    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x, mbus joined: %s\n",
+ 			    crtc->base.base.id, crtc->base.name,
+@@ -6651,6 +6680,74 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
+ 	dbuf_state->enabled_slices = dev_priv->dbuf.enabled_slices;
+ }
+ 
++static bool skl_dbuf_is_misconfigured(struct drm_i915_private *i915)
++{
++	const struct intel_dbuf_state *dbuf_state =
++		to_intel_dbuf_state(i915->dbuf.obj.state);
++	struct skl_ddb_entry entries[I915_MAX_PIPES] = {};
++	struct intel_crtc *crtc;
++
++	for_each_intel_crtc(&i915->drm, crtc) {
++		const struct intel_crtc_state *crtc_state =
++			to_intel_crtc_state(crtc->base.state);
++
++		entries[crtc->pipe] = crtc_state->wm.skl.ddb;
++	}
++
++	for_each_intel_crtc(&i915->drm, crtc) {
++		const struct intel_crtc_state *crtc_state =
++			to_intel_crtc_state(crtc->base.state);
++		u8 slices;
++
++		slices = skl_compute_dbuf_slices(crtc, dbuf_state->active_pipes,
++						 dbuf_state->joined_mbus);
++		if (dbuf_state->slices[crtc->pipe] & ~slices)
++			return true;
++
++		if (skl_ddb_allocation_overlaps(&crtc_state->wm.skl.ddb, entries,
++						I915_MAX_PIPES, crtc->pipe))
++			return true;
++	}
++
++	return false;
++}
++
++void skl_wm_sanitize(struct drm_i915_private *i915)
++{
++	struct intel_crtc *crtc;
++
++	/*
++	 * On TGL/RKL (at least) the BIOS likes to assign the planes
++	 * to the wrong DBUF slices. This will cause an infinite loop
++	 * in skl_commit_modeset_enables() as it can't find a way to
++	 * transition between the old bogus DBUF layout to the new
++	 * proper DBUF layout without DBUF allocation overlaps between
++	 * the planes (which cannot be allowed or else the hardware
++	 * may hang). If we detect a bogus DBUF layout just turn off
++	 * all the planes so that skl_commit_modeset_enables() can
++	 * simply ignore them.
++	 */
++	if (!skl_dbuf_is_misconfigured(i915))
++		return;
++
++	drm_dbg_kms(&i915->drm, "BIOS has misprogrammed the DBUF, disabling all planes\n");
++
++	for_each_intel_crtc(&i915->drm, crtc) {
++		struct intel_plane *plane = to_intel_plane(crtc->base.primary);
++		const struct intel_plane_state *plane_state =
++			to_intel_plane_state(plane->base.state);
++		struct intel_crtc_state *crtc_state =
++			to_intel_crtc_state(crtc->base.state);
++
++		if (plane_state->uapi.visible)
++			intel_plane_disable_noatomic(crtc, plane);
++
++		drm_WARN_ON(&i915->drm, crtc_state->active_planes != 0);
++
++		memset(&crtc_state->wm.skl.ddb, 0, sizeof(crtc_state->wm.skl.ddb));
++	}
++}
++
+ static void ilk_pipe_wm_get_hw_state(struct intel_crtc *crtc)
+ {
+ 	struct drm_device *dev = crtc->base.dev;
+diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
+index 990cdcaf85cec..d2243653a8930 100644
+--- a/drivers/gpu/drm/i915/intel_pm.h
++++ b/drivers/gpu/drm/i915/intel_pm.h
+@@ -47,6 +47,7 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
+ 			      struct skl_pipe_wm *out);
+ void g4x_wm_sanitize(struct drm_i915_private *dev_priv);
+ void vlv_wm_sanitize(struct drm_i915_private *dev_priv);
++void skl_wm_sanitize(struct drm_i915_private *dev_priv);
+ bool intel_can_enable_sagv(struct drm_i915_private *dev_priv,
+ 			   const struct intel_bw_state *bw_state);
+ void intel_sagv_pre_plane_update(struct intel_atomic_state *state);
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index eb475a3a774b7..87f30bced7b7e 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -588,6 +588,7 @@ static int panel_simple_probe(struct device *dev, const struct panel_desc *desc)
+ 		err = panel_dpi_probe(dev, panel);
+ 		if (err)
+ 			goto free_ddc;
++		desc = panel->desc;
+ 	} else {
+ 		if (!of_get_display_timing(dev->of_node, "panel-timing", &dt))
+ 			panel_simple_parse_panel_timing_node(dev, panel, &dt);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+index 1f7353f0684a0..798b542e59164 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+@@ -902,6 +902,7 @@ static const struct vop_win_phy rk3399_win01_data = {
+ 	.enable = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 0),
+ 	.format = VOP_REG(RK3288_WIN0_CTRL0, 0x7, 1),
+ 	.rb_swap = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 12),
++	.x_mir_en = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 21),
+ 	.y_mir_en = VOP_REG(RK3288_WIN0_CTRL0, 0x1, 22),
+ 	.act_info = VOP_REG(RK3288_WIN0_ACT_INFO, 0x1fff1fff, 0),
+ 	.dsp_info = VOP_REG(RK3288_WIN0_DSP_INFO, 0x0fff0fff, 0),
+@@ -912,6 +913,7 @@ static const struct vop_win_phy rk3399_win01_data = {
+ 	.uv_vir = VOP_REG(RK3288_WIN0_VIR, 0x3fff, 16),
+ 	.src_alpha_ctl = VOP_REG(RK3288_WIN0_SRC_ALPHA_CTRL, 0xff, 0),
+ 	.dst_alpha_ctl = VOP_REG(RK3288_WIN0_DST_ALPHA_CTRL, 0xff, 0),
++	.channel = VOP_REG(RK3288_WIN0_CTRL2, 0xff, 0),
+ };
+ 
+ /*
+@@ -922,11 +924,11 @@ static const struct vop_win_phy rk3399_win01_data = {
+ static const struct vop_win_data rk3399_vop_win_data[] = {
+ 	{ .base = 0x00, .phy = &rk3399_win01_data,
+ 	  .type = DRM_PLANE_TYPE_PRIMARY },
+-	{ .base = 0x40, .phy = &rk3288_win01_data,
++	{ .base = 0x40, .phy = &rk3368_win01_data,
+ 	  .type = DRM_PLANE_TYPE_OVERLAY },
+-	{ .base = 0x00, .phy = &rk3288_win23_data,
++	{ .base = 0x00, .phy = &rk3368_win23_data,
+ 	  .type = DRM_PLANE_TYPE_OVERLAY },
+-	{ .base = 0x50, .phy = &rk3288_win23_data,
++	{ .base = 0x50, .phy = &rk3368_win23_data,
+ 	  .type = DRM_PLANE_TYPE_CURSOR },
+ };
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_dsi.c b/drivers/gpu/drm/vc4/vc4_dsi.c
+index a229da58962a2..9300d3354c512 100644
+--- a/drivers/gpu/drm/vc4/vc4_dsi.c
++++ b/drivers/gpu/drm/vc4/vc4_dsi.c
+@@ -1262,7 +1262,6 @@ static int vc4_dsi_host_attach(struct mipi_dsi_host *host,
+ 			       struct mipi_dsi_device *device)
+ {
+ 	struct vc4_dsi *dsi = host_to_dsi(host);
+-	int ret;
+ 
+ 	dsi->lanes = device->lanes;
+ 	dsi->channel = device->channel;
+@@ -1297,18 +1296,15 @@ static int vc4_dsi_host_attach(struct mipi_dsi_host *host,
+ 		return 0;
+ 	}
+ 
+-	ret = component_add(&dsi->pdev->dev, &vc4_dsi_ops);
+-	if (ret) {
+-		mipi_dsi_host_unregister(&dsi->dsi_host);
+-		return ret;
+-	}
+-
+-	return 0;
++	return component_add(&dsi->pdev->dev, &vc4_dsi_ops);
+ }
+ 
+ static int vc4_dsi_host_detach(struct mipi_dsi_host *host,
+ 			       struct mipi_dsi_device *device)
+ {
++	struct vc4_dsi *dsi = host_to_dsi(host);
++
++	component_del(&dsi->pdev->dev, &vc4_dsi_ops);
+ 	return 0;
+ }
+ 
+@@ -1686,9 +1682,7 @@ static int vc4_dsi_dev_remove(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct vc4_dsi *dsi = dev_get_drvdata(dev);
+ 
+-	component_del(&pdev->dev, &vc4_dsi_ops);
+ 	mipi_dsi_host_unregister(&dsi->dsi_host);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index c000946996edb..24f11c07bc3c7 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1090,6 +1090,7 @@ static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder,
+ 	unsigned long long tmds_rate;
+ 
+ 	if (vc4_hdmi->variant->unsupported_odd_h_timings &&
++	    !(mode->flags & DRM_MODE_FLAG_DBLCLK) &&
+ 	    ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||
+ 	     (mode->hsync_end % 2) || (mode->htotal % 2)))
+ 		return -EINVAL;
+@@ -1137,6 +1138,7 @@ vc4_hdmi_encoder_mode_valid(struct drm_encoder *encoder,
+ 	struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
+ 
+ 	if (vc4_hdmi->variant->unsupported_odd_h_timings &&
++	    !(mode->flags & DRM_MODE_FLAG_DBLCLK) &&
+ 	    ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||
+ 	     (mode->hsync_end % 2) || (mode->htotal % 2)))
+ 		return MODE_H_ILLEGAL;
+diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
+index e180728914c0b..84171bb2d52bf 100644
+--- a/drivers/iio/industrialio-buffer.c
++++ b/drivers/iio/industrialio-buffer.c
+@@ -1569,9 +1569,17 @@ static long iio_device_buffer_getfd(struct iio_dev *indio_dev, unsigned long arg
+ 	}
+ 
+ 	if (copy_to_user(ival, &fd, sizeof(fd))) {
+-		put_unused_fd(fd);
+-		ret = -EFAULT;
+-		goto error_free_ib;
++		/*
++		 * "Leak" the fd, as there's not much we can do about this
++		 * anyway. 'fd' might have been closed already, as
++		 * anon_inode_getfd() called fd_install() on it, which made
++		 * it reachable by userland.
++		 *
++		 * Instead of allowing a malicious user to play tricks with
++		 * us, rely on the process exit path to do any necessary
++		 * cleanup, as in releasing the file, if still needed.
++		 */
++		return -EFAULT;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 8b86406b71627..3632bf8b4031c 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -207,9 +207,14 @@ static struct dev_iommu *dev_iommu_get(struct device *dev)
+ 
+ static void dev_iommu_free(struct device *dev)
+ {
+-	iommu_fwspec_free(dev);
+-	kfree(dev->iommu);
++	struct dev_iommu *param = dev->iommu;
++
+ 	dev->iommu = NULL;
++	if (param->fwspec) {
++		fwnode_handle_put(param->fwspec->iommu_fwnode);
++		kfree(param->fwspec);
++	}
++	kfree(param);
+ }
+ 
+ static int __iommu_probe_device(struct device *dev, struct list_head *group_list)
+diff --git a/drivers/irqchip/irq-realtek-rtl.c b/drivers/irqchip/irq-realtek-rtl.c
+index 568614edd88f4..50a56820c99bc 100644
+--- a/drivers/irqchip/irq-realtek-rtl.c
++++ b/drivers/irqchip/irq-realtek-rtl.c
+@@ -76,16 +76,20 @@ static void realtek_irq_dispatch(struct irq_desc *desc)
+ {
+ 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	struct irq_domain *domain;
+-	unsigned int pending;
++	unsigned long pending;
++	unsigned int soc_int;
+ 
+ 	chained_irq_enter(chip, desc);
+ 	pending = readl(REG(RTL_ICTL_GIMR)) & readl(REG(RTL_ICTL_GISR));
++
+ 	if (unlikely(!pending)) {
+ 		spurious_interrupt();
+ 		goto out;
+ 	}
++
+ 	domain = irq_desc_get_handler_data(desc);
+-	generic_handle_domain_irq(domain, __ffs(pending));
++	for_each_set_bit(soc_int, &pending, 32)
++		generic_handle_domain_irq(domain, soc_int);
+ 
+ out:
+ 	chained_irq_exit(chip, desc);
+diff --git a/drivers/misc/eeprom/ee1004.c b/drivers/misc/eeprom/ee1004.c
+index bb9c4512c968c..9fbfe784d7101 100644
+--- a/drivers/misc/eeprom/ee1004.c
++++ b/drivers/misc/eeprom/ee1004.c
+@@ -114,6 +114,9 @@ static ssize_t ee1004_eeprom_read(struct i2c_client *client, char *buf,
+ 	if (offset + count > EE1004_PAGE_SIZE)
+ 		count = EE1004_PAGE_SIZE - offset;
+ 
++	if (count > I2C_SMBUS_BLOCK_MAX)
++		count = I2C_SMBUS_BLOCK_MAX;
++
+ 	return i2c_smbus_read_i2c_block_data_or_emulated(client, offset, count, buf);
+ }
+ 
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 4ccbf43e6bfa9..aa1682b94a23b 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1288,7 +1288,14 @@ static int fastrpc_dmabuf_alloc(struct fastrpc_user *fl, char __user *argp)
+ 	}
+ 
+ 	if (copy_to_user(argp, &bp, sizeof(bp))) {
+-		dma_buf_put(buf->dmabuf);
++		/*
++		 * The usercopy failed, but we can't do much about it, as
++		 * dma_buf_fd() already called fd_install() and made the
++		 * file descriptor accessible for the current process. It
++		 * might already be closed and dmabuf no longer valid when
++		 * we reach this point. Therefore "leak" the fd and rely on
++		 * the process exit path to do any required cleanup.
++		 */
+ 		return -EFAULT;
+ 	}
+ 
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index c9db24e16af13..44746f2edadaf 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -67,7 +67,7 @@ static const unsigned int sd_au_size[] = {
+ 		__res & __mask;						\
+ 	})
+ 
+-#define SD_POWEROFF_NOTIFY_TIMEOUT_MS 2000
++#define SD_POWEROFF_NOTIFY_TIMEOUT_MS 1000
+ #define SD_WRITE_EXTR_SINGLE_TIMEOUT_MS 1000
+ 
+ struct sd_busy_data {
+@@ -1664,6 +1664,12 @@ static int sd_poweroff_notify(struct mmc_card *card)
+ 		goto out;
+ 	}
+ 
++	/* Find out when the command is completed. */
++	err = mmc_poll_for_busy(card, SD_WRITE_EXTR_SINGLE_TIMEOUT_MS, false,
++				MMC_BUSY_EXTR_SINGLE);
++	if (err)
++		goto out;
++
+ 	cb_data.card = card;
+ 	cb_data.reg_buf = reg_buf;
+ 	err = __mmc_poll_for_busy(card, SD_POWEROFF_NOTIFY_TIMEOUT_MS,
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index a593b1fbd69e5..0f3658b36513c 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -524,12 +524,16 @@ static void esdhc_of_adma_workaround(struct sdhci_host *host, u32 intmask)
+ 
+ static int esdhc_of_enable_dma(struct sdhci_host *host)
+ {
++	int ret;
+ 	u32 value;
+ 	struct device *dev = mmc_dev(host->mmc);
+ 
+ 	if (of_device_is_compatible(dev->of_node, "fsl,ls1043a-esdhc") ||
+-	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc"))
+-		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
++	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc")) {
++		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
++		if (ret)
++			return ret;
++	}
+ 
+ 	value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
+ 
+diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
+index bcc595c70a9fb..104dcd702870c 100644
+--- a/drivers/mmc/host/sh_mmcif.c
++++ b/drivers/mmc/host/sh_mmcif.c
+@@ -405,6 +405,9 @@ static int sh_mmcif_dma_slave_config(struct sh_mmcif_host *host,
+ 	struct dma_slave_config cfg = { 0, };
+ 
+ 	res = platform_get_resource(host->pd, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
++
+ 	cfg.direction = direction;
+ 
+ 	if (direction == DMA_DEV_TO_MEM) {
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index 6006c2e8fa2bc..9fd1d6cba3cda 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -1021,8 +1021,8 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr)
+ 				if (port->aggregator &&
+ 				    port->aggregator->is_active &&
+ 				    !__port_is_enabled(port)) {
+-
+ 					__enable_port(port);
++					*update_slave_arr = true;
+ 				}
+ 			}
+ 			break;
+@@ -1779,6 +1779,7 @@ static void ad_agg_selection_logic(struct aggregator *agg,
+ 			     port = port->next_port_in_aggregator) {
+ 				__enable_port(port);
+ 			}
++			*update_slave_arr = true;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 13aa43b5cffd9..1502d200682f8 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -584,7 +584,7 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	get_device(&priv->master_mii_bus->dev);
+ 	priv->master_mii_dn = dn;
+ 
+-	priv->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
++	priv->slave_mii_bus = mdiobus_alloc();
+ 	if (!priv->slave_mii_bus) {
+ 		of_node_put(dn);
+ 		return -ENOMEM;
+@@ -644,8 +644,10 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	}
+ 
+ 	err = mdiobus_register(priv->slave_mii_bus);
+-	if (err && dn)
++	if (err && dn) {
++		mdiobus_free(priv->slave_mii_bus);
+ 		of_node_put(dn);
++	}
+ 
+ 	return err;
+ }
+@@ -653,6 +655,7 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv)
+ {
+ 	mdiobus_unregister(priv->slave_mii_bus);
++	mdiobus_free(priv->slave_mii_bus);
+ 	of_node_put(priv->master_mii_dn);
+ }
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 7056d98d8177b..0909b05d02133 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -498,8 +498,9 @@ static int gswip_mdio_rd(struct mii_bus *bus, int addr, int reg)
+ static int gswip_mdio(struct gswip_priv *priv, struct device_node *mdio_np)
+ {
+ 	struct dsa_switch *ds = priv->ds;
++	int err;
+ 
+-	ds->slave_mii_bus = devm_mdiobus_alloc(priv->dev);
++	ds->slave_mii_bus = mdiobus_alloc();
+ 	if (!ds->slave_mii_bus)
+ 		return -ENOMEM;
+ 
+@@ -512,7 +513,11 @@ static int gswip_mdio(struct gswip_priv *priv, struct device_node *mdio_np)
+ 	ds->slave_mii_bus->parent = priv->dev;
+ 	ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask;
+ 
+-	return of_mdiobus_register(ds->slave_mii_bus, mdio_np);
++	err = of_mdiobus_register(ds->slave_mii_bus, mdio_np);
++	if (err)
++		mdiobus_free(ds->slave_mii_bus);
++
++	return err;
+ }
+ 
+ static int gswip_pce_table_entry_read(struct gswip_priv *priv,
+@@ -2186,8 +2191,10 @@ disable_switch:
+ 	gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
+ 	dsa_unregister_switch(priv->ds);
+ mdio_bus:
+-	if (mdio_np)
++	if (mdio_np) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
++		mdiobus_free(priv->ds->slave_mii_bus);
++	}
+ put_mdio_node:
+ 	of_node_put(mdio_np);
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+@@ -2210,6 +2217,7 @@ static int gswip_remove(struct platform_device *pdev)
+ 
+ 	if (priv->ds->slave_mii_bus) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
++		mdiobus_free(priv->ds->slave_mii_bus);
+ 		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
+ 	}
+ 
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 9890672a206d0..fb59efc7f9266 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2066,7 +2066,7 @@ mt7530_setup_mdio(struct mt7530_priv *priv)
+ 	if (priv->irq)
+ 		mt7530_setup_mdio_irq(priv);
+ 
+-	ret = mdiobus_register(bus);
++	ret = devm_mdiobus_register(dev, bus);
+ 	if (ret) {
+ 		dev_err(dev, "failed to register MDIO bus: %d\n", ret);
+ 		if (priv->irq)
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index cd8462d1e27c0..70cea1b95298a 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3415,7 +3415,7 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
+ 			return err;
+ 	}
+ 
+-	bus = devm_mdiobus_alloc_size(chip->dev, sizeof(*mdio_bus));
++	bus = mdiobus_alloc_size(sizeof(*mdio_bus));
+ 	if (!bus)
+ 		return -ENOMEM;
+ 
+@@ -3440,14 +3440,14 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
+ 	if (!external) {
+ 		err = mv88e6xxx_g2_irq_mdio_setup(chip, bus);
+ 		if (err)
+-			return err;
++			goto out;
+ 	}
+ 
+ 	err = of_mdiobus_register(bus, np);
+ 	if (err) {
+ 		dev_err(chip->dev, "Cannot register MDIO bus (%d)\n", err);
+ 		mv88e6xxx_g2_irq_mdio_free(chip, bus);
+-		return err;
++		goto out;
+ 	}
+ 
+ 	if (external)
+@@ -3456,21 +3456,26 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
+ 		list_add(&mdio_bus->list, &chip->mdios);
+ 
+ 	return 0;
++
++out:
++	mdiobus_free(bus);
++	return err;
+ }
+ 
+ static void mv88e6xxx_mdios_unregister(struct mv88e6xxx_chip *chip)
+ 
+ {
+-	struct mv88e6xxx_mdio_bus *mdio_bus;
++	struct mv88e6xxx_mdio_bus *mdio_bus, *p;
+ 	struct mii_bus *bus;
+ 
+-	list_for_each_entry(mdio_bus, &chip->mdios, list) {
++	list_for_each_entry_safe(mdio_bus, p, &chip->mdios, list) {
+ 		bus = mdio_bus->bus;
+ 
+ 		if (!mdio_bus->external)
+ 			mv88e6xxx_g2_irq_mdio_free(chip, bus);
+ 
+ 		mdiobus_unregister(bus);
++		mdiobus_free(bus);
+ 	}
+ }
+ 
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 45c5ec7a83eaf..12c2acbc2427b 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1064,7 +1064,7 @@ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
+ 		return PTR_ERR(hw);
+ 	}
+ 
+-	bus = devm_mdiobus_alloc_size(dev, sizeof(*mdio_priv));
++	bus = mdiobus_alloc_size(sizeof(*mdio_priv));
+ 	if (!bus)
+ 		return -ENOMEM;
+ 
+@@ -1084,6 +1084,7 @@ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
+ 	rc = mdiobus_register(bus);
+ 	if (rc < 0) {
+ 		dev_err(dev, "failed to register MDIO bus\n");
++		mdiobus_free(bus);
+ 		return rc;
+ 	}
+ 
+@@ -1133,6 +1134,7 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
+ 		lynx_pcs_destroy(pcs);
+ 	}
+ 	mdiobus_unregister(felix->imdio);
++	mdiobus_free(felix->imdio);
+ }
+ 
+ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
+diff --git a/drivers/net/dsa/ocelot/seville_vsc9953.c b/drivers/net/dsa/ocelot/seville_vsc9953.c
+index 92eae63150eae..40d6d1f2c724f 100644
+--- a/drivers/net/dsa/ocelot/seville_vsc9953.c
++++ b/drivers/net/dsa/ocelot/seville_vsc9953.c
+@@ -10,6 +10,7 @@
+ #include <linux/pcs-lynx.h>
+ #include <linux/dsa/ocelot.h>
+ #include <linux/iopoll.h>
++#include <linux/of_mdio.h>
+ #include "felix.h"
+ 
+ #define MSCC_MIIM_CMD_OPR_WRITE			BIT(1)
+@@ -1108,7 +1109,7 @@ static int vsc9953_mdio_bus_alloc(struct ocelot *ocelot)
+ 	snprintf(bus->id, MII_BUS_ID_SIZE, "%s-imdio", dev_name(dev));
+ 
+ 	/* Needed in order to initialize the bus mutex lock */
+-	rc = mdiobus_register(bus);
++	rc = devm_of_mdiobus_register(dev, bus, NULL);
+ 	if (rc < 0) {
+ 		dev_err(dev, "failed to register MDIO bus\n");
+ 		return rc;
+@@ -1160,7 +1161,8 @@ static void vsc9953_mdio_bus_free(struct ocelot *ocelot)
+ 		mdio_device_free(pcs->mdio);
+ 		lynx_pcs_destroy(pcs);
+ 	}
+-	mdiobus_unregister(felix->imdio);
++
++	/* mdiobus_unregister and mdiobus_free handled by devres */
+ }
+ 
+ static const struct felix_info seville_info_vsc9953 = {
+diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c
+index da0d7e68643a9..c39de2a4c1fe0 100644
+--- a/drivers/net/dsa/qca/ar9331.c
++++ b/drivers/net/dsa/qca/ar9331.c
+@@ -378,7 +378,7 @@ static int ar9331_sw_mbus_init(struct ar9331_sw_priv *priv)
+ 	if (!mnp)
+ 		return -ENODEV;
+ 
+-	ret = of_mdiobus_register(mbus, mnp);
++	ret = devm_of_mdiobus_register(dev, mbus, mnp);
+ 	of_node_put(mnp);
+ 	if (ret)
+ 		return ret;
+@@ -1091,7 +1091,6 @@ static void ar9331_sw_remove(struct mdio_device *mdiodev)
+ 	}
+ 
+ 	irq_domain_remove(priv->irqdomain);
+-	mdiobus_unregister(priv->mbus);
+ 	dsa_unregister_switch(&priv->ds);
+ 
+ 	reset_control_assert(priv->sw_reset);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+index 90cb55eb54665..014513ce00a14 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+@@ -418,6 +418,9 @@ static void xgbe_pci_remove(struct pci_dev *pdev)
+ 
+ 	pci_free_irq_vectors(pdata->pcidev);
+ 
++	/* Disable all interrupts in the hardware */
++	XP_IOWRITE(pdata, XP_INT_EN, 0x0);
++
+ 	xgbe_free_pdata(pdata);
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 8e643567abce2..70c8dd6cf3508 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4523,12 +4523,12 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+ #ifdef CONFIG_DEBUG_FS
+ 	dpaa2_dbg_remove(priv);
+ #endif
++
++	unregister_netdev(net_dev);
+ 	rtnl_lock();
+ 	dpaa2_eth_disconnect_mac(priv);
+ 	rtnl_unlock();
+ 
+-	unregister_netdev(net_dev);
+-
+ 	dpaa2_eth_dl_port_del(priv);
+ 	dpaa2_eth_dl_traps_unregister(priv);
+ 	dpaa2_eth_dl_free(priv);
+diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
+index 04a08904305a9..3453e565472c1 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx.c
++++ b/drivers/net/ethernet/google/gve/gve_rx.c
+@@ -609,6 +609,7 @@ static bool gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
+ 
+ 	*packet_size_bytes = skb->len + (skb->protocol ? ETH_HLEN : 0);
+ 	*work_done = work_cnt;
++	skb_record_rx_queue(skb, rx->q_num);
+ 	if (skb_is_nonlinear(skb))
+ 		napi_gro_frags(napi);
+ 	else
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 682a440151a87..d5d33325a413e 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -110,6 +110,7 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
+ 					 struct ibmvnic_sub_crq_queue *tx_scrq);
+ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+ 				struct ibmvnic_long_term_buff *ltb);
++static void ibmvnic_disable_irqs(struct ibmvnic_adapter *adapter);
+ 
+ struct ibmvnic_stat {
+ 	char name[ETH_GSTRING_LEN];
+@@ -1418,7 +1419,7 @@ static int __ibmvnic_open(struct net_device *netdev)
+ 	rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
+ 	if (rc) {
+ 		ibmvnic_napi_disable(adapter);
+-		release_resources(adapter);
++		ibmvnic_disable_irqs(adapter);
+ 		return rc;
+ 	}
+ 
+@@ -1468,9 +1469,6 @@ static int ibmvnic_open(struct net_device *netdev)
+ 		rc = init_resources(adapter);
+ 		if (rc) {
+ 			netdev_err(netdev, "failed to initialize resources\n");
+-			release_resources(adapter);
+-			release_rx_pools(adapter);
+-			release_tx_pools(adapter);
+ 			goto out;
+ 		}
+ 	}
+@@ -1487,6 +1485,13 @@ out:
+ 		adapter->state = VNIC_OPEN;
+ 		rc = 0;
+ 	}
++
++	if (rc) {
++		release_resources(adapter);
++		release_rx_pools(adapter);
++		release_tx_pools(adapter);
++	}
++
+ 	return rc;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index b2db39ee5f85c..b3e1fc6a0a8eb 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -483,6 +483,7 @@ enum ice_pf_flags {
+ 	ICE_FLAG_VF_TRUE_PROMISC_ENA,
+ 	ICE_FLAG_MDD_AUTO_RESET_VF,
+ 	ICE_FLAG_LINK_LENIENT_MODE_ENA,
++	ICE_FLAG_PLUG_AUX_DEV,
+ 	ICE_PF_FLAGS_NBITS		/* must be last */
+ };
+ 
+@@ -880,7 +881,7 @@ static inline void ice_set_rdma_cap(struct ice_pf *pf)
+ 	if (pf->hw.func_caps.common_cap.rdma && pf->num_rdma_msix) {
+ 		set_bit(ICE_FLAG_RDMA_ENA, pf->flags);
+ 		set_bit(ICE_FLAG_AUX_ENA, pf->flags);
+-		ice_plug_aux_dev(pf);
++		set_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index b3066d0fea8ba..e9a0159cb8b92 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -3321,7 +3321,8 @@ ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
+ 	    !ice_fw_supports_report_dflt_cfg(hw)) {
+ 		struct ice_link_default_override_tlv tlv;
+ 
+-		if (ice_get_link_default_override(&tlv, pi))
++		status = ice_get_link_default_override(&tlv, pi);
++		if (status)
+ 			goto out;
+ 
+ 		if (!(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) &&
+diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
+index e375ac849aecd..4f954db01b929 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lag.c
++++ b/drivers/net/ethernet/intel/ice/ice_lag.c
+@@ -204,17 +204,39 @@ ice_lag_unlink(struct ice_lag *lag,
+ 		lag->upper_netdev = NULL;
+ 	}
+ 
+-	if (lag->peer_netdev) {
+-		dev_put(lag->peer_netdev);
+-		lag->peer_netdev = NULL;
+-	}
+-
++	lag->peer_netdev = NULL;
+ 	ice_set_sriov_cap(pf);
+ 	ice_set_rdma_cap(pf);
+ 	lag->bonded = false;
+ 	lag->role = ICE_LAG_NONE;
+ }
+ 
++/**
++ * ice_lag_unregister - handle netdev unregister events
++ * @lag: LAG info struct
++ * @netdev: netdev reporting the event
++ */
++static void ice_lag_unregister(struct ice_lag *lag, struct net_device *netdev)
++{
++	struct ice_pf *pf = lag->pf;
++
++	/* check to see if this event is for this netdev
++	 * check that we are in an aggregate
++	 */
++	if (netdev != lag->netdev || !lag->bonded)
++		return;
++
++	if (lag->upper_netdev) {
++		dev_put(lag->upper_netdev);
++		lag->upper_netdev = NULL;
++		ice_set_sriov_cap(pf);
++		ice_set_rdma_cap(pf);
++	}
++	/* perform some cleanup in case we come back */
++	lag->bonded = false;
++	lag->role = ICE_LAG_NONE;
++}
++
+ /**
+  * ice_lag_changeupper_event - handle LAG changeupper event
+  * @lag: LAG info struct
+@@ -307,7 +329,7 @@ ice_lag_event_handler(struct notifier_block *notif_blk, unsigned long event,
+ 		ice_lag_info_event(lag, ptr);
+ 		break;
+ 	case NETDEV_UNREGISTER:
+-		ice_lag_unlink(lag, ptr);
++		ice_lag_unregister(lag, netdev);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+index d981dc6f23235..85a612838a898 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
++++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+@@ -568,6 +568,7 @@ struct ice_tx_ctx_desc {
+ 			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+ 
+ #define ICE_TXD_CTX_QW1_MSS_S	50
++#define ICE_TXD_CTX_MIN_MSS	64
+ 
+ #define ICE_TXD_CTX_QW1_VSI_S	50
+ #define ICE_TXD_CTX_QW1_VSI_M	(0x3FFULL << ICE_TXD_CTX_QW1_VSI_S)
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 73c61cdb036f9..5b4be432b60ce 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2235,6 +2235,9 @@ static void ice_service_task(struct work_struct *work)
+ 		return;
+ 	}
+ 
++	if (test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
++		ice_plug_aux_dev(pf);
++
+ 	ice_clean_adminq_subtask(pf);
+ 	ice_check_media_subtask(pf);
+ 	ice_check_for_hang_subtask(pf);
+@@ -8540,6 +8543,7 @@ ice_features_check(struct sk_buff *skb,
+ 		   struct net_device __always_unused *netdev,
+ 		   netdev_features_t features)
+ {
++	bool gso = skb_is_gso(skb);
+ 	size_t len;
+ 
+ 	/* No point in doing any of this if neither checksum nor GSO are
+@@ -8552,24 +8556,32 @@ ice_features_check(struct sk_buff *skb,
+ 	/* We cannot support GSO if the MSS is going to be less than
+ 	 * 64 bytes. If it is then we need to drop support for GSO.
+ 	 */
+-	if (skb_is_gso(skb) && (skb_shinfo(skb)->gso_size < 64))
++	if (gso && (skb_shinfo(skb)->gso_size < ICE_TXD_CTX_MIN_MSS))
+ 		features &= ~NETIF_F_GSO_MASK;
+ 
+-	len = skb_network_header(skb) - skb->data;
++	len = skb_network_offset(skb);
+ 	if (len > ICE_TXD_MACLEN_MAX || len & 0x1)
+ 		goto out_rm_features;
+ 
+-	len = skb_transport_header(skb) - skb_network_header(skb);
++	len = skb_network_header_len(skb);
+ 	if (len > ICE_TXD_IPLEN_MAX || len & 0x1)
+ 		goto out_rm_features;
+ 
+ 	if (skb->encapsulation) {
+-		len = skb_inner_network_header(skb) - skb_transport_header(skb);
+-		if (len > ICE_TXD_L4LEN_MAX || len & 0x1)
+-			goto out_rm_features;
++		/* this must work for VXLAN frames AND IPIP/SIT frames, and in
++		 * the case of IPIP frames, the transport header pointer is
++		 * after the inner header! So check to make sure that this
++		 * is a GRE or UDP_TUNNEL frame before doing that math.
++		 */
++		if (gso && (skb_shinfo(skb)->gso_type &
++			    (SKB_GSO_GRE | SKB_GSO_UDP_TUNNEL))) {
++			len = skb_inner_network_header(skb) -
++			      skb_transport_header(skb);
++			if (len > ICE_TXD_L4LEN_MAX || len & 0x1)
++				goto out_rm_features;
++		}
+ 
+-		len = skb_inner_transport_header(skb) -
+-		      skb_inner_network_header(skb);
++		len = skb_inner_network_header_len(skb);
+ 		if (len > ICE_TXD_IPLEN_MAX || len & 0x1)
+ 			goto out_rm_features;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index d81811ab4ec48..c2f87a2d0ef46 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -1984,14 +1984,15 @@ static void ixgbevf_set_rx_buffer_len(struct ixgbevf_adapter *adapter,
+ 	if (adapter->flags & IXGBEVF_FLAGS_LEGACY_RX)
+ 		return;
+ 
+-	set_ring_build_skb_enabled(rx_ring);
++	if (PAGE_SIZE < 8192)
++		if (max_frame > IXGBEVF_MAX_FRAME_BUILD_SKB)
++			set_ring_uses_large_buffer(rx_ring);
+ 
+-	if (PAGE_SIZE < 8192) {
+-		if (max_frame <= IXGBEVF_MAX_FRAME_BUILD_SKB)
+-			return;
++	/* 82599 can't rely on RXDCTL.RLPML to restrict the size of the frame */
++	if (adapter->hw.mac.type == ixgbe_mac_82599_vf && !ring_uses_large_buffer(rx_ring))
++		return;
+ 
+-		set_ring_uses_large_buffer(rx_ring);
+-	}
++	set_ring_build_skb_enabled(rx_ring);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/litex/Kconfig b/drivers/net/ethernet/litex/Kconfig
+index f99adbf26ab4e..04345b929d8e5 100644
+--- a/drivers/net/ethernet/litex/Kconfig
++++ b/drivers/net/ethernet/litex/Kconfig
+@@ -17,7 +17,7 @@ if NET_VENDOR_LITEX
+ 
+ config LITEX_LITEETH
+ 	tristate "LiteX Ethernet support"
+-	depends on OF
++	depends on OF && HAS_IOMEM
+ 	help
+ 	  If you wish to compile a kernel for hardware with a LiteX LiteEth
+ 	  device then you should answer Y to this.
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c b/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c
+index 59783fc46a7b9..10b866e9f7266 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c
+@@ -1103,7 +1103,7 @@ void sparx5_get_stats64(struct net_device *ndev,
+ 	stats->tx_carrier_errors = portstats[spx5_stats_tx_csense_cnt];
+ 	stats->tx_window_errors = portstats[spx5_stats_tx_late_coll_cnt];
+ 	stats->rx_dropped = portstats[spx5_stats_ana_ac_port_stat_lsb_cnt];
+-	for (idx = 0; idx < 2 * SPX5_PRIOS; ++idx, ++stats)
++	for (idx = 0; idx < 2 * SPX5_PRIOS; ++idx)
+ 		stats->rx_dropped += portstats[spx5_stats_green_p0_rx_port_drop
+ 					       + idx];
+ 	stats->tx_dropped = portstats[spx5_stats_tx_local_drop];
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 294bb4eb3833f..02edd383dea22 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1292,6 +1292,8 @@ static void
+ ocelot_populate_ipv4_ptp_event_trap_key(struct ocelot_vcap_filter *trap)
+ {
+ 	trap->key_type = OCELOT_VCAP_KEY_IPV4;
++	trap->key.ipv4.proto.value[0] = IPPROTO_UDP;
++	trap->key.ipv4.proto.mask[0] = 0xff;
+ 	trap->key.ipv4.dport.value = PTP_EV_PORT;
+ 	trap->key.ipv4.dport.mask = 0xffff;
+ }
+@@ -1300,6 +1302,8 @@ static void
+ ocelot_populate_ipv6_ptp_event_trap_key(struct ocelot_vcap_filter *trap)
+ {
+ 	trap->key_type = OCELOT_VCAP_KEY_IPV6;
++	trap->key.ipv4.proto.value[0] = IPPROTO_UDP;
++	trap->key.ipv4.proto.mask[0] = 0xff;
+ 	trap->key.ipv6.dport.value = PTP_EV_PORT;
+ 	trap->key.ipv6.dport.mask = 0xffff;
+ }
+@@ -1308,6 +1312,8 @@ static void
+ ocelot_populate_ipv4_ptp_general_trap_key(struct ocelot_vcap_filter *trap)
+ {
+ 	trap->key_type = OCELOT_VCAP_KEY_IPV4;
++	trap->key.ipv4.proto.value[0] = IPPROTO_UDP;
++	trap->key.ipv4.proto.mask[0] = 0xff;
+ 	trap->key.ipv4.dport.value = PTP_GEN_PORT;
+ 	trap->key.ipv4.dport.mask = 0xffff;
+ }
+@@ -1316,6 +1322,8 @@ static void
+ ocelot_populate_ipv6_ptp_general_trap_key(struct ocelot_vcap_filter *trap)
+ {
+ 	trap->key_type = OCELOT_VCAP_KEY_IPV6;
++	trap->key.ipv4.proto.value[0] = IPPROTO_UDP;
++	trap->key.ipv4.proto.mask[0] = 0xff;
+ 	trap->key.ipv6.dport.value = PTP_GEN_PORT;
+ 	trap->key.ipv6.dport.mask = 0xffff;
+ }
+@@ -1601,12 +1609,11 @@ void ocelot_get_strings(struct ocelot *ocelot, int port, u32 sset, u8 *data)
+ }
+ EXPORT_SYMBOL(ocelot_get_strings);
+ 
++/* Caller must hold &ocelot->stats_lock */
+ static void ocelot_update_stats(struct ocelot *ocelot)
+ {
+ 	int i, j;
+ 
+-	mutex_lock(&ocelot->stats_lock);
+-
+ 	for (i = 0; i < ocelot->num_phys_ports; i++) {
+ 		/* Configure the port to read the stats from */
+ 		ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(i), SYS_STAT_CFG);
+@@ -1625,8 +1632,6 @@ static void ocelot_update_stats(struct ocelot *ocelot)
+ 					      ~(u64)U32_MAX) + val;
+ 		}
+ 	}
+-
+-	mutex_unlock(&ocelot->stats_lock);
+ }
+ 
+ static void ocelot_check_stats_work(struct work_struct *work)
+@@ -1635,7 +1640,9 @@ static void ocelot_check_stats_work(struct work_struct *work)
+ 	struct ocelot *ocelot = container_of(del_work, struct ocelot,
+ 					     stats_work);
+ 
++	mutex_lock(&ocelot->stats_lock);
+ 	ocelot_update_stats(ocelot);
++	mutex_unlock(&ocelot->stats_lock);
+ 
+ 	queue_delayed_work(ocelot->stats_queue, &ocelot->stats_work,
+ 			   OCELOT_STATS_CHECK_DELAY);
+@@ -1645,12 +1652,16 @@ void ocelot_get_ethtool_stats(struct ocelot *ocelot, int port, u64 *data)
+ {
+ 	int i;
+ 
++	mutex_lock(&ocelot->stats_lock);
++
+ 	/* check and update now */
+ 	ocelot_update_stats(ocelot);
+ 
+ 	/* Copy all counters */
+ 	for (i = 0; i < ocelot->num_stats; i++)
+ 		*data++ = ocelot->stats[port * ocelot->num_stats + i];
++
++	mutex_unlock(&ocelot->stats_lock);
+ }
+ EXPORT_SYMBOL(ocelot_get_ethtool_stats);
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index dfb4468fe287a..0a326e04e6923 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -1011,6 +1011,7 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 	struct nfp_flower_repr_priv *repr_priv;
+ 	struct nfp_tun_offloaded_mac *entry;
+ 	struct nfp_repr *repr;
++	u16 nfp_mac_idx;
+ 	int ida_idx;
+ 
+ 	entry = nfp_tunnel_lookup_offloaded_macs(app, mac);
+@@ -1029,8 +1030,6 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 		entry->bridge_count--;
+ 
+ 		if (!entry->bridge_count && entry->ref_count) {
+-			u16 nfp_mac_idx;
+-
+ 			nfp_mac_idx = entry->index & ~NFP_TUN_PRE_TUN_IDX_BIT;
+ 			if (__nfp_tunnel_offload_mac(app, mac, nfp_mac_idx,
+ 						     false)) {
+@@ -1046,7 +1045,6 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 
+ 	/* If MAC is now used by 1 repr set the offloaded MAC index to port. */
+ 	if (entry->ref_count == 1 && list_is_singular(&entry->repr_list)) {
+-		u16 nfp_mac_idx;
+ 		int port, err;
+ 
+ 		repr_priv = list_first_entry(&entry->repr_list,
+@@ -1074,8 +1072,14 @@ nfp_tunnel_del_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 	WARN_ON_ONCE(rhashtable_remove_fast(&priv->tun.offloaded_macs,
+ 					    &entry->ht_node,
+ 					    offloaded_macs_params));
++
++	if (nfp_flower_is_supported_bridge(netdev))
++		nfp_mac_idx = entry->index & ~NFP_TUN_PRE_TUN_IDX_BIT;
++	else
++		nfp_mac_idx = entry->index;
++
+ 	/* If MAC has global ID then extract and free the ida entry. */
+-	if (nfp_tunnel_is_mac_idx_global(entry->index)) {
++	if (nfp_tunnel_is_mac_idx_global(nfp_mac_idx)) {
+ 		ida_idx = nfp_tunnel_get_ida_from_global_mac_idx(entry->index);
+ 		ida_simple_remove(&priv->tun.mac_off_ids, ida_idx);
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 617d0e4c64958..09644ab0d87a7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -756,7 +756,7 @@ static int sun8i_dwmac_reset(struct stmmac_priv *priv)
+ 
+ 	if (err) {
+ 		dev_err(priv->device, "EMAC reset timeout\n");
+-		return -EFAULT;
++		return err;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c5ad28e543e43..f4015579e8adc 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -400,7 +400,7 @@ static void stmmac_lpi_entry_timer_config(struct stmmac_priv *priv, bool en)
+  * Description: this function is to verify and enter in LPI mode in case of
+  * EEE.
+  */
+-static void stmmac_enable_eee_mode(struct stmmac_priv *priv)
++static int stmmac_enable_eee_mode(struct stmmac_priv *priv)
+ {
+ 	u32 tx_cnt = priv->plat->tx_queues_to_use;
+ 	u32 queue;
+@@ -410,13 +410,14 @@ static void stmmac_enable_eee_mode(struct stmmac_priv *priv)
+ 		struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
+ 
+ 		if (tx_q->dirty_tx != tx_q->cur_tx)
+-			return; /* still unfinished work */
++			return -EBUSY; /* still unfinished work */
+ 	}
+ 
+ 	/* Check and enter in LPI mode */
+ 	if (!priv->tx_path_in_lpi_mode)
+ 		stmmac_set_eee_mode(priv, priv->hw,
+ 				priv->plat->en_tx_lpi_clockgating);
++	return 0;
+ }
+ 
+ /**
+@@ -448,8 +449,8 @@ static void stmmac_eee_ctrl_timer(struct timer_list *t)
+ {
+ 	struct stmmac_priv *priv = from_timer(priv, t, eee_ctrl_timer);
+ 
+-	stmmac_enable_eee_mode(priv);
+-	mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
++	if (stmmac_enable_eee_mode(priv))
++		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+ }
+ 
+ /**
+@@ -2641,8 +2642,8 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+ 
+ 	if (priv->eee_enabled && !priv->tx_path_in_lpi_mode &&
+ 	    priv->eee_sw_timer_en) {
+-		stmmac_enable_eee_mode(priv);
+-		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
++		if (stmmac_enable_eee_mode(priv))
++			mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+ 	}
+ 
+ 	/* We still have pending packets, let's call for a new scheduling */
+diff --git a/drivers/net/mdio/mdio-aspeed.c b/drivers/net/mdio/mdio-aspeed.c
+index 966c3b4ad59d1..e2273588c75b6 100644
+--- a/drivers/net/mdio/mdio-aspeed.c
++++ b/drivers/net/mdio/mdio-aspeed.c
+@@ -148,6 +148,7 @@ static const struct of_device_id aspeed_mdio_of_match[] = {
+ 	{ .compatible = "aspeed,ast2600-mdio", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, aspeed_mdio_of_match);
+ 
+ static struct platform_driver aspeed_mdio_driver = {
+ 	.driver = {
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index c10cc2cd53b66..cfda625dbea57 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -553,9 +553,9 @@ static int m88e1121_config_aneg_rgmii_delays(struct phy_device *phydev)
+ 	else
+ 		mscr = 0;
+ 
+-	return phy_modify_paged(phydev, MII_MARVELL_MSCR_PAGE,
+-				MII_88E1121_PHY_MSCR_REG,
+-				MII_88E1121_PHY_MSCR_DELAY_MASK, mscr);
++	return phy_modify_paged_changed(phydev, MII_MARVELL_MSCR_PAGE,
++					MII_88E1121_PHY_MSCR_REG,
++					MII_88E1121_PHY_MSCR_DELAY_MASK, mscr);
+ }
+ 
+ static int m88e1121_config_aneg(struct phy_device *phydev)
+@@ -569,11 +569,13 @@ static int m88e1121_config_aneg(struct phy_device *phydev)
+ 			return err;
+ 	}
+ 
++	changed = err;
++
+ 	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
+ 	if (err < 0)
+ 		return err;
+ 
+-	changed = err;
++	changed |= err;
+ 
+ 	err = genphy_config_aneg(phydev);
+ 	if (err < 0)
+@@ -1213,16 +1215,15 @@ static int m88e1118_config_aneg(struct phy_device *phydev)
+ {
+ 	int err;
+ 
+-	err = genphy_soft_reset(phydev);
++	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
++	err = genphy_config_aneg(phydev);
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = genphy_config_aneg(phydev);
+-	return 0;
++	return genphy_soft_reset(phydev);
+ }
+ 
+ static int m88e1118_config_init(struct phy_device *phydev)
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index ea8aa8c33241c..f0827c3793446 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1467,58 +1467,68 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 	u16 hdr_off;
+ 	u32 *pkt_hdr;
+ 
+-	/* This check is no longer done by usbnet */
+-	if (skb->len < dev->net->hard_header_len)
++	/* At the end of the SKB, there's a header telling us how many packets
++	 * are bundled into this buffer and where we can find an array of
++	 * per-packet metadata (which contains elements encoded into u16).
++	 */
++	if (skb->len < 4)
+ 		return 0;
+-
+ 	skb_trim(skb, skb->len - 4);
+ 	rx_hdr = get_unaligned_le32(skb_tail_pointer(skb));
+-
+ 	pkt_cnt = (u16)rx_hdr;
+ 	hdr_off = (u16)(rx_hdr >> 16);
++
++	if (pkt_cnt == 0)
++		return 0;
++
++	/* Make sure that the bounds of the metadata array are inside the SKB
++	 * (and in front of the counter at the end).
++	 */
++	if (pkt_cnt * 2 + hdr_off > skb->len)
++		return 0;
+ 	pkt_hdr = (u32 *)(skb->data + hdr_off);
+ 
+-	while (pkt_cnt--) {
++	/* Packets must not overlap the metadata array */
++	skb_trim(skb, hdr_off);
++
++	for (; ; pkt_cnt--, pkt_hdr++) {
+ 		u16 pkt_len;
+ 
+ 		le32_to_cpus(pkt_hdr);
+ 		pkt_len = (*pkt_hdr >> 16) & 0x1fff;
+ 
+-		/* Check CRC or runt packet */
+-		if ((*pkt_hdr & AX_RXHDR_CRC_ERR) ||
+-		    (*pkt_hdr & AX_RXHDR_DROP_ERR)) {
+-			skb_pull(skb, (pkt_len + 7) & 0xFFF8);
+-			pkt_hdr++;
+-			continue;
+-		}
+-
+-		if (pkt_cnt == 0) {
+-			skb->len = pkt_len;
+-			/* Skip IP alignment pseudo header */
+-			skb_pull(skb, 2);
+-			skb_set_tail_pointer(skb, skb->len);
+-			skb->truesize = pkt_len + sizeof(struct sk_buff);
+-			ax88179_rx_checksum(skb, pkt_hdr);
+-			return 1;
+-		}
++		if (pkt_len > skb->len)
++			return 0;
+ 
+-		ax_skb = skb_clone(skb, GFP_ATOMIC);
+-		if (ax_skb) {
++		/* Check CRC or runt packet */
++		if (((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) == 0) &&
++		    pkt_len >= 2 + ETH_HLEN) {
++			bool last = (pkt_cnt == 0);
++
++			if (last) {
++				ax_skb = skb;
++			} else {
++				ax_skb = skb_clone(skb, GFP_ATOMIC);
++				if (!ax_skb)
++					return 0;
++			}
+ 			ax_skb->len = pkt_len;
+ 			/* Skip IP alignment pseudo header */
+ 			skb_pull(ax_skb, 2);
+ 			skb_set_tail_pointer(ax_skb, ax_skb->len);
+ 			ax_skb->truesize = pkt_len + sizeof(struct sk_buff);
+ 			ax88179_rx_checksum(ax_skb, pkt_hdr);
++
++			if (last)
++				return 1;
++
+ 			usbnet_skb_return(dev, ax_skb);
+-		} else {
+-			return 0;
+ 		}
+ 
+-		skb_pull(skb, (pkt_len + 7) & 0xFFF8);
+-		pkt_hdr++;
++		/* Trim this packet away from the SKB */
++		if (!skb_pull(skb, (pkt_len + 7) & 0xFFF8))
++			return 0;
+ 	}
+-	return 1;
+ }
+ 
+ static struct sk_buff *
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index ecbc09cbe2590..f478fe7e2b820 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -272,9 +272,10 @@ static void __veth_xdp_flush(struct veth_rq *rq)
+ {
+ 	/* Write ptr_ring before reading rx_notify_masked */
+ 	smp_mb();
+-	if (!rq->rx_notify_masked) {
+-		rq->rx_notify_masked = true;
+-		napi_schedule(&rq->xdp_napi);
++	if (!READ_ONCE(rq->rx_notify_masked) &&
++	    napi_schedule_prep(&rq->xdp_napi)) {
++		WRITE_ONCE(rq->rx_notify_masked, true);
++		__napi_schedule(&rq->xdp_napi);
+ 	}
+ }
+ 
+@@ -919,8 +920,10 @@ static int veth_poll(struct napi_struct *napi, int budget)
+ 		/* Write rx_notify_masked before reading ptr_ring */
+ 		smp_store_mb(rq->rx_notify_masked, false);
+ 		if (unlikely(!__ptr_ring_empty(&rq->xdp_ring))) {
+-			rq->rx_notify_masked = true;
+-			napi_schedule(&rq->xdp_napi);
++			if (napi_schedule_prep(&rq->xdp_napi)) {
++				WRITE_ONCE(rq->rx_notify_masked, true);
++				__napi_schedule(&rq->xdp_napi);
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index ca2ee806d74b6..953ea3d5d4bfb 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3326,7 +3326,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_DEALLOCATE_ZEROES, },
+ 	{ PCI_VDEVICE(INTEL, 0x0a54),	/* Intel P4500/P4600 */
+ 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
+-				NVME_QUIRK_DEALLOCATE_ZEROES, },
++				NVME_QUIRK_DEALLOCATE_ZEROES |
++				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ 	{ PCI_VDEVICE(INTEL, 0x0a55),	/* Dell Express Flash P4600 */
+ 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
+ 				NVME_QUIRK_DEALLOCATE_ZEROES, },
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 4ceb28675fdf6..22046415a0942 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -913,7 +913,15 @@ static inline void nvme_tcp_done_send_req(struct nvme_tcp_queue *queue)
+ 
+ static void nvme_tcp_fail_request(struct nvme_tcp_request *req)
+ {
+-	nvme_tcp_end_request(blk_mq_rq_from_pdu(req), NVME_SC_HOST_PATH_ERROR);
++	if (nvme_tcp_async_req(req)) {
++		union nvme_result res = {};
++
++		nvme_complete_async_event(&req->queue->ctrl->ctrl,
++				cpu_to_le16(NVME_SC_HOST_PATH_ERROR), &res);
++	} else {
++		nvme_tcp_end_request(blk_mq_rq_from_pdu(req),
++				NVME_SC_HOST_PATH_ERROR);
++	}
+ }
+ 
+ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index bda630889f955..604feeb84ee40 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -166,6 +166,9 @@ static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
+ {
+ 	int ret, i;
+ 
++	for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
++		irqs[i] = -1;
++
+ 	/*
+ 	 * If we support PME but can't use MSI/MSI-X for it, we have to
+ 	 * fall back to INTx or other interrupts, e.g., a system shared
+@@ -314,10 +317,8 @@ static int pcie_device_init(struct pci_dev *pdev, int service, int irq)
+  */
+ int pcie_port_device_register(struct pci_dev *dev)
+ {
+-	int status, capabilities, irq_services, i, nr_service;
+-	int irqs[PCIE_PORT_DEVICE_MAXSERVICES] = {
+-		[0 ... PCIE_PORT_DEVICE_MAXSERVICES-1] = -1
+-	};
++	int status, capabilities, i, nr_service;
++	int irqs[PCIE_PORT_DEVICE_MAXSERVICES];
+ 
+ 	/* Enable PCI Express port device */
+ 	status = pci_enable_device(dev);
+@@ -330,32 +331,18 @@ int pcie_port_device_register(struct pci_dev *dev)
+ 		return 0;
+ 
+ 	pci_set_master(dev);
+-
+-	irq_services = 0;
+-	if (IS_ENABLED(CONFIG_PCIE_PME))
+-		irq_services |= PCIE_PORT_SERVICE_PME;
+-	if (IS_ENABLED(CONFIG_PCIEAER))
+-		irq_services |= PCIE_PORT_SERVICE_AER;
+-	if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
+-		irq_services |= PCIE_PORT_SERVICE_HP;
+-	if (IS_ENABLED(CONFIG_PCIE_DPC))
+-		irq_services |= PCIE_PORT_SERVICE_DPC;
+-	irq_services &= capabilities;
+-
+-	if (irq_services) {
+-		/*
+-		 * Initialize service IRQs. Don't use service devices that
+-		 * require interrupts if there is no way to generate them.
+-		 * However, some drivers may have a polling mode (e.g.
+-		 * pciehp_poll_mode) that can be used in the absence of IRQs.
+-		 * Allow them to determine if that is to be used.
+-		 */
+-		status = pcie_init_service_irqs(dev, irqs, irq_services);
+-		if (status) {
+-			irq_services &= PCIE_PORT_SERVICE_HP;
+-			if (!irq_services)
+-				goto error_disable;
+-		}
++	/*
++	 * Initialize service irqs. Don't use service devices that
++	 * require interrupts if there is no way to generate them.
++	 * However, some drivers may have a polling mode (e.g. pciehp_poll_mode)
++	 * that can be used in the absence of irqs.  Allow them to determine
++	 * if that is to be used.
++	 */
++	status = pcie_init_service_irqs(dev, irqs, capabilities);
++	if (status) {
++		capabilities &= PCIE_PORT_SERVICE_HP;
++		if (!capabilities)
++			goto error_disable;
+ 	}
+ 
+ 	/* Allocate child services if any */
+diff --git a/drivers/phy/amlogic/phy-meson-axg-mipi-dphy.c b/drivers/phy/amlogic/phy-meson-axg-mipi-dphy.c
+index cd2332bf0e31a..fdbd64c03e12b 100644
+--- a/drivers/phy/amlogic/phy-meson-axg-mipi-dphy.c
++++ b/drivers/phy/amlogic/phy-meson-axg-mipi-dphy.c
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/bitfield.h>
+ #include <linux/bitops.h>
++#include <linux/bits.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/io.h>
+@@ -250,7 +251,7 @@ static int phy_meson_axg_mipi_dphy_power_on(struct phy *phy)
+ 		     (DIV_ROUND_UP(priv->config.clk_zero, temp) << 16) |
+ 		     (DIV_ROUND_UP(priv->config.clk_prepare, temp) << 24));
+ 	regmap_write(priv->regmap, MIPI_DSI_CLK_TIM1,
+-		     DIV_ROUND_UP(priv->config.clk_pre, temp));
++		     DIV_ROUND_UP(priv->config.clk_pre, BITS_PER_BYTE));
+ 
+ 	regmap_write(priv->regmap, MIPI_DSI_HS_TIM,
+ 		     DIV_ROUND_UP(priv->config.hs_exit, temp) |
+diff --git a/drivers/phy/broadcom/Kconfig b/drivers/phy/broadcom/Kconfig
+index f81e237420799..849c4204f5506 100644
+--- a/drivers/phy/broadcom/Kconfig
++++ b/drivers/phy/broadcom/Kconfig
+@@ -97,8 +97,7 @@ config PHY_BRCM_USB
+ 	depends on OF
+ 	select GENERIC_PHY
+ 	select SOC_BRCMSTB if ARCH_BRCMSTB
+-	default ARCH_BCM4908
+-	default ARCH_BRCMSTB
++	default ARCH_BCM4908 || ARCH_BRCMSTB
+ 	help
+ 	  Enable this to support the Broadcom STB USB PHY.
+ 	  This driver is required by the USB XHCI, EHCI and OHCI
+diff --git a/drivers/phy/phy-core-mipi-dphy.c b/drivers/phy/phy-core-mipi-dphy.c
+index 288c9c67aa748..ccb4045685cdd 100644
+--- a/drivers/phy/phy-core-mipi-dphy.c
++++ b/drivers/phy/phy-core-mipi-dphy.c
+@@ -36,7 +36,7 @@ int phy_mipi_dphy_get_default_config(unsigned long pixel_clock,
+ 
+ 	cfg->clk_miss = 0;
+ 	cfg->clk_post = 60000 + 52 * ui;
+-	cfg->clk_pre = 8000;
++	cfg->clk_pre = 8;
+ 	cfg->clk_prepare = 38000;
+ 	cfg->clk_settle = 95000;
+ 	cfg->clk_term_en = 0;
+@@ -97,7 +97,7 @@ int phy_mipi_dphy_config_validate(struct phy_configure_opts_mipi_dphy *cfg)
+ 	if (cfg->clk_post < (60000 + 52 * ui))
+ 		return -EINVAL;
+ 
+-	if (cfg->clk_pre < 8000)
++	if (cfg->clk_pre < 8)
+ 		return -EINVAL;
+ 
+ 	if (cfg->clk_prepare < 38000 || cfg->clk_prepare > 95000)
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c b/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c
+index 347dc79a18c18..630e01b5c19b9 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c
+@@ -5,6 +5,7 @@
+  * Author: Wyon Bi <bivvy.bi@rock-chips.com>
+  */
+ 
++#include <linux/bits.h>
+ #include <linux/kernel.h>
+ #include <linux/clk.h>
+ #include <linux/iopoll.h>
+@@ -364,7 +365,7 @@ static void inno_dsidphy_mipi_mode_enable(struct inno_dsidphy *inno)
+ 	 * The value of counter for HS Tclk-pre
+ 	 * Tclk-pre = Tpin_txbyteclkhs * value
+ 	 */
+-	clk_pre = DIV_ROUND_UP(cfg->clk_pre, t_txbyteclkhs);
++	clk_pre = DIV_ROUND_UP(cfg->clk_pre, BITS_PER_BYTE);
+ 
+ 	/*
+ 	 * The value of counter for HS Tlpx Time
+diff --git a/drivers/phy/st/phy-stm32-usbphyc.c b/drivers/phy/st/phy-stm32-usbphyc.c
+index e4f4a9be51320..1ecdc26689ce8 100644
+--- a/drivers/phy/st/phy-stm32-usbphyc.c
++++ b/drivers/phy/st/phy-stm32-usbphyc.c
+@@ -304,7 +304,7 @@ static int stm32_usbphyc_pll_enable(struct stm32_usbphyc *usbphyc)
+ 
+ 		ret = __stm32_usbphyc_pll_disable(usbphyc);
+ 		if (ret)
+-			return ret;
++			goto dec_n_pll_cons;
+ 	}
+ 
+ 	ret = stm32_usbphyc_regulators_enable(usbphyc);
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index b3384c31637ae..da546c35d1d5c 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -233,6 +233,7 @@ static const struct clk_div_table clk_div_table[] = {
+ 	{ .val = 1, .div = 2, },
+ 	{ .val = 2, .div = 4, },
+ 	{ .val = 3, .div = 8, },
++	{ /* sentinel */ },
+ };
+ 
+ static const struct wiz_clk_div_sel clk_div_sel[] = {
+diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c
+index f478d8a17115b..9be9535ad7ab7 100644
+--- a/drivers/phy/xilinx/phy-zynqmp.c
++++ b/drivers/phy/xilinx/phy-zynqmp.c
+@@ -134,7 +134,8 @@
+ #define PROT_BUS_WIDTH_10		0x0
+ #define PROT_BUS_WIDTH_20		0x1
+ #define PROT_BUS_WIDTH_40		0x2
+-#define PROT_BUS_WIDTH_SHIFT		2
++#define PROT_BUS_WIDTH_SHIFT(n)		((n) * 2)
++#define PROT_BUS_WIDTH_MASK(n)		GENMASK((n) * 2 + 1, (n) * 2)
+ 
+ /* Number of GT lanes */
+ #define NUM_LANES			4
+@@ -445,12 +446,12 @@ static void xpsgtr_phy_init_sata(struct xpsgtr_phy *gtr_phy)
+ static void xpsgtr_phy_init_sgmii(struct xpsgtr_phy *gtr_phy)
+ {
+ 	struct xpsgtr_dev *gtr_dev = gtr_phy->dev;
++	u32 mask = PROT_BUS_WIDTH_MASK(gtr_phy->lane);
++	u32 val = PROT_BUS_WIDTH_10 << PROT_BUS_WIDTH_SHIFT(gtr_phy->lane);
+ 
+ 	/* Set SGMII protocol TX and RX bus width to 10 bits. */
+-	xpsgtr_write(gtr_dev, TX_PROT_BUS_WIDTH,
+-		     PROT_BUS_WIDTH_10 << (gtr_phy->lane * PROT_BUS_WIDTH_SHIFT));
+-	xpsgtr_write(gtr_dev, RX_PROT_BUS_WIDTH,
+-		     PROT_BUS_WIDTH_10 << (gtr_phy->lane * PROT_BUS_WIDTH_SHIFT));
++	xpsgtr_clr_set(gtr_dev, TX_PROT_BUS_WIDTH, mask, val);
++	xpsgtr_clr_set(gtr_dev, RX_PROT_BUS_WIDTH, mask, val);
+ 
+ 	xpsgtr_bypass_scrambler_8b10b(gtr_phy);
+ }
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index 07a17613fab58..b835050202314 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1194,7 +1194,7 @@ static int io_subchannel_chp_event(struct subchannel *sch,
+ 			else
+ 				path_event[chpid] = PE_NONE;
+ 		}
+-		if (cdev)
++		if (cdev && cdev->drv && cdev->drv->path_event)
+ 			cdev->drv->path_event(cdev, path_event);
+ 		break;
+ 	}
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 54c58392fd5d0..f74a1c09c3518 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -1165,6 +1165,16 @@ struct lpfc_hba {
+ 	uint32_t cfg_hostmem_hgp;
+ 	uint32_t cfg_log_verbose;
+ 	uint32_t cfg_enable_fc4_type;
++#define LPFC_ENABLE_FCP  1
++#define LPFC_ENABLE_NVME 2
++#define LPFC_ENABLE_BOTH 3
++#if (IS_ENABLED(CONFIG_NVME_FC))
++#define LPFC_MAX_ENBL_FC4_TYPE LPFC_ENABLE_BOTH
++#define LPFC_DEF_ENBL_FC4_TYPE LPFC_ENABLE_BOTH
++#else
++#define LPFC_MAX_ENBL_FC4_TYPE LPFC_ENABLE_FCP
++#define LPFC_DEF_ENBL_FC4_TYPE LPFC_ENABLE_FCP
++#endif
+ 	uint32_t cfg_aer_support;
+ 	uint32_t cfg_sriov_nr_virtfn;
+ 	uint32_t cfg_request_firmware_upgrade;
+@@ -1186,9 +1196,6 @@ struct lpfc_hba {
+ 	uint32_t cfg_ras_fwlog_func;
+ 	uint32_t cfg_enable_bbcr;	/* Enable BB Credit Recovery */
+ 	uint32_t cfg_enable_dpp;	/* Enable Direct Packet Push */
+-#define LPFC_ENABLE_FCP  1
+-#define LPFC_ENABLE_NVME 2
+-#define LPFC_ENABLE_BOTH 3
+ 	uint32_t cfg_enable_pbde;
+ 	uint32_t cfg_enable_mi;
+ 	struct nvmet_fc_target_port *targetport;
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 7a7f17d71811b..bac78fbce8d6e 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -3978,8 +3978,8 @@ LPFC_ATTR_R(nvmet_mrq_post,
+  *                    3 - register both FCP and NVME
+  * Supported values are [1,3]. Default value is 3
+  */
+-LPFC_ATTR_R(enable_fc4_type, LPFC_ENABLE_BOTH,
+-	    LPFC_ENABLE_FCP, LPFC_ENABLE_BOTH,
++LPFC_ATTR_R(enable_fc4_type, LPFC_DEF_ENBL_FC4_TYPE,
++	    LPFC_ENABLE_FCP, LPFC_MAX_ENBL_FC4_TYPE,
+ 	    "Enable FC4 Protocol support - FCP / NVME");
+ 
+ /*
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 7628b0634c57a..ece50201d1724 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -2104,7 +2104,7 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
+ 		}
+ 		if (reg_err1 == SLIPORT_ERR1_REG_ERR_CODE_2 &&
+ 		    reg_err2 == SLIPORT_ERR2_REG_FW_RESTART) {
+-			lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
++			lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ 					"3143 Port Down: Firmware Update "
+ 					"Detected\n");
+ 			en_rn_msg = false;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 513a78d08b1d5..f404e6bf813cc 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -13365,6 +13365,7 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
+ 	uint32_t uerr_sta_hi, uerr_sta_lo;
+ 	uint32_t if_type, portsmphr;
+ 	struct lpfc_register portstat_reg;
++	u32 logmask;
+ 
+ 	/*
+ 	 * For now, use the SLI4 device internal unrecoverable error
+@@ -13415,7 +13416,12 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
+ 				readl(phba->sli4_hba.u.if_type2.ERR1regaddr);
+ 			phba->work_status[1] =
+ 				readl(phba->sli4_hba.u.if_type2.ERR2regaddr);
+-			lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
++			logmask = LOG_TRACE_EVENT;
++			if (phba->work_status[0] ==
++				SLIPORT_ERR1_REG_ERR_CODE_2 &&
++			    phba->work_status[1] == SLIPORT_ERR2_REG_FW_RESTART)
++				logmask = LOG_SLI;
++			lpfc_printf_log(phba, KERN_ERR, logmask,
+ 					"2885 Port Status Event: "
+ 					"port status reg 0x%x, "
+ 					"port smphr reg 0x%x, "
+diff --git a/drivers/scsi/myrs.c b/drivers/scsi/myrs.c
+index 6ea323e9a2e34..f6dbc8f2f60a3 100644
+--- a/drivers/scsi/myrs.c
++++ b/drivers/scsi/myrs.c
+@@ -2269,7 +2269,8 @@ static void myrs_cleanup(struct myrs_hba *cs)
+ 	myrs_unmap(cs);
+ 
+ 	if (cs->mmio_base) {
+-		cs->disable_intr(cs);
++		if (cs->disable_intr)
++			cs->disable_intr(cs);
+ 		iounmap(cs->mmio_base);
+ 		cs->mmio_base = NULL;
+ 	}
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 2101fc5761c3c..4c5b945bf3187 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -4161,10 +4161,22 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ 	u32 ret = MPI_IO_STATUS_FAIL;
+ 	u32 regval;
+ 
++	/*
++	 * Fatal errors are programmed to be signalled in irq vector
++	 * pm8001_ha->max_q_num - 1 through pm8001_ha->main_cfg_tbl.pm80xx_tbl.
++	 * fatal_err_interrupt
++	 */
+ 	if (vec == (pm8001_ha->max_q_num - 1)) {
++		u32 mipsall_ready;
++
++		if (pm8001_ha->chip_id == chip_8008 ||
++		    pm8001_ha->chip_id == chip_8009)
++			mipsall_ready = SCRATCH_PAD_MIPSALL_READY_8PORT;
++		else
++			mipsall_ready = SCRATCH_PAD_MIPSALL_READY_16PORT;
++
+ 		regval = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1);
+-		if ((regval & SCRATCH_PAD_MIPSALL_READY) !=
+-					SCRATCH_PAD_MIPSALL_READY) {
++		if ((regval & mipsall_ready) != mipsall_ready) {
+ 			pm8001_ha->controller_fatal_error = true;
+ 			pm8001_dbg(pm8001_ha, FAIL,
+ 				   "Firmware Fatal error! Regval:0x%x\n",
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.h b/drivers/scsi/pm8001/pm80xx_hwi.h
+index c7e5d93bea924..c41ed039c92ac 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.h
++++ b/drivers/scsi/pm8001/pm80xx_hwi.h
+@@ -1405,8 +1405,12 @@ typedef struct SASProtocolTimerConfig SASProtocolTimerConfig_t;
+ #define SCRATCH_PAD_BOOT_LOAD_SUCCESS	0x0
+ #define SCRATCH_PAD_IOP0_READY		0xC00
+ #define SCRATCH_PAD_IOP1_READY		0x3000
+-#define SCRATCH_PAD_MIPSALL_READY	(SCRATCH_PAD_IOP1_READY | \
++#define SCRATCH_PAD_MIPSALL_READY_16PORT	(SCRATCH_PAD_IOP1_READY | \
+ 					SCRATCH_PAD_IOP0_READY | \
++					SCRATCH_PAD_ILA_READY | \
++					SCRATCH_PAD_RAAE_READY)
++#define SCRATCH_PAD_MIPSALL_READY_8PORT	(SCRATCH_PAD_IOP0_READY | \
++					SCRATCH_PAD_ILA_READY | \
+ 					SCRATCH_PAD_RAAE_READY)
+ 
+ /* boot loader state */
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 99a56ca1fb163..fab43dabe5b31 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -2250,6 +2250,7 @@ process_els:
+ 	    io_req->tm_flags == FCP_TMF_TGT_RESET) {
+ 		clear_bit(QEDF_CMD_OUTSTANDING, &io_req->flags);
+ 		io_req->sc_cmd = NULL;
++		kref_put(&io_req->refcount, qedf_release_cmd);
+ 		complete(&io_req->tm_done);
+ 	}
+ 
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 1bf7a22d49480..e0e03443d7703 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -911,7 +911,7 @@ void qedf_ctx_soft_reset(struct fc_lport *lport)
+ 	struct qed_link_output if_link;
+ 
+ 	if (lport->vport) {
+-		QEDF_ERR(NULL, "Cannot issue host reset on NPIV port.\n");
++		printk_ratelimited("Cannot issue host reset on NPIV port.\n");
+ 		return;
+ 	}
+ 
+@@ -1862,6 +1862,7 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
+ 	vport_qedf->cmd_mgr = base_qedf->cmd_mgr;
+ 	init_completion(&vport_qedf->flogi_compl);
+ 	INIT_LIST_HEAD(&vport_qedf->fcports);
++	INIT_DELAYED_WORK(&vport_qedf->stag_work, qedf_stag_change_work);
+ 
+ 	rc = qedf_vport_libfc_config(vport, vn_port);
+ 	if (rc) {
+@@ -3978,7 +3979,9 @@ void qedf_stag_change_work(struct work_struct *work)
+ 	struct qedf_ctx *qedf =
+ 	    container_of(work, struct qedf_ctx, stag_work.work);
+ 
+-	QEDF_ERR(&qedf->dbg_ctx, "Performing software context reset.\n");
++	printk_ratelimited("[%s]:[%s:%d]:%d: Performing software context reset.",
++			dev_name(&qedf->pdev->dev), __func__, __LINE__,
++			qedf->dbg_ctx.host_no);
+ 	qedf_ctx_soft_reset(qedf->lport);
+ }
+ 
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 8b16bbbcb806c..87975d1a21c8b 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -92,6 +92,11 @@ static int ufshcd_parse_clock_info(struct ufs_hba *hba)
+ 		clki->min_freq = clkfreq[i];
+ 		clki->max_freq = clkfreq[i+1];
+ 		clki->name = devm_kstrdup(dev, name, GFP_KERNEL);
++		if (!clki->name) {
++			ret = -ENOMEM;
++			goto out;
++		}
++
+ 		if (!strcmp(name, "ref_clk"))
+ 			clki->keep_link_active = true;
+ 		dev_dbg(dev, "%s: min %u max %u name %s\n", "freq-table-hz",
+@@ -127,6 +132,8 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name,
+ 		return -ENOMEM;
+ 
+ 	vreg->name = devm_kstrdup(dev, name, GFP_KERNEL);
++	if (!vreg->name)
++		return -ENOMEM;
+ 
+ 	snprintf(prop_name, MAX_PROP_SIZE, "%s-max-microamp", name);
+ 	if (of_property_read_u32(np, prop_name, &vreg->max_uA)) {
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index c94377aa82739..ec7d7e01231d7 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -8587,7 +8587,7 @@ static void ufshcd_hba_exit(struct ufs_hba *hba)
+  * @pwr_mode: device power mode to set
+  *
+  * Returns 0 if requested power mode is set successfully
+- * Returns non-zero if failed to set the requested power mode
++ * Returns < 0 if failed to set the requested power mode
+  */
+ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba,
+ 				     enum ufs_dev_pwr_mode pwr_mode)
+@@ -8641,8 +8641,11 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba,
+ 		sdev_printk(KERN_WARNING, sdp,
+ 			    "START_STOP failed for power mode: %d, result %x\n",
+ 			    pwr_mode, ret);
+-		if (ret > 0 && scsi_sense_valid(&sshdr))
+-			scsi_print_sense_hdr(sdp, NULL, &sshdr);
++		if (ret > 0) {
++			if (scsi_sense_valid(&sshdr))
++				scsi_print_sense_hdr(sdp, NULL, &sshdr);
++			ret = -EIO;
++		}
+ 	}
+ 
+ 	if (!ret)
+diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
+index 6a295c88d850f..a7ff0e5b54946 100644
+--- a/drivers/scsi/ufs/ufshci.h
++++ b/drivers/scsi/ufs/ufshci.h
+@@ -142,7 +142,8 @@ static inline u32 ufshci_version(u32 major, u32 minor)
+ #define INT_FATAL_ERRORS	(DEVICE_FATAL_ERROR |\
+ 				CONTROLLER_FATAL_ERROR |\
+ 				SYSTEM_BUS_FATAL_ERROR |\
+-				CRYPTO_ENGINE_FATAL_ERROR)
++				CRYPTO_ENGINE_FATAL_ERROR |\
++				UIC_LINK_LOST)
+ 
+ /* HCS - Host Controller Status 30h */
+ #define DEVICE_PRESENT				0x1
+diff --git a/drivers/staging/fbtft/fbtft.h b/drivers/staging/fbtft/fbtft.h
+index 6869f3603b0e6..9a6c906820f22 100644
+--- a/drivers/staging/fbtft/fbtft.h
++++ b/drivers/staging/fbtft/fbtft.h
+@@ -334,7 +334,10 @@ static int __init fbtft_driver_module_init(void)                           \
+ 	ret = spi_register_driver(&fbtft_driver_spi_driver);               \
+ 	if (ret < 0)                                                       \
+ 		return ret;                                                \
+-	return platform_driver_register(&fbtft_driver_platform_driver);    \
++	ret = platform_driver_register(&fbtft_driver_platform_driver);     \
++	if (ret < 0)                                                       \
++		spi_unregister_driver(&fbtft_driver_spi_driver);           \
++	return ret;                                                        \
+ }                                                                          \
+ 									   \
+ static void __exit fbtft_driver_module_exit(void)                          \
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
+index 8075f60fd02c3..2d5cf1714ae05 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.c
++++ b/drivers/target/iscsi/iscsi_target_tpg.c
+@@ -443,6 +443,9 @@ static bool iscsit_tpg_check_network_portal(
+ 				break;
+ 		}
+ 		spin_unlock(&tpg->tpg_np_lock);
++
++		if (match)
++			break;
+ 	}
+ 	spin_unlock(&tiqn->tiqn_tpg_lock);
+ 
+diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
+index d8c8683863aa0..28d7c0eafc025 100644
+--- a/drivers/tee/optee/ffa_abi.c
++++ b/drivers/tee/optee/ffa_abi.c
+@@ -619,9 +619,18 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx,
+ 		.data2 = (u32)(shm->sec_world_id >> 32),
+ 		.data3 = shm->offset,
+ 	};
+-	struct optee_msg_arg *arg = tee_shm_get_va(shm, 0);
+-	unsigned int rpc_arg_offs = OPTEE_MSG_GET_ARG_SIZE(arg->num_params);
+-	struct optee_msg_arg *rpc_arg = tee_shm_get_va(shm, rpc_arg_offs);
++	struct optee_msg_arg *arg;
++	unsigned int rpc_arg_offs;
++	struct optee_msg_arg *rpc_arg;
++
++	arg = tee_shm_get_va(shm, 0);
++	if (IS_ERR(arg))
++		return PTR_ERR(arg);
++
++	rpc_arg_offs = OPTEE_MSG_GET_ARG_SIZE(arg->num_params);
++	rpc_arg = tee_shm_get_va(shm, rpc_arg_offs);
++	if (IS_ERR(rpc_arg))
++		return PTR_ERR(rpc_arg);
+ 
+ 	return optee_ffa_yielding_call(ctx, &data, rpc_arg);
+ }
+diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
+index cf2e3293567d9..09e7ec673bb6b 100644
+--- a/drivers/tee/optee/smc_abi.c
++++ b/drivers/tee/optee/smc_abi.c
+@@ -71,16 +71,6 @@ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr,
+ 	p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa;
+ 	p->u.memref.shm = shm;
+ 
+-	/* Check that the memref is covered by the shm object */
+-	if (p->u.memref.size) {
+-		size_t o = p->u.memref.shm_offs +
+-			   p->u.memref.size - 1;
+-
+-		rc = tee_shm_get_pa(shm, o, NULL);
+-		if (rc)
+-			return rc;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 5be6d02dc690b..b2b98fe689e9e 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -1369,7 +1369,7 @@ handle_newline:
+ 			put_tty_queue(c, ldata);
+ 			smp_store_release(&ldata->canon_head, ldata->read_head);
+ 			kill_fasync(&tty->fasync, SIGIO, POLL_IN);
+-			wake_up_interruptible_poll(&tty->read_wait, EPOLLIN);
++			wake_up_interruptible_poll(&tty->read_wait, EPOLLIN | EPOLLRDNORM);
+ 			return;
+ 		}
+ 	}
+@@ -1589,7 +1589,7 @@ static void __receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ 
+ 	if (read_cnt(ldata)) {
+ 		kill_fasync(&tty->fasync, SIGIO, POLL_IN);
+-		wake_up_interruptible_poll(&tty->read_wait, EPOLLIN);
++		wake_up_interruptible_poll(&tty->read_wait, EPOLLIN | EPOLLRDNORM);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 3639bb6dc372e..58013698635f0 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -599,8 +599,8 @@ static int vt_setactivate(struct vt_setactivate __user *sa)
+ 	if (vsa.console == 0 || vsa.console > MAX_NR_CONSOLES)
+ 		return -ENXIO;
+ 
+-	vsa.console = array_index_nospec(vsa.console, MAX_NR_CONSOLES + 1);
+ 	vsa.console--;
++	vsa.console = array_index_nospec(vsa.console, MAX_NR_CONSOLES);
+ 	console_lock();
+ 	ret = vc_allocate(vsa.console);
+ 	if (ret) {
+@@ -845,6 +845,7 @@ int vt_ioctl(struct tty_struct *tty,
+ 			return -ENXIO;
+ 
+ 		arg--;
++		arg = array_index_nospec(arg, MAX_NR_CONSOLES);
+ 		console_lock();
+ 		ret = vc_allocate(arg);
+ 		console_unlock();
+diff --git a/drivers/usb/common/ulpi.c b/drivers/usb/common/ulpi.c
+index 8f8405b0d6080..5509d3847af4b 100644
+--- a/drivers/usb/common/ulpi.c
++++ b/drivers/usb/common/ulpi.c
+@@ -130,6 +130,7 @@ static const struct attribute_group *ulpi_dev_attr_groups[] = {
+ 
+ static void ulpi_dev_release(struct device *dev)
+ {
++	of_node_put(dev->of_node);
+ 	kfree(to_ulpi_dev(dev));
+ }
+ 
+@@ -247,12 +248,16 @@ static int ulpi_register(struct device *dev, struct ulpi *ulpi)
+ 		return ret;
+ 
+ 	ret = ulpi_read_id(ulpi);
+-	if (ret)
++	if (ret) {
++		of_node_put(ulpi->dev.of_node);
+ 		return ret;
++	}
+ 
+ 	ret = device_register(&ulpi->dev);
+-	if (ret)
++	if (ret) {
++		put_device(&ulpi->dev);
+ 		return ret;
++	}
+ 
+ 	dev_dbg(&ulpi->dev, "registered ULPI PHY: vendor %04x, product %04x\n",
+ 		ulpi->id.vendor, ulpi->id.product);
+@@ -299,7 +304,6 @@ EXPORT_SYMBOL_GPL(ulpi_register_interface);
+  */
+ void ulpi_unregister_interface(struct ulpi *ulpi)
+ {
+-	of_node_put(ulpi->dev.of_node);
+ 	device_unregister(&ulpi->dev);
+ }
+ EXPORT_SYMBOL_GPL(ulpi_unregister_interface);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 43cf49c4e5e59..da82e4140d545 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -5097,7 +5097,7 @@ int dwc2_hsotg_suspend(struct dwc2_hsotg *hsotg)
+ 		hsotg->gadget.speed = USB_SPEED_UNKNOWN;
+ 		spin_unlock_irqrestore(&hsotg->lock, flags);
+ 
+-		for (ep = 0; ep < hsotg->num_of_eps; ep++) {
++		for (ep = 1; ep < hsotg->num_of_eps; ep++) {
+ 			if (hsotg->eps_in[ep])
+ 				dwc2_hsotg_ep_disable_lock(&hsotg->eps_in[ep]->ep);
+ 			if (hsotg->eps_out[ep])
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 7e3db00e97595..7aab9116b0256 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1271,6 +1271,19 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+ 	if (usb_endpoint_xfer_bulk(dep->endpoint.desc) && dep->stream_capable)
+ 		trb->ctrl |= DWC3_TRB_CTRL_SID_SOFN(stream_id);
+ 
++	/*
++	 * As per data book 4.2.3.2TRB Control Bit Rules section
++	 *
++	 * The controller autonomously checks the HWO field of a TRB to determine if the
++	 * entire TRB is valid. Therefore, software must ensure that the rest of the TRB
++	 * is valid before setting the HWO field to '1'. In most systems, this means that
++	 * software must update the fourth DWORD of a TRB last.
++	 *
++	 * However there is a possibility of CPU re-ordering here which can cause
++	 * controller to observe the HWO bit set prematurely.
++	 * Add a write memory barrier to prevent CPU re-ordering.
++	 */
++	wmb();
+ 	trb->ctrl |= DWC3_TRB_CTRL_HWO;
+ 
+ 	dwc3_ep_inc_enq(dep);
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 3789c329183ca..553382ce38378 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1975,6 +1975,9 @@ unknown:
+ 				if (w_index != 0x5 || (w_value >> 8))
+ 					break;
+ 				interface = w_value & 0xFF;
++				if (interface >= MAX_CONFIG_INTERFACES ||
++				    !os_desc_cfg->interface[interface])
++					break;
+ 				buf[6] = w_index;
+ 				count = count_ext_prop(os_desc_cfg,
+ 					interface);
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 25ad1e97a4585..1922fd02043c5 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1711,16 +1711,24 @@ static void ffs_data_put(struct ffs_data *ffs)
+ 
+ static void ffs_data_closed(struct ffs_data *ffs)
+ {
++	struct ffs_epfile *epfiles;
++	unsigned long flags;
++
+ 	ENTER();
+ 
+ 	if (atomic_dec_and_test(&ffs->opened)) {
+ 		if (ffs->no_disconnect) {
+ 			ffs->state = FFS_DEACTIVATED;
+-			if (ffs->epfiles) {
+-				ffs_epfiles_destroy(ffs->epfiles,
+-						   ffs->eps_count);
+-				ffs->epfiles = NULL;
+-			}
++			spin_lock_irqsave(&ffs->eps_lock, flags);
++			epfiles = ffs->epfiles;
++			ffs->epfiles = NULL;
++			spin_unlock_irqrestore(&ffs->eps_lock,
++							flags);
++
++			if (epfiles)
++				ffs_epfiles_destroy(epfiles,
++						 ffs->eps_count);
++
+ 			if (ffs->setup_state == FFS_SETUP_PENDING)
+ 				__ffs_ep0_stall(ffs);
+ 		} else {
+@@ -1767,14 +1775,27 @@ static struct ffs_data *ffs_data_new(const char *dev_name)
+ 
+ static void ffs_data_clear(struct ffs_data *ffs)
+ {
++	struct ffs_epfile *epfiles;
++	unsigned long flags;
++
+ 	ENTER();
+ 
+ 	ffs_closed(ffs);
+ 
+ 	BUG_ON(ffs->gadget);
+ 
+-	if (ffs->epfiles) {
+-		ffs_epfiles_destroy(ffs->epfiles, ffs->eps_count);
++	spin_lock_irqsave(&ffs->eps_lock, flags);
++	epfiles = ffs->epfiles;
++	ffs->epfiles = NULL;
++	spin_unlock_irqrestore(&ffs->eps_lock, flags);
++
++	/*
++	 * potential race possible between ffs_func_eps_disable
++	 * & ffs_epfile_release therefore maintaining a local
++	 * copy of epfile will save us from use-after-free.
++	 */
++	if (epfiles) {
++		ffs_epfiles_destroy(epfiles, ffs->eps_count);
+ 		ffs->epfiles = NULL;
+ 	}
+ 
+@@ -1922,12 +1943,15 @@ static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count)
+ 
+ static void ffs_func_eps_disable(struct ffs_function *func)
+ {
+-	struct ffs_ep *ep         = func->eps;
+-	struct ffs_epfile *epfile = func->ffs->epfiles;
+-	unsigned count            = func->ffs->eps_count;
++	struct ffs_ep *ep;
++	struct ffs_epfile *epfile;
++	unsigned short count;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&func->ffs->eps_lock, flags);
++	count = func->ffs->eps_count;
++	epfile = func->ffs->epfiles;
++	ep = func->eps;
+ 	while (count--) {
+ 		/* pending requests get nuked */
+ 		if (ep->ep)
+@@ -1945,14 +1969,18 @@ static void ffs_func_eps_disable(struct ffs_function *func)
+ 
+ static int ffs_func_eps_enable(struct ffs_function *func)
+ {
+-	struct ffs_data *ffs      = func->ffs;
+-	struct ffs_ep *ep         = func->eps;
+-	struct ffs_epfile *epfile = ffs->epfiles;
+-	unsigned count            = ffs->eps_count;
++	struct ffs_data *ffs;
++	struct ffs_ep *ep;
++	struct ffs_epfile *epfile;
++	unsigned short count;
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+ 	spin_lock_irqsave(&func->ffs->eps_lock, flags);
++	ffs = func->ffs;
++	ep = func->eps;
++	epfile = ffs->epfiles;
++	count = ffs->eps_count;
+ 	while(count--) {
+ 		ep->ep->driver_data = ep;
+ 
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index 36fa6ef0581b5..097a709549d60 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -203,7 +203,7 @@ static struct uac2_input_terminal_descriptor io_in_it_desc = {
+ 
+ 	.bDescriptorSubtype = UAC_INPUT_TERMINAL,
+ 	/* .bTerminalID = DYNAMIC */
+-	.wTerminalType = cpu_to_le16(UAC_INPUT_TERMINAL_UNDEFINED),
++	.wTerminalType = cpu_to_le16(UAC_INPUT_TERMINAL_MICROPHONE),
+ 	.bAssocTerminal = 0,
+ 	/* .bCSourceID = DYNAMIC */
+ 	.iChannelNames = 0,
+@@ -231,7 +231,7 @@ static struct uac2_output_terminal_descriptor io_out_ot_desc = {
+ 
+ 	.bDescriptorSubtype = UAC_OUTPUT_TERMINAL,
+ 	/* .bTerminalID = DYNAMIC */
+-	.wTerminalType = cpu_to_le16(UAC_OUTPUT_TERMINAL_UNDEFINED),
++	.wTerminalType = cpu_to_le16(UAC_OUTPUT_TERMINAL_SPEAKER),
+ 	.bAssocTerminal = 0,
+ 	/* .bSourceID = DYNAMIC */
+ 	/* .bCSourceID = DYNAMIC */
+diff --git a/drivers/usb/gadget/function/rndis.c b/drivers/usb/gadget/function/rndis.c
+index 64de9f1b874c5..d9ed651f06ac3 100644
+--- a/drivers/usb/gadget/function/rndis.c
++++ b/drivers/usb/gadget/function/rndis.c
+@@ -637,14 +637,17 @@ static int rndis_set_response(struct rndis_params *params,
+ 	rndis_set_cmplt_type *resp;
+ 	rndis_resp_t *r;
+ 
++	BufLength = le32_to_cpu(buf->InformationBufferLength);
++	BufOffset = le32_to_cpu(buf->InformationBufferOffset);
++	if ((BufLength > RNDIS_MAX_TOTAL_SIZE) ||
++	    (BufOffset + 8 >= RNDIS_MAX_TOTAL_SIZE))
++		    return -EINVAL;
++
+ 	r = rndis_add_response(params, sizeof(rndis_set_cmplt_type));
+ 	if (!r)
+ 		return -ENOMEM;
+ 	resp = (rndis_set_cmplt_type *)r->buf;
+ 
+-	BufLength = le32_to_cpu(buf->InformationBufferLength);
+-	BufOffset = le32_to_cpu(buf->InformationBufferOffset);
+-
+ #ifdef	VERBOSE_DEBUG
+ 	pr_debug("%s: Length: %d\n", __func__, BufLength);
+ 	pr_debug("%s: Offset: %d\n", __func__, BufOffset);
+diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c
+index c5a2c734234a5..d86c3a36441ee 100644
+--- a/drivers/usb/gadget/legacy/raw_gadget.c
++++ b/drivers/usb/gadget/legacy/raw_gadget.c
+@@ -1004,7 +1004,7 @@ static int raw_process_ep_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
+ 		ret = -EBUSY;
+ 		goto out_unlock;
+ 	}
+-	if ((in && !ep->ep->caps.dir_in) || (!in && ep->ep->caps.dir_in)) {
++	if (in != usb_endpoint_dir_in(ep->ep->desc)) {
+ 		dev_dbg(&dev->gadget->dev, "fail, wrong direction\n");
+ 		ret = -EINVAL;
+ 		goto out_unlock;
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 57d417a7c3e0a..601829a6b4bad 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2378,6 +2378,8 @@ static void handle_ext_role_switch_states(struct device *dev,
+ 	switch (role) {
+ 	case USB_ROLE_NONE:
+ 		usb3->connection_state = USB_ROLE_NONE;
++		if (cur_role == USB_ROLE_HOST)
++			device_release_driver(host);
+ 		if (usb3->driver)
+ 			usb3_disconnect(usb3);
+ 		usb3_vbus_out(usb3, false);
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 29f4b87a9e743..58cba8ee0277a 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -85,6 +85,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1a86, 0x5523) },
+ 	{ USB_DEVICE(0x1a86, 0x7522) },
+ 	{ USB_DEVICE(0x1a86, 0x7523) },
++	{ USB_DEVICE(0x2184, 0x0057) },
+ 	{ USB_DEVICE(0x4348, 0x5523) },
+ 	{ USB_DEVICE(0x9986, 0x7523) },
+ 	{ },
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 8a60c0d56863e..a27f7efcec6a8 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -51,6 +51,7 @@ static void cp210x_enable_event_mode(struct usb_serial_port *port);
+ static void cp210x_disable_event_mode(struct usb_serial_port *port);
+ 
+ static const struct usb_device_id id_table[] = {
++	{ USB_DEVICE(0x0404, 0x034C) },	/* NCR Retail IO Box */
+ 	{ USB_DEVICE(0x045B, 0x0053) }, /* Renesas RX610 RX-Stick */
+ 	{ USB_DEVICE(0x0471, 0x066A) }, /* AKTAKOM ACE-1001 cable */
+ 	{ USB_DEVICE(0x0489, 0xE000) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
+@@ -68,6 +69,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x0FCF, 0x1004) }, /* Dynastream ANT2USB */
+ 	{ USB_DEVICE(0x0FCF, 0x1006) }, /* Dynastream ANT development board */
+ 	{ USB_DEVICE(0x0FDE, 0xCA05) }, /* OWL Wireless Electricity Monitor CM-160 */
++	{ USB_DEVICE(0x106F, 0x0003) },	/* CPI / Money Controls Bulk Coin Recycler */
+ 	{ USB_DEVICE(0x10A6, 0xAA26) }, /* Knock-off DCU-11 cable */
+ 	{ USB_DEVICE(0x10AB, 0x10C5) }, /* Siemens MC60 Cable */
+ 	{ USB_DEVICE(0x10B5, 0xAC70) }, /* Nokia CA-42 USB */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 4edebd14ef29a..49c08f07c9690 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -969,6 +969,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_023_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_034_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_101_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_159_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_1_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_2_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_3_PID) },
+@@ -977,12 +978,14 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_6_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_7_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_8_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_235_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_257_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_1_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_2_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_3_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_4_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_313_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_320_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_324_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_1_PID) },
+ 	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_2_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 755858ca20bac..d1a9564697a4b 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1506,6 +1506,9 @@
+ #define BRAINBOXES_VX_023_PID		0x1003 /* VX-023 ExpressCard 1 Port RS422/485 */
+ #define BRAINBOXES_VX_034_PID		0x1004 /* VX-034 ExpressCard 2 Port RS422/485 */
+ #define BRAINBOXES_US_101_PID		0x1011 /* US-101 1xRS232 */
++#define BRAINBOXES_US_159_PID		0x1021 /* US-159 1xRS232 */
++#define BRAINBOXES_US_235_PID		0x1017 /* US-235 1xRS232 */
++#define BRAINBOXES_US_320_PID		0x1019 /* US-320 1xRS422/485 */
+ #define BRAINBOXES_US_324_PID		0x1013 /* US-324 1xRS422/485 1Mbaud */
+ #define BRAINBOXES_US_606_1_PID		0x2001 /* US-606 6 Port RS232 Serial Port 1 and 2 */
+ #define BRAINBOXES_US_606_2_PID		0x2002 /* US-606 6 Port RS232 Serial Port 3 and 4 */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 42420bfc983c2..962e9943fc20e 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1649,6 +1649,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x1476, 0xff) },	/* GosunCn ZTE WeLink ME3630 (ECM/NCM mode) */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1481, 0xff, 0x00, 0x00) }, /* ZTE MF871A */
++	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1485, 0xff, 0xff, 0xff),  /* ZTE MF286D */
++	  .driver_info = RSVD(5) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index f36829eeb5a93..2fc1b80a26ad9 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1025,7 +1025,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	struct vc_data *svc = *default_mode;
+ 	struct fbcon_display *t, *p = &fb_display[vc->vc_num];
+ 	int logo = 1, new_rows, new_cols, rows, cols;
+-	int cap, ret;
++	int ret;
+ 
+ 	if (WARN_ON(info_idx == -1))
+ 	    return;
+@@ -1034,7 +1034,6 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 		con2fb_map[vc->vc_num] = info_idx;
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
+-	cap = info->flags;
+ 
+ 	if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
+ 		logo_shown = FBCON_LOGO_DONTSHOW;
+@@ -1137,8 +1136,8 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	ops->graphics = 0;
+ 
+ #ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+-	if ((cap & FBINFO_HWACCEL_COPYAREA) &&
+-	    !(cap & FBINFO_HWACCEL_DISABLED))
++	if ((info->flags & FBINFO_HWACCEL_COPYAREA) &&
++	    !(info->flags & FBINFO_HWACCEL_DISABLED))
+ 		p->scrollmode = SCROLL_MOVE;
+ 	else /* default to something safe */
+ 		p->scrollmode = SCROLL_REDRAW;
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index 3e718cfc19a79..8c39a8571b1fa 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -704,10 +704,11 @@ static int gfs2_release(struct inode *inode, struct file *file)
+ 	kfree(file->private_data);
+ 	file->private_data = NULL;
+ 
+-	if (gfs2_rs_active(&ip->i_res))
+-		gfs2_rs_delete(ip, &inode->i_writecount);
+-	if (file->f_mode & FMODE_WRITE)
++	if (file->f_mode & FMODE_WRITE) {
++		if (gfs2_rs_active(&ip->i_res))
++			gfs2_rs_delete(ip, &inode->i_writecount);
+ 		gfs2_qa_put(ip);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 44a7a4288956b..e946ccba384c5 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -301,9 +301,6 @@ void gfs2_glock_queue_put(struct gfs2_glock *gl)
+ 
+ void gfs2_glock_put(struct gfs2_glock *gl)
+ {
+-	/* last put could call sleepable dlm api */
+-	might_sleep();
+-
+ 	if (lockref_put_or_lock(&gl->gl_lockref))
+ 		return;
+ 
+diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
+index 6a2033131c068..ccd4f245cae24 100644
+--- a/fs/nfs/callback.h
++++ b/fs/nfs/callback.h
+@@ -170,7 +170,7 @@ struct cb_devicenotifyitem {
+ };
+ 
+ struct cb_devicenotifyargs {
+-	int				 ndevs;
++	uint32_t			 ndevs;
+ 	struct cb_devicenotifyitem	 *devs;
+ };
+ 
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 09c5b1cb3e075..c343666d9a428 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -358,7 +358,7 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ 				  struct cb_process_state *cps)
+ {
+ 	struct cb_devicenotifyargs *args = argp;
+-	int i;
++	uint32_t i;
+ 	__be32 res = 0;
+ 	struct nfs_client *clp = cps->clp;
+ 	struct nfs_server *server = NULL;
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index a67c41ec545fd..f90de8043b0f9 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -258,11 +258,9 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 				void *argp)
+ {
+ 	struct cb_devicenotifyargs *args = argp;
++	uint32_t tmp, n, i;
+ 	__be32 *p;
+ 	__be32 status = 0;
+-	u32 tmp;
+-	int n, i;
+-	args->ndevs = 0;
+ 
+ 	/* Num of device notifications */
+ 	p = xdr_inline_decode(xdr, sizeof(uint32_t));
+@@ -271,7 +269,7 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 		goto out;
+ 	}
+ 	n = ntohl(*p++);
+-	if (n <= 0)
++	if (n == 0)
+ 		goto out;
+ 	if (n > ULONG_MAX / sizeof(*args->devs)) {
+ 		status = htonl(NFS4ERR_BADXDR);
+@@ -330,19 +328,21 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 			dev->cbd_immediate = 0;
+ 		}
+ 
+-		args->ndevs++;
+-
+ 		dprintk("%s: type %d layout 0x%x immediate %d\n",
+ 			__func__, dev->cbd_notify_type, dev->cbd_layout_type,
+ 			dev->cbd_immediate);
+ 	}
++	args->ndevs = n;
++	dprintk("%s: ndevs %d\n", __func__, args->ndevs);
++	return 0;
++err:
++	kfree(args->devs);
+ out:
++	args->devs = NULL;
++	args->ndevs = 0;
+ 	dprintk("%s: status %d ndevs %d\n",
+ 		__func__, ntohl(status), args->ndevs);
+ 	return status;
+-err:
+-	kfree(args->devs);
+-	goto out;
+ }
+ 
+ static __be32 decode_sessionid(struct xdr_stream *xdr,
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 1e4dc1ab9312c..a1e87419f3a42 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -177,6 +177,7 @@ struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_init)
+ 	INIT_LIST_HEAD(&clp->cl_superblocks);
+ 	clp->cl_rpcclient = ERR_PTR(-EINVAL);
+ 
++	clp->cl_flags = cl_init->init_flags;
+ 	clp->cl_proto = cl_init->proto;
+ 	clp->cl_nconnect = cl_init->nconnect;
+ 	clp->cl_max_connect = cl_init->max_connect ? cl_init->max_connect : 1;
+@@ -427,7 +428,6 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
+ 			list_add_tail(&new->cl_share_link,
+ 					&nn->nfs_client_list);
+ 			spin_unlock(&nn->nfs_client_lock);
+-			new->cl_flags = cl_init->init_flags;
+ 			return rpc_ops->init_client(new, cl_init);
+ 		}
+ 
+@@ -860,6 +860,13 @@ static int nfs_probe_fsinfo(struct nfs_server *server, struct nfs_fh *mntfh, str
+ 			server->namelen = pathinfo.max_namelen;
+ 	}
+ 
++	if (clp->rpc_ops->discover_trunking != NULL &&
++			(server->caps & NFS_CAP_FS_LOCATIONS)) {
++		error = clp->rpc_ops->discover_trunking(server, mntfh);
++		if (error < 0)
++			return error;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 24ce5652d9be8..b2460a0504411 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -79,6 +79,7 @@ static struct nfs_open_dir_context *alloc_nfs_open_dir_context(struct inode *dir
+ 		ctx->dir_cookie = 0;
+ 		ctx->dup_cookie = 0;
+ 		ctx->page_index = 0;
++		ctx->eof = false;
+ 		spin_lock(&dir->i_lock);
+ 		if (list_empty(&nfsi->open_files) &&
+ 		    (nfsi->cache_validity & NFS_INO_DATA_INVAL_DEFER))
+@@ -167,6 +168,7 @@ struct nfs_readdir_descriptor {
+ 	unsigned int	cache_entry_index;
+ 	signed char duped;
+ 	bool plus;
++	bool eob;
+ 	bool eof;
+ };
+ 
+@@ -866,7 +868,8 @@ static int nfs_readdir_xdr_to_array(struct nfs_readdir_descriptor *desc,
+ 
+ 		status = nfs_readdir_page_filler(desc, entry, pages, pglen,
+ 						 arrays, narrays);
+-	} while (!status && nfs_readdir_page_needs_filling(page));
++	} while (!status && nfs_readdir_page_needs_filling(page) &&
++		page_mapping(page));
+ 
+ 	nfs_readdir_free_pages(pages, array_size);
+ out:
+@@ -987,7 +990,7 @@ static void nfs_do_filldir(struct nfs_readdir_descriptor *desc,
+ 		ent = &array->array[i];
+ 		if (!dir_emit(desc->ctx, ent->name, ent->name_len,
+ 		    nfs_compat_user_ino64(ent->ino), ent->d_type)) {
+-			desc->eof = true;
++			desc->eob = true;
+ 			break;
+ 		}
+ 		memcpy(desc->verf, verf, sizeof(desc->verf));
+@@ -1003,7 +1006,7 @@ static void nfs_do_filldir(struct nfs_readdir_descriptor *desc,
+ 			desc->duped = 1;
+ 	}
+ 	if (array->page_is_eof)
+-		desc->eof = true;
++		desc->eof = !desc->eob;
+ 
+ 	kunmap(desc->page);
+ 	dfprintk(DIRCACHE, "NFS: nfs_do_filldir() filling ended @ cookie %llu\n",
+@@ -1040,12 +1043,13 @@ static int uncached_readdir(struct nfs_readdir_descriptor *desc)
+ 		goto out;
+ 
+ 	desc->page_index = 0;
++	desc->cache_entry_index = 0;
+ 	desc->last_cookie = desc->dir_cookie;
+ 	desc->duped = 0;
+ 
+ 	status = nfs_readdir_xdr_to_array(desc, desc->verf, verf, arrays, sz);
+ 
+-	for (i = 0; !desc->eof && i < sz && arrays[i]; i++) {
++	for (i = 0; !desc->eob && i < sz && arrays[i]; i++) {
+ 		desc->page = arrays[i];
+ 		nfs_do_filldir(desc, verf);
+ 	}
+@@ -1104,9 +1108,15 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ 	desc->duped = dir_ctx->duped;
+ 	page_index = dir_ctx->page_index;
+ 	desc->attr_gencount = dir_ctx->attr_gencount;
++	desc->eof = dir_ctx->eof;
+ 	memcpy(desc->verf, dir_ctx->verf, sizeof(desc->verf));
+ 	spin_unlock(&file->f_lock);
+ 
++	if (desc->eof) {
++		res = 0;
++		goto out_free;
++	}
++
+ 	if (test_and_clear_bit(NFS_INO_FORCE_READDIR, &nfsi->flags) &&
+ 	    list_is_singular(&nfsi->open_files))
+ 		invalidate_mapping_pages(inode->i_mapping, page_index + 1, -1);
+@@ -1140,7 +1150,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ 
+ 		nfs_do_filldir(desc, nfsi->cookieverf);
+ 		nfs_readdir_page_unlock_and_put_cached(desc);
+-	} while (!desc->eof);
++	} while (!desc->eob && !desc->eof);
+ 
+ 	spin_lock(&file->f_lock);
+ 	dir_ctx->dir_cookie = desc->dir_cookie;
+@@ -1148,9 +1158,10 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ 	dir_ctx->duped = desc->duped;
+ 	dir_ctx->attr_gencount = desc->attr_gencount;
+ 	dir_ctx->page_index = desc->page_index;
++	dir_ctx->eof = desc->eof;
+ 	memcpy(dir_ctx->verf, desc->verf, sizeof(dir_ctx->verf));
+ 	spin_unlock(&file->f_lock);
+-
++out_free:
+ 	kfree(desc);
+ 
+ out:
+@@ -1192,6 +1203,7 @@ static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int whence)
+ 		if (offset == 0)
+ 			memset(dir_ctx->verf, 0, sizeof(dir_ctx->verf));
+ 		dir_ctx->duped = 0;
++		dir_ctx->eof = false;
+ 	}
+ 	spin_unlock(&filp->f_lock);
+ 	return offset;
+@@ -2695,7 +2707,7 @@ static struct nfs_access_entry *nfs_access_search_rbtree(struct inode *inode, co
+ 	return NULL;
+ }
+ 
+-static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *cred, struct nfs_access_entry *res, bool may_block)
++static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *cred, u32 *mask, bool may_block)
+ {
+ 	struct nfs_inode *nfsi = NFS_I(inode);
+ 	struct nfs_access_entry *cache;
+@@ -2725,8 +2737,7 @@ static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *
+ 		spin_lock(&inode->i_lock);
+ 		retry = false;
+ 	}
+-	res->cred = cache->cred;
+-	res->mask = cache->mask;
++	*mask = cache->mask;
+ 	list_move_tail(&cache->lru, &nfsi->access_cache_entry_lru);
+ 	err = 0;
+ out:
+@@ -2738,7 +2749,7 @@ out_zap:
+ 	return -ENOENT;
+ }
+ 
+-static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cred, struct nfs_access_entry *res)
++static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cred, u32 *mask)
+ {
+ 	/* Only check the most recently returned cache entry,
+ 	 * but do it without locking.
+@@ -2760,22 +2771,21 @@ static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cre
+ 		goto out;
+ 	if (nfs_check_cache_invalid(inode, NFS_INO_INVALID_ACCESS))
+ 		goto out;
+-	res->cred = cache->cred;
+-	res->mask = cache->mask;
++	*mask = cache->mask;
+ 	err = 0;
+ out:
+ 	rcu_read_unlock();
+ 	return err;
+ }
+ 
+-int nfs_access_get_cached(struct inode *inode, const struct cred *cred, struct
+-nfs_access_entry *res, bool may_block)
++int nfs_access_get_cached(struct inode *inode, const struct cred *cred,
++			  u32 *mask, bool may_block)
+ {
+ 	int status;
+ 
+-	status = nfs_access_get_cached_rcu(inode, cred, res);
++	status = nfs_access_get_cached_rcu(inode, cred, mask);
+ 	if (status != 0)
+-		status = nfs_access_get_cached_locked(inode, cred, res,
++		status = nfs_access_get_cached_locked(inode, cred, mask,
+ 		    may_block);
+ 
+ 	return status;
+@@ -2896,7 +2906,7 @@ static int nfs_do_access(struct inode *inode, const struct cred *cred, int mask)
+ 
+ 	trace_nfs_access_enter(inode);
+ 
+-	status = nfs_access_get_cached(inode, cred, &cache, may_block);
++	status = nfs_access_get_cached(inode, cred, &cache.mask, may_block);
+ 	if (status == 0)
+ 		goto out_cached;
+ 
+diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
+index ed5eaca6801ee..85c5d08dfa9cc 100644
+--- a/fs/nfs/nfs4_fs.h
++++ b/fs/nfs/nfs4_fs.h
+@@ -260,8 +260,8 @@ struct nfs4_state_maintenance_ops {
+ };
+ 
+ struct nfs4_mig_recovery_ops {
+-	int (*get_locations)(struct inode *, struct nfs4_fs_locations *,
+-		struct page *, const struct cred *);
++	int (*get_locations)(struct nfs_server *, struct nfs_fh *,
++		struct nfs4_fs_locations *, struct page *, const struct cred *);
+ 	int (*fsid_present)(struct inode *, const struct cred *);
+ };
+ 
+@@ -280,7 +280,8 @@ struct rpc_clnt *nfs4_negotiate_security(struct rpc_clnt *, struct inode *,
+ int nfs4_submount(struct fs_context *, struct nfs_server *);
+ int nfs4_replace_transport(struct nfs_server *server,
+ 				const struct nfs4_fs_locations *locations);
+-
++size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa,
++			     size_t salen, struct net *net, int port);
+ /* nfs4proc.c */
+ extern int nfs4_handle_exception(struct nfs_server *, int, struct nfs4_exception *);
+ extern int nfs4_async_handle_error(struct rpc_task *task,
+@@ -302,8 +303,9 @@ extern int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait);
+ extern int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle);
+ extern int nfs4_proc_fs_locations(struct rpc_clnt *, struct inode *, const struct qstr *,
+ 				  struct nfs4_fs_locations *, struct page *);
+-extern int nfs4_proc_get_locations(struct inode *, struct nfs4_fs_locations *,
+-		struct page *page, const struct cred *);
++extern int nfs4_proc_get_locations(struct nfs_server *, struct nfs_fh *,
++				   struct nfs4_fs_locations *,
++				   struct page *page, const struct cred *);
+ extern int nfs4_proc_fsid_present(struct inode *, const struct cred *);
+ extern struct rpc_clnt *nfs4_proc_lookup_mountpoint(struct inode *,
+ 						    struct dentry *,
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index d8b5a250ca050..47a6cf892c95a 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1343,8 +1343,11 @@ int nfs4_update_server(struct nfs_server *server, const char *hostname,
+ 	}
+ 	nfs_put_client(clp);
+ 
+-	if (server->nfs_client->cl_hostname == NULL)
++	if (server->nfs_client->cl_hostname == NULL) {
+ 		server->nfs_client->cl_hostname = kstrdup(hostname, GFP_KERNEL);
++		if (server->nfs_client->cl_hostname == NULL)
++			return -ENOMEM;
++	}
+ 	nfs_server_insert_lists(server);
+ 
+ 	return nfs_probe_server(server, NFS_FH(d_inode(server->super->s_root)));
+diff --git a/fs/nfs/nfs4namespace.c b/fs/nfs/nfs4namespace.c
+index 873342308dc0d..3680c8da510c9 100644
+--- a/fs/nfs/nfs4namespace.c
++++ b/fs/nfs/nfs4namespace.c
+@@ -164,16 +164,21 @@ static int nfs4_validate_fspath(struct dentry *dentry,
+ 	return 0;
+ }
+ 
+-static size_t nfs_parse_server_name(char *string, size_t len,
+-		struct sockaddr *sa, size_t salen, struct net *net)
++size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa,
++			     size_t salen, struct net *net, int port)
+ {
+ 	ssize_t ret;
+ 
+ 	ret = rpc_pton(net, string, len, sa, salen);
+ 	if (ret == 0) {
+-		ret = nfs_dns_resolve_name(net, string, len, sa, salen);
+-		if (ret < 0)
+-			ret = 0;
++		ret = rpc_uaddr2sockaddr(net, string, len, sa, salen);
++		if (ret == 0) {
++			ret = nfs_dns_resolve_name(net, string, len, sa, salen);
++			if (ret < 0)
++				ret = 0;
++		}
++	} else if (port) {
++		rpc_set_port(sa, port);
+ 	}
+ 	return ret;
+ }
+@@ -328,7 +333,7 @@ static int try_location(struct fs_context *fc,
+ 			nfs_parse_server_name(buf->data, buf->len,
+ 					      &ctx->nfs_server.address,
+ 					      sizeof(ctx->nfs_server._address),
+-					      fc->net_ns);
++					      fc->net_ns, 0);
+ 		if (ctx->nfs_server.addrlen == 0)
+ 			continue;
+ 
+@@ -496,7 +501,7 @@ static int nfs4_try_replacing_one_location(struct nfs_server *server,
+ 			continue;
+ 
+ 		salen = nfs_parse_server_name(buf->data, buf->len,
+-						sap, addr_bufsize, net);
++						sap, addr_bufsize, net, 0);
+ 		if (salen == 0)
+ 			continue;
+ 		rpc_set_port(sap, NFS_PORT);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index ee3bc79f6ca3a..9a94e758212c8 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3874,6 +3874,8 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
+ 		if (res.attr_bitmask[2] & FATTR4_WORD2_SECURITY_LABEL)
+ 			server->caps |= NFS_CAP_SECURITY_LABEL;
+ #endif
++		if (res.attr_bitmask[0] & FATTR4_WORD0_FS_LOCATIONS)
++			server->caps |= NFS_CAP_FS_LOCATIONS;
+ 		if (!(res.attr_bitmask[0] & FATTR4_WORD0_FILEID))
+ 			server->fattr_valid &= ~NFS_ATTR_FATTR_FILEID;
+ 		if (!(res.attr_bitmask[1] & FATTR4_WORD1_MODE))
+@@ -3932,6 +3934,60 @@ int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ 	return err;
+ }
+ 
++static int _nfs4_discover_trunking(struct nfs_server *server,
++				   struct nfs_fh *fhandle)
++{
++	struct nfs4_fs_locations *locations = NULL;
++	struct page *page;
++	const struct cred *cred;
++	struct nfs_client *clp = server->nfs_client;
++	const struct nfs4_state_maintenance_ops *ops =
++		clp->cl_mvops->state_renewal_ops;
++	int status = -ENOMEM;
++
++	cred = ops->get_state_renewal_cred(clp);
++	if (cred == NULL) {
++		cred = nfs4_get_clid_cred(clp);
++		if (cred == NULL)
++			return -ENOKEY;
++	}
++
++	page = alloc_page(GFP_KERNEL);
++	locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL);
++	if (page == NULL || locations == NULL)
++		goto out;
++
++	status = nfs4_proc_get_locations(server, fhandle, locations, page,
++					 cred);
++	if (status)
++		goto out;
++out:
++	if (page)
++		__free_page(page);
++	kfree(locations);
++	return status;
++}
++
++static int nfs4_discover_trunking(struct nfs_server *server,
++				  struct nfs_fh *fhandle)
++{
++	struct nfs4_exception exception = {
++		.interruptible = true,
++	};
++	struct nfs_client *clp = server->nfs_client;
++	int err = 0;
++
++	if (!nfs4_has_session(clp))
++		goto out;
++	do {
++		err = nfs4_handle_exception(server,
++				_nfs4_discover_trunking(server, fhandle),
++				&exception);
++	} while (exception.retry);
++out:
++	return err;
++}
++
+ static int _nfs4_lookup_root(struct nfs_server *server, struct nfs_fh *fhandle,
+ 		struct nfs_fsinfo *info)
+ {
+@@ -7611,7 +7667,7 @@ static int nfs4_xattr_set_nfs4_user(const struct xattr_handler *handler,
+ 				    const char *key, const void *buf,
+ 				    size_t buflen, int flags)
+ {
+-	struct nfs_access_entry cache;
++	u32 mask;
+ 	int ret;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_XATTR))
+@@ -7626,8 +7682,8 @@ static int nfs4_xattr_set_nfs4_user(const struct xattr_handler *handler,
+ 	 * do a cached access check for the XA* flags to possibly avoid
+ 	 * doing an RPC and getting EACCES back.
+ 	 */
+-	if (!nfs_access_get_cached(inode, current_cred(), &cache, true)) {
+-		if (!(cache.mask & NFS_ACCESS_XAWRITE))
++	if (!nfs_access_get_cached(inode, current_cred(), &mask, true)) {
++		if (!(mask & NFS_ACCESS_XAWRITE))
+ 			return -EACCES;
+ 	}
+ 
+@@ -7648,14 +7704,14 @@ static int nfs4_xattr_get_nfs4_user(const struct xattr_handler *handler,
+ 				    struct dentry *unused, struct inode *inode,
+ 				    const char *key, void *buf, size_t buflen)
+ {
+-	struct nfs_access_entry cache;
++	u32 mask;
+ 	ssize_t ret;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_XATTR))
+ 		return -EOPNOTSUPP;
+ 
+-	if (!nfs_access_get_cached(inode, current_cred(), &cache, true)) {
+-		if (!(cache.mask & NFS_ACCESS_XAREAD))
++	if (!nfs_access_get_cached(inode, current_cred(), &mask, true)) {
++		if (!(mask & NFS_ACCESS_XAREAD))
+ 			return -EACCES;
+ 	}
+ 
+@@ -7680,13 +7736,13 @@ nfs4_listxattr_nfs4_user(struct inode *inode, char *list, size_t list_len)
+ 	ssize_t ret, size;
+ 	char *buf;
+ 	size_t buflen;
+-	struct nfs_access_entry cache;
++	u32 mask;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_XATTR))
+ 		return 0;
+ 
+-	if (!nfs_access_get_cached(inode, current_cred(), &cache, true)) {
+-		if (!(cache.mask & NFS_ACCESS_XALIST))
++	if (!nfs_access_get_cached(inode, current_cred(), &mask, true)) {
++		if (!(mask & NFS_ACCESS_XALIST))
+ 			return 0;
+ 	}
+ 
+@@ -7818,18 +7874,18 @@ int nfs4_proc_fs_locations(struct rpc_clnt *client, struct inode *dir,
+  * appended to this compound to identify the client ID which is
+  * performing recovery.
+  */
+-static int _nfs40_proc_get_locations(struct inode *inode,
++static int _nfs40_proc_get_locations(struct nfs_server *server,
++				     struct nfs_fh *fhandle,
+ 				     struct nfs4_fs_locations *locations,
+ 				     struct page *page, const struct cred *cred)
+ {
+-	struct nfs_server *server = NFS_SERVER(inode);
+ 	struct rpc_clnt *clnt = server->client;
+ 	u32 bitmask[2] = {
+ 		[0] = FATTR4_WORD0_FSID | FATTR4_WORD0_FS_LOCATIONS,
+ 	};
+ 	struct nfs4_fs_locations_arg args = {
+ 		.clientid	= server->nfs_client->cl_clientid,
+-		.fh		= NFS_FH(inode),
++		.fh		= fhandle,
+ 		.page		= page,
+ 		.bitmask	= bitmask,
+ 		.migration	= 1,		/* skip LOOKUP */
+@@ -7875,17 +7931,17 @@ static int _nfs40_proc_get_locations(struct inode *inode,
+  * When the client supports GETATTR(fs_locations_info), it can
+  * be plumbed in here.
+  */
+-static int _nfs41_proc_get_locations(struct inode *inode,
++static int _nfs41_proc_get_locations(struct nfs_server *server,
++				     struct nfs_fh *fhandle,
+ 				     struct nfs4_fs_locations *locations,
+ 				     struct page *page, const struct cred *cred)
+ {
+-	struct nfs_server *server = NFS_SERVER(inode);
+ 	struct rpc_clnt *clnt = server->client;
+ 	u32 bitmask[2] = {
+ 		[0] = FATTR4_WORD0_FSID | FATTR4_WORD0_FS_LOCATIONS,
+ 	};
+ 	struct nfs4_fs_locations_arg args = {
+-		.fh		= NFS_FH(inode),
++		.fh		= fhandle,
+ 		.page		= page,
+ 		.bitmask	= bitmask,
+ 		.migration	= 1,		/* skip LOOKUP */
+@@ -7934,11 +7990,11 @@ static int _nfs41_proc_get_locations(struct inode *inode,
+  * -NFS4ERR_LEASE_MOVED is returned if the server still has leases
+  * from this client that require migration recovery.
+  */
+-int nfs4_proc_get_locations(struct inode *inode,
++int nfs4_proc_get_locations(struct nfs_server *server,
++			    struct nfs_fh *fhandle,
+ 			    struct nfs4_fs_locations *locations,
+ 			    struct page *page, const struct cred *cred)
+ {
+-	struct nfs_server *server = NFS_SERVER(inode);
+ 	struct nfs_client *clp = server->nfs_client;
+ 	const struct nfs4_mig_recovery_ops *ops =
+ 					clp->cl_mvops->mig_recovery_ops;
+@@ -7951,10 +8007,11 @@ int nfs4_proc_get_locations(struct inode *inode,
+ 		(unsigned long long)server->fsid.major,
+ 		(unsigned long long)server->fsid.minor,
+ 		clp->cl_hostname);
+-	nfs_display_fhandle(NFS_FH(inode), __func__);
++	nfs_display_fhandle(fhandle, __func__);
+ 
+ 	do {
+-		status = ops->get_locations(inode, locations, page, cred);
++		status = ops->get_locations(server, fhandle, locations, page,
++					    cred);
+ 		if (status != -NFS4ERR_DELAY)
+ 			break;
+ 		nfs4_handle_exception(server, status, &exception);
+@@ -10423,6 +10480,7 @@ const struct nfs_rpc_ops nfs_v4_clientops = {
+ 	.free_client	= nfs4_free_client,
+ 	.create_server	= nfs4_create_server,
+ 	.clone_server	= nfs_clone_server,
++	.discover_trunking = nfs4_discover_trunking,
+ };
+ 
+ static const struct xattr_handler nfs4_xattr_nfs4_acl_handler = {
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index f63dfa01001c9..499bef9fe1186 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -2098,7 +2098,8 @@ static int nfs4_try_migration(struct nfs_server *server, const struct cred *cred
+ 	}
+ 
+ 	inode = d_inode(server->super->s_root);
+-	result = nfs4_proc_get_locations(inode, locations, page, cred);
++	result = nfs4_proc_get_locations(server, NFS_FH(inode), locations,
++					 page, cred);
+ 	if (result) {
+ 		dprintk("<-- %s: failed to retrieve fs_locations: %d\n",
+ 			__func__, result);
+@@ -2106,6 +2107,9 @@ static int nfs4_try_migration(struct nfs_server *server, const struct cred *cred
+ 	}
+ 
+ 	result = -NFS4ERR_NXIO;
++	if (!locations->nlocations)
++		goto out;
++
+ 	if (!(locations->fattr.valid & NFS_ATTR_FATTR_V4_LOCATIONS)) {
+ 		dprintk("<-- %s: No fs_locations data, migration skipped\n",
+ 			__func__);
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 69862bf6db001..71a00e48bd2dd 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -3696,8 +3696,6 @@ static int decode_attr_fs_locations(struct xdr_stream *xdr, uint32_t *bitmap, st
+ 	if (unlikely(!p))
+ 		goto out_eio;
+ 	n = be32_to_cpup(p);
+-	if (n <= 0)
+-		goto out_eio;
+ 	for (res->nlocations = 0; res->nlocations < n; res->nlocations++) {
+ 		u32 m;
+ 		struct nfs4_fs_location *loc;
+@@ -4200,10 +4198,11 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
+ 		} else
+ 			printk(KERN_WARNING "%s: label too long (%u)!\n",
+ 					__func__, len);
++		if (label && label->label)
++			dprintk("%s: label=%.*s, len=%d, PI=%d, LFS=%d\n",
++				__func__, label->len, (char *)label->label,
++				label->len, label->pi, label->lfs);
+ 	}
+-	if (label && label->label)
+-		dprintk("%s: label=%s, len=%d, PI=%d, LFS=%d\n", __func__,
+-			(char *)label->label, label->len, label->pi, label->lfs);
+ 	return status;
+ }
+ 
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index 8ef53f6726ec8..b540489ea240d 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -150,13 +150,17 @@ nfsd3_proc_read(struct svc_rqst *rqstp)
+ 	unsigned int len;
+ 	int v;
+ 
+-	argp->count = min_t(u32, argp->count, max_blocksize);
+-
+ 	dprintk("nfsd: READ(3) %s %lu bytes at %Lu\n",
+ 				SVCFH_fmt(&argp->fh),
+ 				(unsigned long) argp->count,
+ 				(unsigned long long) argp->offset);
+ 
++	argp->count = min_t(u32, argp->count, max_blocksize);
++	if (argp->offset > (u64)OFFSET_MAX)
++		argp->offset = (u64)OFFSET_MAX;
++	if (argp->offset + argp->count > (u64)OFFSET_MAX)
++		argp->count = (u64)OFFSET_MAX - argp->offset;
++
+ 	v = 0;
+ 	len = argp->count;
+ 	resp->pages = rqstp->rq_next_page;
+@@ -199,6 +203,11 @@ nfsd3_proc_write(struct svc_rqst *rqstp)
+ 				(unsigned long long) argp->offset,
+ 				argp->stable? " stable" : "");
+ 
++	resp->status = nfserr_fbig;
++	if (argp->offset > (u64)OFFSET_MAX ||
++	    argp->offset + argp->len > (u64)OFFSET_MAX)
++		return rpc_success;
++
+ 	fh_copy(&resp->fh, &argp->fh);
+ 	resp->committed = argp->stable;
+ 	nvecs = svc_fill_write_vector(rqstp, &argp->payload);
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index 84088581bbe09..48e8a02ebc83b 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -254,7 +254,7 @@ svcxdr_decode_sattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
+ 		if (xdr_stream_decode_u64(xdr, &newsize) < 0)
+ 			return false;
+ 		iap->ia_valid |= ATTR_SIZE;
+-		iap->ia_size = min_t(u64, newsize, NFS_OFFSET_MAX);
++		iap->ia_size = newsize;
+ 	}
+ 	if (xdr_stream_decode_u32(xdr, &set_it) < 0)
+ 		return false;
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index a36261f89bdfa..c7bcd65dae11d 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -782,12 +782,16 @@ nfsd4_read(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	__be32 status;
+ 
+ 	read->rd_nf = NULL;
+-	if (read->rd_offset >= OFFSET_MAX)
+-		return nfserr_inval;
+ 
+ 	trace_nfsd_read_start(rqstp, &cstate->current_fh,
+ 			      read->rd_offset, read->rd_length);
+ 
++	read->rd_length = min_t(u32, read->rd_length, svc_max_payload(rqstp));
++	if (read->rd_offset > (u64)OFFSET_MAX)
++		read->rd_offset = (u64)OFFSET_MAX;
++	if (read->rd_offset + read->rd_length > (u64)OFFSET_MAX)
++		read->rd_length = (u64)OFFSET_MAX - read->rd_offset;
++
+ 	/*
+ 	 * If we do a zero copy read, then a client will see read data
+ 	 * that reflects the state of the file *after* performing the
+@@ -1018,8 +1022,9 @@ nfsd4_write(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	unsigned long cnt;
+ 	int nvecs;
+ 
+-	if (write->wr_offset >= OFFSET_MAX)
+-		return nfserr_inval;
++	if (write->wr_offset > (u64)OFFSET_MAX ||
++	    write->wr_offset + write->wr_buflen > (u64)OFFSET_MAX)
++		return nfserr_fbig;
+ 
+ 	cnt = write->wr_buflen;
+ 	trace_nfsd_write_start(rqstp, &cstate->current_fh,
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 5a93a5db4fb0a..8b09ecf8b9885 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3997,10 +3997,8 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	}
+ 	xdr_commit_encode(xdr);
+ 
+-	maxcount = svc_max_payload(resp->rqstp);
+-	maxcount = min_t(unsigned long, maxcount,
++	maxcount = min_t(unsigned long, read->rd_length,
+ 			 (xdr->buf->buflen - xdr->buf->len));
+-	maxcount = min_t(unsigned long, maxcount, read->rd_length);
+ 
+ 	if (file->f_op->splice_read &&
+ 	    test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags))
+@@ -4837,10 +4835,8 @@ nfsd4_encode_read_plus(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 		return nfserr_resource;
+ 	xdr_commit_encode(xdr);
+ 
+-	maxcount = svc_max_payload(resp->rqstp);
+-	maxcount = min_t(unsigned long, maxcount,
++	maxcount = min_t(unsigned long, read->rd_length,
+ 			 (xdr->buf->buflen - xdr->buf->len));
+-	maxcount = min_t(unsigned long, maxcount, read->rd_length);
+ 	count    = maxcount;
+ 
+ 	eof = read->rd_offset >= i_size_read(file_inode(file));
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index f1e0d3c51bc23..e662494de3c75 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -320,14 +320,14 @@ TRACE_EVENT(nfsd_export_update,
+ DECLARE_EVENT_CLASS(nfsd_io_class,
+ 	TP_PROTO(struct svc_rqst *rqstp,
+ 		 struct svc_fh	*fhp,
+-		 loff_t		offset,
+-		 unsigned long	len),
++		 u64		offset,
++		 u32		len),
+ 	TP_ARGS(rqstp, fhp, offset, len),
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+ 		__field(u32, fh_hash)
+-		__field(loff_t, offset)
+-		__field(unsigned long, len)
++		__field(u64, offset)
++		__field(u32, len)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->xid = be32_to_cpu(rqstp->rq_xid);
+@@ -335,7 +335,7 @@ DECLARE_EVENT_CLASS(nfsd_io_class,
+ 		__entry->offset = offset;
+ 		__entry->len = len;
+ 	),
+-	TP_printk("xid=0x%08x fh_hash=0x%08x offset=%lld len=%lu",
++	TP_printk("xid=0x%08x fh_hash=0x%08x offset=%llu len=%u",
+ 		  __entry->xid, __entry->fh_hash,
+ 		  __entry->offset, __entry->len)
+ )
+@@ -344,8 +344,8 @@ DECLARE_EVENT_CLASS(nfsd_io_class,
+ DEFINE_EVENT(nfsd_io_class, nfsd_##name,	\
+ 	TP_PROTO(struct svc_rqst *rqstp,	\
+ 		 struct svc_fh	*fhp,		\
+-		 loff_t		offset,		\
+-		 unsigned long	len),		\
++		 u64		offset,		\
++		 u32		len),		\
+ 	TP_ARGS(rqstp, fhp, offset, len))
+ 
+ DEFINE_NFSD_IO_EVENT(read_start);
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 7f2472e4b88f9..6cf79ca806ab5 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -434,6 +434,10 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ 			.ia_size	= iap->ia_size,
+ 		};
+ 
++		host_err = -EFBIG;
++		if (iap->ia_size < 0)
++			goto out_unlock;
++
+ 		host_err = notify_change(&init_user_ns, dentry, &size_attr, NULL);
+ 		if (host_err)
+ 			goto out_unlock;
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index ad667dbc96f5c..cbb22354f3802 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -429,7 +429,8 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
+ }
+ 
+ static void smaps_account(struct mem_size_stats *mss, struct page *page,
+-		bool compound, bool young, bool dirty, bool locked)
++		bool compound, bool young, bool dirty, bool locked,
++		bool migration)
+ {
+ 	int i, nr = compound ? compound_nr(page) : 1;
+ 	unsigned long size = nr * PAGE_SIZE;
+@@ -456,8 +457,15 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
+ 	 * page_count(page) == 1 guarantees the page is mapped exactly once.
+ 	 * If any subpage of the compound page mapped with PTE it would elevate
+ 	 * page_count().
++	 *
++	 * The page_mapcount() is called to get a snapshot of the mapcount.
++	 * Without holding the page lock this snapshot can be slightly wrong as
++	 * we cannot always read the mapcount atomically.  It is not safe to
++	 * call page_mapcount() even with PTL held if the page is not mapped,
++	 * especially for migration entries.  Treat regular migration entries
++	 * as mapcount == 1.
+ 	 */
+-	if (page_count(page) == 1) {
++	if ((page_count(page) == 1) || migration) {
+ 		smaps_page_accumulate(mss, page, size, size << PSS_SHIFT, dirty,
+ 			locked, true);
+ 		return;
+@@ -506,6 +514,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 	struct vm_area_struct *vma = walk->vma;
+ 	bool locked = !!(vma->vm_flags & VM_LOCKED);
+ 	struct page *page = NULL;
++	bool migration = false;
+ 
+ 	if (pte_present(*pte)) {
+ 		page = vm_normal_page(vma, addr, *pte);
+@@ -525,8 +534,11 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 			} else {
+ 				mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT;
+ 			}
+-		} else if (is_pfn_swap_entry(swpent))
++		} else if (is_pfn_swap_entry(swpent)) {
++			if (is_migration_entry(swpent))
++				migration = true;
+ 			page = pfn_swap_entry_to_page(swpent);
++		}
+ 	} else {
+ 		smaps_pte_hole_lookup(addr, walk);
+ 		return;
+@@ -535,7 +547,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
+ 	if (!page)
+ 		return;
+ 
+-	smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte), locked);
++	smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte),
++		      locked, migration);
+ }
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+@@ -546,6 +559,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 	struct vm_area_struct *vma = walk->vma;
+ 	bool locked = !!(vma->vm_flags & VM_LOCKED);
+ 	struct page *page = NULL;
++	bool migration = false;
+ 
+ 	if (pmd_present(*pmd)) {
+ 		/* FOLL_DUMP will return -EFAULT on huge zero page */
+@@ -553,8 +567,10 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 	} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
+ 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
+ 
+-		if (is_migration_entry(entry))
++		if (is_migration_entry(entry)) {
++			migration = true;
+ 			page = pfn_swap_entry_to_page(entry);
++		}
+ 	}
+ 	if (IS_ERR_OR_NULL(page))
+ 		return;
+@@ -566,7 +582,9 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 		/* pass */;
+ 	else
+ 		mss->file_thp += HPAGE_PMD_SIZE;
+-	smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), locked);
++
++	smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd),
++		      locked, migration);
+ }
+ #else
+ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
+@@ -1367,6 +1385,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ {
+ 	u64 frame = 0, flags = 0;
+ 	struct page *page = NULL;
++	bool migration = false;
+ 
+ 	if (pte_present(pte)) {
+ 		if (pm->show_pfn)
+@@ -1388,13 +1407,14 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ 			frame = swp_type(entry) |
+ 				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
+ 		flags |= PM_SWAP;
++		migration = is_migration_entry(entry);
+ 		if (is_pfn_swap_entry(entry))
+ 			page = pfn_swap_entry_to_page(entry);
+ 	}
+ 
+ 	if (page && !PageAnon(page))
+ 		flags |= PM_FILE;
+-	if (page && page_mapcount(page) == 1)
++	if (page && !migration && page_mapcount(page) == 1)
+ 		flags |= PM_MMAP_EXCLUSIVE;
+ 	if (vma->vm_flags & VM_SOFTDIRTY)
+ 		flags |= PM_SOFT_DIRTY;
+@@ -1410,8 +1430,9 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 	spinlock_t *ptl;
+ 	pte_t *pte, *orig_pte;
+ 	int err = 0;
+-
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
++	bool migration = false;
++
+ 	ptl = pmd_trans_huge_lock(pmdp, vma);
+ 	if (ptl) {
+ 		u64 flags = 0, frame = 0;
+@@ -1450,11 +1471,12 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 			if (pmd_swp_uffd_wp(pmd))
+ 				flags |= PM_UFFD_WP;
+ 			VM_BUG_ON(!is_pmd_migration_entry(pmd));
++			migration = is_migration_entry(entry);
+ 			page = pfn_swap_entry_to_page(entry);
+ 		}
+ #endif
+ 
+-		if (page && page_mapcount(page) == 1)
++		if (page && !migration && page_mapcount(page) == 1)
+ 			flags |= PM_MMAP_EXCLUSIVE;
+ 
+ 		for (; addr != end; addr += PAGE_SIZE) {
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 0c5c403f4be6b..5c59d045359bb 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -217,7 +217,7 @@ struct obj_cgroup {
+ 	struct mem_cgroup *memcg;
+ 	atomic_t nr_charged_bytes;
+ 	union {
+-		struct list_head list;
++		struct list_head list; /* protected by objcg_lock */
+ 		struct rcu_head rcu;
+ 	};
+ };
+@@ -313,7 +313,8 @@ struct mem_cgroup {
+ #ifdef CONFIG_MEMCG_KMEM
+ 	int kmemcg_id;
+ 	struct obj_cgroup __rcu *objcg;
+-	struct list_head objcg_list; /* list of inherited objcgs */
++	/* list of inherited objcgs, protected by objcg_lock */
++	struct list_head objcg_list;
+ #endif
+ 
+ 	MEMCG_PADDING(_pad2_);
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 05f249f20f55d..29a2ab5de1daa 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -105,6 +105,7 @@ struct nfs_open_dir_context {
+ 	__u64 dup_cookie;
+ 	pgoff_t page_index;
+ 	signed char duped;
++	bool eof;
+ };
+ 
+ /*
+@@ -533,8 +534,8 @@ extern int nfs_instantiate(struct dentry *dentry, struct nfs_fh *fh,
+ 			struct nfs_fattr *fattr);
+ extern int nfs_may_open(struct inode *inode, const struct cred *cred, int openflags);
+ extern void nfs_access_zap_cache(struct inode *inode);
+-extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, struct nfs_access_entry *res,
+-				 bool may_block);
++extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred,
++				 u32 *mask, bool may_block);
+ 
+ /*
+  * linux/fs/nfs/symlink.c
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index 2a9acbfe00f0f..9a6e70ccde56e 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -287,5 +287,5 @@ struct nfs_server {
+ #define NFS_CAP_COPY_NOTIFY	(1U << 27)
+ #define NFS_CAP_XATTR		(1U << 28)
+ #define NFS_CAP_READ_PLUS	(1U << 29)
+-
++#define NFS_CAP_FS_LOCATIONS	(1U << 30)
+ #endif
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index 967a0098f0a97..695fa84611b66 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -1795,6 +1795,7 @@ struct nfs_rpc_ops {
+ 	struct nfs_server *(*create_server)(struct fs_context *);
+ 	struct nfs_server *(*clone_server)(struct nfs_server *, struct nfs_fh *,
+ 					   struct nfs_fattr *, rpc_authflavor_t);
++	int	(*discover_trunking)(struct nfs_server *, struct nfs_fh *);
+ };
+ 
+ /*
+diff --git a/include/linux/suspend.h b/include/linux/suspend.h
+index 8af13ba60c7e4..4bcd65679cee0 100644
+--- a/include/linux/suspend.h
++++ b/include/linux/suspend.h
+@@ -430,15 +430,7 @@ struct platform_hibernation_ops {
+ 
+ #ifdef CONFIG_HIBERNATION
+ /* kernel/power/snapshot.c */
+-extern void __register_nosave_region(unsigned long b, unsigned long e, int km);
+-static inline void __init register_nosave_region(unsigned long b, unsigned long e)
+-{
+-	__register_nosave_region(b, e, 0);
+-}
+-static inline void __init register_nosave_region_late(unsigned long b, unsigned long e)
+-{
+-	__register_nosave_region(b, e, 1);
+-}
++extern void register_nosave_region(unsigned long b, unsigned long e);
+ extern int swsusp_page_is_forbidden(struct page *);
+ extern void swsusp_set_page_free(struct page *);
+ extern void swsusp_unset_page_free(struct page *);
+@@ -457,7 +449,6 @@ int pfn_is_nosave(unsigned long pfn);
+ int hibernate_quiet_exec(int (*func)(void *data), void *data);
+ #else /* CONFIG_HIBERNATION */
+ static inline void register_nosave_region(unsigned long b, unsigned long e) {}
+-static inline void register_nosave_region_late(unsigned long b, unsigned long e) {}
+ static inline int swsusp_page_is_forbidden(struct page *p) { return 0; }
+ static inline void swsusp_set_page_free(struct page *p) {}
+ static inline void swsusp_unset_page_free(struct page *p) {}
+@@ -505,14 +496,14 @@ extern void ksys_sync_helper(void);
+ 
+ /* drivers/base/power/wakeup.c */
+ extern bool events_check_enabled;
+-extern unsigned int pm_wakeup_irq;
+ extern suspend_state_t pm_suspend_target_state;
+ 
+ extern bool pm_wakeup_pending(void);
+ extern void pm_system_wakeup(void);
+ extern void pm_system_cancel_wakeup(void);
+-extern void pm_wakeup_clear(bool reset);
++extern void pm_wakeup_clear(unsigned int irq_number);
+ extern void pm_system_irq_wakeup(unsigned int irq_number);
++extern unsigned int pm_wakeup_irq(void);
+ extern bool pm_get_wakeup_count(unsigned int *count, bool block);
+ extern bool pm_save_wakeup_count(unsigned int count);
+ extern void pm_wakep_autosleep_enabled(bool set);
+diff --git a/include/net/dst_metadata.h b/include/net/dst_metadata.h
+index 14efa0ded75dd..adab27ba1ecbf 100644
+--- a/include/net/dst_metadata.h
++++ b/include/net/dst_metadata.h
+@@ -123,8 +123,20 @@ static inline struct metadata_dst *tun_dst_unclone(struct sk_buff *skb)
+ 
+ 	memcpy(&new_md->u.tun_info, &md_dst->u.tun_info,
+ 	       sizeof(struct ip_tunnel_info) + md_size);
++#ifdef CONFIG_DST_CACHE
++	/* Unclone the dst cache if there is one */
++	if (new_md->u.tun_info.dst_cache.cache) {
++		int ret;
++
++		ret = dst_cache_init(&new_md->u.tun_info.dst_cache, GFP_ATOMIC);
++		if (ret) {
++			metadata_dst_free(new_md);
++			return ERR_PTR(ret);
++		}
++	}
++#endif
++
+ 	skb_dst_drop(skb);
+-	dst_hold(&new_md->dst);
+ 	skb_dst_set(skb, &new_md->dst);
+ 	return new_md;
+ }
+diff --git a/include/uapi/linux/netfilter/nf_conntrack_common.h b/include/uapi/linux/netfilter/nf_conntrack_common.h
+index 4b3395082d15c..26071021e986f 100644
+--- a/include/uapi/linux/netfilter/nf_conntrack_common.h
++++ b/include/uapi/linux/netfilter/nf_conntrack_common.h
+@@ -106,7 +106,7 @@ enum ip_conntrack_status {
+ 	IPS_NAT_CLASH = IPS_UNTRACKED,
+ #endif
+ 
+-	/* Conntrack got a helper explicitly attached via CT target. */
++	/* Conntrack got a helper explicitly attached (ruleset, ctnetlink). */
+ 	IPS_HELPER_BIT = 13,
+ 	IPS_HELPER = (1 << IPS_HELPER_BIT),
+ 
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index b517947bfa48d..2dc94a0e34474 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -185,7 +185,7 @@ static int audit_match_perm(struct audit_context *ctx, int mask)
+ 	case AUDITSC_EXECVE:
+ 		return mask & AUDIT_PERM_EXEC;
+ 	case AUDITSC_OPENAT2:
+-		return mask & ACC_MODE((u32)((struct open_how *)ctx->argv[2])->flags);
++		return mask & ACC_MODE((u32)ctx->openat2.flags);
+ 	default:
+ 		return 0;
+ 	}
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 04e6e2dae60e4..e4a43d475ba6a 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -839,7 +839,7 @@ static DEFINE_PER_CPU(struct list_head, cgrp_cpuctx_list);
+  */
+ static void perf_cgroup_switch(struct task_struct *task, int mode)
+ {
+-	struct perf_cpu_context *cpuctx;
++	struct perf_cpu_context *cpuctx, *tmp;
+ 	struct list_head *list;
+ 	unsigned long flags;
+ 
+@@ -850,7 +850,7 @@ static void perf_cgroup_switch(struct task_struct *task, int mode)
+ 	local_irq_save(flags);
+ 
+ 	list = this_cpu_ptr(&cgrp_cpuctx_list);
+-	list_for_each_entry(cpuctx, list, cgrp_cpuctx_entry) {
++	list_for_each_entry_safe(cpuctx, tmp, list, cgrp_cpuctx_entry) {
+ 		WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0);
+ 
+ 		perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+@@ -6004,6 +6004,8 @@ static void ring_buffer_attach(struct perf_event *event,
+ 	struct perf_buffer *old_rb = NULL;
+ 	unsigned long flags;
+ 
++	WARN_ON_ONCE(event->parent);
++
+ 	if (event->rb) {
+ 		/*
+ 		 * Should be impossible, we set this when removing
+@@ -6061,6 +6063,9 @@ static void ring_buffer_wakeup(struct perf_event *event)
+ {
+ 	struct perf_buffer *rb;
+ 
++	if (event->parent)
++		event = event->parent;
++
+ 	rcu_read_lock();
+ 	rb = rcu_dereference(event->rb);
+ 	if (rb) {
+@@ -6074,6 +6079,9 @@ struct perf_buffer *ring_buffer_get(struct perf_event *event)
+ {
+ 	struct perf_buffer *rb;
+ 
++	if (event->parent)
++		event = event->parent;
++
+ 	rcu_read_lock();
+ 	rb = rcu_dereference(event->rb);
+ 	if (rb) {
+@@ -6772,7 +6780,7 @@ static unsigned long perf_prepare_sample_aux(struct perf_event *event,
+ 	if (WARN_ON_ONCE(READ_ONCE(sampler->oncpu) != smp_processor_id()))
+ 		goto out;
+ 
+-	rb = ring_buffer_get(sampler->parent ? sampler->parent : sampler);
++	rb = ring_buffer_get(sampler);
+ 	if (!rb)
+ 		goto out;
+ 
+@@ -6838,7 +6846,7 @@ static void perf_aux_sample_output(struct perf_event *event,
+ 	if (WARN_ON_ONCE(!sampler || !data->aux_size))
+ 		return;
+ 
+-	rb = ring_buffer_get(sampler->parent ? sampler->parent : sampler);
++	rb = ring_buffer_get(sampler);
+ 	if (!rb)
+ 		return;
+ 
+diff --git a/kernel/power/main.c b/kernel/power/main.c
+index 44169f3081fdc..7e646079fbeb2 100644
+--- a/kernel/power/main.c
++++ b/kernel/power/main.c
+@@ -504,7 +504,10 @@ static ssize_t pm_wakeup_irq_show(struct kobject *kobj,
+ 					struct kobj_attribute *attr,
+ 					char *buf)
+ {
+-	return pm_wakeup_irq ? sprintf(buf, "%u\n", pm_wakeup_irq) : -ENODATA;
++	if (!pm_wakeup_irq())
++		return -ENODATA;
++
++	return sprintf(buf, "%u\n", pm_wakeup_irq());
+ }
+ 
+ power_attr_ro(pm_wakeup_irq);
+diff --git a/kernel/power/process.c b/kernel/power/process.c
+index b7e7798637b8e..11b570fcf0494 100644
+--- a/kernel/power/process.c
++++ b/kernel/power/process.c
+@@ -134,7 +134,7 @@ int freeze_processes(void)
+ 	if (!pm_freezing)
+ 		atomic_inc(&system_freezing_cnt);
+ 
+-	pm_wakeup_clear(true);
++	pm_wakeup_clear(0);
+ 	pr_info("Freezing user space processes ... ");
+ 	pm_freezing = true;
+ 	error = try_to_freeze_tasks(true);
+diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
+index f7a9860782135..330d499376924 100644
+--- a/kernel/power/snapshot.c
++++ b/kernel/power/snapshot.c
+@@ -978,8 +978,7 @@ static void memory_bm_recycle(struct memory_bitmap *bm)
+  * Register a range of page frames the contents of which should not be saved
+  * during hibernation (to be used in the early initialization code).
+  */
+-void __init __register_nosave_region(unsigned long start_pfn,
+-				     unsigned long end_pfn, int use_kmalloc)
++void __init register_nosave_region(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ 	struct nosave_region *region;
+ 
+@@ -995,18 +994,12 @@ void __init __register_nosave_region(unsigned long start_pfn,
+ 			goto Report;
+ 		}
+ 	}
+-	if (use_kmalloc) {
+-		/* During init, this shouldn't fail */
+-		region = kmalloc(sizeof(struct nosave_region), GFP_KERNEL);
+-		BUG_ON(!region);
+-	} else {
+-		/* This allocation cannot fail */
+-		region = memblock_alloc(sizeof(struct nosave_region),
+-					SMP_CACHE_BYTES);
+-		if (!region)
+-			panic("%s: Failed to allocate %zu bytes\n", __func__,
+-			      sizeof(struct nosave_region));
+-	}
++	/* This allocation cannot fail */
++	region = memblock_alloc(sizeof(struct nosave_region),
++				SMP_CACHE_BYTES);
++	if (!region)
++		panic("%s: Failed to allocate %zu bytes\n", __func__,
++		      sizeof(struct nosave_region));
+ 	region->start_pfn = start_pfn;
+ 	region->end_pfn = end_pfn;
+ 	list_add_tail(&region->list, &nosave_regions);
+diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
+index 80cc1f0f502b8..6fcdee7e87a57 100644
+--- a/kernel/power/suspend.c
++++ b/kernel/power/suspend.c
+@@ -136,8 +136,6 @@ static void s2idle_loop(void)
+ 			break;
+ 		}
+ 
+-		pm_wakeup_clear(false);
+-
+ 		s2idle_enter();
+ 	}
+ 
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 77563109c0ea0..d24823b3c3f9f 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8176,9 +8176,7 @@ int __cond_resched_lock(spinlock_t *lock)
+ 
+ 	if (spin_needbreak(lock) || resched) {
+ 		spin_unlock(lock);
+-		if (resched)
+-			preempt_schedule_common();
+-		else
++		if (!_cond_resched())
+ 			cpu_relax();
+ 		ret = 1;
+ 		spin_lock(lock);
+@@ -8196,9 +8194,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
+ 
+ 	if (rwlock_needbreak(lock) || resched) {
+ 		read_unlock(lock);
+-		if (resched)
+-			preempt_schedule_common();
+-		else
++		if (!_cond_resched())
+ 			cpu_relax();
+ 		ret = 1;
+ 		read_lock(lock);
+@@ -8216,9 +8212,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
+ 
+ 	if (rwlock_needbreak(lock) || resched) {
+ 		write_unlock(lock);
+-		if (resched)
+-			preempt_schedule_common();
+-		else
++		if (!_cond_resched())
+ 			cpu_relax();
+ 		ret = 1;
+ 		write_lock(lock);
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 4d8f44a177274..db10e73d06e02 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -29,6 +29,9 @@
+ #include <linux/syscalls.h>
+ #include <linux/sysctl.h>
+ 
++/* Not exposed in headers: strictly internal use only. */
++#define SECCOMP_MODE_DEAD	(SECCOMP_MODE_FILTER + 1)
++
+ #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+ #include <asm/syscall.h>
+ #endif
+@@ -1010,6 +1013,7 @@ static void __secure_computing_strict(int this_syscall)
+ #ifdef SECCOMP_DEBUG
+ 	dump_stack();
+ #endif
++	current->seccomp.mode = SECCOMP_MODE_DEAD;
+ 	seccomp_log(this_syscall, SIGKILL, SECCOMP_RET_KILL_THREAD, true);
+ 	do_exit(SIGKILL);
+ }
+@@ -1261,6 +1265,7 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
+ 	case SECCOMP_RET_KILL_THREAD:
+ 	case SECCOMP_RET_KILL_PROCESS:
+ 	default:
++		current->seccomp.mode = SECCOMP_MODE_DEAD;
+ 		seccomp_log(this_syscall, SIGSYS, action, true);
+ 		/* Dump core only if this is the last remaining thread. */
+ 		if (action != SECCOMP_RET_KILL_THREAD ||
+@@ -1309,6 +1314,11 @@ int __secure_computing(const struct seccomp_data *sd)
+ 		return 0;
+ 	case SECCOMP_MODE_FILTER:
+ 		return __seccomp_filter(this_syscall, sd, false);
++	/* Surviving SECCOMP_RET_KILL_* must be proactively impossible. */
++	case SECCOMP_MODE_DEAD:
++		WARN_ON_ONCE(1);
++		do_exit(SIGKILL);
++		return -1;
+ 	default:
+ 		BUG();
+ 	}
+diff --git a/kernel/signal.c b/kernel/signal.c
+index cf97b9c4d665a..684fb02b20f86 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1339,9 +1339,10 @@ force_sig_info_to_task(struct kernel_siginfo *info, struct task_struct *t,
+ 	}
+ 	/*
+ 	 * Don't clear SIGNAL_UNKILLABLE for traced tasks, users won't expect
+-	 * debugging to leave init killable.
++	 * debugging to leave init killable. But HANDLER_EXIT is always fatal.
+ 	 */
+-	if (action->sa.sa_handler == SIG_DFL && !t->ptrace)
++	if (action->sa.sa_handler == SIG_DFL &&
++	    (!t->ptrace || (handler == HANDLER_EXIT)))
+ 		t->signal->flags &= ~SIGNAL_UNKILLABLE;
+ 	ret = send_signal(sig, info, t, PIDTYPE_PID);
+ 	spin_unlock_irqrestore(&t->sighand->siglock, flags);
+diff --git a/lib/test_kasan.c b/lib/test_kasan.c
+index 0643573f86862..2ef2948261bf8 100644
+--- a/lib/test_kasan.c
++++ b/lib/test_kasan.c
+@@ -492,6 +492,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
+ 	ptr = kmalloc(size, GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ 
++	OPTIMIZER_HIDE_VAR(ptr);
+ 	OPTIMIZER_HIDE_VAR(size);
+ 	KUNIT_EXPECT_KASAN_FAIL(test,
+ 				memset(ptr, 0, size + KASAN_GRANULE_SIZE));
+@@ -515,6 +516,7 @@ static void kmalloc_memmove_negative_size(struct kunit *test)
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ 
+ 	memset((char *)ptr, 0, 64);
++	OPTIMIZER_HIDE_VAR(ptr);
+ 	OPTIMIZER_HIDE_VAR(invalid_size);
+ 	KUNIT_EXPECT_KASAN_FAIL(test,
+ 		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+@@ -531,6 +533,7 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ 
+ 	memset((char *)ptr, 0, 64);
++	OPTIMIZER_HIDE_VAR(ptr);
+ 	KUNIT_EXPECT_KASAN_FAIL(test,
+ 		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+ 	kfree(ptr);
+@@ -869,6 +872,7 @@ static void kasan_memchr(struct kunit *test)
+ 	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ 
++	OPTIMIZER_HIDE_VAR(ptr);
+ 	OPTIMIZER_HIDE_VAR(size);
+ 	KUNIT_EXPECT_KASAN_FAIL(test,
+ 		kasan_ptr_result = memchr(ptr, '1', size + 1));
+@@ -895,6 +899,7 @@ static void kasan_memcmp(struct kunit *test)
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ 	memset(arr, 0, sizeof(arr));
+ 
++	OPTIMIZER_HIDE_VAR(ptr);
+ 	OPTIMIZER_HIDE_VAR(size);
+ 	KUNIT_EXPECT_KASAN_FAIL(test,
+ 		kasan_int_result = memcmp(ptr, arr, size+1));
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index fdc952f270c14..e57b14f966151 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -254,7 +254,7 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
+ }
+ 
+ #ifdef CONFIG_MEMCG_KMEM
+-extern spinlock_t css_set_lock;
++static DEFINE_SPINLOCK(objcg_lock);
+ 
+ bool mem_cgroup_kmem_disabled(void)
+ {
+@@ -298,9 +298,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
+ 	if (nr_pages)
+ 		obj_cgroup_uncharge_pages(objcg, nr_pages);
+ 
+-	spin_lock_irqsave(&css_set_lock, flags);
++	spin_lock_irqsave(&objcg_lock, flags);
+ 	list_del(&objcg->list);
+-	spin_unlock_irqrestore(&css_set_lock, flags);
++	spin_unlock_irqrestore(&objcg_lock, flags);
+ 
+ 	percpu_ref_exit(ref);
+ 	kfree_rcu(objcg, rcu);
+@@ -332,7 +332,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
+ 
+ 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
+ 
+-	spin_lock_irq(&css_set_lock);
++	spin_lock_irq(&objcg_lock);
+ 
+ 	/* 1) Ready to reparent active objcg. */
+ 	list_add(&objcg->list, &memcg->objcg_list);
+@@ -342,7 +342,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
+ 	/* 3) Move already reparented objcgs to the parent's list */
+ 	list_splice(&memcg->objcg_list, &parent->objcg_list);
+ 
+-	spin_unlock_irq(&css_set_lock);
++	spin_unlock_irq(&objcg_lock);
+ 
+ 	percpu_ref_kill(&objcg->refcnt);
+ }
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 700434db57352..8d24a2299c065 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1066,8 +1066,10 @@ void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
+ 	 * forward progress (e.g. journalling workqueues or kthreads).
+ 	 */
+ 	if (!current_is_kswapd() &&
+-	    current->flags & (PF_IO_WORKER|PF_KTHREAD))
++	    current->flags & (PF_IO_WORKER|PF_KTHREAD)) {
++		cond_resched();
+ 		return;
++	}
+ 
+ 	/*
+ 	 * These figures are pulled out of thin air.
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 02cbcb2ecf0db..d2a430b6a13bd 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -56,6 +56,7 @@
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
++#include <linux/spinlock.h>
+ #include <linux/hrtimer.h>
+ #include <linux/wait.h>
+ #include <linux/uio.h>
+@@ -145,6 +146,7 @@ struct isotp_sock {
+ 	struct tpcon rx, tx;
+ 	struct list_head notifier;
+ 	wait_queue_head_t wait;
++	spinlock_t rx_lock; /* protect single thread state machine */
+ };
+ 
+ static LIST_HEAD(isotp_notifier_list);
+@@ -615,11 +617,17 @@ static void isotp_rcv(struct sk_buff *skb, void *data)
+ 
+ 	n_pci_type = cf->data[ae] & 0xF0;
+ 
++	/* Make sure the state changes and data structures stay consistent at
++	 * CAN frame reception time. This locking is not needed in real world
++	 * use cases but the inconsistency can be triggered with syzkaller.
++	 */
++	spin_lock(&so->rx_lock);
++
+ 	if (so->opt.flags & CAN_ISOTP_HALF_DUPLEX) {
+ 		/* check rx/tx path half duplex expectations */
+ 		if ((so->tx.state != ISOTP_IDLE && n_pci_type != N_PCI_FC) ||
+ 		    (so->rx.state != ISOTP_IDLE && n_pci_type == N_PCI_FC))
+-			return;
++			goto out_unlock;
+ 	}
+ 
+ 	switch (n_pci_type) {
+@@ -668,6 +676,9 @@ static void isotp_rcv(struct sk_buff *skb, void *data)
+ 		isotp_rcv_cf(sk, cf, ae, skb);
+ 		break;
+ 	}
++
++out_unlock:
++	spin_unlock(&so->rx_lock);
+ }
+ 
+ static void isotp_fill_dataframe(struct canfd_frame *cf, struct isotp_sock *so,
+@@ -876,7 +887,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 
+ 	if (!size || size > MAX_MSG_LENGTH) {
+ 		err = -EINVAL;
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	/* take care of a potential SF_DL ESC offset for TX_DL > 8 */
+@@ -886,24 +897,24 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	if ((so->opt.flags & CAN_ISOTP_SF_BROADCAST) &&
+ 	    (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) {
+ 		err = -EINVAL;
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	err = memcpy_from_msg(so->tx.buf, msg, size);
+ 	if (err < 0)
+-		goto err_out;
++		goto err_out_drop;
+ 
+ 	dev = dev_get_by_index(sock_net(sk), so->ifindex);
+ 	if (!dev) {
+ 		err = -ENXIO;
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	skb = sock_alloc_send_skb(sk, so->ll.mtu + sizeof(struct can_skb_priv),
+ 				  msg->msg_flags & MSG_DONTWAIT, &err);
+ 	if (!skb) {
+ 		dev_put(dev);
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	can_skb_reserve(skb);
+@@ -965,7 +976,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	if (err) {
+ 		pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
+ 			       __func__, ERR_PTR(err));
+-		goto err_out;
++		goto err_out_drop;
+ 	}
+ 
+ 	if (wait_tx_done) {
+@@ -978,6 +989,9 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 
+ 	return size;
+ 
++err_out_drop:
++	/* drop this PDU and unlock a potential wait queue */
++	old_state = ISOTP_IDLE;
+ err_out:
+ 	so->tx.state = old_state;
+ 	if (so->tx.state == ISOTP_IDLE)
+@@ -1444,6 +1458,7 @@ static int isotp_init(struct sock *sk)
+ 	so->txtimer.function = isotp_tx_timer_handler;
+ 
+ 	init_waitqueue_head(&so->wait);
++	spin_lock_init(&so->rx_lock);
+ 
+ 	spin_lock(&isotp_notifier_lock);
+ 	list_add_tail(&so->notifier, &isotp_notifier_list);
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 826957b6442b0..7578b1350f182 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -1609,7 +1609,6 @@ EXPORT_SYMBOL_GPL(dsa_unregister_switch);
+ void dsa_switch_shutdown(struct dsa_switch *ds)
+ {
+ 	struct net_device *master, *slave_dev;
+-	LIST_HEAD(unregister_list);
+ 	struct dsa_port *dp;
+ 
+ 	mutex_lock(&dsa2_mutex);
+@@ -1620,25 +1619,13 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
+ 		slave_dev = dp->slave;
+ 
+ 		netdev_upper_dev_unlink(master, slave_dev);
+-		/* Just unlinking ourselves as uppers of the master is not
+-		 * sufficient. When the master net device unregisters, that will
+-		 * also call dev_close, which we will catch as NETDEV_GOING_DOWN
+-		 * and trigger a dev_close on our own devices (dsa_slave_close).
+-		 * In turn, that will call dev_mc_unsync on the master's net
+-		 * device. If the master is also a DSA switch port, this will
+-		 * trigger dsa_slave_set_rx_mode which will call dev_mc_sync on
+-		 * its own master. Lockdep will complain about the fact that
+-		 * all cascaded masters have the same dsa_master_addr_list_lock_key,
+-		 * which it normally would not do if the cascaded masters would
+-		 * be in a proper upper/lower relationship, which we've just
+-		 * destroyed.
+-		 * To suppress the lockdep warnings, let's actually unregister
+-		 * the DSA slave interfaces too, to avoid the nonsensical
+-		 * multicast address list synchronization on shutdown.
+-		 */
+-		unregister_netdevice_queue(slave_dev, &unregister_list);
+ 	}
+-	unregister_netdevice_many(&unregister_list);
++
++	/* Disconnect from further netdevice notifiers on the master,
++	 * since netdev_uses_dsa() will now return false.
++	 */
++	dsa_switch_for_each_cpu_port(dp, ds)
++		dp->master->dsa_ptr = NULL;
+ 
+ 	rtnl_unlock();
+ 	mutex_unlock(&dsa2_mutex);
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 2dda856ca2602..aea29d97f8dfa 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -261,7 +261,9 @@ static int __net_init ipmr_rules_init(struct net *net)
+ 	return 0;
+ 
+ err2:
++	rtnl_lock();
+ 	ipmr_free_table(mrt);
++	rtnl_unlock();
+ err1:
+ 	fib_rules_unregister(ops);
+ 	return err;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 94cbba9fb12b1..28abb0bb1c515 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -936,6 +936,22 @@ void tcp_remove_empty_skb(struct sock *sk)
+ 	}
+ }
+ 
++/* skb changing from pure zc to mixed, must charge zc */
++static int tcp_downgrade_zcopy_pure(struct sock *sk, struct sk_buff *skb)
++{
++	if (unlikely(skb_zcopy_pure(skb))) {
++		u32 extra = skb->truesize -
++			    SKB_TRUESIZE(skb_end_offset(skb));
++
++		if (!sk_wmem_schedule(sk, extra))
++			return -ENOMEM;
++
++		sk_mem_charge(sk, extra);
++		skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY;
++	}
++	return 0;
++}
++
+ static struct sk_buff *tcp_build_frag(struct sock *sk, int size_goal, int flags,
+ 				      struct page *page, int offset, size_t *size)
+ {
+@@ -971,7 +987,7 @@ new_segment:
+ 		tcp_mark_push(tp, skb);
+ 		goto new_segment;
+ 	}
+-	if (!sk_wmem_schedule(sk, copy))
++	if (tcp_downgrade_zcopy_pure(sk, skb) || !sk_wmem_schedule(sk, copy))
+ 		return NULL;
+ 
+ 	if (can_coalesce) {
+@@ -1319,19 +1335,8 @@ new_segment:
+ 
+ 			copy = min_t(int, copy, pfrag->size - pfrag->offset);
+ 
+-			/* skb changing from pure zc to mixed, must charge zc */
+-			if (unlikely(skb_zcopy_pure(skb))) {
+-				u32 extra = skb->truesize -
+-					    SKB_TRUESIZE(skb_end_offset(skb));
+-
+-				if (!sk_wmem_schedule(sk, extra))
+-					goto wait_for_space;
+-
+-				sk_mem_charge(sk, extra);
+-				skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY;
+-			}
+-
+-			if (!sk_wmem_schedule(sk, copy))
++			if (tcp_downgrade_zcopy_pure(sk, skb) ||
++			    !sk_wmem_schedule(sk, copy))
+ 				goto wait_for_space;
+ 
+ 			err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 36ed9efb88254..6a4065d81aa91 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -248,7 +248,9 @@ static int __net_init ip6mr_rules_init(struct net *net)
+ 	return 0;
+ 
+ err2:
++	rtnl_lock();
+ 	ip6mr_free_table(mrt);
++	rtnl_unlock();
+ err1:
+ 	fib_rules_unregister(ops);
+ 	return err;
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 5d305fafd0e99..5eada95dd76b3 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -878,6 +878,7 @@ out:
+ static int mptcp_pm_nl_create_listen_socket(struct sock *sk,
+ 					    struct mptcp_pm_addr_entry *entry)
+ {
++	int addrlen = sizeof(struct sockaddr_in);
+ 	struct sockaddr_storage addr;
+ 	struct mptcp_sock *msk;
+ 	struct socket *ssock;
+@@ -902,8 +903,11 @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk,
+ 	}
+ 
+ 	mptcp_info2sockaddr(&entry->addr, &addr, entry->addr.family);
+-	err = kernel_bind(ssock, (struct sockaddr *)&addr,
+-			  sizeof(struct sockaddr_in));
++#if IS_ENABLED(CONFIG_MPTCP_IPV6)
++	if (entry->addr.family == AF_INET6)
++		addrlen = sizeof(struct sockaddr_in6);
++#endif
++	err = kernel_bind(ssock, (struct sockaddr *)&addr, addrlen);
+ 	if (err) {
+ 		pr_warn("kernel_bind error, err=%d", err);
+ 		goto out;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index ec4164c32d270..2d7f63ad33604 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -2311,7 +2311,8 @@ ctnetlink_create_conntrack(struct net *net,
+ 			if (helper->from_nlattr)
+ 				helper->from_nlattr(helpinfo, ct);
+ 
+-			/* not in hash table yet so not strictly necessary */
++			/* disable helper auto-assignment for this entry */
++			ct->status |= IPS_HELPER;
+ 			RCU_INIT_POINTER(help->helper, helper);
+ 		}
+ 	} else {
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index dbe1f2e7dd9ed..9e927ab4df151 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -167,7 +167,7 @@ nft_tcp_header_pointer(const struct nft_pktinfo *pkt,
+ {
+ 	struct tcphdr *tcph;
+ 
+-	if (pkt->tprot != IPPROTO_TCP)
++	if (pkt->tprot != IPPROTO_TCP || pkt->fragoff)
+ 		return NULL;
+ 
+ 	tcph = skb_header_pointer(pkt->skb, nft_thoff(pkt), sizeof(*tcph), buffer);
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 58e96a0fe0b4c..a4fbce560bddb 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -83,7 +83,7 @@ static int __nft_payload_inner_offset(struct nft_pktinfo *pkt)
+ {
+ 	unsigned int thoff = nft_thoff(pkt);
+ 
+-	if (!(pkt->flags & NFT_PKTINFO_L4PROTO))
++	if (!(pkt->flags & NFT_PKTINFO_L4PROTO) || pkt->fragoff)
+ 		return -1;
+ 
+ 	switch (pkt->tprot) {
+@@ -147,7 +147,7 @@ void nft_payload_eval(const struct nft_expr *expr,
+ 		offset = skb_network_offset(skb);
+ 		break;
+ 	case NFT_PAYLOAD_TRANSPORT_HEADER:
+-		if (!(pkt->flags & NFT_PKTINFO_L4PROTO))
++		if (!(pkt->flags & NFT_PKTINFO_L4PROTO) || pkt->fragoff)
+ 			goto err;
+ 		offset = nft_thoff(pkt);
+ 		break;
+@@ -657,7 +657,7 @@ static void nft_payload_set_eval(const struct nft_expr *expr,
+ 		offset = skb_network_offset(skb);
+ 		break;
+ 	case NFT_PAYLOAD_TRANSPORT_HEADER:
+-		if (!(pkt->flags & NFT_PKTINFO_L4PROTO))
++		if (!(pkt->flags & NFT_PKTINFO_L4PROTO) || pkt->fragoff)
+ 			goto err;
+ 		offset = nft_thoff(pkt);
+ 		break;
+@@ -696,7 +696,8 @@ static void nft_payload_set_eval(const struct nft_expr *expr,
+ 	if (priv->csum_type == NFT_PAYLOAD_CSUM_SCTP &&
+ 	    pkt->tprot == IPPROTO_SCTP &&
+ 	    skb->ip_summed != CHECKSUM_PARTIAL) {
+-		if (nft_payload_csum_sctp(skb, nft_thoff(pkt)))
++		if (pkt->fragoff == 0 &&
++		    nft_payload_csum_sctp(skb, nft_thoff(pkt)))
+ 			goto err;
+ 	}
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 910a36ed56343..e4a7ce5c79f4f 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1204,7 +1204,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
+ 
+ 	err = -ENOENT;
+ 	if (!ops) {
+-		NL_SET_ERR_MSG(extack, "Specified qdisc not found");
++		NL_SET_ERR_MSG(extack, "Specified qdisc kind is unknown");
+ 		goto err_out;
+ 	}
+ 
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index a312ea2bc4405..c83fe618767c4 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2900,7 +2900,7 @@ int rpc_clnt_add_xprt(struct rpc_clnt *clnt,
+ 	unsigned long connect_timeout;
+ 	unsigned long reconnect_timeout;
+ 	unsigned char resvport, reuseport;
+-	int ret = 0;
++	int ret = 0, ident;
+ 
+ 	rcu_read_lock();
+ 	xps = xprt_switch_get(rcu_dereference(clnt->cl_xpi.xpi_xpswitch));
+@@ -2914,8 +2914,11 @@ int rpc_clnt_add_xprt(struct rpc_clnt *clnt,
+ 	reuseport = xprt->reuseport;
+ 	connect_timeout = xprt->connect_timeout;
+ 	reconnect_timeout = xprt->max_reconnect_timeout;
++	ident = xprt->xprt_class->ident;
+ 	rcu_read_unlock();
+ 
++	if (!xprtargs->ident)
++		xprtargs->ident = ident;
+ 	xprt = xprt_create_transport(xprtargs);
+ 	if (IS_ERR(xprt)) {
+ 		ret = PTR_ERR(xprt);
+diff --git a/net/sunrpc/sysfs.c b/net/sunrpc/sysfs.c
+index 2766dd21935b8..0c28280dd3bcb 100644
+--- a/net/sunrpc/sysfs.c
++++ b/net/sunrpc/sysfs.c
+@@ -115,11 +115,14 @@ static ssize_t rpc_sysfs_xprt_srcaddr_show(struct kobject *kobj,
+ 	}
+ 
+ 	sock = container_of(xprt, struct sock_xprt, xprt);
+-	if (kernel_getsockname(sock->sock, (struct sockaddr *)&saddr) < 0)
++	mutex_lock(&sock->recv_mutex);
++	if (sock->sock == NULL ||
++	    kernel_getsockname(sock->sock, (struct sockaddr *)&saddr) < 0)
+ 		goto out;
+ 
+ 	ret = sprintf(buf, "%pISc\n", &saddr);
+ out:
++	mutex_unlock(&sock->recv_mutex);
+ 	xprt_put(xprt);
+ 	return ret + 1;
+ }
+@@ -295,8 +298,10 @@ static ssize_t rpc_sysfs_xprt_state_change(struct kobject *kobj,
+ 		online = 1;
+ 	else if (!strncmp(buf, "remove", 6))
+ 		remove = 1;
+-	else
+-		return -EINVAL;
++	else {
++		count = -EINVAL;
++		goto out_put;
++	}
+ 
+ 	if (wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_KILLABLE)) {
+ 		count = -EINTR;
+@@ -307,25 +312,28 @@ static ssize_t rpc_sysfs_xprt_state_change(struct kobject *kobj,
+ 		goto release_tasks;
+ 	}
+ 	if (offline) {
+-		set_bit(XPRT_OFFLINE, &xprt->state);
+-		spin_lock(&xps->xps_lock);
+-		xps->xps_nactive--;
+-		spin_unlock(&xps->xps_lock);
++		if (!test_and_set_bit(XPRT_OFFLINE, &xprt->state)) {
++			spin_lock(&xps->xps_lock);
++			xps->xps_nactive--;
++			spin_unlock(&xps->xps_lock);
++		}
+ 	} else if (online) {
+-		clear_bit(XPRT_OFFLINE, &xprt->state);
+-		spin_lock(&xps->xps_lock);
+-		xps->xps_nactive++;
+-		spin_unlock(&xps->xps_lock);
++		if (test_and_clear_bit(XPRT_OFFLINE, &xprt->state)) {
++			spin_lock(&xps->xps_lock);
++			xps->xps_nactive++;
++			spin_unlock(&xps->xps_lock);
++		}
+ 	} else if (remove) {
+ 		if (test_bit(XPRT_OFFLINE, &xprt->state)) {
+-			set_bit(XPRT_REMOVE, &xprt->state);
+-			xprt_force_disconnect(xprt);
+-			if (test_bit(XPRT_CONNECTED, &xprt->state)) {
+-				if (!xprt->sending.qlen &&
+-				    !xprt->pending.qlen &&
+-				    !xprt->backlog.qlen &&
+-				    !atomic_long_read(&xprt->queuelen))
+-					rpc_xprt_switch_remove_xprt(xps, xprt);
++			if (!test_and_set_bit(XPRT_REMOVE, &xprt->state)) {
++				xprt_force_disconnect(xprt);
++				if (test_bit(XPRT_CONNECTED, &xprt->state)) {
++					if (!xprt->sending.qlen &&
++					    !xprt->pending.qlen &&
++					    !xprt->backlog.qlen &&
++					    !atomic_long_read(&xprt->queuelen))
++						rpc_xprt_switch_remove_xprt(xps, xprt);
++				}
+ 			}
+ 		} else {
+ 			count = -EINVAL;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index d8ee06a9650a1..03770e56df361 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1641,7 +1641,12 @@ static int xs_get_srcport(struct sock_xprt *transport)
+ unsigned short get_srcport(struct rpc_xprt *xprt)
+ {
+ 	struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
+-	return xs_sock_getport(sock->sock);
++	unsigned short ret = 0;
++	mutex_lock(&sock->recv_mutex);
++	if (sock->sock)
++		ret = xs_sock_getport(sock->sock);
++	mutex_unlock(&sock->recv_mutex);
++	return ret;
+ }
+ EXPORT_SYMBOL(get_srcport);
+ 
+diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
+index bda902caa8147..8267b751a526a 100644
+--- a/net/tipc/name_distr.c
++++ b/net/tipc/name_distr.c
+@@ -313,7 +313,7 @@ static bool tipc_update_nametbl(struct net *net, struct distr_item *i,
+ 		pr_warn_ratelimited("Failed to remove binding %u,%u from %u\n",
+ 				    ua.sr.type, ua.sr.lower, node);
+ 	} else {
+-		pr_warn("Unrecognized name table message received\n");
++		pr_warn_ratelimited("Unknown name table message received\n");
+ 	}
+ 	return false;
+ }
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index d538255038747..8be892887d716 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -51,6 +51,7 @@ KBUILD_CFLAGS += -Wno-sign-compare
+ KBUILD_CFLAGS += -Wno-format-zero-length
+ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
++KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
+ endif
+ 
+ endif
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index 42bc56ee238c8..00284c03da4d3 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -977,10 +977,10 @@ static int conf_write_autoconf_cmd(const char *autoconf_name)
+ 
+ 	fprintf(out, "\n$(deps_config): ;\n");
+ 
+-	if (ferror(out)) /* error check for all fprintf() calls */
+-		return -1;
+-
++	ret = ferror(out); /* error check for all fprintf() calls */
+ 	fclose(out);
++	if (ret)
++		return -1;
+ 
+ 	if (rename(tmp, name)) {
+ 		perror("rename");
+@@ -1091,10 +1091,10 @@ static int __conf_write_autoconf(const char *filename,
+ 			print_symbol(file, sym);
+ 
+ 	/* check possible errors in conf_write_heading() and print_symbol() */
+-	if (ferror(file))
+-		return -1;
+-
++	ret = ferror(file);
+ 	fclose(file);
++	if (ret)
++		return -1;
+ 
+ 	if (rename(tmp, filename)) {
+ 		perror("rename");
+diff --git a/security/integrity/digsig_asymmetric.c b/security/integrity/digsig_asymmetric.c
+index 23240d793b074..895f4b9ce8c6b 100644
+--- a/security/integrity/digsig_asymmetric.c
++++ b/security/integrity/digsig_asymmetric.c
+@@ -109,22 +109,25 @@ int asymmetric_verify(struct key *keyring, const char *sig,
+ 
+ 	pk = asymmetric_key_public_key(key);
+ 	pks.pkey_algo = pk->pkey_algo;
+-	if (!strcmp(pk->pkey_algo, "rsa"))
++	if (!strcmp(pk->pkey_algo, "rsa")) {
+ 		pks.encoding = "pkcs1";
+-	else if (!strncmp(pk->pkey_algo, "ecdsa-", 6))
++	} else if (!strncmp(pk->pkey_algo, "ecdsa-", 6)) {
+ 		/* edcsa-nist-p192 etc. */
+ 		pks.encoding = "x962";
+-	else if (!strcmp(pk->pkey_algo, "ecrdsa") ||
+-		   !strcmp(pk->pkey_algo, "sm2"))
++	} else if (!strcmp(pk->pkey_algo, "ecrdsa") ||
++		   !strcmp(pk->pkey_algo, "sm2")) {
+ 		pks.encoding = "raw";
+-	else
+-		return -ENOPKG;
++	} else {
++		ret = -ENOPKG;
++		goto out;
++	}
+ 
+ 	pks.digest = (u8 *)data;
+ 	pks.digest_size = datalen;
+ 	pks.s = hdr->sig;
+ 	pks.s_size = siglen;
+ 	ret = verify_signature(key, &pks);
++out:
+ 	key_put(key);
+ 	pr_debug("%s() = %d\n", __func__, ret);
+ 	return ret;
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index 3d8e9d5db5aa5..3ad8f7734208b 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -496,12 +496,12 @@ int __init ima_fs_init(void)
+ 
+ 	return 0;
+ out:
++	securityfs_remove(ima_policy);
+ 	securityfs_remove(violations);
+ 	securityfs_remove(runtime_measurements_count);
+ 	securityfs_remove(ascii_runtime_measurements);
+ 	securityfs_remove(binary_runtime_measurements);
+ 	securityfs_remove(ima_symlink);
+ 	securityfs_remove(ima_dir);
+-	securityfs_remove(ima_policy);
+ 	return -1;
+ }
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 320ca80aacab5..2a1f6418b10a6 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -1967,6 +1967,14 @@ int ima_policy_show(struct seq_file *m, void *v)
+ 
+ 	rcu_read_lock();
+ 
++	/* Do not print rules with inactive LSM labels */
++	for (i = 0; i < MAX_LSM_RULES; i++) {
++		if (entry->lsm[i].args_p && !entry->lsm[i].rule) {
++			rcu_read_unlock();
++			return 0;
++		}
++	}
++
+ 	if (entry->action & MEASURE)
+ 		seq_puts(m, pt(Opt_measure));
+ 	if (entry->action & DONT_MEASURE)
+diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
+index 694560396be05..db1ad6d7a57fb 100644
+--- a/security/integrity/ima/ima_template.c
++++ b/security/integrity/ima/ima_template.c
+@@ -29,6 +29,7 @@ static struct ima_template_desc builtin_templates[] = {
+ 
+ static LIST_HEAD(defined_templates);
+ static DEFINE_SPINLOCK(template_list);
++static int template_setup_done;
+ 
+ static const struct ima_template_field supported_fields[] = {
+ 	{.field_id = "d", .field_init = ima_eventdigest_init,
+@@ -101,10 +102,11 @@ static int __init ima_template_setup(char *str)
+ 	struct ima_template_desc *template_desc;
+ 	int template_len = strlen(str);
+ 
+-	if (ima_template)
++	if (template_setup_done)
+ 		return 1;
+ 
+-	ima_init_template_list();
++	if (!ima_template)
++		ima_init_template_list();
+ 
+ 	/*
+ 	 * Verify that a template with the supplied name exists.
+@@ -128,6 +130,7 @@ static int __init ima_template_setup(char *str)
+ 	}
+ 
+ 	ima_template = template_desc;
++	template_setup_done = 1;
+ 	return 1;
+ }
+ __setup("ima_template=", ima_template_setup);
+@@ -136,7 +139,7 @@ static int __init ima_template_fmt_setup(char *str)
+ {
+ 	int num_templates = ARRAY_SIZE(builtin_templates);
+ 
+-	if (ima_template)
++	if (template_setup_done)
+ 		return 1;
+ 
+ 	if (template_desc_init_fields(str, NULL, NULL) < 0) {
+@@ -147,6 +150,7 @@ static int __init ima_template_fmt_setup(char *str)
+ 
+ 	builtin_templates[num_templates - 1].fmt = str;
+ 	ima_template = builtin_templates + num_templates - 1;
++	template_setup_done = 1;
+ 
+ 	return 1;
+ }
+diff --git a/security/integrity/integrity_audit.c b/security/integrity/integrity_audit.c
+index 29220056207f4..0ec5e4c22cb2a 100644
+--- a/security/integrity/integrity_audit.c
++++ b/security/integrity/integrity_audit.c
+@@ -45,6 +45,8 @@ void integrity_audit_message(int audit_msgno, struct inode *inode,
+ 		return;
+ 
+ 	ab = audit_log_start(audit_context(), GFP_KERNEL, audit_msgno);
++	if (!ab)
++		return;
+ 	audit_log_format(ab, "pid=%d uid=%u auid=%u ses=%u",
+ 			 task_pid_nr(current),
+ 			 from_kuid(&init_user_ns, current_uid()),
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index 2ad013b8bde96..59b1dd4a549ee 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -463,8 +463,8 @@ bool kvm_irq_has_notifier(struct kvm *kvm, unsigned irqchip, unsigned pin)
+ 	idx = srcu_read_lock(&kvm->irq_srcu);
+ 	gsi = kvm_irq_map_chip_pin(kvm, irqchip, pin);
+ 	if (gsi != -1)
+-		hlist_for_each_entry_rcu(kian, &kvm->irq_ack_notifier_list,
+-					 link)
++		hlist_for_each_entry_srcu(kian, &kvm->irq_ack_notifier_list,
++					  link, srcu_read_lock_held(&kvm->irq_srcu))
+ 			if (kian->gsi == gsi) {
+ 				srcu_read_unlock(&kvm->irq_srcu, idx);
+ 				return true;
+@@ -480,8 +480,8 @@ void kvm_notify_acked_gsi(struct kvm *kvm, int gsi)
+ {
+ 	struct kvm_irq_ack_notifier *kian;
+ 
+-	hlist_for_each_entry_rcu(kian, &kvm->irq_ack_notifier_list,
+-				 link)
++	hlist_for_each_entry_srcu(kian, &kvm->irq_ack_notifier_list,
++				  link, srcu_read_lock_held(&kvm->irq_srcu))
+ 		if (kian->gsi == gsi)
+ 			kian->irq_acked(kian);
+ }



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-23 12:35 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-02-23 12:35 UTC (permalink / raw)
  To: gentoo-commits

commit:     c33c02322aa211b7f4d87fc86b7d8ed27a0f5668
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 23 12:35:25 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Feb 23 12:35:25 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c33c0232

Linux patch 5.16.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1010_linux-5.16.11.patch | 10908 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10912 insertions(+)

diff --git a/0000_README b/0000_README
index fd9081b3..544fcf03 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-5.16.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.10
 
+Patch:  1010_linux-5.16.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-5.16.11.patch b/1010_linux-5.16.11.patch
new file mode 100644
index 00000000..4fde165d
--- /dev/null
+++ b/1010_linux-5.16.11.patch
@@ -0,0 +1,10908 @@
+diff --git a/.mailmap b/.mailmap
+index b344067e0acb6..3979fb166e0fd 100644
+--- a/.mailmap
++++ b/.mailmap
+@@ -74,6 +74,9 @@ Chris Chiu <chris.chiu@canonical.com> <chiu@endlessos.org>
+ Christian Borntraeger <borntraeger@linux.ibm.com> <borntraeger@de.ibm.com>
+ Christian Borntraeger <borntraeger@linux.ibm.com> <cborntra@de.ibm.com>
+ Christian Borntraeger <borntraeger@linux.ibm.com> <borntrae@de.ibm.com>
++Christian Brauner <brauner@kernel.org> <christian@brauner.io>
++Christian Brauner <brauner@kernel.org> <christian.brauner@canonical.com>
++Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com>
+ Christophe Ricard <christophe.ricard@gmail.com>
+ Christoph Hellwig <hch@lst.de>
+ Colin Ian King <colin.king@intel.com> <colin.king@canonical.com>
+diff --git a/Makefile b/Makefile
+index 36bbff16530ba..00ba75768af73 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/mach-omap2/display.c b/arch/arm/mach-omap2/display.c
+index 6daaa645ae5d9..21413a9b7b6c6 100644
+--- a/arch/arm/mach-omap2/display.c
++++ b/arch/arm/mach-omap2/display.c
+@@ -263,9 +263,9 @@ static int __init omapdss_init_of(void)
+ 	}
+ 
+ 	r = of_platform_populate(node, NULL, NULL, &pdev->dev);
++	put_device(&pdev->dev);
+ 	if (r) {
+ 		pr_err("Unable to populate DSS submodule devices\n");
+-		put_device(&pdev->dev);
+ 		return r;
+ 	}
+ 
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index ccb0e3732c0dc..31d1a21f60416 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -752,8 +752,10 @@ static int __init _init_clkctrl_providers(void)
+ 
+ 	for_each_matching_node(np, ti_clkctrl_match_table) {
+ 		ret = _setup_clkctrl_provider(np);
+-		if (ret)
++		if (ret) {
++			of_node_put(np);
+ 			break;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 428449d98c0ae..a3a1ea0f21340 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -107,6 +107,12 @@
+ 			no-map;
+ 		};
+ 
++		/* 32 MiB reserved for ARM Trusted Firmware (BL32) */
++		secmon_reserved_bl32: secmon@5300000 {
++			reg = <0x0 0x05300000 0x0 0x2000000>;
++			no-map;
++		};
++
+ 		linux,cma {
+ 			compatible = "shared-dma-pool";
+ 			reusable;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+index d8838dde0f0f4..4fb31c2ba31c4 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+@@ -157,14 +157,6 @@
+ 		regulator-always-on;
+ 	};
+ 
+-	reserved-memory {
+-		/* TEE Reserved Memory */
+-		bl32_reserved: bl32@5000000 {
+-			reg = <0x0 0x05300000 0x0 0x2000000>;
+-			no-map;
+-		};
+-	};
+-
+ 	sdio_pwrseq: sdio-pwrseq {
+ 		compatible = "mmc-pwrseq-simple";
+ 		reset-gpios = <&gpio GPIOX_6 GPIO_ACTIVE_LOW>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 6b457b2c30a4b..aa14ea017a613 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -49,6 +49,12 @@
+ 			no-map;
+ 		};
+ 
++		/* 32 MiB reserved for ARM Trusted Firmware (BL32) */
++		secmon_reserved_bl32: secmon@5300000 {
++			reg = <0x0 0x05300000 0x0 0x2000000>;
++			no-map;
++		};
++
+ 		linux,cma {
+ 			compatible = "shared-dma-pool";
+ 			reusable;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+index 427475846fc70..a5d79f2f7c196 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+@@ -203,14 +203,6 @@
+ 		regulator-always-on;
+ 	};
+ 
+-	reserved-memory {
+-		/* TEE Reserved Memory */
+-		bl32_reserved: bl32@5000000 {
+-			reg = <0x0 0x05300000 0x0 0x2000000>;
+-			no-map;
+-		};
+-	};
+-
+ 	sdio_pwrseq: sdio-pwrseq {
+ 		compatible = "mmc-pwrseq-simple";
+ 		reset-gpios = <&gpio GPIOX_6 GPIO_ACTIVE_LOW>;
+diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
+index 3198acb2aad8c..7f3c87f7a0cec 100644
+--- a/arch/arm64/include/asm/el2_setup.h
++++ b/arch/arm64/include/asm/el2_setup.h
+@@ -106,7 +106,7 @@
+ 	msr_s	SYS_ICC_SRE_EL2, x0
+ 	isb					// Make sure SRE is now set
+ 	mrs_s	x0, SYS_ICC_SRE_EL2		// Read SRE back,
+-	tbz	x0, #0, 1f			// and check that it sticks
++	tbz	x0, #0, .Lskip_gicv3_\@		// and check that it sticks
+ 	msr_s	SYS_ICH_HCR_EL2, xzr		// Reset ICC_HCR_EL2 to defaults
+ .Lskip_gicv3_\@:
+ .endm
+diff --git a/arch/parisc/include/asm/bitops.h b/arch/parisc/include/asm/bitops.h
+index daa2afd974fbf..9efb18573fd81 100644
+--- a/arch/parisc/include/asm/bitops.h
++++ b/arch/parisc/include/asm/bitops.h
+@@ -12,6 +12,14 @@
+ #include <asm/barrier.h>
+ #include <linux/atomic.h>
+ 
++/* compiler build environment sanity checks: */
++#if !defined(CONFIG_64BIT) && defined(__LP64__)
++#error "Please use 'ARCH=parisc' to build the 32-bit kernel."
++#endif
++#if defined(CONFIG_64BIT) && !defined(__LP64__)
++#error "Please use 'ARCH=parisc64' to build the 64-bit kernel."
++#endif
++
+ /* See http://marc.theaimsgroup.com/?t=108826637900003 for discussion
+  * on use of volatile and __*_bit() (set/clear/change):
+  *	*_bit() want use of volatile.
+diff --git a/arch/parisc/lib/iomap.c b/arch/parisc/lib/iomap.c
+index 367f6397bda7a..8603850580857 100644
+--- a/arch/parisc/lib/iomap.c
++++ b/arch/parisc/lib/iomap.c
+@@ -346,6 +346,16 @@ u64 ioread64be(const void __iomem *addr)
+ 	return *((u64 *)addr);
+ }
+ 
++u64 ioread64_lo_hi(const void __iomem *addr)
++{
++	u32 low, high;
++
++	low = ioread32(addr);
++	high = ioread32(addr + sizeof(u32));
++
++	return low + ((u64)high << 32);
++}
++
+ u64 ioread64_hi_lo(const void __iomem *addr)
+ {
+ 	u32 low, high;
+@@ -419,6 +429,12 @@ void iowrite64be(u64 datum, void __iomem *addr)
+ 	}
+ }
+ 
++void iowrite64_lo_hi(u64 val, void __iomem *addr)
++{
++	iowrite32(val, addr);
++	iowrite32(val >> 32, addr + sizeof(u32));
++}
++
+ void iowrite64_hi_lo(u64 val, void __iomem *addr)
+ {
+ 	iowrite32(val >> 32, addr + sizeof(u32));
+@@ -530,6 +546,7 @@ EXPORT_SYMBOL(ioread32);
+ EXPORT_SYMBOL(ioread32be);
+ EXPORT_SYMBOL(ioread64);
+ EXPORT_SYMBOL(ioread64be);
++EXPORT_SYMBOL(ioread64_lo_hi);
+ EXPORT_SYMBOL(ioread64_hi_lo);
+ EXPORT_SYMBOL(iowrite8);
+ EXPORT_SYMBOL(iowrite16);
+@@ -538,6 +555,7 @@ EXPORT_SYMBOL(iowrite32);
+ EXPORT_SYMBOL(iowrite32be);
+ EXPORT_SYMBOL(iowrite64);
+ EXPORT_SYMBOL(iowrite64be);
++EXPORT_SYMBOL(iowrite64_lo_hi);
+ EXPORT_SYMBOL(iowrite64_hi_lo);
+ EXPORT_SYMBOL(ioread8_rep);
+ EXPORT_SYMBOL(ioread16_rep);
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 1ae31db9988f5..1dc2e88e7b04f 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -337,9 +337,9 @@ static void __init setup_bootmem(void)
+ 
+ static bool kernel_set_to_readonly;
+ 
+-static void __init map_pages(unsigned long start_vaddr,
+-			     unsigned long start_paddr, unsigned long size,
+-			     pgprot_t pgprot, int force)
++static void __ref map_pages(unsigned long start_vaddr,
++			    unsigned long start_paddr, unsigned long size,
++			    pgprot_t pgprot, int force)
+ {
+ 	pmd_t *pmd;
+ 	pte_t *pg_table;
+@@ -449,7 +449,7 @@ void __init set_kernel_text_rw(int enable_read_write)
+ 	flush_tlb_all();
+ }
+ 
+-void __ref free_initmem(void)
++void free_initmem(void)
+ {
+ 	unsigned long init_begin = (unsigned long)__init_begin;
+ 	unsigned long init_end = (unsigned long)__init_end;
+@@ -463,7 +463,6 @@ void __ref free_initmem(void)
+ 	/* The init text pages are marked R-X.  We have to
+ 	 * flush the icache and mark them RW-
+ 	 *
+-	 * This is tricky, because map_pages is in the init section.
+ 	 * Do a dummy remap of the data section first (the data
+ 	 * section is already PAGE_KERNEL) to pull in the TLB entries
+ 	 * for map_kernel */
+diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
+index 68e5c0a7e99d1..2e2a8211b17be 100644
+--- a/arch/powerpc/kernel/head_book3s_32.S
++++ b/arch/powerpc/kernel/head_book3s_32.S
+@@ -421,14 +421,14 @@ InstructionTLBMiss:
+  */
+ 	/* Get PTE (linux-style) and check access */
+ 	mfspr	r3,SPRN_IMISS
+-#ifdef CONFIG_MODULES
++#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
+ 	lis	r1, TASK_SIZE@h		/* check if kernel address */
+ 	cmplw	0,r1,r3
+ #endif
+ 	mfspr	r2, SPRN_SDR1
+ 	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
+ 	rlwinm	r2, r2, 28, 0xfffff000
+-#ifdef CONFIG_MODULES
++#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
+ 	bgt-	112f
+ 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
+ 	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index 86f49e3e7cf56..b042fcae39137 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -3264,12 +3264,14 @@ void emulate_update_regs(struct pt_regs *regs, struct instruction_op *op)
+ 		case BARRIER_EIEIO:
+ 			eieio();
+ 			break;
++#ifdef CONFIG_PPC64
+ 		case BARRIER_LWSYNC:
+ 			asm volatile("lwsync" : : : "memory");
+ 			break;
+ 		case BARRIER_PTESYNC:
+ 			asm volatile("ptesync" : : : "memory");
+ 			break;
++#endif
+ 		}
+ 		break;
+ 
+diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
+index 84b87538a15de..bab883c0b6fee 100644
+--- a/arch/x86/include/asm/bug.h
++++ b/arch/x86/include/asm/bug.h
+@@ -22,7 +22,7 @@
+ 
+ #ifdef CONFIG_DEBUG_BUGVERBOSE
+ 
+-#define _BUG_FLAGS(ins, flags)						\
++#define _BUG_FLAGS(ins, flags, extra)					\
+ do {									\
+ 	asm_inline volatile("1:\t" ins "\n"				\
+ 		     ".pushsection __bug_table,\"aw\"\n"		\
+@@ -31,7 +31,8 @@ do {									\
+ 		     "\t.word %c1"        "\t# bug_entry::line\n"	\
+ 		     "\t.word %c2"        "\t# bug_entry::flags\n"	\
+ 		     "\t.org 2b+%c3\n"					\
+-		     ".popsection"					\
++		     ".popsection\n"					\
++		     extra						\
+ 		     : : "i" (__FILE__), "i" (__LINE__),		\
+ 			 "i" (flags),					\
+ 			 "i" (sizeof(struct bug_entry)));		\
+@@ -39,14 +40,15 @@ do {									\
+ 
+ #else /* !CONFIG_DEBUG_BUGVERBOSE */
+ 
+-#define _BUG_FLAGS(ins, flags)						\
++#define _BUG_FLAGS(ins, flags, extra)					\
+ do {									\
+ 	asm_inline volatile("1:\t" ins "\n"				\
+ 		     ".pushsection __bug_table,\"aw\"\n"		\
+ 		     "2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n"	\
+ 		     "\t.word %c0"        "\t# bug_entry::flags\n"	\
+ 		     "\t.org 2b+%c1\n"					\
+-		     ".popsection"					\
++		     ".popsection\n"					\
++		     extra						\
+ 		     : : "i" (flags),					\
+ 			 "i" (sizeof(struct bug_entry)));		\
+ } while (0)
+@@ -55,7 +57,7 @@ do {									\
+ 
+ #else
+ 
+-#define _BUG_FLAGS(ins, flags)  asm volatile(ins)
++#define _BUG_FLAGS(ins, flags, extra)  asm volatile(ins)
+ 
+ #endif /* CONFIG_GENERIC_BUG */
+ 
+@@ -63,8 +65,8 @@ do {									\
+ #define BUG()							\
+ do {								\
+ 	instrumentation_begin();				\
+-	_BUG_FLAGS(ASM_UD2, 0);					\
+-	unreachable();						\
++	_BUG_FLAGS(ASM_UD2, 0, "");				\
++	__builtin_unreachable();				\
+ } while (0)
+ 
+ /*
+@@ -75,9 +77,9 @@ do {								\
+  */
+ #define __WARN_FLAGS(flags)					\
+ do {								\
++	__auto_type f = BUGFLAG_WARNING|(flags);		\
+ 	instrumentation_begin();				\
+-	_BUG_FLAGS(ASM_UD2, BUGFLAG_WARNING|(flags));		\
+-	annotate_reachable();					\
++	_BUG_FLAGS(ASM_UD2, f, ASM_REACHABLE);			\
+ 	instrumentation_end();					\
+ } while (0)
+ 
+diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
+index 437d7c930c0bd..75ffaef8c2991 100644
+--- a/arch/x86/kernel/fpu/regset.c
++++ b/arch/x86/kernel/fpu/regset.c
+@@ -91,11 +91,9 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
+ 		const void *kbuf, const void __user *ubuf)
+ {
+ 	struct fpu *fpu = &target->thread.fpu;
+-	struct user32_fxsr_struct newstate;
++	struct fxregs_state newstate;
+ 	int ret;
+ 
+-	BUILD_BUG_ON(sizeof(newstate) != sizeof(struct fxregs_state));
+-
+ 	if (!cpu_feature_enabled(X86_FEATURE_FXSR))
+ 		return -ENODEV;
+ 
+@@ -116,9 +114,10 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
+ 	/* Copy the state  */
+ 	memcpy(&fpu->fpstate->regs.fxsave, &newstate, sizeof(newstate));
+ 
+-	/* Clear xmm8..15 */
++	/* Clear xmm8..15 for 32-bit callers */
+ 	BUILD_BUG_ON(sizeof(fpu->__fpstate.regs.fxsave.xmm_space) != 16 * 16);
+-	memset(&fpu->fpstate->regs.fxsave.xmm_space[8], 0, 8 * 16);
++	if (in_ia32_syscall())
++		memset(&fpu->fpstate->regs.fxsave.xmm_space[8*4], 0, 8 * 16);
+ 
+ 	/* Mark FP and SSE as in use when XSAVE is enabled */
+ 	if (use_xsave())
+diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
+index 6d2244c94799c..8d2f2f995539d 100644
+--- a/arch/x86/kernel/ptrace.c
++++ b/arch/x86/kernel/ptrace.c
+@@ -1224,7 +1224,7 @@ static struct user_regset x86_64_regsets[] __ro_after_init = {
+ 	},
+ 	[REGSET_FP] = {
+ 		.core_note_type = NT_PRFPREG,
+-		.n = sizeof(struct user_i387_struct) / sizeof(long),
++		.n = sizeof(struct fxregs_state) / sizeof(long),
+ 		.size = sizeof(long), .align = sizeof(long),
+ 		.active = regset_xregset_fpregs_active, .regset_get = xfpregs_get, .set = xfpregs_set
+ 	},
+@@ -1271,7 +1271,7 @@ static struct user_regset x86_32_regsets[] __ro_after_init = {
+ 	},
+ 	[REGSET_XFP] = {
+ 		.core_note_type = NT_PRXFPREG,
+-		.n = sizeof(struct user32_fxsr_struct) / sizeof(u32),
++		.n = sizeof(struct fxregs_state) / sizeof(u32),
+ 		.size = sizeof(u32), .align = sizeof(u32),
+ 		.active = regset_xregset_fpregs_active, .regset_get = xfpregs_get, .set = xfpregs_set
+ 	},
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index 09873f6488f7c..de955ca58d17c 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -95,7 +95,7 @@ static void kvm_perf_overflow_intr(struct perf_event *perf_event,
+ }
+ 
+ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
+-				  unsigned config, bool exclude_user,
++				  u64 config, bool exclude_user,
+ 				  bool exclude_kernel, bool intr,
+ 				  bool in_tx, bool in_tx_cp)
+ {
+@@ -173,8 +173,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
+ 
+ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ {
+-	unsigned config, type = PERF_TYPE_RAW;
+-	u8 event_select, unit_mask;
++	u64 config;
++	u32 type = PERF_TYPE_RAW;
+ 	struct kvm *kvm = pmc->vcpu->kvm;
+ 	struct kvm_pmu_event_filter *filter;
+ 	int i;
+@@ -206,23 +206,18 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ 	if (!allow_event)
+ 		return;
+ 
+-	event_select = eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+-	unit_mask = (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+-
+ 	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
+ 			  ARCH_PERFMON_EVENTSEL_INV |
+ 			  ARCH_PERFMON_EVENTSEL_CMASK |
+ 			  HSW_IN_TX |
+ 			  HSW_IN_TX_CHECKPOINTED))) {
+-		config = kvm_x86_ops.pmu_ops->find_arch_event(pmc_to_pmu(pmc),
+-						      event_select,
+-						      unit_mask);
++		config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
+ 		if (config != PERF_COUNT_HW_MAX)
+ 			type = PERF_TYPE_HARDWARE;
+ 	}
+ 
+ 	if (type == PERF_TYPE_RAW)
+-		config = eventsel & X86_RAW_EVENT_MASK;
++		config = eventsel & AMD64_RAW_EVENT_MASK;
+ 
+ 	if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
+ 		return;
+diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
+index 59d6b76203d5b..dd7dbb1c5048d 100644
+--- a/arch/x86/kvm/pmu.h
++++ b/arch/x86/kvm/pmu.h
+@@ -24,8 +24,7 @@ struct kvm_event_hw_type_mapping {
+ };
+ 
+ struct kvm_pmu_ops {
+-	unsigned (*find_arch_event)(struct kvm_pmu *pmu, u8 event_select,
+-				    u8 unit_mask);
++	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
+ 	unsigned (*find_fixed_event)(int idx);
+ 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
+ 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index 8f9af7b7dbbe4..212af871ca746 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -342,8 +342,6 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
+ 		avic_kick_target_vcpus(vcpu->kvm, apic, icrl, icrh);
+ 		break;
+ 	case AVIC_IPI_FAILURE_INVALID_TARGET:
+-		WARN_ONCE(1, "Invalid IPI target: index=%u, vcpu=%d, icr=%#0x:%#0x\n",
+-			  index, vcpu->vcpu_id, icrh, icrl);
+ 		break;
+ 	case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE:
+ 		WARN_ONCE(1, "Invalid backing page\n");
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index a67f8bee3adc3..f70d90b4402e8 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -1389,18 +1389,6 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 	    !nested_vmcb_valid_sregs(vcpu, save))
+ 		goto out_free;
+ 
+-	/*
+-	 * While the nested guest CR3 is already checked and set by
+-	 * KVM_SET_SREGS, it was set when nested state was yet loaded,
+-	 * thus MMU might not be initialized correctly.
+-	 * Set it again to fix this.
+-	 */
+-
+-	ret = nested_svm_load_cr3(&svm->vcpu, vcpu->arch.cr3,
+-				  nested_npt_enabled(svm), false);
+-	if (WARN_ON_ONCE(ret))
+-		goto out_free;
+-
+ 
+ 	/*
+ 	 * All checks done, we can enter guest mode. Userspace provides
+@@ -1426,6 +1414,20 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
+ 
+ 	svm_switch_vmcb(svm, &svm->nested.vmcb02);
+ 	nested_vmcb02_prepare_control(svm);
++
++	/*
++	 * While the nested guest CR3 is already checked and set by
++	 * KVM_SET_SREGS, it was set when nested state was yet loaded,
++	 * thus MMU might not be initialized correctly.
++	 * Set it again to fix this.
++	 */
++
++	ret = nested_svm_load_cr3(&svm->vcpu, vcpu->arch.cr3,
++				  nested_npt_enabled(svm), false);
++	if (WARN_ON_ONCE(ret))
++		goto out_free;
++
++
+ 	kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+ 	ret = 0;
+ out_free:
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index b4095dfeeee62..7fadfe3c67e73 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -134,10 +134,10 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
+ 	return &pmu->gp_counters[msr_to_index(msr)];
+ }
+ 
+-static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
+-				    u8 event_select,
+-				    u8 unit_mask)
++static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
+ {
++	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
++	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
+@@ -319,7 +319,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
+ }
+ 
+ struct kvm_pmu_ops amd_pmu_ops = {
+-	.find_arch_event = amd_find_arch_event,
++	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
+ 	.find_fixed_event = amd_find_fixed_event,
+ 	.pmc_is_enabled = amd_pmc_is_enabled,
+ 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index d6a4acaa65742..57e2a55e46175 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1795,6 +1795,7 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 	u64 hcr0 = cr0;
++	bool old_paging = is_paging(vcpu);
+ 
+ #ifdef CONFIG_X86_64
+ 	if (vcpu->arch.efer & EFER_LME && !vcpu->arch.guest_state_protected) {
+@@ -1811,8 +1812,11 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+ #endif
+ 	vcpu->arch.cr0 = cr0;
+ 
+-	if (!npt_enabled)
++	if (!npt_enabled) {
+ 		hcr0 |= X86_CR0_PG | X86_CR0_WP;
++		if (old_paging != is_paging(vcpu))
++			svm_set_cr4(vcpu, kvm_read_cr4(vcpu));
++	}
+ 
+ 	/*
+ 	 * re-enable caching here because the QEMU bios
+@@ -1856,8 +1860,12 @@ void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 		svm_flush_tlb(vcpu);
+ 
+ 	vcpu->arch.cr4 = cr4;
+-	if (!npt_enabled)
++	if (!npt_enabled) {
+ 		cr4 |= X86_CR4_PAE;
++
++		if (!is_paging(vcpu))
++			cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE);
++	}
+ 	cr4 |= host_cr4_mce;
+ 	to_svm(vcpu)->vmcb->save.cr4 = cr4;
+ 	vmcb_mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);
+@@ -4441,10 +4449,17 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+ 	 * Enter the nested guest now
+ 	 */
+ 
++	vmcb_mark_all_dirty(svm->vmcb01.ptr);
++
+ 	vmcb12 = map.hva;
+ 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+ 	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
+ 
++	if (ret)
++		goto unmap_save;
++
++	svm->nested.nested_run_pending = 1;
++
+ unmap_save:
+ 	kvm_vcpu_unmap(vcpu, &map_save, true);
+ unmap_map:
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 1b7456b2177b9..60563a45f3eb8 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -68,10 +68,11 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
+ 		reprogram_counter(pmu, bit);
+ }
+ 
+-static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
+-				      u8 event_select,
+-				      u8 unit_mask)
++static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
+ {
++	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
++	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
++	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
+@@ -703,7 +704,7 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
+ }
+ 
+ struct kvm_pmu_ops intel_pmu_ops = {
+-	.find_arch_event = intel_find_arch_event,
++	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
+ 	.find_fixed_event = intel_find_fixed_event,
+ 	.pmc_is_enabled = intel_pmc_is_enabled,
+ 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index fe4a36c984460..4b356ae175cc9 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7534,6 +7534,7 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+ 		if (ret)
+ 			return ret;
+ 
++		vmx->nested.nested_run_pending = 1;
+ 		vmx->nested.smm.guest_mode = false;
+ 	}
+ 	return 0;
+diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
+index dff2bdf9507a8..94fce17f0f5a3 100644
+--- a/arch/x86/kvm/xen.c
++++ b/arch/x86/kvm/xen.c
+@@ -93,32 +93,57 @@ static void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)
+ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
+ {
+ 	struct kvm_vcpu_xen *vx = &v->arch.xen;
++	struct gfn_to_hva_cache *ghc = &vx->runstate_cache;
++	struct kvm_memslots *slots = kvm_memslots(v->kvm);
++	bool atomic = (state == RUNSTATE_runnable);
+ 	uint64_t state_entry_time;
+-	unsigned int offset;
++	int __user *user_state;
++	uint64_t __user *user_times;
+ 
+ 	kvm_xen_update_runstate(v, state);
+ 
+ 	if (!vx->runstate_set)
+ 		return;
+ 
+-	BUILD_BUG_ON(sizeof(struct compat_vcpu_runstate_info) != 0x2c);
++	if (unlikely(slots->generation != ghc->generation || kvm_is_error_hva(ghc->hva)) &&
++	    kvm_gfn_to_hva_cache_init(v->kvm, ghc, ghc->gpa, ghc->len))
++		return;
++
++	/* We made sure it fits in a single page */
++	BUG_ON(!ghc->memslot);
++
++	if (atomic)
++		pagefault_disable();
+ 
+-	offset = offsetof(struct compat_vcpu_runstate_info, state_entry_time);
+-#ifdef CONFIG_X86_64
+ 	/*
+-	 * The only difference is alignment of uint64_t in 32-bit.
+-	 * So the first field 'state' is accessed directly using
+-	 * offsetof() (where its offset happens to be zero), while the
+-	 * remaining fields which are all uint64_t, start at 'offset'
+-	 * which we tweak here by adding 4.
++	 * The only difference between 32-bit and 64-bit versions of the
++	 * runstate struct is the alignment of uint64_t in 32-bit, which
++	 * means that the 64-bit version has an additional 4 bytes of
++	 * padding after the first field 'state'.
++	 *
++	 * So we use 'int __user *user_state' to point to the state field,
++	 * and 'uint64_t __user *user_times' for runstate_entry_time. So
++	 * the actual array of time[] in each state starts at user_times[1].
+ 	 */
++	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state) != 0);
++	BUILD_BUG_ON(offsetof(struct compat_vcpu_runstate_info, state) != 0);
++	user_state = (int __user *)ghc->hva;
++
++	BUILD_BUG_ON(sizeof(struct compat_vcpu_runstate_info) != 0x2c);
++
++	user_times = (uint64_t __user *)(ghc->hva +
++					 offsetof(struct compat_vcpu_runstate_info,
++						  state_entry_time));
++#ifdef CONFIG_X86_64
+ 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state_entry_time) !=
+ 		     offsetof(struct compat_vcpu_runstate_info, state_entry_time) + 4);
+ 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, time) !=
+ 		     offsetof(struct compat_vcpu_runstate_info, time) + 4);
+ 
+ 	if (v->kvm->arch.xen.long_mode)
+-		offset = offsetof(struct vcpu_runstate_info, state_entry_time);
++		user_times = (uint64_t __user *)(ghc->hva +
++						 offsetof(struct vcpu_runstate_info,
++							  state_entry_time));
+ #endif
+ 	/*
+ 	 * First write the updated state_entry_time at the appropriate
+@@ -132,10 +157,8 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
+ 	BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state_entry_time) !=
+ 		     sizeof(state_entry_time));
+ 
+-	if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
+-					  &state_entry_time, offset,
+-					  sizeof(state_entry_time)))
+-		return;
++	if (__put_user(state_entry_time, user_times))
++		goto out;
+ 	smp_wmb();
+ 
+ 	/*
+@@ -149,11 +172,8 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
+ 	BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state) !=
+ 		     sizeof(vx->current_runstate));
+ 
+-	if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
+-					  &vx->current_runstate,
+-					  offsetof(struct vcpu_runstate_info, state),
+-					  sizeof(vx->current_runstate)))
+-		return;
++	if (__put_user(vx->current_runstate, user_state))
++		goto out;
+ 
+ 	/*
+ 	 * Write the actual runstate times immediately after the
+@@ -168,24 +188,23 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
+ 	BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
+ 		     sizeof(vx->runstate_times));
+ 
+-	if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
+-					  &vx->runstate_times[0],
+-					  offset + sizeof(u64),
+-					  sizeof(vx->runstate_times)))
+-		return;
+-
++	if (__copy_to_user(user_times + 1, vx->runstate_times, sizeof(vx->runstate_times)))
++		goto out;
+ 	smp_wmb();
+ 
+ 	/*
+ 	 * Finally, clear the XEN_RUNSTATE_UPDATE bit in the guest's
+ 	 * runstate_entry_time field.
+ 	 */
+-
+ 	state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+-	if (kvm_write_guest_offset_cached(v->kvm, &v->arch.xen.runstate_cache,
+-					  &state_entry_time, offset,
+-					  sizeof(state_entry_time)))
+-		return;
++	__put_user(state_entry_time, user_times);
++	smp_wmb();
++
++ out:
++	mark_page_dirty_in_slot(v->kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
++
++	if (atomic)
++		pagefault_enable();
+ }
+ 
+ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
+@@ -337,6 +356,12 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
+ 			break;
+ 		}
+ 
++		/* It must fit within a single page */
++		if ((data->u.gpa & ~PAGE_MASK) + sizeof(struct vcpu_info) > PAGE_SIZE) {
++			r = -EINVAL;
++			break;
++		}
++
+ 		r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ 					      &vcpu->arch.xen.vcpu_info_cache,
+ 					      data->u.gpa,
+@@ -354,6 +379,12 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
+ 			break;
+ 		}
+ 
++		/* It must fit within a single page */
++		if ((data->u.gpa & ~PAGE_MASK) + sizeof(struct pvclock_vcpu_time_info) > PAGE_SIZE) {
++			r = -EINVAL;
++			break;
++		}
++
+ 		r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ 					      &vcpu->arch.xen.vcpu_time_info_cache,
+ 					      data->u.gpa,
+@@ -375,6 +406,12 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
+ 			break;
+ 		}
+ 
++		/* It must fit within a single page */
++		if ((data->u.gpa & ~PAGE_MASK) + sizeof(struct vcpu_runstate_info) > PAGE_SIZE) {
++			r = -EINVAL;
++			break;
++		}
++
+ 		r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ 					      &vcpu->arch.xen.runstate_cache,
+ 					      data->u.gpa,
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 5004feb16783d..d47c3d176ae4b 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1341,10 +1341,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 
+ 		xen_acpi_sleep_register();
+ 
+-		/* Avoid searching for BIOS MP tables */
+-		x86_init.mpparse.find_smp_config = x86_init_noop;
+-		x86_init.mpparse.get_smp_config = x86_init_uint_noop;
+-
+ 		xen_boot_params_init_edd();
+ 
+ #ifdef CONFIG_ACPI
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 6a8f3b53ab834..4a6019238ee7d 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -148,28 +148,12 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	return rc;
+ }
+ 
+-static void __init xen_fill_possible_map(void)
+-{
+-	int i, rc;
+-
+-	if (xen_initial_domain())
+-		return;
+-
+-	for (i = 0; i < nr_cpu_ids; i++) {
+-		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
+-		if (rc >= 0) {
+-			num_processors++;
+-			set_cpu_possible(i, true);
+-		}
+-	}
+-}
+-
+-static void __init xen_filter_cpu_maps(void)
++static void __init _get_smp_config(unsigned int early)
+ {
+ 	int i, rc;
+ 	unsigned int subtract = 0;
+ 
+-	if (!xen_initial_domain())
++	if (early)
+ 		return;
+ 
+ 	num_processors = 0;
+@@ -210,7 +194,6 @@ static void __init xen_pv_smp_prepare_boot_cpu(void)
+ 		 * sure the old memory can be recycled. */
+ 		make_lowmem_page_readwrite(xen_initial_gdt);
+ 
+-	xen_filter_cpu_maps();
+ 	xen_setup_vcpu_info_placement();
+ 
+ 	/*
+@@ -476,5 +459,8 @@ static const struct smp_ops xen_smp_ops __initconst = {
+ void __init xen_smp_init(void)
+ {
+ 	smp_ops = xen_smp_ops;
+-	xen_fill_possible_map();
++
++	/* Avoid searching for BIOS MP tables */
++	x86_init.mpparse.find_smp_config = x86_init_noop;
++	x86_init.mpparse.get_smp_config = _get_smp_config;
+ }
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 30918b0e81c02..8c0950c9a1a2f 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -6878,6 +6878,8 @@ static void bfq_exit_queue(struct elevator_queue *e)
+ 	spin_unlock_irq(&bfqd->lock);
+ #endif
+ 
++	wbt_enable_default(bfqd->queue);
++
+ 	kfree(bfqd);
+ }
+ 
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 9ebeb9bdf5832..5adca3a9cebea 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -324,13 +324,6 @@ void blk_queue_start_drain(struct request_queue *q)
+ 	wake_up_all(&q->mq_freeze_wq);
+ }
+ 
+-void blk_set_queue_dying(struct request_queue *q)
+-{
+-	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
+-	blk_queue_start_drain(q);
+-}
+-EXPORT_SYMBOL_GPL(blk_set_queue_dying);
+-
+ /**
+  * blk_cleanup_queue - shutdown a request queue
+  * @q: request queue to shutdown
+@@ -348,7 +341,8 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	WARN_ON_ONCE(blk_queue_registered(q));
+ 
+ 	/* mark @q DYING, no new request or merges will be allowed afterwards */
+-	blk_set_queue_dying(q);
++	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
++	blk_queue_start_drain(q);
+ 
+ 	blk_queue_flag_set(QUEUE_FLAG_NOMERGES, q);
+ 	blk_queue_flag_set(QUEUE_FLAG_NOXMERGES, q);
+diff --git a/block/elevator.c b/block/elevator.c
+index 19a78d5516ba7..42cb7af57b3ed 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -523,8 +523,6 @@ void elv_unregister_queue(struct request_queue *q)
+ 		kobject_del(&e->kobj);
+ 
+ 		e->registered = 0;
+-		/* Re-enable throttling in case elevator disabled it */
+-		wbt_enable_default(q);
+ 	}
+ }
+ 
+diff --git a/block/genhd.c b/block/genhd.c
+index 5308e0920fa6f..f6a698f3252b5 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -549,6 +549,20 @@ out_free_ext_minor:
+ }
+ EXPORT_SYMBOL(device_add_disk);
+ 
++/**
++ * blk_mark_disk_dead - mark a disk as dead
++ * @disk: disk to mark as dead
++ *
++ * Mark a disk as dead (e.g. surprise removed) and don't accept any new I/O
++ * to this disk.
++ */
++void blk_mark_disk_dead(struct gendisk *disk)
++{
++	set_bit(GD_DEAD, &disk->state);
++	blk_queue_start_drain(disk->queue);
++}
++EXPORT_SYMBOL_GPL(blk_mark_disk_dead);
++
+ /**
+  * del_gendisk - remove the gendisk
+  * @disk: the struct gendisk to remove
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 3dd5a773c320b..17dc136d4538f 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -25,12 +25,9 @@ struct alg_type_list {
+ 	struct list_head list;
+ };
+ 
+-static atomic_long_t alg_memory_allocated;
+-
+ static struct proto alg_proto = {
+ 	.name			= "ALG",
+ 	.owner			= THIS_MODULE,
+-	.memory_allocated	= &alg_memory_allocated,
+ 	.obj_size		= sizeof(struct alg_sock),
+ };
+ 
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 76ef1bcc88480..a26f8094cc1c1 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -95,6 +95,11 @@ static const struct dmi_system_id processor_power_dmi_table[] = {
+ 	  DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
+ 	  DMI_MATCH(DMI_PRODUCT_NAME,"L8400B series Notebook PC")},
+ 	 (void *)1},
++	/* T40 can not handle C3 idle state */
++	{ set_max_cstate, "IBM ThinkPad T40", {
++	  DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
++	  DMI_MATCH(DMI_PRODUCT_NAME, "23737CU")},
++	 (void *)2},
+ 	{},
+ };
+ 
+diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
+index 1c48358b43ba3..e0185e841b2a3 100644
+--- a/drivers/acpi/x86/s2idle.c
++++ b/drivers/acpi/x86/s2idle.c
+@@ -424,15 +424,11 @@ static int lps0_device_attach(struct acpi_device *adev,
+ 		mem_sleep_current = PM_SUSPEND_TO_IDLE;
+ 
+ 	/*
+-	 * Some Intel based LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U don't
+-	 * use intel-hid or intel-vbtn but require the EC GPE to be enabled while
+-	 * suspended for certain wakeup devices to work, so mark it as wakeup-capable.
+-	 *
+-	 * Only enable on !AMD as enabling this universally causes problems for a number
+-	 * of AMD based systems.
++	 * Some LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U, require the
++	 * EC GPE to be enabled while suspended for certain wakeup devices to
++	 * work, so mark it as wakeup-capable.
+ 	 */
+-	if (!acpi_s2idle_vendor_amd())
+-		acpi_ec_mark_gpe_for_wake();
++	acpi_ec_mark_gpe_for_wake();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 94bc5dbb31e1e..63666ee9de175 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4079,6 +4079,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 
+ 	/* devices that don't properly handle TRIM commands */
+ 	{ "SuperSSpeed S238*",		NULL,	ATA_HORKAGE_NOTRIM, },
++	{ "M88V29*",			NULL,	ATA_HORKAGE_NOTRIM, },
+ 
+ 	/*
+ 	 * As defined, the DRAT (Deterministic Read After Trim) and RZAT
+diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
+index c91b9010c1a6d..53489562fa36b 100644
+--- a/drivers/block/mtip32xx/mtip32xx.c
++++ b/drivers/block/mtip32xx/mtip32xx.c
+@@ -4113,7 +4113,7 @@ static void mtip_pci_remove(struct pci_dev *pdev)
+ 			"Completion workers still active!\n");
+ 	}
+ 
+-	blk_set_queue_dying(dd->queue);
++	blk_mark_disk_dead(dd->disk);
+ 	set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag);
+ 
+ 	/* Clean up the block layer. */
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 953fa134cd3db..7cc6871fd8e52 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -7186,7 +7186,7 @@ static ssize_t do_rbd_remove(struct bus_type *bus,
+ 		 * IO to complete/fail.
+ 		 */
+ 		blk_mq_freeze_queue(rbd_dev->disk->queue);
+-		blk_set_queue_dying(rbd_dev->disk->queue);
++		blk_mark_disk_dead(rbd_dev->disk);
+ 	}
+ 
+ 	del_gendisk(rbd_dev->disk);
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 286cf1afad781..2a5b14230986a 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -2129,7 +2129,7 @@ static void blkfront_closing(struct blkfront_info *info)
+ 
+ 	/* No more blkif_request(). */
+ 	blk_mq_stop_hw_queues(info->rq);
+-	blk_set_queue_dying(info->rq);
++	blk_mark_disk_dead(info->gd);
+ 	set_capacity(info->gd, 0);
+ 
+ 	for_each_rinfo(info, rinfo, i) {
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index a27ae3999ff32..ebe86de9d0acc 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1963,7 +1963,10 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ 		 */
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			return -EPERM;
+-		input_pool.entropy_count = 0;
++		if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) {
++			wake_up_interruptible(&random_write_wait);
++			kill_fasync(&fasync, SIGIO, POLL_OUT);
++		}
+ 		return 0;
+ 	case RNDRESEEDCRNG:
+ 		if (!capable(CAP_SYS_ADMIN))
+diff --git a/drivers/dma/ptdma/ptdma-dev.c b/drivers/dma/ptdma/ptdma-dev.c
+index 8a6bf291a73fe..daafea5bc35d9 100644
+--- a/drivers/dma/ptdma/ptdma-dev.c
++++ b/drivers/dma/ptdma/ptdma-dev.c
+@@ -207,7 +207,7 @@ int pt_core_init(struct pt_device *pt)
+ 	if (!cmd_q->qbase) {
+ 		dev_err(dev, "unable to allocate command queue\n");
+ 		ret = -ENOMEM;
+-		goto e_dma_alloc;
++		goto e_destroy_pool;
+ 	}
+ 
+ 	cmd_q->qidx = 0;
+@@ -229,8 +229,10 @@ int pt_core_init(struct pt_device *pt)
+ 
+ 	/* Request an irq */
+ 	ret = request_irq(pt->pt_irq, pt_core_irq_handler, 0, dev_name(pt->dev), pt);
+-	if (ret)
+-		goto e_pool;
++	if (ret) {
++		dev_err(dev, "unable to allocate an IRQ\n");
++		goto e_free_dma;
++	}
+ 
+ 	/* Update the device registers with queue information. */
+ 	cmd_q->qcontrol &= ~CMD_Q_SIZE;
+@@ -250,21 +252,20 @@ int pt_core_init(struct pt_device *pt)
+ 	/* Register the DMA engine support */
+ 	ret = pt_dmaengine_register(pt);
+ 	if (ret)
+-		goto e_dmaengine;
++		goto e_free_irq;
+ 
+ 	/* Set up debugfs entries */
+ 	ptdma_debugfs_setup(pt);
+ 
+ 	return 0;
+ 
+-e_dmaengine:
++e_free_irq:
+ 	free_irq(pt->pt_irq, pt);
+ 
+-e_dma_alloc:
++e_free_dma:
+ 	dma_free_coherent(dev, cmd_q->qsize, cmd_q->qbase, cmd_q->qbase_dma);
+ 
+-e_pool:
+-	dev_err(dev, "unable to allocate an IRQ\n");
++e_destroy_pool:
+ 	dma_pool_destroy(pt->cmd_q.dma_pool);
+ 
+ 	return ret;
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index 5c7716fd6bc56..02fbb85a1aaf3 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -1868,8 +1868,13 @@ static int rcar_dmac_probe(struct platform_device *pdev)
+ 
+ 	dmac->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, dmac);
+-	dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
+-	dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
++	ret = dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
++	if (ret)
++		return ret;
++
++	ret = dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
++	if (ret)
++		return ret;
+ 
+ 	ret = rcar_dmac_parse_of(&pdev->dev, dmac);
+ 	if (ret < 0)
+diff --git a/drivers/dma/stm32-dmamux.c b/drivers/dma/stm32-dmamux.c
+index a42164389ebc2..d5d55732adba1 100644
+--- a/drivers/dma/stm32-dmamux.c
++++ b/drivers/dma/stm32-dmamux.c
+@@ -292,10 +292,12 @@ static int stm32_dmamux_probe(struct platform_device *pdev)
+ 	ret = of_dma_router_register(node, stm32_dmamux_route_allocate,
+ 				     &stm32_dmamux->dmarouter);
+ 	if (ret)
+-		goto err_clk;
++		goto pm_disable;
+ 
+ 	return 0;
+ 
++pm_disable:
++	pm_runtime_disable(&pdev->dev);
+ err_clk:
+ 	clk_disable_unprepare(stm32_dmamux->clk);
+ 
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 9f82ca2953530..9dcca5b90b804 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -213,7 +213,7 @@ void *edac_align_ptr(void **p, unsigned int size, int n_elems)
+ 	else
+ 		return (char *)ptr;
+ 
+-	r = (unsigned long)p % align;
++	r = (unsigned long)ptr % align;
+ 
+ 	if (r == 0)
+ 		return (char *)ptr;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 7d67aec6f4a2b..f59121ec26485 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1406,12 +1406,10 @@ int amdgpu_acpi_smart_shift_update(struct drm_device *dev, enum amdgpu_ss ss_sta
+ int amdgpu_acpi_pcie_notify_device_ready(struct amdgpu_device *adev);
+ 
+ void amdgpu_acpi_get_backlight_caps(struct amdgpu_dm_backlight_caps *caps);
+-bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev);
+ void amdgpu_acpi_detect(void);
+ #else
+ static inline int amdgpu_acpi_init(struct amdgpu_device *adev) { return 0; }
+ static inline void amdgpu_acpi_fini(struct amdgpu_device *adev) { }
+-static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; }
+ static inline void amdgpu_acpi_detect(void) { }
+ static inline bool amdgpu_acpi_is_power_shift_control_supported(void) { return false; }
+ static inline int amdgpu_acpi_power_shift_control(struct amdgpu_device *adev,
+@@ -1420,6 +1418,14 @@ static inline int amdgpu_acpi_smart_shift_update(struct drm_device *dev,
+ 						 enum amdgpu_ss ss_state) { return 0; }
+ #endif
+ 
++#if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
++bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev);
++bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev);
++#else
++static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; }
++static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; }
++#endif
++
+ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
+ 			   uint64_t addr, struct amdgpu_bo **bo,
+ 			   struct amdgpu_bo_va_mapping **mapping);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 4811b0faafd9a..0e12315fa0cb8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -1031,6 +1031,20 @@ void amdgpu_acpi_detect(void)
+ 	}
+ }
+ 
++#if IS_ENABLED(CONFIG_SUSPEND)
++/**
++ * amdgpu_acpi_is_s3_active
++ *
++ * @adev: amdgpu device pointer
++ *
++ * Returns true if supported, false if not.
++ */
++bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev)
++{
++	return !(adev->flags & AMD_IS_APU) ||
++		(pm_suspend_target_state == PM_SUSPEND_MEM);
++}
++
+ /**
+  * amdgpu_acpi_is_s0ix_active
+  *
+@@ -1040,11 +1054,24 @@ void amdgpu_acpi_detect(void)
+  */
+ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev)
+ {
+-#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_SUSPEND)
+-	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
+-		if (adev->flags & AMD_IS_APU)
+-			return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
++	if (!(adev->flags & AMD_IS_APU) ||
++	    (pm_suspend_target_state != PM_SUSPEND_TO_IDLE))
++		return false;
++
++	if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) {
++		dev_warn_once(adev->dev,
++			      "Power consumption will be higher as BIOS has not been configured for suspend-to-idle.\n"
++			      "To use suspend-to-idle change the sleep mode in BIOS setup.\n");
++		return false;
+ 	}
+-#endif
++
++#if !IS_ENABLED(CONFIG_AMD_PMC)
++	dev_warn_once(adev->dev,
++		      "Power consumption will be higher as the kernel has not been compiled with CONFIG_AMD_PMC.\n");
+ 	return false;
++#else
++	return true;
++#endif /* CONFIG_AMD_PMC */
+ }
++
++#endif /* CONFIG_SUSPEND */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index c811161ce9f09..ab3851c26f71c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2236,6 +2236,7 @@ static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work)
+ static int amdgpu_pmops_prepare(struct device *dev)
+ {
+ 	struct drm_device *drm_dev = dev_get_drvdata(dev);
++	struct amdgpu_device *adev = drm_to_adev(drm_dev);
+ 
+ 	/* Return a positive number here so
+ 	 * DPM_FLAG_SMART_SUSPEND works properly
+@@ -2243,6 +2244,13 @@ static int amdgpu_pmops_prepare(struct device *dev)
+ 	if (amdgpu_device_supports_boco(drm_dev))
+ 		return pm_runtime_suspended(dev);
+ 
++	/* if we will not support s3 or s2i for the device
++	 *  then skip suspend
++	 */
++	if (!amdgpu_acpi_is_s0ix_active(adev) &&
++	    !amdgpu_acpi_is_s3_active(adev))
++		return 1;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index c875f1cdd2af7..ffc3ce0004e99 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -1913,7 +1913,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
+ 	unsigned i;
+ 	int r;
+ 
+-	if (direct_submit && !ring->sched.ready) {
++	if (!direct_submit && !ring->sched.ready) {
+ 		DRM_ERROR("Trying to move memory with ring turned off.\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_1.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_1.c
+index b4eddf6e98a6a..ff738e9725ee8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_1.c
+@@ -543,7 +543,9 @@ static void gfxhub_v2_1_utcl2_harvest(struct amdgpu_device *adev)
+ 		adev->gfx.config.max_sh_per_se *
+ 		adev->gfx.config.max_shader_engines);
+ 
+-	if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 3)) {
++	switch (adev->ip_versions[GC_HWIP][0]) {
++	case IP_VERSION(10, 3, 1):
++	case IP_VERSION(10, 3, 3):
+ 		/* Get SA disabled bitmap from eFuse setting */
+ 		efuse_setting = RREG32_SOC15(GC, 0, mmCC_GC_SA_UNIT_DISABLE);
+ 		efuse_setting &= CC_GC_SA_UNIT_DISABLE__SA_DISABLE_MASK;
+@@ -566,6 +568,9 @@ static void gfxhub_v2_1_utcl2_harvest(struct amdgpu_device *adev)
+ 		disabled_sa = tmp;
+ 
+ 		WREG32_SOC15(GC, 0, mmGCUTCL2_HARVEST_BYPASS_GROUPS_YELLOW_CARP, disabled_sa);
++		break;
++	default:
++		break;
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index e8e4749e9c797..f0638db57111d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -2057,6 +2057,10 @@ static int sdma_v4_0_suspend(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	/* SMU saves SDMA state for us */
++	if (adev->in_s0ix)
++		return 0;
++
+ 	return sdma_v4_0_hw_fini(adev);
+ }
+ 
+@@ -2064,6 +2068,10 @@ static int sdma_v4_0_resume(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	/* SMU restores SDMA state for us */
++	if (adev->in_s0ix)
++		return 0;
++
+ 	return sdma_v4_0_hw_init(adev);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index efcb25ef1809a..0117b00b4ed83 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3629,7 +3629,7 @@ static int dcn10_register_irq_handlers(struct amdgpu_device *adev)
+ 
+ 	/* Use GRPH_PFLIP interrupt */
+ 	for (i = DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT;
+-			i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + adev->mode_info.num_crtc - 1;
++			i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + dc->caps.max_otg_num - 1;
+ 			i++) {
+ 		r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_DCE, i, &adev->pageflip_irq);
+ 		if (r) {
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+index 162ae71861247..21d2cbc3cbb20 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+@@ -120,7 +120,11 @@ int dcn31_smu_send_msg_with_param(
+ 	result = dcn31_smu_wait_for_response(clk_mgr, 10, 200000);
+ 
+ 	if (result == VBIOSSMC_Result_Failed) {
+-		ASSERT(0);
++		if (msg_id == VBIOSSMC_MSG_TransferTableDram2Smu &&
++		    param == TABLE_WATERMARKS)
++			DC_LOG_WARNING("Watermarks table not configured properly by SMU");
++		else
++			ASSERT(0);
+ 		REG_WRITE(MP1_SMN_C2PMSG_91, VBIOSSMC_Result_OK);
+ 		return -1;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index f0fbd8ad56229..e890e063cde31 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1237,6 +1237,8 @@ struct dc *dc_create(const struct dc_init_data *init_params)
+ 
+ 		dc->caps.max_dp_protocol_version = DP_VERSION_1_4;
+ 
++		dc->caps.max_otg_num = dc->res_pool->res_cap->num_timing_generator;
++
+ 		if (dc->res_pool->dmcu != NULL)
+ 			dc->versions.dmcu_version = dc->res_pool->dmcu->dmcu_version;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 618e7989176fc..14864763a1881 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -190,6 +190,7 @@ struct dc_caps {
+ #endif
+ 	bool vbios_lttpr_aware;
+ 	bool vbios_lttpr_enable;
++	uint32_t max_otg_num;
+ };
+ 
+ struct dc_bug_wa {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c
+index 90c73a1cb9861..5e3bcaf12cac4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c
+@@ -138,8 +138,11 @@ static uint32_t convert_and_clamp(
+ 	ret_val = wm_ns * refclk_mhz;
+ 	ret_val /= 1000;
+ 
+-	if (ret_val > clamp_value)
++	if (ret_val > clamp_value) {
++		/* clamping WMs is abnormal, unexpected and may lead to underflow */
++		ASSERT(0);
+ 		ret_val = clamp_value;
++	}
+ 
+ 	return ret_val;
+ }
+@@ -159,7 +162,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->a.urgent_ns > hubbub2->watermarks.a.urgent_ns) {
+ 		hubbub2->watermarks.a.urgent_ns = watermarks->a.urgent_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->a.urgent_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, 0,
+ 				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, prog_wm_value);
+ 
+@@ -193,7 +196,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->a.urgent_latency_ns > hubbub2->watermarks.a.urgent_latency_ns) {
+ 		hubbub2->watermarks.a.urgent_latency_ns = watermarks->a.urgent_latency_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->a.urgent_latency_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A, 0,
+ 				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A, prog_wm_value);
+ 	} else if (watermarks->a.urgent_latency_ns < hubbub2->watermarks.a.urgent_latency_ns)
+@@ -203,7 +206,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->b.urgent_ns > hubbub2->watermarks.b.urgent_ns) {
+ 		hubbub2->watermarks.b.urgent_ns = watermarks->b.urgent_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->b.urgent_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, 0,
+ 				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, prog_wm_value);
+ 
+@@ -237,7 +240,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->b.urgent_latency_ns > hubbub2->watermarks.b.urgent_latency_ns) {
+ 		hubbub2->watermarks.b.urgent_latency_ns = watermarks->b.urgent_latency_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->b.urgent_latency_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B, 0,
+ 				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B, prog_wm_value);
+ 	} else if (watermarks->b.urgent_latency_ns < hubbub2->watermarks.b.urgent_latency_ns)
+@@ -247,7 +250,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->c.urgent_ns > hubbub2->watermarks.c.urgent_ns) {
+ 		hubbub2->watermarks.c.urgent_ns = watermarks->c.urgent_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->c.urgent_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, 0,
+ 				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, prog_wm_value);
+ 
+@@ -281,7 +284,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->c.urgent_latency_ns > hubbub2->watermarks.c.urgent_latency_ns) {
+ 		hubbub2->watermarks.c.urgent_latency_ns = watermarks->c.urgent_latency_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->c.urgent_latency_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C, 0,
+ 				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C, prog_wm_value);
+ 	} else if (watermarks->c.urgent_latency_ns < hubbub2->watermarks.c.urgent_latency_ns)
+@@ -291,7 +294,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->d.urgent_ns > hubbub2->watermarks.d.urgent_ns) {
+ 		hubbub2->watermarks.d.urgent_ns = watermarks->d.urgent_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->d.urgent_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, 0,
+ 				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, prog_wm_value);
+ 
+@@ -325,7 +328,7 @@ static bool hubbub31_program_urgent_watermarks(
+ 	if (safe_to_lower || watermarks->d.urgent_latency_ns > hubbub2->watermarks.d.urgent_latency_ns) {
+ 		hubbub2->watermarks.d.urgent_latency_ns = watermarks->d.urgent_latency_ns;
+ 		prog_wm_value = convert_and_clamp(watermarks->d.urgent_latency_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0x3fff);
+ 		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D, 0,
+ 				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D, prog_wm_value);
+ 	} else if (watermarks->d.urgent_latency_ns < hubbub2->watermarks.d.urgent_latency_ns)
+@@ -351,7 +354,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_A calculated =%d\n"
+@@ -367,7 +370,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->a.cstate_pstate.cstate_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->a.cstate_pstate.cstate_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_A, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_A, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_A calculated =%d\n"
+@@ -383,7 +386,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->a.cstate_pstate.cstate_enter_plus_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->a.cstate_pstate.cstate_enter_plus_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_A, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_A, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_WATERMARK_Z8_A calculated =%d\n"
+@@ -399,7 +402,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->a.cstate_pstate.cstate_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->a.cstate_pstate.cstate_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_A, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_A, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_Z8_A calculated =%d\n"
+@@ -416,7 +419,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->b.cstate_pstate.cstate_enter_plus_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->b.cstate_pstate.cstate_enter_plus_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_B, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_B, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_B calculated =%d\n"
+@@ -432,7 +435,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->b.cstate_pstate.cstate_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->b.cstate_pstate.cstate_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_B, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_B, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_B calculated =%d\n"
+@@ -448,7 +451,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->b.cstate_pstate.cstate_enter_plus_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->b.cstate_pstate.cstate_enter_plus_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_B, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_B, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_WATERMARK_Z8_B calculated =%d\n"
+@@ -464,7 +467,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->b.cstate_pstate.cstate_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->b.cstate_pstate.cstate_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_B, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_B, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_Z8_B calculated =%d\n"
+@@ -481,7 +484,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->c.cstate_pstate.cstate_enter_plus_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->c.cstate_pstate.cstate_enter_plus_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_C, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_C, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_C calculated =%d\n"
+@@ -497,7 +500,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->c.cstate_pstate.cstate_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->c.cstate_pstate.cstate_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_C, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_C, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_C calculated =%d\n"
+@@ -513,7 +516,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->c.cstate_pstate.cstate_enter_plus_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->c.cstate_pstate.cstate_enter_plus_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_C, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_C, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_WATERMARK_Z8_C calculated =%d\n"
+@@ -529,7 +532,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->c.cstate_pstate.cstate_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->c.cstate_pstate.cstate_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_C, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_C, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_Z8_C calculated =%d\n"
+@@ -546,7 +549,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->d.cstate_pstate.cstate_enter_plus_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->d.cstate_pstate.cstate_enter_plus_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_D, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_D, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_D calculated =%d\n"
+@@ -562,7 +565,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->d.cstate_pstate.cstate_exit_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->d.cstate_pstate.cstate_exit_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_D, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_D, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_D calculated =%d\n"
+@@ -578,7 +581,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->d.cstate_pstate.cstate_enter_plus_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->d.cstate_pstate.cstate_enter_plus_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_D, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_D, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_WATERMARK_Z8_D calculated =%d\n"
+@@ -594,7 +597,7 @@ static bool hubbub31_program_stutter_watermarks(
+ 				watermarks->d.cstate_pstate.cstate_exit_z8_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->d.cstate_pstate.cstate_exit_z8_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_D, 0,
+ 				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_D, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_Z8_D calculated =%d\n"
+@@ -625,7 +628,7 @@ static bool hubbub31_program_pstate_watermarks(
+ 				watermarks->a.cstate_pstate.pstate_change_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->a.cstate_pstate.pstate_change_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_A, 0,
+ 				DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_A, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_A calculated =%d\n"
+@@ -642,7 +645,7 @@ static bool hubbub31_program_pstate_watermarks(
+ 				watermarks->b.cstate_pstate.pstate_change_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->b.cstate_pstate.pstate_change_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_B, 0,
+ 				DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_B, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_B calculated =%d\n"
+@@ -659,7 +662,7 @@ static bool hubbub31_program_pstate_watermarks(
+ 				watermarks->c.cstate_pstate.pstate_change_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->c.cstate_pstate.pstate_change_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_C, 0,
+ 				DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_C, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_C calculated =%d\n"
+@@ -676,7 +679,7 @@ static bool hubbub31_program_pstate_watermarks(
+ 				watermarks->d.cstate_pstate.pstate_change_ns;
+ 		prog_wm_value = convert_and_clamp(
+ 				watermarks->d.cstate_pstate.pstate_change_ns,
+-				refclk_mhz, 0x1fffff);
++				refclk_mhz, 0xffff);
+ 		REG_SET(DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_D, 0,
+ 				DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_D, prog_wm_value);
+ 		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_D calculated =%d\n"
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+index caf1775d48ef6..0bc84b709a935 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+@@ -282,14 +282,9 @@ static int yellow_carp_post_smu_init(struct smu_context *smu)
+ 
+ static int yellow_carp_mode_reset(struct smu_context *smu, int type)
+ {
+-	int ret = 0, index = 0;
+-
+-	index = smu_cmn_to_asic_specific_index(smu, CMN2ASIC_MAPPING_MSG,
+-				SMU_MSG_GfxDeviceDriverReset);
+-	if (index < 0)
+-		return index == -EACCES ? 0 : index;
++	int ret = 0;
+ 
+-	ret = smu_cmn_send_smc_msg_with_param(smu, (uint16_t)index, type, NULL);
++	ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, type, NULL);
+ 	if (ret)
+ 		dev_err(smu->adev->dev, "Failed to mode reset!\n");
+ 
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index 909f318331816..f195c70131373 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -76,15 +76,17 @@ int drm_atomic_set_mode_for_crtc(struct drm_crtc_state *state,
+ 	state->mode_blob = NULL;
+ 
+ 	if (mode) {
++		struct drm_property_blob *blob;
++
+ 		drm_mode_convert_to_umode(&umode, mode);
+-		state->mode_blob =
+-			drm_property_create_blob(state->crtc->dev,
+-						 sizeof(umode),
+-						 &umode);
+-		if (IS_ERR(state->mode_blob))
+-			return PTR_ERR(state->mode_blob);
++		blob = drm_property_create_blob(crtc->dev,
++						sizeof(umode), &umode);
++		if (IS_ERR(blob))
++			return PTR_ERR(blob);
+ 
+ 		drm_mode_copy(&state->mode, mode);
++
++		state->mode_blob = blob;
+ 		state->enable = true;
+ 		drm_dbg_atomic(crtc->dev,
+ 			       "Set [MODE:%s] for [CRTC:%d:%s] state %p\n",
+diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
+index 1e7e8cd64cb58..9338a342027a5 100644
+--- a/drivers/gpu/drm/drm_gem_cma_helper.c
++++ b/drivers/gpu/drm/drm_gem_cma_helper.c
+@@ -518,6 +518,7 @@ int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+ 	 */
+ 	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
+ 	vma->vm_flags &= ~VM_PFNMAP;
++	vma->vm_flags |= VM_DONTEXPAND;
+ 
+ 	cma_obj = to_drm_gem_cma_obj(obj);
+ 
+diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
+index 84b6fc70cbf52..0bddb75fa7e01 100644
+--- a/drivers/gpu/drm/i915/Kconfig
++++ b/drivers/gpu/drm/i915/Kconfig
+@@ -101,6 +101,7 @@ config DRM_I915_USERPTR
+ config DRM_I915_GVT
+ 	bool "Enable Intel GVT-g graphics virtualization host support"
+ 	depends on DRM_I915
++	depends on X86
+ 	depends on 64BIT
+ 	default n
+ 	help
+diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c
+index 0065111593a60..4a2662838cd8d 100644
+--- a/drivers/gpu/drm/i915/display/intel_opregion.c
++++ b/drivers/gpu/drm/i915/display/intel_opregion.c
+@@ -360,6 +360,21 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
+ 		port++;
+ 	}
+ 
++	/*
++	 * The port numbering and mapping here is bizarre. The now-obsolete
++	 * swsci spec supports ports numbered [0..4]. Port E is handled as a
++	 * special case, but port F and beyond are not. The functionality is
++	 * supposed to be obsolete for new platforms. Just bail out if the port
++	 * number is out of bounds after mapping.
++	 */
++	if (port > 4) {
++		drm_dbg_kms(&dev_priv->drm,
++			    "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
++			    intel_encoder->base.base.id, intel_encoder->base.name,
++			    port_name(intel_encoder->port), port);
++		return -EINVAL;
++	}
++
+ 	if (!enable)
+ 		parm |= 4 << 8;
+ 
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+index 74a1ffd0d7ddb..dcb184d9d0b80 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+@@ -787,11 +787,9 @@ static void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
+ 	if (obj->mm.madv != I915_MADV_WILLNEED) {
+ 		bo->priority = I915_TTM_PRIO_PURGE;
+ 	} else if (!i915_gem_object_has_pages(obj)) {
+-		if (bo->priority < I915_TTM_PRIO_HAS_PAGES)
+-			bo->priority = I915_TTM_PRIO_HAS_PAGES;
++		bo->priority = I915_TTM_PRIO_NO_PAGES;
+ 	} else {
+-		if (bo->priority > I915_TTM_PRIO_NO_PAGES)
+-			bo->priority = I915_TTM_PRIO_NO_PAGES;
++		bo->priority = I915_TTM_PRIO_HAS_PAGES;
+ 	}
+ 
+ 	ttm_bo_move_to_lru_tail(bo, bo->resource, NULL);
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index d4ca29755e647..75c1522fdae8c 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -4843,7 +4843,7 @@ static bool check_mbus_joined(u8 active_pipes,
+ {
+ 	int i;
+ 
+-	for (i = 0; i < dbuf_slices[i].active_pipes; i++) {
++	for (i = 0; dbuf_slices[i].active_pipes != 0; i++) {
+ 		if (dbuf_slices[i].active_pipes == active_pipes)
+ 			return dbuf_slices[i].join_mbus;
+ 	}
+@@ -4860,7 +4860,7 @@ static u8 compute_dbuf_slices(enum pipe pipe, u8 active_pipes, bool join_mbus,
+ {
+ 	int i;
+ 
+-	for (i = 0; i < dbuf_slices[i].active_pipes; i++) {
++	for (i = 0; dbuf_slices[i].active_pipes != 0; i++) {
+ 		if (dbuf_slices[i].active_pipes == active_pipes &&
+ 		    dbuf_slices[i].join_mbus == join_mbus)
+ 			return dbuf_slices[i].dbuf_mask[pipe];
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index 5d90d2eb00193..bced4c7d668e3 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -786,18 +786,101 @@ void mtk_dsi_ddp_stop(struct device *dev)
+ 	mtk_dsi_poweroff(dsi);
+ }
+ 
++static int mtk_dsi_encoder_init(struct drm_device *drm, struct mtk_dsi *dsi)
++{
++	int ret;
++
++	ret = drm_simple_encoder_init(drm, &dsi->encoder,
++				      DRM_MODE_ENCODER_DSI);
++	if (ret) {
++		DRM_ERROR("Failed to encoder init to drm\n");
++		return ret;
++	}
++
++	dsi->encoder.possible_crtcs = mtk_drm_find_possible_crtc_by_comp(drm, dsi->host.dev);
++
++	ret = drm_bridge_attach(&dsi->encoder, &dsi->bridge, NULL,
++				DRM_BRIDGE_ATTACH_NO_CONNECTOR);
++	if (ret)
++		goto err_cleanup_encoder;
++
++	dsi->connector = drm_bridge_connector_init(drm, &dsi->encoder);
++	if (IS_ERR(dsi->connector)) {
++		DRM_ERROR("Unable to create bridge connector\n");
++		ret = PTR_ERR(dsi->connector);
++		goto err_cleanup_encoder;
++	}
++	drm_connector_attach_encoder(dsi->connector, &dsi->encoder);
++
++	return 0;
++
++err_cleanup_encoder:
++	drm_encoder_cleanup(&dsi->encoder);
++	return ret;
++}
++
++static int mtk_dsi_bind(struct device *dev, struct device *master, void *data)
++{
++	int ret;
++	struct drm_device *drm = data;
++	struct mtk_dsi *dsi = dev_get_drvdata(dev);
++
++	ret = mtk_dsi_encoder_init(drm, dsi);
++	if (ret)
++		return ret;
++
++	return device_reset_optional(dev);
++}
++
++static void mtk_dsi_unbind(struct device *dev, struct device *master,
++			   void *data)
++{
++	struct mtk_dsi *dsi = dev_get_drvdata(dev);
++
++	drm_encoder_cleanup(&dsi->encoder);
++}
++
++static const struct component_ops mtk_dsi_component_ops = {
++	.bind = mtk_dsi_bind,
++	.unbind = mtk_dsi_unbind,
++};
++
+ static int mtk_dsi_host_attach(struct mipi_dsi_host *host,
+ 			       struct mipi_dsi_device *device)
+ {
+ 	struct mtk_dsi *dsi = host_to_dsi(host);
++	struct device *dev = host->dev;
++	int ret;
+ 
+ 	dsi->lanes = device->lanes;
+ 	dsi->format = device->format;
+ 	dsi->mode_flags = device->mode_flags;
++	dsi->next_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 0, 0);
++	if (IS_ERR(dsi->next_bridge))
++		return PTR_ERR(dsi->next_bridge);
++
++	drm_bridge_add(&dsi->bridge);
++
++	ret = component_add(host->dev, &mtk_dsi_component_ops);
++	if (ret) {
++		DRM_ERROR("failed to add dsi_host component: %d\n", ret);
++		drm_bridge_remove(&dsi->bridge);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+ 
++static int mtk_dsi_host_detach(struct mipi_dsi_host *host,
++			       struct mipi_dsi_device *device)
++{
++	struct mtk_dsi *dsi = host_to_dsi(host);
++
++	component_del(host->dev, &mtk_dsi_component_ops);
++	drm_bridge_remove(&dsi->bridge);
++	return 0;
++}
++
+ static void mtk_dsi_wait_for_idle(struct mtk_dsi *dsi)
+ {
+ 	int ret;
+@@ -938,73 +1021,14 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ 
+ static const struct mipi_dsi_host_ops mtk_dsi_ops = {
+ 	.attach = mtk_dsi_host_attach,
++	.detach = mtk_dsi_host_detach,
+ 	.transfer = mtk_dsi_host_transfer,
+ };
+ 
+-static int mtk_dsi_encoder_init(struct drm_device *drm, struct mtk_dsi *dsi)
+-{
+-	int ret;
+-
+-	ret = drm_simple_encoder_init(drm, &dsi->encoder,
+-				      DRM_MODE_ENCODER_DSI);
+-	if (ret) {
+-		DRM_ERROR("Failed to encoder init to drm\n");
+-		return ret;
+-	}
+-
+-	dsi->encoder.possible_crtcs = mtk_drm_find_possible_crtc_by_comp(drm, dsi->host.dev);
+-
+-	ret = drm_bridge_attach(&dsi->encoder, &dsi->bridge, NULL,
+-				DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+-	if (ret)
+-		goto err_cleanup_encoder;
+-
+-	dsi->connector = drm_bridge_connector_init(drm, &dsi->encoder);
+-	if (IS_ERR(dsi->connector)) {
+-		DRM_ERROR("Unable to create bridge connector\n");
+-		ret = PTR_ERR(dsi->connector);
+-		goto err_cleanup_encoder;
+-	}
+-	drm_connector_attach_encoder(dsi->connector, &dsi->encoder);
+-
+-	return 0;
+-
+-err_cleanup_encoder:
+-	drm_encoder_cleanup(&dsi->encoder);
+-	return ret;
+-}
+-
+-static int mtk_dsi_bind(struct device *dev, struct device *master, void *data)
+-{
+-	int ret;
+-	struct drm_device *drm = data;
+-	struct mtk_dsi *dsi = dev_get_drvdata(dev);
+-
+-	ret = mtk_dsi_encoder_init(drm, dsi);
+-	if (ret)
+-		return ret;
+-
+-	return device_reset_optional(dev);
+-}
+-
+-static void mtk_dsi_unbind(struct device *dev, struct device *master,
+-			   void *data)
+-{
+-	struct mtk_dsi *dsi = dev_get_drvdata(dev);
+-
+-	drm_encoder_cleanup(&dsi->encoder);
+-}
+-
+-static const struct component_ops mtk_dsi_component_ops = {
+-	.bind = mtk_dsi_bind,
+-	.unbind = mtk_dsi_unbind,
+-};
+-
+ static int mtk_dsi_probe(struct platform_device *pdev)
+ {
+ 	struct mtk_dsi *dsi;
+ 	struct device *dev = &pdev->dev;
+-	struct drm_panel *panel;
+ 	struct resource *regs;
+ 	int irq_num;
+ 	int ret;
+@@ -1021,19 +1045,6 @@ static int mtk_dsi_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = drm_of_find_panel_or_bridge(dev->of_node, 0, 0,
+-					  &panel, &dsi->next_bridge);
+-	if (ret)
+-		goto err_unregister_host;
+-
+-	if (panel) {
+-		dsi->next_bridge = devm_drm_panel_bridge_add(dev, panel);
+-		if (IS_ERR(dsi->next_bridge)) {
+-			ret = PTR_ERR(dsi->next_bridge);
+-			goto err_unregister_host;
+-		}
+-	}
+-
+ 	dsi->driver_data = of_device_get_match_data(dev);
+ 
+ 	dsi->engine_clk = devm_clk_get(dev, "engine");
+@@ -1098,14 +1109,6 @@ static int mtk_dsi_probe(struct platform_device *pdev)
+ 	dsi->bridge.of_node = dev->of_node;
+ 	dsi->bridge.type = DRM_MODE_CONNECTOR_DSI;
+ 
+-	drm_bridge_add(&dsi->bridge);
+-
+-	ret = component_add(&pdev->dev, &mtk_dsi_component_ops);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to add component: %d\n", ret);
+-		goto err_unregister_host;
+-	}
+-
+ 	return 0;
+ 
+ err_unregister_host:
+@@ -1118,8 +1121,6 @@ static int mtk_dsi_remove(struct platform_device *pdev)
+ 	struct mtk_dsi *dsi = platform_get_drvdata(pdev);
+ 
+ 	mtk_output_dsi_disable(dsi);
+-	drm_bridge_remove(&dsi->bridge);
+-	component_del(&pdev->dev, &mtk_dsi_component_ops);
+ 	mipi_dsi_host_unregister(&dsi->host);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/base.c b/drivers/gpu/drm/nouveau/nvkm/falcon/base.c
+index 262641a014b06..c91130a6be2a1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/falcon/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/falcon/base.c
+@@ -117,8 +117,12 @@ nvkm_falcon_disable(struct nvkm_falcon *falcon)
+ int
+ nvkm_falcon_reset(struct nvkm_falcon *falcon)
+ {
+-	nvkm_falcon_disable(falcon);
+-	return nvkm_falcon_enable(falcon);
++	if (!falcon->func->reset) {
++		nvkm_falcon_disable(falcon);
++		return nvkm_falcon_enable(falcon);
++	}
++
++	return falcon->func->reset(falcon);
+ }
+ 
+ int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c
+index 5968c7696596c..40439e329aa9f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.c
+@@ -23,9 +23,38 @@
+  */
+ #include "priv.h"
+ 
++static int
++gm200_pmu_flcn_reset(struct nvkm_falcon *falcon)
++{
++	struct nvkm_pmu *pmu = container_of(falcon, typeof(*pmu), falcon);
++
++	nvkm_falcon_wr32(falcon, 0x014, 0x0000ffff);
++	pmu->func->reset(pmu);
++	return nvkm_falcon_enable(falcon);
++}
++
++const struct nvkm_falcon_func
++gm200_pmu_flcn = {
++	.debug = 0xc08,
++	.fbif = 0xe00,
++	.load_imem = nvkm_falcon_v1_load_imem,
++	.load_dmem = nvkm_falcon_v1_load_dmem,
++	.read_dmem = nvkm_falcon_v1_read_dmem,
++	.bind_context = nvkm_falcon_v1_bind_context,
++	.wait_for_halt = nvkm_falcon_v1_wait_for_halt,
++	.clear_interrupt = nvkm_falcon_v1_clear_interrupt,
++	.set_start_addr = nvkm_falcon_v1_set_start_addr,
++	.start = nvkm_falcon_v1_start,
++	.enable = nvkm_falcon_v1_enable,
++	.disable = nvkm_falcon_v1_disable,
++	.reset = gm200_pmu_flcn_reset,
++	.cmdq = { 0x4a0, 0x4b0, 4 },
++	.msgq = { 0x4c8, 0x4cc, 0 },
++};
++
+ static const struct nvkm_pmu_func
+ gm200_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gf100_pmu_enabled,
+ 	.reset = gf100_pmu_reset,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+index 148706977eec7..e1772211b0a4b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+@@ -211,7 +211,7 @@ gm20b_pmu_recv(struct nvkm_pmu *pmu)
+ 
+ static const struct nvkm_pmu_func
+ gm20b_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gf100_pmu_enabled,
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+index 00da1b873ce81..6bf7fc1bd1e3b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+@@ -39,7 +39,7 @@ gp102_pmu_enabled(struct nvkm_pmu *pmu)
+ 
+ static const struct nvkm_pmu_func
+ gp102_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gp102_pmu_enabled,
+ 	.reset = gp102_pmu_reset,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+index 461f722656e24..ba1583bb618b2 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+@@ -78,7 +78,7 @@ gp10b_pmu_acr = {
+ 
+ static const struct nvkm_pmu_func
+ gp10b_pmu = {
+-	.flcn = &gt215_pmu_flcn,
++	.flcn = &gm200_pmu_flcn,
+ 	.enabled = gf100_pmu_enabled,
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+index e7860d1773539..bcaade758ff72 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+@@ -44,6 +44,8 @@ void gf100_pmu_reset(struct nvkm_pmu *);
+ 
+ void gk110_pmu_pgob(struct nvkm_pmu *, bool);
+ 
++extern const struct nvkm_falcon_func gm200_pmu_flcn;
++
+ void gm20b_pmu_acr_bld_patch(struct nvkm_acr *, u32, s64);
+ void gm20b_pmu_acr_bld_write(struct nvkm_acr *, u32, struct nvkm_acr_lsfw *);
+ int gm20b_pmu_acr_boot(struct nvkm_falcon *);
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index 0fce73b9a6469..70bd84b7ef2b0 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -198,7 +198,8 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder,
+ 	 * so don't register a backlight device
+ 	 */
+ 	if ((rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE) &&
+-	    (rdev->pdev->device == 0x6741))
++	    (rdev->pdev->device == 0x6741) &&
++	    !dmi_match(DMI_PRODUCT_NAME, "iMac12,1"))
+ 		return;
+ 
+ 	if (!radeon_encoder->enc_priv)
+diff --git a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+index 830bdd5e9b7ce..8677c82716784 100644
+--- a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+@@ -529,13 +529,6 @@ static int dw_hdmi_rockchip_bind(struct device *dev, struct device *master,
+ 		return ret;
+ 	}
+ 
+-	ret = clk_prepare_enable(hdmi->vpll_clk);
+-	if (ret) {
+-		DRM_DEV_ERROR(hdmi->dev, "Failed to enable HDMI vpll: %d\n",
+-			      ret);
+-		return ret;
+-	}
+-
+ 	hdmi->phy = devm_phy_optional_get(dev, "hdmi");
+ 	if (IS_ERR(hdmi->phy)) {
+ 		ret = PTR_ERR(hdmi->phy);
+@@ -544,6 +537,13 @@ static int dw_hdmi_rockchip_bind(struct device *dev, struct device *master,
+ 		return ret;
+ 	}
+ 
++	ret = clk_prepare_enable(hdmi->vpll_clk);
++	if (ret) {
++		DRM_DEV_ERROR(hdmi->dev, "Failed to enable HDMI vpll: %d\n",
++			      ret);
++		return ret;
++	}
++
+ 	drm_encoder_helper_add(encoder, &dw_hdmi_rockchip_encoder_helper_funcs);
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+ 
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 2503be0253d3e..d3f32ffe299a8 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -37,11 +37,11 @@ static int amd_sfh_wait_response_v2(struct amd_mp2_dev *mp2, u8 sid, u32 sensor_
+ {
+ 	union cmd_response cmd_resp;
+ 
+-	/* Get response with status within a max of 800 ms timeout */
++	/* Get response with status within a max of 1600 ms timeout */
+ 	if (!readl_poll_timeout(mp2->mmio + AMD_P2C_MSG(0), cmd_resp.resp,
+ 				(cmd_resp.response_v2.response == sensor_sts &&
+ 				cmd_resp.response_v2.status == 0 && (sid == 0xff ||
+-				cmd_resp.response_v2.sensor_id == sid)), 500, 800000))
++				cmd_resp.response_v2.sensor_id == sid)), 500, 1600000))
+ 		return cmd_resp.response_v2.response;
+ 
+ 	return SENSOR_DISABLED;
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+index ae30e059f8475..8a9c544c27aef 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+@@ -49,7 +49,7 @@ union sfh_cmd_base {
+ 	} s;
+ 	struct {
+ 		u32 cmd_id : 4;
+-		u32 intr_enable : 1;
++		u32 intr_disable : 1;
+ 		u32 rsvd1 : 3;
+ 		u32 length : 7;
+ 		u32 mem_type : 1;
+diff --git a/drivers/hid/amd-sfh-hid/hid_descriptor/amd_sfh_hid_desc.c b/drivers/hid/amd-sfh-hid/hid_descriptor/amd_sfh_hid_desc.c
+index be41f83b0289c..76095bd53c655 100644
+--- a/drivers/hid/amd-sfh-hid/hid_descriptor/amd_sfh_hid_desc.c
++++ b/drivers/hid/amd-sfh-hid/hid_descriptor/amd_sfh_hid_desc.c
+@@ -27,6 +27,7 @@
+ #define HID_USAGE_SENSOR_STATE_READY_ENUM                             0x02
+ #define HID_USAGE_SENSOR_STATE_INITIALIZING_ENUM                      0x05
+ #define HID_USAGE_SENSOR_EVENT_DATA_UPDATED_ENUM                      0x04
++#define ILLUMINANCE_MASK					GENMASK(14, 0)
+ 
+ int get_report_descriptor(int sensor_idx, u8 *rep_desc)
+ {
+@@ -246,7 +247,8 @@ u8 get_input_report(u8 current_index, int sensor_idx, int report_id, struct amd_
+ 		get_common_inputs(&als_input.common_property, report_id);
+ 		/* For ALS ,V2 Platforms uses C2P_MSG5 register instead of DRAM access method */
+ 		if (supported_input == V2_STATUS)
+-			als_input.illuminance_value = (int)readl(privdata->mmio + AMD_C2P_MSG(5));
++			als_input.illuminance_value =
++				readl(privdata->mmio + AMD_C2P_MSG(5)) & ILLUMINANCE_MASK;
+ 		else
+ 			als_input.illuminance_value =
+ 				(int)sensor_virt_addr[0] / AMD_SFH_FW_MULTIPLIER;
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index a4ca5ed00e5f5..a050dbcfc60e0 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -580,49 +580,49 @@ static const struct hid_device_id apple_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5A_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5A_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5A_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING9_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING9_ISO),
+-		.driver_data = APPLE_HAS_FN },
++		.driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING9_JIS),
+ 		.driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ANSI),
+diff --git a/drivers/hid/hid-elo.c b/drivers/hid/hid-elo.c
+index 8e960d7b233b3..9b42b0cdeef06 100644
+--- a/drivers/hid/hid-elo.c
++++ b/drivers/hid/hid-elo.c
+@@ -262,6 +262,7 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	return 0;
+ err_free:
++	usb_put_dev(udev);
+ 	kfree(priv);
+ 	return ret;
+ }
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index a5a5a64c7abc4..fd224222d95f2 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -1365,6 +1365,7 @@
+ #define USB_VENDOR_ID_UGTIZER			0x2179
+ #define USB_DEVICE_ID_UGTIZER_TABLET_GP0610	0x0053
+ #define USB_DEVICE_ID_UGTIZER_TABLET_GT5040	0x0077
++#define USB_DEVICE_ID_UGTIZER_TABLET_WP5540	0x0004
+ 
+ #define USB_VENDOR_ID_VIEWSONIC			0x0543
+ #define USB_DEVICE_ID_VIEWSONIC_PD1011		0xe621
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index ee7e504e7279f..451baa9b0fbe2 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -187,6 +187,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_KEYBOARD), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_KNA5), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_TWA60), HID_QUIRK_MULTI_INPUT },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_UGTIZER, USB_DEVICE_ID_UGTIZER_TABLET_WP5540), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_10_6_INCH), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_14_1_INCH), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_SIRIUS_BATTERY_FREE_TABLET), HID_QUIRK_MULTI_INPUT },
+diff --git a/drivers/hid/i2c-hid/i2c-hid-of-goodix.c b/drivers/hid/i2c-hid/i2c-hid-of-goodix.c
+index b4dad66fa954d..ec6c73f75ffe0 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-of-goodix.c
++++ b/drivers/hid/i2c-hid/i2c-hid-of-goodix.c
+@@ -27,7 +27,6 @@ struct i2c_hid_of_goodix {
+ 
+ 	struct regulator *vdd;
+ 	struct notifier_block nb;
+-	struct mutex regulator_mutex;
+ 	struct gpio_desc *reset_gpio;
+ 	const struct goodix_i2c_hid_timing_data *timings;
+ };
+@@ -67,8 +66,6 @@ static int ihid_goodix_vdd_notify(struct notifier_block *nb,
+ 		container_of(nb, struct i2c_hid_of_goodix, nb);
+ 	int ret = NOTIFY_OK;
+ 
+-	mutex_lock(&ihid_goodix->regulator_mutex);
+-
+ 	switch (event) {
+ 	case REGULATOR_EVENT_PRE_DISABLE:
+ 		gpiod_set_value_cansleep(ihid_goodix->reset_gpio, 1);
+@@ -87,8 +84,6 @@ static int ihid_goodix_vdd_notify(struct notifier_block *nb,
+ 		break;
+ 	}
+ 
+-	mutex_unlock(&ihid_goodix->regulator_mutex);
+-
+ 	return ret;
+ }
+ 
+@@ -102,8 +97,6 @@ static int i2c_hid_of_goodix_probe(struct i2c_client *client,
+ 	if (!ihid_goodix)
+ 		return -ENOMEM;
+ 
+-	mutex_init(&ihid_goodix->regulator_mutex);
+-
+ 	ihid_goodix->ops.power_up = goodix_i2c_hid_power_up;
+ 	ihid_goodix->ops.power_down = goodix_i2c_hid_power_down;
+ 
+@@ -130,25 +123,28 @@ static int i2c_hid_of_goodix_probe(struct i2c_client *client,
+ 	 *   long. Holding the controller in reset apparently draws extra
+ 	 *   power.
+ 	 */
+-	mutex_lock(&ihid_goodix->regulator_mutex);
+ 	ihid_goodix->nb.notifier_call = ihid_goodix_vdd_notify;
+ 	ret = devm_regulator_register_notifier(ihid_goodix->vdd, &ihid_goodix->nb);
+-	if (ret) {
+-		mutex_unlock(&ihid_goodix->regulator_mutex);
++	if (ret)
+ 		return dev_err_probe(&client->dev, ret,
+ 			"regulator notifier request failed\n");
+-	}
+ 
+ 	/*
+ 	 * If someone else is holding the regulator on (or the regulator is
+ 	 * an always-on one) we might never be told to deassert reset. Do it
+-	 * now. Here we'll assume that someone else might have _just
+-	 * barely_ turned the regulator on so we'll do the full
+-	 * "post_power_delay" just in case.
++	 * now... and temporarily bump the regulator reference count just to
++	 * make sure it is impossible for this to race with our own notifier!
++	 * We also assume that someone else might have _just barely_ turned
++	 * the regulator on so we'll do the full "post_power_delay" just in
++	 * case.
+ 	 */
+-	if (ihid_goodix->reset_gpio && regulator_is_enabled(ihid_goodix->vdd))
++	if (ihid_goodix->reset_gpio && regulator_is_enabled(ihid_goodix->vdd)) {
++		ret = regulator_enable(ihid_goodix->vdd);
++		if (ret)
++			return ret;
+ 		goodix_i2c_hid_deassert_reset(ihid_goodix, true);
+-	mutex_unlock(&ihid_goodix->regulator_mutex);
++		regulator_disable(ihid_goodix->vdd);
++	}
+ 
+ 	return i2c_hid_core_probe(client, &ihid_goodix->ops, 0x0001, 0);
+ }
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 392c1ac4f8193..44bd0b6ff5059 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2027,8 +2027,10 @@ int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
+ 	kobj->kset = dev->channels_kset;
+ 	ret = kobject_init_and_add(kobj, &vmbus_chan_ktype, NULL,
+ 				   "%u", relid);
+-	if (ret)
++	if (ret) {
++		kobject_put(kobj);
+ 		return ret;
++	}
+ 
+ 	ret = sysfs_create_group(kobj, &vmbus_chan_group);
+ 
+@@ -2037,6 +2039,7 @@ int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
+ 		 * The calling functions' error handling paths will cleanup the
+ 		 * empty channel directory.
+ 		 */
++		kobject_put(kobj);
+ 		dev_err(device, "Unable to set up channel sysfs files\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/i2c/busses/i2c-brcmstb.c b/drivers/i2c/busses/i2c-brcmstb.c
+index 490ee3962645d..b00f35c0b0662 100644
+--- a/drivers/i2c/busses/i2c-brcmstb.c
++++ b/drivers/i2c/busses/i2c-brcmstb.c
+@@ -673,7 +673,7 @@ static int brcmstb_i2c_probe(struct platform_device *pdev)
+ 
+ 	/* set the data in/out register size for compatible SoCs */
+ 	if (of_device_is_compatible(dev->device->of_node,
+-				    "brcmstb,brcmper-i2c"))
++				    "brcm,brcmper-i2c"))
+ 		dev->data_regsz = sizeof(u8);
+ 	else
+ 		dev->data_regsz = sizeof(u32);
+diff --git a/drivers/i2c/busses/i2c-qcom-cci.c b/drivers/i2c/busses/i2c-qcom-cci.c
+index c1de8eb66169f..cf54f1cb4c57a 100644
+--- a/drivers/i2c/busses/i2c-qcom-cci.c
++++ b/drivers/i2c/busses/i2c-qcom-cci.c
+@@ -558,7 +558,7 @@ static int cci_probe(struct platform_device *pdev)
+ 		cci->master[idx].adap.quirks = &cci->data->quirks;
+ 		cci->master[idx].adap.algo = &cci_algo;
+ 		cci->master[idx].adap.dev.parent = dev;
+-		cci->master[idx].adap.dev.of_node = child;
++		cci->master[idx].adap.dev.of_node = of_node_get(child);
+ 		cci->master[idx].master = idx;
+ 		cci->master[idx].cci = cci;
+ 
+@@ -643,8 +643,10 @@ static int cci_probe(struct platform_device *pdev)
+ 			continue;
+ 
+ 		ret = i2c_add_adapter(&cci->master[i].adap);
+-		if (ret < 0)
++		if (ret < 0) {
++			of_node_put(cci->master[i].adap.dev.of_node);
+ 			goto error_i2c;
++		}
+ 	}
+ 
+ 	pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+@@ -655,9 +657,11 @@ static int cci_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ error_i2c:
+-	for (; i >= 0; i--) {
+-		if (cci->master[i].cci)
++	for (--i ; i >= 0; i--) {
++		if (cci->master[i].cci) {
+ 			i2c_del_adapter(&cci->master[i].adap);
++			of_node_put(cci->master[i].adap.dev.of_node);
++		}
+ 	}
+ error:
+ 	disable_irq(cci->irq);
+@@ -673,8 +677,10 @@ static int cci_remove(struct platform_device *pdev)
+ 	int i;
+ 
+ 	for (i = 0; i < cci->data->num_masters; i++) {
+-		if (cci->master[i].cci)
++		if (cci->master[i].cci) {
+ 			i2c_del_adapter(&cci->master[i].adap);
++			of_node_put(cci->master[i].adap.dev.of_node);
++		}
+ 		cci_halt(cci, i);
+ 	}
+ 
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index 259065d271ef0..09cc98266d30f 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -398,3 +398,4 @@ out_free_priv:
+ 
+ IRQCHIP_DECLARE(sifive_plic, "sifive,plic-1.0.0", plic_init);
+ IRQCHIP_DECLARE(riscv_plic0, "riscv,plic0", plic_init); /* for legacy systems */
++IRQCHIP_DECLARE(thead_c900_plic, "thead,c900-plic", plic_init); /* for firmware driver */
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index bd814fa0b7216..356a0183e1ad1 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2140,7 +2140,7 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
+ 	set_bit(DMF_FREEING, &md->flags);
+ 	spin_unlock(&_minor_lock);
+ 
+-	blk_set_queue_dying(md->queue);
++	blk_mark_disk_dead(md->disk);
+ 
+ 	/*
+ 	 * Take suspend_lock so that presuspend and postsuspend methods
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 90e1bcd03b46c..92713cd440651 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1682,31 +1682,31 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
+ 	struct mmc_card *card = mq->card;
+ 	struct mmc_host *host = card->host;
+ 	blk_status_t error = BLK_STS_OK;
+-	int retries = 0;
+ 
+ 	do {
+ 		u32 status;
+ 		int err;
++		int retries = 0;
+ 
+-		mmc_blk_rw_rq_prep(mqrq, card, 1, mq);
++		while (retries++ <= MMC_READ_SINGLE_RETRIES) {
++			mmc_blk_rw_rq_prep(mqrq, card, 1, mq);
+ 
+-		mmc_wait_for_req(host, mrq);
++			mmc_wait_for_req(host, mrq);
+ 
+-		err = mmc_send_status(card, &status);
+-		if (err)
+-			goto error_exit;
+-
+-		if (!mmc_host_is_spi(host) &&
+-		    !mmc_ready_for_data(status)) {
+-			err = mmc_blk_fix_state(card, req);
++			err = mmc_send_status(card, &status);
+ 			if (err)
+ 				goto error_exit;
+-		}
+ 
+-		if (mrq->cmd->error && retries++ < MMC_READ_SINGLE_RETRIES)
+-			continue;
++			if (!mmc_host_is_spi(host) &&
++			    !mmc_ready_for_data(status)) {
++				err = mmc_blk_fix_state(card, req);
++				if (err)
++					goto error_exit;
++			}
+ 
+-		retries = 0;
++			if (!mrq->cmd->error)
++				break;
++		}
+ 
+ 		if (mrq->cmd->error ||
+ 		    mrq->data->error ||
+diff --git a/drivers/mtd/devices/phram.c b/drivers/mtd/devices/phram.c
+index 6ed6c51fac69e..d503821a3e606 100644
+--- a/drivers/mtd/devices/phram.c
++++ b/drivers/mtd/devices/phram.c
+@@ -264,16 +264,20 @@ static int phram_setup(const char *val)
+ 		}
+ 	}
+ 
+-	if (erasesize)
+-		div_u64_rem(len, (uint32_t)erasesize, &rem);
+-
+ 	if (len == 0 || erasesize == 0 || erasesize > len
+-	    || erasesize > UINT_MAX || rem) {
++	    || erasesize > UINT_MAX) {
+ 		parse_err("illegal erasesize or len\n");
+ 		ret = -EINVAL;
+ 		goto error;
+ 	}
+ 
++	div_u64_rem(len, (uint32_t)erasesize, &rem);
++	if (rem) {
++		parse_err("len is not multiple of erasesize\n");
++		ret = -EINVAL;
++		goto error;
++	}
++
+ 	ret = register_device(name, start, len, (uint32_t)erasesize);
+ 	if (ret)
+ 		goto error;
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index f75929783b941..aee78f5f4f156 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2106,7 +2106,7 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
+ 					mtd->oobsize / trans,
+ 					host->hwcfg.sector_size_1k);
+ 
+-		if (!ret) {
++		if (ret != -EBADMSG) {
+ 			*err_addr = brcmnand_get_uncorrecc_addr(ctrl);
+ 
+ 			if (*err_addr)
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 65bcd1c548d2e..5eb20dfe4186e 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -2291,7 +2291,7 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
+ 		this->hw.must_apply_timings = false;
+ 		ret = gpmi_nfc_apply_timings(this);
+ 		if (ret)
+-			return ret;
++			goto out_pm;
+ 	}
+ 
+ 	dev_dbg(this->dev, "%s: %d instructions\n", __func__, op->ninstrs);
+@@ -2420,6 +2420,7 @@ unmap:
+ 
+ 	this->bch = false;
+ 
++out_pm:
+ 	pm_runtime_mark_last_busy(this->dev);
+ 	pm_runtime_put_autosuspend(this->dev);
+ 
+diff --git a/drivers/mtd/nand/raw/ingenic/ingenic_ecc.c b/drivers/mtd/nand/raw/ingenic/ingenic_ecc.c
+index efe0ffe4f1abc..9054559e52dda 100644
+--- a/drivers/mtd/nand/raw/ingenic/ingenic_ecc.c
++++ b/drivers/mtd/nand/raw/ingenic/ingenic_ecc.c
+@@ -68,9 +68,14 @@ static struct ingenic_ecc *ingenic_ecc_get(struct device_node *np)
+ 	struct ingenic_ecc *ecc;
+ 
+ 	pdev = of_find_device_by_node(np);
+-	if (!pdev || !platform_get_drvdata(pdev))
++	if (!pdev)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 
++	if (!platform_get_drvdata(pdev)) {
++		put_device(&pdev->dev);
++		return ERR_PTR(-EPROBE_DEFER);
++	}
++
+ 	ecc = platform_get_drvdata(pdev);
+ 	clk_prepare_enable(ecc->clk);
+ 
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 04e6f7b267064..0f41a9a421575 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -2,7 +2,6 @@
+ /*
+  * Copyright (c) 2016, The Linux Foundation. All rights reserved.
+  */
+-
+ #include <linux/clk.h>
+ #include <linux/slab.h>
+ #include <linux/bitops.h>
+@@ -3063,10 +3062,6 @@ static int qcom_nandc_probe(struct platform_device *pdev)
+ 	if (dma_mapping_error(dev, nandc->base_dma))
+ 		return -ENXIO;
+ 
+-	ret = qcom_nandc_alloc(nandc);
+-	if (ret)
+-		goto err_nandc_alloc;
+-
+ 	ret = clk_prepare_enable(nandc->core_clk);
+ 	if (ret)
+ 		goto err_core_clk;
+@@ -3075,6 +3070,10 @@ static int qcom_nandc_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_aon_clk;
+ 
++	ret = qcom_nandc_alloc(nandc);
++	if (ret)
++		goto err_nandc_alloc;
++
+ 	ret = qcom_nandc_setup(nandc);
+ 	if (ret)
+ 		goto err_setup;
+@@ -3086,15 +3085,14 @@ static int qcom_nandc_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_setup:
++	qcom_nandc_unalloc(nandc);
++err_nandc_alloc:
+ 	clk_disable_unprepare(nandc->aon_clk);
+ err_aon_clk:
+ 	clk_disable_unprepare(nandc->core_clk);
+ err_core_clk:
+-	qcom_nandc_unalloc(nandc);
+-err_nandc_alloc:
+ 	dma_unmap_resource(dev, res->start, resource_size(res),
+ 			   DMA_BIDIRECTIONAL, 0);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mtd/parsers/qcomsmempart.c b/drivers/mtd/parsers/qcomsmempart.c
+index 06a818cd2433f..32ddfea701423 100644
+--- a/drivers/mtd/parsers/qcomsmempart.c
++++ b/drivers/mtd/parsers/qcomsmempart.c
+@@ -58,11 +58,11 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
+ 			       const struct mtd_partition **pparts,
+ 			       struct mtd_part_parser_data *data)
+ {
++	size_t len = SMEM_FLASH_PTABLE_HDR_LEN;
++	int ret, i, j, tmpparts, numparts = 0;
+ 	struct smem_flash_pentry *pentry;
+ 	struct smem_flash_ptable *ptable;
+-	size_t len = SMEM_FLASH_PTABLE_HDR_LEN;
+ 	struct mtd_partition *parts;
+-	int ret, i, numparts;
+ 	char *name, *c;
+ 
+ 	if (IS_ENABLED(CONFIG_MTD_SPI_NOR_USE_4K_SECTORS)
+@@ -87,8 +87,8 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
+ 	}
+ 
+ 	/* Ensure that # of partitions is less than the max we have allocated */
+-	numparts = le32_to_cpu(ptable->numparts);
+-	if (numparts > SMEM_FLASH_PTABLE_MAX_PARTS_V4) {
++	tmpparts = le32_to_cpu(ptable->numparts);
++	if (tmpparts > SMEM_FLASH_PTABLE_MAX_PARTS_V4) {
+ 		pr_err("Partition numbers exceed the max limit\n");
+ 		return -EINVAL;
+ 	}
+@@ -116,11 +116,17 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
+ 		return PTR_ERR(ptable);
+ 	}
+ 
++	for (i = 0; i < tmpparts; i++) {
++		pentry = &ptable->pentry[i];
++		if (pentry->name[0] != '\0')
++			numparts++;
++	}
++
+ 	parts = kcalloc(numparts, sizeof(*parts), GFP_KERNEL);
+ 	if (!parts)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < numparts; i++) {
++	for (i = 0, j = 0; i < tmpparts; i++) {
+ 		pentry = &ptable->pentry[i];
+ 		if (pentry->name[0] == '\0')
+ 			continue;
+@@ -135,24 +141,25 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
+ 		for (c = name; *c != '\0'; c++)
+ 			*c = tolower(*c);
+ 
+-		parts[i].name = name;
+-		parts[i].offset = le32_to_cpu(pentry->offset) * mtd->erasesize;
+-		parts[i].mask_flags = pentry->attr;
+-		parts[i].size = le32_to_cpu(pentry->length) * mtd->erasesize;
++		parts[j].name = name;
++		parts[j].offset = le32_to_cpu(pentry->offset) * mtd->erasesize;
++		parts[j].mask_flags = pentry->attr;
++		parts[j].size = le32_to_cpu(pentry->length) * mtd->erasesize;
+ 		pr_debug("%d: %s offs=0x%08x size=0x%08x attr:0x%08x\n",
+ 			 i, pentry->name, le32_to_cpu(pentry->offset),
+ 			 le32_to_cpu(pentry->length), pentry->attr);
++		j++;
+ 	}
+ 
+ 	pr_debug("SMEM partition table found: ver: %d len: %d\n",
+-		 le32_to_cpu(ptable->version), numparts);
++		 le32_to_cpu(ptable->version), tmpparts);
+ 	*pparts = parts;
+ 
+ 	return numparts;
+ 
+ out_free_parts:
+-	while (--i >= 0)
+-		kfree(parts[i].name);
++	while (--j >= 0)
++		kfree(parts[j].name);
+ 	kfree(parts);
+ 	*pparts = NULL;
+ 
+@@ -166,6 +173,8 @@ static void parse_qcomsmem_cleanup(const struct mtd_partition *pparts,
+ 
+ 	for (i = 0; i < nr_parts; i++)
+ 		kfree(pparts[i].name);
++
++	kfree(pparts);
+ }
+ 
+ static const struct of_device_id qcomsmem_of_match_table[] = {
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index 9fd1d6cba3cda..a86b1f71762ea 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -225,7 +225,7 @@ static inline int __check_agg_selection_timer(struct port *port)
+ 	if (bond == NULL)
+ 		return 0;
+ 
+-	return BOND_AD_INFO(bond).agg_select_timer ? 1 : 0;
++	return atomic_read(&BOND_AD_INFO(bond).agg_select_timer) ? 1 : 0;
+ }
+ 
+ /**
+@@ -1995,7 +1995,7 @@ static void ad_marker_response_received(struct bond_marker *marker,
+  */
+ void bond_3ad_initiate_agg_selection(struct bonding *bond, int timeout)
+ {
+-	BOND_AD_INFO(bond).agg_select_timer = timeout;
++	atomic_set(&BOND_AD_INFO(bond).agg_select_timer, timeout);
+ }
+ 
+ /**
+@@ -2278,6 +2278,28 @@ void bond_3ad_update_ad_actor_settings(struct bonding *bond)
+ 	spin_unlock_bh(&bond->mode_lock);
+ }
+ 
++/**
++ * bond_agg_timer_advance - advance agg_select_timer
++ * @bond:  bonding structure
++ *
++ * Return true when agg_select_timer reaches 0.
++ */
++static bool bond_agg_timer_advance(struct bonding *bond)
++{
++	int val, nval;
++
++	while (1) {
++		val = atomic_read(&BOND_AD_INFO(bond).agg_select_timer);
++		if (!val)
++			return false;
++		nval = val - 1;
++		if (atomic_cmpxchg(&BOND_AD_INFO(bond).agg_select_timer,
++				   val, nval) == val)
++			break;
++	}
++	return nval == 0;
++}
++
+ /**
+  * bond_3ad_state_machine_handler - handle state machines timeout
+  * @work: work context to fetch bonding struct to work on from
+@@ -2313,9 +2335,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
+ 	if (!bond_has_slaves(bond))
+ 		goto re_arm;
+ 
+-	/* check if agg_select_timer timer after initialize is timed out */
+-	if (BOND_AD_INFO(bond).agg_select_timer &&
+-	    !(--BOND_AD_INFO(bond).agg_select_timer)) {
++	if (bond_agg_timer_advance(bond)) {
+ 		slave = bond_first_slave_rcu(bond);
+ 		port = slave ? &(SLAVE_AD_INFO(slave)->port) : NULL;
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 1db5c7a172a71..92035571490bd 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2377,10 +2377,9 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 		bond_select_active_slave(bond);
+ 	}
+ 
+-	if (!bond_has_slaves(bond)) {
+-		bond_set_carrier(bond);
++	bond_set_carrier(bond);
++	if (!bond_has_slaves(bond))
+ 		eth_hw_addr_random(bond_dev);
+-	}
+ 
+ 	unblock_netpoll_tx();
+ 	synchronize_rcu();
+diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
+index c0c91440340ae..0029d279616fd 100644
+--- a/drivers/net/dsa/Kconfig
++++ b/drivers/net/dsa/Kconfig
+@@ -82,6 +82,7 @@ config NET_DSA_REALTEK_SMI
+ 
+ config NET_DSA_SMSC_LAN9303
+ 	tristate
++	depends on VLAN_8021Q || VLAN_8021Q=n
+ 	select NET_DSA_TAG_LAN9303
+ 	select REGMAP
+ 	help
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index 89f920289ae21..0b6f29ee87b56 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -10,6 +10,7 @@
+ #include <linux/mii.h>
+ #include <linux/phy.h>
+ #include <linux/if_bridge.h>
++#include <linux/if_vlan.h>
+ #include <linux/etherdevice.h>
+ 
+ #include "lan9303.h"
+@@ -1083,21 +1084,27 @@ static void lan9303_adjust_link(struct dsa_switch *ds, int port,
+ static int lan9303_port_enable(struct dsa_switch *ds, int port,
+ 			       struct phy_device *phy)
+ {
++	struct dsa_port *dp = dsa_to_port(ds, port);
+ 	struct lan9303 *chip = ds->priv;
+ 
+-	if (!dsa_is_user_port(ds, port))
++	if (!dsa_port_is_user(dp))
+ 		return 0;
+ 
++	vlan_vid_add(dp->cpu_dp->master, htons(ETH_P_8021Q), port);
++
+ 	return lan9303_enable_processing_port(chip, port);
+ }
+ 
+ static void lan9303_port_disable(struct dsa_switch *ds, int port)
+ {
++	struct dsa_port *dp = dsa_to_port(ds, port);
+ 	struct lan9303 *chip = ds->priv;
+ 
+-	if (!dsa_is_user_port(ds, port))
++	if (!dsa_port_is_user(dp))
+ 		return;
+ 
++	vlan_vid_del(dp->cpu_dp->master, htons(ETH_P_8021Q), port);
++
+ 	lan9303_disable_processing_port(chip, port);
+ 	lan9303_phy_write(ds, chip->phy_addr_base + port, MII_BMCR, BMCR_PDOWN);
+ }
+@@ -1309,7 +1316,7 @@ static int lan9303_probe_reset_gpio(struct lan9303 *chip,
+ 				     struct device_node *np)
+ {
+ 	chip->reset_gpio = devm_gpiod_get_optional(chip->dev, "reset",
+-						   GPIOD_OUT_LOW);
++						   GPIOD_OUT_HIGH);
+ 	if (IS_ERR(chip->reset_gpio))
+ 		return PTR_ERR(chip->reset_gpio);
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 0909b05d02133..ae91edcbfa8f6 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -2217,8 +2217,8 @@ static int gswip_remove(struct platform_device *pdev)
+ 
+ 	if (priv->ds->slave_mii_bus) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
+-		mdiobus_free(priv->ds->slave_mii_bus);
+ 		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
++		mdiobus_free(priv->ds->slave_mii_bus);
+ 	}
+ 
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 70cea1b95298a..ec8b02f5459d2 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2290,6 +2290,13 @@ static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
+ 	if (!mv88e6xxx_max_vid(chip))
+ 		return -EOPNOTSUPP;
+ 
++	/* The ATU removal procedure needs the FID to be mapped in the VTU,
++	 * but FDB deletion runs concurrently with VLAN deletion. Flush the DSA
++	 * switchdev workqueue to ensure that all FDB entries are deleted
++	 * before we remove the VLAN.
++	 */
++	dsa_flush_workqueue();
++
+ 	mv88e6xxx_reg_lock(chip);
+ 
+ 	err = mv88e6xxx_port_get_pvid(chip, port, &pvid);
+diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+index da595242bc13a..f50604f3e541e 100644
+--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
++++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+@@ -900,7 +900,7 @@ static void atl1c_clean_tx_ring(struct atl1c_adapter *adapter,
+ 		atl1c_clean_buffer(pdev, buffer_info);
+ 	}
+ 
+-	netdev_reset_queue(adapter->netdev);
++	netdev_tx_reset_queue(netdev_get_tx_queue(adapter->netdev, queue));
+ 
+ 	/* Zero out Tx-buffers */
+ 	memset(tpd_ring->desc, 0, sizeof(struct atl1c_tpd_desc) *
+diff --git a/drivers/net/ethernet/broadcom/bgmac-platform.c b/drivers/net/ethernet/broadcom/bgmac-platform.c
+index c6412c523637b..b4381cd419792 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-platform.c
++++ b/drivers/net/ethernet/broadcom/bgmac-platform.c
+@@ -172,6 +172,7 @@ static int bgmac_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct bgmac *bgmac;
++	struct resource *regs;
+ 	int ret;
+ 
+ 	bgmac = bgmac_alloc(&pdev->dev);
+@@ -208,15 +209,23 @@ static int bgmac_probe(struct platform_device *pdev)
+ 	if (IS_ERR(bgmac->plat.base))
+ 		return PTR_ERR(bgmac->plat.base);
+ 
+-	bgmac->plat.idm_base = devm_platform_ioremap_resource_byname(pdev, "idm_base");
+-	if (IS_ERR(bgmac->plat.idm_base))
+-		return PTR_ERR(bgmac->plat.idm_base);
+-	else
++	/* The idm_base resource is optional for some platforms */
++	regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "idm_base");
++	if (regs) {
++		bgmac->plat.idm_base = devm_ioremap_resource(&pdev->dev, regs);
++		if (IS_ERR(bgmac->plat.idm_base))
++			return PTR_ERR(bgmac->plat.idm_base);
+ 		bgmac->feature_flags &= ~BGMAC_FEAT_IDM_MASK;
++	}
+ 
+-	bgmac->plat.nicpm_base = devm_platform_ioremap_resource_byname(pdev, "nicpm_base");
+-	if (IS_ERR(bgmac->plat.nicpm_base))
+-		return PTR_ERR(bgmac->plat.nicpm_base);
++	/* The nicpm_base resource is optional for some platforms */
++	regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nicpm_base");
++	if (regs) {
++		bgmac->plat.nicpm_base = devm_ioremap_resource(&pdev->dev,
++							       regs);
++		if (IS_ERR(bgmac->plat.nicpm_base))
++			return PTR_ERR(bgmac->plat.nicpm_base);
++	}
+ 
+ 	bgmac->read = platform_bgmac_read;
+ 	bgmac->write = platform_bgmac_write;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index ffce528aa00e4..aac1b27bfc7bf 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -4749,7 +4749,7 @@ static int macb_probe(struct platform_device *pdev)
+ 
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 	if (GEM_BFEXT(DAW64, gem_readl(bp, DCFG6))) {
+-		dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++		dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
+ 		bp->hw_dma_cap |= HW_DMA_CAP_64B;
+ 	}
+ #endif
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 70c8dd6cf3508..118933efb1587 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4338,7 +4338,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ 	}
+ 
+ 	INIT_WORK(&priv->tx_onestep_tstamp, dpaa2_eth_tx_onestep_tstamp);
+-
++	mutex_init(&priv->onestep_tstamp_lock);
+ 	skb_queue_head_init(&priv->tx_skbs);
+ 
+ 	priv->rx_copybreak = DPAA2_ETH_DEFAULT_COPYBREAK;
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch-flower.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch-flower.c
+index d6eefbbf163fa..cacd454ac696c 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch-flower.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch-flower.c
+@@ -532,6 +532,7 @@ static int dpaa2_switch_flower_parse_mirror_key(struct flow_cls_offload *cls,
+ 	struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
+ 	struct flow_dissector *dissector = rule->match.dissector;
+ 	struct netlink_ext_ack *extack = cls->common.extack;
++	int ret = -EOPNOTSUPP;
+ 
+ 	if (dissector->used_keys &
+ 	    ~(BIT(FLOW_DISSECTOR_KEY_BASIC) |
+@@ -561,9 +562,10 @@ static int dpaa2_switch_flower_parse_mirror_key(struct flow_cls_offload *cls,
+ 		}
+ 
+ 		*vlan = (u16)match.key->vlan_id;
++		ret = 0;
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 09a3297cd63cd..edba96845baf7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1641,6 +1641,12 @@ static void ice_vsi_set_rss_flow_fld(struct ice_vsi *vsi)
+ 	if (status)
+ 		dev_dbg(dev, "ice_add_rss_cfg failed for sctp6 flow, vsi = %d, error = %s\n",
+ 			vsi_num, ice_stat_str(status));
++
++	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_ESP_SPI,
++				 ICE_FLOW_SEG_HDR_ESP);
++	if (status)
++		dev_dbg(dev, "ice_add_rss_cfg failed for esp/spi flow, vsi = %d, error = %d\n",
++			vsi_num, status);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+index dc7e5ea6ec158..148d431fcde42 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+@@ -145,9 +145,9 @@ static void sparx5_xtr_grp(struct sparx5 *sparx5, u8 grp, bool byte_swap)
+ 	skb_put(skb, byte_cnt - ETH_FCS_LEN);
+ 	eth_skb_pad(skb);
+ 	skb->protocol = eth_type_trans(skb, netdev);
+-	netif_rx(skb);
+ 	netdev->stats.rx_bytes += skb->len;
+ 	netdev->stats.rx_packets++;
++	netif_rx(skb);
+ }
+ 
+ static int sparx5_inject(struct sparx5 *sparx5,
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 02edd383dea22..886daa29bfb02 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -480,14 +480,18 @@ EXPORT_SYMBOL(ocelot_vlan_add);
+ int ocelot_vlan_del(struct ocelot *ocelot, int port, u16 vid)
+ {
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
++	bool del_pvid = false;
+ 	int err;
+ 
++	if (ocelot_port->pvid_vlan && ocelot_port->pvid_vlan->vid == vid)
++		del_pvid = true;
++
+ 	err = ocelot_vlan_member_del(ocelot, port, vid);
+ 	if (err)
+ 		return err;
+ 
+ 	/* Ingress */
+-	if (ocelot_port->pvid_vlan && ocelot_port->pvid_vlan->vid == vid)
++	if (del_pvid)
+ 		ocelot_port_set_pvid(ocelot, port, NULL);
+ 
+ 	/* Egress */
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+index 784292b162907..1543e47456d57 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
++++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+@@ -723,6 +723,8 @@ static inline bool nfp_fl_is_netdev_to_offload(struct net_device *netdev)
+ 		return true;
+ 	if (netif_is_gretap(netdev))
+ 		return true;
++	if (netif_is_ip6gretap(netdev))
++		return true;
+ 
+ 	return false;
+ }
+diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c
+index 7d67f41387f55..4f5ef8a9a9a87 100644
+--- a/drivers/net/ieee802154/at86rf230.c
++++ b/drivers/net/ieee802154/at86rf230.c
+@@ -100,6 +100,7 @@ struct at86rf230_local {
+ 	unsigned long cal_timeout;
+ 	bool is_tx;
+ 	bool is_tx_from_off;
++	bool was_tx;
+ 	u8 tx_retry;
+ 	struct sk_buff *tx_skb;
+ 	struct at86rf230_state_change tx;
+@@ -343,7 +344,11 @@ at86rf230_async_error_recover_complete(void *context)
+ 	if (ctx->free)
+ 		kfree(ctx);
+ 
+-	ieee802154_wake_queue(lp->hw);
++	if (lp->was_tx) {
++		lp->was_tx = 0;
++		dev_kfree_skb_any(lp->tx_skb);
++		ieee802154_wake_queue(lp->hw);
++	}
+ }
+ 
+ static void
+@@ -352,7 +357,11 @@ at86rf230_async_error_recover(void *context)
+ 	struct at86rf230_state_change *ctx = context;
+ 	struct at86rf230_local *lp = ctx->lp;
+ 
+-	lp->is_tx = 0;
++	if (lp->is_tx) {
++		lp->was_tx = 1;
++		lp->is_tx = 0;
++	}
++
+ 	at86rf230_async_state_change(lp, ctx, STATE_RX_AACK_ON,
+ 				     at86rf230_async_error_recover_complete);
+ }
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index f3438d3e104ac..2bc730fd260eb 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -2975,8 +2975,8 @@ static void ca8210_hw_setup(struct ieee802154_hw *ca8210_hw)
+ 	ca8210_hw->phy->cca.opt = NL802154_CCA_OPT_ENERGY_CARRIER_AND;
+ 	ca8210_hw->phy->cca_ed_level = -9800;
+ 	ca8210_hw->phy->symbol_duration = 16;
+-	ca8210_hw->phy->lifs_period = 40;
+-	ca8210_hw->phy->sifs_period = 12;
++	ca8210_hw->phy->lifs_period = 40 * ca8210_hw->phy->symbol_duration;
++	ca8210_hw->phy->sifs_period = 12 * ca8210_hw->phy->symbol_duration;
+ 	ca8210_hw->flags =
+ 		IEEE802154_HW_AFILT |
+ 		IEEE802154_HW_OMIT_CKSUM |
+diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
+index 4300261e2f9e7..378ee779061c3 100644
+--- a/drivers/net/netdevsim/fib.c
++++ b/drivers/net/netdevsim/fib.c
+@@ -623,14 +623,14 @@ static int nsim_fib6_rt_append(struct nsim_fib_data *data,
+ 		if (err)
+ 			goto err_fib6_rt_nh_del;
+ 
+-		fib6_event->rt_arr[i]->trap = true;
++		WRITE_ONCE(fib6_event->rt_arr[i]->trap, true);
+ 	}
+ 
+ 	return 0;
+ 
+ err_fib6_rt_nh_del:
+ 	for (i--; i >= 0; i--) {
+-		fib6_event->rt_arr[i]->trap = false;
++		WRITE_ONCE(fib6_event->rt_arr[i]->trap, false);
+ 		nsim_fib6_rt_nh_del(fib6_rt, fib6_event->rt_arr[i]);
+ 	}
+ 	return err;
+diff --git a/drivers/net/phy/mediatek-ge.c b/drivers/net/phy/mediatek-ge.c
+index b7a5ae20edd53..68ee434f9dea3 100644
+--- a/drivers/net/phy/mediatek-ge.c
++++ b/drivers/net/phy/mediatek-ge.c
+@@ -55,9 +55,6 @@ static int mt7530_phy_config_init(struct phy_device *phydev)
+ 
+ static int mt7531_phy_config_init(struct phy_device *phydev)
+ {
+-	if (phydev->interface != PHY_INTERFACE_MODE_INTERNAL)
+-		return -EINVAL;
+-
+ 	mtk_gephy_config_init(phydev);
+ 
+ 	/* PHY link down power saving enable */
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index f510e82194705..2f2abc42cecea 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1399,6 +1399,8 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 0)},	/* Dell Wireless 5821e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 1)},	/* Dell Wireless 5821e preproduction config */
+ 	{QMI_FIXED_INTF(0x413c, 0x81e0, 0)},	/* Dell Wireless 5821e with eSIM support*/
++	{QMI_FIXED_INTF(0x413c, 0x81e4, 0)},	/* Dell Wireless 5829e with eSIM support*/
++	{QMI_FIXED_INTF(0x413c, 0x81e6, 0)},	/* Dell Wireless 5829e */
+ 	{QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)},	/* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
+ 	{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
+ 	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+index 0eb13e5df5177..d99140960a820 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+@@ -693,7 +693,7 @@ int brcmf_fw_get_firmwares(struct device *dev, struct brcmf_fw_request *req,
+ {
+ 	struct brcmf_fw_item *first = &req->items[0];
+ 	struct brcmf_fw *fwctx;
+-	char *alt_path;
++	char *alt_path = NULL;
+ 	int ret;
+ 
+ 	brcmf_dbg(TRACE, "enter: dev=%s\n", dev_name(dev));
+@@ -712,7 +712,9 @@ int brcmf_fw_get_firmwares(struct device *dev, struct brcmf_fw_request *req,
+ 	fwctx->done = fw_cb;
+ 
+ 	/* First try alternative board-specific path if any */
+-	alt_path = brcm_alt_fw_path(first->path, fwctx->req->board_type);
++	if (fwctx->req->board_type)
++		alt_path = brcm_alt_fw_path(first->path,
++					    fwctx->req->board_type);
+ 	if (alt_path) {
+ 		ret = request_firmware_nowait(THIS_MODULE, true, alt_path,
+ 					      fwctx->dev, GFP_KERNEL, fwctx,
+diff --git a/drivers/net/wireless/intel/iwlwifi/Kconfig b/drivers/net/wireless/intel/iwlwifi/Kconfig
+index 418ae4f870ab7..fcfd2bd0baa6d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/Kconfig
++++ b/drivers/net/wireless/intel/iwlwifi/Kconfig
+@@ -79,19 +79,6 @@ config IWLWIFI_OPMODE_MODULAR
+ comment "WARNING: iwlwifi is useless without IWLDVM or IWLMVM"
+ 	depends on IWLDVM=n && IWLMVM=n
+ 
+-config IWLWIFI_BCAST_FILTERING
+-	bool "Enable broadcast filtering"
+-	depends on IWLMVM
+-	help
+-	  Say Y here to enable default bcast filtering configuration.
+-
+-	  Enabling broadcast filtering will drop any incoming wireless
+-	  broadcast frames, except some very specific predefined
+-	  patterns (e.g. incoming arp requests).
+-
+-	  If unsure, don't enable this option, as some programs might
+-	  expect incoming broadcasts for their normal operations.
+-
+ menu "Debugging Options"
+ 
+ config IWLWIFI_DEBUG
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 2e4590876bc33..19d85760dfac3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+  * Copyright (C) 2017 Intel Deutschland GmbH
+- * Copyright (C) 2019-2021 Intel Corporation
++ * Copyright (C) 2019-2022 Intel Corporation
+  */
+ #include <linux/uuid.h>
+ #include "iwl-drv.h"
+@@ -873,10 +873,11 @@ bool iwl_sar_geo_support(struct iwl_fw_runtime *fwrt)
+ 	 * only one using version 36, so skip this version entirely.
+ 	 */
+ 	return IWL_UCODE_SERIAL(fwrt->fw->ucode_ver) >= 38 ||
+-	       IWL_UCODE_SERIAL(fwrt->fw->ucode_ver) == 17 ||
+-	       (IWL_UCODE_SERIAL(fwrt->fw->ucode_ver) == 29 &&
+-		((fwrt->trans->hw_rev & CSR_HW_REV_TYPE_MSK) ==
+-		 CSR_HW_REV_TYPE_7265D));
++		(IWL_UCODE_SERIAL(fwrt->fw->ucode_ver) == 17 &&
++		 fwrt->trans->hw_rev != CSR_HW_REV_TYPE_3160) ||
++		(IWL_UCODE_SERIAL(fwrt->fw->ucode_ver) == 29 &&
++		 ((fwrt->trans->hw_rev & CSR_HW_REV_TYPE_MSK) ==
++		  CSR_HW_REV_TYPE_7265D));
+ }
+ IWL_EXPORT_SYMBOL(iwl_sar_geo_support);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
+index ee6b5844a871c..46ad5543a6cc8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
+@@ -505,11 +505,6 @@ enum iwl_legacy_cmds {
+ 	 */
+ 	DEBUG_LOG_MSG = 0xf7,
+ 
+-	/**
+-	 * @BCAST_FILTER_CMD: &struct iwl_bcast_filter_cmd
+-	 */
+-	BCAST_FILTER_CMD = 0xcf,
+-
+ 	/**
+ 	 * @MCAST_FILTER_CMD: &struct iwl_mcast_filter_cmd
+ 	 */
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/filter.h b/drivers/net/wireless/intel/iwlwifi/fw/api/filter.h
+index dd62a63956b3b..e44c70b7c7907 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/filter.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/filter.h
+@@ -36,92 +36,4 @@ struct iwl_mcast_filter_cmd {
+ 	u8 addr_list[0];
+ } __packed; /* MCAST_FILTERING_CMD_API_S_VER_1 */
+ 
+-#define MAX_BCAST_FILTERS 8
+-#define MAX_BCAST_FILTER_ATTRS 2
+-
+-/**
+- * enum iwl_mvm_bcast_filter_attr_offset - written by fw for each Rx packet
+- * @BCAST_FILTER_OFFSET_PAYLOAD_START: offset is from payload start.
+- * @BCAST_FILTER_OFFSET_IP_END: offset is from ip header end (i.e.
+- *	start of ip payload).
+- */
+-enum iwl_mvm_bcast_filter_attr_offset {
+-	BCAST_FILTER_OFFSET_PAYLOAD_START = 0,
+-	BCAST_FILTER_OFFSET_IP_END = 1,
+-};
+-
+-/**
+- * struct iwl_fw_bcast_filter_attr - broadcast filter attribute
+- * @offset_type:	&enum iwl_mvm_bcast_filter_attr_offset.
+- * @offset:	starting offset of this pattern.
+- * @reserved1:	reserved
+- * @val:	value to match - big endian (MSB is the first
+- *		byte to match from offset pos).
+- * @mask:	mask to match (big endian).
+- */
+-struct iwl_fw_bcast_filter_attr {
+-	u8 offset_type;
+-	u8 offset;
+-	__le16 reserved1;
+-	__be32 val;
+-	__be32 mask;
+-} __packed; /* BCAST_FILTER_ATT_S_VER_1 */
+-
+-/**
+- * enum iwl_mvm_bcast_filter_frame_type - filter frame type
+- * @BCAST_FILTER_FRAME_TYPE_ALL: consider all frames.
+- * @BCAST_FILTER_FRAME_TYPE_IPV4: consider only ipv4 frames
+- */
+-enum iwl_mvm_bcast_filter_frame_type {
+-	BCAST_FILTER_FRAME_TYPE_ALL = 0,
+-	BCAST_FILTER_FRAME_TYPE_IPV4 = 1,
+-};
+-
+-/**
+- * struct iwl_fw_bcast_filter - broadcast filter
+- * @discard: discard frame (1) or let it pass (0).
+- * @frame_type: &enum iwl_mvm_bcast_filter_frame_type.
+- * @reserved1: reserved
+- * @num_attrs: number of valid attributes in this filter.
+- * @attrs: attributes of this filter. a filter is considered matched
+- *	only when all its attributes are matched (i.e. AND relationship)
+- */
+-struct iwl_fw_bcast_filter {
+-	u8 discard;
+-	u8 frame_type;
+-	u8 num_attrs;
+-	u8 reserved1;
+-	struct iwl_fw_bcast_filter_attr attrs[MAX_BCAST_FILTER_ATTRS];
+-} __packed; /* BCAST_FILTER_S_VER_1 */
+-
+-/**
+- * struct iwl_fw_bcast_mac - per-mac broadcast filtering configuration.
+- * @default_discard: default action for this mac (discard (1) / pass (0)).
+- * @reserved1: reserved
+- * @attached_filters: bitmap of relevant filters for this mac.
+- */
+-struct iwl_fw_bcast_mac {
+-	u8 default_discard;
+-	u8 reserved1;
+-	__le16 attached_filters;
+-} __packed; /* BCAST_MAC_CONTEXT_S_VER_1 */
+-
+-/**
+- * struct iwl_bcast_filter_cmd - broadcast filtering configuration
+- * @disable: enable (0) / disable (1)
+- * @max_bcast_filters: max number of filters (MAX_BCAST_FILTERS)
+- * @max_macs: max number of macs (NUM_MAC_INDEX_DRIVER)
+- * @reserved1: reserved
+- * @filters: broadcast filters
+- * @macs: broadcast filtering configuration per-mac
+- */
+-struct iwl_bcast_filter_cmd {
+-	u8 disable;
+-	u8 max_bcast_filters;
+-	u8 max_macs;
+-	u8 reserved1;
+-	struct iwl_fw_bcast_filter filters[MAX_BCAST_FILTERS];
+-	struct iwl_fw_bcast_mac macs[NUM_MAC_INDEX_DRIVER];
+-} __packed; /* BCAST_FILTERING_HCMD_API_S_VER_1 */
+-
+ #endif /* __iwl_fw_api_filter_h__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h
+index a09081d7ed45e..f6301f898c7f5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h
+@@ -710,7 +710,6 @@ struct iwl_lq_cmd {
+ 
+ u8 iwl_fw_rate_idx_to_plcp(int idx);
+ u32 iwl_new_rate_from_v1(u32 rate_v1);
+-u32 iwl_legacy_rate_to_fw_idx(u32 rate_n_flags);
+ const struct iwl_rate_mcs_info *iwl_rate_mcs(int idx);
+ const char *iwl_rs_pretty_ant(u8 ant);
+ const char *iwl_rs_pretty_bw(int bw);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/file.h b/drivers/net/wireless/intel/iwlwifi/fw/file.h
+index 3d572f5024bbc..b44b869dd3704 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/file.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h
+@@ -182,7 +182,6 @@ struct iwl_ucode_capa {
+  * @IWL_UCODE_TLV_FLAGS_NEW_NSOFFL_LARGE: new NS offload (large version)
+  * @IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT: General support for uAPSD
+  * @IWL_UCODE_TLV_FLAGS_P2P_PS_UAPSD: P2P client supports uAPSD power save
+- * @IWL_UCODE_TLV_FLAGS_BCAST_FILTERING: uCode supports broadcast filtering.
+  * @IWL_UCODE_TLV_FLAGS_EBS_SUPPORT: this uCode image supports EBS.
+  */
+ enum iwl_ucode_tlv_flag {
+@@ -197,7 +196,6 @@ enum iwl_ucode_tlv_flag {
+ 	IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT	= BIT(24),
+ 	IWL_UCODE_TLV_FLAGS_EBS_SUPPORT		= BIT(25),
+ 	IWL_UCODE_TLV_FLAGS_P2P_PS_UAPSD	= BIT(26),
+-	IWL_UCODE_TLV_FLAGS_BCAST_FILTERING	= BIT(29),
+ };
+ 
+ typedef unsigned int __bitwise iwl_ucode_tlv_api_t;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/rs.c b/drivers/net/wireless/intel/iwlwifi/fw/rs.c
+index a21c3befd93b5..a835214611ce5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/rs.c
+@@ -91,6 +91,20 @@ const char *iwl_rs_pretty_bw(int bw)
+ }
+ IWL_EXPORT_SYMBOL(iwl_rs_pretty_bw);
+ 
++static u32 iwl_legacy_rate_to_fw_idx(u32 rate_n_flags)
++{
++	int rate = rate_n_flags & RATE_LEGACY_RATE_MSK_V1;
++	int idx;
++	bool ofdm = !(rate_n_flags & RATE_MCS_CCK_MSK_V1);
++	int offset = ofdm ? IWL_FIRST_OFDM_RATE : 0;
++	int last = ofdm ? IWL_RATE_COUNT_LEGACY : IWL_FIRST_OFDM_RATE;
++
++	for (idx = offset; idx < last; idx++)
++		if (iwl_fw_rate_idx_to_plcp(idx) == rate)
++			return idx - offset;
++	return IWL_RATE_INVALID;
++}
++
+ u32 iwl_new_rate_from_v1(u32 rate_v1)
+ {
+ 	u32 rate_v2 = 0;
+@@ -144,7 +158,10 @@ u32 iwl_new_rate_from_v1(u32 rate_v1)
+ 	} else {
+ 		u32 legacy_rate = iwl_legacy_rate_to_fw_idx(rate_v1);
+ 
+-		WARN_ON(legacy_rate < 0);
++		if (WARN_ON_ONCE(legacy_rate == IWL_RATE_INVALID))
++			legacy_rate = (rate_v1 & RATE_MCS_CCK_MSK_V1) ?
++				IWL_FIRST_CCK_RATE : IWL_FIRST_OFDM_RATE;
++
+ 		rate_v2 |= legacy_rate;
+ 		if (!(rate_v1 & RATE_MCS_CCK_MSK_V1))
+ 			rate_v2 |= RATE_MCS_LEGACY_OFDM_MSK;
+@@ -172,20 +189,6 @@ u32 iwl_new_rate_from_v1(u32 rate_v1)
+ }
+ IWL_EXPORT_SYMBOL(iwl_new_rate_from_v1);
+ 
+-u32 iwl_legacy_rate_to_fw_idx(u32 rate_n_flags)
+-{
+-	int rate = rate_n_flags & RATE_LEGACY_RATE_MSK_V1;
+-	int idx;
+-	bool ofdm = !(rate_n_flags & RATE_MCS_CCK_MSK_V1);
+-	int offset = ofdm ? IWL_FIRST_OFDM_RATE : 0;
+-	int last = ofdm ? IWL_RATE_COUNT_LEGACY : IWL_FIRST_OFDM_RATE;
+-
+-	for (idx = offset; idx < last; idx++)
+-		if (iwl_fw_rate_idx_to_plcp(idx) == rate)
+-			return idx - offset;
+-	return -1;
+-}
+-
+ int rs_pretty_print_rate(char *buf, int bufsz, const u32 rate)
+ {
+ 	char *type;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+index 70f9dc7ecb0eb..078fd20285e6d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
++ * Copyright (C) 2005-2014, 2018-2022 Intel Corporation
+  * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
+  * Copyright (C) 2016 Intel Deutschland GmbH
+  */
+@@ -326,6 +326,7 @@ enum {
+ #define CSR_HW_REV_TYPE_2x00		(0x0000100)
+ #define CSR_HW_REV_TYPE_105		(0x0000110)
+ #define CSR_HW_REV_TYPE_135		(0x0000120)
++#define CSR_HW_REV_TYPE_3160		(0x0000164)
+ #define CSR_HW_REV_TYPE_7265D		(0x0000210)
+ #define CSR_HW_REV_TYPE_NONE		(0x00001F0)
+ #define CSR_HW_REV_TYPE_QNJ		(0x0000360)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index f53ce9c086947..506d05953314d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1656,6 +1656,8 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+  out_unbind:
+ 	complete(&drv->request_firmware_complete);
+ 	device_release_driver(drv->trans->dev);
++	/* drv has just been freed by the release */
++	failure = false;
+  free:
+ 	if (failure)
+ 		iwl_dealloc_ucode(drv);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index ff66001d507ef..64100e73b5bc6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -1361,189 +1361,6 @@ static ssize_t iwl_dbgfs_dbg_time_point_write(struct iwl_mvm *mvm,
+ 	return count;
+ }
+ 
+-#define ADD_TEXT(...) pos += scnprintf(buf + pos, bufsz - pos, __VA_ARGS__)
+-#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
+-static ssize_t iwl_dbgfs_bcast_filters_read(struct file *file,
+-					    char __user *user_buf,
+-					    size_t count, loff_t *ppos)
+-{
+-	struct iwl_mvm *mvm = file->private_data;
+-	struct iwl_bcast_filter_cmd cmd;
+-	const struct iwl_fw_bcast_filter *filter;
+-	char *buf;
+-	int bufsz = 1024;
+-	int i, j, pos = 0;
+-	ssize_t ret;
+-
+-	buf = kzalloc(bufsz, GFP_KERNEL);
+-	if (!buf)
+-		return -ENOMEM;
+-
+-	mutex_lock(&mvm->mutex);
+-	if (!iwl_mvm_bcast_filter_build_cmd(mvm, &cmd)) {
+-		ADD_TEXT("None\n");
+-		mutex_unlock(&mvm->mutex);
+-		goto out;
+-	}
+-	mutex_unlock(&mvm->mutex);
+-
+-	for (i = 0; cmd.filters[i].attrs[0].mask; i++) {
+-		filter = &cmd.filters[i];
+-
+-		ADD_TEXT("Filter [%d]:\n", i);
+-		ADD_TEXT("\tDiscard=%d\n", filter->discard);
+-		ADD_TEXT("\tFrame Type: %s\n",
+-			 filter->frame_type ? "IPv4" : "Generic");
+-
+-		for (j = 0; j < ARRAY_SIZE(filter->attrs); j++) {
+-			const struct iwl_fw_bcast_filter_attr *attr;
+-
+-			attr = &filter->attrs[j];
+-			if (!attr->mask)
+-				break;
+-
+-			ADD_TEXT("\tAttr [%d]: offset=%d (from %s), mask=0x%x, value=0x%x reserved=0x%x\n",
+-				 j, attr->offset,
+-				 attr->offset_type ? "IP End" :
+-						     "Payload Start",
+-				 be32_to_cpu(attr->mask),
+-				 be32_to_cpu(attr->val),
+-				 le16_to_cpu(attr->reserved1));
+-		}
+-	}
+-out:
+-	ret = simple_read_from_buffer(user_buf, count, ppos, buf, pos);
+-	kfree(buf);
+-	return ret;
+-}
+-
+-static ssize_t iwl_dbgfs_bcast_filters_write(struct iwl_mvm *mvm, char *buf,
+-					     size_t count, loff_t *ppos)
+-{
+-	int pos, next_pos;
+-	struct iwl_fw_bcast_filter filter = {};
+-	struct iwl_bcast_filter_cmd cmd;
+-	u32 filter_id, attr_id, mask, value;
+-	int err = 0;
+-
+-	if (sscanf(buf, "%d %hhi %hhi %n", &filter_id, &filter.discard,
+-		   &filter.frame_type, &pos) != 3)
+-		return -EINVAL;
+-
+-	if (filter_id >= ARRAY_SIZE(mvm->dbgfs_bcast_filtering.cmd.filters) ||
+-	    filter.frame_type > BCAST_FILTER_FRAME_TYPE_IPV4)
+-		return -EINVAL;
+-
+-	for (attr_id = 0; attr_id < ARRAY_SIZE(filter.attrs);
+-	     attr_id++) {
+-		struct iwl_fw_bcast_filter_attr *attr =
+-				&filter.attrs[attr_id];
+-
+-		if (pos >= count)
+-			break;
+-
+-		if (sscanf(&buf[pos], "%hhi %hhi %i %i %n",
+-			   &attr->offset, &attr->offset_type,
+-			   &mask, &value, &next_pos) != 4)
+-			return -EINVAL;
+-
+-		attr->mask = cpu_to_be32(mask);
+-		attr->val = cpu_to_be32(value);
+-		if (mask)
+-			filter.num_attrs++;
+-
+-		pos += next_pos;
+-	}
+-
+-	mutex_lock(&mvm->mutex);
+-	memcpy(&mvm->dbgfs_bcast_filtering.cmd.filters[filter_id],
+-	       &filter, sizeof(filter));
+-
+-	/* send updated bcast filtering configuration */
+-	if (iwl_mvm_firmware_running(mvm) &&
+-	    mvm->dbgfs_bcast_filtering.override &&
+-	    iwl_mvm_bcast_filter_build_cmd(mvm, &cmd))
+-		err = iwl_mvm_send_cmd_pdu(mvm, BCAST_FILTER_CMD, 0,
+-					   sizeof(cmd), &cmd);
+-	mutex_unlock(&mvm->mutex);
+-
+-	return err ?: count;
+-}
+-
+-static ssize_t iwl_dbgfs_bcast_filters_macs_read(struct file *file,
+-						 char __user *user_buf,
+-						 size_t count, loff_t *ppos)
+-{
+-	struct iwl_mvm *mvm = file->private_data;
+-	struct iwl_bcast_filter_cmd cmd;
+-	char *buf;
+-	int bufsz = 1024;
+-	int i, pos = 0;
+-	ssize_t ret;
+-
+-	buf = kzalloc(bufsz, GFP_KERNEL);
+-	if (!buf)
+-		return -ENOMEM;
+-
+-	mutex_lock(&mvm->mutex);
+-	if (!iwl_mvm_bcast_filter_build_cmd(mvm, &cmd)) {
+-		ADD_TEXT("None\n");
+-		mutex_unlock(&mvm->mutex);
+-		goto out;
+-	}
+-	mutex_unlock(&mvm->mutex);
+-
+-	for (i = 0; i < ARRAY_SIZE(cmd.macs); i++) {
+-		const struct iwl_fw_bcast_mac *mac = &cmd.macs[i];
+-
+-		ADD_TEXT("Mac [%d]: discard=%d attached_filters=0x%x\n",
+-			 i, mac->default_discard, mac->attached_filters);
+-	}
+-out:
+-	ret = simple_read_from_buffer(user_buf, count, ppos, buf, pos);
+-	kfree(buf);
+-	return ret;
+-}
+-
+-static ssize_t iwl_dbgfs_bcast_filters_macs_write(struct iwl_mvm *mvm,
+-						  char *buf, size_t count,
+-						  loff_t *ppos)
+-{
+-	struct iwl_bcast_filter_cmd cmd;
+-	struct iwl_fw_bcast_mac mac = {};
+-	u32 mac_id, attached_filters;
+-	int err = 0;
+-
+-	if (!mvm->bcast_filters)
+-		return -ENOENT;
+-
+-	if (sscanf(buf, "%d %hhi %i", &mac_id, &mac.default_discard,
+-		   &attached_filters) != 3)
+-		return -EINVAL;
+-
+-	if (mac_id >= ARRAY_SIZE(cmd.macs) ||
+-	    mac.default_discard > 1 ||
+-	    attached_filters >= BIT(ARRAY_SIZE(cmd.filters)))
+-		return -EINVAL;
+-
+-	mac.attached_filters = cpu_to_le16(attached_filters);
+-
+-	mutex_lock(&mvm->mutex);
+-	memcpy(&mvm->dbgfs_bcast_filtering.cmd.macs[mac_id],
+-	       &mac, sizeof(mac));
+-
+-	/* send updated bcast filtering configuration */
+-	if (iwl_mvm_firmware_running(mvm) &&
+-	    mvm->dbgfs_bcast_filtering.override &&
+-	    iwl_mvm_bcast_filter_build_cmd(mvm, &cmd))
+-		err = iwl_mvm_send_cmd_pdu(mvm, BCAST_FILTER_CMD, 0,
+-					   sizeof(cmd), &cmd);
+-	mutex_unlock(&mvm->mutex);
+-
+-	return err ?: count;
+-}
+-#endif
+-
+ #define MVM_DEBUGFS_WRITE_FILE_OPS(name, bufsz) \
+ 	_MVM_DEBUGFS_WRITE_FILE_OPS(name, bufsz, struct iwl_mvm)
+ #define MVM_DEBUGFS_READ_WRITE_FILE_OPS(name, bufsz) \
+@@ -1873,11 +1690,6 @@ MVM_DEBUGFS_WRITE_FILE_OPS(inject_beacon_ie_restore, 512);
+ 
+ MVM_DEBUGFS_READ_FILE_OPS(uapsd_noagg_bssids);
+ 
+-#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
+-MVM_DEBUGFS_READ_WRITE_FILE_OPS(bcast_filters, 256);
+-MVM_DEBUGFS_READ_WRITE_FILE_OPS(bcast_filters_macs, 256);
+-#endif
+-
+ #ifdef CONFIG_ACPI
+ MVM_DEBUGFS_READ_FILE_OPS(sar_geo_profile);
+ #endif
+@@ -2088,21 +1900,6 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
+ 
+ 	MVM_DEBUGFS_ADD_FILE(uapsd_noagg_bssids, mvm->debugfs_dir, S_IRUSR);
+ 
+-#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
+-	if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_BCAST_FILTERING) {
+-		bcast_dir = debugfs_create_dir("bcast_filtering",
+-					       mvm->debugfs_dir);
+-
+-		debugfs_create_bool("override", 0600, bcast_dir,
+-				    &mvm->dbgfs_bcast_filtering.override);
+-
+-		MVM_DEBUGFS_ADD_FILE_ALIAS("filters", bcast_filters,
+-					   bcast_dir, 0600);
+-		MVM_DEBUGFS_ADD_FILE_ALIAS("macs", bcast_filters_macs,
+-					   bcast_dir, 0600);
+-	}
+-#endif
+-
+ #ifdef CONFIG_PM_SLEEP
+ 	MVM_DEBUGFS_ADD_FILE(d3_test, mvm->debugfs_dir, 0400);
+ 	debugfs_create_bool("d3_wake_sysassert", 0600, mvm->debugfs_dir,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 9eb78461f2800..58d5395acf73c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1636,7 +1636,7 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ 	ret = iwl_mvm_sar_init(mvm);
+ 	if (ret == 0)
+ 		ret = iwl_mvm_sar_geo_init(mvm);
+-	else if (ret < 0)
++	if (ret < 0)
+ 		goto error;
+ 
+ 	iwl_mvm_tas_init(mvm);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 9c5c10908f013..cde3d2ce0b855 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -55,79 +55,6 @@ static const struct ieee80211_iface_combination iwl_mvm_iface_combinations[] = {
+ 	},
+ };
+ 
+-#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
+-/*
+- * Use the reserved field to indicate magic values.
+- * these values will only be used internally by the driver,
+- * and won't make it to the fw (reserved will be 0).
+- * BC_FILTER_MAGIC_IP - configure the val of this attribute to
+- *	be the vif's ip address. in case there is not a single
+- *	ip address (0, or more than 1), this attribute will
+- *	be skipped.
+- * BC_FILTER_MAGIC_MAC - set the val of this attribute to
+- *	the LSB bytes of the vif's mac address
+- */
+-enum {
+-	BC_FILTER_MAGIC_NONE = 0,
+-	BC_FILTER_MAGIC_IP,
+-	BC_FILTER_MAGIC_MAC,
+-};
+-
+-static const struct iwl_fw_bcast_filter iwl_mvm_default_bcast_filters[] = {
+-	{
+-		/* arp */
+-		.discard = 0,
+-		.frame_type = BCAST_FILTER_FRAME_TYPE_ALL,
+-		.attrs = {
+-			{
+-				/* frame type - arp, hw type - ethernet */
+-				.offset_type =
+-					BCAST_FILTER_OFFSET_PAYLOAD_START,
+-				.offset = sizeof(rfc1042_header),
+-				.val = cpu_to_be32(0x08060001),
+-				.mask = cpu_to_be32(0xffffffff),
+-			},
+-			{
+-				/* arp dest ip */
+-				.offset_type =
+-					BCAST_FILTER_OFFSET_PAYLOAD_START,
+-				.offset = sizeof(rfc1042_header) + 2 +
+-					  sizeof(struct arphdr) +
+-					  ETH_ALEN + sizeof(__be32) +
+-					  ETH_ALEN,
+-				.mask = cpu_to_be32(0xffffffff),
+-				/* mark it as special field */
+-				.reserved1 = cpu_to_le16(BC_FILTER_MAGIC_IP),
+-			},
+-		},
+-	},
+-	{
+-		/* dhcp offer bcast */
+-		.discard = 0,
+-		.frame_type = BCAST_FILTER_FRAME_TYPE_IPV4,
+-		.attrs = {
+-			{
+-				/* udp dest port - 68 (bootp client)*/
+-				.offset_type = BCAST_FILTER_OFFSET_IP_END,
+-				.offset = offsetof(struct udphdr, dest),
+-				.val = cpu_to_be32(0x00440000),
+-				.mask = cpu_to_be32(0xffff0000),
+-			},
+-			{
+-				/* dhcp - lsb bytes of client hw address */
+-				.offset_type = BCAST_FILTER_OFFSET_IP_END,
+-				.offset = 38,
+-				.mask = cpu_to_be32(0xffffffff),
+-				/* mark it as special field */
+-				.reserved1 = cpu_to_le16(BC_FILTER_MAGIC_MAC),
+-			},
+-		},
+-	},
+-	/* last filter must be empty */
+-	{},
+-};
+-#endif
+-
+ static const struct cfg80211_pmsr_capabilities iwl_mvm_pmsr_capa = {
+ 	.max_peers = IWL_MVM_TOF_MAX_APS,
+ 	.report_ap_tsf = 1,
+@@ -683,11 +610,6 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
+ 	}
+ #endif
+ 
+-#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
+-	/* assign default bcast filtering configuration */
+-	mvm->bcast_filters = iwl_mvm_default_bcast_filters;
+-#endif
+-
+ 	ret = iwl_mvm_leds_init(mvm);
+ 	if (ret)
+ 		return ret;
+@@ -1803,162 +1725,6 @@ static void iwl_mvm_config_iface_filter(struct ieee80211_hw *hw,
+ 	mutex_unlock(&mvm->mutex);
+ }
+ 
+-#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
+-struct iwl_bcast_iter_data {
+-	struct iwl_mvm *mvm;
+-	struct iwl_bcast_filter_cmd *cmd;
+-	u8 current_filter;
+-};
+-
+-static void
+-iwl_mvm_set_bcast_filter(struct ieee80211_vif *vif,
+-			 const struct iwl_fw_bcast_filter *in_filter,
+-			 struct iwl_fw_bcast_filter *out_filter)
+-{
+-	struct iwl_fw_bcast_filter_attr *attr;
+-	int i;
+-
+-	memcpy(out_filter, in_filter, sizeof(*out_filter));
+-
+-	for (i = 0; i < ARRAY_SIZE(out_filter->attrs); i++) {
+-		attr = &out_filter->attrs[i];
+-
+-		if (!attr->mask)
+-			break;
+-
+-		switch (attr->reserved1) {
+-		case cpu_to_le16(BC_FILTER_MAGIC_IP):
+-			if (vif->bss_conf.arp_addr_cnt != 1) {
+-				attr->mask = 0;
+-				continue;
+-			}
+-
+-			attr->val = vif->bss_conf.arp_addr_list[0];
+-			break;
+-		case cpu_to_le16(BC_FILTER_MAGIC_MAC):
+-			attr->val = *(__be32 *)&vif->addr[2];
+-			break;
+-		default:
+-			break;
+-		}
+-		attr->reserved1 = 0;
+-		out_filter->num_attrs++;
+-	}
+-}
+-
+-static void iwl_mvm_bcast_filter_iterator(void *_data, u8 *mac,
+-					  struct ieee80211_vif *vif)
+-{
+-	struct iwl_bcast_iter_data *data = _data;
+-	struct iwl_mvm *mvm = data->mvm;
+-	struct iwl_bcast_filter_cmd *cmd = data->cmd;
+-	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+-	struct iwl_fw_bcast_mac *bcast_mac;
+-	int i;
+-
+-	if (WARN_ON(mvmvif->id >= ARRAY_SIZE(cmd->macs)))
+-		return;
+-
+-	bcast_mac = &cmd->macs[mvmvif->id];
+-
+-	/*
+-	 * enable filtering only for associated stations, but not for P2P
+-	 * Clients
+-	 */
+-	if (vif->type != NL80211_IFTYPE_STATION || vif->p2p ||
+-	    !vif->bss_conf.assoc)
+-		return;
+-
+-	bcast_mac->default_discard = 1;
+-
+-	/* copy all configured filters */
+-	for (i = 0; mvm->bcast_filters[i].attrs[0].mask; i++) {
+-		/*
+-		 * Make sure we don't exceed our filters limit.
+-		 * if there is still a valid filter to be configured,
+-		 * be on the safe side and just allow bcast for this mac.
+-		 */
+-		if (WARN_ON_ONCE(data->current_filter >=
+-				 ARRAY_SIZE(cmd->filters))) {
+-			bcast_mac->default_discard = 0;
+-			bcast_mac->attached_filters = 0;
+-			break;
+-		}
+-
+-		iwl_mvm_set_bcast_filter(vif,
+-					 &mvm->bcast_filters[i],
+-					 &cmd->filters[data->current_filter]);
+-
+-		/* skip current filter if it contains no attributes */
+-		if (!cmd->filters[data->current_filter].num_attrs)
+-			continue;
+-
+-		/* attach the filter to current mac */
+-		bcast_mac->attached_filters |=
+-				cpu_to_le16(BIT(data->current_filter));
+-
+-		data->current_filter++;
+-	}
+-}
+-
+-bool iwl_mvm_bcast_filter_build_cmd(struct iwl_mvm *mvm,
+-				    struct iwl_bcast_filter_cmd *cmd)
+-{
+-	struct iwl_bcast_iter_data iter_data = {
+-		.mvm = mvm,
+-		.cmd = cmd,
+-	};
+-
+-	if (IWL_MVM_FW_BCAST_FILTER_PASS_ALL)
+-		return false;
+-
+-	memset(cmd, 0, sizeof(*cmd));
+-	cmd->max_bcast_filters = ARRAY_SIZE(cmd->filters);
+-	cmd->max_macs = ARRAY_SIZE(cmd->macs);
+-
+-#ifdef CONFIG_IWLWIFI_DEBUGFS
+-	/* use debugfs filters/macs if override is configured */
+-	if (mvm->dbgfs_bcast_filtering.override) {
+-		memcpy(cmd->filters, &mvm->dbgfs_bcast_filtering.cmd.filters,
+-		       sizeof(cmd->filters));
+-		memcpy(cmd->macs, &mvm->dbgfs_bcast_filtering.cmd.macs,
+-		       sizeof(cmd->macs));
+-		return true;
+-	}
+-#endif
+-
+-	/* if no filters are configured, do nothing */
+-	if (!mvm->bcast_filters)
+-		return false;
+-
+-	/* configure and attach these filters for each associated sta vif */
+-	ieee80211_iterate_active_interfaces(
+-		mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
+-		iwl_mvm_bcast_filter_iterator, &iter_data);
+-
+-	return true;
+-}
+-
+-static int iwl_mvm_configure_bcast_filter(struct iwl_mvm *mvm)
+-{
+-	struct iwl_bcast_filter_cmd cmd;
+-
+-	if (!(mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_BCAST_FILTERING))
+-		return 0;
+-
+-	if (!iwl_mvm_bcast_filter_build_cmd(mvm, &cmd))
+-		return 0;
+-
+-	return iwl_mvm_send_cmd_pdu(mvm, BCAST_FILTER_CMD, 0,
+-				    sizeof(cmd), &cmd);
+-}
+-#else
+-static inline int iwl_mvm_configure_bcast_filter(struct iwl_mvm *mvm)
+-{
+-	return 0;
+-}
+-#endif
+-
+ static int iwl_mvm_update_mu_groups(struct iwl_mvm *mvm,
+ 				    struct ieee80211_vif *vif)
+ {
+@@ -2469,7 +2235,6 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
+ 		}
+ 
+ 		iwl_mvm_recalc_multicast(mvm);
+-		iwl_mvm_configure_bcast_filter(mvm);
+ 
+ 		/* reset rssi values */
+ 		mvmvif->bf_data.ave_beacon_signal = 0;
+@@ -2519,11 +2284,6 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
+ 		}
+ 	}
+ 
+-	if (changes & BSS_CHANGED_ARP_FILTER) {
+-		IWL_DEBUG_MAC80211(mvm, "arp filter changed\n");
+-		iwl_mvm_configure_bcast_filter(mvm);
+-	}
+-
+ 	if (changes & BSS_CHANGED_BANDWIDTH)
+ 		iwl_mvm_apply_fw_smps_request(vif);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index a72d85086fe33..da8330b5e6d5f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -872,17 +872,6 @@ struct iwl_mvm {
+ 	/* rx chain antennas set through debugfs for the scan command */
+ 	u8 scan_rx_ant;
+ 
+-#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
+-	/* broadcast filters to configure for each associated station */
+-	const struct iwl_fw_bcast_filter *bcast_filters;
+-#ifdef CONFIG_IWLWIFI_DEBUGFS
+-	struct {
+-		bool override;
+-		struct iwl_bcast_filter_cmd cmd;
+-	} dbgfs_bcast_filtering;
+-#endif
+-#endif
+-
+ 	/* Internal station */
+ 	struct iwl_mvm_int_sta aux_sta;
+ 	struct iwl_mvm_int_sta snif_sta;
+@@ -1570,8 +1559,6 @@ int iwl_mvm_up(struct iwl_mvm *mvm);
+ int iwl_mvm_load_d3_fw(struct iwl_mvm *mvm);
+ 
+ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm);
+-bool iwl_mvm_bcast_filter_build_cmd(struct iwl_mvm *mvm,
+-				    struct iwl_bcast_filter_cmd *cmd);
+ 
+ /*
+  * FW notifications / CMD responses handlers
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index cd08e289cd9a0..364f6aefae81d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -474,7 +474,6 @@ static const struct iwl_hcmd_names iwl_mvm_legacy_names[] = {
+ 	HCMD_NAME(MCC_CHUB_UPDATE_CMD),
+ 	HCMD_NAME(MARKER_CMD),
+ 	HCMD_NAME(BT_PROFILE_NOTIFICATION),
+-	HCMD_NAME(BCAST_FILTER_CMD),
+ 	HCMD_NAME(MCAST_FILTER_CMD),
+ 	HCMD_NAME(REPLY_SF_CFG_CMD),
+ 	HCMD_NAME(REPLY_BEACON_FILTERING_CMD),
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 76e0b7b45980d..0f96d422d6e06 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1380,7 +1380,7 @@ static void iwl_mvm_hwrate_to_tx_status(const struct iwl_fw *fw,
+ 	struct ieee80211_tx_rate *r = &info->status.rates[0];
+ 
+ 	if (iwl_fw_lookup_notif_ver(fw, LONG_GROUP,
+-				    TX_CMD, 0) > 6)
++				    TX_CMD, 0) <= 6)
+ 		rate_n_flags = iwl_new_rate_from_v1(rate_n_flags);
+ 
+ 	info->status.antenna =
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 645cb4dd4e5a3..6642d85850734 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -384,8 +384,7 @@ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
+ 	/* This may fail if AMT took ownership of the device */
+ 	if (iwl_pcie_prepare_card_hw(trans)) {
+ 		IWL_WARN(trans, "Exit HW not ready\n");
+-		ret = -EIO;
+-		goto out;
++		return -EIO;
+ 	}
+ 
+ 	iwl_enable_rfkill_int(trans);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 1efb53f78a62f..3b38c426575bc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1303,8 +1303,7 @@ static int iwl_trans_pcie_start_fw(struct iwl_trans *trans,
+ 	/* This may fail if AMT took ownership of the device */
+ 	if (iwl_pcie_prepare_card_hw(trans)) {
+ 		IWL_WARN(trans, "Exit HW not ready\n");
+-		ret = -EIO;
+-		goto out;
++		return -EIO;
+ 	}
+ 
+ 	iwl_enable_rfkill_int(trans);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 1af8a4513708a..352766aa3122e 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4258,7 +4258,14 @@ static void nvme_async_event_work(struct work_struct *work)
+ 		container_of(work, struct nvme_ctrl, async_event_work);
+ 
+ 	nvme_aen_uevent(ctrl);
+-	ctrl->ops->submit_async_event(ctrl);
++
++	/*
++	 * The transport drivers must guarantee AER submission here is safe by
++	 * flushing ctrl async_event_work after changing the controller state
++	 * from LIVE and before freeing the admin queue.
++	*/
++	if (ctrl->state == NVME_CTRL_LIVE)
++		ctrl->ops->submit_async_event(ctrl);
+ }
+ 
+ static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl)
+@@ -4571,7 +4578,7 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
+ 	if (test_and_set_bit(NVME_NS_DEAD, &ns->flags))
+ 		return;
+ 
+-	blk_set_queue_dying(ns->queue);
++	blk_mark_disk_dead(ns->disk);
+ 	nvme_start_ns_queue(ns);
+ 
+ 	set_capacity_and_notify(ns->disk, 0);
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 13e5d503ed076..99c2307b04e2c 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -817,7 +817,7 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ {
+ 	if (!head->disk)
+ 		return;
+-	blk_set_queue_dying(head->disk->queue);
++	blk_mark_disk_dead(head->disk);
+ 	/* make sure all pending bios are cleaned up */
+ 	kblockd_schedule_work(&head->requeue_work);
+ 	flush_work(&head->requeue_work);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 850f84d204d05..9c55e4be8a398 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1200,6 +1200,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
+ 			struct nvme_rdma_ctrl, err_work);
+ 
+ 	nvme_stop_keep_alive(&ctrl->ctrl);
++	flush_work(&ctrl->ctrl.async_event_work);
+ 	nvme_rdma_teardown_io_queues(ctrl, false);
+ 	nvme_start_queues(&ctrl->ctrl);
+ 	nvme_rdma_teardown_admin_queue(ctrl, false);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 22046415a0942..891a36d02e7c7 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -2104,6 +2104,7 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
+ 	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
+ 
+ 	nvme_stop_keep_alive(ctrl);
++	flush_work(&ctrl->async_event_work);
+ 	nvme_tcp_teardown_io_queues(ctrl, false);
+ 	/* unquiesce to fail fast pending requests */
+ 	nvme_start_queues(ctrl);
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index 059566f544291..9be007c9420f9 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -1003,7 +1003,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 	ioc->usg_calls++;
+ #endif
+ 
+-	while(sg_dma_len(sglist) && nents--) {
++	while (nents && sg_dma_len(sglist)) {
+ 
+ #ifdef CCIO_COLLECT_STATS
+ 		ioc->usg_pages += sg_dma_len(sglist) >> PAGE_SHIFT;
+@@ -1011,6 +1011,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 		ccio_unmap_page(dev, sg_dma_address(sglist),
+ 				  sg_dma_len(sglist), direction, 0);
+ 		++sglist;
++		nents--;
+ 	}
+ 
+ 	DBG_RUN_SG("%s() DONE (nents %d)\n", __func__, nents);
+diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
+index e60690d38d677..374b9199878d4 100644
+--- a/drivers/parisc/sba_iommu.c
++++ b/drivers/parisc/sba_iommu.c
+@@ -1047,7 +1047,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 	spin_unlock_irqrestore(&ioc->res_lock, flags);
+ #endif
+ 
+-	while (sg_dma_len(sglist) && nents--) {
++	while (nents && sg_dma_len(sglist)) {
+ 
+ 		sba_unmap_page(dev, sg_dma_address(sglist), sg_dma_len(sglist),
+ 				direction, 0);
+@@ -1056,6 +1056,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
+ 		ioc->usingle_calls--;	/* kluge since call is unmap_sg() */
+ #endif
+ 		++sglist;
++		nents--;
+ 	}
+ 
+ 	DBG_RUN_SG("%s() DONE (nents %d)\n", __func__,  nents);
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 6733cb14e7753..c04636f52c1e9 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1899,8 +1899,17 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
+ 		if (!hv_dev)
+ 			continue;
+ 
+-		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY)
+-			set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node);
++		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY &&
++		    hv_dev->desc.virtual_numa_node < num_possible_nodes())
++			/*
++			 * The kernel may boot with some NUMA nodes offline
++			 * (e.g. in a KDUMP kernel) or with NUMA disabled via
++			 * "numa=off". In those cases, adjust the host provided
++			 * NUMA node to a valid NUMA node used by the kernel.
++			 */
++			set_dev_node(&dev->dev,
++				     numa_map_to_online_node(
++					     hv_dev->desc.virtual_numa_node));
+ 
+ 		put_pcichild(hv_dev);
+ 	}
+diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c
+index 116fb23aebd99..0f1deb6e0eabf 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -18,6 +18,7 @@
+ #include <linux/soc/brcmstb/brcmstb.h>
+ #include <dt-bindings/phy/phy.h>
+ #include <linux/mfd/syscon.h>
++#include <linux/suspend.h>
+ 
+ #include "phy-brcm-usb-init.h"
+ 
+@@ -70,12 +71,35 @@ struct brcm_usb_phy_data {
+ 	int			init_count;
+ 	int			wake_irq;
+ 	struct brcm_usb_phy	phys[BRCM_USB_PHY_ID_MAX];
++	struct notifier_block	pm_notifier;
++	bool			pm_active;
+ };
+ 
+ static s8 *node_reg_names[BRCM_REGS_MAX] = {
+ 	"crtl", "xhci_ec", "xhci_gbl", "usb_phy", "usb_mdio", "bdc_ec"
+ };
+ 
++static int brcm_pm_notifier(struct notifier_block *notifier,
++			    unsigned long pm_event,
++			    void *unused)
++{
++	struct brcm_usb_phy_data *priv =
++		container_of(notifier, struct brcm_usb_phy_data, pm_notifier);
++
++	switch (pm_event) {
++	case PM_HIBERNATION_PREPARE:
++	case PM_SUSPEND_PREPARE:
++		priv->pm_active = true;
++		break;
++	case PM_POST_RESTORE:
++	case PM_POST_HIBERNATION:
++	case PM_POST_SUSPEND:
++		priv->pm_active = false;
++		break;
++	}
++	return NOTIFY_DONE;
++}
++
+ static irqreturn_t brcm_usb_phy_wake_isr(int irq, void *dev_id)
+ {
+ 	struct phy *gphy = dev_id;
+@@ -91,6 +115,9 @@ static int brcm_usb_phy_init(struct phy *gphy)
+ 	struct brcm_usb_phy_data *priv =
+ 		container_of(phy, struct brcm_usb_phy_data, phys[phy->id]);
+ 
++	if (priv->pm_active)
++		return 0;
++
+ 	/*
+ 	 * Use a lock to make sure a second caller waits until
+ 	 * the base phy is inited before using it.
+@@ -120,6 +147,9 @@ static int brcm_usb_phy_exit(struct phy *gphy)
+ 	struct brcm_usb_phy_data *priv =
+ 		container_of(phy, struct brcm_usb_phy_data, phys[phy->id]);
+ 
++	if (priv->pm_active)
++		return 0;
++
+ 	dev_dbg(&gphy->dev, "EXIT\n");
+ 	if (phy->id == BRCM_USB_PHY_2_0)
+ 		brcm_usb_uninit_eohci(&priv->ini);
+@@ -488,6 +518,9 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
++	priv->pm_notifier.notifier_call = brcm_pm_notifier;
++	register_pm_notifier(&priv->pm_notifier);
++
+ 	mutex_init(&priv->mutex);
+ 
+ 	/* make sure invert settings are correct */
+@@ -528,7 +561,10 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
+ 
+ static int brcm_usb_phy_remove(struct platform_device *pdev)
+ {
++	struct brcm_usb_phy_data *priv = dev_get_drvdata(&pdev->dev);
++
+ 	sysfs_remove_group(&pdev->dev.kobj, &brcm_usb_phy_group);
++	unregister_pm_notifier(&priv->pm_notifier);
+ 
+ 	return 0;
+ }
+@@ -539,6 +575,7 @@ static int brcm_usb_phy_suspend(struct device *dev)
+ 	struct brcm_usb_phy_data *priv = dev_get_drvdata(dev);
+ 
+ 	if (priv->init_count) {
++		dev_dbg(dev, "SUSPEND\n");
+ 		priv->ini.wake_enabled = device_may_wakeup(dev);
+ 		if (priv->phys[BRCM_USB_PHY_3_0].inited)
+ 			brcm_usb_uninit_xhci(&priv->ini);
+@@ -578,6 +615,7 @@ static int brcm_usb_phy_resume(struct device *dev)
+ 	 * Uninitialize anything that wasn't previously initialized.
+ 	 */
+ 	if (priv->init_count) {
++		dev_dbg(dev, "RESUME\n");
+ 		if (priv->wake_irq >= 0)
+ 			disable_irq_wake(priv->wake_irq);
+ 		brcm_usb_init_common(&priv->ini);
+diff --git a/drivers/phy/mediatek/phy-mtk-tphy.c b/drivers/phy/mediatek/phy-mtk-tphy.c
+index 98a942c607a67..db39b0c4649a2 100644
+--- a/drivers/phy/mediatek/phy-mtk-tphy.c
++++ b/drivers/phy/mediatek/phy-mtk-tphy.c
+@@ -1125,7 +1125,7 @@ static int phy_efuse_get(struct mtk_tphy *tphy, struct mtk_phy_instance *instanc
+ 		/* no efuse, ignore it */
+ 		if (!instance->efuse_intr &&
+ 		    !instance->efuse_rx_imp &&
+-		    !instance->efuse_rx_imp) {
++		    !instance->efuse_tx_imp) {
+ 			dev_warn(dev, "no u3 intr efuse, but dts enable it\n");
+ 			instance->efuse_sw_en = 0;
+ 			break;
+diff --git a/drivers/pinctrl/bcm/Kconfig b/drivers/pinctrl/bcm/Kconfig
+index 8fc1feedd8617..5116b014e2a4f 100644
+--- a/drivers/pinctrl/bcm/Kconfig
++++ b/drivers/pinctrl/bcm/Kconfig
+@@ -35,6 +35,7 @@ config PINCTRL_BCM63XX
+ 	select PINCONF
+ 	select GENERIC_PINCONF
+ 	select GPIOLIB
++	select REGMAP
+ 	select GPIO_REGMAP
+ 
+ config PINCTRL_BCM6318
+diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c
+index 230593ae5d6de..8c74733530e3d 100644
+--- a/drivers/platform/x86/amd-pmc.c
++++ b/drivers/platform/x86/amd-pmc.c
+@@ -117,9 +117,10 @@ struct amd_pmc_dev {
+ 	u32 cpu_id;
+ 	u32 active_ips;
+ /* SMU version information */
+-	u16 major;
+-	u16 minor;
+-	u16 rev;
++	u8 smu_program;
++	u8 major;
++	u8 minor;
++	u8 rev;
+ 	struct device *dev;
+ 	struct mutex lock; /* generic mutex lock */
+ #if IS_ENABLED(CONFIG_DEBUG_FS)
+@@ -166,11 +167,13 @@ static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
+ 	if (rc)
+ 		return rc;
+ 
+-	dev->major = (val >> 16) & GENMASK(15, 0);
++	dev->smu_program = (val >> 24) & GENMASK(7, 0);
++	dev->major = (val >> 16) & GENMASK(7, 0);
+ 	dev->minor = (val >> 8) & GENMASK(7, 0);
+ 	dev->rev = (val >> 0) & GENMASK(7, 0);
+ 
+-	dev_dbg(dev->dev, "SMU version is %u.%u.%u\n", dev->major, dev->minor, dev->rev);
++	dev_dbg(dev->dev, "SMU program %u version is %u.%u.%u\n",
++		dev->smu_program, dev->major, dev->minor, dev->rev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index c9a85eb2e8600..e8424e70d81d2 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -596,7 +596,10 @@ static long isst_if_def_ioctl(struct file *file, unsigned int cmd,
+ 	return ret;
+ }
+ 
+-static DEFINE_MUTEX(punit_misc_dev_lock);
++/* Lock to prevent module registration when already opened by user space */
++static DEFINE_MUTEX(punit_misc_dev_open_lock);
++/* Lock to allow one share misc device for all ISST interace */
++static DEFINE_MUTEX(punit_misc_dev_reg_lock);
+ static int misc_usage_count;
+ static int misc_device_ret;
+ static int misc_device_open;
+@@ -606,7 +609,7 @@ static int isst_if_open(struct inode *inode, struct file *file)
+ 	int i, ret = 0;
+ 
+ 	/* Fail open, if a module is going away */
+-	mutex_lock(&punit_misc_dev_lock);
++	mutex_lock(&punit_misc_dev_open_lock);
+ 	for (i = 0; i < ISST_IF_DEV_MAX; ++i) {
+ 		struct isst_if_cmd_cb *cb = &punit_callbacks[i];
+ 
+@@ -628,7 +631,7 @@ static int isst_if_open(struct inode *inode, struct file *file)
+ 	} else {
+ 		misc_device_open++;
+ 	}
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ 
+ 	return ret;
+ }
+@@ -637,7 +640,7 @@ static int isst_if_relase(struct inode *inode, struct file *f)
+ {
+ 	int i;
+ 
+-	mutex_lock(&punit_misc_dev_lock);
++	mutex_lock(&punit_misc_dev_open_lock);
+ 	misc_device_open--;
+ 	for (i = 0; i < ISST_IF_DEV_MAX; ++i) {
+ 		struct isst_if_cmd_cb *cb = &punit_callbacks[i];
+@@ -645,7 +648,7 @@ static int isst_if_relase(struct inode *inode, struct file *f)
+ 		if (cb->registered)
+ 			module_put(cb->owner);
+ 	}
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ 
+ 	return 0;
+ }
+@@ -662,6 +665,43 @@ static struct miscdevice isst_if_char_driver = {
+ 	.fops		= &isst_if_char_driver_ops,
+ };
+ 
++static int isst_misc_reg(void)
++{
++	mutex_lock(&punit_misc_dev_reg_lock);
++	if (misc_device_ret)
++		goto unlock_exit;
++
++	if (!misc_usage_count) {
++		misc_device_ret = isst_if_cpu_info_init();
++		if (misc_device_ret)
++			goto unlock_exit;
++
++		misc_device_ret = misc_register(&isst_if_char_driver);
++		if (misc_device_ret) {
++			isst_if_cpu_info_exit();
++			goto unlock_exit;
++		}
++	}
++	misc_usage_count++;
++
++unlock_exit:
++	mutex_unlock(&punit_misc_dev_reg_lock);
++
++	return misc_device_ret;
++}
++
++static void isst_misc_unreg(void)
++{
++	mutex_lock(&punit_misc_dev_reg_lock);
++	if (misc_usage_count)
++		misc_usage_count--;
++	if (!misc_usage_count && !misc_device_ret) {
++		misc_deregister(&isst_if_char_driver);
++		isst_if_cpu_info_exit();
++	}
++	mutex_unlock(&punit_misc_dev_reg_lock);
++}
++
+ /**
+  * isst_if_cdev_register() - Register callback for IOCTL
+  * @device_type: The device type this callback handling.
+@@ -679,38 +719,31 @@ static struct miscdevice isst_if_char_driver = {
+  */
+ int isst_if_cdev_register(int device_type, struct isst_if_cmd_cb *cb)
+ {
+-	if (misc_device_ret)
+-		return misc_device_ret;
++	int ret;
+ 
+ 	if (device_type >= ISST_IF_DEV_MAX)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&punit_misc_dev_lock);
++	mutex_lock(&punit_misc_dev_open_lock);
++	/* Device is already open, we don't want to add new callbacks */
+ 	if (misc_device_open) {
+-		mutex_unlock(&punit_misc_dev_lock);
++		mutex_unlock(&punit_misc_dev_open_lock);
+ 		return -EAGAIN;
+ 	}
+-	if (!misc_usage_count) {
+-		int ret;
+-
+-		misc_device_ret = misc_register(&isst_if_char_driver);
+-		if (misc_device_ret)
+-			goto unlock_exit;
+-
+-		ret = isst_if_cpu_info_init();
+-		if (ret) {
+-			misc_deregister(&isst_if_char_driver);
+-			misc_device_ret = ret;
+-			goto unlock_exit;
+-		}
+-	}
+ 	memcpy(&punit_callbacks[device_type], cb, sizeof(*cb));
+ 	punit_callbacks[device_type].registered = 1;
+-	misc_usage_count++;
+-unlock_exit:
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ 
+-	return misc_device_ret;
++	ret = isst_misc_reg();
++	if (ret) {
++		/*
++		 * No need of mutex as the misc device register failed
++		 * as no one can open device yet. Hence no contention.
++		 */
++		punit_callbacks[device_type].registered = 0;
++		return ret;
++	}
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(isst_if_cdev_register);
+ 
+@@ -725,16 +758,12 @@ EXPORT_SYMBOL_GPL(isst_if_cdev_register);
+  */
+ void isst_if_cdev_unregister(int device_type)
+ {
+-	mutex_lock(&punit_misc_dev_lock);
+-	misc_usage_count--;
++	isst_misc_unreg();
++	mutex_lock(&punit_misc_dev_open_lock);
+ 	punit_callbacks[device_type].registered = 0;
+ 	if (device_type == ISST_IF_DEV_MBOX)
+ 		isst_delete_hash();
+-	if (!misc_usage_count && !misc_device_ret) {
+-		misc_deregister(&isst_if_char_driver);
+-		isst_if_cpu_info_exit();
+-	}
+-	mutex_unlock(&punit_misc_dev_lock);
++	mutex_unlock(&punit_misc_dev_open_lock);
+ }
+ EXPORT_SYMBOL_GPL(isst_if_cdev_unregister);
+ 
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 17dd54d4b783c..e318b40949679 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -773,6 +773,21 @@ static const struct ts_dmi_data predia_basic_data = {
+ 	.properties	= predia_basic_props,
+ };
+ 
++static const struct property_entry rwc_nanote_p8_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 46),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1728),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1140),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rwc-nanote-p8.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	{ }
++};
++
++static const struct ts_dmi_data rwc_nanote_p8_data = {
++	.acpi_name = "MSSL1680:00",
++	.properties = rwc_nanote_p8_props,
++};
++
+ static const struct property_entry schneider_sct101ctm_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 1715),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 1140),
+@@ -1406,6 +1421,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "0E57"),
+ 		},
+ 	},
++	{
++		/* RWC NANOTE P8 */
++		.driver_data = (void *)&rwc_nanote_p8_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Default string"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "AY07J"),
++			DMI_MATCH(DMI_PRODUCT_SKU, "0001")
++		},
++	},
+ 	{
+ 		/* Schneider SCT101CTM */
+ 		.driver_data = (void *)&schneider_sct101ctm_data,
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index f74a1c09c3518..936a7c067eef9 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -594,6 +594,7 @@ struct lpfc_vport {
+ #define FC_VPORT_LOGO_RCVD      0x200    /* LOGO received on vport */
+ #define FC_RSCN_DISCOVERY       0x400	 /* Auth all devices after RSCN */
+ #define FC_LOGO_RCVD_DID_CHNG   0x800    /* FDISC on phys port detect DID chng*/
++#define FC_PT2PT_NO_NVME        0x1000   /* Don't send NVME PRLI */
+ #define FC_SCSI_SCAN_TMO        0x4000	 /* scsi scan timer running */
+ #define FC_ABORT_DISCOVERY      0x8000	 /* we want to abort discovery */
+ #define FC_NDISC_ACTIVE         0x10000	 /* NPort discovery active */
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index bac78fbce8d6e..fa8415259cb8a 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -1315,6 +1315,9 @@ lpfc_issue_lip(struct Scsi_Host *shost)
+ 	pmboxq->u.mb.mbxCommand = MBX_DOWN_LINK;
+ 	pmboxq->u.mb.mbxOwner = OWN_HOST;
+ 
++	if ((vport->fc_flag & FC_PT2PT) && (vport->fc_flag & FC_PT2PT_NO_NVME))
++		vport->fc_flag &= ~FC_PT2PT_NO_NVME;
++
+ 	mbxstatus = lpfc_sli_issue_mbox_wait(phba, pmboxq, LPFC_MBOX_TMO * 2);
+ 
+ 	if ((mbxstatus == MBX_SUCCESS) &&
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 78024f11b794a..dcfa47165acdf 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1072,7 +1072,8 @@ stop_rr_fcf_flogi:
+ 
+ 		/* FLOGI failed, so there is no fabric */
+ 		spin_lock_irq(shost->host_lock);
+-		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
++		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP |
++				    FC_PT2PT_NO_NVME);
+ 		spin_unlock_irq(shost->host_lock);
+ 
+ 		/* If private loop, then allow max outstanding els to be
+@@ -4607,6 +4608,23 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 		/* Added for Vendor specifc support
+ 		 * Just keep retrying for these Rsn / Exp codes
+ 		 */
++		if ((vport->fc_flag & FC_PT2PT) &&
++		    cmd == ELS_CMD_NVMEPRLI) {
++			switch (stat.un.b.lsRjtRsnCode) {
++			case LSRJT_UNABLE_TPC:
++			case LSRJT_INVALID_CMD:
++			case LSRJT_LOGICAL_ERR:
++			case LSRJT_CMD_UNSUPPORTED:
++				lpfc_printf_vlog(vport, KERN_WARNING, LOG_ELS,
++						 "0168 NVME PRLI LS_RJT "
++						 "reason %x port doesn't "
++						 "support NVME, disabling NVME\n",
++						 stat.un.b.lsRjtRsnCode);
++				retry = 0;
++				vport->fc_flag |= FC_PT2PT_NO_NVME;
++				goto out_retry;
++			}
++		}
+ 		switch (stat.un.b.lsRjtRsnCode) {
+ 		case LSRJT_UNABLE_TPC:
+ 			/* The driver has a VALID PLOGI but the rport has
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index 7d717a4ac14d1..fdf5e777bf113 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -1961,8 +1961,9 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
+ 			 * is configured try it.
+ 			 */
+ 			ndlp->nlp_fc4_type |= NLP_FC4_FCP;
+-			if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+-			    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
++			if ((!(vport->fc_flag & FC_PT2PT_NO_NVME)) &&
++			    (vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH ||
++			    vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
+ 				ndlp->nlp_fc4_type |= NLP_FC4_NVME;
+ 				/* We need to update the localport also */
+ 				lpfc_nvme_update_localport(vport);
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 4390c8b9170cd..066290dd57565 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -2695,7 +2695,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	u32 tag = le32_to_cpu(psataPayload->tag);
+ 	u32 port_id = le32_to_cpu(psataPayload->port_id);
+ 	u32 dev_id = le32_to_cpu(psataPayload->device_id);
+-	unsigned long flags;
+ 
+ 	ccb = &pm8001_ha->ccb_info[tag];
+ 
+@@ -2735,8 +2734,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+-		if (pm8001_dev)
+-			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+@@ -2778,7 +2775,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAS_QUEUE_FULL;
+-			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
+ 			return;
+ 		}
+ 		break;
+@@ -2864,20 +2860,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	}
+-	spin_lock_irqsave(&t->task_state_lock, flags);
+-	t->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+-	t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+-	t->task_state_flags |= SAS_TASK_STATE_DONE;
+-	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+-		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		pm8001_dbg(pm8001_ha, FAIL,
+-			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
+-			   t, event, ts->resp, ts->stat);
+-		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+-	} else {
+-		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
+-	}
+ }
+ 
+ /*See the comments for mpi_ssp_completion */
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 83e73009db5cd..c0b45b8a513d7 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -753,8 +753,13 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 		res = -TMF_RESP_FUNC_FAILED;
+ 		/* Even TMF timed out, return direct. */
+ 		if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
++			struct pm8001_ccb_info *ccb = task->lldd_task;
++
+ 			pm8001_dbg(pm8001_ha, FAIL, "TMF task[%x]timeout.\n",
+ 				   tmf->tmf);
++
++			if (ccb)
++				ccb->task = NULL;
+ 			goto ex_err;
+ 		}
+ 
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 4c5b945bf3187..ca4820d99dc70 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -2184,9 +2184,9 @@ mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		pm8001_dbg(pm8001_ha, FAIL,
+ 			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
+ 			   t, status, ts->resp, ts->stat);
++		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 		if (t->slow_task)
+ 			complete(&t->slow_task->completion);
+-		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+@@ -2801,9 +2801,9 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
+ 		pm8001_dbg(pm8001_ha, FAIL,
+ 			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
+ 			   t, status, ts->resp, ts->stat);
++		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 		if (t->slow_task)
+ 			complete(&t->slow_task->completion);
+-		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
+ 		spin_unlock_irqrestore(&circularQ->oq_lock,
+@@ -2828,7 +2828,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha,
+ 	u32 tag = le32_to_cpu(psataPayload->tag);
+ 	u32 port_id = le32_to_cpu(psataPayload->port_id);
+ 	u32 dev_id = le32_to_cpu(psataPayload->device_id);
+-	unsigned long flags;
+ 
+ 	ccb = &pm8001_ha->ccb_info[tag];
+ 
+@@ -2866,8 +2865,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha,
+ 		ts->resp = SAS_TASK_COMPLETE;
+ 		ts->stat = SAS_DATA_OVERRUN;
+ 		ts->residual = 0;
+-		if (pm8001_dev)
+-			atomic_dec(&pm8001_dev->running_req);
+ 		break;
+ 	case IO_XFER_ERROR_BREAK:
+ 		pm8001_dbg(pm8001_ha, IO, "IO_XFER_ERROR_BREAK\n");
+@@ -2916,11 +2913,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha,
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAS_QUEUE_FULL;
+-			spin_unlock_irqrestore(&circularQ->oq_lock,
+-					circularQ->lock_flags);
+-			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
+-			spin_lock_irqsave(&circularQ->oq_lock,
+-					circularQ->lock_flags);
+ 			return;
+ 		}
+ 		break;
+@@ -3020,24 +3012,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha,
+ 		ts->stat = SAS_OPEN_TO;
+ 		break;
+ 	}
+-	spin_lock_irqsave(&t->task_state_lock, flags);
+-	t->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+-	t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+-	t->task_state_flags |= SAS_TASK_STATE_DONE;
+-	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+-		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		pm8001_dbg(pm8001_ha, FAIL,
+-			   "task 0x%p done with io_status 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
+-			   t, event, ts->resp, ts->stat);
+-		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+-	} else {
+-		spin_unlock_irqrestore(&t->task_state_lock, flags);
+-		spin_unlock_irqrestore(&circularQ->oq_lock,
+-				circularQ->lock_flags);
+-		pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
+-		spin_lock_irqsave(&circularQ->oq_lock,
+-				circularQ->lock_flags);
+-	}
+ }
+ 
+ /*See the comments for mpi_ssp_completion */
+diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
+index 5916ed7662d56..4eb89aa4a39dc 100644
+--- a/drivers/scsi/qedi/qedi_fw.c
++++ b/drivers/scsi/qedi/qedi_fw.c
+@@ -771,11 +771,10 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 			qedi_cmd->list_tmf_work = NULL;
+ 		}
+ 	}
++	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+ 
+-	if (!found) {
+-		spin_unlock_bh(&qedi_conn->tmf_work_lock);
++	if (!found)
+ 		goto check_cleanup_reqs;
+-	}
+ 
+ 	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+ 		  "TMF work, cqe->tid=0x%x, tmf flags=0x%x, cid=0x%x\n",
+@@ -806,7 +805,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 	qedi_cmd->state = CLEANUP_RECV;
+ unlock:
+ 	spin_unlock_bh(&conn->session->back_lock);
+-	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+ 	wake_up_interruptible(&qedi_conn->wait_queue);
+ 	return;
+ 
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 23e1c0acdeaee..d0ce723299bf7 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -214,6 +214,48 @@ static void scsi_unlock_floptical(struct scsi_device *sdev,
+ 			 SCSI_TIMEOUT, 3, NULL);
+ }
+ 
++static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
++					unsigned int depth)
++{
++	int new_shift = sbitmap_calculate_shift(depth);
++	bool need_alloc = !sdev->budget_map.map;
++	bool need_free = false;
++	int ret;
++	struct sbitmap sb_backup;
++
++	/*
++	 * realloc if new shift is calculated, which is caused by setting
++	 * up one new default queue depth after calling ->slave_configure
++	 */
++	if (!need_alloc && new_shift != sdev->budget_map.shift)
++		need_alloc = need_free = true;
++
++	if (!need_alloc)
++		return 0;
++
++	/*
++	 * Request queue has to be frozen for reallocating budget map,
++	 * and here disk isn't added yet, so freezing is pretty fast
++	 */
++	if (need_free) {
++		blk_mq_freeze_queue(sdev->request_queue);
++		sb_backup = sdev->budget_map;
++	}
++	ret = sbitmap_init_node(&sdev->budget_map,
++				scsi_device_max_queue_depth(sdev),
++				new_shift, GFP_KERNEL,
++				sdev->request_queue->node, false, true);
++	if (need_free) {
++		if (ret)
++			sdev->budget_map = sb_backup;
++		else
++			sbitmap_free(&sb_backup);
++		ret = 0;
++		blk_mq_unfreeze_queue(sdev->request_queue);
++	}
++	return ret;
++}
++
+ /**
+  * scsi_alloc_sdev - allocate and setup a scsi_Device
+  * @starget: which target to allocate a &scsi_device for
+@@ -306,11 +348,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
+ 	 * default device queue depth to figure out sbitmap shift
+ 	 * since we use this queue depth most of times.
+ 	 */
+-	if (sbitmap_init_node(&sdev->budget_map,
+-				scsi_device_max_queue_depth(sdev),
+-				sbitmap_calculate_shift(depth),
+-				GFP_KERNEL, sdev->request_queue->node,
+-				false, true)) {
++	if (scsi_realloc_sdev_budget_map(sdev, depth)) {
+ 		put_device(&starget->dev);
+ 		kfree(sdev);
+ 		goto out;
+@@ -1017,6 +1055,13 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ 			}
+ 			return SCSI_SCAN_NO_RESPONSE;
+ 		}
++
++		/*
++		 * The queue_depth is often changed in ->slave_configure.
++		 * Set up budget map again since memory consumption of
++		 * the map depends on actual queue depth.
++		 */
++		scsi_realloc_sdev_budget_map(sdev, sdev->queue_depth);
+ 	}
+ 
+ 	if (sdev->scsi_level >= SCSI_3)
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index ec7d7e01231d7..9e7aa3a2fdf54 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -128,8 +128,9 @@ EXPORT_SYMBOL_GPL(ufshcd_dump_regs);
+ enum {
+ 	UFSHCD_MAX_CHANNEL	= 0,
+ 	UFSHCD_MAX_ID		= 1,
+-	UFSHCD_CMD_PER_LUN	= 32,
+-	UFSHCD_CAN_QUEUE	= 32,
++	UFSHCD_NUM_RESERVED	= 1,
++	UFSHCD_CMD_PER_LUN	= 32 - UFSHCD_NUM_RESERVED,
++	UFSHCD_CAN_QUEUE	= 32 - UFSHCD_NUM_RESERVED,
+ };
+ 
+ static const char *const ufshcd_state_name[] = {
+@@ -2194,6 +2195,7 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba)
+ 	hba->nutrs = (hba->capabilities & MASK_TRANSFER_REQUESTS_SLOTS) + 1;
+ 	hba->nutmrs =
+ 	((hba->capabilities & MASK_TASK_MANAGEMENT_REQUEST_SLOTS) >> 16) + 1;
++	hba->reserved_slot = hba->nutrs - 1;
+ 
+ 	/* Read crypto capabilities */
+ 	err = ufshcd_hba_init_crypto_capabilities(hba);
+@@ -2941,30 +2943,15 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba,
+ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ 		enum dev_cmd_type cmd_type, int timeout)
+ {
+-	struct request_queue *q = hba->cmd_queue;
+ 	DECLARE_COMPLETION_ONSTACK(wait);
+-	struct request *req;
++	const u32 tag = hba->reserved_slot;
+ 	struct ufshcd_lrb *lrbp;
+ 	int err;
+-	int tag;
+ 
+-	down_read(&hba->clk_scaling_lock);
++	/* Protects use of hba->reserved_slot. */
++	lockdep_assert_held(&hba->dev_cmd.lock);
+ 
+-	/*
+-	 * Get free slot, sleep if slots are unavailable.
+-	 * Even though we use wait_event() which sleeps indefinitely,
+-	 * the maximum wait time is bounded by SCSI request timeout.
+-	 */
+-	req = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, 0);
+-	if (IS_ERR(req)) {
+-		err = PTR_ERR(req);
+-		goto out_unlock;
+-	}
+-	tag = req->tag;
+-	WARN_ONCE(tag < 0, "Invalid tag %d\n", tag);
+-	/* Set the timeout such that the SCSI error handler is not activated. */
+-	req->timeout = msecs_to_jiffies(2 * timeout);
+-	blk_mq_start_request(req);
++	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+ 	WARN_ON(lrbp->cmd);
+@@ -2982,8 +2969,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ 				    (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
+ 
+ out:
+-	blk_mq_free_request(req);
+-out_unlock:
+ 	up_read(&hba->clk_scaling_lock);
+ 	return err;
+ }
+@@ -6716,28 +6701,16 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
+ 					enum dev_cmd_type cmd_type,
+ 					enum query_opcode desc_op)
+ {
+-	struct request_queue *q = hba->cmd_queue;
+ 	DECLARE_COMPLETION_ONSTACK(wait);
+-	struct request *req;
++	const u32 tag = hba->reserved_slot;
+ 	struct ufshcd_lrb *lrbp;
+ 	int err = 0;
+-	int tag;
+ 	u8 upiu_flags;
+ 
+-	down_read(&hba->clk_scaling_lock);
+-
+-	req = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, 0);
+-	if (IS_ERR(req)) {
+-		err = PTR_ERR(req);
+-		goto out_unlock;
+-	}
+-	tag = req->tag;
+-	WARN_ONCE(tag < 0, "Invalid tag %d\n", tag);
++	/* Protects use of hba->reserved_slot. */
++	lockdep_assert_held(&hba->dev_cmd.lock);
+ 
+-	if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
+-		err = -EBUSY;
+-		goto out;
+-	}
++	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+ 	WARN_ON(lrbp->cmd);
+@@ -6806,9 +6779,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
+ 	ufshcd_add_query_upiu_trace(hba, err ? UFS_QUERY_ERR : UFS_QUERY_COMP,
+ 				    (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
+ 
+-out:
+-	blk_mq_free_request(req);
+-out_unlock:
+ 	up_read(&hba->clk_scaling_lock);
+ 	return err;
+ }
+@@ -9543,8 +9513,8 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 	/* Configure LRB */
+ 	ufshcd_host_memory_configure(hba);
+ 
+-	host->can_queue = hba->nutrs;
+-	host->cmd_per_lun = hba->nutrs;
++	host->can_queue = hba->nutrs - UFSHCD_NUM_RESERVED;
++	host->cmd_per_lun = hba->nutrs - UFSHCD_NUM_RESERVED;
+ 	host->max_id = UFSHCD_MAX_ID;
+ 	host->max_lun = UFS_MAX_LUNS;
+ 	host->max_channel = UFSHCD_MAX_CHANNEL;
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 54750d72c8fb0..26fbf1b9ab156 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -744,6 +744,7 @@ struct ufs_hba_monitor {
+  * @capabilities: UFS Controller Capabilities
+  * @nutrs: Transfer Request Queue depth supported by controller
+  * @nutmrs: Task Management Queue depth supported by controller
++ * @reserved_slot: Used to submit device commands. Protected by @dev_cmd.lock.
+  * @ufs_version: UFS Version to which controller complies
+  * @vops: pointer to variant specific operations
+  * @priv: pointer to variant specific private data
+@@ -836,6 +837,7 @@ struct ufs_hba {
+ 	u32 capabilities;
+ 	int nutrs;
+ 	int nutmrs;
++	u32 reserved_slot;
+ 	u32 ufs_version;
+ 	const struct ufs_hba_variant_ops *vops;
+ 	struct ufs_hba_variant_params *vps;
+diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+index 72771e018c42e..258894ed234b3 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+@@ -306,10 +306,9 @@ static int aspeed_lpc_ctrl_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	lpc_ctrl->clk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(lpc_ctrl->clk)) {
+-		dev_err(dev, "couldn't get clock\n");
+-		return PTR_ERR(lpc_ctrl->clk);
+-	}
++	if (IS_ERR(lpc_ctrl->clk))
++		return dev_err_probe(dev, PTR_ERR(lpc_ctrl->clk),
++				     "couldn't get clock\n");
+ 	rc = clk_prepare_enable(lpc_ctrl->clk);
+ 	if (rc) {
+ 		dev_err(dev, "couldn't enable clock\n");
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index c650a32bcedff..b9505bb51f45c 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -1058,15 +1058,27 @@ service_callback(enum vchiq_reason reason, struct vchiq_header *header,
+ 
+ 	DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+ 
++	rcu_read_lock();
+ 	service = handle_to_service(handle);
+-	if (WARN_ON(!service))
++	if (WARN_ON(!service)) {
++		rcu_read_unlock();
+ 		return VCHIQ_SUCCESS;
++	}
+ 
+ 	user_service = (struct user_service *)service->base.userdata;
+ 	instance = user_service->instance;
+ 
+-	if (!instance || instance->closing)
++	if (!instance || instance->closing) {
++		rcu_read_unlock();
+ 		return VCHIQ_SUCCESS;
++	}
++
++	/*
++	 * As hopping around different synchronization mechanism,
++	 * taking an extra reference results in simpler implementation.
++	 */
++	vchiq_service_get(service);
++	rcu_read_unlock();
+ 
+ 	vchiq_log_trace(vchiq_arm_log_level,
+ 			"%s - service %lx(%d,%p), reason %d, header %lx, instance %lx, bulk_userdata %lx",
+@@ -1097,6 +1109,7 @@ service_callback(enum vchiq_reason reason, struct vchiq_header *header,
+ 							bulk_userdata);
+ 				if (status != VCHIQ_SUCCESS) {
+ 					DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++					vchiq_service_put(service);
+ 					return status;
+ 				}
+ 			}
+@@ -1105,10 +1118,12 @@ service_callback(enum vchiq_reason reason, struct vchiq_header *header,
+ 			if (wait_for_completion_interruptible(&user_service->remove_event)) {
+ 				vchiq_log_info(vchiq_arm_log_level, "%s interrupted", __func__);
+ 				DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++				vchiq_service_put(service);
+ 				return VCHIQ_RETRY;
+ 			} else if (instance->closing) {
+ 				vchiq_log_info(vchiq_arm_log_level, "%s closing", __func__);
+ 				DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++				vchiq_service_put(service);
+ 				return VCHIQ_ERROR;
+ 			}
+ 			DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+@@ -1137,6 +1152,7 @@ service_callback(enum vchiq_reason reason, struct vchiq_header *header,
+ 		header = NULL;
+ 	}
+ 	DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++	vchiq_service_put(service);
+ 
+ 	if (skip_completion)
+ 		return VCHIQ_SUCCESS;
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index 2a66a5203d2fa..7bf9e888b6214 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -157,6 +157,7 @@ void optee_remove_common(struct optee *optee)
+ 	/* Unregister OP-TEE specific client devices on TEE bus */
+ 	optee_unregister_devices();
+ 
++	teedev_close_context(optee->ctx);
+ 	/*
+ 	 * The two devices have to be unregistered before we can free the
+ 	 * other resources.
+diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
+index 28d7c0eafc025..39546110fc9a2 100644
+--- a/drivers/tee/optee/ffa_abi.c
++++ b/drivers/tee/optee/ffa_abi.c
+@@ -424,6 +424,7 @@ static struct tee_shm_pool_mgr *optee_ffa_shm_pool_alloc_pages(void)
+  */
+ 
+ static void handle_ffa_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
++					      struct optee *optee,
+ 					      struct optee_msg_arg *arg)
+ {
+ 	struct tee_shm *shm;
+@@ -439,7 +440,7 @@ static void handle_ffa_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
+ 		shm = optee_rpc_cmd_alloc_suppl(ctx, arg->params[0].u.value.b);
+ 		break;
+ 	case OPTEE_RPC_SHM_TYPE_KERNEL:
+-		shm = tee_shm_alloc(ctx, arg->params[0].u.value.b,
++		shm = tee_shm_alloc(optee->ctx, arg->params[0].u.value.b,
+ 				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		break;
+ 	default:
+@@ -493,14 +494,13 @@ err_bad_param:
+ }
+ 
+ static void handle_ffa_rpc_func_cmd(struct tee_context *ctx,
++				    struct optee *optee,
+ 				    struct optee_msg_arg *arg)
+ {
+-	struct optee *optee = tee_get_drvdata(ctx->teedev);
+-
+ 	arg->ret_origin = TEEC_ORIGIN_COMMS;
+ 	switch (arg->cmd) {
+ 	case OPTEE_RPC_CMD_SHM_ALLOC:
+-		handle_ffa_rpc_func_cmd_shm_alloc(ctx, arg);
++		handle_ffa_rpc_func_cmd_shm_alloc(ctx, optee, arg);
+ 		break;
+ 	case OPTEE_RPC_CMD_SHM_FREE:
+ 		handle_ffa_rpc_func_cmd_shm_free(ctx, optee, arg);
+@@ -510,12 +510,12 @@ static void handle_ffa_rpc_func_cmd(struct tee_context *ctx,
+ 	}
+ }
+ 
+-static void optee_handle_ffa_rpc(struct tee_context *ctx, u32 cmd,
+-				 struct optee_msg_arg *arg)
++static void optee_handle_ffa_rpc(struct tee_context *ctx, struct optee *optee,
++				 u32 cmd, struct optee_msg_arg *arg)
+ {
+ 	switch (cmd) {
+ 	case OPTEE_FFA_YIELDING_CALL_RETURN_RPC_CMD:
+-		handle_ffa_rpc_func_cmd(ctx, arg);
++		handle_ffa_rpc_func_cmd(ctx, optee, arg);
+ 		break;
+ 	case OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT:
+ 		/* Interrupt delivered by now */
+@@ -582,7 +582,7 @@ static int optee_ffa_yielding_call(struct tee_context *ctx,
+ 		 * above.
+ 		 */
+ 		cond_resched();
+-		optee_handle_ffa_rpc(ctx, data->data1, rpc_arg);
++		optee_handle_ffa_rpc(ctx, optee, data->data1, rpc_arg);
+ 		cmd = OPTEE_FFA_YIELDING_CALL_RESUME;
+ 		data->data0 = cmd;
+ 		data->data1 = 0;
+@@ -802,7 +802,9 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
+ {
+ 	const struct ffa_dev_ops *ffa_ops;
+ 	unsigned int rpc_arg_count;
++	struct tee_shm_pool *pool;
+ 	struct tee_device *teedev;
++	struct tee_context *ctx;
+ 	struct optee *optee;
+ 	int rc;
+ 
+@@ -822,12 +824,12 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
+ 	if (!optee)
+ 		return -ENOMEM;
+ 
+-	optee->pool = optee_ffa_config_dyn_shm();
+-	if (IS_ERR(optee->pool)) {
+-		rc = PTR_ERR(optee->pool);
+-		optee->pool = NULL;
+-		goto err;
++	pool = optee_ffa_config_dyn_shm();
++	if (IS_ERR(pool)) {
++		rc = PTR_ERR(pool);
++		goto err_free_optee;
+ 	}
++	optee->pool = pool;
+ 
+ 	optee->ops = &optee_ffa_ops;
+ 	optee->ffa.ffa_dev = ffa_dev;
+@@ -838,7 +840,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
+ 				  optee);
+ 	if (IS_ERR(teedev)) {
+ 		rc = PTR_ERR(teedev);
+-		goto err;
++		goto err_free_pool;
+ 	}
+ 	optee->teedev = teedev;
+ 
+@@ -846,46 +848,54 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
+ 				  optee);
+ 	if (IS_ERR(teedev)) {
+ 		rc = PTR_ERR(teedev);
+-		goto err;
++		goto err_unreg_teedev;
+ 	}
+ 	optee->supp_teedev = teedev;
+ 
+ 	rc = tee_device_register(optee->teedev);
+ 	if (rc)
+-		goto err;
++		goto err_unreg_supp_teedev;
+ 
+ 	rc = tee_device_register(optee->supp_teedev);
+ 	if (rc)
+-		goto err;
++		goto err_unreg_supp_teedev;
+ 
+ 	rc = rhashtable_init(&optee->ffa.global_ids, &shm_rhash_params);
+ 	if (rc)
+-		goto err;
++		goto err_unreg_supp_teedev;
+ 	mutex_init(&optee->ffa.mutex);
+ 	mutex_init(&optee->call_queue.mutex);
+ 	INIT_LIST_HEAD(&optee->call_queue.waiters);
+ 	optee_wait_queue_init(&optee->wait_queue);
+ 	optee_supp_init(&optee->supp);
+ 	ffa_dev_set_drvdata(ffa_dev, optee);
++	ctx = teedev_open(optee->teedev);
++	if (IS_ERR(ctx))
++		goto err_rhashtable_free;
++	optee->ctx = ctx;
++
+ 
+ 	rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES);
+-	if (rc) {
+-		optee_ffa_remove(ffa_dev);
+-		return rc;
+-	}
++	if (rc)
++		goto err_unregister_devices;
+ 
+ 	pr_info("initialized driver\n");
+ 	return 0;
+-err:
+-	/*
+-	 * tee_device_unregister() is safe to call even if the
+-	 * devices hasn't been registered with
+-	 * tee_device_register() yet.
+-	 */
++
++err_unregister_devices:
++	optee_unregister_devices();
++	teedev_close_context(ctx);
++err_rhashtable_free:
++	rhashtable_free_and_destroy(&optee->ffa.global_ids, rh_free_fn, NULL);
++	optee_supp_uninit(&optee->supp);
++	mutex_destroy(&optee->call_queue.mutex);
++err_unreg_supp_teedev:
+ 	tee_device_unregister(optee->supp_teedev);
++err_unreg_teedev:
+ 	tee_device_unregister(optee->teedev);
+-	if (optee->pool)
+-		tee_shm_pool_free(optee->pool);
++err_free_pool:
++	tee_shm_pool_free(pool);
++err_free_optee:
+ 	kfree(optee);
+ 	return rc;
+ }
+diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h
+index 6660e05298db8..912b046976564 100644
+--- a/drivers/tee/optee/optee_private.h
++++ b/drivers/tee/optee/optee_private.h
+@@ -123,9 +123,10 @@ struct optee_ops {
+ /**
+  * struct optee - main service struct
+  * @supp_teedev:	supplicant device
++ * @teedev:		client device
+  * @ops:		internal callbacks for different ways to reach secure
+  *			world
+- * @teedev:		client device
++ * @ctx:		driver internal TEE context
+  * @smc:		specific to SMC ABI
+  * @ffa:		specific to FF-A ABI
+  * @call_queue:		queue of threads waiting to call @invoke_fn
+@@ -142,6 +143,7 @@ struct optee {
+ 	struct tee_device *supp_teedev;
+ 	struct tee_device *teedev;
+ 	const struct optee_ops *ops;
++	struct tee_context *ctx;
+ 	union {
+ 		struct optee_smc smc;
+ 		struct optee_ffa ffa;
+diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
+index 09e7ec673bb6b..33f55ea52bc89 100644
+--- a/drivers/tee/optee/smc_abi.c
++++ b/drivers/tee/optee/smc_abi.c
+@@ -608,6 +608,7 @@ static void handle_rpc_func_cmd_shm_free(struct tee_context *ctx,
+ }
+ 
+ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
++					  struct optee *optee,
+ 					  struct optee_msg_arg *arg,
+ 					  struct optee_call_ctx *call_ctx)
+ {
+@@ -637,7 +638,8 @@ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
+ 		shm = optee_rpc_cmd_alloc_suppl(ctx, sz);
+ 		break;
+ 	case OPTEE_RPC_SHM_TYPE_KERNEL:
+-		shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV);
++		shm = tee_shm_alloc(optee->ctx, sz,
++				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		break;
+ 	default:
+ 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
+@@ -733,7 +735,7 @@ static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee,
+ 	switch (arg->cmd) {
+ 	case OPTEE_RPC_CMD_SHM_ALLOC:
+ 		free_pages_list(call_ctx);
+-		handle_rpc_func_cmd_shm_alloc(ctx, arg, call_ctx);
++		handle_rpc_func_cmd_shm_alloc(ctx, optee, arg, call_ctx);
+ 		break;
+ 	case OPTEE_RPC_CMD_SHM_FREE:
+ 		handle_rpc_func_cmd_shm_free(ctx, arg);
+@@ -762,7 +764,7 @@ static void optee_handle_rpc(struct tee_context *ctx,
+ 
+ 	switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) {
+ 	case OPTEE_SMC_RPC_FUNC_ALLOC:
+-		shm = tee_shm_alloc(ctx, param->a1,
++		shm = tee_shm_alloc(optee->ctx, param->a1,
+ 				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) {
+ 			reg_pair_from_64(&param->a1, &param->a2, pa);
+@@ -1207,6 +1209,7 @@ static int optee_probe(struct platform_device *pdev)
+ 	struct optee *optee = NULL;
+ 	void *memremaped_shm = NULL;
+ 	struct tee_device *teedev;
++	struct tee_context *ctx;
+ 	u32 sec_caps;
+ 	int rc;
+ 
+@@ -1284,6 +1287,10 @@ static int optee_probe(struct platform_device *pdev)
+ 	optee_supp_init(&optee->supp);
+ 	optee->smc.memremaped_shm = memremaped_shm;
+ 	optee->pool = pool;
++	ctx = teedev_open(optee->teedev);
++	if (IS_ERR(ctx))
++		goto err;
++	optee->ctx = ctx;
+ 
+ 	/*
+ 	 * Ensure that there are no pre-existing shm objects before enabling
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index 85102d12d7169..3fc426dad2df3 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -43,7 +43,7 @@ static DEFINE_SPINLOCK(driver_lock);
+ static struct class *tee_class;
+ static dev_t tee_devt;
+ 
+-static struct tee_context *teedev_open(struct tee_device *teedev)
++struct tee_context *teedev_open(struct tee_device *teedev)
+ {
+ 	int rc;
+ 	struct tee_context *ctx;
+@@ -70,6 +70,7 @@ err:
+ 	return ERR_PTR(rc);
+ 
+ }
++EXPORT_SYMBOL_GPL(teedev_open);
+ 
+ void teedev_ctx_get(struct tee_context *ctx)
+ {
+@@ -96,13 +97,14 @@ void teedev_ctx_put(struct tee_context *ctx)
+ 	kref_put(&ctx->refcount, teedev_ctx_release);
+ }
+ 
+-static void teedev_close_context(struct tee_context *ctx)
++void teedev_close_context(struct tee_context *ctx)
+ {
+ 	struct tee_device *teedev = ctx->teedev;
+ 
+ 	teedev_ctx_put(ctx);
+ 	tee_device_put(teedev);
+ }
++EXPORT_SYMBOL_GPL(teedev_close_context);
+ 
+ static int tee_open(struct inode *inode, struct file *filp)
+ {
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index b2b98fe689e9e..5c25bbe1a09ff 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -1963,7 +1963,7 @@ static bool canon_copy_from_read_buf(struct tty_struct *tty,
+ 		return false;
+ 
+ 	canon_head = smp_load_acquire(&ldata->canon_head);
+-	n = min(*nr + 1, canon_head - ldata->read_tail);
++	n = min(*nr, canon_head - ldata->read_tail);
+ 
+ 	tail = ldata->read_tail & (N_TTY_BUF_SIZE - 1);
+ 	size = min_t(size_t, tail + n, N_TTY_BUF_SIZE);
+@@ -1985,10 +1985,8 @@ static bool canon_copy_from_read_buf(struct tty_struct *tty,
+ 		n += N_TTY_BUF_SIZE;
+ 	c = n + found;
+ 
+-	if (!found || read_buf(ldata, eol) != __DISABLED_CHAR) {
+-		c = min(*nr, c);
++	if (!found || read_buf(ldata, eol) != __DISABLED_CHAR)
+ 		n = c;
+-	}
+ 
+ 	n_tty_trace("%s: eol:%zu found:%d n:%zu c:%zu tail:%zu more:%zu\n",
+ 		    __func__, eol, found, n, c, tail, more);
+diff --git a/drivers/tty/serial/8250/8250_gsc.c b/drivers/tty/serial/8250/8250_gsc.c
+index 673cda3d011d0..948d0a1c6ae8e 100644
+--- a/drivers/tty/serial/8250/8250_gsc.c
++++ b/drivers/tty/serial/8250/8250_gsc.c
+@@ -26,7 +26,7 @@ static int __init serial_init_chip(struct parisc_device *dev)
+ 	unsigned long address;
+ 	int err;
+ 
+-#ifdef CONFIG_64BIT
++#if defined(CONFIG_64BIT) && defined(CONFIG_IOSAPIC)
+ 	if (!dev->irq && (dev->id.sversion == 0xad))
+ 		dev->irq = iosapic_serial_irq(dev);
+ #endif
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 48e03e176f319..cec7163bc8730 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1184,6 +1184,10 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 		if (em->generation < newer_than)
+ 			goto next;
+ 
++		/* This em is under writeback, no need to defrag */
++		if (em->generation == (u64)-1)
++			goto next;
++
+ 		/*
+ 		 * Our start offset might be in the middle of an existing extent
+ 		 * map, so take that into account.
+@@ -1603,6 +1607,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 			ret = 0;
+ 			break;
+ 		}
++		cond_resched();
+ 	}
+ 
+ 	if (ra_allocated)
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 040324d711188..7e1159474a4e6 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4983,6 +4983,10 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
+ 			lock_page(page);
+ 			if (!PageUptodate(page)) {
+ 				unlock_page(page);
++				btrfs_err(fs_info,
++			"send: IO error at offset %llu for inode %llu root %llu",
++					page_offset(page), sctx->cur_ino,
++					sctx->send_root->root_key.objectid);
+ 				put_page(page);
+ 				ret = -EIO;
+ 				break;
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index cefd0e9623ba9..fb69524a992bb 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1796,13 +1796,9 @@ void cifs_put_smb_ses(struct cifs_ses *ses)
+ 		int i;
+ 
+ 		for (i = 1; i < chan_count; i++) {
+-			/*
+-			 * note: for now, we're okay accessing ses->chans
+-			 * without chan_lock. But when chans can go away, we'll
+-			 * need to introduce ref counting to make sure that chan
+-			 * is not freed from under us.
+-			 */
++			spin_unlock(&ses->chan_lock);
+ 			cifs_put_tcp_session(ses->chans[i].server, 0);
++			spin_lock(&ses->chan_lock);
+ 			ses->chans[i].server = NULL;
+ 		}
+ 	}
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index e3ed25dc6f3f6..43c406b812fed 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -147,7 +147,7 @@ const struct fs_parameter_spec smb3_fs_parameters[] = {
+ 	fsparam_u32("echo_interval", Opt_echo_interval),
+ 	fsparam_u32("max_credits", Opt_max_credits),
+ 	fsparam_u32("handletimeout", Opt_handletimeout),
+-	fsparam_u32("snapshot", Opt_snapshot),
++	fsparam_u64("snapshot", Opt_snapshot),
+ 	fsparam_u32("max_channels", Opt_max_channels),
+ 
+ 	/* Mount options which take string value */
+@@ -1072,7 +1072,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 		ctx->echo_interval = result.uint_32;
+ 		break;
+ 	case Opt_snapshot:
+-		ctx->snapshot_time = result.uint_32;
++		ctx->snapshot_time = result.uint_64;
+ 		break;
+ 	case Opt_max_credits:
+ 		if (result.uint_32 < 20 || result.uint_32 > 60000) {
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 38574fc70117e..1ecfa53e4b0a1 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -76,11 +76,6 @@ int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
+ 	struct cifs_server_iface *ifaces = NULL;
+ 	size_t iface_count;
+ 
+-	if (ses->server->dialect < SMB30_PROT_ID) {
+-		cifs_dbg(VFS, "multichannel is not supported on this protocol version, use 3.0 or above\n");
+-		return 0;
+-	}
+-
+ 	spin_lock(&ses->chan_lock);
+ 
+ 	new_chan_count = old_chan_count = ses->chan_count;
+@@ -94,6 +89,12 @@ int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
+ 		return 0;
+ 	}
+ 
++	if (ses->server->dialect < SMB30_PROT_ID) {
++		spin_unlock(&ses->chan_lock);
++		cifs_dbg(VFS, "multichannel is not supported on this protocol version, use 3.0 or above\n");
++		return 0;
++	}
++
+ 	if (!(ses->server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
+ 		ses->chan_max = 1;
+ 		spin_unlock(&ses->chan_lock);
+diff --git a/fs/cifs/xattr.c b/fs/cifs/xattr.c
+index 7d8b72d67c803..9d486fbbfbbde 100644
+--- a/fs/cifs/xattr.c
++++ b/fs/cifs/xattr.c
+@@ -175,11 +175,13 @@ static int cifs_xattr_set(const struct xattr_handler *handler,
+ 				switch (handler->flags) {
+ 				case XATTR_CIFS_NTSD_FULL:
+ 					aclflags = (CIFS_ACL_OWNER |
++						    CIFS_ACL_GROUP |
+ 						    CIFS_ACL_DACL |
+ 						    CIFS_ACL_SACL);
+ 					break;
+ 				case XATTR_CIFS_NTSD:
+ 					aclflags = (CIFS_ACL_OWNER |
++						    CIFS_ACL_GROUP |
+ 						    CIFS_ACL_DACL);
+ 					break;
+ 				case XATTR_CIFS_ACL:
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 698db7fb62e06..a92f276f21d9c 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -8872,10 +8872,9 @@ static void io_mem_free(void *ptr)
+ 
+ static void *io_mem_alloc(size_t size)
+ {
+-	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP |
+-				__GFP_NORETRY | __GFP_ACCOUNT;
++	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP;
+ 
+-	return (void *) __get_free_pages(gfp_flags, get_order(size));
++	return (void *) __get_free_pages(gfp, get_order(size));
+ }
+ 
+ static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 1ff1e52f398fc..cbbbccdc5a0a5 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -3423,9 +3423,9 @@ static int smb2_populate_readdir_entry(struct ksmbd_conn *conn, int info_level,
+ 		goto free_conv_name;
+ 	}
+ 
+-	struct_sz = readdir_info_level_struct_sz(info_level);
+-	next_entry_offset = ALIGN(struct_sz - 1 + conv_len,
+-				  KSMBD_DIR_INFO_ALIGNMENT);
++	struct_sz = readdir_info_level_struct_sz(info_level) - 1 + conv_len;
++	next_entry_offset = ALIGN(struct_sz, KSMBD_DIR_INFO_ALIGNMENT);
++	d_info->last_entry_off_align = next_entry_offset - struct_sz;
+ 
+ 	if (next_entry_offset > d_info->out_buf_len) {
+ 		d_info->out_buf_len = 0;
+@@ -3977,6 +3977,7 @@ int smb2_query_dir(struct ksmbd_work *work)
+ 		((struct file_directory_info *)
+ 		((char *)rsp->Buffer + d_info.last_entry_offset))
+ 		->NextEntryOffset = 0;
++		d_info.data_count -= d_info.last_entry_off_align;
+ 
+ 		rsp->StructureSize = cpu_to_le16(9);
+ 		rsp->OutputBufferOffset = cpu_to_le16(72);
+diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
+index ef7f42b0290a8..9a7e211dbf4f4 100644
+--- a/fs/ksmbd/smb_common.c
++++ b/fs/ksmbd/smb_common.c
+@@ -308,14 +308,17 @@ int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level,
+ 	for (i = 0; i < 2; i++) {
+ 		struct kstat kstat;
+ 		struct ksmbd_kstat ksmbd_kstat;
++		struct dentry *dentry;
+ 
+ 		if (!dir->dot_dotdot[i]) { /* fill dot entry info */
+ 			if (i == 0) {
+ 				d_info->name = ".";
+ 				d_info->name_len = 1;
++				dentry = dir->filp->f_path.dentry;
+ 			} else {
+ 				d_info->name = "..";
+ 				d_info->name_len = 2;
++				dentry = dir->filp->f_path.dentry->d_parent;
+ 			}
+ 
+ 			if (!match_pattern(d_info->name, d_info->name_len,
+@@ -327,7 +330,7 @@ int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level,
+ 			ksmbd_kstat.kstat = &kstat;
+ 			ksmbd_vfs_fill_dentry_attrs(work,
+ 						    user_ns,
+-						    dir->filp->f_path.dentry->d_parent,
++						    dentry,
+ 						    &ksmbd_kstat);
+ 			rc = fn(conn, info_level, d_info, &ksmbd_kstat);
+ 			if (rc)
+diff --git a/fs/ksmbd/vfs.h b/fs/ksmbd/vfs.h
+index adf94a4f22fa6..8c37aaf936ab1 100644
+--- a/fs/ksmbd/vfs.h
++++ b/fs/ksmbd/vfs.h
+@@ -47,6 +47,7 @@ struct ksmbd_dir_info {
+ 	int		last_entry_offset;
+ 	bool		hide_dot_file;
+ 	int		flags;
++	int		last_entry_off_align;
+ };
+ 
+ struct ksmbd_readdir_data {
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index b2460a0504411..877f72433f435 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1982,14 +1982,14 @@ no_open:
+ 	if (!res) {
+ 		inode = d_inode(dentry);
+ 		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
+-		    !S_ISDIR(inode->i_mode))
++		    !(S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)))
+ 			res = ERR_PTR(-ENOTDIR);
+ 		else if (inode && S_ISREG(inode->i_mode))
+ 			res = ERR_PTR(-EOPENSTALE);
+ 	} else if (!IS_ERR(res)) {
+ 		inode = d_inode(res);
+ 		if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
+-		    !S_ISDIR(inode->i_mode)) {
++		    !(S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))) {
+ 			dput(res);
+ 			res = ERR_PTR(-ENOTDIR);
+ 		} else if (inode && S_ISREG(inode->i_mode)) {
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index fda530d5e7640..a09d3ff627c20 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -853,12 +853,9 @@ int nfs_getattr(struct user_namespace *mnt_userns, const struct path *path,
+ 	}
+ 
+ 	/* Flush out writes to the server in order to update c/mtime.  */
+-	if ((request_mask & (STATX_CTIME|STATX_MTIME)) &&
+-			S_ISREG(inode->i_mode)) {
+-		err = filemap_write_and_wait(inode->i_mapping);
+-		if (err)
+-			goto out;
+-	}
++	if ((request_mask & (STATX_CTIME | STATX_MTIME)) &&
++	    S_ISREG(inode->i_mode))
++		filemap_write_and_wait(inode->i_mapping);
+ 
+ 	/*
+ 	 * We may force a getattr if the user cares about atime.
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9a94e758212c8..0abbbf5d2bdf1 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1233,8 +1233,7 @@ nfs4_update_changeattr_locked(struct inode *inode,
+ 				NFS_INO_INVALID_ACCESS | NFS_INO_INVALID_ACL |
+ 				NFS_INO_INVALID_SIZE | NFS_INO_INVALID_OTHER |
+ 				NFS_INO_INVALID_BLOCKS | NFS_INO_INVALID_NLINK |
+-				NFS_INO_INVALID_MODE | NFS_INO_INVALID_XATTR |
+-				NFS_INO_REVAL_PAGECACHE;
++				NFS_INO_INVALID_MODE | NFS_INO_INVALID_XATTR;
+ 		nfsi->attrtimeo = NFS_MINATTRTIMEO(inode);
+ 	}
+ 	nfsi->attrtimeo_timestamp = jiffies;
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 22d904bde6ab9..a74aef99bd3d6 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -690,9 +690,14 @@ int dquot_quota_sync(struct super_block *sb, int type)
+ 	/* This is not very clever (and fast) but currently I don't know about
+ 	 * any other simple way of getting quota data to disk and we must get
+ 	 * them there for userspace to be visible... */
+-	if (sb->s_op->sync_fs)
+-		sb->s_op->sync_fs(sb, 1);
+-	sync_blockdev(sb->s_bdev);
++	if (sb->s_op->sync_fs) {
++		ret = sb->s_op->sync_fs(sb, 1);
++		if (ret)
++			return ret;
++	}
++	ret = sync_blockdev(sb->s_bdev);
++	if (ret)
++		return ret;
+ 
+ 	/*
+ 	 * Now when everything is written we can discard the pagecache so
+diff --git a/fs/super.c b/fs/super.c
+index a6405d44d4ca2..d978dd031a036 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -1619,11 +1619,9 @@ static void lockdep_sb_freeze_acquire(struct super_block *sb)
+ 		percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
+ }
+ 
+-static void sb_freeze_unlock(struct super_block *sb)
++static void sb_freeze_unlock(struct super_block *sb, int level)
+ {
+-	int level;
+-
+-	for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)
++	for (level--; level >= 0; level--)
+ 		percpu_up_write(sb->s_writers.rw_sem + level);
+ }
+ 
+@@ -1694,7 +1692,14 @@ int freeze_super(struct super_block *sb)
+ 	sb_wait_write(sb, SB_FREEZE_PAGEFAULT);
+ 
+ 	/* All writers are done so after syncing there won't be dirty data */
+-	sync_filesystem(sb);
++	ret = sync_filesystem(sb);
++	if (ret) {
++		sb->s_writers.frozen = SB_UNFROZEN;
++		sb_freeze_unlock(sb, SB_FREEZE_PAGEFAULT);
++		wake_up(&sb->s_writers.wait_unfrozen);
++		deactivate_locked_super(sb);
++		return ret;
++	}
+ 
+ 	/* Now wait for internal filesystem counter */
+ 	sb->s_writers.frozen = SB_FREEZE_FS;
+@@ -1706,7 +1711,7 @@ int freeze_super(struct super_block *sb)
+ 			printk(KERN_ERR
+ 				"VFS:Filesystem freeze failed\n");
+ 			sb->s_writers.frozen = SB_UNFROZEN;
+-			sb_freeze_unlock(sb);
++			sb_freeze_unlock(sb, SB_FREEZE_FS);
+ 			wake_up(&sb->s_writers.wait_unfrozen);
+ 			deactivate_locked_super(sb);
+ 			return ret;
+@@ -1751,7 +1756,7 @@ static int thaw_super_locked(struct super_block *sb)
+ 	}
+ 
+ 	sb->s_writers.frozen = SB_UNFROZEN;
+-	sb_freeze_unlock(sb);
++	sb_freeze_unlock(sb, SB_FREEZE_FS);
+ out:
+ 	wake_up(&sb->s_writers.wait_unfrozen);
+ 	deactivate_locked_super(sb);
+diff --git a/fs/sync.c b/fs/sync.c
+index 3ce8e2137f310..c7690016453e4 100644
+--- a/fs/sync.c
++++ b/fs/sync.c
+@@ -29,7 +29,7 @@
+  */
+ int sync_filesystem(struct super_block *sb)
+ {
+-	int ret;
++	int ret = 0;
+ 
+ 	/*
+ 	 * We need to be protected against the filesystem going from
+@@ -52,15 +52,21 @@ int sync_filesystem(struct super_block *sb)
+ 	 * at a time.
+ 	 */
+ 	writeback_inodes_sb(sb, WB_REASON_SYNC);
+-	if (sb->s_op->sync_fs)
+-		sb->s_op->sync_fs(sb, 0);
++	if (sb->s_op->sync_fs) {
++		ret = sb->s_op->sync_fs(sb, 0);
++		if (ret)
++			return ret;
++	}
+ 	ret = sync_blockdev_nowait(sb->s_bdev);
+-	if (ret < 0)
++	if (ret)
+ 		return ret;
+ 
+ 	sync_inodes_sb(sb);
+-	if (sb->s_op->sync_fs)
+-		sb->s_op->sync_fs(sb, 1);
++	if (sb->s_op->sync_fs) {
++		ret = sb->s_op->sync_fs(sb, 1);
++		if (ret)
++			return ret;
++	}
+ 	return sync_blockdev(sb->s_bdev);
+ }
+ EXPORT_SYMBOL(sync_filesystem);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index d73887c805e05..d405ffe770342 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -740,7 +740,8 @@ extern bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
+ 
+ bool __must_check blk_get_queue(struct request_queue *);
+ extern void blk_put_queue(struct request_queue *);
+-extern void blk_set_queue_dying(struct request_queue *);
++
++void blk_mark_disk_dead(struct gendisk *disk);
+ 
+ #ifdef CONFIG_BLOCK
+ /*
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 9f20b0f539f78..29b9b199c56bb 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -297,6 +297,34 @@ bool bpf_map_meta_equal(const struct bpf_map *meta0,
+ 
+ extern const struct bpf_map_ops bpf_map_offload_ops;
+ 
++/* bpf_type_flag contains a set of flags that are applicable to the values of
++ * arg_type, ret_type and reg_type. For example, a pointer value may be null,
++ * or a memory is read-only. We classify types into two categories: base types
++ * and extended types. Extended types are base types combined with a type flag.
++ *
++ * Currently there are no more than 32 base types in arg_type, ret_type and
++ * reg_types.
++ */
++#define BPF_BASE_TYPE_BITS	8
++
++enum bpf_type_flag {
++	/* PTR may be NULL. */
++	PTR_MAYBE_NULL		= BIT(0 + BPF_BASE_TYPE_BITS),
++
++	/* MEM is read-only. When applied on bpf_arg, it indicates the arg is
++	 * compatible with both mutable and immutable memory.
++	 */
++	MEM_RDONLY		= BIT(1 + BPF_BASE_TYPE_BITS),
++
++	__BPF_TYPE_LAST_FLAG	= MEM_RDONLY,
++};
++
++/* Max number of base types. */
++#define BPF_BASE_TYPE_LIMIT	(1UL << BPF_BASE_TYPE_BITS)
++
++/* Max number of all types. */
++#define BPF_TYPE_LIMIT		(__BPF_TYPE_LAST_FLAG | (__BPF_TYPE_LAST_FLAG - 1))
++
+ /* function argument constraints */
+ enum bpf_arg_type {
+ 	ARG_DONTCARE = 0,	/* unused argument in helper function */
+@@ -308,13 +336,11 @@ enum bpf_arg_type {
+ 	ARG_PTR_TO_MAP_KEY,	/* pointer to stack used as map key */
+ 	ARG_PTR_TO_MAP_VALUE,	/* pointer to stack used as map value */
+ 	ARG_PTR_TO_UNINIT_MAP_VALUE,	/* pointer to valid memory used to store a map value */
+-	ARG_PTR_TO_MAP_VALUE_OR_NULL,	/* pointer to stack used as map value or NULL */
+ 
+ 	/* the following constraints used to prototype bpf_memcmp() and other
+ 	 * functions that access data on eBPF program stack
+ 	 */
+ 	ARG_PTR_TO_MEM,		/* pointer to valid memory (stack, packet, map value) */
+-	ARG_PTR_TO_MEM_OR_NULL, /* pointer to valid memory or NULL */
+ 	ARG_PTR_TO_UNINIT_MEM,	/* pointer to memory does not need to be initialized,
+ 				 * helper function must fill all bytes or clear
+ 				 * them in error case.
+@@ -324,42 +350,65 @@ enum bpf_arg_type {
+ 	ARG_CONST_SIZE_OR_ZERO,	/* number of bytes accessed from memory or 0 */
+ 
+ 	ARG_PTR_TO_CTX,		/* pointer to context */
+-	ARG_PTR_TO_CTX_OR_NULL,	/* pointer to context or NULL */
+ 	ARG_ANYTHING,		/* any (initialized) argument is ok */
+ 	ARG_PTR_TO_SPIN_LOCK,	/* pointer to bpf_spin_lock */
+ 	ARG_PTR_TO_SOCK_COMMON,	/* pointer to sock_common */
+ 	ARG_PTR_TO_INT,		/* pointer to int */
+ 	ARG_PTR_TO_LONG,	/* pointer to long */
+ 	ARG_PTR_TO_SOCKET,	/* pointer to bpf_sock (fullsock) */
+-	ARG_PTR_TO_SOCKET_OR_NULL,	/* pointer to bpf_sock (fullsock) or NULL */
+ 	ARG_PTR_TO_BTF_ID,	/* pointer to in-kernel struct */
+ 	ARG_PTR_TO_ALLOC_MEM,	/* pointer to dynamically allocated memory */
+-	ARG_PTR_TO_ALLOC_MEM_OR_NULL,	/* pointer to dynamically allocated memory or NULL */
+ 	ARG_CONST_ALLOC_SIZE_OR_ZERO,	/* number of allocated bytes requested */
+ 	ARG_PTR_TO_BTF_ID_SOCK_COMMON,	/* pointer to in-kernel sock_common or bpf-mirrored bpf_sock */
+ 	ARG_PTR_TO_PERCPU_BTF_ID,	/* pointer to in-kernel percpu type */
+ 	ARG_PTR_TO_FUNC,	/* pointer to a bpf program function */
+-	ARG_PTR_TO_STACK_OR_NULL,	/* pointer to stack or NULL */
++	ARG_PTR_TO_STACK,	/* pointer to stack */
+ 	ARG_PTR_TO_CONST_STR,	/* pointer to a null terminated read-only string */
+ 	ARG_PTR_TO_TIMER,	/* pointer to bpf_timer */
+ 	__BPF_ARG_TYPE_MAX,
++
++	/* Extended arg_types. */
++	ARG_PTR_TO_MAP_VALUE_OR_NULL	= PTR_MAYBE_NULL | ARG_PTR_TO_MAP_VALUE,
++	ARG_PTR_TO_MEM_OR_NULL		= PTR_MAYBE_NULL | ARG_PTR_TO_MEM,
++	ARG_PTR_TO_CTX_OR_NULL		= PTR_MAYBE_NULL | ARG_PTR_TO_CTX,
++	ARG_PTR_TO_SOCKET_OR_NULL	= PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET,
++	ARG_PTR_TO_ALLOC_MEM_OR_NULL	= PTR_MAYBE_NULL | ARG_PTR_TO_ALLOC_MEM,
++	ARG_PTR_TO_STACK_OR_NULL	= PTR_MAYBE_NULL | ARG_PTR_TO_STACK,
++
++	/* This must be the last entry. Its purpose is to ensure the enum is
++	 * wide enough to hold the higher bits reserved for bpf_type_flag.
++	 */
++	__BPF_ARG_TYPE_LIMIT	= BPF_TYPE_LIMIT,
+ };
++static_assert(__BPF_ARG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
+ 
+ /* type of values returned from helper functions */
+ enum bpf_return_type {
+ 	RET_INTEGER,			/* function returns integer */
+ 	RET_VOID,			/* function doesn't return anything */
+ 	RET_PTR_TO_MAP_VALUE,		/* returns a pointer to map elem value */
+-	RET_PTR_TO_MAP_VALUE_OR_NULL,	/* returns a pointer to map elem value or NULL */
+-	RET_PTR_TO_SOCKET_OR_NULL,	/* returns a pointer to a socket or NULL */
+-	RET_PTR_TO_TCP_SOCK_OR_NULL,	/* returns a pointer to a tcp_sock or NULL */
+-	RET_PTR_TO_SOCK_COMMON_OR_NULL,	/* returns a pointer to a sock_common or NULL */
+-	RET_PTR_TO_ALLOC_MEM_OR_NULL,	/* returns a pointer to dynamically allocated memory or NULL */
+-	RET_PTR_TO_BTF_ID_OR_NULL,	/* returns a pointer to a btf_id or NULL */
+-	RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL, /* returns a pointer to a valid memory or a btf_id or NULL */
++	RET_PTR_TO_SOCKET,		/* returns a pointer to a socket */
++	RET_PTR_TO_TCP_SOCK,		/* returns a pointer to a tcp_sock */
++	RET_PTR_TO_SOCK_COMMON,		/* returns a pointer to a sock_common */
++	RET_PTR_TO_ALLOC_MEM,		/* returns a pointer to dynamically allocated memory */
+ 	RET_PTR_TO_MEM_OR_BTF_ID,	/* returns a pointer to a valid memory or a btf_id */
+ 	RET_PTR_TO_BTF_ID,		/* returns a pointer to a btf_id */
++	__BPF_RET_TYPE_MAX,
++
++	/* Extended ret_types. */
++	RET_PTR_TO_MAP_VALUE_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_MAP_VALUE,
++	RET_PTR_TO_SOCKET_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_SOCKET,
++	RET_PTR_TO_TCP_SOCK_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK,
++	RET_PTR_TO_SOCK_COMMON_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON,
++	RET_PTR_TO_ALLOC_MEM_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_ALLOC_MEM,
++	RET_PTR_TO_BTF_ID_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID,
++
++	/* This must be the last entry. Its purpose is to ensure the enum is
++	 * wide enough to hold the higher bits reserved for bpf_type_flag.
++	 */
++	__BPF_RET_TYPE_LIMIT	= BPF_TYPE_LIMIT,
+ };
++static_assert(__BPF_RET_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
+ 
+ /* eBPF function prototype used by verifier to allow BPF_CALLs from eBPF programs
+  * to in-kernel helper functions and for adjusting imm32 field in BPF_CALL
+@@ -421,18 +470,15 @@ enum bpf_reg_type {
+ 	PTR_TO_CTX,		 /* reg points to bpf_context */
+ 	CONST_PTR_TO_MAP,	 /* reg points to struct bpf_map */
+ 	PTR_TO_MAP_VALUE,	 /* reg points to map element value */
+-	PTR_TO_MAP_VALUE_OR_NULL,/* points to map elem value or NULL */
++	PTR_TO_MAP_KEY,		 /* reg points to a map element key */
+ 	PTR_TO_STACK,		 /* reg == frame_pointer + offset */
+ 	PTR_TO_PACKET_META,	 /* skb->data - meta_len */
+ 	PTR_TO_PACKET,		 /* reg points to skb->data */
+ 	PTR_TO_PACKET_END,	 /* skb->data + headlen */
+ 	PTR_TO_FLOW_KEYS,	 /* reg points to bpf_flow_keys */
+ 	PTR_TO_SOCKET,		 /* reg points to struct bpf_sock */
+-	PTR_TO_SOCKET_OR_NULL,	 /* reg points to struct bpf_sock or NULL */
+ 	PTR_TO_SOCK_COMMON,	 /* reg points to sock_common */
+-	PTR_TO_SOCK_COMMON_OR_NULL, /* reg points to sock_common or NULL */
+ 	PTR_TO_TCP_SOCK,	 /* reg points to struct tcp_sock */
+-	PTR_TO_TCP_SOCK_OR_NULL, /* reg points to struct tcp_sock or NULL */
+ 	PTR_TO_TP_BUFFER,	 /* reg points to a writable raw tp's buffer */
+ 	PTR_TO_XDP_SOCK,	 /* reg points to struct xdp_sock */
+ 	/* PTR_TO_BTF_ID points to a kernel struct that does not need
+@@ -450,18 +496,25 @@ enum bpf_reg_type {
+ 	 * been checked for null. Used primarily to inform the verifier
+ 	 * an explicit null check is required for this struct.
+ 	 */
+-	PTR_TO_BTF_ID_OR_NULL,
+ 	PTR_TO_MEM,		 /* reg points to valid memory region */
+-	PTR_TO_MEM_OR_NULL,	 /* reg points to valid memory region or NULL */
+-	PTR_TO_RDONLY_BUF,	 /* reg points to a readonly buffer */
+-	PTR_TO_RDONLY_BUF_OR_NULL, /* reg points to a readonly buffer or NULL */
+-	PTR_TO_RDWR_BUF,	 /* reg points to a read/write buffer */
+-	PTR_TO_RDWR_BUF_OR_NULL, /* reg points to a read/write buffer or NULL */
++	PTR_TO_BUF,		 /* reg points to a read/write buffer */
+ 	PTR_TO_PERCPU_BTF_ID,	 /* reg points to a percpu kernel variable */
+ 	PTR_TO_FUNC,		 /* reg points to a bpf program function */
+-	PTR_TO_MAP_KEY,		 /* reg points to a map element key */
+ 	__BPF_REG_TYPE_MAX,
++
++	/* Extended reg_types. */
++	PTR_TO_MAP_VALUE_OR_NULL	= PTR_MAYBE_NULL | PTR_TO_MAP_VALUE,
++	PTR_TO_SOCKET_OR_NULL		= PTR_MAYBE_NULL | PTR_TO_SOCKET,
++	PTR_TO_SOCK_COMMON_OR_NULL	= PTR_MAYBE_NULL | PTR_TO_SOCK_COMMON,
++	PTR_TO_TCP_SOCK_OR_NULL		= PTR_MAYBE_NULL | PTR_TO_TCP_SOCK,
++	PTR_TO_BTF_ID_OR_NULL		= PTR_MAYBE_NULL | PTR_TO_BTF_ID,
++
++	/* This must be the last entry. Its purpose is to ensure the enum is
++	 * wide enough to hold the higher bits reserved for bpf_type_flag.
++	 */
++	__BPF_REG_TYPE_LIMIT	= BPF_TYPE_LIMIT,
+ };
++static_assert(__BPF_REG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
+ 
+ /* The information passed from prog-specific *_is_valid_access
+  * back to the verifier.
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 182b16a910849..540bc0b3bfae6 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -18,6 +18,8 @@
+  * that converting umax_value to int cannot overflow.
+  */
+ #define BPF_MAX_VAR_SIZ	(1 << 29)
++/* size of type_str_buf in bpf_verifier. */
++#define TYPE_STR_BUF_LEN 64
+ 
+ /* Liveness marks, used for registers and spilled-regs (in stack slots).
+  * Read marks propagate upwards until they find a write mark; they record that
+@@ -474,6 +476,8 @@ struct bpf_verifier_env {
+ 	/* longest register parentage chain walked for liveness marking */
+ 	u32 longest_mark_read_walk;
+ 	bpfptr_t fd_array;
++	/* buffer used in reg_type_str() to generate reg_type string */
++	char type_str_buf[TYPE_STR_BUF_LEN];
+ };
+ 
+ __printf(2, 0) void bpf_verifier_vlog(struct bpf_verifier_log *log,
+@@ -536,5 +540,18 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
+ 			    struct bpf_attach_target_info *tgt_info);
+ void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab);
+ 
++#define BPF_BASE_TYPE_MASK	GENMASK(BPF_BASE_TYPE_BITS - 1, 0)
++
++/* extract base type from bpf_{arg, return, reg}_type. */
++static inline u32 base_type(u32 type)
++{
++	return type & BPF_BASE_TYPE_MASK;
++}
++
++/* extract flags from an extended type. See bpf_type_flag in bpf.h. */
++static inline u32 type_flag(u32 type)
++{
++	return type & ~BPF_BASE_TYPE_MASK;
++}
+ 
+ #endif /* _LINUX_BPF_VERIFIER_H */
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 429dcebe2b992..0f7fd205ab7ea 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -117,14 +117,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+  */
+ #define __stringify_label(n) #n
+ 
+-#define __annotate_reachable(c) ({					\
+-	asm volatile(__stringify_label(c) ":\n\t"			\
+-		     ".pushsection .discard.reachable\n\t"		\
+-		     ".long " __stringify_label(c) "b - .\n\t"		\
+-		     ".popsection\n\t" : : "i" (c));			\
+-})
+-#define annotate_reachable() __annotate_reachable(__COUNTER__)
+-
+ #define __annotate_unreachable(c) ({					\
+ 	asm volatile(__stringify_label(c) ":\n\t"			\
+ 		     ".pushsection .discard.unreachable\n\t"		\
+@@ -133,24 +125,21 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ })
+ #define annotate_unreachable() __annotate_unreachable(__COUNTER__)
+ 
+-#define ASM_UNREACHABLE							\
+-	"999:\n\t"							\
+-	".pushsection .discard.unreachable\n\t"				\
+-	".long 999b - .\n\t"						\
++#define ASM_REACHABLE							\
++	"998:\n\t"							\
++	".pushsection .discard.reachable\n\t"				\
++	".long 998b - .\n\t"						\
+ 	".popsection\n\t"
+ 
+ /* Annotate a C jump table to allow objtool to follow the code flow */
+ #define __annotate_jump_table __section(".rodata..c_jump_table")
+ 
+ #else
+-#define annotate_reachable()
+ #define annotate_unreachable()
++# define ASM_REACHABLE
+ #define __annotate_jump_table
+ #endif
+ 
+-#ifndef ASM_UNREACHABLE
+-# define ASM_UNREACHABLE
+-#endif
+ #ifndef unreachable
+ # define unreachable() do {		\
+ 	annotate_unreachable();		\
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 6cbefb660fa3b..049858c671efa 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2149,7 +2149,7 @@ struct net_device {
+ 	struct netdev_queue	*_tx ____cacheline_aligned_in_smp;
+ 	unsigned int		num_tx_queues;
+ 	unsigned int		real_num_tx_queues;
+-	struct Qdisc		*qdisc;
++	struct Qdisc __rcu	*qdisc;
+ 	unsigned int		tx_queue_len;
+ 	spinlock_t		tx_global_lock;
+ 
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 78c351e35fec6..ee5ed88219631 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1672,7 +1672,6 @@ extern struct pid *cad_pid;
+ #define PF_MEMALLOC		0x00000800	/* Allocating memory */
+ #define PF_NPROC_EXCEEDED	0x00001000	/* set_user() noticed that RLIMIT_NPROC was exceeded */
+ #define PF_USED_MATH		0x00002000	/* If unset the fpu must be initialized before use */
+-#define PF_USED_ASYNC		0x00004000	/* Used async_schedule*(), used by module init */
+ #define PF_NOFREEZE		0x00008000	/* This thread should not be frozen */
+ #define PF_FROZEN		0x00010000	/* Frozen for system suspend */
+ #define PF_KSWAPD		0x00020000	/* I am kswapd */
+diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
+index cf5999626e28d..5e1533ee3785b 100644
+--- a/include/linux/tee_drv.h
++++ b/include/linux/tee_drv.h
+@@ -587,4 +587,18 @@ struct tee_client_driver {
+ #define to_tee_client_driver(d) \
+ 		container_of(d, struct tee_client_driver, driver)
+ 
++/**
++ * teedev_open() - Open a struct tee_device
++ * @teedev:	Device to open
++ *
++ * @return a pointer to struct tee_context on success or an ERR_PTR on failure.
++ */
++struct tee_context *teedev_open(struct tee_device *teedev);
++
++/**
++ * teedev_close_context() - closes a struct tee_context
++ * @ctx:	The struct tee_context to close
++ */
++void teedev_close_context(struct tee_context *ctx);
++
+ #endif /*__TEE_DRV_H*/
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index e7ce719838b5e..59940e230b782 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -109,8 +109,6 @@ struct inet6_ifaddr *ipv6_get_ifaddr(struct net *net,
+ int ipv6_dev_get_saddr(struct net *net, const struct net_device *dev,
+ 		       const struct in6_addr *daddr, unsigned int srcprefs,
+ 		       struct in6_addr *saddr);
+-int __ipv6_get_lladdr(struct inet6_dev *idev, struct in6_addr *addr,
+-		      u32 banned_flags);
+ int ipv6_get_lladdr(struct net_device *dev, struct in6_addr *addr,
+ 		    u32 banned_flags);
+ bool inet_rcv_saddr_equal(const struct sock *sk, const struct sock *sk2,
+diff --git a/include/net/bond_3ad.h b/include/net/bond_3ad.h
+index 38785d48baff9..184105d682942 100644
+--- a/include/net/bond_3ad.h
++++ b/include/net/bond_3ad.h
+@@ -262,7 +262,7 @@ struct ad_system {
+ struct ad_bond_info {
+ 	struct ad_system system;	/* 802.3ad system structure */
+ 	struct bond_3ad_stats stats;
+-	u32 agg_select_timer;		/* Timer to select aggregator after all adapter's hand shakes */
++	atomic_t agg_select_timer;		/* Timer to select aggregator after all adapter's hand shakes */
+ 	u16 aggregator_identifier;
+ };
+ 
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index eff5c44ba3774..aede735bed64c 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -1094,6 +1094,7 @@ void dsa_unregister_switch(struct dsa_switch *ds);
+ int dsa_register_switch(struct dsa_switch *ds);
+ void dsa_switch_shutdown(struct dsa_switch *ds);
+ struct dsa_switch *dsa_switch_find(int tree_index, int sw_index);
++void dsa_flush_workqueue(void);
+ #ifdef CONFIG_PM_SLEEP
+ int dsa_switch_suspend(struct dsa_switch *ds);
+ int dsa_switch_resume(struct dsa_switch *ds);
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index c85b040728d7e..bbb27639f2933 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -189,14 +189,16 @@ struct fib6_info {
+ 	u32				fib6_metric;
+ 	u8				fib6_protocol;
+ 	u8				fib6_type;
++
++	u8				offload;
++	u8				trap;
++	u8				offload_failed;
++
+ 	u8				should_flush:1,
+ 					dst_nocount:1,
+ 					dst_nopolicy:1,
+ 					fib6_destroying:1,
+-					offload:1,
+-					trap:1,
+-					offload_failed:1,
+-					unused:1;
++					unused:4;
+ 
+ 	struct rcu_head			rcu;
+ 	struct nexthop			*nh;
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index c19bf51ded1d0..c6ee334ad846b 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -391,17 +391,20 @@ static inline void txopt_put(struct ipv6_txoptions *opt)
+ 		kfree_rcu(opt, rcu);
+ }
+ 
++#if IS_ENABLED(CONFIG_IPV6)
+ struct ip6_flowlabel *__fl6_sock_lookup(struct sock *sk, __be32 label);
+ 
+ extern struct static_key_false_deferred ipv6_flowlabel_exclusive;
+ static inline struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk,
+ 						    __be32 label)
+ {
+-	if (static_branch_unlikely(&ipv6_flowlabel_exclusive.key))
++	if (static_branch_unlikely(&ipv6_flowlabel_exclusive.key) &&
++	    READ_ONCE(sock_net(sk)->ipv6.flowlabel_has_excl))
+ 		return __fl6_sock_lookup(sk, label) ? : ERR_PTR(-ENOENT);
+ 
+ 	return NULL;
+ }
++#endif
+ 
+ struct ipv6_txoptions *fl6_merge_options(struct ipv6_txoptions *opt_space,
+ 					 struct ip6_flowlabel *fl,
+diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h
+index a4b5503803165..6bd7e5a85ce76 100644
+--- a/include/net/netns/ipv6.h
++++ b/include/net/netns/ipv6.h
+@@ -77,9 +77,10 @@ struct netns_ipv6 {
+ 	spinlock_t		fib6_gc_lock;
+ 	unsigned int		 ip6_rt_gc_expire;
+ 	unsigned long		 ip6_rt_last_gc;
++	unsigned char		flowlabel_has_excl;
+ #ifdef CONFIG_IPV6_MULTIPLE_TABLES
+-	unsigned int		fib6_rules_require_fldissect;
+ 	bool			fib6_has_custom_rules;
++	unsigned int		fib6_rules_require_fldissect;
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	unsigned int		fib6_routes_require_src;
+ #endif
+diff --git a/kernel/async.c b/kernel/async.c
+index b8d7a663497f9..b2c4ba5686ee4 100644
+--- a/kernel/async.c
++++ b/kernel/async.c
+@@ -205,9 +205,6 @@ async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+ 	atomic_inc(&entry_count);
+ 	spin_unlock_irqrestore(&async_lock, flags);
+ 
+-	/* mark that this task has queued an async job, used by module init */
+-	current->flags |= PF_USED_ASYNC;
+-
+ 	/* schedule for execution */
+ 	queue_work_node(node, system_unbound_wq, &entry->work);
+ 
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 5e037070cb656..d2ff8ba7ae58f 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -4928,10 +4928,12 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ 	/* check for PTR_TO_RDONLY_BUF_OR_NULL or PTR_TO_RDWR_BUF_OR_NULL */
+ 	for (i = 0; i < prog->aux->ctx_arg_info_size; i++) {
+ 		const struct bpf_ctx_arg_aux *ctx_arg_info = &prog->aux->ctx_arg_info[i];
++		u32 type, flag;
+ 
+-		if (ctx_arg_info->offset == off &&
+-		    (ctx_arg_info->reg_type == PTR_TO_RDONLY_BUF_OR_NULL ||
+-		     ctx_arg_info->reg_type == PTR_TO_RDWR_BUF_OR_NULL)) {
++		type = base_type(ctx_arg_info->reg_type);
++		flag = type_flag(ctx_arg_info->reg_type);
++		if (ctx_arg_info->offset == off && type == PTR_TO_BUF &&
++		    (flag & PTR_MAYBE_NULL)) {
+ 			info->reg_type = ctx_arg_info->reg_type;
+ 			return true;
+ 		}
+@@ -5845,7 +5847,7 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog,
+ 				return -EINVAL;
+ 			}
+ 
+-			reg->type = PTR_TO_MEM_OR_NULL;
++			reg->type = PTR_TO_MEM | PTR_MAYBE_NULL;
+ 			reg->id = ++env->id_gen;
+ 
+ 			continue;
+@@ -6335,7 +6337,7 @@ const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = {
+ 	.func		= bpf_btf_find_by_name_kind,
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+-	.arg1_type	= ARG_PTR_TO_MEM,
++	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE,
+ 	.arg3_type	= ARG_ANYTHING,
+ 	.arg4_type	= ARG_ANYTHING,
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 43eb3501721b7..514b4681a90ac 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1789,7 +1789,7 @@ static const struct bpf_func_proto bpf_sysctl_set_new_value_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ };
+ 
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 649f07623df6c..acb2383b0f537 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -530,7 +530,7 @@ const struct bpf_func_proto bpf_strtol_proto = {
+ 	.func		= bpf_strtol,
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+-	.arg1_type	= ARG_PTR_TO_MEM,
++	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE,
+ 	.arg3_type	= ARG_ANYTHING,
+ 	.arg4_type	= ARG_PTR_TO_LONG,
+@@ -558,7 +558,7 @@ const struct bpf_func_proto bpf_strtoul_proto = {
+ 	.func		= bpf_strtoul,
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+-	.arg1_type	= ARG_PTR_TO_MEM,
++	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE,
+ 	.arg3_type	= ARG_ANYTHING,
+ 	.arg4_type	= ARG_PTR_TO_LONG,
+@@ -630,7 +630,7 @@ const struct bpf_func_proto bpf_event_output_data_proto =  {
+ 	.arg1_type      = ARG_PTR_TO_CTX,
+ 	.arg2_type      = ARG_CONST_MAP_PTR,
+ 	.arg3_type      = ARG_ANYTHING,
+-	.arg4_type      = ARG_PTR_TO_MEM,
++	.arg4_type      = ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type      = ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -667,7 +667,7 @@ BPF_CALL_2(bpf_per_cpu_ptr, const void *, ptr, u32, cpu)
+ const struct bpf_func_proto bpf_per_cpu_ptr_proto = {
+ 	.func		= bpf_per_cpu_ptr,
+ 	.gpl_only	= false,
+-	.ret_type	= RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL,
++	.ret_type	= RET_PTR_TO_MEM_OR_BTF_ID | PTR_MAYBE_NULL | MEM_RDONLY,
+ 	.arg1_type	= ARG_PTR_TO_PERCPU_BTF_ID,
+ 	.arg2_type	= ARG_ANYTHING,
+ };
+@@ -680,7 +680,7 @@ BPF_CALL_1(bpf_this_cpu_ptr, const void *, percpu_ptr)
+ const struct bpf_func_proto bpf_this_cpu_ptr_proto = {
+ 	.func		= bpf_this_cpu_ptr,
+ 	.gpl_only	= false,
+-	.ret_type	= RET_PTR_TO_MEM_OR_BTF_ID,
++	.ret_type	= RET_PTR_TO_MEM_OR_BTF_ID | MEM_RDONLY,
+ 	.arg1_type	= ARG_PTR_TO_PERCPU_BTF_ID,
+ };
+ 
+@@ -1011,7 +1011,7 @@ const struct bpf_func_proto bpf_snprintf_proto = {
+ 	.arg1_type	= ARG_PTR_TO_MEM_OR_NULL,
+ 	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
+ 	.arg3_type	= ARG_PTR_TO_CONST_STR,
+-	.arg4_type	= ARG_PTR_TO_MEM_OR_NULL,
++	.arg4_type	= ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+diff --git a/kernel/bpf/map_iter.c b/kernel/bpf/map_iter.c
+index 6a9542af4212a..b0fa190b09790 100644
+--- a/kernel/bpf/map_iter.c
++++ b/kernel/bpf/map_iter.c
+@@ -174,9 +174,9 @@ static const struct bpf_iter_reg bpf_map_elem_reg_info = {
+ 	.ctx_arg_info_size	= 2,
+ 	.ctx_arg_info		= {
+ 		{ offsetof(struct bpf_iter__bpf_map_elem, key),
+-		  PTR_TO_RDONLY_BUF_OR_NULL },
++		  PTR_TO_BUF | PTR_MAYBE_NULL | MEM_RDONLY },
+ 		{ offsetof(struct bpf_iter__bpf_map_elem, value),
+-		  PTR_TO_RDWR_BUF_OR_NULL },
++		  PTR_TO_BUF | PTR_MAYBE_NULL },
+ 	},
+ };
+ 
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index f1c51c45667d3..710ba9de12ce4 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -444,7 +444,7 @@ const struct bpf_func_proto bpf_ringbuf_output_proto = {
+ 	.func		= bpf_ringbuf_output,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_CONST_MAP_PTR,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+ 	.arg4_type	= ARG_ANYTHING,
+ };
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 1033ee8c0caf0..4c6c2c2137458 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -4772,7 +4772,7 @@ static const struct bpf_func_proto bpf_sys_bpf_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_ANYTHING,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ };
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 6b987407752ab..40d92628e2f97 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -439,18 +439,6 @@ static bool reg_type_not_null(enum bpf_reg_type type)
+ 		type == PTR_TO_SOCK_COMMON;
+ }
+ 
+-static bool reg_type_may_be_null(enum bpf_reg_type type)
+-{
+-	return type == PTR_TO_MAP_VALUE_OR_NULL ||
+-	       type == PTR_TO_SOCKET_OR_NULL ||
+-	       type == PTR_TO_SOCK_COMMON_OR_NULL ||
+-	       type == PTR_TO_TCP_SOCK_OR_NULL ||
+-	       type == PTR_TO_BTF_ID_OR_NULL ||
+-	       type == PTR_TO_MEM_OR_NULL ||
+-	       type == PTR_TO_RDONLY_BUF_OR_NULL ||
+-	       type == PTR_TO_RDWR_BUF_OR_NULL;
+-}
+-
+ static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg)
+ {
+ 	return reg->type == PTR_TO_MAP_VALUE &&
+@@ -459,12 +447,14 @@ static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg)
+ 
+ static bool reg_type_may_be_refcounted_or_null(enum bpf_reg_type type)
+ {
+-	return type == PTR_TO_SOCKET ||
+-		type == PTR_TO_SOCKET_OR_NULL ||
+-		type == PTR_TO_TCP_SOCK ||
+-		type == PTR_TO_TCP_SOCK_OR_NULL ||
+-		type == PTR_TO_MEM ||
+-		type == PTR_TO_MEM_OR_NULL;
++	return base_type(type) == PTR_TO_SOCKET ||
++		base_type(type) == PTR_TO_TCP_SOCK ||
++		base_type(type) == PTR_TO_MEM;
++}
++
++static bool type_is_rdonly_mem(u32 type)
++{
++	return type & MEM_RDONLY;
+ }
+ 
+ static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
+@@ -472,14 +462,9 @@ static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
+ 	return type == ARG_PTR_TO_SOCK_COMMON;
+ }
+ 
+-static bool arg_type_may_be_null(enum bpf_arg_type type)
++static bool type_may_be_null(u32 type)
+ {
+-	return type == ARG_PTR_TO_MAP_VALUE_OR_NULL ||
+-	       type == ARG_PTR_TO_MEM_OR_NULL ||
+-	       type == ARG_PTR_TO_CTX_OR_NULL ||
+-	       type == ARG_PTR_TO_SOCKET_OR_NULL ||
+-	       type == ARG_PTR_TO_ALLOC_MEM_OR_NULL ||
+-	       type == ARG_PTR_TO_STACK_OR_NULL;
++	return type & PTR_MAYBE_NULL;
+ }
+ 
+ /* Determine whether the function releases some resources allocated by another
+@@ -539,39 +524,54 @@ static bool is_cmpxchg_insn(const struct bpf_insn *insn)
+ 	       insn->imm == BPF_CMPXCHG;
+ }
+ 
+-/* string representation of 'enum bpf_reg_type' */
+-static const char * const reg_type_str[] = {
+-	[NOT_INIT]		= "?",
+-	[SCALAR_VALUE]		= "inv",
+-	[PTR_TO_CTX]		= "ctx",
+-	[CONST_PTR_TO_MAP]	= "map_ptr",
+-	[PTR_TO_MAP_VALUE]	= "map_value",
+-	[PTR_TO_MAP_VALUE_OR_NULL] = "map_value_or_null",
+-	[PTR_TO_STACK]		= "fp",
+-	[PTR_TO_PACKET]		= "pkt",
+-	[PTR_TO_PACKET_META]	= "pkt_meta",
+-	[PTR_TO_PACKET_END]	= "pkt_end",
+-	[PTR_TO_FLOW_KEYS]	= "flow_keys",
+-	[PTR_TO_SOCKET]		= "sock",
+-	[PTR_TO_SOCKET_OR_NULL] = "sock_or_null",
+-	[PTR_TO_SOCK_COMMON]	= "sock_common",
+-	[PTR_TO_SOCK_COMMON_OR_NULL] = "sock_common_or_null",
+-	[PTR_TO_TCP_SOCK]	= "tcp_sock",
+-	[PTR_TO_TCP_SOCK_OR_NULL] = "tcp_sock_or_null",
+-	[PTR_TO_TP_BUFFER]	= "tp_buffer",
+-	[PTR_TO_XDP_SOCK]	= "xdp_sock",
+-	[PTR_TO_BTF_ID]		= "ptr_",
+-	[PTR_TO_BTF_ID_OR_NULL]	= "ptr_or_null_",
+-	[PTR_TO_PERCPU_BTF_ID]	= "percpu_ptr_",
+-	[PTR_TO_MEM]		= "mem",
+-	[PTR_TO_MEM_OR_NULL]	= "mem_or_null",
+-	[PTR_TO_RDONLY_BUF]	= "rdonly_buf",
+-	[PTR_TO_RDONLY_BUF_OR_NULL] = "rdonly_buf_or_null",
+-	[PTR_TO_RDWR_BUF]	= "rdwr_buf",
+-	[PTR_TO_RDWR_BUF_OR_NULL] = "rdwr_buf_or_null",
+-	[PTR_TO_FUNC]		= "func",
+-	[PTR_TO_MAP_KEY]	= "map_key",
+-};
++/* string representation of 'enum bpf_reg_type'
++ *
++ * Note that reg_type_str() can not appear more than once in a single verbose()
++ * statement.
++ */
++static const char *reg_type_str(struct bpf_verifier_env *env,
++				enum bpf_reg_type type)
++{
++	char postfix[16] = {0}, prefix[16] = {0};
++	static const char * const str[] = {
++		[NOT_INIT]		= "?",
++		[SCALAR_VALUE]		= "inv",
++		[PTR_TO_CTX]		= "ctx",
++		[CONST_PTR_TO_MAP]	= "map_ptr",
++		[PTR_TO_MAP_VALUE]	= "map_value",
++		[PTR_TO_STACK]		= "fp",
++		[PTR_TO_PACKET]		= "pkt",
++		[PTR_TO_PACKET_META]	= "pkt_meta",
++		[PTR_TO_PACKET_END]	= "pkt_end",
++		[PTR_TO_FLOW_KEYS]	= "flow_keys",
++		[PTR_TO_SOCKET]		= "sock",
++		[PTR_TO_SOCK_COMMON]	= "sock_common",
++		[PTR_TO_TCP_SOCK]	= "tcp_sock",
++		[PTR_TO_TP_BUFFER]	= "tp_buffer",
++		[PTR_TO_XDP_SOCK]	= "xdp_sock",
++		[PTR_TO_BTF_ID]		= "ptr_",
++		[PTR_TO_PERCPU_BTF_ID]	= "percpu_ptr_",
++		[PTR_TO_MEM]		= "mem",
++		[PTR_TO_BUF]		= "buf",
++		[PTR_TO_FUNC]		= "func",
++		[PTR_TO_MAP_KEY]	= "map_key",
++	};
++
++	if (type & PTR_MAYBE_NULL) {
++		if (base_type(type) == PTR_TO_BTF_ID ||
++		    base_type(type) == PTR_TO_PERCPU_BTF_ID)
++			strncpy(postfix, "or_null_", 16);
++		else
++			strncpy(postfix, "_or_null", 16);
++	}
++
++	if (type & MEM_RDONLY)
++		strncpy(prefix, "rdonly_", 16);
++
++	snprintf(env->type_str_buf, TYPE_STR_BUF_LEN, "%s%s%s",
++		 prefix, str[base_type(type)], postfix);
++	return env->type_str_buf;
++}
+ 
+ static char slot_type_char[] = {
+ 	[STACK_INVALID]	= '?',
+@@ -636,7 +636,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
+ 			continue;
+ 		verbose(env, " R%d", i);
+ 		print_liveness(env, reg->live);
+-		verbose(env, "=%s", reg_type_str[t]);
++		verbose(env, "=%s", reg_type_str(env, t));
+ 		if (t == SCALAR_VALUE && reg->precise)
+ 			verbose(env, "P");
+ 		if ((t == SCALAR_VALUE || t == PTR_TO_STACK) &&
+@@ -644,9 +644,8 @@ static void print_verifier_state(struct bpf_verifier_env *env,
+ 			/* reg->off should be 0 for SCALAR_VALUE */
+ 			verbose(env, "%lld", reg->var_off.value + reg->off);
+ 		} else {
+-			if (t == PTR_TO_BTF_ID ||
+-			    t == PTR_TO_BTF_ID_OR_NULL ||
+-			    t == PTR_TO_PERCPU_BTF_ID)
++			if (base_type(t) == PTR_TO_BTF_ID ||
++			    base_type(t) == PTR_TO_PERCPU_BTF_ID)
+ 				verbose(env, "%s", kernel_type_name(reg->btf, reg->btf_id));
+ 			verbose(env, "(id=%d", reg->id);
+ 			if (reg_type_may_be_refcounted_or_null(t))
+@@ -655,10 +654,9 @@ static void print_verifier_state(struct bpf_verifier_env *env,
+ 				verbose(env, ",off=%d", reg->off);
+ 			if (type_is_pkt_pointer(t))
+ 				verbose(env, ",r=%d", reg->range);
+-			else if (t == CONST_PTR_TO_MAP ||
+-				 t == PTR_TO_MAP_KEY ||
+-				 t == PTR_TO_MAP_VALUE ||
+-				 t == PTR_TO_MAP_VALUE_OR_NULL)
++			else if (base_type(t) == CONST_PTR_TO_MAP ||
++				 base_type(t) == PTR_TO_MAP_KEY ||
++				 base_type(t) == PTR_TO_MAP_VALUE)
+ 				verbose(env, ",ks=%d,vs=%d",
+ 					reg->map_ptr->key_size,
+ 					reg->map_ptr->value_size);
+@@ -728,7 +726,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
+ 		if (is_spilled_reg(&state->stack[i])) {
+ 			reg = &state->stack[i].spilled_ptr;
+ 			t = reg->type;
+-			verbose(env, "=%s", reg_type_str[t]);
++			verbose(env, "=%s", reg_type_str(env, t));
+ 			if (t == SCALAR_VALUE && reg->precise)
+ 				verbose(env, "P");
+ 			if (t == SCALAR_VALUE && tnum_is_const(reg->var_off))
+@@ -1141,8 +1139,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
+ 
+ static void mark_ptr_not_null_reg(struct bpf_reg_state *reg)
+ {
+-	switch (reg->type) {
+-	case PTR_TO_MAP_VALUE_OR_NULL: {
++	if (base_type(reg->type) == PTR_TO_MAP_VALUE) {
+ 		const struct bpf_map *map = reg->map_ptr;
+ 
+ 		if (map->inner_map_meta) {
+@@ -1161,32 +1158,10 @@ static void mark_ptr_not_null_reg(struct bpf_reg_state *reg)
+ 		} else {
+ 			reg->type = PTR_TO_MAP_VALUE;
+ 		}
+-		break;
+-	}
+-	case PTR_TO_SOCKET_OR_NULL:
+-		reg->type = PTR_TO_SOCKET;
+-		break;
+-	case PTR_TO_SOCK_COMMON_OR_NULL:
+-		reg->type = PTR_TO_SOCK_COMMON;
+-		break;
+-	case PTR_TO_TCP_SOCK_OR_NULL:
+-		reg->type = PTR_TO_TCP_SOCK;
+-		break;
+-	case PTR_TO_BTF_ID_OR_NULL:
+-		reg->type = PTR_TO_BTF_ID;
+-		break;
+-	case PTR_TO_MEM_OR_NULL:
+-		reg->type = PTR_TO_MEM;
+-		break;
+-	case PTR_TO_RDONLY_BUF_OR_NULL:
+-		reg->type = PTR_TO_RDONLY_BUF;
+-		break;
+-	case PTR_TO_RDWR_BUF_OR_NULL:
+-		reg->type = PTR_TO_RDWR_BUF;
+-		break;
+-	default:
+-		WARN_ONCE(1, "unknown nullable register type");
++		return;
+ 	}
++
++	reg->type &= ~PTR_MAYBE_NULL;
+ }
+ 
+ static bool reg_is_pkt_pointer(const struct bpf_reg_state *reg)
+@@ -2047,7 +2022,7 @@ static int mark_reg_read(struct bpf_verifier_env *env,
+ 			break;
+ 		if (parent->live & REG_LIVE_DONE) {
+ 			verbose(env, "verifier BUG type %s var_off %lld off %d\n",
+-				reg_type_str[parent->type],
++				reg_type_str(env, parent->type),
+ 				parent->var_off.value, parent->off);
+ 			return -EFAULT;
+ 		}
+@@ -2706,9 +2681,8 @@ static int mark_chain_precision_stack(struct bpf_verifier_env *env, int spi)
+ 
+ static bool is_spillable_regtype(enum bpf_reg_type type)
+ {
+-	switch (type) {
++	switch (base_type(type)) {
+ 	case PTR_TO_MAP_VALUE:
+-	case PTR_TO_MAP_VALUE_OR_NULL:
+ 	case PTR_TO_STACK:
+ 	case PTR_TO_CTX:
+ 	case PTR_TO_PACKET:
+@@ -2717,21 +2691,13 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
+ 	case PTR_TO_FLOW_KEYS:
+ 	case CONST_PTR_TO_MAP:
+ 	case PTR_TO_SOCKET:
+-	case PTR_TO_SOCKET_OR_NULL:
+ 	case PTR_TO_SOCK_COMMON:
+-	case PTR_TO_SOCK_COMMON_OR_NULL:
+ 	case PTR_TO_TCP_SOCK:
+-	case PTR_TO_TCP_SOCK_OR_NULL:
+ 	case PTR_TO_XDP_SOCK:
+ 	case PTR_TO_BTF_ID:
+-	case PTR_TO_BTF_ID_OR_NULL:
+-	case PTR_TO_RDONLY_BUF:
+-	case PTR_TO_RDONLY_BUF_OR_NULL:
+-	case PTR_TO_RDWR_BUF:
+-	case PTR_TO_RDWR_BUF_OR_NULL:
++	case PTR_TO_BUF:
+ 	case PTR_TO_PERCPU_BTF_ID:
+ 	case PTR_TO_MEM:
+-	case PTR_TO_MEM_OR_NULL:
+ 	case PTR_TO_FUNC:
+ 	case PTR_TO_MAP_KEY:
+ 		return true;
+@@ -3572,7 +3538,7 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off,
+ 		 */
+ 		*reg_type = info.reg_type;
+ 
+-		if (*reg_type == PTR_TO_BTF_ID || *reg_type == PTR_TO_BTF_ID_OR_NULL) {
++		if (base_type(*reg_type) == PTR_TO_BTF_ID) {
+ 			*btf = info.btf;
+ 			*btf_id = info.btf_id;
+ 		} else {
+@@ -3640,7 +3606,7 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
+ 	}
+ 
+ 	verbose(env, "R%d invalid %s access off=%d size=%d\n",
+-		regno, reg_type_str[reg->type], off, size);
++		regno, reg_type_str(env, reg->type), off, size);
+ 
+ 	return -EACCES;
+ }
+@@ -4367,15 +4333,30 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 				mark_reg_unknown(env, regs, value_regno);
+ 			}
+ 		}
+-	} else if (reg->type == PTR_TO_MEM) {
++	} else if (base_type(reg->type) == PTR_TO_MEM) {
++		bool rdonly_mem = type_is_rdonly_mem(reg->type);
++
++		if (type_may_be_null(reg->type)) {
++			verbose(env, "R%d invalid mem access '%s'\n", regno,
++				reg_type_str(env, reg->type));
++			return -EACCES;
++		}
++
++		if (t == BPF_WRITE && rdonly_mem) {
++			verbose(env, "R%d cannot write into %s\n",
++				regno, reg_type_str(env, reg->type));
++			return -EACCES;
++		}
++
+ 		if (t == BPF_WRITE && value_regno >= 0 &&
+ 		    is_pointer_value(env, value_regno)) {
+ 			verbose(env, "R%d leaks addr into mem\n", value_regno);
+ 			return -EACCES;
+ 		}
++
+ 		err = check_mem_region_access(env, regno, off, size,
+ 					      reg->mem_size, false);
+-		if (!err && t == BPF_READ && value_regno >= 0)
++		if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
+ 			mark_reg_unknown(env, regs, value_regno);
+ 	} else if (reg->type == PTR_TO_CTX) {
+ 		enum bpf_reg_type reg_type = SCALAR_VALUE;
+@@ -4405,7 +4386,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 			} else {
+ 				mark_reg_known_zero(env, regs,
+ 						    value_regno);
+-				if (reg_type_may_be_null(reg_type))
++				if (type_may_be_null(reg_type))
+ 					regs[value_regno].id = ++env->id_gen;
+ 				/* A load of ctx field could have different
+ 				 * actual load size with the one encoded in the
+@@ -4413,8 +4394,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 				 * a sub-register.
+ 				 */
+ 				regs[value_regno].subreg_def = DEF_NOT_SUBREG;
+-				if (reg_type == PTR_TO_BTF_ID ||
+-				    reg_type == PTR_TO_BTF_ID_OR_NULL) {
++				if (base_type(reg_type) == PTR_TO_BTF_ID) {
+ 					regs[value_regno].btf = btf;
+ 					regs[value_regno].btf_id = btf_id;
+ 				}
+@@ -4467,7 +4447,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 	} else if (type_is_sk_pointer(reg->type)) {
+ 		if (t == BPF_WRITE) {
+ 			verbose(env, "R%d cannot write into %s\n",
+-				regno, reg_type_str[reg->type]);
++				regno, reg_type_str(env, reg->type));
+ 			return -EACCES;
+ 		}
+ 		err = check_sock_access(env, insn_idx, regno, off, size, t);
+@@ -4483,26 +4463,32 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 	} else if (reg->type == CONST_PTR_TO_MAP) {
+ 		err = check_ptr_to_map_access(env, regs, regno, off, size, t,
+ 					      value_regno);
+-	} else if (reg->type == PTR_TO_RDONLY_BUF) {
+-		if (t == BPF_WRITE) {
+-			verbose(env, "R%d cannot write into %s\n",
+-				regno, reg_type_str[reg->type]);
+-			return -EACCES;
++	} else if (base_type(reg->type) == PTR_TO_BUF) {
++		bool rdonly_mem = type_is_rdonly_mem(reg->type);
++		const char *buf_info;
++		u32 *max_access;
++
++		if (rdonly_mem) {
++			if (t == BPF_WRITE) {
++				verbose(env, "R%d cannot write into %s\n",
++					regno, reg_type_str(env, reg->type));
++				return -EACCES;
++			}
++			buf_info = "rdonly";
++			max_access = &env->prog->aux->max_rdonly_access;
++		} else {
++			buf_info = "rdwr";
++			max_access = &env->prog->aux->max_rdwr_access;
+ 		}
++
+ 		err = check_buffer_access(env, reg, regno, off, size, false,
+-					  "rdonly",
+-					  &env->prog->aux->max_rdonly_access);
+-		if (!err && value_regno >= 0)
+-			mark_reg_unknown(env, regs, value_regno);
+-	} else if (reg->type == PTR_TO_RDWR_BUF) {
+-		err = check_buffer_access(env, reg, regno, off, size, false,
+-					  "rdwr",
+-					  &env->prog->aux->max_rdwr_access);
+-		if (!err && t == BPF_READ && value_regno >= 0)
++					  buf_info, max_access);
++
++		if (!err && value_regno >= 0 && (rdonly_mem || t == BPF_READ))
+ 			mark_reg_unknown(env, regs, value_regno);
+ 	} else {
+ 		verbose(env, "R%d invalid mem access '%s'\n", regno,
+-			reg_type_str[reg->type]);
++			reg_type_str(env, reg->type));
+ 		return -EACCES;
+ 	}
+ 
+@@ -4576,7 +4562,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
+ 	    is_sk_reg(env, insn->dst_reg)) {
+ 		verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
+ 			insn->dst_reg,
+-			reg_type_str[reg_state(env, insn->dst_reg)->type]);
++			reg_type_str(env, reg_state(env, insn->dst_reg)->type));
+ 		return -EACCES;
+ 	}
+ 
+@@ -4759,8 +4745,10 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ 				   struct bpf_call_arg_meta *meta)
+ {
+ 	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
++	const char *buf_info;
++	u32 *max_access;
+ 
+-	switch (reg->type) {
++	switch (base_type(reg->type)) {
+ 	case PTR_TO_PACKET:
+ 	case PTR_TO_PACKET_META:
+ 		return check_packet_access(env, regno, reg->off, access_size,
+@@ -4779,18 +4767,20 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ 		return check_mem_region_access(env, regno, reg->off,
+ 					       access_size, reg->mem_size,
+ 					       zero_size_allowed);
+-	case PTR_TO_RDONLY_BUF:
+-		if (meta && meta->raw_mode)
+-			return -EACCES;
+-		return check_buffer_access(env, reg, regno, reg->off,
+-					   access_size, zero_size_allowed,
+-					   "rdonly",
+-					   &env->prog->aux->max_rdonly_access);
+-	case PTR_TO_RDWR_BUF:
++	case PTR_TO_BUF:
++		if (type_is_rdonly_mem(reg->type)) {
++			if (meta && meta->raw_mode)
++				return -EACCES;
++
++			buf_info = "rdonly";
++			max_access = &env->prog->aux->max_rdonly_access;
++		} else {
++			buf_info = "rdwr";
++			max_access = &env->prog->aux->max_rdwr_access;
++		}
+ 		return check_buffer_access(env, reg, regno, reg->off,
+ 					   access_size, zero_size_allowed,
+-					   "rdwr",
+-					   &env->prog->aux->max_rdwr_access);
++					   buf_info, max_access);
+ 	case PTR_TO_STACK:
+ 		return check_stack_range_initialized(
+ 				env,
+@@ -4802,9 +4792,9 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ 		    register_is_null(reg))
+ 			return 0;
+ 
+-		verbose(env, "R%d type=%s expected=%s\n", regno,
+-			reg_type_str[reg->type],
+-			reg_type_str[PTR_TO_STACK]);
++		verbose(env, "R%d type=%s ", regno,
++			reg_type_str(env, reg->type));
++		verbose(env, "expected=%s\n", reg_type_str(env, PTR_TO_STACK));
+ 		return -EACCES;
+ 	}
+ }
+@@ -4815,7 +4805,7 @@ int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+ 	if (register_is_null(reg))
+ 		return 0;
+ 
+-	if (reg_type_may_be_null(reg->type)) {
++	if (type_may_be_null(reg->type)) {
+ 		/* Assuming that the register contains a value check if the memory
+ 		 * access is safe. Temporarily save and restore the register's state as
+ 		 * the conversion shouldn't be visible to a caller.
+@@ -4963,9 +4953,8 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno,
+ 
+ static bool arg_type_is_mem_ptr(enum bpf_arg_type type)
+ {
+-	return type == ARG_PTR_TO_MEM ||
+-	       type == ARG_PTR_TO_MEM_OR_NULL ||
+-	       type == ARG_PTR_TO_UNINIT_MEM;
++	return base_type(type) == ARG_PTR_TO_MEM ||
++	       base_type(type) == ARG_PTR_TO_UNINIT_MEM;
+ }
+ 
+ static bool arg_type_is_mem_size(enum bpf_arg_type type)
+@@ -5070,8 +5059,7 @@ static const struct bpf_reg_types mem_types = {
+ 		PTR_TO_MAP_KEY,
+ 		PTR_TO_MAP_VALUE,
+ 		PTR_TO_MEM,
+-		PTR_TO_RDONLY_BUF,
+-		PTR_TO_RDWR_BUF,
++		PTR_TO_BUF,
+ 	},
+ };
+ 
+@@ -5102,31 +5090,26 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
+ 	[ARG_PTR_TO_MAP_KEY]		= &map_key_value_types,
+ 	[ARG_PTR_TO_MAP_VALUE]		= &map_key_value_types,
+ 	[ARG_PTR_TO_UNINIT_MAP_VALUE]	= &map_key_value_types,
+-	[ARG_PTR_TO_MAP_VALUE_OR_NULL]	= &map_key_value_types,
+ 	[ARG_CONST_SIZE]		= &scalar_types,
+ 	[ARG_CONST_SIZE_OR_ZERO]	= &scalar_types,
+ 	[ARG_CONST_ALLOC_SIZE_OR_ZERO]	= &scalar_types,
+ 	[ARG_CONST_MAP_PTR]		= &const_map_ptr_types,
+ 	[ARG_PTR_TO_CTX]		= &context_types,
+-	[ARG_PTR_TO_CTX_OR_NULL]	= &context_types,
+ 	[ARG_PTR_TO_SOCK_COMMON]	= &sock_types,
+ #ifdef CONFIG_NET
+ 	[ARG_PTR_TO_BTF_ID_SOCK_COMMON]	= &btf_id_sock_common_types,
+ #endif
+ 	[ARG_PTR_TO_SOCKET]		= &fullsock_types,
+-	[ARG_PTR_TO_SOCKET_OR_NULL]	= &fullsock_types,
+ 	[ARG_PTR_TO_BTF_ID]		= &btf_ptr_types,
+ 	[ARG_PTR_TO_SPIN_LOCK]		= &spin_lock_types,
+ 	[ARG_PTR_TO_MEM]		= &mem_types,
+-	[ARG_PTR_TO_MEM_OR_NULL]	= &mem_types,
+ 	[ARG_PTR_TO_UNINIT_MEM]		= &mem_types,
+ 	[ARG_PTR_TO_ALLOC_MEM]		= &alloc_mem_types,
+-	[ARG_PTR_TO_ALLOC_MEM_OR_NULL]	= &alloc_mem_types,
+ 	[ARG_PTR_TO_INT]		= &int_ptr_types,
+ 	[ARG_PTR_TO_LONG]		= &int_ptr_types,
+ 	[ARG_PTR_TO_PERCPU_BTF_ID]	= &percpu_btf_ptr_types,
+ 	[ARG_PTR_TO_FUNC]		= &func_ptr_types,
+-	[ARG_PTR_TO_STACK_OR_NULL]	= &stack_ptr_types,
++	[ARG_PTR_TO_STACK]		= &stack_ptr_types,
+ 	[ARG_PTR_TO_CONST_STR]		= &const_str_ptr_types,
+ 	[ARG_PTR_TO_TIMER]		= &timer_types,
+ };
+@@ -5140,12 +5123,27 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
+ 	const struct bpf_reg_types *compatible;
+ 	int i, j;
+ 
+-	compatible = compatible_reg_types[arg_type];
++	compatible = compatible_reg_types[base_type(arg_type)];
+ 	if (!compatible) {
+ 		verbose(env, "verifier internal error: unsupported arg type %d\n", arg_type);
+ 		return -EFAULT;
+ 	}
+ 
++	/* ARG_PTR_TO_MEM + RDONLY is compatible with PTR_TO_MEM and PTR_TO_MEM + RDONLY,
++	 * but ARG_PTR_TO_MEM is compatible only with PTR_TO_MEM and NOT with PTR_TO_MEM + RDONLY
++	 *
++	 * Same for MAYBE_NULL:
++	 *
++	 * ARG_PTR_TO_MEM + MAYBE_NULL is compatible with PTR_TO_MEM and PTR_TO_MEM + MAYBE_NULL,
++	 * but ARG_PTR_TO_MEM is compatible only with PTR_TO_MEM but NOT with PTR_TO_MEM + MAYBE_NULL
++	 *
++	 * Therefore we fold these flags depending on the arg_type before comparison.
++	 */
++	if (arg_type & MEM_RDONLY)
++		type &= ~MEM_RDONLY;
++	if (arg_type & PTR_MAYBE_NULL)
++		type &= ~PTR_MAYBE_NULL;
++
+ 	for (i = 0; i < ARRAY_SIZE(compatible->types); i++) {
+ 		expected = compatible->types[i];
+ 		if (expected == NOT_INIT)
+@@ -5155,14 +5153,14 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
+ 			goto found;
+ 	}
+ 
+-	verbose(env, "R%d type=%s expected=", regno, reg_type_str[type]);
++	verbose(env, "R%d type=%s expected=", regno, reg_type_str(env, reg->type));
+ 	for (j = 0; j + 1 < i; j++)
+-		verbose(env, "%s, ", reg_type_str[compatible->types[j]]);
+-	verbose(env, "%s\n", reg_type_str[compatible->types[j]]);
++		verbose(env, "%s, ", reg_type_str(env, compatible->types[j]));
++	verbose(env, "%s\n", reg_type_str(env, compatible->types[j]));
+ 	return -EACCES;
+ 
+ found:
+-	if (type == PTR_TO_BTF_ID) {
++	if (reg->type == PTR_TO_BTF_ID) {
+ 		if (!arg_btf_id) {
+ 			if (!compatible->btf_id) {
+ 				verbose(env, "verifier internal error: missing arg compatible BTF ID\n");
+@@ -5221,15 +5219,14 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ 		return -EACCES;
+ 	}
+ 
+-	if (arg_type == ARG_PTR_TO_MAP_VALUE ||
+-	    arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
+-	    arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL) {
++	if (base_type(arg_type) == ARG_PTR_TO_MAP_VALUE ||
++	    base_type(arg_type) == ARG_PTR_TO_UNINIT_MAP_VALUE) {
+ 		err = resolve_map_arg_type(env, meta, &arg_type);
+ 		if (err)
+ 			return err;
+ 	}
+ 
+-	if (register_is_null(reg) && arg_type_may_be_null(arg_type))
++	if (register_is_null(reg) && type_may_be_null(arg_type))
+ 		/* A NULL register has a SCALAR_VALUE type, so skip
+ 		 * type checking.
+ 		 */
+@@ -5298,10 +5295,11 @@ skip_type_check:
+ 		err = check_helper_mem_access(env, regno,
+ 					      meta->map_ptr->key_size, false,
+ 					      NULL);
+-	} else if (arg_type == ARG_PTR_TO_MAP_VALUE ||
+-		   (arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL &&
+-		    !register_is_null(reg)) ||
+-		   arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE) {
++	} else if (base_type(arg_type) == ARG_PTR_TO_MAP_VALUE ||
++		   base_type(arg_type) == ARG_PTR_TO_UNINIT_MAP_VALUE) {
++		if (type_may_be_null(arg_type) && register_is_null(reg))
++			return 0;
++
+ 		/* bpf_map_xxx(..., map_ptr, ..., value) call:
+ 		 * check [value, value + map->value_size) validity
+ 		 */
+@@ -6386,6 +6384,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ 			     int *insn_idx_p)
+ {
+ 	const struct bpf_func_proto *fn = NULL;
++	enum bpf_return_type ret_type;
++	enum bpf_type_flag ret_flag;
+ 	struct bpf_reg_state *regs;
+ 	struct bpf_call_arg_meta meta;
+ 	int insn_idx = *insn_idx_p;
+@@ -6519,13 +6519,14 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ 	regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG;
+ 
+ 	/* update return register (already marked as written above) */
+-	if (fn->ret_type == RET_INTEGER) {
++	ret_type = fn->ret_type;
++	ret_flag = type_flag(fn->ret_type);
++	if (ret_type == RET_INTEGER) {
+ 		/* sets type to SCALAR_VALUE */
+ 		mark_reg_unknown(env, regs, BPF_REG_0);
+-	} else if (fn->ret_type == RET_VOID) {
++	} else if (ret_type == RET_VOID) {
+ 		regs[BPF_REG_0].type = NOT_INIT;
+-	} else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL ||
+-		   fn->ret_type == RET_PTR_TO_MAP_VALUE) {
++	} else if (base_type(ret_type) == RET_PTR_TO_MAP_VALUE) {
+ 		/* There is no offset yet applied, variable or fixed */
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+ 		/* remember map_ptr, so that check_map_access()
+@@ -6539,28 +6540,25 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ 		}
+ 		regs[BPF_REG_0].map_ptr = meta.map_ptr;
+ 		regs[BPF_REG_0].map_uid = meta.map_uid;
+-		if (fn->ret_type == RET_PTR_TO_MAP_VALUE) {
+-			regs[BPF_REG_0].type = PTR_TO_MAP_VALUE;
+-			if (map_value_has_spin_lock(meta.map_ptr))
+-				regs[BPF_REG_0].id = ++env->id_gen;
+-		} else {
+-			regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
++		regs[BPF_REG_0].type = PTR_TO_MAP_VALUE | ret_flag;
++		if (!type_may_be_null(ret_type) &&
++		    map_value_has_spin_lock(meta.map_ptr)) {
++			regs[BPF_REG_0].id = ++env->id_gen;
+ 		}
+-	} else if (fn->ret_type == RET_PTR_TO_SOCKET_OR_NULL) {
++	} else if (base_type(ret_type) == RET_PTR_TO_SOCKET) {
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].type = PTR_TO_SOCKET_OR_NULL;
+-	} else if (fn->ret_type == RET_PTR_TO_SOCK_COMMON_OR_NULL) {
++		regs[BPF_REG_0].type = PTR_TO_SOCKET | ret_flag;
++	} else if (base_type(ret_type) == RET_PTR_TO_SOCK_COMMON) {
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON_OR_NULL;
+-	} else if (fn->ret_type == RET_PTR_TO_TCP_SOCK_OR_NULL) {
++		regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON | ret_flag;
++	} else if (base_type(ret_type) == RET_PTR_TO_TCP_SOCK) {
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].type = PTR_TO_TCP_SOCK_OR_NULL;
+-	} else if (fn->ret_type == RET_PTR_TO_ALLOC_MEM_OR_NULL) {
++		regs[BPF_REG_0].type = PTR_TO_TCP_SOCK | ret_flag;
++	} else if (base_type(ret_type) == RET_PTR_TO_ALLOC_MEM) {
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].type = PTR_TO_MEM_OR_NULL;
++		regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag;
+ 		regs[BPF_REG_0].mem_size = meta.mem_size;
+-	} else if (fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL ||
+-		   fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID) {
++	} else if (base_type(ret_type) == RET_PTR_TO_MEM_OR_BTF_ID) {
+ 		const struct btf_type *t;
+ 
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+@@ -6578,29 +6576,30 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ 					tname, PTR_ERR(ret));
+ 				return -EINVAL;
+ 			}
+-			regs[BPF_REG_0].type =
+-				fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID ?
+-				PTR_TO_MEM : PTR_TO_MEM_OR_NULL;
++			regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag;
+ 			regs[BPF_REG_0].mem_size = tsize;
+ 		} else {
+-			regs[BPF_REG_0].type =
+-				fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID ?
+-				PTR_TO_BTF_ID : PTR_TO_BTF_ID_OR_NULL;
++			/* MEM_RDONLY may be carried from ret_flag, but it
++			 * doesn't apply on PTR_TO_BTF_ID. Fold it, otherwise
++			 * it will confuse the check of PTR_TO_BTF_ID in
++			 * check_mem_access().
++			 */
++			ret_flag &= ~MEM_RDONLY;
++
++			regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
+ 			regs[BPF_REG_0].btf = meta.ret_btf;
+ 			regs[BPF_REG_0].btf_id = meta.ret_btf_id;
+ 		}
+-	} else if (fn->ret_type == RET_PTR_TO_BTF_ID_OR_NULL ||
+-		   fn->ret_type == RET_PTR_TO_BTF_ID) {
++	} else if (base_type(ret_type) == RET_PTR_TO_BTF_ID) {
+ 		int ret_btf_id;
+ 
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].type = fn->ret_type == RET_PTR_TO_BTF_ID ?
+-						     PTR_TO_BTF_ID :
+-						     PTR_TO_BTF_ID_OR_NULL;
++		regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
+ 		ret_btf_id = *fn->ret_btf_id;
+ 		if (ret_btf_id == 0) {
+-			verbose(env, "invalid return type %d of func %s#%d\n",
+-				fn->ret_type, func_id_name(func_id), func_id);
++			verbose(env, "invalid return type %u of func %s#%d\n",
++				base_type(ret_type), func_id_name(func_id),
++				func_id);
+ 			return -EINVAL;
+ 		}
+ 		/* current BPF helper definitions are only coming from
+@@ -6609,12 +6608,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ 		regs[BPF_REG_0].btf = btf_vmlinux;
+ 		regs[BPF_REG_0].btf_id = ret_btf_id;
+ 	} else {
+-		verbose(env, "unknown return type %d of func %s#%d\n",
+-			fn->ret_type, func_id_name(func_id), func_id);
++		verbose(env, "unknown return type %u of func %s#%d\n",
++			base_type(ret_type), func_id_name(func_id), func_id);
+ 		return -EINVAL;
+ 	}
+ 
+-	if (reg_type_may_be_null(regs[BPF_REG_0].type))
++	if (type_may_be_null(regs[BPF_REG_0].type))
+ 		regs[BPF_REG_0].id = ++env->id_gen;
+ 
+ 	if (is_ptr_cast_function(func_id)) {
+@@ -6823,25 +6822,25 @@ static bool check_reg_sane_offset(struct bpf_verifier_env *env,
+ 
+ 	if (known && (val >= BPF_MAX_VAR_OFF || val <= -BPF_MAX_VAR_OFF)) {
+ 		verbose(env, "math between %s pointer and %lld is not allowed\n",
+-			reg_type_str[type], val);
++			reg_type_str(env, type), val);
+ 		return false;
+ 	}
+ 
+ 	if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
+ 		verbose(env, "%s pointer offset %d is not allowed\n",
+-			reg_type_str[type], reg->off);
++			reg_type_str(env, type), reg->off);
+ 		return false;
+ 	}
+ 
+ 	if (smin == S64_MIN) {
+ 		verbose(env, "math between %s pointer and register with unbounded min value is not allowed\n",
+-			reg_type_str[type]);
++			reg_type_str(env, type));
+ 		return false;
+ 	}
+ 
+ 	if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
+ 		verbose(env, "value %lld makes %s pointer be out of bounds\n",
+-			smin, reg_type_str[type]);
++			smin, reg_type_str(env, type));
+ 		return false;
+ 	}
+ 
+@@ -7218,11 +7217,13 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		return -EACCES;
+ 	}
+ 
+-	switch (ptr_reg->type) {
+-	case PTR_TO_MAP_VALUE_OR_NULL:
++	if (ptr_reg->type & PTR_MAYBE_NULL) {
+ 		verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
+-			dst, reg_type_str[ptr_reg->type]);
++			dst, reg_type_str(env, ptr_reg->type));
+ 		return -EACCES;
++	}
++
++	switch (base_type(ptr_reg->type)) {
+ 	case CONST_PTR_TO_MAP:
+ 		/* smin_val represents the known value */
+ 		if (known && smin_val == 0 && opcode == BPF_ADD)
+@@ -7235,10 +7236,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	case PTR_TO_XDP_SOCK:
+ reject:
+ 		verbose(env, "R%d pointer arithmetic on %s prohibited\n",
+-			dst, reg_type_str[ptr_reg->type]);
++			dst, reg_type_str(env, ptr_reg->type));
+ 		return -EACCES;
+ 	default:
+-		if (reg_type_may_be_null(ptr_reg->type))
++		if (type_may_be_null(ptr_reg->type))
+ 			goto reject;
+ 		break;
+ 	}
+@@ -8960,7 +8961,7 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
+ 				 struct bpf_reg_state *reg, u32 id,
+ 				 bool is_null)
+ {
+-	if (reg_type_may_be_null(reg->type) && reg->id == id &&
++	if (type_may_be_null(reg->type) && reg->id == id &&
+ 	    !WARN_ON_ONCE(!reg->id)) {
+ 		if (WARN_ON_ONCE(reg->smin_value || reg->smax_value ||
+ 				 !tnum_equals_const(reg->var_off, 0) ||
+@@ -9338,7 +9339,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 	 */
+ 	if (!is_jmp32 && BPF_SRC(insn->code) == BPF_K &&
+ 	    insn->imm == 0 && (opcode == BPF_JEQ || opcode == BPF_JNE) &&
+-	    reg_type_may_be_null(dst_reg->type)) {
++	    type_may_be_null(dst_reg->type)) {
+ 		/* Mark all identical registers in each branch as either
+ 		 * safe or unknown depending R == 0 or R != 0 conditional.
+ 		 */
+@@ -9397,7 +9398,7 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 
+ 	if (insn->src_reg == BPF_PSEUDO_BTF_ID) {
+ 		dst_reg->type = aux->btf_var.reg_type;
+-		switch (dst_reg->type) {
++		switch (base_type(dst_reg->type)) {
+ 		case PTR_TO_MEM:
+ 			dst_reg->mem_size = aux->btf_var.mem_size;
+ 			break;
+@@ -9595,7 +9596,7 @@ static int check_return_code(struct bpf_verifier_env *env)
+ 		/* enforce return zero from async callbacks like timer */
+ 		if (reg->type != SCALAR_VALUE) {
+ 			verbose(env, "In async callback the register R0 is not a known value (%s)\n",
+-				reg_type_str[reg->type]);
++				reg_type_str(env, reg->type));
+ 			return -EINVAL;
+ 		}
+ 
+@@ -9609,7 +9610,7 @@ static int check_return_code(struct bpf_verifier_env *env)
+ 	if (is_subprog) {
+ 		if (reg->type != SCALAR_VALUE) {
+ 			verbose(env, "At subprogram exit the register R0 is not a scalar value (%s)\n",
+-				reg_type_str[reg->type]);
++				reg_type_str(env, reg->type));
+ 			return -EINVAL;
+ 		}
+ 		return 0;
+@@ -9673,7 +9674,7 @@ static int check_return_code(struct bpf_verifier_env *env)
+ 
+ 	if (reg->type != SCALAR_VALUE) {
+ 		verbose(env, "At program exit the register R0 is not a known value (%s)\n",
+-			reg_type_str[reg->type]);
++			reg_type_str(env, reg->type));
+ 		return -EINVAL;
+ 	}
+ 
+@@ -10454,7 +10455,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+ 		return true;
+ 	if (rcur->type == NOT_INIT)
+ 		return false;
+-	switch (rold->type) {
++	switch (base_type(rold->type)) {
+ 	case SCALAR_VALUE:
+ 		if (env->explore_alu_limits)
+ 			return false;
+@@ -10476,6 +10477,22 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+ 		}
+ 	case PTR_TO_MAP_KEY:
+ 	case PTR_TO_MAP_VALUE:
++		/* a PTR_TO_MAP_VALUE could be safe to use as a
++		 * PTR_TO_MAP_VALUE_OR_NULL into the same map.
++		 * However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
++		 * checked, doing so could have affected others with the same
++		 * id, and we can't check for that because we lost the id when
++		 * we converted to a PTR_TO_MAP_VALUE.
++		 */
++		if (type_may_be_null(rold->type)) {
++			if (!type_may_be_null(rcur->type))
++				return false;
++			if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
++				return false;
++			/* Check our ids match any regs they're supposed to */
++			return check_ids(rold->id, rcur->id, idmap);
++		}
++
+ 		/* If the new min/max/var_off satisfy the old ones and
+ 		 * everything else matches, we are OK.
+ 		 * 'id' is not compared, since it's only used for maps with
+@@ -10487,20 +10504,6 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+ 		return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
+ 		       range_within(rold, rcur) &&
+ 		       tnum_in(rold->var_off, rcur->var_off);
+-	case PTR_TO_MAP_VALUE_OR_NULL:
+-		/* a PTR_TO_MAP_VALUE could be safe to use as a
+-		 * PTR_TO_MAP_VALUE_OR_NULL into the same map.
+-		 * However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
+-		 * checked, doing so could have affected others with the same
+-		 * id, and we can't check for that because we lost the id when
+-		 * we converted to a PTR_TO_MAP_VALUE.
+-		 */
+-		if (rcur->type != PTR_TO_MAP_VALUE_OR_NULL)
+-			return false;
+-		if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
+-			return false;
+-		/* Check our ids match any regs they're supposed to */
+-		return check_ids(rold->id, rcur->id, idmap);
+ 	case PTR_TO_PACKET_META:
+ 	case PTR_TO_PACKET:
+ 		if (rcur->type != rold->type)
+@@ -10529,11 +10532,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+ 	case PTR_TO_PACKET_END:
+ 	case PTR_TO_FLOW_KEYS:
+ 	case PTR_TO_SOCKET:
+-	case PTR_TO_SOCKET_OR_NULL:
+ 	case PTR_TO_SOCK_COMMON:
+-	case PTR_TO_SOCK_COMMON_OR_NULL:
+ 	case PTR_TO_TCP_SOCK:
+-	case PTR_TO_TCP_SOCK_OR_NULL:
+ 	case PTR_TO_XDP_SOCK:
+ 		/* Only valid matches are exact, which memcmp() above
+ 		 * would have accepted
+@@ -11059,17 +11059,13 @@ next:
+ /* Return true if it's OK to have the same insn return a different type. */
+ static bool reg_type_mismatch_ok(enum bpf_reg_type type)
+ {
+-	switch (type) {
++	switch (base_type(type)) {
+ 	case PTR_TO_CTX:
+ 	case PTR_TO_SOCKET:
+-	case PTR_TO_SOCKET_OR_NULL:
+ 	case PTR_TO_SOCK_COMMON:
+-	case PTR_TO_SOCK_COMMON_OR_NULL:
+ 	case PTR_TO_TCP_SOCK:
+-	case PTR_TO_TCP_SOCK_OR_NULL:
+ 	case PTR_TO_XDP_SOCK:
+ 	case PTR_TO_BTF_ID:
+-	case PTR_TO_BTF_ID_OR_NULL:
+ 		return false;
+ 	default:
+ 		return true;
+@@ -11293,7 +11289,7 @@ static int do_check(struct bpf_verifier_env *env)
+ 			if (is_ctx_reg(env, insn->dst_reg)) {
+ 				verbose(env, "BPF_ST stores into R%d %s is not allowed\n",
+ 					insn->dst_reg,
+-					reg_type_str[reg_state(env, insn->dst_reg)->type]);
++					reg_type_str(env, reg_state(env, insn->dst_reg)->type));
+ 				return -EACCES;
+ 			}
+ 
+@@ -11545,7 +11541,7 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env,
+ 			err = -EINVAL;
+ 			goto err_put;
+ 		}
+-		aux->btf_var.reg_type = PTR_TO_MEM;
++		aux->btf_var.reg_type = PTR_TO_MEM | MEM_RDONLY;
+ 		aux->btf_var.mem_size = tsize;
+ 	} else {
+ 		aux->btf_var.reg_type = PTR_TO_BTF_ID;
+@@ -13376,7 +13372,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
+ 				mark_reg_known_zero(env, regs, i);
+ 			else if (regs[i].type == SCALAR_VALUE)
+ 				mark_reg_unknown(env, regs, i);
+-			else if (regs[i].type == PTR_TO_MEM_OR_NULL) {
++			else if (base_type(regs[i].type) == PTR_TO_MEM) {
+ 				const u32 mem_size = regs[i].mem_size;
+ 
+ 				mark_reg_known_zero(env, regs, i);
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 473d17c431f3a..933155c969227 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -665,21 +665,16 @@ EXPORT_SYMBOL(cred_fscmp);
+ 
+ int set_cred_ucounts(struct cred *new)
+ {
+-	struct task_struct *task = current;
+-	const struct cred *old = task->real_cred;
+ 	struct ucounts *new_ucounts, *old_ucounts = new->ucounts;
+ 
+-	if (new->user == old->user && new->user_ns == old->user_ns)
+-		return 0;
+-
+ 	/*
+ 	 * This optimization is needed because alloc_ucounts() uses locks
+ 	 * for table lookups.
+ 	 */
+-	if (old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid))
++	if (old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->uid))
+ 		return 0;
+ 
+-	if (!(new_ucounts = alloc_ucounts(new->user_ns, new->euid)))
++	if (!(new_ucounts = alloc_ucounts(new->user_ns, new->uid)))
+ 		return -EAGAIN;
+ 
+ 	new->ucounts = new_ucounts;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 3244cc56b697d..50d02e3103a57 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2052,18 +2052,18 @@ static __latent_entropy struct task_struct *copy_process(
+ #ifdef CONFIG_PROVE_LOCKING
+ 	DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
+ #endif
++	retval = copy_creds(p, clone_flags);
++	if (retval < 0)
++		goto bad_fork_free;
++
+ 	retval = -EAGAIN;
+ 	if (is_ucounts_overlimit(task_ucounts(p), UCOUNT_RLIMIT_NPROC, rlimit(RLIMIT_NPROC))) {
+ 		if (p->real_cred->user != INIT_USER &&
+ 		    !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN))
+-			goto bad_fork_free;
++			goto bad_fork_cleanup_count;
+ 	}
+ 	current->flags &= ~PF_NPROC_EXCEEDED;
+ 
+-	retval = copy_creds(p, clone_flags);
+-	if (retval < 0)
+-		goto bad_fork_free;
+-
+ 	/*
+ 	 * If multiple threads are within copy_process(), then this check
+ 	 * triggers too late. This doesn't hurt, the check is only there
+@@ -2350,10 +2350,6 @@ static __latent_entropy struct task_struct *copy_process(
+ 		goto bad_fork_cancel_cgroup;
+ 	}
+ 
+-	/* past the last point of failure */
+-	if (pidfile)
+-		fd_install(pidfd, pidfile);
+-
+ 	init_task_pid_links(p);
+ 	if (likely(p->pid)) {
+ 		ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace);
+@@ -2402,6 +2398,9 @@ static __latent_entropy struct task_struct *copy_process(
+ 	syscall_tracepoint_update(p);
+ 	write_unlock_irq(&tasklist_lock);
+ 
++	if (pidfile)
++		fd_install(pidfd, pidfile);
++
+ 	proc_fork_connector(p);
+ 	sched_post_fork(p, args);
+ 	cgroup_post_fork(p, args);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 2270ec68f10a1..d48cd608376ae 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3462,7 +3462,7 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
+ 	u16 chain_hlock = chain_hlocks[chain->base + i];
+ 	unsigned int class_idx = chain_hlock_class_idx(chain_hlock);
+ 
+-	return lock_classes + class_idx - 1;
++	return lock_classes + class_idx;
+ }
+ 
+ /*
+@@ -3530,7 +3530,7 @@ static void print_chain_keys_chain(struct lock_chain *chain)
+ 		hlock_id = chain_hlocks[chain->base + i];
+ 		chain_key = print_chain_key_iteration(hlock_id, chain_key);
+ 
+-		print_lock_name(lock_classes + chain_hlock_class_idx(hlock_id) - 1);
++		print_lock_name(lock_classes + chain_hlock_class_idx(hlock_id));
+ 		printk("\n");
+ 	}
+ }
+diff --git a/kernel/module.c b/kernel/module.c
+index 84a9141a5e159..f25e7653aa150 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3722,12 +3722,6 @@ static noinline int do_init_module(struct module *mod)
+ 	}
+ 	freeinit->module_init = mod->init_layout.base;
+ 
+-	/*
+-	 * We want to find out whether @mod uses async during init.  Clear
+-	 * PF_USED_ASYNC.  async_schedule*() will set it.
+-	 */
+-	current->flags &= ~PF_USED_ASYNC;
+-
+ 	do_mod_ctors(mod);
+ 	/* Start the module */
+ 	if (mod->init != NULL)
+@@ -3753,22 +3747,13 @@ static noinline int do_init_module(struct module *mod)
+ 
+ 	/*
+ 	 * We need to finish all async code before the module init sequence
+-	 * is done.  This has potential to deadlock.  For example, a newly
+-	 * detected block device can trigger request_module() of the
+-	 * default iosched from async probing task.  Once userland helper
+-	 * reaches here, async_synchronize_full() will wait on the async
+-	 * task waiting on request_module() and deadlock.
+-	 *
+-	 * This deadlock is avoided by perfomring async_synchronize_full()
+-	 * iff module init queued any async jobs.  This isn't a full
+-	 * solution as it will deadlock the same if module loading from
+-	 * async jobs nests more than once; however, due to the various
+-	 * constraints, this hack seems to be the best option for now.
+-	 * Please refer to the following thread for details.
++	 * is done. This has potential to deadlock if synchronous module
++	 * loading is requested from async (which is not allowed!).
+ 	 *
+-	 * http://thread.gmane.org/gmane.linux.kernel/1420814
++	 * See commit 0fdff3ec6d87 ("async, kmod: warn on synchronous
++	 * request_module() from async workers") for more details.
+ 	 */
+-	if (!mod->async_probe_requested && (current->flags & PF_USED_ASYNC))
++	if (!mod->async_probe_requested)
+ 		async_synchronize_full();
+ 
+ 	ftrace_free_mem(mod, mod->init_layout.base, mod->init_layout.base +
+diff --git a/kernel/stackleak.c b/kernel/stackleak.c
+index ce161a8e8d975..dd07239ddff9f 100644
+--- a/kernel/stackleak.c
++++ b/kernel/stackleak.c
+@@ -48,7 +48,7 @@ int stack_erasing_sysctl(struct ctl_table *table, int write,
+ #define skip_erasing()	false
+ #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */
+ 
+-asmlinkage void notrace stackleak_erase(void)
++asmlinkage void noinstr stackleak_erase(void)
+ {
+ 	/* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */
+ 	unsigned long kstack_ptr = current->lowest_stack;
+@@ -102,9 +102,8 @@ asmlinkage void notrace stackleak_erase(void)
+ 	/* Reset the 'lowest_stack' value for the next syscall */
+ 	current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64;
+ }
+-NOKPROBE_SYMBOL(stackleak_erase);
+ 
+-void __used __no_caller_saved_registers notrace stackleak_track_stack(void)
++void __used __no_caller_saved_registers noinstr stackleak_track_stack(void)
+ {
+ 	unsigned long sp = current_stack_pointer;
+ 
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 8fdac0d90504a..3e4e8930fafc6 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -472,6 +472,16 @@ static int set_user(struct cred *new)
+ 	if (!new_user)
+ 		return -EAGAIN;
+ 
++	free_uid(new->user);
++	new->user = new_user;
++	return 0;
++}
++
++static void flag_nproc_exceeded(struct cred *new)
++{
++	if (new->ucounts == current_ucounts())
++		return;
++
+ 	/*
+ 	 * We don't fail in case of NPROC limit excess here because too many
+ 	 * poorly written programs don't check set*uid() return code, assuming
+@@ -480,15 +490,10 @@ static int set_user(struct cred *new)
+ 	 * failure to the execve() stage.
+ 	 */
+ 	if (is_ucounts_overlimit(new->ucounts, UCOUNT_RLIMIT_NPROC, rlimit(RLIMIT_NPROC)) &&
+-			new_user != INIT_USER &&
+-			!capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN))
++			new->user != INIT_USER)
+ 		current->flags |= PF_NPROC_EXCEEDED;
+ 	else
+ 		current->flags &= ~PF_NPROC_EXCEEDED;
+-
+-	free_uid(new->user);
+-	new->user = new_user;
+-	return 0;
+ }
+ 
+ /*
+@@ -563,6 +568,7 @@ long __sys_setreuid(uid_t ruid, uid_t euid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	flag_nproc_exceeded(new);
+ 	return commit_creds(new);
+ 
+ error:
+@@ -625,6 +631,7 @@ long __sys_setuid(uid_t uid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	flag_nproc_exceeded(new);
+ 	return commit_creds(new);
+ 
+ error:
+@@ -704,6 +711,7 @@ long __sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	flag_nproc_exceeded(new);
+ 	return commit_creds(new);
+ 
+ error:
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index e36d184615fb7..4cc73a0d1215b 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -345,7 +345,7 @@ static const struct bpf_func_proto bpf_probe_write_user_proto = {
+ 	.gpl_only	= true,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_ANYTHING,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -394,7 +394,7 @@ static const struct bpf_func_proto bpf_trace_printk_proto = {
+ 	.func		= bpf_trace_printk,
+ 	.gpl_only	= true,
+ 	.ret_type	= RET_INTEGER,
+-	.arg1_type	= ARG_PTR_TO_MEM,
++	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -450,9 +450,9 @@ static const struct bpf_func_proto bpf_trace_vprintk_proto = {
+ 	.func		= bpf_trace_vprintk,
+ 	.gpl_only	= true,
+ 	.ret_type	= RET_INTEGER,
+-	.arg1_type	= ARG_PTR_TO_MEM,
++	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE,
+-	.arg3_type	= ARG_PTR_TO_MEM_OR_NULL,
++	.arg3_type	= ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -492,9 +492,9 @@ static const struct bpf_func_proto bpf_seq_printf_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_BTF_ID,
+ 	.arg1_btf_id	= &btf_seq_file_ids[0],
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+-	.arg4_type      = ARG_PTR_TO_MEM_OR_NULL,
++	.arg4_type      = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
+ 	.arg5_type      = ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -509,7 +509,7 @@ static const struct bpf_func_proto bpf_seq_write_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_BTF_ID,
+ 	.arg1_btf_id	= &btf_seq_file_ids[0],
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -533,7 +533,7 @@ static const struct bpf_func_proto bpf_seq_printf_btf_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_BTF_ID,
+ 	.arg1_btf_id	= &btf_seq_file_ids[0],
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+ 	.arg4_type	= ARG_ANYTHING,
+ };
+@@ -694,7 +694,7 @@ static const struct bpf_func_proto bpf_perf_event_output_proto = {
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_CONST_MAP_PTR,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -1004,7 +1004,7 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_MEM,
+ 	.arg2_type	= ARG_CONST_SIZE,
+-	.arg3_type	= ARG_PTR_TO_MEM,
++	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE,
+ 	.arg5_type	= ARG_ANYTHING,
+ };
+@@ -1285,7 +1285,7 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_tp = {
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_CONST_MAP_PTR,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -1507,7 +1507,7 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_CONST_MAP_PTR,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -1561,7 +1561,7 @@ static const struct bpf_func_proto bpf_get_stack_proto_raw_tp = {
+ 	.gpl_only	= true,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+ 	.arg4_type	= ARG_ANYTHING,
+ };
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index ae9f9e4af9314..bb15059020445 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -252,6 +252,10 @@ __setup("trace_clock=", set_trace_boot_clock);
+ 
+ static int __init set_tracepoint_printk(char *str)
+ {
++	/* Ignore the "tp_printk_stop_on_boot" param */
++	if (*str == '_')
++		return 0;
++
+ 	if ((strcmp(str, "=0") != 0 && strcmp(str, "=off") != 0))
+ 		tracepoint_printk = 1;
+ 	return 1;
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 65b597431c861..06ea04d446852 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -350,7 +350,8 @@ bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsign
+ 	if (rlimit > LONG_MAX)
+ 		max = LONG_MAX;
+ 	for (iter = ucounts; iter; iter = iter->ns->ucounts) {
+-		if (get_ucounts_value(iter, type) > max)
++		long val = get_ucounts_value(iter, type);
++		if (val < 0 || val > max)
+ 			return true;
+ 		max = READ_ONCE(iter->ns->ucount_max[type]);
+ 	}
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 66a740e6e153c..6d146f77601d7 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -416,6 +416,7 @@ static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t by
+ 		return 0;
+ 
+ 	buf->ops = &page_cache_pipe_buf_ops;
++	buf->flags = 0;
+ 	get_page(page);
+ 	buf->page = page;
+ 	buf->offset = offset;
+@@ -579,6 +580,7 @@ static size_t push_pipe(struct iov_iter *i, size_t size,
+ 			break;
+ 
+ 		buf->ops = &default_pipe_buf_ops;
++		buf->flags = 0;
+ 		buf->page = page;
+ 		buf->offset = 0;
+ 		buf->len = min_t(ssize_t, left, PAGE_SIZE);
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index e552f5e0ccbde..02a11c49b5a87 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -94,7 +94,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+ 
+ 				/* Also skip shared copy-on-write pages */
+ 				if (is_cow_mapping(vma->vm_flags) &&
+-				    page_mapcount(page) != 1)
++				    page_count(page) != 1)
+ 					continue;
+ 
+ 				/*
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 02f43f3e2c564..44a8730c26acc 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -77,6 +77,7 @@ static void ax25_kill_by_device(struct net_device *dev)
+ {
+ 	ax25_dev *ax25_dev;
+ 	ax25_cb *s;
++	struct sock *sk;
+ 
+ 	if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
+ 		return;
+@@ -85,13 +86,15 @@ static void ax25_kill_by_device(struct net_device *dev)
+ again:
+ 	ax25_for_each(s, &ax25_list) {
+ 		if (s->ax25_dev == ax25_dev) {
++			sk = s->sk;
++			sock_hold(sk);
+ 			spin_unlock_bh(&ax25_list_lock);
+-			lock_sock(s->sk);
++			lock_sock(sk);
+ 			s->ax25_dev = NULL;
+-			release_sock(s->sk);
++			release_sock(sk);
+ 			ax25_disconnect(s, ENETUNREACH);
+ 			spin_lock_bh(&ax25_list_lock);
+-
++			sock_put(sk);
+ 			/* The entry could have been deleted from the
+ 			 * list meanwhile and thus the next pointer is
+ 			 * no longer valid.  Play it safe and restart
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index de24098894897..db4f2641d1cd1 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -82,6 +82,9 @@ static void br_multicast_find_del_pg(struct net_bridge *br,
+ 				     struct net_bridge_port_group *pg);
+ static void __br_multicast_stop(struct net_bridge_mcast *brmctx);
+ 
++static int br_mc_disabled_update(struct net_device *dev, bool value,
++				 struct netlink_ext_ack *extack);
++
+ static struct net_bridge_port_group *
+ br_sg_port_find(struct net_bridge *br,
+ 		struct net_bridge_port_group_sg_key *sg_p)
+@@ -1156,6 +1159,7 @@ struct net_bridge_mdb_entry *br_multicast_new_group(struct net_bridge *br,
+ 		return mp;
+ 
+ 	if (atomic_read(&br->mdb_hash_tbl.nelems) >= br->hash_max) {
++		br_mc_disabled_update(br->dev, false, NULL);
+ 		br_opt_toggle(br, BROPT_MULTICAST_ENABLED, false);
+ 		return ERR_PTR(-E2BIG);
+ 	}
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index 68d2cbf8331ac..ea61dfe19c869 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -929,7 +929,7 @@ static struct bpf_iter_reg bpf_sk_storage_map_reg_info = {
+ 		{ offsetof(struct bpf_iter__bpf_sk_storage_map, sk),
+ 		  PTR_TO_BTF_ID_OR_NULL },
+ 		{ offsetof(struct bpf_iter__bpf_sk_storage_map, value),
+-		  PTR_TO_RDWR_BUF_OR_NULL },
++		  PTR_TO_BUF | PTR_MAYBE_NULL },
+ 	},
+ 	.seq_info		= &iter_seq_info,
+ };
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index 49442cae6f69d..1d99b731e5b21 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -280,13 +280,17 @@ static void trace_napi_poll_hit(void *ignore, struct napi_struct *napi,
+ 
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(new_stat, &hw_stats_list, list) {
++		struct net_device *dev;
++
+ 		/*
+ 		 * only add a note to our monitor buffer if:
+ 		 * 1) this is the dev we received on
+ 		 * 2) its after the last_rx delta
+ 		 * 3) our rx_dropped count has gone up
+ 		 */
+-		if ((new_stat->dev == napi->dev)  &&
++		/* Paired with WRITE_ONCE() in dropmon_net_event() */
++		dev = READ_ONCE(new_stat->dev);
++		if ((dev == napi->dev)  &&
+ 		    (time_after(jiffies, new_stat->last_rx + dm_hw_check_delta)) &&
+ 		    (napi->dev->stats.rx_dropped != new_stat->last_drop_val)) {
+ 			trace_drop_common(NULL, NULL);
+@@ -1572,7 +1576,10 @@ static int dropmon_net_event(struct notifier_block *ev_block,
+ 		mutex_lock(&net_dm_mutex);
+ 		list_for_each_entry_safe(new_stat, tmp, &hw_stats_list, list) {
+ 			if (new_stat->dev == dev) {
+-				new_stat->dev = NULL;
++
++				/* Paired with READ_ONCE() in trace_napi_poll_hit() */
++				WRITE_ONCE(new_stat->dev, NULL);
++
+ 				if (trace_state == TRACE_OFF) {
+ 					list_del_rcu(&new_stat->list);
+ 					kfree_rcu(new_stat, rcu);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 5b82a817f65a6..22bed067284fb 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1713,7 +1713,7 @@ static const struct bpf_func_proto bpf_skb_store_bytes_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+-	.arg3_type	= ARG_PTR_TO_MEM,
++	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE,
+ 	.arg5_type	= ARG_ANYTHING,
+ };
+@@ -2018,9 +2018,9 @@ static const struct bpf_func_proto bpf_csum_diff_proto = {
+ 	.gpl_only	= false,
+ 	.pkt_access	= true,
+ 	.ret_type	= RET_INTEGER,
+-	.arg1_type	= ARG_PTR_TO_MEM_OR_NULL,
++	.arg1_type	= ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
+-	.arg3_type	= ARG_PTR_TO_MEM_OR_NULL,
++	.arg3_type	= ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE_OR_ZERO,
+ 	.arg5_type	= ARG_ANYTHING,
+ };
+@@ -2541,7 +2541,7 @@ static const struct bpf_func_proto bpf_redirect_neigh_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_ANYTHING,
+-	.arg2_type      = ARG_PTR_TO_MEM_OR_NULL,
++	.arg2_type      = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
+ 	.arg3_type      = ARG_CONST_SIZE_OR_ZERO,
+ 	.arg4_type	= ARG_ANYTHING,
+ };
+@@ -4174,7 +4174,7 @@ static const struct bpf_func_proto bpf_skb_event_output_proto = {
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_CONST_MAP_PTR,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -4188,7 +4188,7 @@ const struct bpf_func_proto bpf_skb_output_proto = {
+ 	.arg1_btf_id	= &bpf_skb_output_btf_ids[0],
+ 	.arg2_type	= ARG_CONST_MAP_PTR,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -4371,7 +4371,7 @@ static const struct bpf_func_proto bpf_skb_set_tunnel_key_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ };
+@@ -4397,7 +4397,7 @@ static const struct bpf_func_proto bpf_skb_set_tunnel_opt_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -4567,7 +4567,7 @@ static const struct bpf_func_proto bpf_xdp_event_output_proto = {
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_CONST_MAP_PTR,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -4581,7 +4581,7 @@ const struct bpf_func_proto bpf_xdp_output_proto = {
+ 	.arg1_btf_id	= &bpf_xdp_output_btf_ids[0],
+ 	.arg2_type	= ARG_CONST_MAP_PTR,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+ };
+ 
+@@ -5069,7 +5069,7 @@ const struct bpf_func_proto bpf_sk_setsockopt_proto = {
+ 	.arg1_type	= ARG_PTR_TO_BTF_ID_SOCK_COMMON,
+ 	.arg2_type	= ARG_ANYTHING,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -5103,7 +5103,7 @@ static const struct bpf_func_proto bpf_sock_addr_setsockopt_proto = {
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -5137,7 +5137,7 @@ static const struct bpf_func_proto bpf_sock_ops_setsockopt_proto = {
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -5312,7 +5312,7 @@ static const struct bpf_func_proto bpf_bind_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -5900,7 +5900,7 @@ static const struct bpf_func_proto bpf_lwt_in_push_encap_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+-	.arg3_type	= ARG_PTR_TO_MEM,
++	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE
+ };
+ 
+@@ -5910,7 +5910,7 @@ static const struct bpf_func_proto bpf_lwt_xmit_push_encap_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+-	.arg3_type	= ARG_PTR_TO_MEM,
++	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE
+ };
+ 
+@@ -5953,7 +5953,7 @@ static const struct bpf_func_proto bpf_lwt_seg6_store_bytes_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+-	.arg3_type	= ARG_PTR_TO_MEM,
++	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE
+ };
+ 
+@@ -6041,7 +6041,7 @@ static const struct bpf_func_proto bpf_lwt_seg6_action_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+-	.arg3_type	= ARG_PTR_TO_MEM,
++	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg4_type	= ARG_CONST_SIZE
+ };
+ 
+@@ -6266,7 +6266,7 @@ static const struct bpf_func_proto bpf_skc_lookup_tcp_proto = {
+ 	.pkt_access	= true,
+ 	.ret_type	= RET_PTR_TO_SOCK_COMMON_OR_NULL,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ 	.arg5_type	= ARG_ANYTHING,
+@@ -6285,7 +6285,7 @@ static const struct bpf_func_proto bpf_sk_lookup_tcp_proto = {
+ 	.pkt_access	= true,
+ 	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ 	.arg5_type	= ARG_ANYTHING,
+@@ -6304,7 +6304,7 @@ static const struct bpf_func_proto bpf_sk_lookup_udp_proto = {
+ 	.pkt_access	= true,
+ 	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ 	.arg5_type	= ARG_ANYTHING,
+@@ -6341,7 +6341,7 @@ static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
+ 	.pkt_access     = true,
+ 	.ret_type       = RET_PTR_TO_SOCKET_OR_NULL,
+ 	.arg1_type      = ARG_PTR_TO_CTX,
+-	.arg2_type      = ARG_PTR_TO_MEM,
++	.arg2_type      = ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type      = ARG_CONST_SIZE,
+ 	.arg4_type      = ARG_ANYTHING,
+ 	.arg5_type      = ARG_ANYTHING,
+@@ -6364,7 +6364,7 @@ static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
+ 	.pkt_access     = true,
+ 	.ret_type       = RET_PTR_TO_SOCK_COMMON_OR_NULL,
+ 	.arg1_type      = ARG_PTR_TO_CTX,
+-	.arg2_type      = ARG_PTR_TO_MEM,
++	.arg2_type      = ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type      = ARG_CONST_SIZE,
+ 	.arg4_type      = ARG_ANYTHING,
+ 	.arg5_type      = ARG_ANYTHING,
+@@ -6387,7 +6387,7 @@ static const struct bpf_func_proto bpf_xdp_sk_lookup_tcp_proto = {
+ 	.pkt_access     = true,
+ 	.ret_type       = RET_PTR_TO_SOCKET_OR_NULL,
+ 	.arg1_type      = ARG_PTR_TO_CTX,
+-	.arg2_type      = ARG_PTR_TO_MEM,
++	.arg2_type      = ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type      = ARG_CONST_SIZE,
+ 	.arg4_type      = ARG_ANYTHING,
+ 	.arg5_type      = ARG_ANYTHING,
+@@ -6406,7 +6406,7 @@ static const struct bpf_func_proto bpf_sock_addr_skc_lookup_tcp_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_PTR_TO_SOCK_COMMON_OR_NULL,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ 	.arg5_type	= ARG_ANYTHING,
+@@ -6425,7 +6425,7 @@ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_tcp_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ 	.arg5_type	= ARG_ANYTHING,
+@@ -6444,7 +6444,7 @@ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_udp_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ 	.arg5_type	= ARG_ANYTHING,
+@@ -6757,9 +6757,9 @@ static const struct bpf_func_proto bpf_tcp_check_syncookie_proto = {
+ 	.pkt_access	= true,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_BTF_ID_SOCK_COMMON,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -6826,9 +6826,9 @@ static const struct bpf_func_proto bpf_tcp_gen_syncookie_proto = {
+ 	.pkt_access	= true,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_BTF_ID_SOCK_COMMON,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+-	.arg4_type	= ARG_PTR_TO_MEM,
++	.arg4_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg5_type	= ARG_CONST_SIZE,
+ };
+ 
+@@ -7057,7 +7057,7 @@ static const struct bpf_func_proto bpf_sock_ops_store_hdr_opt_proto = {
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_MEM,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg3_type	= ARG_CONST_SIZE,
+ 	.arg4_type	= ARG_ANYTHING,
+ };
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index abab13633f845..8b5c5703d7582 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1698,6 +1698,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ {
+ 	struct ifinfomsg *ifm;
+ 	struct nlmsghdr *nlh;
++	struct Qdisc *qdisc;
+ 
+ 	ASSERT_RTNL();
+ 	nlh = nlmsg_put(skb, pid, seq, type, sizeof(*ifm), flags);
+@@ -1715,6 +1716,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ 	if (tgt_netnsid >= 0 && nla_put_s32(skb, IFLA_TARGET_NETNSID, tgt_netnsid))
+ 		goto nla_put_failure;
+ 
++	qdisc = rtnl_dereference(dev->qdisc);
+ 	if (nla_put_string(skb, IFLA_IFNAME, dev->name) ||
+ 	    nla_put_u32(skb, IFLA_TXQLEN, dev->tx_queue_len) ||
+ 	    nla_put_u8(skb, IFLA_OPERSTATE,
+@@ -1733,8 +1735,8 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ #endif
+ 	    put_master_ifindex(skb, dev) ||
+ 	    nla_put_u8(skb, IFLA_CARRIER, netif_carrier_ok(dev)) ||
+-	    (dev->qdisc &&
+-	     nla_put_string(skb, IFLA_QDISC, dev->qdisc->ops->id)) ||
++	    (qdisc &&
++	     nla_put_string(skb, IFLA_QDISC, qdisc->ops->id)) ||
+ 	    nla_put_ifalias(skb, dev) ||
+ 	    nla_put_u32(skb, IFLA_CARRIER_CHANGES,
+ 			atomic_read(&dev->carrier_up_count) +
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 687c81386518c..1827669eedd6f 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -1569,7 +1569,7 @@ static struct bpf_iter_reg sock_map_iter_reg = {
+ 	.ctx_arg_info_size	= 2,
+ 	.ctx_arg_info		= {
+ 		{ offsetof(struct bpf_iter__sockmap, key),
+-		  PTR_TO_RDONLY_BUF_OR_NULL },
++		  PTR_TO_BUF | PTR_MAYBE_NULL | MEM_RDONLY },
+ 		{ offsetof(struct bpf_iter__sockmap, sk),
+ 		  PTR_TO_BTF_ID_OR_NULL },
+ 	},
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index ea5169e671aea..817f884b43d6e 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -349,6 +349,7 @@ void dsa_flush_workqueue(void)
+ {
+ 	flush_workqueue(dsa_owq);
+ }
++EXPORT_SYMBOL_GPL(dsa_flush_workqueue);
+ 
+ int dsa_devlink_param_get(struct devlink *dl, u32 id,
+ 			  struct devlink_param_gset_ctx *ctx)
+diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
+index a5c9bc7b66c6e..33ab7d7af9eb4 100644
+--- a/net/dsa/dsa_priv.h
++++ b/net/dsa/dsa_priv.h
+@@ -170,7 +170,6 @@ void dsa_tag_driver_put(const struct dsa_device_ops *ops);
+ const struct dsa_device_ops *dsa_find_tagger_by_name(const char *buf);
+ 
+ bool dsa_schedule_work(struct work_struct *work);
+-void dsa_flush_workqueue(void);
+ const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops);
+ 
+ static inline int dsa_tag_protocol_overhead(const struct dsa_device_ops *ops)
+diff --git a/net/dsa/tag_lan9303.c b/net/dsa/tag_lan9303.c
+index cb548188f8134..98d7d7120bab2 100644
+--- a/net/dsa/tag_lan9303.c
++++ b/net/dsa/tag_lan9303.c
+@@ -77,7 +77,6 @@ static struct sk_buff *lan9303_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ static struct sk_buff *lan9303_rcv(struct sk_buff *skb, struct net_device *dev)
+ {
+-	__be16 *lan9303_tag;
+ 	u16 lan9303_tag1;
+ 	unsigned int source_port;
+ 
+@@ -87,14 +86,15 @@ static struct sk_buff *lan9303_rcv(struct sk_buff *skb, struct net_device *dev)
+ 		return NULL;
+ 	}
+ 
+-	lan9303_tag = dsa_etype_header_pos_rx(skb);
+-
+-	if (lan9303_tag[0] != htons(ETH_P_8021Q)) {
+-		dev_warn_ratelimited(&dev->dev, "Dropping packet due to invalid VLAN marker\n");
+-		return NULL;
++	if (skb_vlan_tag_present(skb)) {
++		lan9303_tag1 = skb_vlan_tag_get(skb);
++		__vlan_hwaccel_clear_tag(skb);
++	} else {
++		skb_push_rcsum(skb, ETH_HLEN);
++		__skb_vlan_pop(skb, &lan9303_tag1);
++		skb_pull_rcsum(skb, ETH_HLEN);
+ 	}
+ 
+-	lan9303_tag1 = ntohs(lan9303_tag[1]);
+ 	source_port = lan9303_tag1 & 0x3;
+ 
+ 	skb->dev = dsa_master_find_slave(dev, 0, source_port);
+@@ -103,13 +103,6 @@ static struct sk_buff *lan9303_rcv(struct sk_buff *skb, struct net_device *dev)
+ 		return NULL;
+ 	}
+ 
+-	/* remove the special VLAN tag between the MAC addresses
+-	 * and the current ethertype field.
+-	 */
+-	skb_pull_rcsum(skb, 2 + 2);
+-
+-	dsa_strip_etype_header(skb, LAN9303_TAG_LEN);
+-
+ 	if (!(lan9303_tag1 & LAN9303_TAG_RX_TRAPPED_TO_CPU))
+ 		dsa_default_offload_fwd_mark(skb);
+ 
+diff --git a/net/ipv4/fib_lookup.h b/net/ipv4/fib_lookup.h
+index e184bcb199434..78e40ea42e58d 100644
+--- a/net/ipv4/fib_lookup.h
++++ b/net/ipv4/fib_lookup.h
+@@ -16,10 +16,9 @@ struct fib_alias {
+ 	u8			fa_slen;
+ 	u32			tb_id;
+ 	s16			fa_default;
+-	u8			offload:1,
+-				trap:1,
+-				offload_failed:1,
+-				unused:5;
++	u8			offload;
++	u8			trap;
++	u8			offload_failed;
+ 	struct rcu_head		rcu;
+ };
+ 
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 5dfb94abe7b10..d244c57b73031 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -524,9 +524,9 @@ void rtmsg_fib(int event, __be32 key, struct fib_alias *fa,
+ 	fri.dst_len = dst_len;
+ 	fri.tos = fa->fa_tos;
+ 	fri.type = fa->fa_type;
+-	fri.offload = fa->offload;
+-	fri.trap = fa->trap;
+-	fri.offload_failed = fa->offload_failed;
++	fri.offload = READ_ONCE(fa->offload);
++	fri.trap = READ_ONCE(fa->trap);
++	fri.offload_failed = READ_ONCE(fa->offload_failed);
+ 	err = fib_dump_info(skb, info->portid, seq, event, &fri, nlm_flags);
+ 	if (err < 0) {
+ 		/* -EMSGSIZE implies BUG in fib_nlmsg_size() */
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index 8060524f42566..f7f74d5c14da6 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -1047,19 +1047,23 @@ void fib_alias_hw_flags_set(struct net *net, const struct fib_rt_info *fri)
+ 	if (!fa_match)
+ 		goto out;
+ 
+-	if (fa_match->offload == fri->offload && fa_match->trap == fri->trap &&
+-	    fa_match->offload_failed == fri->offload_failed)
++	/* These are paired with the WRITE_ONCE() happening in this function.
++	 * The reason is that we are only protected by RCU at this point.
++	 */
++	if (READ_ONCE(fa_match->offload) == fri->offload &&
++	    READ_ONCE(fa_match->trap) == fri->trap &&
++	    READ_ONCE(fa_match->offload_failed) == fri->offload_failed)
+ 		goto out;
+ 
+-	fa_match->offload = fri->offload;
+-	fa_match->trap = fri->trap;
++	WRITE_ONCE(fa_match->offload, fri->offload);
++	WRITE_ONCE(fa_match->trap, fri->trap);
+ 
+ 	/* 2 means send notifications only if offload_failed was changed. */
+ 	if (net->ipv4.sysctl_fib_notify_on_flag_change == 2 &&
+-	    fa_match->offload_failed == fri->offload_failed)
++	    READ_ONCE(fa_match->offload_failed) == fri->offload_failed)
+ 		goto out;
+ 
+-	fa_match->offload_failed = fri->offload_failed;
++	WRITE_ONCE(fa_match->offload_failed, fri->offload_failed);
+ 
+ 	if (!net->ipv4.sysctl_fib_notify_on_flag_change)
+ 		goto out;
+@@ -2297,9 +2301,9 @@ static int fn_trie_dump_leaf(struct key_vector *l, struct fib_table *tb,
+ 				fri.dst_len = KEYLENGTH - fa->fa_slen;
+ 				fri.tos = fa->fa_tos;
+ 				fri.type = fa->fa_type;
+-				fri.offload = fa->offload;
+-				fri.trap = fa->trap;
+-				fri.offload_failed = fa->offload_failed;
++				fri.offload = READ_ONCE(fa->offload);
++				fri.trap = READ_ONCE(fa->trap);
++				fri.offload_failed = READ_ONCE(fa->offload_failed);
+ 				err = fib_dump_info(skb,
+ 						    NETLINK_CB(cb->skb).portid,
+ 						    cb->nlh->nlmsg_seq,
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 086822cb1cc96..e3a159c8f231e 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -172,16 +172,23 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 	struct sock *sk = NULL;
+ 	struct inet_sock *isk;
+ 	struct hlist_nulls_node *hnode;
+-	int dif = skb->dev->ifindex;
++	int dif, sdif;
+ 
+ 	if (skb->protocol == htons(ETH_P_IP)) {
++		dif = inet_iif(skb);
++		sdif = inet_sdif(skb);
+ 		pr_debug("try to find: num = %d, daddr = %pI4, dif = %d\n",
+ 			 (int)ident, &ip_hdr(skb)->daddr, dif);
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	} else if (skb->protocol == htons(ETH_P_IPV6)) {
++		dif = inet6_iif(skb);
++		sdif = inet6_sdif(skb);
+ 		pr_debug("try to find: num = %d, daddr = %pI6c, dif = %d\n",
+ 			 (int)ident, &ipv6_hdr(skb)->daddr, dif);
+ #endif
++	} else {
++		pr_err("ping: protocol(%x) is not supported\n", ntohs(skb->protocol));
++		return NULL;
+ 	}
+ 
+ 	read_lock_bh(&ping_table.lock);
+@@ -221,7 +228,7 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 		}
+ 
+ 		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif &&
+-		    sk->sk_bound_dev_if != inet_sdif(skb))
++		    sk->sk_bound_dev_if != sdif)
+ 			continue;
+ 
+ 		sock_hold(sk);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 0b4103b1e6220..2c30c599cc161 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -3393,8 +3393,8 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ 				    fa->fa_tos == fri.tos &&
+ 				    fa->fa_info == res.fi &&
+ 				    fa->fa_type == fri.type) {
+-					fri.offload = fa->offload;
+-					fri.trap = fa->trap;
++					fri.offload = READ_ONCE(fa->offload);
++					fri.trap = READ_ONCE(fa->trap);
+ 					break;
+ 				}
+ 			}
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 87961f1d9959b..6652d96329a0c 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -1839,8 +1839,8 @@ out:
+ }
+ EXPORT_SYMBOL(ipv6_dev_get_saddr);
+ 
+-int __ipv6_get_lladdr(struct inet6_dev *idev, struct in6_addr *addr,
+-		      u32 banned_flags)
++static int __ipv6_get_lladdr(struct inet6_dev *idev, struct in6_addr *addr,
++			      u32 banned_flags)
+ {
+ 	struct inet6_ifaddr *ifp;
+ 	int err = -EADDRNOTAVAIL;
+diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
+index aa673a6a7e432..ceb85c67ce395 100644
+--- a/net/ipv6/ip6_flowlabel.c
++++ b/net/ipv6/ip6_flowlabel.c
+@@ -450,8 +450,10 @@ fl_create(struct net *net, struct sock *sk, struct in6_flowlabel_req *freq,
+ 		err = -EINVAL;
+ 		goto done;
+ 	}
+-	if (fl_shared_exclusive(fl) || fl->opt)
++	if (fl_shared_exclusive(fl) || fl->opt) {
++		WRITE_ONCE(sock_net(sk)->ipv6.flowlabel_has_excl, 1);
+ 		static_branch_deferred_inc(&ipv6_flowlabel_exclusive);
++	}
+ 	return fl;
+ 
+ done:
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index bed8155508c85..a8861db52c187 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1759,7 +1759,7 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+ 	skb_reserve(skb, hlen);
+ 	skb_tailroom_reserve(skb, mtu, tlen);
+ 
+-	if (__ipv6_get_lladdr(idev, &addr_buf, IFA_F_TENTATIVE)) {
++	if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) {
+ 		/* <draft-ietf-magma-mld-source-05.txt>:
+ 		 * use unspecified address as the source address
+ 		 * when a valid link-local address is not available.
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 49fee1f1951c2..75f916b7460c7 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5767,11 +5767,11 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	}
+ 
+ 	if (!dst) {
+-		if (rt->offload)
++		if (READ_ONCE(rt->offload))
+ 			rtm->rtm_flags |= RTM_F_OFFLOAD;
+-		if (rt->trap)
++		if (READ_ONCE(rt->trap))
+ 			rtm->rtm_flags |= RTM_F_TRAP;
+-		if (rt->offload_failed)
++		if (READ_ONCE(rt->offload_failed))
+ 			rtm->rtm_flags |= RTM_F_OFFLOAD_FAILED;
+ 	}
+ 
+@@ -6229,19 +6229,20 @@ void fib6_info_hw_flags_set(struct net *net, struct fib6_info *f6i,
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+-	if (f6i->offload == offload && f6i->trap == trap &&
+-	    f6i->offload_failed == offload_failed)
++	if (READ_ONCE(f6i->offload) == offload &&
++	    READ_ONCE(f6i->trap) == trap &&
++	    READ_ONCE(f6i->offload_failed) == offload_failed)
+ 		return;
+ 
+-	f6i->offload = offload;
+-	f6i->trap = trap;
++	WRITE_ONCE(f6i->offload, offload);
++	WRITE_ONCE(f6i->trap, trap);
+ 
+ 	/* 2 means send notifications only if offload_failed was changed. */
+ 	if (net->ipv6.sysctl.fib_notify_on_flag_change == 2 &&
+-	    f6i->offload_failed == offload_failed)
++	    READ_ONCE(f6i->offload_failed) == offload_failed)
+ 		return;
+ 
+-	f6i->offload_failed = offload_failed;
++	WRITE_ONCE(f6i->offload_failed, offload_failed);
+ 
+ 	if (!rcu_access_pointer(f6i->fib6_node))
+ 		/* The route was removed from the tree, do not send
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 3147ca89f608e..311b4d9344959 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -664,7 +664,7 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata,
+ 	ieee80211_ie_build_he_6ghz_cap(sdata, skb);
+ }
+ 
+-static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
++static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+ {
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+@@ -684,6 +684,7 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+ 	enum nl80211_iftype iftype = ieee80211_vif_type_p2p(&sdata->vif);
+ 	const struct ieee80211_sband_iftype_data *iftd;
+ 	struct ieee80211_prep_tx_info info = {};
++	int ret;
+ 
+ 	/* we know it's writable, cast away the const */
+ 	if (assoc_data->ie_len)
+@@ -697,7 +698,7 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+ 	chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
+ 	if (WARN_ON(!chanctx_conf)) {
+ 		rcu_read_unlock();
+-		return;
++		return -EINVAL;
+ 	}
+ 	chan = chanctx_conf->def.chan;
+ 	rcu_read_unlock();
+@@ -748,7 +749,7 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+ 			(iftd ? iftd->vendor_elems.len : 0),
+ 			GFP_KERNEL);
+ 	if (!skb)
+-		return;
++		return -ENOMEM;
+ 
+ 	skb_reserve(skb, local->hw.extra_tx_headroom);
+ 
+@@ -1029,15 +1030,22 @@ skip_rates:
+ 		skb_put_data(skb, assoc_data->ie + offset, noffset - offset);
+ 	}
+ 
+-	if (assoc_data->fils_kek_len &&
+-	    fils_encrypt_assoc_req(skb, assoc_data) < 0) {
+-		dev_kfree_skb(skb);
+-		return;
++	if (assoc_data->fils_kek_len) {
++		ret = fils_encrypt_assoc_req(skb, assoc_data);
++		if (ret < 0) {
++			dev_kfree_skb(skb);
++			return ret;
++		}
+ 	}
+ 
+ 	pos = skb_tail_pointer(skb);
+ 	kfree(ifmgd->assoc_req_ies);
+ 	ifmgd->assoc_req_ies = kmemdup(ie_start, pos - ie_start, GFP_ATOMIC);
++	if (!ifmgd->assoc_req_ies) {
++		dev_kfree_skb(skb);
++		return -ENOMEM;
++	}
++
+ 	ifmgd->assoc_req_ies_len = pos - ie_start;
+ 
+ 	drv_mgd_prepare_tx(local, sdata, &info);
+@@ -1047,6 +1055,8 @@ skip_rates:
+ 		IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS |
+ 						IEEE80211_TX_INTFL_MLME_CONN_TX;
+ 	ieee80211_tx_skb(sdata, skb);
++
++	return 0;
+ }
+ 
+ void ieee80211_send_pspoll(struct ieee80211_local *local,
+@@ -4491,6 +4501,7 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata)
+ {
+ 	struct ieee80211_mgd_assoc_data *assoc_data = sdata->u.mgd.assoc_data;
+ 	struct ieee80211_local *local = sdata->local;
++	int ret;
+ 
+ 	sdata_assert_lock(sdata);
+ 
+@@ -4511,7 +4522,9 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata)
+ 	sdata_info(sdata, "associate with %pM (try %d/%d)\n",
+ 		   assoc_data->bss->bssid, assoc_data->tries,
+ 		   IEEE80211_ASSOC_MAX_TRIES);
+-	ieee80211_send_assoc(sdata);
++	ret = ieee80211_send_assoc(sdata);
++	if (ret)
++		return ret;
+ 
+ 	if (!ieee80211_hw_check(&local->hw, REPORTS_TX_ACK_STATUS)) {
+ 		assoc_data->timeout = jiffies + IEEE80211_ASSOC_TIMEOUT;
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index cdf09c2a7007a..f8c0cb2de98be 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -414,13 +414,14 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ 			 * this function.
+ 			 */
+ 			rc = mctp_key_add(key, msk);
+-			if (rc)
++			if (rc) {
+ 				kfree(key);
++			} else {
++				trace_mctp_key_acquire(key);
+ 
+-			trace_mctp_key_acquire(key);
+-
+-			/* we don't need to release key->lock on exit */
+-			mctp_key_unref(key);
++				/* we don't need to release key->lock on exit */
++				mctp_key_unref(key);
++			}
+ 			key = NULL;
+ 
+ 		} else {
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index 2394238d01c91..5a936334b517a 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -489,6 +489,15 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ 			pr_debug("Setting vtag %x for dir %d\n",
+ 				 ih->init_tag, !dir);
+ 			ct->proto.sctp.vtag[!dir] = ih->init_tag;
++
++			/* don't renew timeout on init retransmit so
++			 * port reuse by client or NAT middlebox cannot
++			 * keep entry alive indefinitely (incl. nat info).
++			 */
++			if (new_state == SCTP_CONNTRACK_CLOSED &&
++			    old_state == SCTP_CONNTRACK_CLOSED &&
++			    nf_ct_is_confirmed(ct))
++				ignore = true;
+ 		}
+ 
+ 		ct->proto.sctp.state = new_state;
+diff --git a/net/netfilter/nft_synproxy.c b/net/netfilter/nft_synproxy.c
+index a0109fa1e92d0..1133e06f3c40e 100644
+--- a/net/netfilter/nft_synproxy.c
++++ b/net/netfilter/nft_synproxy.c
+@@ -191,8 +191,10 @@ static int nft_synproxy_do_init(const struct nft_ctx *ctx,
+ 		if (err)
+ 			goto nf_ct_failure;
+ 		err = nf_synproxy_ipv6_init(snet, ctx->net);
+-		if (err)
++		if (err) {
++			nf_synproxy_ipv4_fini(snet, ctx->net);
+ 			goto nf_ct_failure;
++		}
+ 		break;
+ 	}
+ 
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 3258da3d5bed5..2f46f9f9afb95 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -730,15 +730,24 @@ int tcf_action_exec(struct sk_buff *skb, struct tc_action **actions,
+ restart_act_graph:
+ 	for (i = 0; i < nr_actions; i++) {
+ 		const struct tc_action *a = actions[i];
++		int repeat_ttl;
+ 
+ 		if (jmp_prgcnt > 0) {
+ 			jmp_prgcnt -= 1;
+ 			continue;
+ 		}
++
++		repeat_ttl = 32;
+ repeat:
+ 		ret = a->ops->act(skb, a, res);
+-		if (ret == TC_ACT_REPEAT)
+-			goto repeat;	/* we need a ttl - JHS */
++
++		if (unlikely(ret == TC_ACT_REPEAT)) {
++			if (--repeat_ttl != 0)
++				goto repeat;
++			/* suspicious opcode, stop pipeline */
++			net_warn_ratelimited("TC_ACT_REPEAT abuse ?\n");
++			return TC_ACT_OK;
++		}
+ 
+ 		if (TC_ACT_EXT_CMP(ret, TC_ACT_JUMP)) {
+ 			jmp_prgcnt = ret & TCA_ACT_MAX_PRIO_MASK;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 56dba8519d7c3..cd44cac7fbcf9 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1044,7 +1044,7 @@ static int __tcf_qdisc_find(struct net *net, struct Qdisc **q,
+ 
+ 	/* Find qdisc */
+ 	if (!*parent) {
+-		*q = dev->qdisc;
++		*q = rcu_dereference(dev->qdisc);
+ 		*parent = (*q)->handle;
+ 	} else {
+ 		*q = qdisc_lookup_rcu(dev, TC_H_MAJ(*parent));
+@@ -2587,7 +2587,7 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		parent = tcm->tcm_parent;
+ 		if (!parent)
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 		else
+ 			q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent));
+ 		if (!q)
+@@ -2962,7 +2962,7 @@ static int tc_dump_chain(struct sk_buff *skb, struct netlink_callback *cb)
+ 			return skb->len;
+ 
+ 		if (!tcm->tcm_parent)
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 		else
+ 			q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent));
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index e4a7ce5c79f4f..6d9411b44258e 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -301,7 +301,7 @@ struct Qdisc *qdisc_lookup(struct net_device *dev, u32 handle)
+ 
+ 	if (!handle)
+ 		return NULL;
+-	q = qdisc_match_from_root(dev->qdisc, handle);
++	q = qdisc_match_from_root(rtnl_dereference(dev->qdisc), handle);
+ 	if (q)
+ 		goto out;
+ 
+@@ -320,7 +320,7 @@ struct Qdisc *qdisc_lookup_rcu(struct net_device *dev, u32 handle)
+ 
+ 	if (!handle)
+ 		return NULL;
+-	q = qdisc_match_from_root(dev->qdisc, handle);
++	q = qdisc_match_from_root(rcu_dereference(dev->qdisc), handle);
+ 	if (q)
+ 		goto out;
+ 
+@@ -1082,10 +1082,10 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ skip:
+ 		if (!ingress) {
+ 			notify_and_destroy(net, skb, n, classid,
+-					   dev->qdisc, new);
++					   rtnl_dereference(dev->qdisc), new);
+ 			if (new && !new->ops->attach)
+ 				qdisc_refcount_inc(new);
+-			dev->qdisc = new ? : &noop_qdisc;
++			rcu_assign_pointer(dev->qdisc, new ? : &noop_qdisc);
+ 
+ 			if (new && new->ops->attach)
+ 				new->ops->attach(new);
+@@ -1451,7 +1451,7 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 				q = dev_ingress_queue(dev)->qdisc_sleeping;
+ 			}
+ 		} else {
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 		}
+ 		if (!q) {
+ 			NL_SET_ERR_MSG(extack, "Cannot find specified qdisc on specified device");
+@@ -1540,7 +1540,7 @@ replay:
+ 				q = dev_ingress_queue(dev)->qdisc_sleeping;
+ 			}
+ 		} else {
+-			q = dev->qdisc;
++			q = rtnl_dereference(dev->qdisc);
+ 		}
+ 
+ 		/* It may be default qdisc, ignore it */
+@@ -1762,7 +1762,8 @@ static int tc_dump_qdisc(struct sk_buff *skb, struct netlink_callback *cb)
+ 			s_q_idx = 0;
+ 		q_idx = 0;
+ 
+-		if (tc_dump_qdisc_root(dev->qdisc, skb, cb, &q_idx, s_q_idx,
++		if (tc_dump_qdisc_root(rtnl_dereference(dev->qdisc),
++				       skb, cb, &q_idx, s_q_idx,
+ 				       true, tca[TCA_DUMP_INVISIBLE]) < 0)
+ 			goto done;
+ 
+@@ -2033,7 +2034,7 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ 		} else if (qid1) {
+ 			qid = qid1;
+ 		} else if (qid == 0)
+-			qid = dev->qdisc->handle;
++			qid = rtnl_dereference(dev->qdisc)->handle;
+ 
+ 		/* Now qid is genuine qdisc handle consistent
+ 		 * both with parent and child.
+@@ -2044,7 +2045,7 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ 			portid = TC_H_MAKE(qid, portid);
+ 	} else {
+ 		if (qid == 0)
+-			qid = dev->qdisc->handle;
++			qid = rtnl_dereference(dev->qdisc)->handle;
+ 	}
+ 
+ 	/* OK. Locate qdisc */
+@@ -2205,7 +2206,8 @@ static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb)
+ 	s_t = cb->args[0];
+ 	t = 0;
+ 
+-	if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t, true) < 0)
++	if (tc_dump_tclass_root(rtnl_dereference(dev->qdisc),
++				skb, tcm, cb, &t, s_t, true) < 0)
+ 		goto done;
+ 
+ 	dev_queue = dev_ingress_queue(dev);
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 5d391fe3137dc..28052706c4e36 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1109,30 +1109,33 @@ static void attach_default_qdiscs(struct net_device *dev)
+ 	if (!netif_is_multiqueue(dev) ||
+ 	    dev->priv_flags & IFF_NO_QUEUE) {
+ 		netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+-		dev->qdisc = txq->qdisc_sleeping;
+-		qdisc_refcount_inc(dev->qdisc);
++		qdisc = txq->qdisc_sleeping;
++		rcu_assign_pointer(dev->qdisc, qdisc);
++		qdisc_refcount_inc(qdisc);
+ 	} else {
+ 		qdisc = qdisc_create_dflt(txq, &mq_qdisc_ops, TC_H_ROOT, NULL);
+ 		if (qdisc) {
+-			dev->qdisc = qdisc;
++			rcu_assign_pointer(dev->qdisc, qdisc);
+ 			qdisc->ops->attach(qdisc);
+ 		}
+ 	}
++	qdisc = rtnl_dereference(dev->qdisc);
+ 
+ 	/* Detect default qdisc setup/init failed and fallback to "noqueue" */
+-	if (dev->qdisc == &noop_qdisc) {
++	if (qdisc == &noop_qdisc) {
+ 		netdev_warn(dev, "default qdisc (%s) fail, fallback to %s\n",
+ 			    default_qdisc_ops->id, noqueue_qdisc_ops.id);
+ 		dev->priv_flags |= IFF_NO_QUEUE;
+ 		netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
+-		dev->qdisc = txq->qdisc_sleeping;
+-		qdisc_refcount_inc(dev->qdisc);
++		qdisc = txq->qdisc_sleeping;
++		rcu_assign_pointer(dev->qdisc, qdisc);
++		qdisc_refcount_inc(qdisc);
+ 		dev->priv_flags ^= IFF_NO_QUEUE;
+ 	}
+ 
+ #ifdef CONFIG_NET_SCHED
+-	if (dev->qdisc != &noop_qdisc)
+-		qdisc_hash_add(dev->qdisc, false);
++	if (qdisc != &noop_qdisc)
++		qdisc_hash_add(qdisc, false);
+ #endif
+ }
+ 
+@@ -1162,7 +1165,7 @@ void dev_activate(struct net_device *dev)
+ 	 * and noqueue_qdisc for virtual interfaces
+ 	 */
+ 
+-	if (dev->qdisc == &noop_qdisc)
++	if (rtnl_dereference(dev->qdisc) == &noop_qdisc)
+ 		attach_default_qdiscs(dev);
+ 
+ 	if (!netif_carrier_ok(dev))
+@@ -1328,7 +1331,7 @@ static int qdisc_change_tx_queue_len(struct net_device *dev,
+ void dev_qdisc_change_real_num_tx(struct net_device *dev,
+ 				  unsigned int new_real_tx)
+ {
+-	struct Qdisc *qdisc = dev->qdisc;
++	struct Qdisc *qdisc = rtnl_dereference(dev->qdisc);
+ 
+ 	if (qdisc->ops->change_real_num_tx)
+ 		qdisc->ops->change_real_num_tx(qdisc, new_real_tx);
+@@ -1392,7 +1395,7 @@ static void dev_init_scheduler_queue(struct net_device *dev,
+ 
+ void dev_init_scheduler(struct net_device *dev)
+ {
+-	dev->qdisc = &noop_qdisc;
++	rcu_assign_pointer(dev->qdisc, &noop_qdisc);
+ 	netdev_for_each_tx_queue(dev, dev_init_scheduler_queue, &noop_qdisc);
+ 	if (dev_ingress_queue(dev))
+ 		dev_init_scheduler_queue(dev, dev_ingress_queue(dev), &noop_qdisc);
+@@ -1420,8 +1423,8 @@ void dev_shutdown(struct net_device *dev)
+ 	netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc);
+ 	if (dev_ingress_queue(dev))
+ 		shutdown_scheduler_queue(dev, dev_ingress_queue(dev), &noop_qdisc);
+-	qdisc_put(dev->qdisc);
+-	dev->qdisc = &noop_qdisc;
++	qdisc_put(rtnl_dereference(dev->qdisc));
++	rcu_assign_pointer(dev->qdisc, &noop_qdisc);
+ 
+ 	WARN_ON(timer_pending(&dev->watchdog_timer));
+ }
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 0f5c305c55c5a..10d2d81f93376 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -667,14 +667,17 @@ static void smc_fback_error_report(struct sock *clcsk)
+ static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
+ {
+ 	struct sock *clcsk;
++	int rc = 0;
+ 
+ 	mutex_lock(&smc->clcsock_release_lock);
+ 	if (!smc->clcsock) {
+-		mutex_unlock(&smc->clcsock_release_lock);
+-		return -EBADF;
++		rc = -EBADF;
++		goto out;
+ 	}
+ 	clcsk = smc->clcsock->sk;
+ 
++	if (smc->use_fallback)
++		goto out;
+ 	smc->use_fallback = true;
+ 	smc->fallback_rsn = reason_code;
+ 	smc_stat_fallback(smc);
+@@ -702,8 +705,9 @@ static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
+ 		smc->clcsock->sk->sk_user_data =
+ 			(void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY);
+ 	}
++out:
+ 	mutex_unlock(&smc->clcsock_release_lock);
+-	return 0;
++	return rc;
+ }
+ 
+ /* fall back during connect */
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 3d3673ba9e1e5..2a2e1514ac79a 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -436,6 +436,7 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ 					      IB_POLL_WORKQUEUE);
+ 	if (IS_ERR(ep->re_attr.send_cq)) {
+ 		rc = PTR_ERR(ep->re_attr.send_cq);
++		ep->re_attr.send_cq = NULL;
+ 		goto out_destroy;
+ 	}
+ 
+@@ -444,6 +445,7 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ 					      IB_POLL_WORKQUEUE);
+ 	if (IS_ERR(ep->re_attr.recv_cq)) {
+ 		rc = PTR_ERR(ep->re_attr.recv_cq);
++		ep->re_attr.recv_cq = NULL;
+ 		goto out_destroy;
+ 	}
+ 	ep->re_receive_count = 0;
+@@ -482,6 +484,7 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ 	ep->re_pd = ib_alloc_pd(device, 0);
+ 	if (IS_ERR(ep->re_pd)) {
+ 		rc = PTR_ERR(ep->re_pd);
++		ep->re_pd = NULL;
+ 		goto out_destroy;
+ 	}
+ 
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 9947b7dfe1d2d..6ef95ce565bd3 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -403,7 +403,7 @@ static void tipc_node_write_unlock(struct tipc_node *n)
+ 	u32 flags = n->action_flags;
+ 	struct list_head *publ_list;
+ 	struct tipc_uaddr ua;
+-	u32 bearer_id;
++	u32 bearer_id, node;
+ 
+ 	if (likely(!flags)) {
+ 		write_unlock_bh(&n->lock);
+@@ -413,7 +413,8 @@ static void tipc_node_write_unlock(struct tipc_node *n)
+ 	tipc_uaddr(&ua, TIPC_SERVICE_RANGE, TIPC_NODE_SCOPE,
+ 		   TIPC_LINK_STATE, n->addr, n->addr);
+ 	sk.ref = n->link_id;
+-	sk.node = n->addr;
++	sk.node = tipc_own_addr(net);
++	node = n->addr;
+ 	bearer_id = n->link_id & 0xffff;
+ 	publ_list = &n->publ_list;
+ 
+@@ -423,17 +424,17 @@ static void tipc_node_write_unlock(struct tipc_node *n)
+ 	write_unlock_bh(&n->lock);
+ 
+ 	if (flags & TIPC_NOTIFY_NODE_DOWN)
+-		tipc_publ_notify(net, publ_list, sk.node, n->capabilities);
++		tipc_publ_notify(net, publ_list, node, n->capabilities);
+ 
+ 	if (flags & TIPC_NOTIFY_NODE_UP)
+-		tipc_named_node_up(net, sk.node, n->capabilities);
++		tipc_named_node_up(net, node, n->capabilities);
+ 
+ 	if (flags & TIPC_NOTIFY_LINK_UP) {
+-		tipc_mon_peer_up(net, sk.node, bearer_id);
++		tipc_mon_peer_up(net, node, bearer_id);
+ 		tipc_nametbl_publish(net, &ua, &sk, sk.ref);
+ 	}
+ 	if (flags & TIPC_NOTIFY_LINK_DOWN) {
+-		tipc_mon_peer_down(net, sk.node, bearer_id);
++		tipc_mon_peer_down(net, node, bearer_id);
+ 		tipc_nametbl_withdraw(net, &ua, &sk, sk.ref);
+ 	}
+ }
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index ed0df839c38ce..8e4f2a1346be6 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1400,6 +1400,7 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ 			sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? TCP_CLOSING : TCP_CLOSE;
+ 			sock->state = SS_UNCONNECTED;
+ 			vsock_transport_cancel_pkt(vsk);
++			vsock_remove_connected(vsk);
+ 			goto out_wait;
+ 		} else if (timeout == 0) {
+ 			err = -ETIMEDOUT;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index eb297e1015e05..441136646f89a 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -5,7 +5,7 @@
+  * Copyright 2006-2010		Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright 2015-2017	Intel Deutschland GmbH
+- * Copyright (C) 2018-2021 Intel Corporation
++ * Copyright (C) 2018-2022 Intel Corporation
+  */
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+@@ -332,29 +332,20 @@ static void cfg80211_event_work(struct work_struct *work)
+ void cfg80211_destroy_ifaces(struct cfg80211_registered_device *rdev)
+ {
+ 	struct wireless_dev *wdev, *tmp;
+-	bool found = false;
+ 
+ 	ASSERT_RTNL();
+ 
+-	list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
++	list_for_each_entry_safe(wdev, tmp, &rdev->wiphy.wdev_list, list) {
+ 		if (wdev->nl_owner_dead) {
+ 			if (wdev->netdev)
+ 				dev_close(wdev->netdev);
+-			found = true;
+-		}
+-	}
+-
+-	if (!found)
+-		return;
+ 
+-	wiphy_lock(&rdev->wiphy);
+-	list_for_each_entry_safe(wdev, tmp, &rdev->wiphy.wdev_list, list) {
+-		if (wdev->nl_owner_dead) {
++			wiphy_lock(&rdev->wiphy);
+ 			cfg80211_leave(rdev, wdev);
+ 			rdev_del_virtual_intf(rdev, wdev);
++			wiphy_unlock(&rdev->wiphy);
+ 		}
+ 	}
+-	wiphy_unlock(&rdev->wiphy);
+ }
+ 
+ static void cfg80211_destroy_iface_wk(struct work_struct *work)
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index 00284c03da4d3..027f4c28dc320 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -992,14 +992,19 @@ static int conf_write_autoconf_cmd(const char *autoconf_name)
+ 
+ static int conf_touch_deps(void)
+ {
+-	const char *name;
++	const char *name, *tmp;
+ 	struct symbol *sym;
+ 	int res, i;
+ 
+-	strcpy(depfile_path, "include/config/");
+-	depfile_prefix_len = strlen(depfile_path);
+-
+ 	name = conf_get_autoconfig_name();
++	tmp = strrchr(name, '/');
++	depfile_prefix_len = tmp ? tmp - name + 1 : 0;
++	if (depfile_prefix_len + 1 > sizeof(depfile_path))
++		return -1;
++
++	strncpy(depfile_path, name, depfile_prefix_len);
++	depfile_path[depfile_prefix_len] = 0;
++
+ 	conf_read_simple(name, S_DEF_AUTO);
+ 	sym_calc_value(modules_sym);
+ 
+diff --git a/scripts/kconfig/preprocess.c b/scripts/kconfig/preprocess.c
+index 0590f86df6e40..748da578b418c 100644
+--- a/scripts/kconfig/preprocess.c
++++ b/scripts/kconfig/preprocess.c
+@@ -141,7 +141,7 @@ static char *do_lineno(int argc, char *argv[])
+ static char *do_shell(int argc, char *argv[])
+ {
+ 	FILE *p;
+-	char buf[256];
++	char buf[4096];
+ 	char *cmd;
+ 	size_t nread;
+ 	int i;
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 9fc971a704a9e..b03c185f6b21c 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -511,7 +511,8 @@ static void *snd_dma_noncontig_alloc(struct snd_dma_buffer *dmab, size_t size)
+ 				      DEFAULT_GFP, 0);
+ 	if (!sgt)
+ 		return NULL;
+-	dmab->dev.need_sync = dma_need_sync(dmab->dev.dev, dmab->dev.dir);
++	dmab->dev.need_sync = dma_need_sync(dmab->dev.dev,
++					    sg_dma_address(sgt->sgl));
+ 	p = dma_vmap_noncontiguous(dmab->dev.dev, size, sgt);
+ 	if (p)
+ 		dmab->private_data = sgt;
+@@ -540,9 +541,9 @@ static void snd_dma_noncontig_sync(struct snd_dma_buffer *dmab,
+ 	if (mode == SNDRV_DMA_SYNC_CPU) {
+ 		if (dmab->dev.dir == DMA_TO_DEVICE)
+ 			return;
++		invalidate_kernel_vmap_range(dmab->area, dmab->bytes);
+ 		dma_sync_sgtable_for_cpu(dmab->dev.dev, dmab->private_data,
+ 					 dmab->dev.dir);
+-		invalidate_kernel_vmap_range(dmab->area, dmab->bytes);
+ 	} else {
+ 		if (dmab->dev.dir == DMA_FROM_DEVICE)
+ 			return;
+@@ -625,9 +626,13 @@ static const struct snd_malloc_ops snd_dma_noncontig_ops = {
+  */
+ static void *snd_dma_noncoherent_alloc(struct snd_dma_buffer *dmab, size_t size)
+ {
+-	dmab->dev.need_sync = dma_need_sync(dmab->dev.dev, dmab->dev.dir);
+-	return dma_alloc_noncoherent(dmab->dev.dev, size, &dmab->addr,
+-				     dmab->dev.dir, DEFAULT_GFP);
++	void *p;
++
++	p = dma_alloc_noncoherent(dmab->dev.dev, size, &dmab->addr,
++				  dmab->dev.dir, DEFAULT_GFP);
++	if (p)
++		dmab->dev.need_sync = dma_need_sync(dmab->dev.dev, dmab->addr);
++	return p;
+ }
+ 
+ static void snd_dma_noncoherent_free(struct snd_dma_buffer *dmab)
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 1b46b599a5cff..3b6f2aacda459 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1611,6 +1611,7 @@ static const struct snd_pci_quirk probe_mask_list[] = {
+ 	/* forced codec slots */
+ 	SND_PCI_QUIRK(0x1043, 0x1262, "ASUS W5Fm", 0x103),
+ 	SND_PCI_QUIRK(0x1046, 0x1262, "ASUS W5F", 0x103),
++	SND_PCI_QUIRK(0x1558, 0x0351, "Schenker Dock 15", 0x105),
+ 	/* WinFast VP200 H (Teradici) user reported broken communication */
+ 	SND_PCI_QUIRK(0x3a21, 0x040d, "WinFast VP200 H", 0x101),
+ 	{}
+@@ -1794,8 +1795,6 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 
+ 	assign_position_fix(chip, check_position_fix(chip, position_fix[dev]));
+ 
+-	check_probe_mask(chip, dev);
+-
+ 	if (single_cmd < 0) /* allow fallback to single_cmd at errors */
+ 		chip->fallback_to_single_cmd = 1;
+ 	else /* explicitly set to single_cmd or not */
+@@ -1821,6 +1820,8 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 		chip->bus.core.needs_damn_long_delay = 1;
+ 	}
+ 
++	check_probe_mask(chip, dev);
++
+ 	err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops);
+ 	if (err < 0) {
+ 		dev_err(card->dev, "Error creating device [card]!\n");
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 18f04137f61cf..83b56c1ba3996 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -133,6 +133,22 @@ struct alc_spec {
+  * COEF access helper functions
+  */
+ 
++static void coef_mutex_lock(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	snd_hda_power_up_pm(codec);
++	mutex_lock(&spec->coef_mutex);
++}
++
++static void coef_mutex_unlock(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	mutex_unlock(&spec->coef_mutex);
++	snd_hda_power_down_pm(codec);
++}
++
+ static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				 unsigned int coef_idx)
+ {
+@@ -146,12 +162,11 @@ static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 			       unsigned int coef_idx)
+ {
+-	struct alc_spec *spec = codec->spec;
+ 	unsigned int val;
+ 
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	val = __alc_read_coefex_idx(codec, nid, coef_idx);
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ 	return val;
+ }
+ 
+@@ -168,11 +183,9 @@ static void __alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				 unsigned int coef_idx, unsigned int coef_val)
+ {
+-	struct alc_spec *spec = codec->spec;
+-
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	__alc_write_coefex_idx(codec, nid, coef_idx, coef_val);
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ }
+ 
+ #define alc_write_coef_idx(codec, coef_idx, coef_val) \
+@@ -193,11 +206,9 @@ static void alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+ 				  unsigned int coef_idx, unsigned int mask,
+ 				  unsigned int bits_set)
+ {
+-	struct alc_spec *spec = codec->spec;
+-
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	__alc_update_coefex_idx(codec, nid, coef_idx, mask, bits_set);
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ }
+ 
+ #define alc_update_coef_idx(codec, coef_idx, mask, bits_set)	\
+@@ -230,9 +241,7 @@ struct coef_fw {
+ static void alc_process_coef_fw(struct hda_codec *codec,
+ 				const struct coef_fw *fw)
+ {
+-	struct alc_spec *spec = codec->spec;
+-
+-	mutex_lock(&spec->coef_mutex);
++	coef_mutex_lock(codec);
+ 	for (; fw->nid; fw++) {
+ 		if (fw->mask == (unsigned short)-1)
+ 			__alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
+@@ -240,7 +249,7 @@ static void alc_process_coef_fw(struct hda_codec *codec,
+ 			__alc_update_coefex_idx(codec, fw->nid, fw->idx,
+ 						fw->mask, fw->val);
+ 	}
+-	mutex_unlock(&spec->coef_mutex);
++	coef_mutex_unlock(codec);
+ }
+ 
+ /*
+@@ -9013,6 +9022,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x383d, "Legion Y9000X 2019", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+ 	SND_PCI_QUIRK(0x17aa, 0x384a, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 6549e7fef3e32..c5ea3b115966b 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -38,10 +38,12 @@ static void tas2770_reset(struct tas2770_priv *tas2770)
+ 		gpiod_set_value_cansleep(tas2770->reset_gpio, 0);
+ 		msleep(20);
+ 		gpiod_set_value_cansleep(tas2770->reset_gpio, 1);
++		usleep_range(1000, 2000);
+ 	}
+ 
+ 	snd_soc_component_write(tas2770->component, TAS2770_SW_RST,
+ 		TAS2770_RST);
++	usleep_range(1000, 2000);
+ }
+ 
+ static int tas2770_set_bias_level(struct snd_soc_component *component,
+@@ -110,6 +112,7 @@ static int tas2770_codec_resume(struct snd_soc_component *component)
+ 
+ 	if (tas2770->sdz_gpio) {
+ 		gpiod_set_value_cansleep(tas2770->sdz_gpio, 1);
++		usleep_range(1000, 2000);
+ 	} else {
+ 		ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+ 						    TAS2770_PWR_CTRL_MASK,
+@@ -510,8 +513,10 @@ static int tas2770_codec_probe(struct snd_soc_component *component)
+ 
+ 	tas2770->component = component;
+ 
+-	if (tas2770->sdz_gpio)
++	if (tas2770->sdz_gpio) {
+ 		gpiod_set_value_cansleep(tas2770->sdz_gpio, 1);
++		usleep_range(1000, 2000);
++	}
+ 
+ 	tas2770_reset(tas2770);
+ 
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 6cb01a8e08fb6..738038f141964 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1448,7 +1448,8 @@ static int wm_adsp_buffer_parse_coeff(struct cs_dsp_coeff_ctl *cs_ctl)
+ 	int ret, i;
+ 
+ 	for (i = 0; i < 5; ++i) {
+-		ret = cs_dsp_coeff_read_ctrl(cs_ctl, &coeff_v1, sizeof(coeff_v1));
++		ret = cs_dsp_coeff_read_ctrl(cs_ctl, &coeff_v1,
++					     min(cs_ctl->len, sizeof(coeff_v1)));
+ 		if (ret < 0)
+ 			return ret;
+ 
+diff --git a/sound/soc/mediatek/Kconfig b/sound/soc/mediatek/Kconfig
+index 3b1ddea26a9ef..76f191ec7bf84 100644
+--- a/sound/soc/mediatek/Kconfig
++++ b/sound/soc/mediatek/Kconfig
+@@ -215,7 +215,7 @@ config SND_SOC_MT8195_MT6359_RT1019_RT5682
+ 
+ config SND_SOC_MT8195_MT6359_RT1011_RT5682
+ 	tristate "ASoC Audio driver for MT8195 with MT6359 RT1011 RT5682 codec"
+-	depends on I2C
++	depends on I2C && GPIOLIB
+ 	depends on SND_SOC_MT8195 && MTK_PMIC_WRAP
+ 	select SND_SOC_MT6359
+ 	select SND_SOC_RT1011
+diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
+index a59e9d20cb46b..4b1773c1fb95f 100644
+--- a/sound/soc/qcom/lpass-platform.c
++++ b/sound/soc/qcom/lpass-platform.c
+@@ -524,7 +524,7 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component,
+ 			return -EINVAL;
+ 		}
+ 
+-		ret = regmap_update_bits(map, reg_irqclr, val_irqclr, val_irqclr);
++		ret = regmap_write_bits(map, reg_irqclr, val_irqclr, val_irqclr);
+ 		if (ret) {
+ 			dev_err(soc_runtime->dev, "error writing to irqclear reg: %d\n", ret);
+ 			return ret;
+@@ -665,7 +665,7 @@ static irqreturn_t lpass_dma_interrupt_handler(
+ 	return -EINVAL;
+ 	}
+ 	if (interrupts & LPAIF_IRQ_PER(chan)) {
+-		rv = regmap_update_bits(map, reg, mask, (LPAIF_IRQ_PER(chan) | val));
++		rv = regmap_write_bits(map, reg, mask, (LPAIF_IRQ_PER(chan) | val));
+ 		if (rv) {
+ 			dev_err(soc_runtime->dev,
+ 				"error writing to irqclear reg: %d\n", rv);
+@@ -676,7 +676,7 @@ static irqreturn_t lpass_dma_interrupt_handler(
+ 	}
+ 
+ 	if (interrupts & LPAIF_IRQ_XRUN(chan)) {
+-		rv = regmap_update_bits(map, reg, mask, (LPAIF_IRQ_XRUN(chan) | val));
++		rv = regmap_write_bits(map, reg, mask, (LPAIF_IRQ_XRUN(chan) | val));
+ 		if (rv) {
+ 			dev_err(soc_runtime->dev,
+ 				"error writing to irqclear reg: %d\n", rv);
+@@ -688,7 +688,7 @@ static irqreturn_t lpass_dma_interrupt_handler(
+ 	}
+ 
+ 	if (interrupts & LPAIF_IRQ_ERR(chan)) {
+-		rv = regmap_update_bits(map, reg, mask, (LPAIF_IRQ_ERR(chan) | val));
++		rv = regmap_write_bits(map, reg, mask, (LPAIF_IRQ_ERR(chan) | val));
+ 		if (rv) {
+ 			dev_err(soc_runtime->dev,
+ 				"error writing to irqclear reg: %d\n", rv);
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index dc0e7c8d31f37..53457a0d466d3 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -308,7 +308,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	unsigned int sign_bit = mc->sign_bit;
+ 	unsigned int mask = (1 << fls(max)) - 1;
+ 	unsigned int invert = mc->invert;
+-	int err;
++	int err, ret;
+ 	bool type_2r = false;
+ 	unsigned int val2 = 0;
+ 	unsigned int val, val_mask;
+@@ -350,12 +350,18 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	err = snd_soc_component_update_bits(component, reg, val_mask, val);
+ 	if (err < 0)
+ 		return err;
++	ret = err;
+ 
+-	if (type_2r)
++	if (type_2r) {
+ 		err = snd_soc_component_update_bits(component, reg2, val_mask,
+-			val2);
++						    val2);
++		/* Don't discard any error code or drop change flag */
++		if (ret == 0 || err < 0) {
++			ret = err;
++		}
++	}
+ 
+-	return err;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_put_volsw);
+ 
+@@ -421,6 +427,7 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 	int min = mc->min;
+ 	unsigned int mask = (1U << (fls(min + max) - 1)) - 1;
+ 	int err = 0;
++	int ret;
+ 	unsigned int val, val_mask;
+ 
+ 	val = ucontrol->value.integer.value[0];
+@@ -437,6 +444,7 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 	err = snd_soc_component_update_bits(component, reg, val_mask, val);
+ 	if (err < 0)
+ 		return err;
++	ret = err;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc)) {
+ 		unsigned int val2;
+@@ -447,6 +455,11 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 
+ 		err = snd_soc_component_update_bits(component, reg2, val_mask,
+ 			val2);
++
++		/* Don't discard any error code or drop change flag */
++		if (ret == 0 || err < 0) {
++			ret = err;
++		}
+ 	}
+ 	return err;
+ }
+@@ -506,7 +519,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 	unsigned int mask = (1 << fls(max)) - 1;
+ 	unsigned int invert = mc->invert;
+ 	unsigned int val, val_mask;
+-	int ret;
++	int err, ret;
+ 
+ 	if (invert)
+ 		val = (max - ucontrol->value.integer.value[0]) & mask;
+@@ -515,9 +528,10 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 	val_mask = mask << shift;
+ 	val = val << shift;
+ 
+-	ret = snd_soc_component_update_bits(component, reg, val_mask, val);
+-	if (ret < 0)
+-		return ret;
++	err = snd_soc_component_update_bits(component, reg, val_mask, val);
++	if (err < 0)
++		return err;
++	ret = err;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc)) {
+ 		if (invert)
+@@ -527,8 +541,12 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ 		val_mask = mask << shift;
+ 		val = val << shift;
+ 
+-		ret = snd_soc_component_update_bits(component, rreg, val_mask,
++		err = snd_soc_component_update_bits(component, rreg, val_mask,
+ 			val);
++		/* Don't discard any error code or drop change flag */
++		if (ret == 0 || err < 0) {
++			ret = err;
++		}
+ 	}
+ 
+ 	return ret;
+@@ -877,6 +895,7 @@ int snd_soc_put_xr_sx(struct snd_kcontrol *kcontrol,
+ 	unsigned long mask = (1UL<<mc->nbits)-1;
+ 	long max = mc->max;
+ 	long val = ucontrol->value.integer.value[0];
++	int ret = 0;
+ 	unsigned int i;
+ 
+ 	if (val < mc->min || val > mc->max)
+@@ -891,9 +910,11 @@ int snd_soc_put_xr_sx(struct snd_kcontrol *kcontrol,
+ 							regmask, regval);
+ 		if (err < 0)
+ 			return err;
++		if (err > 0)
++			ret = err;
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_put_xr_sx);
+ 
+diff --git a/sound/usb/implicit.c b/sound/usb/implicit.c
+index 70319c822c10b..2d444ec742029 100644
+--- a/sound/usb/implicit.c
++++ b/sound/usb/implicit.c
+@@ -47,13 +47,13 @@ struct snd_usb_implicit_fb_match {
+ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = {
+ 	/* Generic matching */
+ 	IMPLICIT_FB_GENERIC_DEV(0x0499, 0x1509), /* Steinberg UR22 */
+-	IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2080), /* M-Audio FastTrack Ultra */
+-	IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2081), /* M-Audio FastTrack Ultra */
+ 	IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2030), /* M-Audio Fast Track C400 */
+ 	IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2031), /* M-Audio Fast Track C600 */
+ 
+ 	/* Fixed EP */
+ 	/* FIXME: check the availability of generic matching */
++	IMPLICIT_FB_FIXED_DEV(0x0763, 0x2080, 0x81, 2), /* M-Audio FastTrack Ultra */
++	IMPLICIT_FB_FIXED_DEV(0x0763, 0x2081, 0x81, 2), /* M-Audio FastTrack Ultra */
+ 	IMPLICIT_FB_FIXED_DEV(0x2466, 0x8010, 0x81, 2), /* Fractal Audio Axe-Fx III */
+ 	IMPLICIT_FB_FIXED_DEV(0x31e9, 0x0001, 0x81, 2), /* Solid State Logic SSL2 */
+ 	IMPLICIT_FB_FIXED_DEV(0x31e9, 0x0002, 0x81, 2), /* Solid State Logic SSL2+ */
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 2bd28474e8fae..258165e7c8074 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -3678,17 +3678,14 @@ static int restore_mixer_value(struct usb_mixer_elem_list *list)
+ 				err = snd_usb_set_cur_mix_value(cval, c + 1, idx,
+ 							cval->cache_val[idx]);
+ 				if (err < 0)
+-					return err;
++					break;
+ 			}
+ 			idx++;
+ 		}
+ 	} else {
+ 		/* master */
+-		if (cval->cached) {
+-			err = snd_usb_set_cur_mix_value(cval, 0, 0, *cval->cache_val);
+-			if (err < 0)
+-				return err;
+-		}
++		if (cval->cached)
++			snd_usb_set_cur_mix_value(cval, 0, 0, *cval->cache_val);
+ 	}
+ 
+ 	return 0;
+diff --git a/tools/lib/subcmd/subcmd-util.h b/tools/lib/subcmd/subcmd-util.h
+index 794a375dad360..b2aec04fce8f6 100644
+--- a/tools/lib/subcmd/subcmd-util.h
++++ b/tools/lib/subcmd/subcmd-util.h
+@@ -50,15 +50,8 @@ static NORETURN inline void die(const char *err, ...)
+ static inline void *xrealloc(void *ptr, size_t size)
+ {
+ 	void *ret = realloc(ptr, size);
+-	if (!ret && !size)
+-		ret = realloc(ptr, 1);
+-	if (!ret) {
+-		ret = realloc(ptr, size);
+-		if (!ret && !size)
+-			ret = realloc(ptr, 1);
+-		if (!ret)
+-			die("Out of memory, realloc failed");
+-	}
++	if (!ret)
++		die("Out of memory, realloc failed");
+ 	return ret;
+ }
+ 
+diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
+index fbb3c4057c302..71710a1da4472 100644
+--- a/tools/perf/util/bpf-loader.c
++++ b/tools/perf/util/bpf-loader.c
+@@ -1214,9 +1214,10 @@ bpf__obj_config_map(struct bpf_object *obj,
+ 	pr_debug("ERROR: Invalid map config option '%s'\n", map_opt);
+ 	err = -BPF_LOADER_ERRNO__OBJCONF_MAP_OPT;
+ out:
+-	free(map_name);
+ 	if (!err)
+ 		*key_scan_pos += strlen(map_opt);
++
++	free(map_name);
+ 	return err;
+ }
+ 
+diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
+index 66095568bf327..fae843bf2f0eb 100644
+--- a/tools/testing/kunit/kunit_kernel.py
++++ b/tools/testing/kunit/kunit_kernel.py
+@@ -6,6 +6,7 @@
+ # Author: Felix Guo <felixguoxiuping@gmail.com>
+ # Author: Brendan Higgins <brendanhiggins@google.com>
+ 
++import importlib.abc
+ import importlib.util
+ import logging
+ import subprocess
+diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c b/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
+index 79f6bd1e50d60..f6933b06daf88 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
+@@ -8,6 +8,7 @@
+ #include "test_ksyms_btf_null_check.skel.h"
+ #include "test_ksyms_weak.skel.h"
+ #include "test_ksyms_weak.lskel.h"
++#include "test_ksyms_btf_write_check.skel.h"
+ 
+ static int duration;
+ 
+@@ -137,6 +138,16 @@ cleanup:
+ 	test_ksyms_weak_lskel__destroy(skel);
+ }
+ 
++static void test_write_check(void)
++{
++	struct test_ksyms_btf_write_check *skel;
++
++	skel = test_ksyms_btf_write_check__open_and_load();
++	ASSERT_ERR_PTR(skel, "unexpected load of a prog writing to ksym memory\n");
++
++	test_ksyms_btf_write_check__destroy(skel);
++}
++
+ void test_ksyms_btf(void)
+ {
+ 	int percpu_datasec;
+@@ -167,4 +178,7 @@ void test_ksyms_btf(void)
+ 
+ 	if (test__start_subtest("weak_ksyms_lskel"))
+ 		test_weak_syms_lskel();
++
++	if (test__start_subtest("write_check"))
++		test_write_check();
+ }
+diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c b/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c
+new file mode 100644
+index 0000000000000..2180c41cd890f
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c
+@@ -0,0 +1,29 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2021 Google */
++
++#include "vmlinux.h"
++
++#include <bpf/bpf_helpers.h>
++
++extern const int bpf_prog_active __ksym; /* int type global var. */
++
++SEC("raw_tp/sys_enter")
++int handler(const void *ctx)
++{
++	int *active;
++	__u32 cpu;
++
++	cpu = bpf_get_smp_processor_id();
++	active = (int *)bpf_per_cpu_ptr(&bpf_prog_active, cpu);
++	if (active) {
++		/* Kernel memory obtained from bpf_{per,this}_cpu_ptr
++		 * is read-only, should _not_ pass verification.
++		 */
++		/* WRITE_ONCE */
++		*(volatile int *)active = -1;
++	}
++
++	return 0;
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
+index 076cf4325f783..cd4582129c7d6 100644
+--- a/tools/testing/selftests/clone3/clone3.c
++++ b/tools/testing/selftests/clone3/clone3.c
+@@ -126,8 +126,6 @@ static void test_clone3(uint64_t flags, size_t size, int expected,
+ 
+ int main(int argc, char *argv[])
+ {
+-	pid_t pid;
+-
+ 	uid_t uid = getuid();
+ 
+ 	ksft_print_header();
+diff --git a/tools/testing/selftests/exec/Makefile b/tools/testing/selftests/exec/Makefile
+index 12c5e27d32c16..2d7fca446c7f7 100644
+--- a/tools/testing/selftests/exec/Makefile
++++ b/tools/testing/selftests/exec/Makefile
+@@ -3,8 +3,8 @@ CFLAGS = -Wall
+ CFLAGS += -Wno-nonnull
+ CFLAGS += -D_GNU_SOURCE
+ 
+-TEST_PROGS := binfmt_script non-regular
+-TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_16777216
++TEST_PROGS := binfmt_script
++TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_16777216 non-regular
+ TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir
+ # Makefile is a run-time dependency, since it's accessed by the execveat test
+ TEST_FILES := Makefile
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index 79a182cfa43ad..78e59620d28de 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -875,7 +875,8 @@ static void __timeout_handler(int sig, siginfo_t *info, void *ucontext)
+ 	}
+ 
+ 	t->timed_out = true;
+-	kill(t->pid, SIGKILL);
++	// signal process group
++	kill(-(t->pid), SIGKILL);
+ }
+ 
+ void __wait_for_test(struct __test_metadata *t)
+@@ -985,6 +986,7 @@ void __run_test(struct __fixture_metadata *f,
+ 		ksft_print_msg("ERROR SPAWNING TEST CHILD\n");
+ 		t->passed = 0;
+ 	} else if (t->pid == 0) {
++		setpgrp();
+ 		t->fn(t, variant);
+ 		if (t->skip)
+ 			_exit(255);
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index 4fdfb42aeddba..d2e0b9091fdca 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -75,7 +75,6 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
+ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
+ TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
+ TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+-TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
+ TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
+ TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
+ TEST_GEN_PROGS_x86_64 += demand_paging_test
+diff --git a/tools/testing/selftests/mincore/mincore_selftest.c b/tools/testing/selftests/mincore/mincore_selftest.c
+index e54106643337b..4c88238fc8f05 100644
+--- a/tools/testing/selftests/mincore/mincore_selftest.c
++++ b/tools/testing/selftests/mincore/mincore_selftest.c
+@@ -207,15 +207,21 @@ TEST(check_file_mmap)
+ 
+ 	errno = 0;
+ 	fd = open(".", O_TMPFILE | O_RDWR, 0600);
+-	ASSERT_NE(-1, fd) {
+-		TH_LOG("Can't create temporary file: %s",
+-			strerror(errno));
++	if (fd < 0) {
++		ASSERT_EQ(errno, EOPNOTSUPP) {
++			TH_LOG("Can't create temporary file: %s",
++			       strerror(errno));
++		}
++		SKIP(goto out_free, "O_TMPFILE not supported by filesystem.");
+ 	}
+ 	errno = 0;
+ 	retval = fallocate(fd, 0, 0, FILE_SIZE);
+-	ASSERT_EQ(0, retval) {
+-		TH_LOG("Error allocating space for the temporary file: %s",
+-			strerror(errno));
++	if (retval) {
++		ASSERT_EQ(errno, EOPNOTSUPP) {
++			TH_LOG("Error allocating space for the temporary file: %s",
++			       strerror(errno));
++		}
++		SKIP(goto out_close, "fallocate not supported by filesystem.");
+ 	}
+ 
+ 	/*
+@@ -271,7 +277,9 @@ TEST(check_file_mmap)
+ 	}
+ 
+ 	munmap(addr, FILE_SIZE);
++out_close:
+ 	close(fd);
++out_free:
+ 	free(vec);
+ }
+ 
+diff --git a/tools/testing/selftests/mount_setattr/mount_setattr_test.c b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+index f31205f04ee05..8c5fea68ae677 100644
+--- a/tools/testing/selftests/mount_setattr/mount_setattr_test.c
++++ b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+@@ -1236,7 +1236,7 @@ static int get_userns_fd(unsigned long nsid, unsigned long hostid, unsigned long
+ }
+ 
+ /**
+- * Validate that an attached mount in our mount namespace can be idmapped.
++ * Validate that an attached mount in our mount namespace cannot be idmapped.
+  * (The kernel enforces that the mount's mount namespace and the caller's mount
+  *  namespace match.)
+  */
+@@ -1259,7 +1259,7 @@ TEST_F(mount_setattr_idmapped, attached_mount_inside_current_mount_namespace)
+ 
+ 	attr.userns_fd	= get_userns_fd(0, 10000, 10000);
+ 	ASSERT_GE(attr.userns_fd, 0);
+-	ASSERT_EQ(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0);
++	ASSERT_NE(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0);
+ 	ASSERT_EQ(close(attr.userns_fd), 0);
+ 	ASSERT_EQ(close(open_tree_fd), 0);
+ }
+diff --git a/tools/testing/selftests/netfilter/nft_concat_range.sh b/tools/testing/selftests/netfilter/nft_concat_range.sh
+index df322e47a54fb..b35010cc7f6ae 100755
+--- a/tools/testing/selftests/netfilter/nft_concat_range.sh
++++ b/tools/testing/selftests/netfilter/nft_concat_range.sh
+@@ -1601,4 +1601,4 @@ for name in ${TESTS}; do
+ 	done
+ done
+ 
+-[ ${passed} -eq 0 ] && exit ${KSELFTEST_SKIP}
++[ ${passed} -eq 0 ] && exit ${KSELFTEST_SKIP} || exit 0
+diff --git a/tools/testing/selftests/netfilter/nft_fib.sh b/tools/testing/selftests/netfilter/nft_fib.sh
+index 6caf6ac8c285f..695a1958723f5 100755
+--- a/tools/testing/selftests/netfilter/nft_fib.sh
++++ b/tools/testing/selftests/netfilter/nft_fib.sh
+@@ -174,6 +174,7 @@ test_ping() {
+ ip netns exec ${nsrouter} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+ ip netns exec ${nsrouter} sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
+ ip netns exec ${nsrouter} sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
++ip netns exec ${nsrouter} sysctl net.ipv4.conf.veth0.rp_filter=0 > /dev/null
+ 
+ sleep 3
+ 
+diff --git a/tools/testing/selftests/netfilter/nft_zones_many.sh b/tools/testing/selftests/netfilter/nft_zones_many.sh
+index 04633119b29a0..5a8db0b48928f 100755
+--- a/tools/testing/selftests/netfilter/nft_zones_many.sh
++++ b/tools/testing/selftests/netfilter/nft_zones_many.sh
+@@ -9,7 +9,7 @@ ns="ns-$sfx"
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
+ 
+-zones=20000
++zones=2000
+ have_ct_tool=0
+ ret=0
+ 
+@@ -75,10 +75,10 @@ EOF
+ 
+ 	while [ $i -lt $max_zones ]; do
+ 		local start=$(date +%s%3N)
+-		i=$((i + 10000))
++		i=$((i + 1000))
+ 		j=$((j + 1))
+ 		# nft rule in output places each packet in a different zone.
+-		dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" socat STDIN UDP:127.0.0.1:12345,sourceport=12345
++		dd if=/dev/zero of=/dev/stdout bs=8k count=1000 2>/dev/null | ip netns exec "$ns" socat STDIN UDP:127.0.0.1:12345,sourceport=12345
+ 		if [ $? -ne 0 ] ;then
+ 			ret=1
+ 			break
+@@ -86,7 +86,7 @@ EOF
+ 
+ 		stop=$(date +%s%3N)
+ 		local duration=$((stop-start))
+-		echo "PASS: added 10000 entries in $duration ms (now $i total, loop $j)"
++		echo "PASS: added 1000 entries in $duration ms (now $i total, loop $j)"
+ 	done
+ 
+ 	if [ $have_ct_tool -eq 1 ]; then
+@@ -128,11 +128,11 @@ test_conntrack_tool() {
+ 			break
+ 		fi
+ 
+-		if [ $((i%10000)) -eq 0 ];then
++		if [ $((i%1000)) -eq 0 ];then
+ 			stop=$(date +%s%3N)
+ 
+ 			local duration=$((stop-start))
+-			echo "PASS: added 10000 entries in $duration ms (now $i total)"
++			echo "PASS: added 1000 entries in $duration ms (now $i total)"
+ 			start=$stop
+ 		fi
+ 	done
+diff --git a/tools/testing/selftests/openat2/Makefile b/tools/testing/selftests/openat2/Makefile
+index 4b93b1417b862..843ba56d8e49e 100644
+--- a/tools/testing/selftests/openat2/Makefile
++++ b/tools/testing/selftests/openat2/Makefile
+@@ -5,4 +5,4 @@ TEST_GEN_PROGS := openat2_test resolve_test rename_attack_test
+ 
+ include ../lib.mk
+ 
+-$(TEST_GEN_PROGS): helpers.c
++$(TEST_GEN_PROGS): helpers.c helpers.h
+diff --git a/tools/testing/selftests/openat2/helpers.h b/tools/testing/selftests/openat2/helpers.h
+index a6ea27344db2d..7056340b9339e 100644
+--- a/tools/testing/selftests/openat2/helpers.h
++++ b/tools/testing/selftests/openat2/helpers.h
+@@ -9,6 +9,7 @@
+ 
+ #define _GNU_SOURCE
+ #include <stdint.h>
++#include <stdbool.h>
+ #include <errno.h>
+ #include <linux/types.h>
+ #include "../kselftest.h"
+@@ -62,11 +63,12 @@ bool needs_openat2(const struct open_how *how);
+ 					(similar to chroot(2)). */
+ #endif /* RESOLVE_IN_ROOT */
+ 
+-#define E_func(func, ...)						\
+-	do {								\
+-		if (func(__VA_ARGS__) < 0)				\
+-			ksft_exit_fail_msg("%s:%d %s failed\n", \
+-					   __FILE__, __LINE__, #func);\
++#define E_func(func, ...)						      \
++	do {								      \
++		errno = 0;						      \
++		if (func(__VA_ARGS__) < 0)				      \
++			ksft_exit_fail_msg("%s:%d %s failed - errno:%d\n",    \
++					   __FILE__, __LINE__, #func, errno); \
+ 	} while (0)
+ 
+ #define E_asprintf(...)		E_func(asprintf,	__VA_ARGS__)
+diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
+index 1bddbe934204c..7fb902099de45 100644
+--- a/tools/testing/selftests/openat2/openat2_test.c
++++ b/tools/testing/selftests/openat2/openat2_test.c
+@@ -259,6 +259,16 @@ void test_openat2_flags(void)
+ 		unlink(path);
+ 
+ 		fd = sys_openat2(AT_FDCWD, path, &test->how);
++		if (fd < 0 && fd == -EOPNOTSUPP) {
++			/*
++			 * Skip the testcase if it failed because not supported
++			 * by FS. (e.g. a valid O_TMPFILE combination on NFS)
++			 */
++			ksft_test_result_skip("openat2 with %s fails with %d (%s)\n",
++					      test->name, fd, strerror(-fd));
++			goto next;
++		}
++
+ 		if (test->err >= 0)
+ 			failed = (fd < 0);
+ 		else
+@@ -303,7 +313,7 @@ skip:
+ 		else
+ 			resultfn("openat2 with %s fails with %d (%s)\n",
+ 				 test->name, test->err, strerror(-test->err));
+-
++next:
+ 		free(fdpath);
+ 		fflush(stdout);
+ 	}
+diff --git a/tools/testing/selftests/pidfd/pidfd.h b/tools/testing/selftests/pidfd/pidfd.h
+index 01f8d3c0cf2cb..6922d6417e1cf 100644
+--- a/tools/testing/selftests/pidfd/pidfd.h
++++ b/tools/testing/selftests/pidfd/pidfd.h
+@@ -68,7 +68,7 @@
+ #define PIDFD_SKIP 3
+ #define PIDFD_XFAIL 4
+ 
+-int wait_for_pid(pid_t pid)
++static inline int wait_for_pid(pid_t pid)
+ {
+ 	int status, ret;
+ 
+@@ -78,13 +78,20 @@ again:
+ 		if (errno == EINTR)
+ 			goto again;
+ 
++		ksft_print_msg("waitpid returned -1, errno=%d\n", errno);
+ 		return -1;
+ 	}
+ 
+-	if (!WIFEXITED(status))
++	if (!WIFEXITED(status)) {
++		ksft_print_msg(
++		       "waitpid !WIFEXITED, WIFSIGNALED=%d, WTERMSIG=%d\n",
++		       WIFSIGNALED(status), WTERMSIG(status));
+ 		return -1;
++	}
+ 
+-	return WEXITSTATUS(status);
++	ret = WEXITSTATUS(status);
++	ksft_print_msg("waitpid WEXITSTATUS=%d\n", ret);
++	return ret;
+ }
+ 
+ static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
+diff --git a/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c b/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
+index 22558524f71c3..3fd8e903118f5 100644
+--- a/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
++++ b/tools/testing/selftests/pidfd/pidfd_fdinfo_test.c
+@@ -12,6 +12,7 @@
+ #include <string.h>
+ #include <syscall.h>
+ #include <sys/wait.h>
++#include <sys/mman.h>
+ 
+ #include "pidfd.h"
+ #include "../kselftest.h"
+@@ -80,7 +81,10 @@ static inline int error_check(struct error *err, const char *test_name)
+ 	return err->code;
+ }
+ 
++#define CHILD_STACK_SIZE 8192
++
+ struct child {
++	char *stack;
+ 	pid_t pid;
+ 	int   fd;
+ };
+@@ -89,17 +93,22 @@ static struct child clone_newns(int (*fn)(void *), void *args,
+ 				struct error *err)
+ {
+ 	static int flags = CLONE_PIDFD | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;
+-	size_t stack_size = 1024;
+-	char *stack[1024] = { 0 };
+ 	struct child ret;
+ 
+ 	if (!(flags & CLONE_NEWUSER) && geteuid() != 0)
+ 		flags |= CLONE_NEWUSER;
+ 
++	ret.stack = mmap(NULL, CHILD_STACK_SIZE, PROT_READ | PROT_WRITE,
++			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
++	if (ret.stack == MAP_FAILED) {
++		error_set(err, -1, "mmap of stack failed (errno %d)", errno);
++		return ret;
++	}
++
+ #ifdef __ia64__
+-	ret.pid = __clone2(fn, stack, stack_size, flags, args, &ret.fd);
++	ret.pid = __clone2(fn, ret.stack, CHILD_STACK_SIZE, flags, args, &ret.fd);
+ #else
+-	ret.pid = clone(fn, stack + stack_size, flags, args, &ret.fd);
++	ret.pid = clone(fn, ret.stack + CHILD_STACK_SIZE, flags, args, &ret.fd);
+ #endif
+ 
+ 	if (ret.pid < 0) {
+@@ -129,6 +138,11 @@ static inline int child_join(struct child *child, struct error *err)
+ 	else if (r > 0)
+ 		error_set(err, r, "child %d reported: %d", child->pid, r);
+ 
++	if (munmap(child->stack, CHILD_STACK_SIZE)) {
++		error_set(err, -1, "munmap of child stack failed (errno %d)", errno);
++		r = -1;
++	}
++
+ 	return r;
+ }
+ 
+diff --git a/tools/testing/selftests/pidfd/pidfd_test.c b/tools/testing/selftests/pidfd/pidfd_test.c
+index 529eb700ac26a..9a2d64901d591 100644
+--- a/tools/testing/selftests/pidfd/pidfd_test.c
++++ b/tools/testing/selftests/pidfd/pidfd_test.c
+@@ -441,7 +441,6 @@ static void test_pidfd_poll_exec(int use_waitpid)
+ {
+ 	int pid, pidfd = 0;
+ 	int status, ret;
+-	pthread_t t1;
+ 	time_t prog_start = time(NULL);
+ 	const char *test_name = "pidfd_poll check for premature notification on child thread exec";
+ 
+@@ -500,13 +499,14 @@ static int child_poll_leader_exit_test(void *args)
+ 	 */
+ 	*child_exit_secs = time(NULL);
+ 	syscall(SYS_exit, 0);
++	/* Never reached, but appeases compiler thinking we should return. */
++	exit(0);
+ }
+ 
+ static void test_pidfd_poll_leader_exit(int use_waitpid)
+ {
+ 	int pid, pidfd = 0;
+-	int status, ret;
+-	time_t prog_start = time(NULL);
++	int status, ret = 0;
+ 	const char *test_name = "pidfd_poll check for premature notification on non-empty"
+ 				"group leader exit";
+ 
+diff --git a/tools/testing/selftests/pidfd/pidfd_wait.c b/tools/testing/selftests/pidfd/pidfd_wait.c
+index be2943f072f60..17999e082aa71 100644
+--- a/tools/testing/selftests/pidfd/pidfd_wait.c
++++ b/tools/testing/selftests/pidfd/pidfd_wait.c
+@@ -39,7 +39,7 @@ static int sys_waitid(int which, pid_t pid, siginfo_t *info, int options,
+ 
+ TEST(wait_simple)
+ {
+-	int pidfd = -1, status = 0;
++	int pidfd = -1;
+ 	pid_t parent_tid = -1;
+ 	struct clone_args args = {
+ 		.parent_tid = ptr_to_u64(&parent_tid),
+@@ -47,7 +47,6 @@ TEST(wait_simple)
+ 		.flags = CLONE_PIDFD | CLONE_PARENT_SETTID,
+ 		.exit_signal = SIGCHLD,
+ 	};
+-	int ret;
+ 	pid_t pid;
+ 	siginfo_t info = {
+ 		.si_signo = 0,
+@@ -88,7 +87,7 @@ TEST(wait_simple)
+ 
+ TEST(wait_states)
+ {
+-	int pidfd = -1, status = 0;
++	int pidfd = -1;
+ 	pid_t parent_tid = -1;
+ 	struct clone_args args = {
+ 		.parent_tid = ptr_to_u64(&parent_tid),
+diff --git a/tools/testing/selftests/rtc/settings b/tools/testing/selftests/rtc/settings
+index ba4d85f74cd6b..a953c96aa16e1 100644
+--- a/tools/testing/selftests/rtc/settings
++++ b/tools/testing/selftests/rtc/settings
+@@ -1 +1 @@
+-timeout=90
++timeout=180
+diff --git a/tools/testing/selftests/vDSO/vdso_test_abi.c b/tools/testing/selftests/vDSO/vdso_test_abi.c
+index 3d603f1394af4..883ca85424bc5 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_abi.c
++++ b/tools/testing/selftests/vDSO/vdso_test_abi.c
+@@ -33,110 +33,114 @@ typedef long (*vdso_clock_gettime_t)(clockid_t clk_id, struct timespec *ts);
+ typedef long (*vdso_clock_getres_t)(clockid_t clk_id, struct timespec *ts);
+ typedef time_t (*vdso_time_t)(time_t *t);
+ 
+-static int vdso_test_gettimeofday(void)
++#define VDSO_TEST_PASS_MSG()	"\n%s(): PASS\n", __func__
++#define VDSO_TEST_FAIL_MSG(x)	"\n%s(): %s FAIL\n", __func__, x
++#define VDSO_TEST_SKIP_MSG(x)	"\n%s(): SKIP: Could not find %s\n", __func__, x
++
++static void vdso_test_gettimeofday(void)
+ {
+ 	/* Find gettimeofday. */
+ 	vdso_gettimeofday_t vdso_gettimeofday =
+ 		(vdso_gettimeofday_t)vdso_sym(version, name[0]);
+ 
+ 	if (!vdso_gettimeofday) {
+-		printf("Could not find %s\n", name[0]);
+-		return KSFT_SKIP;
++		ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[0]));
++		return;
+ 	}
+ 
+ 	struct timeval tv;
+ 	long ret = vdso_gettimeofday(&tv, 0);
+ 
+ 	if (ret == 0) {
+-		printf("The time is %lld.%06lld\n",
+-		       (long long)tv.tv_sec, (long long)tv.tv_usec);
++		ksft_print_msg("The time is %lld.%06lld\n",
++			       (long long)tv.tv_sec, (long long)tv.tv_usec);
++		ksft_test_result_pass(VDSO_TEST_PASS_MSG());
+ 	} else {
+-		printf("%s failed\n", name[0]);
+-		return KSFT_FAIL;
++		ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[0]));
+ 	}
+-
+-	return KSFT_PASS;
+ }
+ 
+-static int vdso_test_clock_gettime(clockid_t clk_id)
++static void vdso_test_clock_gettime(clockid_t clk_id)
+ {
+ 	/* Find clock_gettime. */
+ 	vdso_clock_gettime_t vdso_clock_gettime =
+ 		(vdso_clock_gettime_t)vdso_sym(version, name[1]);
+ 
+ 	if (!vdso_clock_gettime) {
+-		printf("Could not find %s\n", name[1]);
+-		return KSFT_SKIP;
++		ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[1]));
++		return;
+ 	}
+ 
+ 	struct timespec ts;
+ 	long ret = vdso_clock_gettime(clk_id, &ts);
+ 
+ 	if (ret == 0) {
+-		printf("The time is %lld.%06lld\n",
+-		       (long long)ts.tv_sec, (long long)ts.tv_nsec);
++		ksft_print_msg("The time is %lld.%06lld\n",
++			       (long long)ts.tv_sec, (long long)ts.tv_nsec);
++		ksft_test_result_pass(VDSO_TEST_PASS_MSG());
+ 	} else {
+-		printf("%s failed\n", name[1]);
+-		return KSFT_FAIL;
++		ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[1]));
+ 	}
+-
+-	return KSFT_PASS;
+ }
+ 
+-static int vdso_test_time(void)
++static void vdso_test_time(void)
+ {
+ 	/* Find time. */
+ 	vdso_time_t vdso_time =
+ 		(vdso_time_t)vdso_sym(version, name[2]);
+ 
+ 	if (!vdso_time) {
+-		printf("Could not find %s\n", name[2]);
+-		return KSFT_SKIP;
++		ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[2]));
++		return;
+ 	}
+ 
+ 	long ret = vdso_time(NULL);
+ 
+ 	if (ret > 0) {
+-		printf("The time in hours since January 1, 1970 is %lld\n",
++		ksft_print_msg("The time in hours since January 1, 1970 is %lld\n",
+ 				(long long)(ret / 3600));
++		ksft_test_result_pass(VDSO_TEST_PASS_MSG());
+ 	} else {
+-		printf("%s failed\n", name[2]);
+-		return KSFT_FAIL;
++		ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[2]));
+ 	}
+-
+-	return KSFT_PASS;
+ }
+ 
+-static int vdso_test_clock_getres(clockid_t clk_id)
++static void vdso_test_clock_getres(clockid_t clk_id)
+ {
++	int clock_getres_fail = 0;
++
+ 	/* Find clock_getres. */
+ 	vdso_clock_getres_t vdso_clock_getres =
+ 		(vdso_clock_getres_t)vdso_sym(version, name[3]);
+ 
+ 	if (!vdso_clock_getres) {
+-		printf("Could not find %s\n", name[3]);
+-		return KSFT_SKIP;
++		ksft_test_result_skip(VDSO_TEST_SKIP_MSG(name[3]));
++		return;
+ 	}
+ 
+ 	struct timespec ts, sys_ts;
+ 	long ret = vdso_clock_getres(clk_id, &ts);
+ 
+ 	if (ret == 0) {
+-		printf("The resolution is %lld %lld\n",
+-		       (long long)ts.tv_sec, (long long)ts.tv_nsec);
++		ksft_print_msg("The vdso resolution is %lld %lld\n",
++			       (long long)ts.tv_sec, (long long)ts.tv_nsec);
+ 	} else {
+-		printf("%s failed\n", name[3]);
+-		return KSFT_FAIL;
++		clock_getres_fail++;
+ 	}
+ 
+ 	ret = syscall(SYS_clock_getres, clk_id, &sys_ts);
+ 
+-	if ((sys_ts.tv_sec != ts.tv_sec) || (sys_ts.tv_nsec != ts.tv_nsec)) {
+-		printf("%s failed\n", name[3]);
+-		return KSFT_FAIL;
+-	}
++	ksft_print_msg("The syscall resolution is %lld %lld\n",
++			(long long)sys_ts.tv_sec, (long long)sys_ts.tv_nsec);
+ 
+-	return KSFT_PASS;
++	if ((sys_ts.tv_sec != ts.tv_sec) || (sys_ts.tv_nsec != ts.tv_nsec))
++		clock_getres_fail++;
++
++	if (clock_getres_fail > 0) {
++		ksft_test_result_fail(VDSO_TEST_FAIL_MSG(name[3]));
++	} else {
++		ksft_test_result_pass(VDSO_TEST_PASS_MSG());
++	}
+ }
+ 
+ const char *vdso_clock_name[12] = {
+@@ -158,36 +162,23 @@ const char *vdso_clock_name[12] = {
+  * This function calls vdso_test_clock_gettime and vdso_test_clock_getres
+  * with different values for clock_id.
+  */
+-static inline int vdso_test_clock(clockid_t clock_id)
++static inline void vdso_test_clock(clockid_t clock_id)
+ {
+-	int ret0, ret1;
+-
+-	ret0 = vdso_test_clock_gettime(clock_id);
+-	/* A skipped test is considered passed */
+-	if (ret0 == KSFT_SKIP)
+-		ret0 = KSFT_PASS;
+-
+-	ret1 = vdso_test_clock_getres(clock_id);
+-	/* A skipped test is considered passed */
+-	if (ret1 == KSFT_SKIP)
+-		ret1 = KSFT_PASS;
++	ksft_print_msg("\nclock_id: %s\n", vdso_clock_name[clock_id]);
+ 
+-	ret0 += ret1;
++	vdso_test_clock_gettime(clock_id);
+ 
+-	printf("clock_id: %s", vdso_clock_name[clock_id]);
+-
+-	if (ret0 > 0)
+-		printf(" [FAIL]\n");
+-	else
+-		printf(" [PASS]\n");
+-
+-	return ret0;
++	vdso_test_clock_getres(clock_id);
+ }
+ 
++#define VDSO_TEST_PLAN	16
++
+ int main(int argc, char **argv)
+ {
+ 	unsigned long sysinfo_ehdr = getauxval(AT_SYSINFO_EHDR);
+-	int ret;
++
++	ksft_print_header();
++	ksft_set_plan(VDSO_TEST_PLAN);
+ 
+ 	if (!sysinfo_ehdr) {
+ 		printf("AT_SYSINFO_EHDR is not present!\n");
+@@ -201,44 +192,42 @@ int main(int argc, char **argv)
+ 
+ 	vdso_init_from_sysinfo_ehdr(getauxval(AT_SYSINFO_EHDR));
+ 
+-	ret = vdso_test_gettimeofday();
++	vdso_test_gettimeofday();
+ 
+ #if _POSIX_TIMERS > 0
+ 
+ #ifdef CLOCK_REALTIME
+-	ret += vdso_test_clock(CLOCK_REALTIME);
++	vdso_test_clock(CLOCK_REALTIME);
+ #endif
+ 
+ #ifdef CLOCK_BOOTTIME
+-	ret += vdso_test_clock(CLOCK_BOOTTIME);
++	vdso_test_clock(CLOCK_BOOTTIME);
+ #endif
+ 
+ #ifdef CLOCK_TAI
+-	ret += vdso_test_clock(CLOCK_TAI);
++	vdso_test_clock(CLOCK_TAI);
+ #endif
+ 
+ #ifdef CLOCK_REALTIME_COARSE
+-	ret += vdso_test_clock(CLOCK_REALTIME_COARSE);
++	vdso_test_clock(CLOCK_REALTIME_COARSE);
+ #endif
+ 
+ #ifdef CLOCK_MONOTONIC
+-	ret += vdso_test_clock(CLOCK_MONOTONIC);
++	vdso_test_clock(CLOCK_MONOTONIC);
+ #endif
+ 
+ #ifdef CLOCK_MONOTONIC_RAW
+-	ret += vdso_test_clock(CLOCK_MONOTONIC_RAW);
++	vdso_test_clock(CLOCK_MONOTONIC_RAW);
+ #endif
+ 
+ #ifdef CLOCK_MONOTONIC_COARSE
+-	ret += vdso_test_clock(CLOCK_MONOTONIC_COARSE);
++	vdso_test_clock(CLOCK_MONOTONIC_COARSE);
+ #endif
+ 
+ #endif
+ 
+-	ret += vdso_test_time();
+-
+-	if (ret > 0)
+-		return KSFT_FAIL;
++	vdso_test_time();
+ 
+-	return KSFT_PASS;
++	ksft_print_cnts();
++	return ksft_get_fail_cnt() == 0 ? KSFT_PASS : KSFT_FAIL;
+ }
+diff --git a/tools/testing/selftests/zram/zram.sh b/tools/testing/selftests/zram/zram.sh
+index 232e958ec4547..b0b91d9b0dc21 100755
+--- a/tools/testing/selftests/zram/zram.sh
++++ b/tools/testing/selftests/zram/zram.sh
+@@ -2,9 +2,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ TCID="zram.sh"
+ 
+-# Kselftest framework requirement - SKIP code is 4.
+-ksft_skip=4
+-
+ . ./zram_lib.sh
+ 
+ run_zram () {
+@@ -18,14 +15,4 @@ echo ""
+ 
+ check_prereqs
+ 
+-# check zram module exists
+-MODULE_PATH=/lib/modules/`uname -r`/kernel/drivers/block/zram/zram.ko
+-if [ -f $MODULE_PATH ]; then
+-	run_zram
+-elif [ -b /dev/zram0 ]; then
+-	run_zram
+-else
+-	echo "$TCID : No zram.ko module or /dev/zram0 device file not found"
+-	echo "$TCID : CONFIG_ZRAM is not set"
+-	exit $ksft_skip
+-fi
++run_zram
+diff --git a/tools/testing/selftests/zram/zram01.sh b/tools/testing/selftests/zram/zram01.sh
+index 114863d9fb876..8f4affe34f3e4 100755
+--- a/tools/testing/selftests/zram/zram01.sh
++++ b/tools/testing/selftests/zram/zram01.sh
+@@ -33,9 +33,7 @@ zram_algs="lzo"
+ 
+ zram_fill_fs()
+ {
+-	local mem_free0=$(free -m | awk 'NR==2 {print $4}')
+-
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	for i in $(seq $dev_start $dev_end); do
+ 		echo "fill zram$i..."
+ 		local b=0
+ 		while [ true ]; do
+@@ -45,29 +43,17 @@ zram_fill_fs()
+ 			b=$(($b + 1))
+ 		done
+ 		echo "zram$i can be filled with '$b' KB"
+-	done
+ 
+-	local mem_free1=$(free -m | awk 'NR==2 {print $4}')
+-	local used_mem=$(($mem_free0 - $mem_free1))
++		local mem_used_total=`awk '{print $3}' "/sys/block/zram$i/mm_stat"`
++		local v=$((100 * 1024 * $b / $mem_used_total))
++		if [ "$v" -lt 100 ]; then
++			 echo "FAIL compression ratio: 0.$v:1"
++			 ERR_CODE=-1
++			 return
++		fi
+ 
+-	local total_size=0
+-	for sm in $zram_sizes; do
+-		local s=$(echo $sm | sed 's/M//')
+-		total_size=$(($total_size + $s))
++		echo "zram compression ratio: $(echo "scale=2; $v / 100 " | bc):1: OK"
+ 	done
+-
+-	echo "zram used ${used_mem}M, zram disk sizes ${total_size}M"
+-
+-	local v=$((100 * $total_size / $used_mem))
+-
+-	if [ "$v" -lt 100 ]; then
+-		echo "FAIL compression ratio: 0.$v:1"
+-		ERR_CODE=-1
+-		zram_cleanup
+-		return
+-	fi
+-
+-	echo "zram compression ratio: $(echo "scale=2; $v / 100 " | bc):1: OK"
+ }
+ 
+ check_prereqs
+@@ -81,7 +67,6 @@ zram_mount
+ 
+ zram_fill_fs
+ zram_cleanup
+-zram_unload
+ 
+ if [ $ERR_CODE -ne 0 ]; then
+ 	echo "$TCID : [FAIL]"
+diff --git a/tools/testing/selftests/zram/zram02.sh b/tools/testing/selftests/zram/zram02.sh
+index e83b404807c09..2418b0c4ed136 100755
+--- a/tools/testing/selftests/zram/zram02.sh
++++ b/tools/testing/selftests/zram/zram02.sh
+@@ -36,7 +36,6 @@ zram_set_memlimit
+ zram_makeswap
+ zram_swapoff
+ zram_cleanup
+-zram_unload
+ 
+ if [ $ERR_CODE -ne 0 ]; then
+ 	echo "$TCID : [FAIL]"
+diff --git a/tools/testing/selftests/zram/zram_lib.sh b/tools/testing/selftests/zram/zram_lib.sh
+index 6f872f266fd11..21ec1966de76c 100755
+--- a/tools/testing/selftests/zram/zram_lib.sh
++++ b/tools/testing/selftests/zram/zram_lib.sh
+@@ -5,12 +5,17 @@
+ # Author: Alexey Kodanev <alexey.kodanev@oracle.com>
+ # Modified: Naresh Kamboju <naresh.kamboju@linaro.org>
+ 
+-MODULE=0
+ dev_makeswap=-1
+ dev_mounted=-1
+-
++dev_start=0
++dev_end=-1
++module_load=-1
++sys_control=-1
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
++kernel_version=`uname -r | cut -d'.' -f1,2`
++kernel_major=${kernel_version%.*}
++kernel_minor=${kernel_version#*.}
+ 
+ trap INT
+ 
+@@ -25,68 +30,104 @@ check_prereqs()
+ 	fi
+ }
+ 
++kernel_gte()
++{
++	major=${1%.*}
++	minor=${1#*.}
++
++	if [ $kernel_major -gt $major ]; then
++		return 0
++	elif [[ $kernel_major -eq $major && $kernel_minor -ge $minor ]]; then
++		return 0
++	fi
++
++	return 1
++}
++
+ zram_cleanup()
+ {
+ 	echo "zram cleanup"
+ 	local i=
+-	for i in $(seq 0 $dev_makeswap); do
++	for i in $(seq $dev_start $dev_makeswap); do
+ 		swapoff /dev/zram$i
+ 	done
+ 
+-	for i in $(seq 0 $dev_mounted); do
++	for i in $(seq $dev_start $dev_mounted); do
+ 		umount /dev/zram$i
+ 	done
+ 
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	for i in $(seq $dev_start $dev_end); do
+ 		echo 1 > /sys/block/zram${i}/reset
+ 		rm -rf zram$i
+ 	done
+ 
+-}
++	if [ $sys_control -eq 1 ]; then
++		for i in $(seq $dev_start $dev_end); do
++			echo $i > /sys/class/zram-control/hot_remove
++		done
++	fi
+ 
+-zram_unload()
+-{
+-	if [ $MODULE -ne 0 ] ; then
+-		echo "zram rmmod zram"
++	if [ $module_load -eq 1 ]; then
+ 		rmmod zram > /dev/null 2>&1
+ 	fi
+ }
+ 
+ zram_load()
+ {
+-	# check zram module exists
+-	MODULE_PATH=/lib/modules/`uname -r`/kernel/drivers/block/zram/zram.ko
+-	if [ -f $MODULE_PATH ]; then
+-		MODULE=1
+-		echo "create '$dev_num' zram device(s)"
+-		modprobe zram num_devices=$dev_num
+-		if [ $? -ne 0 ]; then
+-			echo "failed to insert zram module"
+-			exit 1
+-		fi
+-
+-		dev_num_created=$(ls /dev/zram* | wc -w)
++	echo "create '$dev_num' zram device(s)"
++
++	# zram module loaded, new kernel
++	if [ -d "/sys/class/zram-control" ]; then
++		echo "zram modules already loaded, kernel supports" \
++			"zram-control interface"
++		dev_start=$(ls /dev/zram* | wc -w)
++		dev_end=$(($dev_start + $dev_num - 1))
++		sys_control=1
++
++		for i in $(seq $dev_start $dev_end); do
++			cat /sys/class/zram-control/hot_add > /dev/null
++		done
++
++		echo "all zram devices (/dev/zram$dev_start~$dev_end)" \
++			"successfully created"
++		return 0
++	fi
+ 
+-		if [ "$dev_num_created" -ne "$dev_num" ]; then
+-			echo "unexpected num of devices: $dev_num_created"
+-			ERR_CODE=-1
++	# detect old kernel or built-in
++	modprobe zram num_devices=$dev_num
++	if [ ! -d "/sys/class/zram-control" ]; then
++		if grep -q '^zram' /proc/modules; then
++			rmmod zram > /dev/null 2>&1
++			if [ $? -ne 0 ]; then
++				echo "zram module is being used on old kernel" \
++					"without zram-control interface"
++				exit $ksft_skip
++			fi
+ 		else
+-			echo "zram load module successful"
++			echo "test needs CONFIG_ZRAM=m on old kernel without" \
++				"zram-control interface"
++			exit $ksft_skip
+ 		fi
+-	elif [ -b /dev/zram0 ]; then
+-		echo "/dev/zram0 device file found: OK"
+-	else
+-		echo "ERROR: No zram.ko module or no /dev/zram0 device found"
+-		echo "$TCID : CONFIG_ZRAM is not set"
+-		exit 1
++		modprobe zram num_devices=$dev_num
+ 	fi
++
++	module_load=1
++	dev_end=$(($dev_num - 1))
++	echo "all zram devices (/dev/zram0~$dev_end) successfully created"
+ }
+ 
+ zram_max_streams()
+ {
+ 	echo "set max_comp_streams to zram device(s)"
+ 
+-	local i=0
++	kernel_gte 4.7
++	if [ $? -eq 0 ]; then
++		echo "The device attribute max_comp_streams was"\
++		               "deprecated in 4.7"
++		return 0
++	fi
++
++	local i=$dev_start
+ 	for max_s in $zram_max_streams; do
+ 		local sys_path="/sys/block/zram${i}/max_comp_streams"
+ 		echo $max_s > $sys_path || \
+@@ -98,7 +139,7 @@ zram_max_streams()
+ 			echo "FAIL can't set max_streams '$max_s', get $max_stream"
+ 
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$max_streams' ($i/$dev_num)"
++		echo "$sys_path = '$max_streams'"
+ 	done
+ 
+ 	echo "zram max streams: OK"
+@@ -108,15 +149,16 @@ zram_compress_alg()
+ {
+ 	echo "test that we can set compression algorithm"
+ 
+-	local algs=$(cat /sys/block/zram0/comp_algorithm)
++	local i=$dev_start
++	local algs=$(cat /sys/block/zram${i}/comp_algorithm)
+ 	echo "supported algs: $algs"
+-	local i=0
++
+ 	for alg in $zram_algs; do
+ 		local sys_path="/sys/block/zram${i}/comp_algorithm"
+ 		echo "$alg" >	$sys_path || \
+ 			echo "FAIL can't set '$alg' to $sys_path"
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$alg' ($i/$dev_num)"
++		echo "$sys_path = '$alg'"
+ 	done
+ 
+ 	echo "zram set compression algorithm: OK"
+@@ -125,14 +167,14 @@ zram_compress_alg()
+ zram_set_disksizes()
+ {
+ 	echo "set disk size to zram device(s)"
+-	local i=0
++	local i=$dev_start
+ 	for ds in $zram_sizes; do
+ 		local sys_path="/sys/block/zram${i}/disksize"
+ 		echo "$ds" >	$sys_path || \
+ 			echo "FAIL can't set '$ds' to $sys_path"
+ 
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$ds' ($i/$dev_num)"
++		echo "$sys_path = '$ds'"
+ 	done
+ 
+ 	echo "zram set disksizes: OK"
+@@ -142,14 +184,14 @@ zram_set_memlimit()
+ {
+ 	echo "set memory limit to zram device(s)"
+ 
+-	local i=0
++	local i=$dev_start
+ 	for ds in $zram_mem_limits; do
+ 		local sys_path="/sys/block/zram${i}/mem_limit"
+ 		echo "$ds" >	$sys_path || \
+ 			echo "FAIL can't set '$ds' to $sys_path"
+ 
+ 		i=$(($i + 1))
+-		echo "$sys_path = '$ds' ($i/$dev_num)"
++		echo "$sys_path = '$ds'"
+ 	done
+ 
+ 	echo "zram set memory limit: OK"
+@@ -158,8 +200,8 @@ zram_set_memlimit()
+ zram_makeswap()
+ {
+ 	echo "make swap with zram device(s)"
+-	local i=0
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	local i=$dev_start
++	for i in $(seq $dev_start $dev_end); do
+ 		mkswap /dev/zram$i > err.log 2>&1
+ 		if [ $? -ne 0 ]; then
+ 			cat err.log
+@@ -182,7 +224,7 @@ zram_makeswap()
+ zram_swapoff()
+ {
+ 	local i=
+-	for i in $(seq 0 $dev_makeswap); do
++	for i in $(seq $dev_start $dev_end); do
+ 		swapoff /dev/zram$i > err.log 2>&1
+ 		if [ $? -ne 0 ]; then
+ 			cat err.log
+@@ -196,7 +238,7 @@ zram_swapoff()
+ 
+ zram_makefs()
+ {
+-	local i=0
++	local i=$dev_start
+ 	for fs in $zram_filesystems; do
+ 		# if requested fs not supported default it to ext2
+ 		which mkfs.$fs > /dev/null 2>&1 || fs=ext2
+@@ -215,7 +257,7 @@ zram_makefs()
+ zram_mount()
+ {
+ 	local i=0
+-	for i in $(seq 0 $(($dev_num - 1))); do
++	for i in $(seq $dev_start $dev_end); do
+ 		echo "mount /dev/zram$i"
+ 		mkdir zram$i
+ 		mount /dev/zram$i zram$i > /dev/null || \



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-23 12:56 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-02-23 12:56 UTC (permalink / raw
  To: gentoo-commits

commit:     a3f065e0d1f162f03786176f45dd96b8f5f7132f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 23 12:55:52 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Feb 23 12:55:52 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a3f065e0

Remove redundant patch

2410_iwlwifi-fix-use-after-free.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                           |  4 ----
 2410_iwlwifi-fix-use-after-free.patch | 37 -----------------------------------
 2 files changed, 41 deletions(-)

diff --git a/0000_README b/0000_README
index 544fcf03..7706410d 100644
--- a/0000_README
+++ b/0000_README
@@ -103,10 +103,6 @@ Patch:  2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
 From:   https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
 Desc:   mt76: mt7921e: fix possible probe failure after reboot
 
-Patch:  2410_iwlwifi-fix-use-after-free.patch
-From:   https://marc.info/?l=linux-wireless&m=164431994900440&w=2
-Desc:   iwlwifi: fix use-after-free
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2410_iwlwifi-fix-use-after-free.patch b/2410_iwlwifi-fix-use-after-free.patch
deleted file mode 100644
index 4c94467b..00000000
--- a/2410_iwlwifi-fix-use-after-free.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-If no firmware was present at all (or, presumably, all of the
-firmware files failed to parse), we end up unbinding by calling
-device_release_driver(), which calls remove(), which then in
-iwlwifi calls iwl_drv_stop(), freeing the 'drv' struct. However
-the new code I added will still erroneously access it after it
-was freed.
-
-Set 'failure=false' in this case to avoid the access, all data
-was already freed anyway.
-
-Cc: stable@vger.kernel.org
-Reported-by: Stefan Agner <stefan@agner.ch>
-Reported-by: Wolfgang Walter <linux@stwm.de>
-Reported-by: Jason Self <jason@bluehome.net>
-Reported-by: Dominik Behr <dominik@dominikbehr.com>
-Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
-Fixes: ab07506b0454 ("iwlwifi: fix leaks/bad data after failed firmware load")
-Signed-off-by: Johannes Berg <johannes.berg@intel.com>
----
- drivers/net/wireless/intel/iwlwifi/iwl-drv.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
-index 83e3b731ad29..6651e78b39ec 100644
---- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
-+++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
-@@ -1707,6 +1707,8 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
-  out_unbind:
- 	complete(&drv->request_firmware_complete);
- 	device_release_driver(drv->trans->dev);
-+	/* drv has just been freed by the release */
-+	failure = false;
-  free:
- 	if (failure)
- 		iwl_dealloc_ucode(drv);
--- 
-2.34.1



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-02-26 17:42 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-02-26 17:42 UTC (permalink / raw
  To: gentoo-commits

commit:     9d5fa235c9c8eff2dc40d7d9640e7968219d4353
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 26 17:42:09 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 26 17:42:09 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9d5fa235

Update default sec restrictions

Bug: https://bugs.gentoo.org/834085

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 ...able-link-security-restrictions-by-default.patch | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
index f0ed144f..1b3e590d 100644
--- a/1510_fs-enable-link-security-restrictions-by-default.patch
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -1,20 +1,17 @@
-From: Ben Hutchings <ben@decadent.org.uk>
-Subject: fs: Enable link security restrictions by default
-Date: Fri, 02 Nov 2012 05:32:06 +0000
-Bug-Debian: https://bugs.debian.org/609455
-Forwarded: not-needed
-This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
-('VFS: don't do protected {sym,hard}links by default').
---- a/fs/namei.c	2018-09-28 07:56:07.770005006 -0400
-+++ b/fs/namei.c	2018-09-28 07:56:43.370349204 -0400
-@@ -885,8 +885,8 @@ static inline void put_link(struct namei
+--- a/fs/namei.c	2022-01-09 17:55:34.000000000 -0500
++++ b/fs/namei.c	2022-02-26 11:32:31.832844465 -0500
+@@ -1020,10 +1020,10 @@ static inline void put_link(struct namei
  		path_put(&last->link);
  }
  
 -int sysctl_protected_symlinks __read_mostly = 0;
 -int sysctl_protected_hardlinks __read_mostly = 0;
+-int sysctl_protected_fifos __read_mostly;
+-int sysctl_protected_regular __read_mostly;
 +int sysctl_protected_symlinks __read_mostly = 1;
 +int sysctl_protected_hardlinks __read_mostly = 1;
- int sysctl_protected_fifos __read_mostly;
- int sysctl_protected_regular __read_mostly;
++int sysctl_protected_fifos __read_mostly = 1;
++int sysctl_protected_regular __read_mostly = 1;
  
+ /**
+  * may_follow_link - Check symlink following for unsafe situations


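The updated patch above flips the defaults of all four fs protection sysctls to 1. As a hedged sketch of how those defaults would be inspected at runtime: the /proc/sys/fs/protected_* files exist only on Linux kernels that carry these sysctls, so the loop falls back to "unavailable" rather than failing.

```shell
# Hedged sketch: read the link/fifo/regular-file protection knobs that the
# patch above enables by default.  Requires Linux with /proc/sys mounted;
# otherwise each knob reports "unavailable".
summary=""
for knob in protected_symlinks protected_hardlinks protected_fifos protected_regular; do
	f="/proc/sys/fs/$knob"
	if [ -r "$f" ]; then
		val=$(cat "$f")
	else
		val="unavailable"
	fi
	line="fs.$knob = $val"
	echo "$line"
	summary="$summary$line
"
done
```

With the patched defaults, each knob reads 1 even before any sysctl.conf entry is applied; on an unpatched kernel the distribution's systemd/sysctl defaults usually set the first two.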

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-02 13:04 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-02 13:04 UTC (permalink / raw
  To: gentoo-commits

commit:     9dca87bbb73f85f2bea7d939047158ff44930ace
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar  2 13:04:21 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar  2 13:04:21 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9dca87bb

Linux patch 5.16.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-5.16.12.patch | 6431 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6435 insertions(+)

diff --git a/0000_README b/0000_README
index 7706410d..0785204e 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-5.16.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.11
 
+Patch:  1011_linux-5.16.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-5.16.12.patch b/1011_linux-5.16.12.patch
new file mode 100644
index 00000000..3b6bed0f
--- /dev/null
+++ b/1011_linux-5.16.12.patch
@@ -0,0 +1,6431 @@
+diff --git a/Makefile b/Makefile
+index 00ba75768af73..09a9bb824afad 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/parisc/kernel/unaligned.c b/arch/parisc/kernel/unaligned.c
+index 237d20dd5622d..286cec4d86d7b 100644
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -340,7 +340,7 @@ static int emulate_stw(struct pt_regs *regs, int frreg, int flop)
+ 	: "r" (val), "r" (regs->ior), "r" (regs->isr)
+ 	: "r19", "r20", "r21", "r22", "r1", FIXUP_BRANCH_CLOBBER );
+ 
+-	return 0;
++	return ret;
+ }
+ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ {
+@@ -397,7 +397,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ 	__asm__ __volatile__ (
+ "	mtsp	%4, %%sr1\n"
+ "	zdep	%2, 29, 2, %%r19\n"
+-"	dep	%%r0, 31, 2, %2\n"
++"	dep	%%r0, 31, 2, %3\n"
+ "	mtsar	%%r19\n"
+ "	zvdepi	-2, 32, %%r19\n"
+ "1:	ldw	0(%%sr1,%3),%%r20\n"
+@@ -409,7 +409,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ "	andcm	%%r21, %%r19, %%r21\n"
+ "	or	%1, %%r20, %1\n"
+ "	or	%2, %%r21, %2\n"
+-"3:	stw	%1,0(%%sr1,%1)\n"
++"3:	stw	%1,0(%%sr1,%3)\n"
+ "4:	stw	%%r1,4(%%sr1,%3)\n"
+ "5:	stw	%2,8(%%sr1,%3)\n"
+ "	copy	%%r0, %0\n"
+@@ -596,7 +596,6 @@ void handle_unaligned(struct pt_regs *regs)
+ 		ret = ERR_NOTHANDLED;	/* "undefined", but lets kill them. */
+ 		break;
+ 	}
+-#ifdef CONFIG_PA20
+ 	switch (regs->iir & OPCODE2_MASK)
+ 	{
+ 	case OPCODE_FLDD_L:
+@@ -607,22 +606,23 @@ void handle_unaligned(struct pt_regs *regs)
+ 		flop=1;
+ 		ret = emulate_std(regs, R2(regs->iir),1);
+ 		break;
++#ifdef CONFIG_PA20
+ 	case OPCODE_LDD_L:
+ 		ret = emulate_ldd(regs, R2(regs->iir),0);
+ 		break;
+ 	case OPCODE_STD_L:
+ 		ret = emulate_std(regs, R2(regs->iir),0);
+ 		break;
+-	}
+ #endif
++	}
+ 	switch (regs->iir & OPCODE3_MASK)
+ 	{
+ 	case OPCODE_FLDW_L:
+ 		flop=1;
+-		ret = emulate_ldw(regs, R2(regs->iir),0);
++		ret = emulate_ldw(regs, R2(regs->iir), 1);
+ 		break;
+ 	case OPCODE_LDW_M:
+-		ret = emulate_ldw(regs, R2(regs->iir),1);
++		ret = emulate_ldw(regs, R2(regs->iir), 0);
+ 		break;
+ 
+ 	case OPCODE_FSTW_L:
+diff --git a/arch/riscv/configs/nommu_k210_sdcard_defconfig b/arch/riscv/configs/nommu_k210_sdcard_defconfig
+index d68b743d580f8..15d1fd0a70184 100644
+--- a/arch/riscv/configs/nommu_k210_sdcard_defconfig
++++ b/arch/riscv/configs/nommu_k210_sdcard_defconfig
+@@ -23,7 +23,7 @@ CONFIG_SLOB=y
+ CONFIG_SOC_CANAAN=y
+ CONFIG_SMP=y
+ CONFIG_NR_CPUS=2
+-CONFIG_CMDLINE="earlycon console=ttySIF0 rootdelay=2 root=/dev/mmcblk0p1 ro"
++CONFIG_CMDLINE="earlycon console=ttySIF0 root=/dev/mmcblk0p1 rootwait ro"
+ CONFIG_CMDLINE_FORCE=y
+ # CONFIG_SECCOMP is not set
+ # CONFIG_STACKPROTECTOR is not set
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index 3397ddac1a30c..16308ef1e5787 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -50,6 +50,8 @@ obj-$(CONFIG_MODULE_SECTIONS)	+= module-sections.o
+ obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
+ obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
+ 
++obj-$(CONFIG_TRACE_IRQFLAGS)	+= trace_irq.o
++
+ obj-$(CONFIG_RISCV_BASE_PMU)	+= perf_event.o
+ obj-$(CONFIG_PERF_EVENTS)	+= perf_callchain.o
+ obj-$(CONFIG_HAVE_PERF_REGS)	+= perf_regs.o
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index ed29e9c8f660c..d6a46ed0bf051 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -108,7 +108,7 @@ _save_context:
+ .option pop
+ 
+ #ifdef CONFIG_TRACE_IRQFLAGS
+-	call trace_hardirqs_off
++	call __trace_hardirqs_off
+ #endif
+ 
+ #ifdef CONFIG_CONTEXT_TRACKING
+@@ -143,7 +143,7 @@ skip_context_tracking:
+ 	li t0, EXC_BREAKPOINT
+ 	beq s4, t0, 1f
+ #ifdef CONFIG_TRACE_IRQFLAGS
+-	call trace_hardirqs_on
++	call __trace_hardirqs_on
+ #endif
+ 	csrs CSR_STATUS, SR_IE
+ 
+@@ -234,7 +234,7 @@ ret_from_exception:
+ 	REG_L s0, PT_STATUS(sp)
+ 	csrc CSR_STATUS, SR_IE
+ #ifdef CONFIG_TRACE_IRQFLAGS
+-	call trace_hardirqs_off
++	call __trace_hardirqs_off
+ #endif
+ #ifdef CONFIG_RISCV_M_MODE
+ 	/* the MPP value is too large to be used as an immediate arg for addi */
+@@ -270,10 +270,10 @@ restore_all:
+ 	REG_L s1, PT_STATUS(sp)
+ 	andi t0, s1, SR_PIE
+ 	beqz t0, 1f
+-	call trace_hardirqs_on
++	call __trace_hardirqs_on
+ 	j 2f
+ 1:
+-	call trace_hardirqs_off
++	call __trace_hardirqs_off
+ 2:
+ #endif
+ 	REG_L a0, PT_STATUS(sp)
+diff --git a/arch/riscv/kernel/trace_irq.c b/arch/riscv/kernel/trace_irq.c
+new file mode 100644
+index 0000000000000..095ac976d7da1
+--- /dev/null
++++ b/arch/riscv/kernel/trace_irq.c
+@@ -0,0 +1,27 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (C) 2022 Changbin Du <changbin.du@gmail.com>
++ */
++
++#include <linux/irqflags.h>
++#include <linux/kprobes.h>
++#include "trace_irq.h"
++
++/*
++ * trace_hardirqs_on/off require the caller to setup frame pointer properly.
++ * Otherwise, CALLER_ADDR1 might trigger a paging exception in the kernel.
++ * Here we add one extra level so they can be safely called by low
++ * level entry code which $fp is used for other purpose.
++ */
++
++void __trace_hardirqs_on(void)
++{
++	trace_hardirqs_on();
++}
++NOKPROBE_SYMBOL(__trace_hardirqs_on);
++
++void __trace_hardirqs_off(void)
++{
++	trace_hardirqs_off();
++}
++NOKPROBE_SYMBOL(__trace_hardirqs_off);
+diff --git a/arch/riscv/kernel/trace_irq.h b/arch/riscv/kernel/trace_irq.h
+new file mode 100644
+index 0000000000000..99fe67377e5ed
+--- /dev/null
++++ b/arch/riscv/kernel/trace_irq.h
+@@ -0,0 +1,11 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Copyright (C) 2022 Changbin Du <changbin.du@gmail.com>
++ */
++#ifndef __TRACE_IRQ_H
++#define __TRACE_IRQ_H
++
++void __trace_hardirqs_on(void);
++void __trace_hardirqs_off(void);
++
++#endif /* __TRACE_IRQ_H */
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index fcdf3f8bb59a6..84e23b9864f4c 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3905,12 +3905,23 @@ static void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
+ 	walk_shadow_page_lockless_end(vcpu);
+ }
+ 
++static u32 alloc_apf_token(struct kvm_vcpu *vcpu)
++{
++	/* make sure the token value is not 0 */
++	u32 id = vcpu->arch.apf.id;
++
++	if (id << 12 == 0)
++		vcpu->arch.apf.id = 1;
++
++	return (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
++}
++
+ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 				    gfn_t gfn)
+ {
+ 	struct kvm_arch_async_pf arch;
+ 
+-	arch.token = (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
++	arch.token = alloc_apf_token(vcpu);
+ 	arch.gfn = gfn;
+ 	arch.direct_map = vcpu->arch.mmu->direct_map;
+ 	arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 57e2a55e46175..9875c4cc3c768 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2903,8 +2903,23 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 	u64 data = msr->data;
+ 	switch (ecx) {
+ 	case MSR_AMD64_TSC_RATIO:
+-		if (!msr->host_initiated && !svm->tsc_scaling_enabled)
+-			return 1;
++
++		if (!svm->tsc_scaling_enabled) {
++
++			if (!msr->host_initiated)
++				return 1;
++			/*
++			 * In case TSC scaling is not enabled, always
++			 * leave this MSR at the default value.
++			 *
++			 * Due to a bug in qemu 6.2.0, it would try to set
++			 * this msr to 0 if tsc scaling is not enabled.
++			 * Ignore this value as well.
++			 */
++			if (data != 0 && data != svm->tsc_ratio_msr)
++				return 1;
++			break;
++		}
+ 
+ 		if (data & TSC_RATIO_RSVD)
+ 			return 1;
+diff --git a/block/fops.c b/block/fops.c
+index 0da147edbd186..77a5579d8de66 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -289,6 +289,8 @@ static void blkdev_bio_end_io_async(struct bio *bio)
+ 	struct kiocb *iocb = dio->iocb;
+ 	ssize_t ret;
+ 
++	WRITE_ONCE(iocb->private, NULL);
++
+ 	if (likely(!bio->bi_status)) {
+ 		ret = dio->size;
+ 		iocb->ki_pos += ret;
+diff --git a/drivers/ata/pata_hpt37x.c b/drivers/ata/pata_hpt37x.c
+index f242157bc81bb..ae8375e9d2681 100644
+--- a/drivers/ata/pata_hpt37x.c
++++ b/drivers/ata/pata_hpt37x.c
+@@ -919,6 +919,20 @@ static int hpt37x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
+ 	irqmask &= ~0x10;
+ 	pci_write_config_byte(dev, 0x5a, irqmask);
+ 
++	/*
++	 * HPT371 chips physically have only one channel, the secondary one,
++	 * but the primary channel registers do exist!  Go figure...
++	 * So,  we manually disable the non-existing channel here
++	 * (if the BIOS hasn't done this already).
++	 */
++	if (dev->device == PCI_DEVICE_ID_TTI_HPT371) {
++		u8 mcr1;
++
++		pci_read_config_byte(dev, 0x50, &mcr1);
++		mcr1 &= ~0x04;
++		pci_write_config_byte(dev, 0x50, mcr1);
++	}
++
+ 	/*
+ 	 * default to pci clock. make sure MA15/16 are set to output
+ 	 * to prevent drives having problems with 40-pin cables. Needed
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 68ea1f949daa9..6b66306932016 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -629,6 +629,9 @@ re_probe:
+ 			drv->remove(dev);
+ 
+ 		devres_release_all(dev);
++		arch_teardown_dma_ops(dev);
++		kfree(dev->dma_range_map);
++		dev->dma_range_map = NULL;
+ 		driver_sysfs_remove(dev);
+ 		dev->driver = NULL;
+ 		dev_set_drvdata(dev, NULL);
+@@ -1208,6 +1211,8 @@ static void __device_release_driver(struct device *dev, struct device *parent)
+ 
+ 		devres_release_all(dev);
+ 		arch_teardown_dma_ops(dev);
++		kfree(dev->dma_range_map);
++		dev->dma_range_map = NULL;
+ 		dev->driver = NULL;
+ 		dev_set_drvdata(dev, NULL);
+ 		if (dev->pm_domain && dev->pm_domain->dismiss)
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index d2656581a6085..4a446259a184e 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -189,11 +189,9 @@ static void regmap_irq_sync_unlock(struct irq_data *data)
+ 				ret = regmap_write(map, reg, d->mask_buf[i]);
+ 			if (d->chip->clear_ack) {
+ 				if (d->chip->ack_invert && !ret)
+-					ret = regmap_write(map, reg,
+-							   d->mask_buf[i]);
++					ret = regmap_write(map, reg, UINT_MAX);
+ 				else if (!ret)
+-					ret = regmap_write(map, reg,
+-							   ~d->mask_buf[i]);
++					ret = regmap_write(map, reg, 0);
+ 			}
+ 			if (ret != 0)
+ 				dev_err(d->map->dev, "Failed to ack 0x%x: %d\n",
+@@ -556,11 +554,9 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
+ 						data->status_buf[i]);
+ 			if (chip->clear_ack) {
+ 				if (chip->ack_invert && !ret)
+-					ret = regmap_write(map, reg,
+-							data->status_buf[i]);
++					ret = regmap_write(map, reg, UINT_MAX);
+ 				else if (!ret)
+-					ret = regmap_write(map, reg,
+-							~data->status_buf[i]);
++					ret = regmap_write(map, reg, 0);
+ 			}
+ 			if (ret != 0)
+ 				dev_err(map->dev, "Failed to ack 0x%x: %d\n",
+@@ -817,13 +813,9 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 					d->status_buf[i] & d->mask_buf[i]);
+ 			if (chip->clear_ack) {
+ 				if (chip->ack_invert && !ret)
+-					ret = regmap_write(map, reg,
+-						(d->status_buf[i] &
+-						 d->mask_buf[i]));
++					ret = regmap_write(map, reg, UINT_MAX);
+ 				else if (!ret)
+-					ret = regmap_write(map, reg,
+-						~(d->status_buf[i] &
+-						  d->mask_buf[i]));
++					ret = regmap_write(map, reg, 0);
+ 			}
+ 			if (ret != 0) {
+ 				dev_err(map->dev, "Failed to ack 0x%x: %d\n",
+diff --git a/drivers/clk/ingenic/jz4725b-cgu.c b/drivers/clk/ingenic/jz4725b-cgu.c
+index 744d136b721bc..15d61793f53b1 100644
+--- a/drivers/clk/ingenic/jz4725b-cgu.c
++++ b/drivers/clk/ingenic/jz4725b-cgu.c
+@@ -139,11 +139,10 @@ static const struct ingenic_cgu_clk_info jz4725b_cgu_clocks[] = {
+ 	},
+ 
+ 	[JZ4725B_CLK_I2S] = {
+-		"i2s", CGU_CLK_MUX | CGU_CLK_DIV | CGU_CLK_GATE,
++		"i2s", CGU_CLK_MUX | CGU_CLK_DIV,
+ 		.parents = { JZ4725B_CLK_EXT, JZ4725B_CLK_PLL_HALF, -1, -1 },
+ 		.mux = { CGU_REG_CPCCR, 31, 1 },
+ 		.div = { CGU_REG_I2SCDR, 0, 1, 9, -1, -1, -1 },
+-		.gate = { CGU_REG_CLKGR, 6 },
+ 	},
+ 
+ 	[JZ4725B_CLK_SPI] = {
+diff --git a/drivers/clk/qcom/gcc-msm8994.c b/drivers/clk/qcom/gcc-msm8994.c
+index 702a9bdc05598..5df9f1ead48e0 100644
+--- a/drivers/clk/qcom/gcc-msm8994.c
++++ b/drivers/clk/qcom/gcc-msm8994.c
+@@ -107,42 +107,6 @@ static const struct clk_parent_data gcc_xo_gpll0_gpll4[] = {
+ 	{ .hw = &gpll4.clkr.hw },
+ };
+ 
+-static struct clk_rcg2 system_noc_clk_src = {
+-	.cmd_rcgr = 0x0120,
+-	.hid_width = 5,
+-	.parent_map = gcc_xo_gpll0_map,
+-	.clkr.hw.init = &(struct clk_init_data){
+-		.name = "system_noc_clk_src",
+-		.parent_data = gcc_xo_gpll0,
+-		.num_parents = ARRAY_SIZE(gcc_xo_gpll0),
+-		.ops = &clk_rcg2_ops,
+-	},
+-};
+-
+-static struct clk_rcg2 config_noc_clk_src = {
+-	.cmd_rcgr = 0x0150,
+-	.hid_width = 5,
+-	.parent_map = gcc_xo_gpll0_map,
+-	.clkr.hw.init = &(struct clk_init_data){
+-		.name = "config_noc_clk_src",
+-		.parent_data = gcc_xo_gpll0,
+-		.num_parents = ARRAY_SIZE(gcc_xo_gpll0),
+-		.ops = &clk_rcg2_ops,
+-	},
+-};
+-
+-static struct clk_rcg2 periph_noc_clk_src = {
+-	.cmd_rcgr = 0x0190,
+-	.hid_width = 5,
+-	.parent_map = gcc_xo_gpll0_map,
+-	.clkr.hw.init = &(struct clk_init_data){
+-		.name = "periph_noc_clk_src",
+-		.parent_data = gcc_xo_gpll0,
+-		.num_parents = ARRAY_SIZE(gcc_xo_gpll0),
+-		.ops = &clk_rcg2_ops,
+-	},
+-};
+-
+ static struct freq_tbl ftbl_ufs_axi_clk_src[] = {
+ 	F(50000000, P_GPLL0, 12, 0, 0),
+ 	F(100000000, P_GPLL0, 6, 0, 0),
+@@ -1149,8 +1113,6 @@ static struct clk_branch gcc_blsp1_ahb_clk = {
+ 		.enable_mask = BIT(17),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_blsp1_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1434,8 +1396,6 @@ static struct clk_branch gcc_blsp2_ahb_clk = {
+ 		.enable_mask = BIT(15),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_blsp2_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1763,8 +1723,6 @@ static struct clk_branch gcc_lpass_q6_axi_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_lpass_q6_axi_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1777,8 +1735,6 @@ static struct clk_branch gcc_mss_q6_bimc_axi_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_mss_q6_bimc_axi_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1806,9 +1762,6 @@ static struct clk_branch gcc_pcie_0_cfg_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_0_cfg_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1821,9 +1774,6 @@ static struct clk_branch gcc_pcie_0_mstr_axi_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_0_mstr_axi_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1853,9 +1803,6 @@ static struct clk_branch gcc_pcie_0_slv_axi_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_0_slv_axi_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1883,9 +1830,6 @@ static struct clk_branch gcc_pcie_1_cfg_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_1_cfg_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1898,9 +1842,6 @@ static struct clk_branch gcc_pcie_1_mstr_axi_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_1_mstr_axi_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1929,9 +1870,6 @@ static struct clk_branch gcc_pcie_1_slv_axi_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_1_slv_axi_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1959,8 +1897,6 @@ static struct clk_branch gcc_pdm_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pdm_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1988,9 +1924,6 @@ static struct clk_branch gcc_sdcc1_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_sdcc1_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2003,9 +1936,6 @@ static struct clk_branch gcc_sdcc2_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_sdcc2_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2033,9 +1963,6 @@ static struct clk_branch gcc_sdcc3_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_sdcc3_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2063,9 +1990,6 @@ static struct clk_branch gcc_sdcc4_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_sdcc4_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2123,8 +2047,6 @@ static struct clk_branch gcc_tsif_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_tsif_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2152,8 +2074,6 @@ static struct clk_branch gcc_ufs_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_ufs_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2197,8 +2117,6 @@ static struct clk_branch gcc_ufs_rx_symbol_0_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_ufs_rx_symbol_0_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2212,8 +2130,6 @@ static struct clk_branch gcc_ufs_rx_symbol_1_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_ufs_rx_symbol_1_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2242,8 +2158,6 @@ static struct clk_branch gcc_ufs_tx_symbol_0_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_ufs_tx_symbol_0_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2257,8 +2171,6 @@ static struct clk_branch gcc_ufs_tx_symbol_1_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_ufs_tx_symbol_1_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &system_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2363,8 +2275,6 @@ static struct clk_branch gcc_usb_hs_ahb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_usb_hs_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2487,8 +2397,6 @@ static struct clk_branch gcc_boot_rom_ahb_clk = {
+ 		.enable_mask = BIT(10),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_boot_rom_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &config_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2502,8 +2410,6 @@ static struct clk_branch gcc_prng_ahb_clk = {
+ 		.enable_mask = BIT(13),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_prng_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){ &periph_noc_clk_src.clkr.hw },
+-			.num_parents = 1,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -2546,9 +2452,6 @@ static struct clk_regmap *gcc_msm8994_clocks[] = {
+ 	[GPLL0] = &gpll0.clkr,
+ 	[GPLL4_EARLY] = &gpll4_early.clkr,
+ 	[GPLL4] = &gpll4.clkr,
+-	[CONFIG_NOC_CLK_SRC] = &config_noc_clk_src.clkr,
+-	[PERIPH_NOC_CLK_SRC] = &periph_noc_clk_src.clkr,
+-	[SYSTEM_NOC_CLK_SRC] = &system_noc_clk_src.clkr,
+ 	[UFS_AXI_CLK_SRC] = &ufs_axi_clk_src.clkr,
+ 	[USB30_MASTER_CLK_SRC] = &usb30_master_clk_src.clkr,
+ 	[BLSP1_QUP1_I2C_APPS_CLK_SRC] = &blsp1_qup1_i2c_apps_clk_src.clkr,
+@@ -2695,6 +2598,15 @@ static struct clk_regmap *gcc_msm8994_clocks[] = {
+ 	[USB_SS_PHY_LDO] = &usb_ss_phy_ldo.clkr,
+ 	[GCC_BOOT_ROM_AHB_CLK] = &gcc_boot_rom_ahb_clk.clkr,
+ 	[GCC_PRNG_AHB_CLK] = &gcc_prng_ahb_clk.clkr,
++
++	/*
++	 * The following clocks should NOT be managed by this driver, but they once were
++	 * mistakenly added. Now they are only here to indicate that they are not defined
++	 * on purpose, even though the names will stay in the header file (for ABI sanity).
++	 */
++	[CONFIG_NOC_CLK_SRC] = NULL,
++	[PERIPH_NOC_CLK_SRC] = NULL,
++	[SYSTEM_NOC_CLK_SRC] = NULL,
+ };
+ 
+ static struct gdsc *gcc_msm8994_gdscs[] = {
+diff --git a/drivers/gpio/gpio-rockchip.c b/drivers/gpio/gpio-rockchip.c
+index ce63cbd14d69a..24155c038f6d0 100644
+--- a/drivers/gpio/gpio-rockchip.c
++++ b/drivers/gpio/gpio-rockchip.c
+@@ -410,10 +410,8 @@ static int rockchip_irq_set_type(struct irq_data *d, unsigned int type)
+ 	level = rockchip_gpio_readl(bank, bank->gpio_regs->int_type);
+ 	polarity = rockchip_gpio_readl(bank, bank->gpio_regs->int_polarity);
+ 
+-	switch (type) {
+-	case IRQ_TYPE_EDGE_BOTH:
++	if (type == IRQ_TYPE_EDGE_BOTH) {
+ 		if (bank->gpio_type == GPIO_TYPE_V2) {
+-			bank->toggle_edge_mode &= ~mask;
+ 			rockchip_gpio_writel_bit(bank, d->hwirq, 1,
+ 						 bank->gpio_regs->int_bothedge);
+ 			goto out;
+@@ -431,30 +429,34 @@ static int rockchip_irq_set_type(struct irq_data *d, unsigned int type)
+ 			else
+ 				polarity |= mask;
+ 		}
+-		break;
+-	case IRQ_TYPE_EDGE_RISING:
+-		bank->toggle_edge_mode &= ~mask;
+-		level |= mask;
+-		polarity |= mask;
+-		break;
+-	case IRQ_TYPE_EDGE_FALLING:
+-		bank->toggle_edge_mode &= ~mask;
+-		level |= mask;
+-		polarity &= ~mask;
+-		break;
+-	case IRQ_TYPE_LEVEL_HIGH:
+-		bank->toggle_edge_mode &= ~mask;
+-		level &= ~mask;
+-		polarity |= mask;
+-		break;
+-	case IRQ_TYPE_LEVEL_LOW:
+-		bank->toggle_edge_mode &= ~mask;
+-		level &= ~mask;
+-		polarity &= ~mask;
+-		break;
+-	default:
+-		ret = -EINVAL;
+-		goto out;
++	} else {
++		if (bank->gpio_type == GPIO_TYPE_V2) {
++			rockchip_gpio_writel_bit(bank, d->hwirq, 0,
++						 bank->gpio_regs->int_bothedge);
++		} else {
++			bank->toggle_edge_mode &= ~mask;
++		}
++		switch (type) {
++		case IRQ_TYPE_EDGE_RISING:
++			level |= mask;
++			polarity |= mask;
++			break;
++		case IRQ_TYPE_EDGE_FALLING:
++			level |= mask;
++			polarity &= ~mask;
++			break;
++		case IRQ_TYPE_LEVEL_HIGH:
++			level &= ~mask;
++			polarity |= mask;
++			break;
++		case IRQ_TYPE_LEVEL_LOW:
++			level &= ~mask;
++			polarity &= ~mask;
++			break;
++		default:
++			ret = -EINVAL;
++			goto out;
++		}
+ 	}
+ 
+ 	rockchip_gpio_writel(bank, level, bank->gpio_regs->int_type);
+diff --git a/drivers/gpio/gpio-tegra186.c b/drivers/gpio/gpio-tegra186.c
+index c026e7141e4ea..f62f267dfd7d2 100644
+--- a/drivers/gpio/gpio-tegra186.c
++++ b/drivers/gpio/gpio-tegra186.c
+@@ -341,9 +341,12 @@ static int tegra186_gpio_of_xlate(struct gpio_chip *chip,
+ 	return offset + pin;
+ }
+ 
++#define to_tegra_gpio(x) container_of((x), struct tegra_gpio, gpio)
++
+ static void tegra186_irq_ack(struct irq_data *data)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 
+ 	base = tegra186_gpio_get_base(gpio, data->hwirq);
+@@ -355,7 +358,8 @@ static void tegra186_irq_ack(struct irq_data *data)
+ 
+ static void tegra186_irq_mask(struct irq_data *data)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 	u32 value;
+ 
+@@ -370,7 +374,8 @@ static void tegra186_irq_mask(struct irq_data *data)
+ 
+ static void tegra186_irq_unmask(struct irq_data *data)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 	u32 value;
+ 
+@@ -385,7 +390,8 @@ static void tegra186_irq_unmask(struct irq_data *data)
+ 
+ static int tegra186_irq_set_type(struct irq_data *data, unsigned int type)
+ {
+-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
++	struct tegra_gpio *gpio = to_tegra_gpio(gc);
+ 	void __iomem *base;
+ 	u32 value;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index ab3851c26f71c..8c7637233c816 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2014,6 +2014,9 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ 		return -ENODEV;
+ 	}
+ 
++	if (amdgpu_aspm == -1 && !pcie_aspm_enabled(pdev))
++		amdgpu_aspm = 0;
++
+ 	if (amdgpu_virtual_display ||
+ 	    amdgpu_device_asic_has_dc_support(flags & AMD_ASIC_MASK))
+ 		supports_atomic = true;
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index de9b55383e9f8..d01ddce2dec1d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -619,8 +619,8 @@ soc15_asic_reset_method(struct amdgpu_device *adev)
+ static int soc15_asic_reset(struct amdgpu_device *adev)
+ {
+ 	/* original raven doesn't have full asic reset */
+-	if ((adev->apu_flags & AMD_APU_IS_RAVEN) &&
+-	    !(adev->apu_flags & AMD_APU_IS_RAVEN2))
++	if ((adev->apu_flags & AMD_APU_IS_RAVEN) ||
++	    (adev->apu_flags & AMD_APU_IS_RAVEN2))
+ 		return 0;
+ 
+ 	switch (soc15_asic_reset_method(adev)) {
+@@ -1114,8 +1114,11 @@ static int soc15_common_early_init(void *handle)
+ 				AMD_CG_SUPPORT_SDMA_LS |
+ 				AMD_CG_SUPPORT_VCN_MGCG;
+ 
++			/*
++			 * MMHUB PG needs to be disabled for Picasso for
++			 * stability reasons.
++			 */
+ 			adev->pg_flags = AMD_PG_SUPPORT_SDMA |
+-				AMD_PG_SUPPORT_MMHUB |
+ 				AMD_PG_SUPPORT_VCN;
+ 		} else {
+ 			adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0117b00b4ed83..7a5bb5a3456a6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4232,6 +4232,9 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 	}
+ #endif
+ 
++	/* Disable vblank IRQs aggressively for power-saving. */
++	adev_to_drm(adev)->vblank_disable_immediate = true;
++
+ 	/* loops over all connectors on the board */
+ 	for (i = 0; i < link_cnt; i++) {
+ 		struct dc_link *link = NULL;
+@@ -4277,19 +4280,17 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 				update_connector_ext_caps(aconnector);
+ 			if (psr_feature_enabled)
+ 				amdgpu_dm_set_psr_caps(link);
++
++			/* TODO: Fix vblank control helpers to delay PSR entry to allow this when
++			 * PSR is also supported.
++			 */
++			if (link->psr_settings.psr_feature_enabled)
++				adev_to_drm(adev)->vblank_disable_immediate = false;
+ 		}
+ 
+ 
+ 	}
+ 
+-	/*
+-	 * Disable vblank IRQs aggressively for power-saving.
+-	 *
+-	 * TODO: Fix vblank control helpers to delay PSR entry to allow this when PSR
+-	 * is also supported.
+-	 */
+-	adev_to_drm(adev)->vblank_disable_immediate = !psr_feature_enabled;
+-
+ 	/* Software is initialized. Now we can register interrupt handlers. */
+ 	switch (adev->asic_type) {
+ #if defined(CONFIG_DRM_AMD_DC_SI)
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+index 1861a147a7fa1..5c5cbeb59c4d9 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+@@ -437,8 +437,10 @@ static void dcn3_get_memclk_states_from_smu(struct clk_mgr *clk_mgr_base)
+ 	clk_mgr_base->bw_params->clk_table.num_entries = num_levels ? num_levels : 1;
+ 
+ 	/* Refresh bounding box */
++	DC_FP_START();
+ 	clk_mgr_base->ctx->dc->res_pool->funcs->update_bw_bounding_box(
+ 			clk_mgr->base.ctx->dc, clk_mgr_base->bw_params);
++	DC_FP_END();
+ }
+ 
+ static bool dcn3_is_smu_present(struct clk_mgr *clk_mgr_base)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index e890e063cde31..1e7fe6bea300f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -999,10 +999,13 @@ static bool dc_construct(struct dc *dc,
+ 		goto fail;
+ #ifdef CONFIG_DRM_AMD_DC_DCN
+ 	dc->clk_mgr->force_smu_not_present = init_params->force_smu_not_present;
+-#endif
+ 
+-	if (dc->res_pool->funcs->update_bw_bounding_box)
++	if (dc->res_pool->funcs->update_bw_bounding_box) {
++		DC_FP_START();
+ 		dc->res_pool->funcs->update_bw_bounding_box(dc, dc->clk_mgr->bw_params);
++		DC_FP_END();
++	}
++#endif
+ 
+ 	/* Creation of current_state must occur after dc->dml
+ 	 * is initialized in dc_create_resource_pool because
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index e2d9a46d0e1ad..6b066ceab4128 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1876,10 +1876,6 @@ enum dc_status dc_remove_stream_from_ctx(
+ 				dc->res_pool,
+ 			del_pipe->stream_res.stream_enc,
+ 			false);
+-	/* Release link encoder from stream in new dc_state. */
+-	if (dc->res_pool->funcs->link_enc_unassign)
+-		dc->res_pool->funcs->link_enc_unassign(new_ctx, del_pipe->stream);
+-
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+ 	if (is_dp_128b_132b_signal(del_pipe)) {
+ 		update_hpo_dp_stream_engine_usage(
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 446d37320b948..b55118388d2d7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -418,6 +418,36 @@ static int sienna_cichlid_store_powerplay_table(struct smu_context *smu)
+ 	return 0;
+ }
+ 
++static int sienna_cichlid_patch_pptable_quirk(struct smu_context *smu)
++{
++	struct amdgpu_device *adev = smu->adev;
++	uint32_t *board_reserved;
++	uint16_t *freq_table_gfx;
++	uint32_t i;
++
++	/* Fix some OEM SKU specific stability issues */
++	GET_PPTABLE_MEMBER(BoardReserved, &board_reserved);
++	if ((adev->pdev->device == 0x73DF) &&
++	    (adev->pdev->revision == 0XC3) &&
++	    (adev->pdev->subsystem_device == 0x16C2) &&
++	    (adev->pdev->subsystem_vendor == 0x1043))
++		board_reserved[0] = 1387;
++
++	GET_PPTABLE_MEMBER(FreqTableGfx, &freq_table_gfx);
++	if ((adev->pdev->device == 0x73DF) &&
++	    (adev->pdev->revision == 0XC3) &&
++	    ((adev->pdev->subsystem_device == 0x16C2) ||
++	    (adev->pdev->subsystem_device == 0x133C)) &&
++	    (adev->pdev->subsystem_vendor == 0x1043)) {
++		for (i = 0; i < NUM_GFXCLK_DPM_LEVELS; i++) {
++			if (freq_table_gfx[i] > 2500)
++				freq_table_gfx[i] = 2500;
++		}
++	}
++
++	return 0;
++}
++
+ static int sienna_cichlid_setup_pptable(struct smu_context *smu)
+ {
+ 	int ret = 0;
+@@ -438,7 +468,7 @@ static int sienna_cichlid_setup_pptable(struct smu_context *smu)
+ 	if (ret)
+ 		return ret;
+ 
+-	return ret;
++	return sienna_cichlid_patch_pptable_quirk(smu);
+ }
+ 
+ static int sienna_cichlid_tables_init(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 12893e7be89bb..f5f5de362ff2c 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5345,6 +5345,7 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ 	if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
+ 		return quirks;
+ 
++	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
+ 	drm_parse_cea_ext(connector, edid);
+ 
+ 	/*
+@@ -5393,7 +5394,6 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ 	DRM_DEBUG("%s: Assigning EDID-1.4 digital sink color depth as %d bpc.\n",
+ 			  connector->name, info->bpc);
+ 
+-	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
+ 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB444)
+ 		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
+ 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422)
+diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
+index 8d9d888e93161..5a2f96d39ac78 100644
+--- a/drivers/gpu/drm/i915/display/intel_bw.c
++++ b/drivers/gpu/drm/i915/display/intel_bw.c
+@@ -681,6 +681,7 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
+ 	unsigned int max_bw_point = 0, max_bw = 0;
+ 	unsigned int num_qgv_points = dev_priv->max_bw[0].num_qgv_points;
+ 	unsigned int num_psf_gv_points = dev_priv->max_bw[0].num_psf_gv_points;
++	bool changed = false;
+ 	u32 mask = 0;
+ 
+ 	/* FIXME earlier gens need some checks too */
+@@ -724,6 +725,8 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
+ 		new_bw_state->data_rate[crtc->pipe] = new_data_rate;
+ 		new_bw_state->num_active_planes[crtc->pipe] = new_active_planes;
+ 
++		changed = true;
++
+ 		drm_dbg_kms(&dev_priv->drm,
+ 			    "pipe %c data rate %u num active planes %u\n",
+ 			    pipe_name(crtc->pipe),
+@@ -731,7 +734,19 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
+ 			    new_bw_state->num_active_planes[crtc->pipe]);
+ 	}
+ 
+-	if (!new_bw_state)
++	old_bw_state = intel_atomic_get_old_bw_state(state);
++	new_bw_state = intel_atomic_get_new_bw_state(state);
++
++	if (new_bw_state &&
++	    intel_can_enable_sagv(dev_priv, old_bw_state) !=
++	    intel_can_enable_sagv(dev_priv, new_bw_state))
++		changed = true;
++
++	/*
++	 * If none of our inputs (data rates, number of active
++	 * planes, SAGV yes/no) changed then nothing to do here.
++	 */
++	if (!changed)
+ 		return 0;
+ 
+ 	ret = intel_atomic_lock_global_state(&new_bw_state->base);
+@@ -814,7 +829,6 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
+ 	 */
+ 	new_bw_state->qgv_points_mask = ~allowed_points & mask;
+ 
+-	old_bw_state = intel_atomic_get_old_bw_state(state);
+ 	/*
+ 	 * If the actual mask had changed we need to make sure that
+ 	 * the commits are serialized(in case this is a nomodeset, nonblocking)
+diff --git a/drivers/gpu/drm/i915/display/intel_bw.h b/drivers/gpu/drm/i915/display/intel_bw.h
+index 46c6eecbd9175..0ceaed1c96562 100644
+--- a/drivers/gpu/drm/i915/display/intel_bw.h
++++ b/drivers/gpu/drm/i915/display/intel_bw.h
+@@ -30,19 +30,19 @@ struct intel_bw_state {
+ 	 */
+ 	u8 pipe_sagv_reject;
+ 
++	/* bitmask of active pipes */
++	u8 active_pipes;
++
+ 	/*
+ 	 * Current QGV points mask, which restricts
+ 	 * some particular SAGV states, not to confuse
+ 	 * with pipe_sagv_mask.
+ 	 */
+-	u8 qgv_points_mask;
++	u16 qgv_points_mask;
+ 
+ 	unsigned int data_rate[I915_MAX_PIPES];
+ 	u8 num_active_planes[I915_MAX_PIPES];
+ 
+-	/* bitmask of active pipes */
+-	u8 active_pipes;
+-
+ 	int min_cdclk;
+ };
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_snps_phy.c b/drivers/gpu/drm/i915/display/intel_snps_phy.c
+index 5e20f340730fb..601929bab874c 100644
+--- a/drivers/gpu/drm/i915/display/intel_snps_phy.c
++++ b/drivers/gpu/drm/i915/display/intel_snps_phy.c
+@@ -34,7 +34,7 @@ void intel_snps_phy_wait_for_calibration(struct drm_i915_private *dev_priv)
+ 		if (intel_de_wait_for_clear(dev_priv, ICL_PHY_MISC(phy),
+ 					    DG2_PHY_DP_TX_ACK_MASK, 25))
+ 			DRM_ERROR("SNPS PHY %c failed to calibrate after 25ms.\n",
+-				  phy);
++				  phy_name(phy));
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
+index dbd7d0d83a141..7784c30fe8937 100644
+--- a/drivers/gpu/drm/i915/display/intel_tc.c
++++ b/drivers/gpu/drm/i915/display/intel_tc.c
+@@ -691,6 +691,8 @@ void intel_tc_port_sanitize(struct intel_digital_port *dig_port)
+ {
+ 	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
+ 	struct intel_encoder *encoder = &dig_port->base;
++	intel_wakeref_t tc_cold_wref;
++	enum intel_display_power_domain domain;
+ 	int active_links = 0;
+ 
+ 	mutex_lock(&dig_port->tc_lock);
+@@ -702,12 +704,11 @@ void intel_tc_port_sanitize(struct intel_digital_port *dig_port)
+ 
+ 	drm_WARN_ON(&i915->drm, dig_port->tc_mode != TC_PORT_DISCONNECTED);
+ 	drm_WARN_ON(&i915->drm, dig_port->tc_lock_wakeref);
+-	if (active_links) {
+-		enum intel_display_power_domain domain;
+-		intel_wakeref_t tc_cold_wref = tc_cold_block(dig_port, &domain);
+ 
+-		dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port);
++	tc_cold_wref = tc_cold_block(dig_port, &domain);
+ 
++	dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port);
++	if (active_links) {
+ 		if (!icl_tc_phy_is_connected(dig_port))
+ 			drm_dbg_kms(&i915->drm,
+ 				    "Port %s: PHY disconnected with %d active link(s)\n",
+@@ -716,10 +717,23 @@ void intel_tc_port_sanitize(struct intel_digital_port *dig_port)
+ 
+ 		dig_port->tc_lock_wakeref = tc_cold_block(dig_port,
+ 							  &dig_port->tc_lock_power_domain);
+-
+-		tc_cold_unblock(dig_port, domain, tc_cold_wref);
++	} else {
++		/*
++		 * TBT-alt is the default mode in any case the PHY ownership is not
++		 * held (regardless of the sink's connected live state), so
++		 * we'll just switch to disconnected mode from it here without
++		 * a note.
++		 */
++		if (dig_port->tc_mode != TC_PORT_TBT_ALT)
++			drm_dbg_kms(&i915->drm,
++				    "Port %s: PHY left in %s mode on disabled port, disconnecting it\n",
++				    dig_port->tc_port_name,
++				    tc_port_mode_name(dig_port->tc_mode));
++		icl_tc_phy_disconnect(dig_port);
+ 	}
+ 
++	tc_cold_unblock(dig_port, domain, tc_cold_wref);
++
+ 	drm_dbg_kms(&i915->drm, "Port %s: sanitize mode (%s)\n",
+ 		    dig_port->tc_port_name,
+ 		    tc_port_mode_name(dig_port->tc_mode));
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 75c1522fdae8c..7cbffd9a7be88 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -4019,6 +4019,17 @@ static int intel_compute_sagv_mask(struct intel_atomic_state *state)
+ 			return ret;
+ 	}
+ 
++	if (intel_can_enable_sagv(dev_priv, new_bw_state) !=
++	    intel_can_enable_sagv(dev_priv, old_bw_state)) {
++		ret = intel_atomic_serialize_global_state(&new_bw_state->base);
++		if (ret)
++			return ret;
++	} else if (new_bw_state->pipe_sagv_reject != old_bw_state->pipe_sagv_reject) {
++		ret = intel_atomic_lock_global_state(&new_bw_state->base);
++		if (ret)
++			return ret;
++	}
++
+ 	for_each_new_intel_crtc_in_state(state, crtc,
+ 					 new_crtc_state, i) {
+ 		struct skl_pipe_wm *pipe_wm = &new_crtc_state->wm.skl.optimal;
+@@ -4034,17 +4045,6 @@ static int intel_compute_sagv_mask(struct intel_atomic_state *state)
+ 			intel_can_enable_sagv(dev_priv, new_bw_state);
+ 	}
+ 
+-	if (intel_can_enable_sagv(dev_priv, new_bw_state) !=
+-	    intel_can_enable_sagv(dev_priv, old_bw_state)) {
+-		ret = intel_atomic_serialize_global_state(&new_bw_state->base);
+-		if (ret)
+-			return ret;
+-	} else if (new_bw_state->pipe_sagv_reject != old_bw_state->pipe_sagv_reject) {
+-		ret = intel_atomic_lock_global_state(&new_bw_state->base);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index e3ed52d96f423..3e61184e194c9 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -538,9 +538,11 @@ int vc4_crtc_disable_at_boot(struct drm_crtc *crtc)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = pm_runtime_put(&vc4_hdmi->pdev->dev);
+-	if (ret)
+-		return ret;
++	/*
++	 * post_crtc_powerdown will have called pm_runtime_put, so we
++	 * don't need it here otherwise we'll get the reference counting
++	 * wrong.
++	 */
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/host1x/syncpt.c b/drivers/gpu/host1x/syncpt.c
+index d198a10848c6b..a89a408182e60 100644
+--- a/drivers/gpu/host1x/syncpt.c
++++ b/drivers/gpu/host1x/syncpt.c
+@@ -225,27 +225,12 @@ int host1x_syncpt_wait(struct host1x_syncpt *sp, u32 thresh, long timeout,
+ 	void *ref;
+ 	struct host1x_waitlist *waiter;
+ 	int err = 0, check_count = 0;
+-	u32 val;
+ 
+ 	if (value)
+-		*value = 0;
+-
+-	/* first check cache */
+-	if (host1x_syncpt_is_expired(sp, thresh)) {
+-		if (value)
+-			*value = host1x_syncpt_load(sp);
++		*value = host1x_syncpt_load(sp);
+ 
++	if (host1x_syncpt_is_expired(sp, thresh))
+ 		return 0;
+-	}
+-
+-	/* try to read from register */
+-	val = host1x_hw_syncpt_load(sp->host, sp);
+-	if (host1x_syncpt_is_expired(sp, thresh)) {
+-		if (value)
+-			*value = val;
+-
+-		goto done;
+-	}
+ 
+ 	if (!timeout) {
+ 		err = -EAGAIN;
+diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
+index 3501a3ead4ba6..3ae961986fc31 100644
+--- a/drivers/hwmon/hwmon.c
++++ b/drivers/hwmon/hwmon.c
+@@ -214,12 +214,14 @@ static int hwmon_thermal_add_sensor(struct device *dev, int index)
+ 
+ 	tzd = devm_thermal_zone_of_sensor_register(dev, index, tdata,
+ 						   &hwmon_thermal_ops);
+-	/*
+-	 * If CONFIG_THERMAL_OF is disabled, this returns -ENODEV,
+-	 * so ignore that error but forward any other error.
+-	 */
+-	if (IS_ERR(tzd) && (PTR_ERR(tzd) != -ENODEV))
+-		return PTR_ERR(tzd);
++	if (IS_ERR(tzd)) {
++		if (PTR_ERR(tzd) != -ENODEV)
++			return PTR_ERR(tzd);
++		dev_info(dev, "temp%d_input not attached to any thermal zone\n",
++			 index + 1);
++		devm_kfree(dev, tdata);
++		return 0;
++	}
+ 
+ 	err = devm_add_action(dev, hwmon_thermal_remove_sensor, &tdata->node);
+ 	if (err)
+diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c
+index b0678c351e829..c3a2b4c0b3b26 100644
+--- a/drivers/iio/accel/bmc150-accel-core.c
++++ b/drivers/iio/accel/bmc150-accel-core.c
+@@ -1783,11 +1783,14 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Unable to register iio device\n");
+-		goto err_trigger_unregister;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(dev);
++	pm_runtime_disable(dev);
+ err_trigger_unregister:
+ 	bmc150_accel_unregister_triggers(data, BMC150_ACCEL_TRIGGERS - 1);
+ err_buffer_cleanup:
+diff --git a/drivers/iio/accel/fxls8962af-core.c b/drivers/iio/accel/fxls8962af-core.c
+index 32989d91b9829..f7fd9e046588b 100644
+--- a/drivers/iio/accel/fxls8962af-core.c
++++ b/drivers/iio/accel/fxls8962af-core.c
+@@ -173,12 +173,20 @@ struct fxls8962af_data {
+ 	u16 upper_thres;
+ };
+ 
+-const struct regmap_config fxls8962af_regmap_conf = {
++const struct regmap_config fxls8962af_i2c_regmap_conf = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+ 	.max_register = FXLS8962AF_MAX_REG,
+ };
+-EXPORT_SYMBOL_GPL(fxls8962af_regmap_conf);
++EXPORT_SYMBOL_GPL(fxls8962af_i2c_regmap_conf);
++
++const struct regmap_config fxls8962af_spi_regmap_conf = {
++	.reg_bits = 8,
++	.pad_bits = 8,
++	.val_bits = 8,
++	.max_register = FXLS8962AF_MAX_REG,
++};
++EXPORT_SYMBOL_GPL(fxls8962af_spi_regmap_conf);
+ 
+ enum {
+ 	fxls8962af_idx_x,
+diff --git a/drivers/iio/accel/fxls8962af-i2c.c b/drivers/iio/accel/fxls8962af-i2c.c
+index cfb004b204559..6bde9891effbf 100644
+--- a/drivers/iio/accel/fxls8962af-i2c.c
++++ b/drivers/iio/accel/fxls8962af-i2c.c
+@@ -18,7 +18,7 @@ static int fxls8962af_probe(struct i2c_client *client)
+ {
+ 	struct regmap *regmap;
+ 
+-	regmap = devm_regmap_init_i2c(client, &fxls8962af_regmap_conf);
++	regmap = devm_regmap_init_i2c(client, &fxls8962af_i2c_regmap_conf);
+ 	if (IS_ERR(regmap)) {
+ 		dev_err(&client->dev, "Failed to initialize i2c regmap\n");
+ 		return PTR_ERR(regmap);
+diff --git a/drivers/iio/accel/fxls8962af-spi.c b/drivers/iio/accel/fxls8962af-spi.c
+index 57108d3d480b6..6f4dff3238d3c 100644
+--- a/drivers/iio/accel/fxls8962af-spi.c
++++ b/drivers/iio/accel/fxls8962af-spi.c
+@@ -18,7 +18,7 @@ static int fxls8962af_probe(struct spi_device *spi)
+ {
+ 	struct regmap *regmap;
+ 
+-	regmap = devm_regmap_init_spi(spi, &fxls8962af_regmap_conf);
++	regmap = devm_regmap_init_spi(spi, &fxls8962af_spi_regmap_conf);
+ 	if (IS_ERR(regmap)) {
+ 		dev_err(&spi->dev, "Failed to initialize spi regmap\n");
+ 		return PTR_ERR(regmap);
+diff --git a/drivers/iio/accel/fxls8962af.h b/drivers/iio/accel/fxls8962af.h
+index b67572c3ef069..9cbe98c3ba9a2 100644
+--- a/drivers/iio/accel/fxls8962af.h
++++ b/drivers/iio/accel/fxls8962af.h
+@@ -17,6 +17,7 @@ int fxls8962af_core_probe(struct device *dev, struct regmap *regmap, int irq);
+ int fxls8962af_core_remove(struct device *dev);
+ 
+ extern const struct dev_pm_ops fxls8962af_pm_ops;
+-extern const struct regmap_config fxls8962af_regmap_conf;
++extern const struct regmap_config fxls8962af_i2c_regmap_conf;
++extern const struct regmap_config fxls8962af_spi_regmap_conf;
+ 
+ #endif				/* _FXLS8962AF_H_ */
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index 24c9387c29687..ba6c8ca488b1a 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -1589,11 +1589,14 @@ static int kxcjk1013_probe(struct i2c_client *client,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "unable to register iio device\n");
+-		goto err_buffer_cleanup;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ err_trigger_unregister:
+diff --git a/drivers/iio/accel/mma9551.c b/drivers/iio/accel/mma9551.c
+index 4c359fb054801..c53a3398b14c4 100644
+--- a/drivers/iio/accel/mma9551.c
++++ b/drivers/iio/accel/mma9551.c
+@@ -495,11 +495,14 @@ static int mma9551_probe(struct i2c_client *client,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "unable to register iio device\n");
+-		goto out_poweroff;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ out_poweroff:
+ 	mma9551_set_device_state(client, false);
+ 
+diff --git a/drivers/iio/accel/mma9553.c b/drivers/iio/accel/mma9553.c
+index ba3ecb3b57dcd..1599b75724d4f 100644
+--- a/drivers/iio/accel/mma9553.c
++++ b/drivers/iio/accel/mma9553.c
+@@ -1134,12 +1134,15 @@ static int mma9553_probe(struct i2c_client *client,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "unable to register iio device\n");
+-		goto out_poweroff;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	dev_dbg(&indio_dev->dev, "Registered device %s\n", name);
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ out_poweroff:
+ 	mma9551_set_device_state(client, false);
+ 	return ret;
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index e45c600fccc0b..18c154afbd7ac 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -76,7 +76,7 @@
+ #define AD7124_CONFIG_REF_SEL(x)	FIELD_PREP(AD7124_CONFIG_REF_SEL_MSK, x)
+ #define AD7124_CONFIG_PGA_MSK		GENMASK(2, 0)
+ #define AD7124_CONFIG_PGA(x)		FIELD_PREP(AD7124_CONFIG_PGA_MSK, x)
+-#define AD7124_CONFIG_IN_BUFF_MSK	GENMASK(7, 6)
++#define AD7124_CONFIG_IN_BUFF_MSK	GENMASK(6, 5)
+ #define AD7124_CONFIG_IN_BUFF(x)	FIELD_PREP(AD7124_CONFIG_IN_BUFF_MSK, x)
+ 
+ /* AD7124_FILTER_X */
+diff --git a/drivers/iio/adc/men_z188_adc.c b/drivers/iio/adc/men_z188_adc.c
+index 42ea8bc7e7805..adc5ceaef8c93 100644
+--- a/drivers/iio/adc/men_z188_adc.c
++++ b/drivers/iio/adc/men_z188_adc.c
+@@ -103,6 +103,7 @@ static int men_z188_probe(struct mcb_device *dev,
+ 	struct z188_adc *adc;
+ 	struct iio_dev *indio_dev;
+ 	struct resource *mem;
++	int ret;
+ 
+ 	indio_dev = devm_iio_device_alloc(&dev->dev, sizeof(struct z188_adc));
+ 	if (!indio_dev)
+@@ -128,8 +129,14 @@ static int men_z188_probe(struct mcb_device *dev,
+ 	adc->mem = mem;
+ 	mcb_set_drvdata(dev, indio_dev);
+ 
+-	return iio_device_register(indio_dev);
++	ret = iio_device_register(indio_dev);
++	if (ret)
++		goto err_unmap;
++
++	return 0;
+ 
++err_unmap:
++	iounmap(adc->base);
+ err:
+ 	mcb_release_mem(mem);
+ 	return -ENXIO;
+diff --git a/drivers/iio/adc/ti-tsc2046.c b/drivers/iio/adc/ti-tsc2046.c
+index d84ae6b008c1b..e8fc4d01f30b6 100644
+--- a/drivers/iio/adc/ti-tsc2046.c
++++ b/drivers/iio/adc/ti-tsc2046.c
+@@ -388,7 +388,7 @@ static int tsc2046_adc_update_scan_mode(struct iio_dev *indio_dev,
+ 	mutex_lock(&priv->slock);
+ 
+ 	size = 0;
+-	for_each_set_bit(ch_idx, active_scan_mask, indio_dev->num_channels) {
++	for_each_set_bit(ch_idx, active_scan_mask, ARRAY_SIZE(priv->l)) {
+ 		size += tsc2046_adc_group_set_layout(priv, group, ch_idx);
+ 		tsc2046_adc_group_set_cmd(priv, group, ch_idx);
+ 		group++;
+@@ -548,7 +548,7 @@ static int tsc2046_adc_setup_spi_msg(struct tsc2046_adc_priv *priv)
+ 	 * enabled.
+ 	 */
+ 	size = 0;
+-	for (ch_idx = 0; ch_idx < priv->dcfg->num_channels; ch_idx++)
++	for (ch_idx = 0; ch_idx < ARRAY_SIZE(priv->l); ch_idx++)
+ 		size += tsc2046_adc_group_set_layout(priv, ch_idx, ch_idx);
+ 
+ 	priv->tx = devm_kzalloc(&priv->spi->dev, size, GFP_KERNEL);
+diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
+index 17b939a367ad0..81a6d09788bd7 100644
+--- a/drivers/iio/gyro/bmg160_core.c
++++ b/drivers/iio/gyro/bmg160_core.c
+@@ -1188,11 +1188,14 @@ int bmg160_core_probe(struct device *dev, struct regmap *regmap, int irq,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "unable to register iio device\n");
+-		goto err_buffer_cleanup;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	return 0;
+ 
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(dev);
++	pm_runtime_disable(dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ err_trigger_unregister:
+diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c
+index ed129321a14da..f9b4540db1f43 100644
+--- a/drivers/iio/imu/adis16480.c
++++ b/drivers/iio/imu/adis16480.c
+@@ -1403,6 +1403,7 @@ static int adis16480_probe(struct spi_device *spi)
+ {
+ 	const struct spi_device_id *id = spi_get_device_id(spi);
+ 	const struct adis_data *adis16480_data;
++	irq_handler_t trigger_handler = NULL;
+ 	struct iio_dev *indio_dev;
+ 	struct adis16480 *st;
+ 	int ret;
+@@ -1474,8 +1475,12 @@ static int adis16480_probe(struct spi_device *spi)
+ 		st->clk_freq = st->chip_info->int_clk;
+ 	}
+ 
++	/* Only use our trigger handler if burst mode is supported */
++	if (adis16480_data->burst_len)
++		trigger_handler = adis16480_trigger_handler;
++
+ 	ret = devm_adis_setup_buffer_and_trigger(&st->adis, indio_dev,
+-						 adis16480_trigger_handler);
++						 trigger_handler);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/iio/imu/kmx61.c b/drivers/iio/imu/kmx61.c
+index 1dabfd615dabf..f89724481df93 100644
+--- a/drivers/iio/imu/kmx61.c
++++ b/drivers/iio/imu/kmx61.c
+@@ -1385,7 +1385,7 @@ static int kmx61_probe(struct i2c_client *client,
+ 	ret = iio_device_register(data->acc_indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(&client->dev, "Failed to register acc iio device\n");
+-		goto err_buffer_cleanup_mag;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	ret = iio_device_register(data->mag_indio_dev);
+@@ -1398,6 +1398,9 @@ static int kmx61_probe(struct i2c_client *client,
+ 
+ err_iio_unregister_acc:
+ 	iio_device_unregister(data->acc_indio_dev);
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(&client->dev);
++	pm_runtime_disable(&client->dev);
+ err_buffer_cleanup_mag:
+ 	if (client->irq > 0)
+ 		iio_triggered_buffer_cleanup(data->mag_indio_dev);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index f2cbbc756459b..32d9a5e30685b 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -1374,8 +1374,12 @@ static int st_lsm6dsx_read_oneshot(struct st_lsm6dsx_sensor *sensor,
+ 	if (err < 0)
+ 		return err;
+ 
++	/*
++	 * we need to wait for sensor settling time before
++	 * reading data in order to avoid corrupted samples
++	 */
+ 	delay = 1000000000 / sensor->odr;
+-	usleep_range(delay, 2 * delay);
++	usleep_range(3 * delay, 4 * delay);
+ 
+ 	err = st_lsm6dsx_read_locked(hw, addr, &data, sizeof(data));
+ 	if (err < 0)
+diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
+index f96f531753495..3d4d21f979fab 100644
+--- a/drivers/iio/magnetometer/bmc150_magn.c
++++ b/drivers/iio/magnetometer/bmc150_magn.c
+@@ -962,13 +962,14 @@ int bmc150_magn_probe(struct device *dev, struct regmap *regmap,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "unable to register iio device\n");
+-		goto err_disable_runtime_pm;
++		goto err_pm_cleanup;
+ 	}
+ 
+ 	dev_dbg(dev, "Registered device %s\n", name);
+ 	return 0;
+ 
+-err_disable_runtime_pm:
++err_pm_cleanup:
++	pm_runtime_dont_use_autosuspend(dev);
+ 	pm_runtime_disable(dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index a8da4291e7e3b..41ec05c4b0d0e 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -3370,22 +3370,30 @@ err:
+ static int cma_bind_addr(struct rdma_cm_id *id, struct sockaddr *src_addr,
+ 			 const struct sockaddr *dst_addr)
+ {
+-	if (!src_addr || !src_addr->sa_family) {
+-		src_addr = (struct sockaddr *) &id->route.addr.src_addr;
+-		src_addr->sa_family = dst_addr->sa_family;
+-		if (IS_ENABLED(CONFIG_IPV6) &&
+-		    dst_addr->sa_family == AF_INET6) {
+-			struct sockaddr_in6 *src_addr6 = (struct sockaddr_in6 *) src_addr;
+-			struct sockaddr_in6 *dst_addr6 = (struct sockaddr_in6 *) dst_addr;
+-			src_addr6->sin6_scope_id = dst_addr6->sin6_scope_id;
+-			if (ipv6_addr_type(&dst_addr6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
+-				id->route.addr.dev_addr.bound_dev_if = dst_addr6->sin6_scope_id;
+-		} else if (dst_addr->sa_family == AF_IB) {
+-			((struct sockaddr_ib *) src_addr)->sib_pkey =
+-				((struct sockaddr_ib *) dst_addr)->sib_pkey;
+-		}
+-	}
+-	return rdma_bind_addr(id, src_addr);
++	struct sockaddr_storage zero_sock = {};
++
++	if (src_addr && src_addr->sa_family)
++		return rdma_bind_addr(id, src_addr);
++
++	/*
++	 * When the src_addr is not specified, automatically supply an any addr
++	 */
++	zero_sock.ss_family = dst_addr->sa_family;
++	if (IS_ENABLED(CONFIG_IPV6) && dst_addr->sa_family == AF_INET6) {
++		struct sockaddr_in6 *src_addr6 =
++			(struct sockaddr_in6 *)&zero_sock;
++		struct sockaddr_in6 *dst_addr6 =
++			(struct sockaddr_in6 *)dst_addr;
++
++		src_addr6->sin6_scope_id = dst_addr6->sin6_scope_id;
++		if (ipv6_addr_type(&dst_addr6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
++			id->route.addr.dev_addr.bound_dev_if =
++				dst_addr6->sin6_scope_id;
++	} else if (dst_addr->sa_family == AF_IB) {
++		((struct sockaddr_ib *)&zero_sock)->sib_pkey =
++			((struct sockaddr_ib *)dst_addr)->sib_pkey;
++	}
++	return rdma_bind_addr(id, (struct sockaddr *)&zero_sock);
+ }
+ 
+ /*
+diff --git a/drivers/infiniband/hw/qib/qib_sysfs.c b/drivers/infiniband/hw/qib/qib_sysfs.c
+index 0a3b28142c05b..41c272980f91c 100644
+--- a/drivers/infiniband/hw/qib/qib_sysfs.c
++++ b/drivers/infiniband/hw/qib/qib_sysfs.c
+@@ -541,7 +541,7 @@ static struct attribute *port_diagc_attributes[] = {
+ };
+ 
+ static const struct attribute_group port_diagc_group = {
+-	.name = "linkcontrol",
++	.name = "diag_counters",
+ 	.attrs = port_diagc_attributes,
+ };
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index e39709dee179d..be96701cf281e 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -2664,6 +2664,8 @@ static void rtrs_clt_dev_release(struct device *dev)
+ {
+ 	struct rtrs_clt *clt = container_of(dev, struct rtrs_clt, dev);
+ 
++	mutex_destroy(&clt->paths_ev_mutex);
++	mutex_destroy(&clt->paths_mutex);
+ 	kfree(clt);
+ }
+ 
+@@ -2693,6 +2695,8 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
++	clt->dev.class = rtrs_clt_dev_class;
++	clt->dev.release = rtrs_clt_dev_release;
+ 	uuid_gen(&clt->paths_uuid);
+ 	INIT_LIST_HEAD_RCU(&clt->paths_list);
+ 	clt->paths_num = paths_num;
+@@ -2709,53 +2713,51 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
+ 	init_waitqueue_head(&clt->permits_wait);
+ 	mutex_init(&clt->paths_ev_mutex);
+ 	mutex_init(&clt->paths_mutex);
++	device_initialize(&clt->dev);
+ 
+-	clt->dev.class = rtrs_clt_dev_class;
+-	clt->dev.release = rtrs_clt_dev_release;
+ 	err = dev_set_name(&clt->dev, "%s", sessname);
+ 	if (err)
+-		goto err;
++		goto err_put;
++
+ 	/*
+ 	 * Suppress user space notification until
+ 	 * sysfs files are created
+ 	 */
+ 	dev_set_uevent_suppress(&clt->dev, true);
+-	err = device_register(&clt->dev);
+-	if (err) {
+-		put_device(&clt->dev);
+-		goto err;
+-	}
++	err = device_add(&clt->dev);
++	if (err)
++		goto err_put;
+ 
+ 	clt->kobj_paths = kobject_create_and_add("paths", &clt->dev.kobj);
+ 	if (!clt->kobj_paths) {
+ 		err = -ENOMEM;
+-		goto err_dev;
++		goto err_del;
+ 	}
+ 	err = rtrs_clt_create_sysfs_root_files(clt);
+ 	if (err) {
+ 		kobject_del(clt->kobj_paths);
+ 		kobject_put(clt->kobj_paths);
+-		goto err_dev;
++		goto err_del;
+ 	}
+ 	dev_set_uevent_suppress(&clt->dev, false);
+ 	kobject_uevent(&clt->dev.kobj, KOBJ_ADD);
+ 
+ 	return clt;
+-err_dev:
+-	device_unregister(&clt->dev);
+-err:
++err_del:
++	device_del(&clt->dev);
++err_put:
+ 	free_percpu(clt->pcpu_path);
+-	kfree(clt);
++	put_device(&clt->dev);
+ 	return ERR_PTR(err);
+ }
+ 
+ static void free_clt(struct rtrs_clt *clt)
+ {
+-	free_permits(clt);
+ 	free_percpu(clt->pcpu_path);
+-	mutex_destroy(&clt->paths_ev_mutex);
+-	mutex_destroy(&clt->paths_mutex);
+-	/* release callback will free clt in last put */
++
++	/*
++	 * release callback will free clt and destroy mutexes in last put
++	 */
+ 	device_unregister(&clt->dev);
+ }
+ 
+@@ -2872,6 +2874,7 @@ void rtrs_clt_close(struct rtrs_clt *clt)
+ 		rtrs_clt_destroy_sess_files(sess, NULL);
+ 		kobject_put(&sess->kobj);
+ 	}
++	free_permits(clt);
+ 	free_clt(clt);
+ }
+ EXPORT_SYMBOL(rtrs_clt_close);
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index e174e853f8a40..285b766e4e704 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -4047,9 +4047,11 @@ static void srp_remove_one(struct ib_device *device, void *client_data)
+ 		spin_unlock(&host->target_lock);
+ 
+ 		/*
+-		 * Wait for tl_err and target port removal tasks.
++		 * srp_queue_remove_work() queues a call to
++		 * srp_remove_target(). The latter function cancels
++		 * target->tl_err_work so waiting for the remove works to
++		 * finish is sufficient.
+ 		 */
+-		flush_workqueue(system_long_wq);
+ 		flush_workqueue(srp_remove_wq);
+ 
+ 		kfree(host);
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index fc0bed14bfb10..02ca6e5fa0dc7 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -546,6 +546,7 @@ static int mtd_nvmem_add(struct mtd_info *mtd)
+ 	config.stride = 1;
+ 	config.read_only = true;
+ 	config.root_only = true;
++	config.ignore_wp = true;
+ 	config.no_of_node = !of_device_is_compatible(node, "nvmem-cells");
+ 	config.priv = mtd;
+ 
+@@ -830,6 +831,7 @@ static struct nvmem_device *mtd_otp_nvmem_register(struct mtd_info *mtd,
+ 	config.owner = THIS_MODULE;
+ 	config.type = NVMEM_TYPE_OTP;
+ 	config.root_only = true;
++	config.ignore_wp = true;
+ 	config.reg_read = reg_read;
+ 	config.size = size;
+ 	config.of_node = np;
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 125dafe1db7ee..4ce596daeaae3 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -100,6 +100,9 @@ MODULE_LICENSE("GPL");
+ MODULE_FIRMWARE(FW_FILE_NAME_E1);
+ MODULE_FIRMWARE(FW_FILE_NAME_E1H);
+ MODULE_FIRMWARE(FW_FILE_NAME_E2);
++MODULE_FIRMWARE(FW_FILE_NAME_E1_V15);
++MODULE_FIRMWARE(FW_FILE_NAME_E1H_V15);
++MODULE_FIRMWARE(FW_FILE_NAME_E2_V15);
+ 
+ int bnx2x_num_queues;
+ module_param_named(num_queues, bnx2x_num_queues, int, 0444);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 7eaf74e5b2929..fab8dd73fa84c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4719,8 +4719,10 @@ static int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, u16 vnic_id)
+ 		return rc;
+ 
+ 	req->vnic_id = cpu_to_le32(vnic->fw_vnic_id);
+-	req->num_mc_entries = cpu_to_le32(vnic->mc_list_count);
+-	req->mc_tbl_addr = cpu_to_le64(vnic->mc_list_mapping);
++	if (vnic->rx_mask & CFA_L2_SET_RX_MASK_REQ_MASK_MCAST) {
++		req->num_mc_entries = cpu_to_le32(vnic->mc_list_count);
++		req->mc_tbl_addr = cpu_to_le64(vnic->mc_list_mapping);
++	}
+ 	req->mask = cpu_to_le32(vnic->rx_mask);
+ 	return hwrm_req_send_silent(bp, req);
+ }
+@@ -7774,6 +7776,19 @@ static int bnxt_map_fw_health_regs(struct bnxt *bp)
+ 	return 0;
+ }
+ 
++static void bnxt_remap_fw_health_regs(struct bnxt *bp)
++{
++	if (!bp->fw_health)
++		return;
++
++	if (bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY) {
++		bp->fw_health->status_reliable = true;
++		bp->fw_health->resets_reliable = true;
++	} else {
++		bnxt_try_map_fw_health_reg(bp);
++	}
++}
++
+ static int bnxt_hwrm_error_recovery_qcfg(struct bnxt *bp)
+ {
+ 	struct bnxt_fw_health *fw_health = bp->fw_health;
+@@ -8623,6 +8638,9 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
+ 	vnic->uc_filter_count = 1;
+ 
+ 	vnic->rx_mask = 0;
++	if (test_bit(BNXT_STATE_HALF_OPEN, &bp->state))
++		goto skip_rx_mask;
++
+ 	if (bp->dev->flags & IFF_BROADCAST)
+ 		vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_BCAST;
+ 
+@@ -8632,7 +8650,7 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
+ 	if (bp->dev->flags & IFF_ALLMULTI) {
+ 		vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
+ 		vnic->mc_list_count = 0;
+-	} else {
++	} else if (bp->dev->flags & IFF_MULTICAST) {
+ 		u32 mask = 0;
+ 
+ 		bnxt_mc_list_updated(bp, &mask);
+@@ -8643,6 +8661,7 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
+ 	if (rc)
+ 		goto err_out;
+ 
++skip_rx_mask:
+ 	rc = bnxt_hwrm_set_coal(bp);
+ 	if (rc)
+ 		netdev_warn(bp->dev, "HWRM set coalescing failure rc: %x\n",
+@@ -9830,8 +9849,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
+ 		resc_reinit = true;
+ 	if (flags & FUNC_DRV_IF_CHANGE_RESP_FLAGS_HOT_FW_RESET_DONE)
+ 		fw_reset = true;
+-	else if (bp->fw_health && !bp->fw_health->status_reliable)
+-		bnxt_try_map_fw_health_reg(bp);
++	else
++		bnxt_remap_fw_health_regs(bp);
+ 
+ 	if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
+ 		netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
+@@ -10310,13 +10329,15 @@ int bnxt_half_open_nic(struct bnxt *bp)
+ 		goto half_open_err;
+ 	}
+ 
+-	rc = bnxt_alloc_mem(bp, false);
++	rc = bnxt_alloc_mem(bp, true);
+ 	if (rc) {
+ 		netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
+ 		goto half_open_err;
+ 	}
+-	rc = bnxt_init_nic(bp, false);
++	set_bit(BNXT_STATE_HALF_OPEN, &bp->state);
++	rc = bnxt_init_nic(bp, true);
+ 	if (rc) {
++		clear_bit(BNXT_STATE_HALF_OPEN, &bp->state);
+ 		netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc);
+ 		goto half_open_err;
+ 	}
+@@ -10324,7 +10345,7 @@ int bnxt_half_open_nic(struct bnxt *bp)
+ 
+ half_open_err:
+ 	bnxt_free_skbs(bp);
+-	bnxt_free_mem(bp, false);
++	bnxt_free_mem(bp, true);
+ 	dev_close(bp->dev);
+ 	return rc;
+ }
+@@ -10334,9 +10355,10 @@ half_open_err:
+  */
+ void bnxt_half_close_nic(struct bnxt *bp)
+ {
+-	bnxt_hwrm_resource_free(bp, false, false);
++	bnxt_hwrm_resource_free(bp, false, true);
+ 	bnxt_free_skbs(bp);
+-	bnxt_free_mem(bp, false);
++	bnxt_free_mem(bp, true);
++	clear_bit(BNXT_STATE_HALF_OPEN, &bp->state);
+ }
+ 
+ void bnxt_reenable_sriov(struct bnxt *bp)
+@@ -10752,7 +10774,7 @@ static void bnxt_set_rx_mode(struct net_device *dev)
+ 	if (dev->flags & IFF_ALLMULTI) {
+ 		mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
+ 		vnic->mc_list_count = 0;
+-	} else {
++	} else if (dev->flags & IFF_MULTICAST) {
+ 		mc_update = bnxt_mc_list_updated(bp, &mask);
+ 	}
+ 
+@@ -10820,9 +10842,10 @@ skip_uc:
+ 	    !bnxt_promisc_ok(bp))
+ 		vnic->rx_mask &= ~CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS;
+ 	rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0);
+-	if (rc && vnic->mc_list_count) {
++	if (rc && (vnic->rx_mask & CFA_L2_SET_RX_MASK_REQ_MASK_MCAST)) {
+ 		netdev_info(bp->dev, "Failed setting MC filters rc: %d, turning on ALL_MCAST mode\n",
+ 			    rc);
++		vnic->rx_mask &= ~CFA_L2_SET_RX_MASK_REQ_MASK_MCAST;
+ 		vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
+ 		vnic->mc_list_count = 0;
+ 		rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 6bacd5fae6ba5..2846d14756671 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1919,6 +1919,7 @@ struct bnxt {
+ #define BNXT_STATE_RECOVER		12
+ #define BNXT_STATE_FW_NON_FATAL_COND	13
+ #define BNXT_STATE_FW_ACTIVATE_RESET	14
++#define BNXT_STATE_HALF_OPEN		15	/* For offline ethtool tests */
+ 
+ #define BNXT_NO_FW_ACCESS(bp)					\
+ 	(test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) ||	\
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+index 951c4c569a9b3..61e0373079316 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+@@ -366,6 +366,16 @@ bnxt_dl_livepatch_report_err(struct bnxt *bp, struct netlink_ext_ack *extack,
+ 	}
+ }
+ 
++/* Live patch status in NVM */
++#define BNXT_LIVEPATCH_NOT_INSTALLED	0
++#define BNXT_LIVEPATCH_INSTALLED	FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL
++#define BNXT_LIVEPATCH_REMOVED		FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE
++#define BNXT_LIVEPATCH_MASK		(FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL | \
++					 FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE)
++#define BNXT_LIVEPATCH_ACTIVATED	BNXT_LIVEPATCH_MASK
++
++#define BNXT_LIVEPATCH_STATE(flags)	((flags) & BNXT_LIVEPATCH_MASK)
++
+ static int
+ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
+ {
+@@ -373,8 +383,9 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
+ 	struct hwrm_fw_livepatch_query_input *query_req;
+ 	struct hwrm_fw_livepatch_output *patch_resp;
+ 	struct hwrm_fw_livepatch_input *patch_req;
++	u16 flags, live_patch_state;
++	bool activated = false;
+ 	u32 installed = 0;
+-	u16 flags;
+ 	u8 target;
+ 	int rc;
+ 
+@@ -393,7 +404,6 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
+ 		hwrm_req_drop(bp, query_req);
+ 		return rc;
+ 	}
+-	patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_ACTIVATE;
+ 	patch_req->loadtype = FW_LIVEPATCH_REQ_LOADTYPE_NVM_INSTALL;
+ 	patch_resp = hwrm_req_hold(bp, patch_req);
+ 
+@@ -406,12 +416,20 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
+ 		}
+ 
+ 		flags = le16_to_cpu(query_resp->status_flags);
+-		if (~flags & FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL)
++		live_patch_state = BNXT_LIVEPATCH_STATE(flags);
++
++		if (live_patch_state == BNXT_LIVEPATCH_NOT_INSTALLED)
+ 			continue;
+-		if ((flags & FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE) &&
+-		    !strncmp(query_resp->active_ver, query_resp->install_ver,
+-			     sizeof(query_resp->active_ver)))
++
++		if (live_patch_state == BNXT_LIVEPATCH_ACTIVATED) {
++			activated = true;
+ 			continue;
++		}
++
++		if (live_patch_state == BNXT_LIVEPATCH_INSTALLED)
++			patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_ACTIVATE;
++		else if (live_patch_state == BNXT_LIVEPATCH_REMOVED)
++			patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_DEACTIVATE;
+ 
+ 		patch_req->fw_target = target;
+ 		rc = hwrm_req_send(bp, patch_req);
+@@ -423,8 +441,13 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
+ 	}
+ 
+ 	if (!rc && !installed) {
+-		NL_SET_ERR_MSG_MOD(extack, "No live patches found");
+-		rc = -ENOENT;
++		if (activated) {
++			NL_SET_ERR_MSG_MOD(extack, "Live patch already activated");
++			rc = -EEXIST;
++		} else {
++			NL_SET_ERR_MSG_MOD(extack, "No live patches found");
++			rc = -ENOENT;
++		}
+ 	}
+ 	hwrm_req_drop(bp, query_req);
+ 	hwrm_req_drop(bp, patch_req);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 7307df49c1313..f147ad5a65315 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -25,6 +25,7 @@
+ #include "bnxt_hsi.h"
+ #include "bnxt.h"
+ #include "bnxt_hwrm.h"
++#include "bnxt_ulp.h"
+ #include "bnxt_xdp.h"
+ #include "bnxt_ptp.h"
+ #include "bnxt_ethtool.h"
+@@ -1944,6 +1945,9 @@ static int bnxt_get_fecparam(struct net_device *dev,
+ 	case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS272_IEEE_ACTIVE:
+ 		fec->active_fec |= ETHTOOL_FEC_LLRS;
+ 		break;
++	case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_NONE_ACTIVE:
++		fec->active_fec |= ETHTOOL_FEC_OFF;
++		break;
+ 	}
+ 	return 0;
+ }
+@@ -3429,7 +3433,7 @@ static int bnxt_run_loopback(struct bnxt *bp)
+ 	if (!skb)
+ 		return -ENOMEM;
+ 	data = skb_put(skb, pkt_size);
+-	eth_broadcast_addr(data);
++	ether_addr_copy(&data[i], bp->dev->dev_addr);
+ 	i += ETH_ALEN;
+ 	ether_addr_copy(&data[i], bp->dev->dev_addr);
+ 	i += ETH_ALEN;
+@@ -3523,9 +3527,12 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
+ 	if (!offline) {
+ 		bnxt_run_fw_tests(bp, test_mask, &test_results);
+ 	} else {
+-		rc = bnxt_close_nic(bp, false, false);
+-		if (rc)
++		bnxt_ulp_stop(bp);
++		rc = bnxt_close_nic(bp, true, false);
++		if (rc) {
++			bnxt_ulp_start(bp, rc);
+ 			return;
++		}
+ 		bnxt_run_fw_tests(bp, test_mask, &test_results);
+ 
+ 		buf[BNXT_MACLPBK_TEST_IDX] = 1;
+@@ -3535,6 +3542,7 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
+ 		if (rc) {
+ 			bnxt_hwrm_mac_loopback(bp, false);
+ 			etest->flags |= ETH_TEST_FL_FAILED;
++			bnxt_ulp_start(bp, rc);
+ 			return;
+ 		}
+ 		if (bnxt_run_loopback(bp))
+@@ -3560,7 +3568,8 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
+ 		}
+ 		bnxt_hwrm_phy_loopback(bp, false, false);
+ 		bnxt_half_close_nic(bp);
+-		rc = bnxt_open_nic(bp, false, true);
++		rc = bnxt_open_nic(bp, true, true);
++		bnxt_ulp_start(bp, rc);
+ 	}
+ 	if (rc || bnxt_test_irq(bp)) {
+ 		buf[BNXT_IRQ_TEST_IDX] = 1;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
+index 8171f4912fa01..3a0eeb3737767 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
+@@ -595,18 +595,24 @@ timeout_abort:
+ 
+ 		/* Last byte of resp contains valid bit */
+ 		valid = ((u8 *)ctx->resp) + len - 1;
+-		for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; j++) {
++		for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; ) {
+ 			/* make sure we read from updated DMA memory */
+ 			dma_rmb();
+ 			if (*valid)
+ 				break;
+-			usleep_range(1, 5);
++			if (j < 10) {
++				udelay(1);
++				j++;
++			} else {
++				usleep_range(20, 30);
++				j += 20;
++			}
+ 		}
+ 
+ 		if (j >= HWRM_VALID_BIT_DELAY_USEC) {
+ 			if (!(ctx->flags & BNXT_HWRM_CTX_SILENT))
+ 				netdev_err(bp->dev, "Error (timeout: %u) msg {0x%x 0x%x} len:%d v:%d\n",
+-					   hwrm_total_timeout(i),
++					   hwrm_total_timeout(i) + j,
+ 					   le16_to_cpu(ctx->req->req_type),
+ 					   le16_to_cpu(ctx->req->seq_id), len,
+ 					   *valid);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
+index 9a9fc4e8041b6..380ef69afb51b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
+@@ -94,7 +94,7 @@ static inline unsigned int hwrm_total_timeout(unsigned int n)
+ }
+ 
+ 
+-#define HWRM_VALID_BIT_DELAY_USEC	150
++#define HWRM_VALID_BIT_DELAY_USEC	50000
+ 
+ static inline bool bnxt_cfa_hwrm_message(u16 req_type)
+ {
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index d5d33325a413e..1c6bc69197a53 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -5919,10 +5919,14 @@ static ssize_t failover_store(struct device *dev, struct device_attribute *attr,
+ 		   be64_to_cpu(session_token));
+ 	rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address,
+ 				H_SESSION_ERR_DETECTED, session_token, 0, 0);
+-	if (rc)
++	if (rc) {
+ 		netdev_err(netdev,
+ 			   "H_VIOCTL initiated failover failed, rc %ld\n",
+ 			   rc);
++		goto last_resort;
++	}
++
++	return count;
+ 
+ last_resort:
+ 	netdev_dbg(netdev, "Trying to send CRQ_CMD, the last resort\n");
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index d3af1457fa0dc..1eddb99c4e9e1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5372,15 +5372,7 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 	/* There is no need to reset BW when mqprio mode is on.  */
+ 	if (pf->flags & I40E_FLAG_TC_MQPRIO)
+ 		return 0;
+-
+-	if (!vsi->mqprio_qopt.qopt.hw) {
+-		if (pf->flags & I40E_FLAG_DCB_ENABLED)
+-			goto skip_reset;
+-
+-		if (IS_ENABLED(CONFIG_I40E_DCB) &&
+-		    i40e_dcb_hw_get_num_tc(&pf->hw) == 1)
+-			goto skip_reset;
+-
++	if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
+ 		ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
+ 		if (ret)
+ 			dev_info(&pf->pdev->dev,
+@@ -5388,8 +5380,6 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 				 vsi->seid);
+ 		return ret;
+ 	}
+-
+-skip_reset:
+ 	memset(&bw_data, 0, sizeof(bw_data));
+ 	bw_data.tc_valid_bits = enabled_tc;
+ 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index b3e1fc6a0a8eb..b067dd9c71e78 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -280,7 +280,6 @@ enum ice_pf_state {
+ 	ICE_VFLR_EVENT_PENDING,
+ 	ICE_FLTR_OVERFLOW_PROMISC,
+ 	ICE_VF_DIS,
+-	ICE_VF_DEINIT_IN_PROGRESS,
+ 	ICE_CFG_BUSY,
+ 	ICE_SERVICE_SCHED,
+ 	ICE_SERVICE_DIS,
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index e9a0159cb8b92..ec8c980f73421 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -3319,7 +3319,7 @@ ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
+ 
+ 	if (fec == ICE_FEC_AUTO && ice_fw_supports_link_override(hw) &&
+ 	    !ice_fw_supports_report_dflt_cfg(hw)) {
+-		struct ice_link_default_override_tlv tlv;
++		struct ice_link_default_override_tlv tlv = { 0 };
+ 
+ 		status = ice_get_link_default_override(&tlv, pi);
+ 		if (status)
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5b4be432b60ce..8ee778aaa8000 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1772,7 +1772,9 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
+ 				 * reset, so print the event prior to reset.
+ 				 */
+ 				ice_print_vf_rx_mdd_event(vf);
++				mutex_lock(&pf->vf[i].cfg_lock);
+ 				ice_reset_vf(&pf->vf[i], false);
++				mutex_unlock(&pf->vf[i].cfg_lock);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 442b031b0edc0..fdb9c4b367588 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -1121,9 +1121,12 @@ exit:
+ static int ice_ptp_adjtime_nonatomic(struct ptp_clock_info *info, s64 delta)
+ {
+ 	struct timespec64 now, then;
++	int ret;
+ 
+ 	then = ns_to_timespec64(delta);
+-	ice_ptp_gettimex64(info, &now, NULL);
++	ret = ice_ptp_gettimex64(info, &now, NULL);
++	if (ret)
++		return ret;
+ 	now = timespec64_add(now, then);
+ 
+ 	return ice_ptp_settime64(info, (const struct timespec64 *)&now);
+diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+index 25cca5c4ae575..275a99f62b285 100644
+--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+@@ -711,7 +711,7 @@ ice_tc_set_port(struct flow_match_ports match,
+ 			fltr->flags |= ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT;
+ 		else
+ 			fltr->flags |= ICE_TC_FLWR_FIELD_DEST_L4_PORT;
+-		fltr->flags |= ICE_TC_FLWR_FIELD_DEST_L4_PORT;
++
+ 		headers->l4_key.dst_port = match.key->dst;
+ 		headers->l4_mask.dst_port = match.mask->dst;
+ 	}
+@@ -720,7 +720,7 @@ ice_tc_set_port(struct flow_match_ports match,
+ 			fltr->flags |= ICE_TC_FLWR_FIELD_ENC_SRC_L4_PORT;
+ 		else
+ 			fltr->flags |= ICE_TC_FLWR_FIELD_SRC_L4_PORT;
+-		fltr->flags |= ICE_TC_FLWR_FIELD_SRC_L4_PORT;
++
+ 		headers->l4_key.src_port = match.key->src;
+ 		headers->l4_mask.src_port = match.mask->src;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 6427e7ec93de6..a12cc305c4619 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -617,8 +617,6 @@ void ice_free_vfs(struct ice_pf *pf)
+ 	struct ice_hw *hw = &pf->hw;
+ 	unsigned int tmp, i;
+ 
+-	set_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state);
+-
+ 	if (!pf->vf)
+ 		return;
+ 
+@@ -636,22 +634,26 @@ void ice_free_vfs(struct ice_pf *pf)
+ 	else
+ 		dev_warn(dev, "VFs are assigned - not disabling SR-IOV\n");
+ 
+-	/* Avoid wait time by stopping all VFs at the same time */
+-	ice_for_each_vf(pf, i)
+-		ice_dis_vf_qs(&pf->vf[i]);
+-
+ 	tmp = pf->num_alloc_vfs;
+ 	pf->num_qps_per_vf = 0;
+ 	pf->num_alloc_vfs = 0;
+ 	for (i = 0; i < tmp; i++) {
+-		if (test_bit(ICE_VF_STATE_INIT, pf->vf[i].vf_states)) {
++		struct ice_vf *vf = &pf->vf[i];
++
++		mutex_lock(&vf->cfg_lock);
++
++		ice_dis_vf_qs(vf);
++
++		if (test_bit(ICE_VF_STATE_INIT, vf->vf_states)) {
+ 			/* disable VF qp mappings and set VF disable state */
+-			ice_dis_vf_mappings(&pf->vf[i]);
+-			set_bit(ICE_VF_STATE_DIS, pf->vf[i].vf_states);
+-			ice_free_vf_res(&pf->vf[i]);
++			ice_dis_vf_mappings(vf);
++			set_bit(ICE_VF_STATE_DIS, vf->vf_states);
++			ice_free_vf_res(vf);
+ 		}
+ 
+-		mutex_destroy(&pf->vf[i].cfg_lock);
++		mutex_unlock(&vf->cfg_lock);
++
++		mutex_destroy(&vf->cfg_lock);
+ 	}
+ 
+ 	if (ice_sriov_free_msix_res(pf))
+@@ -687,7 +689,6 @@ void ice_free_vfs(struct ice_pf *pf)
+ 				i);
+ 
+ 	clear_bit(ICE_VF_DIS, pf->state);
+-	clear_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state);
+ 	clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags);
+ }
+ 
+@@ -1613,6 +1614,8 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
+ 	ice_for_each_vf(pf, v) {
+ 		vf = &pf->vf[v];
+ 
++		mutex_lock(&vf->cfg_lock);
++
+ 		vf->driver_caps = 0;
+ 		ice_vc_set_default_allowlist(vf);
+ 
+@@ -1627,6 +1630,8 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
+ 		ice_vf_pre_vsi_rebuild(vf);
+ 		ice_vf_rebuild_vsi(vf);
+ 		ice_vf_post_vsi_rebuild(vf);
++
++		mutex_unlock(&vf->cfg_lock);
+ 	}
+ 
+ 	if (ice_is_eswitch_mode_switchdev(pf))
+@@ -1677,6 +1682,8 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
+ 	u32 reg;
+ 	int i;
+ 
++	lockdep_assert_held(&vf->cfg_lock);
++
+ 	dev = ice_pf_to_dev(pf);
+ 
+ 	if (test_bit(ICE_VF_RESETS_DISABLED, pf->state)) {
+@@ -2176,9 +2183,12 @@ void ice_process_vflr_event(struct ice_pf *pf)
+ 		bit_idx = (hw->func_caps.vf_base_id + vf_id) % 32;
+ 		/* read GLGEN_VFLRSTAT register to find out the flr VFs */
+ 		reg = rd32(hw, GLGEN_VFLRSTAT(reg_idx));
+-		if (reg & BIT(bit_idx))
++		if (reg & BIT(bit_idx)) {
+ 			/* GLGEN_VFLRSTAT bit will be cleared in ice_reset_vf */
++			mutex_lock(&vf->cfg_lock);
+ 			ice_reset_vf(vf, true);
++			mutex_unlock(&vf->cfg_lock);
++		}
+ 	}
+ }
+ 
+@@ -2255,7 +2265,9 @@ ice_vf_lan_overflow_event(struct ice_pf *pf, struct ice_rq_event_info *event)
+ 	if (!vf)
+ 		return;
+ 
++	mutex_lock(&vf->cfg_lock);
+ 	ice_vc_reset_vf(vf);
++	mutex_unlock(&vf->cfg_lock);
+ }
+ 
+ /**
+@@ -4651,10 +4663,6 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
+ 	struct device *dev;
+ 	int err = 0;
+ 
+-	/* if de-init is underway, don't process messages from VF */
+-	if (test_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state))
+-		return;
+-
+ 	dev = ice_pf_to_dev(pf);
+ 	if (ice_validate_vf_id(pf, vf_id)) {
+ 		err = -EINVAL;
+diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
+index bb14fa2241a36..0636783f7bc03 100644
+--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
+@@ -2700,6 +2700,16 @@ MODULE_DEVICE_TABLE(of, mv643xx_eth_shared_ids);
+ 
+ static struct platform_device *port_platdev[3];
+ 
++static void mv643xx_eth_shared_of_remove(void)
++{
++	int n;
++
++	for (n = 0; n < 3; n++) {
++		platform_device_del(port_platdev[n]);
++		port_platdev[n] = NULL;
++	}
++}
++
+ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+ 					  struct device_node *pnp)
+ {
+@@ -2736,7 +2746,9 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+ 		return -EINVAL;
+ 	}
+ 
+-	of_get_mac_address(pnp, ppd.mac_addr);
++	ret = of_get_mac_address(pnp, ppd.mac_addr);
++	if (ret)
++		return ret;
+ 
+ 	mv643xx_eth_property(pnp, "tx-queue-size", ppd.tx_queue_size);
+ 	mv643xx_eth_property(pnp, "tx-sram-addr", ppd.tx_sram_addr);
+@@ -2800,21 +2812,13 @@ static int mv643xx_eth_shared_of_probe(struct platform_device *pdev)
+ 		ret = mv643xx_eth_shared_of_add_port(pdev, pnp);
+ 		if (ret) {
+ 			of_node_put(pnp);
++			mv643xx_eth_shared_of_remove();
+ 			return ret;
+ 		}
+ 	}
+ 	return 0;
+ }
+ 
+-static void mv643xx_eth_shared_of_remove(void)
+-{
+-	int n;
+-
+-	for (n = 0; n < 3; n++) {
+-		platform_device_del(port_platdev[n]);
+-		port_platdev[n] = NULL;
+-	}
+-}
+ #else
+ static inline int mv643xx_eth_shared_of_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c
+index 60952b33b5688..d2333310b56fe 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c
+@@ -60,37 +60,31 @@ static int parse_tunnel(struct mlx5e_priv *priv,
+ 			void *headers_v)
+ {
+ 	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+-	struct flow_match_enc_keyid enc_keyid;
+ 	struct flow_match_mpls match;
+ 	void *misc2_c;
+ 	void *misc2_v;
+ 
+-	misc2_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+-			       misc_parameters_2);
+-	misc2_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+-			       misc_parameters_2);
+-
+-	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS))
+-		return 0;
+-
+-	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
+-		return 0;
+-
+-	flow_rule_match_enc_keyid(rule, &enc_keyid);
+-
+-	if (!enc_keyid.mask->keyid)
+-		return 0;
+-
+ 	if (!MLX5_CAP_ETH(priv->mdev, tunnel_stateless_mpls_over_udp) &&
+ 	    !(MLX5_CAP_GEN(priv->mdev, flex_parser_protocols) & MLX5_FLEX_PROTO_CW_MPLS_UDP))
+ 		return -EOPNOTSUPP;
+ 
++	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
++		return -EOPNOTSUPP;
++
++	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS))
++		return 0;
++
+ 	flow_rule_match_mpls(rule, &match);
+ 
+ 	/* Only support matching the first LSE */
+ 	if (match.mask->used_lses != 1)
+ 		return -EOPNOTSUPP;
+ 
++	misc2_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
++			       misc_parameters_2);
++	misc2_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
++			       misc_parameters_2);
++
+ 	MLX5_SET(fte_match_set_misc2, misc2_c,
+ 		 outer_first_mpls_over_udp.mpls_label,
+ 		 match.mask->ls[0].mpls_label);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index c2ea5fad48ddf..58c72142804a5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1752,7 +1752,7 @@ static int mlx5e_get_module_eeprom(struct net_device *netdev,
+ 		if (size_read < 0) {
+ 			netdev_err(priv->netdev, "%s: mlx5_query_eeprom failed:0x%x\n",
+ 				   __func__, size_read);
+-			return 0;
++			return size_read;
+ 		}
+ 
+ 		i += size_read;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index bf25d0aa74c3b..ea0968ea88d6a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1348,7 +1348,8 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 	}
+ 
+ 	/* True when explicitly set via priv flag, or XDP prog is loaded */
+-	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state))
++	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state) ||
++	    get_cqe_tls_offload(cqe))
+ 		goto csum_unnecessary;
+ 
+ 	/* CQE csum doesn't cover padding octets in short ethernet
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+index 8c9163d2c6468..08a75654f5f18 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+@@ -334,6 +334,7 @@ void mlx5e_self_test(struct net_device *ndev, struct ethtool_test *etest,
+ 		netdev_info(ndev, "\t[%d] %s start..\n", i, st.name);
+ 		buf[count] = st.st_func(priv);
+ 		netdev_info(ndev, "\t[%d] %s end: result(%lld)\n", i, st.name, buf[count]);
++		count++;
+ 	}
+ 
+ 	mutex_unlock(&priv->state_lock);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index eae37934cdf70..308733cbaf775 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -3427,6 +3427,18 @@ actions_match_supported(struct mlx5e_priv *priv,
+ 		return false;
+ 	}
+ 
++	if (!(~actions &
++	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
++		NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action");
++		return false;
++	}
++
++	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
++	    actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {
++		NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
++		return false;
++	}
++
+ 	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
+ 	    actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index ccb66428aeb5b..52b973e244189 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -2838,10 +2838,6 @@ bool mlx5_esw_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
+ 	if (!MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
+ 		return false;
+ 
+-	if (mlx5_core_is_ecpf_esw_manager(esw->dev) ||
+-	    mlx5_ecpf_vport_exists(esw->dev))
+-		return false;
+-
+ 	return true;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 386ab9a2d490f..4f6b010726998 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2073,6 +2073,8 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
+ 		fte->node.del_hw_func = NULL;
+ 		up_write_ref_node(&fte->node, false);
+ 		tree_put_node(&fte->node, false);
++	} else {
++		up_write_ref_node(&fte->node, false);
+ 	}
+ 	kfree(handle);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+index df58cba37930a..1e8ec4f236b28 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+@@ -121,6 +121,9 @@ u32 mlx5_chains_get_nf_ft_chain(struct mlx5_fs_chains *chains)
+ 
+ u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains)
+ {
++	if (!mlx5_chains_prios_supported(chains))
++		return 1;
++
+ 	if (mlx5_chains_ignore_flow_level_supported(chains))
+ 		return UINT_MAX;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 6e381111f1d2f..c3861c69521c2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -510,7 +510,7 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
+ 
+ 	/* Check log_max_qp from HCA caps to set in current profile */
+ 	if (prof->log_max_qp == LOG_MAX_SUPPORTED_QPS) {
+-		prof->log_max_qp = MLX5_CAP_GEN_MAX(dev, log_max_qp);
++		prof->log_max_qp = min_t(u8, 17, MLX5_CAP_GEN_MAX(dev, log_max_qp));
+ 	} else if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) {
+ 		mlx5_core_warn(dev, "log_max_qp value in current profile is %d, changing it to HCA capability limit (%d)\n",
+ 			       prof->log_max_qp,
+@@ -1796,10 +1796,12 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
+ 	{ PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF},	/* ConnectX Family mlx5Gen Virtual Function */
+ 	{ PCI_VDEVICE(MELLANOX, 0x101f) },			/* ConnectX-6 LX */
+ 	{ PCI_VDEVICE(MELLANOX, 0x1021) },			/* ConnectX-7 */
++	{ PCI_VDEVICE(MELLANOX, 0x1023) },			/* ConnectX-8 */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d2) },			/* BlueField integrated ConnectX-5 network controller */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF},	/* BlueField integrated ConnectX-5 network controller VF */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d6) },			/* BlueField-2 integrated ConnectX-6 Dx network controller */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2dc) },			/* BlueField-3 integrated ConnectX-7 network controller */
++	{ PCI_VDEVICE(MELLANOX, 0xa2df) },			/* BlueField-4 integrated ConnectX-8 network controller */
+ 	{ 0, }
+ };
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
+index 7f6fd9c5e371b..e289cfdbce075 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
+@@ -4,7 +4,6 @@
+ #include "dr_types.h"
+ 
+ #define DR_ICM_MODIFY_HDR_ALIGN_BASE 64
+-#define DR_ICM_SYNC_THRESHOLD_POOL (64 * 1024 * 1024)
+ 
+ struct mlx5dr_icm_pool {
+ 	enum mlx5dr_icm_type icm_type;
+@@ -136,37 +135,35 @@ static void dr_icm_pool_mr_destroy(struct mlx5dr_icm_mr *icm_mr)
+ 	kvfree(icm_mr);
+ }
+ 
+-static int dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk)
++static int dr_icm_buddy_get_ste_size(struct mlx5dr_icm_buddy_mem *buddy)
+ {
+-	chunk->ste_arr = kvzalloc(chunk->num_of_entries *
+-				  sizeof(chunk->ste_arr[0]), GFP_KERNEL);
+-	if (!chunk->ste_arr)
+-		return -ENOMEM;
+-
+-	chunk->hw_ste_arr = kvzalloc(chunk->num_of_entries *
+-				     DR_STE_SIZE_REDUCED, GFP_KERNEL);
+-	if (!chunk->hw_ste_arr)
+-		goto out_free_ste_arr;
+-
+-	chunk->miss_list = kvmalloc(chunk->num_of_entries *
+-				    sizeof(chunk->miss_list[0]), GFP_KERNEL);
+-	if (!chunk->miss_list)
+-		goto out_free_hw_ste_arr;
++	/* We support only one type of STE size, both for ConnectX-5 and later
++	 * devices. Once the support for match STE which has a larger tag is
++	 * added (32B instead of 16B), the STE size for devices later than
++	 * ConnectX-5 needs to account for that.
++	 */
++	return DR_STE_SIZE_REDUCED;
++}
+ 
+-	return 0;
++static void dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk, int offset)
++{
++	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
++	int index = offset / DR_STE_SIZE;
+ 
+-out_free_hw_ste_arr:
+-	kvfree(chunk->hw_ste_arr);
+-out_free_ste_arr:
+-	kvfree(chunk->ste_arr);
+-	return -ENOMEM;
++	chunk->ste_arr = &buddy->ste_arr[index];
++	chunk->miss_list = &buddy->miss_list[index];
++	chunk->hw_ste_arr = buddy->hw_ste_arr +
++			    index * dr_icm_buddy_get_ste_size(buddy);
+ }
+ 
+ static void dr_icm_chunk_ste_cleanup(struct mlx5dr_icm_chunk *chunk)
+ {
+-	kvfree(chunk->miss_list);
+-	kvfree(chunk->hw_ste_arr);
+-	kvfree(chunk->ste_arr);
++	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
++
++	memset(chunk->hw_ste_arr, 0,
++	       chunk->num_of_entries * dr_icm_buddy_get_ste_size(buddy));
++	memset(chunk->ste_arr, 0,
++	       chunk->num_of_entries * sizeof(chunk->ste_arr[0]));
+ }
+ 
+ static enum mlx5dr_icm_type
+@@ -189,6 +186,44 @@ static void dr_icm_chunk_destroy(struct mlx5dr_icm_chunk *chunk,
+ 	kvfree(chunk);
+ }
+ 
++static int dr_icm_buddy_init_ste_cache(struct mlx5dr_icm_buddy_mem *buddy)
++{
++	int num_of_entries =
++		mlx5dr_icm_pool_chunk_size_to_entries(buddy->pool->max_log_chunk_sz);
++
++	buddy->ste_arr = kvcalloc(num_of_entries,
++				  sizeof(struct mlx5dr_ste), GFP_KERNEL);
++	if (!buddy->ste_arr)
++		return -ENOMEM;
++
++	/* Preallocate full STE size on non-ConnectX-5 devices since
++	 * we need to support both full and reduced with the same cache.
++	 */
++	buddy->hw_ste_arr = kvcalloc(num_of_entries,
++				     dr_icm_buddy_get_ste_size(buddy), GFP_KERNEL);
++	if (!buddy->hw_ste_arr)
++		goto free_ste_arr;
++
++	buddy->miss_list = kvmalloc(num_of_entries * sizeof(struct list_head), GFP_KERNEL);
++	if (!buddy->miss_list)
++		goto free_hw_ste_arr;
++
++	return 0;
++
++free_hw_ste_arr:
++	kvfree(buddy->hw_ste_arr);
++free_ste_arr:
++	kvfree(buddy->ste_arr);
++	return -ENOMEM;
++}
++
++static void dr_icm_buddy_cleanup_ste_cache(struct mlx5dr_icm_buddy_mem *buddy)
++{
++	kvfree(buddy->ste_arr);
++	kvfree(buddy->hw_ste_arr);
++	kvfree(buddy->miss_list);
++}
++
+ static int dr_icm_buddy_create(struct mlx5dr_icm_pool *pool)
+ {
+ 	struct mlx5dr_icm_buddy_mem *buddy;
+@@ -208,11 +243,19 @@ static int dr_icm_buddy_create(struct mlx5dr_icm_pool *pool)
+ 	buddy->icm_mr = icm_mr;
+ 	buddy->pool = pool;
+ 
++	if (pool->icm_type == DR_ICM_TYPE_STE) {
++		/* Reduce allocations by preallocating and reusing the STE structures */
++		if (dr_icm_buddy_init_ste_cache(buddy))
++			goto err_cleanup_buddy;
++	}
++
+ 	/* add it to the -start- of the list in order to search in it first */
+ 	list_add(&buddy->list_node, &pool->buddy_mem_list);
+ 
+ 	return 0;
+ 
++err_cleanup_buddy:
++	mlx5dr_buddy_cleanup(buddy);
+ err_free_buddy:
+ 	kvfree(buddy);
+ free_mr:
+@@ -234,6 +277,9 @@ static void dr_icm_buddy_destroy(struct mlx5dr_icm_buddy_mem *buddy)
+ 
+ 	mlx5dr_buddy_cleanup(buddy);
+ 
++	if (buddy->pool->icm_type == DR_ICM_TYPE_STE)
++		dr_icm_buddy_cleanup_ste_cache(buddy);
++
+ 	kvfree(buddy);
+ }
+ 
+@@ -261,34 +307,30 @@ dr_icm_chunk_create(struct mlx5dr_icm_pool *pool,
+ 	chunk->byte_size =
+ 		mlx5dr_icm_pool_chunk_size_to_byte(chunk_size, pool->icm_type);
+ 	chunk->seg = seg;
++	chunk->buddy_mem = buddy_mem_pool;
+ 
+-	if (pool->icm_type == DR_ICM_TYPE_STE && dr_icm_chunk_ste_init(chunk)) {
+-		mlx5dr_err(pool->dmn,
+-			   "Failed to init ste arrays (order: %d)\n",
+-			   chunk_size);
+-		goto out_free_chunk;
+-	}
++	if (pool->icm_type == DR_ICM_TYPE_STE)
++		dr_icm_chunk_ste_init(chunk, offset);
+ 
+ 	buddy_mem_pool->used_memory += chunk->byte_size;
+-	chunk->buddy_mem = buddy_mem_pool;
+ 	INIT_LIST_HEAD(&chunk->chunk_list);
+ 
+ 	/* chunk now is part of the used_list */
+ 	list_add_tail(&chunk->chunk_list, &buddy_mem_pool->used_list);
+ 
+ 	return chunk;
+-
+-out_free_chunk:
+-	kvfree(chunk);
+-	return NULL;
+ }
+ 
+ static bool dr_icm_pool_is_sync_required(struct mlx5dr_icm_pool *pool)
+ {
+-	if (pool->hot_memory_size > DR_ICM_SYNC_THRESHOLD_POOL)
+-		return true;
++	int allow_hot_size;
++
++	/* sync when hot memory reaches half of the pool size */
++	allow_hot_size =
++		mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz,
++						   pool->icm_type) / 2;
+ 
+-	return false;
++	return pool->hot_memory_size > allow_hot_size;
+ }
+ 
+ static int dr_icm_pool_sync_all_buddy_pools(struct mlx5dr_icm_pool *pool)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
+index 3d0cdc36a91ab..01213045a8a84 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
+@@ -13,18 +13,6 @@ static bool dr_mask_is_dmac_set(struct mlx5dr_match_spec *spec)
+ 	return (spec->dmac_47_16 || spec->dmac_15_0);
+ }
+ 
+-static bool dr_mask_is_src_addr_set(struct mlx5dr_match_spec *spec)
+-{
+-	return (spec->src_ip_127_96 || spec->src_ip_95_64 ||
+-		spec->src_ip_63_32 || spec->src_ip_31_0);
+-}
+-
+-static bool dr_mask_is_dst_addr_set(struct mlx5dr_match_spec *spec)
+-{
+-	return (spec->dst_ip_127_96 || spec->dst_ip_95_64 ||
+-		spec->dst_ip_63_32 || spec->dst_ip_31_0);
+-}
+-
+ static bool dr_mask_is_l3_base_set(struct mlx5dr_match_spec *spec)
+ {
+ 	return (spec->ip_protocol || spec->frag || spec->tcp_flags ||
+@@ -480,11 +468,11 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
+ 						    &mask, inner, rx);
+ 
+ 		if (outer_ipv == DR_RULE_IPV6) {
+-			if (dr_mask_is_dst_addr_set(&mask.outer))
++			if (DR_MASK_IS_DST_IP_SET(&mask.outer))
+ 				mlx5dr_ste_build_eth_l3_ipv6_dst(ste_ctx, &sb[idx++],
+ 								 &mask, inner, rx);
+ 
+-			if (dr_mask_is_src_addr_set(&mask.outer))
++			if (DR_MASK_IS_SRC_IP_SET(&mask.outer))
+ 				mlx5dr_ste_build_eth_l3_ipv6_src(ste_ctx, &sb[idx++],
+ 								 &mask, inner, rx);
+ 
+@@ -580,11 +568,11 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
+ 						    &mask, inner, rx);
+ 
+ 		if (inner_ipv == DR_RULE_IPV6) {
+-			if (dr_mask_is_dst_addr_set(&mask.inner))
++			if (DR_MASK_IS_DST_IP_SET(&mask.inner))
+ 				mlx5dr_ste_build_eth_l3_ipv6_dst(ste_ctx, &sb[idx++],
+ 								 &mask, inner, rx);
+ 
+-			if (dr_mask_is_src_addr_set(&mask.inner))
++			if (DR_MASK_IS_SRC_IP_SET(&mask.inner))
+ 				mlx5dr_ste_build_eth_l3_ipv6_src(ste_ctx, &sb[idx++],
+ 								 &mask, inner, rx);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
+index 219a5474a8a46..7e711b2037b5b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
+@@ -602,12 +602,34 @@ int mlx5dr_ste_set_action_decap_l3_list(struct mlx5dr_ste_ctx *ste_ctx,
+ 						 used_hw_action_num);
+ }
+ 
++static int dr_ste_build_pre_check_spec(struct mlx5dr_domain *dmn,
++				       struct mlx5dr_match_spec *spec)
++{
++	if (spec->ip_version) {
++		if (spec->ip_version != 0xf) {
++			mlx5dr_err(dmn,
++				   "Partial ip_version mask with src/dst IP is not supported\n");
++			return -EINVAL;
++		}
++	} else if (spec->ethertype != 0xffff &&
++		   (DR_MASK_IS_SRC_IP_SET(spec) || DR_MASK_IS_DST_IP_SET(spec))) {
++		mlx5dr_err(dmn,
++			   "Partial/no ethertype mask with src/dst IP is not supported\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ int mlx5dr_ste_build_pre_check(struct mlx5dr_domain *dmn,
+ 			       u8 match_criteria,
+ 			       struct mlx5dr_match_param *mask,
+ 			       struct mlx5dr_match_param *value)
+ {
+-	if (!value && (match_criteria & DR_MATCHER_CRITERIA_MISC)) {
++	if (value)
++		return 0;
++
++	if (match_criteria & DR_MATCHER_CRITERIA_MISC) {
+ 		if (mask->misc.source_port && mask->misc.source_port != 0xffff) {
+ 			mlx5dr_err(dmn,
+ 				   "Partial mask source_port is not supported\n");
+@@ -621,6 +643,14 @@ int mlx5dr_ste_build_pre_check(struct mlx5dr_domain *dmn,
+ 		}
+ 	}
+ 
++	if ((match_criteria & DR_MATCHER_CRITERIA_OUTER) &&
++	    dr_ste_build_pre_check_spec(dmn, &mask->outer))
++		return -EINVAL;
++
++	if ((match_criteria & DR_MATCHER_CRITERIA_INNER) &&
++	    dr_ste_build_pre_check_spec(dmn, &mask->inner))
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+index 2333c2439c287..5f98db648e865 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+@@ -739,6 +739,16 @@ struct mlx5dr_match_param {
+ 				       (_misc3)->icmpv4_code || \
+ 				       (_misc3)->icmpv4_header_data)
+ 
++#define DR_MASK_IS_SRC_IP_SET(_spec) ((_spec)->src_ip_127_96 || \
++				      (_spec)->src_ip_95_64  || \
++				      (_spec)->src_ip_63_32  || \
++				      (_spec)->src_ip_31_0)
++
++#define DR_MASK_IS_DST_IP_SET(_spec) ((_spec)->dst_ip_127_96 || \
++				      (_spec)->dst_ip_95_64  || \
++				      (_spec)->dst_ip_63_32  || \
++				      (_spec)->dst_ip_31_0)
++
+ struct mlx5dr_esw_caps {
+ 	u64 drop_icm_address_rx;
+ 	u64 drop_icm_address_tx;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+index 2632d5ae9bc0e..ac4651235b34c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+@@ -222,7 +222,11 @@ static bool contain_vport_reformat_action(struct mlx5_flow_rule *dst)
+ 		dst->dest_attr.vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID;
+ }
+ 
+-#define MLX5_FLOW_CONTEXT_ACTION_MAX  32
++/* We want to support a rule with 32 destinations, which means we need to
++ * account for 32 destinations plus usually a counter plus one more action
++ * for a multi-destination flow table.
++ */
++#define MLX5_FLOW_CONTEXT_ACTION_MAX  34
+ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ 				  struct mlx5_flow_table *ft,
+ 				  struct mlx5_flow_group *group,
+@@ -392,9 +396,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ 			enum mlx5_flow_destination_type type = dst->dest_attr.type;
+ 			u32 id;
+ 
+-			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+-			    num_term_actions >= MLX5_FLOW_CONTEXT_ACTION_MAX) {
+-				err = -ENOSPC;
++			if (fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
++			    num_term_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
++				err = -EOPNOTSUPP;
+ 				goto free_actions;
+ 			}
+ 
+@@ -464,8 +468,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ 			    MLX5_FLOW_DESTINATION_TYPE_COUNTER)
+ 				continue;
+ 
+-			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+-				err = -ENOSPC;
++			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
++			    fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
++				err = -EOPNOTSUPP;
+ 				goto free_actions;
+ 			}
+ 
+@@ -485,14 +490,28 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ 	params.match_sz = match_sz;
+ 	params.match_buf = (u64 *)fte->val;
+ 	if (num_term_actions == 1) {
+-		if (term_actions->reformat)
++		if (term_actions->reformat) {
++			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
++				err = -EOPNOTSUPP;
++				goto free_actions;
++			}
+ 			actions[num_actions++] = term_actions->reformat;
++		}
+ 
++		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
++			err = -EOPNOTSUPP;
++			goto free_actions;
++		}
+ 		actions[num_actions++] = term_actions->dest;
+ 	} else if (num_term_actions > 1) {
+ 		bool ignore_flow_level =
+ 			!!(fte->action.flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
+ 
++		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
++		    fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
++			err = -EOPNOTSUPP;
++			goto free_actions;
++		}
+ 		tmp_action = mlx5dr_action_create_mult_dest_tbl(domain,
+ 								term_actions,
+ 								num_term_actions,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+index c7c93131b762b..dfa223415fe24 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+@@ -160,6 +160,11 @@ struct mlx5dr_icm_buddy_mem {
+ 	 * sync_ste command sets them free.
+ 	 */
+ 	struct list_head	hot_list;
++
++	/* Memory optimisation */
++	struct mlx5dr_ste	*ste_arr;
++	struct list_head	*miss_list;
++	u8			*hw_ste_arr;
+ };
+ 
+ int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index 0a326e04e6923..cb43651ea9ba8 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -922,8 +922,8 @@ nfp_tunnel_add_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 			  int port, bool mod)
+ {
+ 	struct nfp_flower_priv *priv = app->priv;
+-	int ida_idx = NFP_MAX_MAC_INDEX, err;
+ 	struct nfp_tun_offloaded_mac *entry;
++	int ida_idx = -1, err;
+ 	u16 nfp_mac_idx = 0;
+ 
+ 	entry = nfp_tunnel_lookup_offloaded_macs(app, netdev->dev_addr);
+@@ -997,7 +997,7 @@ err_remove_hash:
+ err_free_entry:
+ 	kfree(entry);
+ err_free_ida:
+-	if (ida_idx != NFP_MAX_MAC_INDEX)
++	if (ida_idx != -1)
+ 		ida_simple_remove(&priv->tun.mac_off_ids, ida_idx);
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index e7065c9a8e389..e8be35b1b6c96 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1427,6 +1427,8 @@ static int temac_probe(struct platform_device *pdev)
+ 		lp->indirect_lock = devm_kmalloc(&pdev->dev,
+ 						 sizeof(*lp->indirect_lock),
+ 						 GFP_KERNEL);
++		if (!lp->indirect_lock)
++			return -ENOMEM;
+ 		spin_lock_init(lp->indirect_lock);
+ 	}
+ 
+diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
+index 5f4cd24a0241d..4eba5a91075c0 100644
+--- a/drivers/net/mdio/mdio-ipq4019.c
++++ b/drivers/net/mdio/mdio-ipq4019.c
+@@ -200,7 +200,11 @@ static int ipq_mdio_reset(struct mii_bus *bus)
+ 	if (ret)
+ 		return ret;
+ 
+-	return clk_prepare_enable(priv->mdio_clk);
++	ret = clk_prepare_enable(priv->mdio_clk);
++	if (ret == 0)
++		mdelay(10);
++
++	return ret;
+ }
+ 
+ static int ipq4019_mdio_probe(struct platform_device *pdev)
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index eb3817d70f2b8..9b4dfa3001d6e 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -583,6 +583,11 @@ static const struct usb_device_id	products[] = {
+ 	.bInterfaceSubClass	= USB_CDC_SUBCLASS_ETHERNET, \
+ 	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
+ 
++#define ZAURUS_FAKE_INTERFACE \
++	.bInterfaceClass	= USB_CLASS_COMM, \
++	.bInterfaceSubClass	= USB_CDC_SUBCLASS_MDLM, \
++	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
++
+ /* SA-1100 based Sharp Zaurus ("collie"), or compatible;
+  * wire-incompatible with true CDC Ethernet implementations.
+  * (And, it seems, needlessly so...)
+@@ -636,6 +641,13 @@ static const struct usb_device_id	products[] = {
+ 	.idProduct              = 0x9032,	/* SL-6000 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info		= 0,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++		 | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor               = 0x04DD,
++	.idProduct              = 0x9032,	/* SL-6000 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info		= 0,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 		 | USB_DEVICE_ID_MATCH_DEVICE,
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index e303b522efb50..15f91d691bba3 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1715,10 +1715,10 @@ int cdc_ncm_rx_fixup(struct usbnet *dev, struct sk_buff *skb_in)
+ {
+ 	struct sk_buff *skb;
+ 	struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
+-	int len;
++	unsigned int len;
+ 	int nframes;
+ 	int x;
+-	int offset;
++	unsigned int offset;
+ 	union {
+ 		struct usb_cdc_ncm_ndp16 *ndp16;
+ 		struct usb_cdc_ncm_ndp32 *ndp32;
+@@ -1790,8 +1790,8 @@ next_ndp:
+ 			break;
+ 		}
+ 
+-		/* sanity checking */
+-		if (((offset + len) > skb_in->len) ||
++		/* sanity checking - watch out for integer wrap */
++		if ((offset > skb_in->len) || (len > skb_in->len - offset) ||
+ 				(len > ctx->rx_max) || (len < ETH_HLEN)) {
+ 			netif_dbg(dev, rx_err, dev->net,
+ 				  "invalid frame detected (ignored) offset[%u]=%u, length=%u, skb=%p\n",
+diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
+index b658510cc9a42..5a53e63d33a60 100644
+--- a/drivers/net/usb/sr9700.c
++++ b/drivers/net/usb/sr9700.c
+@@ -413,7 +413,7 @@ static int sr9700_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 		/* ignore the CRC length */
+ 		len = (skb->data[1] | (skb->data[2] << 8)) - 4;
+ 
+-		if (len > ETH_FRAME_LEN)
++		if (len > ETH_FRAME_LEN || len > skb->len)
+ 			return 0;
+ 
+ 		/* the last packet of current skb */
+diff --git a/drivers/net/usb/zaurus.c b/drivers/net/usb/zaurus.c
+index 8e717a0b559b3..7984f2157d222 100644
+--- a/drivers/net/usb/zaurus.c
++++ b/drivers/net/usb/zaurus.c
+@@ -256,6 +256,11 @@ static const struct usb_device_id	products [] = {
+ 	.bInterfaceSubClass	= USB_CDC_SUBCLASS_ETHERNET, \
+ 	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
+ 
++#define ZAURUS_FAKE_INTERFACE \
++	.bInterfaceClass	= USB_CLASS_COMM, \
++	.bInterfaceSubClass	= USB_CDC_SUBCLASS_MDLM, \
++	.bInterfaceProtocol	= USB_CDC_PROTO_NONE
++
+ /* SA-1100 based Sharp Zaurus ("collie"), or compatible. */
+ {
+ 	.match_flags	=   USB_DEVICE_ID_MATCH_INT_INFO
+@@ -313,6 +318,13 @@ static const struct usb_device_id	products [] = {
+ 	.idProduct              = 0x9032,	/* SL-6000 */
+ 	ZAURUS_MASTER_INTERFACE,
+ 	.driver_info = ZAURUS_PXA_INFO,
++}, {
++	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
++			    | USB_DEVICE_ID_MATCH_DEVICE,
++	.idVendor		= 0x04DD,
++	.idProduct		= 0x9032,	/* SL-6000 */
++	ZAURUS_FAKE_INTERFACE,
++	.driver_info = (unsigned long)&bogus_mdlm_info,
+ }, {
+ 	.match_flags    =   USB_DEVICE_ID_MATCH_INT_INFO
+ 		 | USB_DEVICE_ID_MATCH_DEVICE,
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 352766aa3122e..5785f6abf1945 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1936,7 +1936,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ 	if (blk_queue_is_zoned(ns->queue)) {
+ 		ret = nvme_revalidate_zones(ns);
+ 		if (ret && !nvme_first_scan(ns->disk))
+-			goto out;
++			return ret;
+ 	}
+ 
+ 	if (nvme_ns_head_multipath(ns->head)) {
+@@ -1951,16 +1951,16 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ 	return 0;
+ 
+ out_unfreeze:
+-	blk_mq_unfreeze_queue(ns->disk->queue);
+-out:
+ 	/*
+ 	 * If probing fails due an unsupported feature, hide the block device,
+ 	 * but still allow other access.
+ 	 */
+ 	if (ret == -ENODEV) {
+ 		ns->disk->flags |= GENHD_FL_HIDDEN;
++		set_bit(NVME_NS_READY, &ns->flags);
+ 		ret = 0;
+ 	}
++	blk_mq_unfreeze_queue(ns->disk->queue);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 23a38dcf0fc4d..9fd1602b539d9 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -771,7 +771,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 
+ 	if (config->wp_gpio)
+ 		nvmem->wp_gpio = config->wp_gpio;
+-	else
++	else if (!config->ignore_wp)
+ 		nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp",
+ 						    GPIOD_OUT_HIGH);
+ 	if (IS_ERR(nvmem->wp_gpio)) {
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index 357e9a293edf7..2a3bf82aa4e26 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -1288,7 +1288,8 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
+ 		 * indirectly via kernel emulated PCI bridge driver.
+ 		 */
+ 		mvebu_pcie_setup_hw(port);
+-		mvebu_pcie_set_local_dev_nr(port, 0);
++		mvebu_pcie_set_local_dev_nr(port, 1);
++		mvebu_pcie_set_local_bus_nr(port, 0);
+ 	}
+ 
+ 	pcie->nports = i;
+diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c
+index 49e32684dbb25..ecab6bf63dc6d 100644
+--- a/drivers/pinctrl/pinctrl-k210.c
++++ b/drivers/pinctrl/pinctrl-k210.c
+@@ -482,7 +482,7 @@ static int k210_pinconf_get_drive(unsigned int max_strength_ua)
+ {
+ 	int i;
+ 
+-	for (i = K210_PC_DRIVE_MAX; i; i--) {
++	for (i = K210_PC_DRIVE_MAX; i >= 0; i--) {
+ 		if (k210_pinconf_drive_strength[i] <= max_strength_ua)
+ 			return i;
+ 	}
+@@ -527,7 +527,7 @@ static int k210_pinconf_set_param(struct pinctrl_dev *pctldev,
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		if (!arg)
+ 			return -EINVAL;
+-		val |= K210_PC_PD;
++		val |= K210_PC_PU;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg *= 1000;
+diff --git a/drivers/platform/surface/surface3_power.c b/drivers/platform/surface/surface3_power.c
+index abac3eec565e8..444ec81ba02d7 100644
+--- a/drivers/platform/surface/surface3_power.c
++++ b/drivers/platform/surface/surface3_power.c
+@@ -232,14 +232,21 @@ static int mshw0011_bix(struct mshw0011_data *cdata, struct bix *bix)
+ 	}
+ 	bix->last_full_charg_capacity = ret;
+ 
+-	/* get serial number */
++	/*
++	 * Get serial number. On some devices (with an unofficial replacement
++	 * battery?) reading any of the serial number range addresses gets
++	 * nacked; in this case just leave the serial number empty.
++	 */
+ 	ret = i2c_smbus_read_i2c_block_data(client, MSHW0011_BAT0_REG_SERIAL_NO,
+ 					    sizeof(buf), buf);
+-	if (ret != sizeof(buf)) {
++	if (ret == -EREMOTEIO) {
++		/* no serial number available */
++	} else if (ret != sizeof(buf)) {
+ 		dev_err(&client->dev, "Error reading serial no: %d\n", ret);
+ 		return ret;
++	} else {
++		snprintf(bix->serial, ARRAY_SIZE(bix->serial), "%3pE%6pE", buf + 7, buf);
+ 	}
+-	snprintf(bix->serial, ARRAY_SIZE(bix->serial), "%3pE%6pE", buf + 7, buf);
+ 
+ 	/* get cycle count */
+ 	ret = i2c_smbus_read_word_data(client, MSHW0011_BAT0_REG_CYCLE_CNT);
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index cfa222c9bd5e7..78f31b61a2aac 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -570,6 +570,9 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 
+ 	if (op->dummy.nbytes) {
+ 		tmpbuf = kzalloc(op->dummy.nbytes, GFP_KERNEL);
++		if (!tmpbuf)
++			return -ENOMEM;
++
+ 		memset(tmpbuf, 0xff, op->dummy.nbytes);
+ 		reinit_completion(&xqspi->data_completion);
+ 		xqspi->txbuf = tmpbuf;
+diff --git a/drivers/staging/fbtft/fb_st7789v.c b/drivers/staging/fbtft/fb_st7789v.c
+index abe9395a0aefd..861a154144e66 100644
+--- a/drivers/staging/fbtft/fb_st7789v.c
++++ b/drivers/staging/fbtft/fb_st7789v.c
+@@ -144,6 +144,8 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	int rc;
+ 
++	par->fbtftops.reset(par);
++
+ 	rc = init_tearing_effect_line(par);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 8502b7d8df896..68f61a7389303 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -404,6 +404,10 @@ static void int3400_notify(acpi_handle handle,
+ 	thermal_prop[3] = kasprintf(GFP_KERNEL, "EVENT=%d", therm_event);
+ 	thermal_prop[4] = NULL;
+ 	kobject_uevent_env(&priv->thermal->device.kobj, KOBJ_CHANGE, thermal_prop);
++	kfree(thermal_prop[0]);
++	kfree(thermal_prop[1]);
++	kfree(thermal_prop[2]);
++	kfree(thermal_prop[3]);
+ }
+ 
+ static int int3400_thermal_get_temp(struct thermal_zone_device *thermal,
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 9c5211f2ea84c..2ec9eeaabac94 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -439,7 +439,7 @@ static u8 gsm_encode_modem(const struct gsm_dlci *dlci)
+ 		modembits |= MDM_RTR;
+ 	if (dlci->modem_tx & TIOCM_RI)
+ 		modembits |= MDM_IC;
+-	if (dlci->modem_tx & TIOCM_CD)
++	if (dlci->modem_tx & TIOCM_CD || dlci->gsm->initiator)
+ 		modembits |= MDM_DV;
+ 	return modembits;
+ }
+@@ -448,7 +448,7 @@ static u8 gsm_encode_modem(const struct gsm_dlci *dlci)
+  *	gsm_print_packet	-	display a frame for debug
+  *	@hdr: header to print before decode
+  *	@addr: address EA from the frame
+- *	@cr: C/R bit from the frame
++ *	@cr: C/R bit seen as initiator
+  *	@control: control including PF bit
+  *	@data: following data bytes
+  *	@dlen: length of data
+@@ -548,7 +548,7 @@ static int gsm_stuff_frame(const u8 *input, u8 *output, int len)
+  *	gsm_send	-	send a control frame
+  *	@gsm: our GSM mux
+  *	@addr: address for control frame
+- *	@cr: command/response bit
++ *	@cr: command/response bit seen as initiator
+  *	@control:  control byte including PF bit
+  *
+  *	Format up and transmit a control frame. These do not go via the
+@@ -563,11 +563,15 @@ static void gsm_send(struct gsm_mux *gsm, int addr, int cr, int control)
+ 	int len;
+ 	u8 cbuf[10];
+ 	u8 ibuf[3];
++	int ocr;
++
++	/* toggle C/R coding if not initiator */
++	ocr = cr ^ (gsm->initiator ? 0 : 1);
+ 
+ 	switch (gsm->encoding) {
+ 	case 0:
+ 		cbuf[0] = GSM0_SOF;
+-		cbuf[1] = (addr << 2) | (cr << 1) | EA;
++		cbuf[1] = (addr << 2) | (ocr << 1) | EA;
+ 		cbuf[2] = control;
+ 		cbuf[3] = EA;	/* Length of data = 0 */
+ 		cbuf[4] = 0xFF - gsm_fcs_add_block(INIT_FCS, cbuf + 1, 3);
+@@ -577,7 +581,7 @@ static void gsm_send(struct gsm_mux *gsm, int addr, int cr, int control)
+ 	case 1:
+ 	case 2:
+ 		/* Control frame + packing (but not frame stuffing) in mode 1 */
+-		ibuf[0] = (addr << 2) | (cr << 1) | EA;
++		ibuf[0] = (addr << 2) | (ocr << 1) | EA;
+ 		ibuf[1] = control;
+ 		ibuf[2] = 0xFF - gsm_fcs_add_block(INIT_FCS, ibuf, 2);
+ 		/* Stuffing may double the size worst case */
+@@ -611,7 +615,7 @@ static void gsm_send(struct gsm_mux *gsm, int addr, int cr, int control)
+ 
+ static inline void gsm_response(struct gsm_mux *gsm, int addr, int control)
+ {
+-	gsm_send(gsm, addr, 1, control);
++	gsm_send(gsm, addr, 0, control);
+ }
+ 
+ /**
+@@ -1017,25 +1021,25 @@ static void gsm_control_reply(struct gsm_mux *gsm, int cmd, const u8 *data,
+  *	@tty: virtual tty bound to the DLCI
+  *	@dlci: DLCI to affect
+  *	@modem: modem bits (full EA)
+- *	@clen: command length
++ *	@slen: number of signal octets
+  *
+  *	Used when a modem control message or line state inline in adaption
+  *	layer 2 is processed. Sort out the local modem state and throttles
+  */
+ 
+ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
+-							u32 modem, int clen)
++							u32 modem, int slen)
+ {
+ 	int  mlines = 0;
+ 	u8 brk = 0;
+ 	int fc;
+ 
+-	/* The modem status command can either contain one octet (v.24 signals)
+-	   or two octets (v.24 signals + break signals). The length field will
+-	   either be 2 or 3 respectively. This is specified in section
+-	   5.4.6.3.7 of the  27.010 mux spec. */
++	/* The modem status command can either contain one octet (V.24 signals)
++	 * or two octets (V.24 signals + break signals). This is specified in
++	 * section 5.4.6.3.7 of the 07.10 mux spec.
++	 */
+ 
+-	if (clen == 2)
++	if (slen == 1)
+ 		modem = modem & 0x7f;
+ 	else {
+ 		brk = modem & 0x7f;
+@@ -1092,6 +1096,7 @@ static void gsm_control_modem(struct gsm_mux *gsm, const u8 *data, int clen)
+ 	unsigned int brk = 0;
+ 	struct gsm_dlci *dlci;
+ 	int len = clen;
++	int slen;
+ 	const u8 *dp = data;
+ 	struct tty_struct *tty;
+ 
+@@ -1111,6 +1116,7 @@ static void gsm_control_modem(struct gsm_mux *gsm, const u8 *data, int clen)
+ 		return;
+ 	dlci = gsm->dlci[addr];
+ 
++	slen = len;
+ 	while (gsm_read_ea(&modem, *dp++) == 0) {
+ 		len--;
+ 		if (len == 0)
+@@ -1127,7 +1133,7 @@ static void gsm_control_modem(struct gsm_mux *gsm, const u8 *data, int clen)
+ 		modem |= (brk & 0x7f);
+ 	}
+ 	tty = tty_port_tty_get(&dlci->port);
+-	gsm_process_modem(tty, dlci, modem, clen);
++	gsm_process_modem(tty, dlci, modem, slen);
+ 	if (tty) {
+ 		tty_wakeup(tty);
+ 		tty_kref_put(tty);
+@@ -1451,6 +1457,9 @@ static void gsm_dlci_close(struct gsm_dlci *dlci)
+ 	if (dlci->addr != 0) {
+ 		tty_port_tty_hangup(&dlci->port, false);
+ 		kfifo_reset(&dlci->fifo);
++		/* Ensure that gsmtty_open() can return. */
++		tty_port_set_initialized(&dlci->port, 0);
++		wake_up_interruptible(&dlci->port.open_wait);
+ 	} else
+ 		dlci->gsm->dead = true;
+ 	/* Unregister gsmtty driver,report gsmtty dev remove uevent for user */
+@@ -1514,7 +1523,7 @@ static void gsm_dlci_t1(struct timer_list *t)
+ 			dlci->mode = DLCI_MODE_ADM;
+ 			gsm_dlci_open(dlci);
+ 		} else {
+-			gsm_dlci_close(dlci);
++			gsm_dlci_begin_close(dlci); /* prevent half open link */
+ 		}
+ 
+ 		break;
+@@ -1593,6 +1602,7 @@ static void gsm_dlci_data(struct gsm_dlci *dlci, const u8 *data, int clen)
+ 	struct tty_struct *tty;
+ 	unsigned int modem = 0;
+ 	int len = clen;
++	int slen = 0;
+ 
+ 	if (debug & 16)
+ 		pr_debug("%d bytes for tty\n", len);
+@@ -1605,12 +1615,14 @@ static void gsm_dlci_data(struct gsm_dlci *dlci, const u8 *data, int clen)
+ 	case 2:		/* Asynchronous serial with line state in each frame */
+ 		while (gsm_read_ea(&modem, *data++) == 0) {
+ 			len--;
++			slen++;
+ 			if (len == 0)
+ 				return;
+ 		}
++		slen++;
+ 		tty = tty_port_tty_get(port);
+ 		if (tty) {
+-			gsm_process_modem(tty, dlci, modem, clen);
++			gsm_process_modem(tty, dlci, modem, slen);
+ 			tty_kref_put(tty);
+ 		}
+ 		fallthrough;
+@@ -1748,7 +1760,12 @@ static void gsm_dlci_release(struct gsm_dlci *dlci)
+ 		gsm_destroy_network(dlci);
+ 		mutex_unlock(&dlci->mutex);
+ 
+-		tty_hangup(tty);
++		/* We cannot use tty_hangup() because in tty_kref_put() the tty
++		 * driver assumes that the hangup queue is free and reuses it to
++		 * queue release_one_tty() -> NULL pointer panic in
++		 * process_one_work().
++		 */
++		tty_vhangup(tty);
+ 
+ 		tty_port_tty_set(&dlci->port, NULL);
+ 		tty_kref_put(tty);
+@@ -1800,10 +1817,10 @@ static void gsm_queue(struct gsm_mux *gsm)
+ 		goto invalid;
+ 
+ 	cr = gsm->address & 1;		/* C/R bit */
++	cr ^= gsm->initiator ? 0 : 1;	/* Flip so 1 always means command */
+ 
+ 	gsm_print_packet("<--", address, cr, gsm->control, gsm->buf, gsm->len);
+ 
+-	cr ^= 1 - gsm->initiator;	/* Flip so 1 always means command */
+ 	dlci = gsm->dlci[address];
+ 
+ 	switch (gsm->control) {
+@@ -3237,9 +3254,9 @@ static void gsmtty_throttle(struct tty_struct *tty)
+ 	if (dlci->state == DLCI_CLOSED)
+ 		return;
+ 	if (C_CRTSCTS(tty))
+-		dlci->modem_tx &= ~TIOCM_DTR;
++		dlci->modem_tx &= ~TIOCM_RTS;
+ 	dlci->throttled = true;
+-	/* Send an MSC with DTR cleared */
++	/* Send an MSC with RTS cleared */
+ 	gsmtty_modem_update(dlci, 0);
+ }
+ 
+@@ -3249,9 +3266,9 @@ static void gsmtty_unthrottle(struct tty_struct *tty)
+ 	if (dlci->state == DLCI_CLOSED)
+ 		return;
+ 	if (C_CRTSCTS(tty))
+-		dlci->modem_tx |= TIOCM_DTR;
++		dlci->modem_tx |= TIOCM_RTS;
+ 	dlci->throttled = false;
+-	/* Send an MSC with DTR set */
++	/* Send an MSC with RTS set */
+ 	gsmtty_modem_update(dlci, 0);
+ }
+ 
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 64e7e6c8145f8..38d1c0748533c 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -734,12 +734,15 @@ static irqreturn_t sc16is7xx_irq(int irq, void *dev_id)
+ static void sc16is7xx_tx_proc(struct kthread_work *ws)
+ {
+ 	struct uart_port *port = &(to_sc16is7xx_one(ws, tx_work)->port);
++	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+ 
+ 	if ((port->rs485.flags & SER_RS485_ENABLED) &&
+ 	    (port->rs485.delay_rts_before_send > 0))
+ 		msleep(port->rs485.delay_rts_before_send);
+ 
++	mutex_lock(&s->efr_lock);
+ 	sc16is7xx_handle_tx(port);
++	mutex_unlock(&s->efr_lock);
+ }
+ 
+ static void sc16is7xx_reconf_rs485(struct uart_port *port)
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index 37185eb66ae4c..f76c30083fbc9 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -1416,6 +1416,7 @@ void dwc2_hsotg_core_connect(struct dwc2_hsotg *hsotg);
+ void dwc2_hsotg_disconnect(struct dwc2_hsotg *dwc2);
+ int dwc2_hsotg_set_test_mode(struct dwc2_hsotg *hsotg, int testmode);
+ #define dwc2_is_device_connected(hsotg) (hsotg->connected)
++#define dwc2_is_device_enabled(hsotg) (hsotg->enabled)
+ int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg);
+ int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup);
+ int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg);
+@@ -1452,6 +1453,7 @@ static inline int dwc2_hsotg_set_test_mode(struct dwc2_hsotg *hsotg,
+ 					   int testmode)
+ { return 0; }
+ #define dwc2_is_device_connected(hsotg) (0)
++#define dwc2_is_device_enabled(hsotg) (0)
+ static inline int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg)
+ { return 0; }
+ static inline int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg,
+diff --git a/drivers/usb/dwc2/drd.c b/drivers/usb/dwc2/drd.c
+index aa6eb76f64ddc..36f2c38416e5e 100644
+--- a/drivers/usb/dwc2/drd.c
++++ b/drivers/usb/dwc2/drd.c
+@@ -109,8 +109,10 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ 		already = dwc2_ovr_avalid(hsotg, true);
+ 	} else if (role == USB_ROLE_DEVICE) {
+ 		already = dwc2_ovr_bvalid(hsotg, true);
+-		/* This clear DCTL.SFTDISCON bit */
+-		dwc2_hsotg_core_connect(hsotg);
++		if (dwc2_is_device_enabled(hsotg)) {
++			/* This clear DCTL.SFTDISCON bit */
++			dwc2_hsotg_core_connect(hsotg);
++		}
+ 	} else {
+ 		if (dwc2_is_device_mode(hsotg)) {
+ 			if (!dwc2_ovr_bvalid(hsotg, false))
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 7ff8fc8f79a9b..1ecedbb1684c8 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -85,8 +85,8 @@ static const struct acpi_gpio_mapping acpi_dwc3_byt_gpios[] = {
+ static struct gpiod_lookup_table platform_bytcr_gpios = {
+ 	.dev_id		= "0000:00:16.0",
+ 	.table		= {
+-		GPIO_LOOKUP("INT33FC:00", 54, "reset", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("INT33FC:02", 14, "cs", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("INT33FC:00", 54, "cs", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("INT33FC:02", 14, "reset", GPIO_ACTIVE_HIGH),
+ 		{}
+ 	},
+ };
+@@ -119,6 +119,13 @@ static const struct property_entry dwc3_pci_intel_properties[] = {
+ 	{}
+ };
+ 
++static const struct property_entry dwc3_pci_intel_byt_properties[] = {
++	PROPERTY_ENTRY_STRING("dr_mode", "peripheral"),
++	PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
++	PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
++	{}
++};
++
+ static const struct property_entry dwc3_pci_mrfld_properties[] = {
+ 	PROPERTY_ENTRY_STRING("dr_mode", "otg"),
+ 	PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
+@@ -161,6 +168,10 @@ static const struct software_node dwc3_pci_intel_swnode = {
+ 	.properties = dwc3_pci_intel_properties,
+ };
+ 
++static const struct software_node dwc3_pci_intel_byt_swnode = {
++	.properties = dwc3_pci_intel_byt_properties,
++};
++
+ static const struct software_node dwc3_pci_intel_mrfld_swnode = {
+ 	.properties = dwc3_pci_mrfld_properties,
+ };
+@@ -344,7 +355,7 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ 	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ 
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BYT),
+-	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
++	  (kernel_ulong_t) &dwc3_pci_intel_byt_swnode, },
+ 
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MRFLD),
+ 	  (kernel_ulong_t) &dwc3_pci_intel_mrfld_swnode, },
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 7aab9116b0256..0566a841dca25 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -4131,9 +4131,11 @@ static irqreturn_t dwc3_thread_interrupt(int irq, void *_evt)
+ 	unsigned long flags;
+ 	irqreturn_t ret = IRQ_NONE;
+ 
++	local_bh_disable();
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	ret = dwc3_process_event_buf(evt);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
++	local_bh_enable();
+ 
+ 	return ret;
+ }
+diff --git a/drivers/usb/gadget/function/rndis.c b/drivers/usb/gadget/function/rndis.c
+index d9ed651f06ac3..0f14c5291af07 100644
+--- a/drivers/usb/gadget/function/rndis.c
++++ b/drivers/usb/gadget/function/rndis.c
+@@ -922,6 +922,7 @@ struct rndis_params *rndis_register(void (*resp_avail)(void *v), void *v)
+ 	params->resp_avail = resp_avail;
+ 	params->v = v;
+ 	INIT_LIST_HEAD(&params->resp_queue);
++	spin_lock_init(&params->resp_lock);
+ 	pr_debug("%s: configNr = %d\n", __func__, i);
+ 
+ 	return params;
+@@ -1015,12 +1016,14 @@ void rndis_free_response(struct rndis_params *params, u8 *buf)
+ {
+ 	rndis_resp_t *r, *n;
+ 
++	spin_lock(&params->resp_lock);
+ 	list_for_each_entry_safe(r, n, &params->resp_queue, list) {
+ 		if (r->buf == buf) {
+ 			list_del(&r->list);
+ 			kfree(r);
+ 		}
+ 	}
++	spin_unlock(&params->resp_lock);
+ }
+ EXPORT_SYMBOL_GPL(rndis_free_response);
+ 
+@@ -1030,14 +1033,17 @@ u8 *rndis_get_next_response(struct rndis_params *params, u32 *length)
+ 
+ 	if (!length) return NULL;
+ 
++	spin_lock(&params->resp_lock);
+ 	list_for_each_entry_safe(r, n, &params->resp_queue, list) {
+ 		if (!r->send) {
+ 			r->send = 1;
+ 			*length = r->length;
++			spin_unlock(&params->resp_lock);
+ 			return r->buf;
+ 		}
+ 	}
+ 
++	spin_unlock(&params->resp_lock);
+ 	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(rndis_get_next_response);
+@@ -1054,7 +1060,9 @@ static rndis_resp_t *rndis_add_response(struct rndis_params *params, u32 length)
+ 	r->length = length;
+ 	r->send = 0;
+ 
++	spin_lock(&params->resp_lock);
+ 	list_add_tail(&r->list, &params->resp_queue);
++	spin_unlock(&params->resp_lock);
+ 	return r;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/rndis.h b/drivers/usb/gadget/function/rndis.h
+index f6167f7fea82b..6206b8b7490f6 100644
+--- a/drivers/usb/gadget/function/rndis.h
++++ b/drivers/usb/gadget/function/rndis.h
+@@ -174,6 +174,7 @@ typedef struct rndis_params {
+ 	void			(*resp_avail)(void *v);
+ 	void			*v;
+ 	struct list_head	resp_queue;
++	spinlock_t		resp_lock;
+ } rndis_params;
+ 
+ /* RNDIS Message parser and other useless functions */
+diff --git a/drivers/usb/gadget/udc/udc-xilinx.c b/drivers/usb/gadget/udc/udc-xilinx.c
+index 857159dd5ae05..540824534e962 100644
+--- a/drivers/usb/gadget/udc/udc-xilinx.c
++++ b/drivers/usb/gadget/udc/udc-xilinx.c
+@@ -1615,6 +1615,8 @@ static void xudc_getstatus(struct xusb_udc *udc)
+ 		break;
+ 	case USB_RECIP_ENDPOINT:
+ 		epnum = udc->setup.wIndex & USB_ENDPOINT_NUMBER_MASK;
++		if (epnum >= XUSB_MAX_ENDPOINTS)
++			goto stall;
+ 		target_ep = &udc->ep[epnum];
+ 		epcfgreg = udc->read_fn(udc->addr + target_ep->offset);
+ 		halt = epcfgreg & XUSB_EP_CFG_STALL_MASK;
+@@ -1682,6 +1684,10 @@ static void xudc_set_clear_feature(struct xusb_udc *udc)
+ 	case USB_RECIP_ENDPOINT:
+ 		if (!udc->setup.wValue) {
+ 			endpoint = udc->setup.wIndex & USB_ENDPOINT_NUMBER_MASK;
++			if (endpoint >= XUSB_MAX_ENDPOINTS) {
++				xudc_ep0_stall(udc);
++				return;
++			}
+ 			target_ep = &udc->ep[endpoint];
+ 			outinbit = udc->setup.wIndex & USB_ENDPOINT_DIR_MASK;
+ 			outinbit = outinbit >> 7;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index f5b1bcc875ded..d7c0bf494d930 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1091,6 +1091,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 	int			retval = 0;
+ 	bool			comp_timer_running = false;
+ 	bool			pending_portevent = false;
++	bool			reinit_xhc = false;
+ 
+ 	if (!hcd->state)
+ 		return 0;
+@@ -1107,10 +1108,11 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 	set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
+ 
+ 	spin_lock_irq(&xhci->lock);
+-	if ((xhci->quirks & XHCI_RESET_ON_RESUME) || xhci->broken_suspend)
+-		hibernated = true;
+ 
+-	if (!hibernated) {
++	if (hibernated || xhci->quirks & XHCI_RESET_ON_RESUME || xhci->broken_suspend)
++		reinit_xhc = true;
++
++	if (!reinit_xhc) {
+ 		/*
+ 		 * Some controllers might lose power during suspend, so wait
+ 		 * for controller not ready bit to clear, just as in xHC init.
+@@ -1143,12 +1145,17 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 			spin_unlock_irq(&xhci->lock);
+ 			return -ETIMEDOUT;
+ 		}
+-		temp = readl(&xhci->op_regs->status);
+ 	}
+ 
+-	/* If restore operation fails, re-initialize the HC during resume */
+-	if ((temp & STS_SRE) || hibernated) {
++	temp = readl(&xhci->op_regs->status);
+ 
++	/* re-initialize the HC on Restore Error, or Host Controller Error */
++	if (temp & (STS_SRE | STS_HCE)) {
++		reinit_xhc = true;
++		xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp);
++	}
++
++	if (reinit_xhc) {
+ 		if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) &&
+ 				!(xhci_all_ports_seen_u0(xhci))) {
+ 			del_timer_sync(&xhci->comp_mode_recovery_timer);
+@@ -1604,9 +1611,12 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 	struct urb_priv	*urb_priv;
+ 	int num_tds;
+ 
+-	if (!urb || xhci_check_args(hcd, urb->dev, urb->ep,
+-					true, true, __func__) <= 0)
++	if (!urb)
+ 		return -EINVAL;
++	ret = xhci_check_args(hcd, urb->dev, urb->ep,
++					true, true, __func__);
++	if (ret <= 0)
++		return ret ? ret : -EINVAL;
+ 
+ 	slot_id = urb->dev->slot_id;
+ 	ep_index = xhci_get_endpoint_index(&urb->ep->desc);
+@@ -3323,7 +3333,7 @@ static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+ 		return -EINVAL;
+ 	ret = xhci_check_args(xhci_to_hcd(xhci), udev, ep, 1, true, __func__);
+ 	if (ret <= 0)
+-		return -EINVAL;
++		return ret ? ret : -EINVAL;
+ 	if (usb_ss_max_streams(&ep->ss_ep_comp) == 0) {
+ 		xhci_warn(xhci, "WARN: SuperSpeed Endpoint Companion"
+ 				" descriptor for ep 0x%x does not support streams\n",
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 58cba8ee0277a..2798fca712612 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -81,7 +81,6 @@
+ #define CH341_QUIRK_SIMULATE_BREAK	BIT(1)
+ 
+ static const struct usb_device_id id_table[] = {
+-	{ USB_DEVICE(0x1a86, 0x5512) },
+ 	{ USB_DEVICE(0x1a86, 0x5523) },
+ 	{ USB_DEVICE(0x1a86, 0x7522) },
+ 	{ USB_DEVICE(0x1a86, 0x7523) },
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 962e9943fc20e..e7755d9cfc61a 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -198,6 +198,8 @@ static void option_instat_callback(struct urb *urb);
+ 
+ #define DELL_PRODUCT_5821E			0x81d7
+ #define DELL_PRODUCT_5821E_ESIM			0x81e0
++#define DELL_PRODUCT_5829E_ESIM			0x81e4
++#define DELL_PRODUCT_5829E			0x81e6
+ 
+ #define KYOCERA_VENDOR_ID			0x0c88
+ #define KYOCERA_PRODUCT_KPC650			0x17da
+@@ -1063,6 +1065,10 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E_ESIM),
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
++	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E),
++	  .driver_info = RSVD(0) | RSVD(6) },
++	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E_ESIM),
++	  .driver_info = RSVD(0) | RSVD(6) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) },	/* ADU-E100, ADU-310 */
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+@@ -1273,10 +1279,16 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff),	/* Telit LE910-S1 (ECM) */
+ 	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701a, 0xff),	/* Telit LE910R1 (RNDIS) */
++	  .driver_info = NCTRL(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x701b, 0xff),	/* Telit LE910R1 (ECM) */
++	  .driver_info = NCTRL(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010),				/* Telit SBL FN980 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9200),				/* Telit LE910S1 flashing device */
+ 	  .driver_info = NCTRL(0) | ZLP },
++	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9201),				/* Telit LE910R1 flashing device */
++	  .driver_info = NCTRL(0) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) },
+diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
+index 6d27a5b5e3cac..7ffcda94d323a 100644
+--- a/drivers/usb/typec/tipd/core.c
++++ b/drivers/usb/typec/tipd/core.c
+@@ -761,12 +761,12 @@ static int tps6598x_probe(struct i2c_client *client)
+ 
+ 	ret = tps6598x_read32(tps, TPS_REG_STATUS, &status);
+ 	if (ret < 0)
+-		return ret;
++		goto err_clear_mask;
+ 	trace_tps6598x_status(status);
+ 
+ 	ret = tps6598x_read32(tps, TPS_REG_SYSTEM_CONF, &conf);
+ 	if (ret < 0)
+-		return ret;
++		goto err_clear_mask;
+ 
+ 	/*
+ 	 * This fwnode has a "compatible" property, but is never populated as a
+@@ -855,7 +855,8 @@ err_role_put:
+ 	usb_role_switch_put(tps->role_sw);
+ err_fwnode_put:
+ 	fwnode_handle_put(fwnode);
+-
++err_clear_mask:
++	tps6598x_write64(tps, TPS_REG_INT_MASK1, 0);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index d6ca1c7ad513f..37f0b4274113c 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -629,16 +629,18 @@ err:
+ 	return ret;
+ }
+ 
+-static int vhost_vsock_stop(struct vhost_vsock *vsock)
++static int vhost_vsock_stop(struct vhost_vsock *vsock, bool check_owner)
+ {
+ 	size_t i;
+-	int ret;
++	int ret = 0;
+ 
+ 	mutex_lock(&vsock->dev.mutex);
+ 
+-	ret = vhost_dev_check_owner(&vsock->dev);
+-	if (ret)
+-		goto err;
++	if (check_owner) {
++		ret = vhost_dev_check_owner(&vsock->dev);
++		if (ret)
++			goto err;
++	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
+ 		struct vhost_virtqueue *vq = &vsock->vqs[i];
+@@ -753,7 +755,12 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
+ 	 * inefficient.  Room for improvement here. */
+ 	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
+ 
+-	vhost_vsock_stop(vsock);
++	/* Don't check the owner, because we are in the release path, so we
++	 * need to stop the vsock device in any case.
++	 * vhost_vsock_stop() can not fail in this case, so we don't need to
++	 * check the return code.
++	 */
++	vhost_vsock_stop(vsock, false);
+ 	vhost_vsock_flush(vsock);
+ 	vhost_dev_stop(&vsock->dev);
+ 
+@@ -868,7 +875,7 @@ static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl,
+ 		if (start)
+ 			return vhost_vsock_start(vsock);
+ 		else
+-			return vhost_vsock_stop(vsock);
++			return vhost_vsock_stop(vsock, true);
+ 	case VHOST_GET_FEATURES:
+ 		features = VHOST_VSOCK_FEATURES;
+ 		if (copy_to_user(argp, &features, sizeof(features)))
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 5fe5eccb3c874..269094176b8b3 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -3315,7 +3315,7 @@ void btrfs_exclop_finish(struct btrfs_fs_info *fs_info);
+ int __init btrfs_auto_defrag_init(void);
+ void __cold btrfs_auto_defrag_exit(void);
+ int btrfs_add_inode_defrag(struct btrfs_trans_handle *trans,
+-			   struct btrfs_inode *inode);
++			   struct btrfs_inode *inode, u32 extent_thresh);
+ int btrfs_run_defrag_inodes(struct btrfs_fs_info *fs_info);
+ void btrfs_cleanup_defrag_inodes(struct btrfs_fs_info *fs_info);
+ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 11204dbbe0530..a0179cc62913b 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -50,11 +50,14 @@ struct inode_defrag {
+ 	/* root objectid */
+ 	u64 root;
+ 
+-	/* last offset we were able to defrag */
+-	u64 last_offset;
+-
+-	/* if we've wrapped around back to zero once already */
+-	int cycled;
++	/*
++	 * The extent size threshold for autodefrag.
++	 *
++	 * This value is different for compressed/non-compressed extents,
++	 * thus needs to be passed from higher layer.
++	 * (aka, inode_should_defrag())
++	 */
++	u32 extent_thresh;
+ };
+ 
+ static int __compare_inode_defrag(struct inode_defrag *defrag1,
+@@ -107,8 +110,8 @@ static int __btrfs_add_inode_defrag(struct btrfs_inode *inode,
+ 			 */
+ 			if (defrag->transid < entry->transid)
+ 				entry->transid = defrag->transid;
+-			if (defrag->last_offset > entry->last_offset)
+-				entry->last_offset = defrag->last_offset;
++			entry->extent_thresh = min(defrag->extent_thresh,
++						   entry->extent_thresh);
+ 			return -EEXIST;
+ 		}
+ 	}
+@@ -134,7 +137,7 @@ static inline int __need_auto_defrag(struct btrfs_fs_info *fs_info)
+  * enabled
+  */
+ int btrfs_add_inode_defrag(struct btrfs_trans_handle *trans,
+-			   struct btrfs_inode *inode)
++			   struct btrfs_inode *inode, u32 extent_thresh)
+ {
+ 	struct btrfs_root *root = inode->root;
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+@@ -160,6 +163,7 @@ int btrfs_add_inode_defrag(struct btrfs_trans_handle *trans,
+ 	defrag->ino = btrfs_ino(inode);
+ 	defrag->transid = transid;
+ 	defrag->root = root->root_key.objectid;
++	defrag->extent_thresh = extent_thresh;
+ 
+ 	spin_lock(&fs_info->defrag_inodes_lock);
+ 	if (!test_bit(BTRFS_INODE_IN_DEFRAG, &inode->runtime_flags)) {
+@@ -178,34 +182,6 @@ int btrfs_add_inode_defrag(struct btrfs_trans_handle *trans,
+ 	return 0;
+ }
+ 
+-/*
+- * Requeue the defrag object. If there is a defrag object that points to
+- * the same inode in the tree, we will merge them together (by
+- * __btrfs_add_inode_defrag()) and free the one that we want to requeue.
+- */
+-static void btrfs_requeue_inode_defrag(struct btrfs_inode *inode,
+-				       struct inode_defrag *defrag)
+-{
+-	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+-	int ret;
+-
+-	if (!__need_auto_defrag(fs_info))
+-		goto out;
+-
+-	/*
+-	 * Here we don't check the IN_DEFRAG flag, because we need merge
+-	 * them together.
+-	 */
+-	spin_lock(&fs_info->defrag_inodes_lock);
+-	ret = __btrfs_add_inode_defrag(inode, defrag);
+-	spin_unlock(&fs_info->defrag_inodes_lock);
+-	if (ret)
+-		goto out;
+-	return;
+-out:
+-	kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
+-}
+-
+ /*
+  * pick the defragable inode that we want, if it doesn't exist, we will get
+  * the next one.
+@@ -278,8 +254,14 @@ static int __btrfs_run_defrag_inode(struct btrfs_fs_info *fs_info,
+ 	struct btrfs_root *inode_root;
+ 	struct inode *inode;
+ 	struct btrfs_ioctl_defrag_range_args range;
+-	int num_defrag;
+-	int ret;
++	int ret = 0;
++	u64 cur = 0;
++
++again:
++	if (test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state))
++		goto cleanup;
++	if (!__need_auto_defrag(fs_info))
++		goto cleanup;
+ 
+ 	/* get the inode */
+ 	inode_root = btrfs_get_fs_root(fs_info, defrag->root, true);
+@@ -295,39 +277,30 @@ static int __btrfs_run_defrag_inode(struct btrfs_fs_info *fs_info,
+ 		goto cleanup;
+ 	}
+ 
++	if (cur >= i_size_read(inode)) {
++		iput(inode);
++		goto cleanup;
++	}
++
+ 	/* do a chunk of defrag */
+ 	clear_bit(BTRFS_INODE_IN_DEFRAG, &BTRFS_I(inode)->runtime_flags);
+ 	memset(&range, 0, sizeof(range));
+ 	range.len = (u64)-1;
+-	range.start = defrag->last_offset;
++	range.start = cur;
++	range.extent_thresh = defrag->extent_thresh;
+ 
+ 	sb_start_write(fs_info->sb);
+-	num_defrag = btrfs_defrag_file(inode, NULL, &range, defrag->transid,
++	ret = btrfs_defrag_file(inode, NULL, &range, defrag->transid,
+ 				       BTRFS_DEFRAG_BATCH);
+ 	sb_end_write(fs_info->sb);
+-	/*
+-	 * if we filled the whole defrag batch, there
+-	 * must be more work to do.  Queue this defrag
+-	 * again
+-	 */
+-	if (num_defrag == BTRFS_DEFRAG_BATCH) {
+-		defrag->last_offset = range.start;
+-		btrfs_requeue_inode_defrag(BTRFS_I(inode), defrag);
+-	} else if (defrag->last_offset && !defrag->cycled) {
+-		/*
+-		 * we didn't fill our defrag batch, but
+-		 * we didn't start at zero.  Make sure we loop
+-		 * around to the start of the file.
+-		 */
+-		defrag->last_offset = 0;
+-		defrag->cycled = 1;
+-		btrfs_requeue_inode_defrag(BTRFS_I(inode), defrag);
+-	} else {
+-		kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
+-	}
+-
+ 	iput(inode);
+-	return 0;
++
++	if (ret < 0)
++		goto cleanup;
++
++	cur = max(cur + fs_info->sectorsize, range.start);
++	goto again;
++
+ cleanup:
+ 	kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
+ 	return ret;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 39a6745434613..3be5735372ee8 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -561,12 +561,12 @@ static inline int inode_need_compress(struct btrfs_inode *inode, u64 start,
+ }
+ 
+ static inline void inode_should_defrag(struct btrfs_inode *inode,
+-		u64 start, u64 end, u64 num_bytes, u64 small_write)
++		u64 start, u64 end, u64 num_bytes, u32 small_write)
+ {
+ 	/* If this is a small write inside eof, kick off a defrag */
+ 	if (num_bytes < small_write &&
+ 	    (start > 0 || end + 1 < inode->disk_i_size))
+-		btrfs_add_inode_defrag(NULL, inode);
++		btrfs_add_inode_defrag(NULL, inode, small_write);
+ }
+ 
+ /*
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index cec7163bc8730..541a4fbfd79ec 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1020,23 +1020,37 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start,
+ 	return em;
+ }
+ 
++static u32 get_extent_max_capacity(const struct extent_map *em)
++{
++	if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags))
++		return BTRFS_MAX_COMPRESSED;
++	return BTRFS_MAX_EXTENT_SIZE;
++}
++
+ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+ 				     bool locked)
+ {
+ 	struct extent_map *next;
+-	bool ret = true;
++	bool ret = false;
+ 
+ 	/* this is the last extent */
+ 	if (em->start + em->len >= i_size_read(inode))
+ 		return false;
+ 
+ 	next = defrag_lookup_extent(inode, em->start + em->len, locked);
++	/* No more em or hole */
+ 	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+-		ret = false;
+-	else if ((em->block_start + em->block_len == next->block_start) &&
+-		 (em->block_len > SZ_128K && next->block_len > SZ_128K))
+-		ret = false;
+-
++		goto out;
++	if (test_bit(EXTENT_FLAG_PREALLOC, &next->flags))
++		goto out;
++	/*
++	 * If the next extent is at its max capacity, defragging current extent
++	 * makes no sense, as the total number of extents won't change.
++	 */
++	if (next->len >= get_extent_max_capacity(em))
++		goto out;
++	ret = true;
++out:
+ 	free_extent_map(next);
+ 	return ret;
+ }
+@@ -1160,8 +1174,10 @@ struct defrag_target_range {
+ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 				  u64 start, u64 len, u32 extent_thresh,
+ 				  u64 newer_than, bool do_compress,
+-				  bool locked, struct list_head *target_list)
++				  bool locked, struct list_head *target_list,
++				  u64 *last_scanned_ret)
+ {
++	bool last_is_target = false;
+ 	u64 cur = start;
+ 	int ret = 0;
+ 
+@@ -1171,6 +1187,7 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 		bool next_mergeable = true;
+ 		u64 range_len;
+ 
++		last_is_target = false;
+ 		em = defrag_lookup_extent(&inode->vfs_inode, cur, locked);
+ 		if (!em)
+ 			break;
+@@ -1228,6 +1245,13 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 		if (range_len >= extent_thresh)
+ 			goto next;
+ 
++		/*
++		 * Skip extents already at its max capacity, this is mostly for
++		 * compressed extents, which max cap is only 128K.
++		 */
++		if (em->len >= get_extent_max_capacity(em))
++			goto next;
++
+ 		next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em,
+ 							  locked);
+ 		if (!next_mergeable) {
+@@ -1246,6 +1270,7 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 		}
+ 
+ add:
++		last_is_target = true;
+ 		range_len = min(extent_map_end(em), start + len) - cur;
+ 		/*
+ 		 * This one is a good target, check if it can be merged into
+@@ -1289,6 +1314,17 @@ next:
+ 			kfree(entry);
+ 		}
+ 	}
++	if (!ret && last_scanned_ret) {
++		/*
++		 * If the last extent is not a target, the caller can skip to
++		 * the end of that extent.
++		 * Otherwise, we can only go the end of the specified range.
++		 */
++		if (!last_is_target)
++			*last_scanned_ret = max(cur, *last_scanned_ret);
++		else
++			*last_scanned_ret = max(start + len, *last_scanned_ret);
++	}
+ 	return ret;
+ }
+ 
+@@ -1347,7 +1383,8 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
+ }
+ 
+ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
+-			    u32 extent_thresh, u64 newer_than, bool do_compress)
++			    u32 extent_thresh, u64 newer_than, bool do_compress,
++			    u64 *last_scanned_ret)
+ {
+ 	struct extent_state *cached_state = NULL;
+ 	struct defrag_target_range *entry;
+@@ -1393,7 +1430,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
+ 	 */
+ 	ret = defrag_collect_targets(inode, start, len, extent_thresh,
+ 				     newer_than, do_compress, true,
+-				     &target_list);
++				     &target_list, last_scanned_ret);
+ 	if (ret < 0)
+ 		goto unlock_extent;
+ 
+@@ -1428,7 +1465,8 @@ static int defrag_one_cluster(struct btrfs_inode *inode,
+ 			      u64 start, u32 len, u32 extent_thresh,
+ 			      u64 newer_than, bool do_compress,
+ 			      unsigned long *sectors_defragged,
+-			      unsigned long max_sectors)
++			      unsigned long max_sectors,
++			      u64 *last_scanned_ret)
+ {
+ 	const u32 sectorsize = inode->root->fs_info->sectorsize;
+ 	struct defrag_target_range *entry;
+@@ -1439,7 +1477,7 @@ static int defrag_one_cluster(struct btrfs_inode *inode,
+ 	BUILD_BUG_ON(!IS_ALIGNED(CLUSTER_SIZE, PAGE_SIZE));
+ 	ret = defrag_collect_targets(inode, start, len, extent_thresh,
+ 				     newer_than, do_compress, false,
+-				     &target_list);
++				     &target_list, NULL);
+ 	if (ret < 0)
+ 		goto out;
+ 
+@@ -1456,6 +1494,15 @@ static int defrag_one_cluster(struct btrfs_inode *inode,
+ 			range_len = min_t(u32, range_len,
+ 				(max_sectors - *sectors_defragged) * sectorsize);
+ 
++		/*
++		 * If defrag_one_range() has updated last_scanned_ret,
++		 * our range may already be invalid (e.g. hole punched).
++		 * Skip if our range is before last_scanned_ret, as there is
++		 * no need to defrag the range anymore.
++		 */
++		if (entry->start + range_len <= *last_scanned_ret)
++			continue;
++
+ 		if (ra)
+ 			page_cache_sync_readahead(inode->vfs_inode.i_mapping,
+ 				ra, NULL, entry->start >> PAGE_SHIFT,
+@@ -1468,7 +1515,8 @@ static int defrag_one_cluster(struct btrfs_inode *inode,
+ 		 * accounting.
+ 		 */
+ 		ret = defrag_one_range(inode, entry->start, range_len,
+-				       extent_thresh, newer_than, do_compress);
++				       extent_thresh, newer_than, do_compress,
++				       last_scanned_ret);
+ 		if (ret < 0)
+ 			break;
+ 		*sectors_defragged += range_len >>
+@@ -1479,6 +1527,8 @@ out:
+ 		list_del_init(&entry->list);
+ 		kfree(entry);
+ 	}
++	if (ret >= 0)
++		*last_scanned_ret = max(*last_scanned_ret, start + len);
+ 	return ret;
+ }
+ 
+@@ -1564,6 +1614,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 
+ 	while (cur < last_byte) {
+ 		const unsigned long prev_sectors_defragged = sectors_defragged;
++		u64 last_scanned = cur;
+ 		u64 cluster_end;
+ 
+ 		/* The cluster size 256K should always be page aligned */
+@@ -1593,8 +1644,8 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 			BTRFS_I(inode)->defrag_compress = compress_type;
+ 		ret = defrag_one_cluster(BTRFS_I(inode), ra, cur,
+ 				cluster_end + 1 - cur, extent_thresh,
+-				newer_than, do_compress,
+-				&sectors_defragged, max_to_defrag);
++				newer_than, do_compress, &sectors_defragged,
++				max_to_defrag, &last_scanned);
+ 
+ 		if (sectors_defragged > prev_sectors_defragged)
+ 			balance_dirty_pages_ratelimited(inode->i_mapping);
+@@ -1602,7 +1653,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
+ 		btrfs_inode_unlock(inode, 0);
+ 		if (ret < 0)
+ 			break;
+-		cur = cluster_end + 1;
++		cur = max(cluster_end + 1, last_scanned);
+ 		if (ret > 0) {
+ 			ret = 0;
+ 			break;
+diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
+index 0fb90cbe76697..e6e28a9c79877 100644
+--- a/fs/btrfs/lzo.c
++++ b/fs/btrfs/lzo.c
+@@ -380,6 +380,17 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
+ 		kunmap(cur_page);
+ 		cur_in += LZO_LEN;
+ 
++		if (seg_len > lzo1x_worst_compress(PAGE_SIZE)) {
++			/*
++			 * seg_len shouldn't be larger than we have allocated
++			 * for workspace->cbuf
++			 */
++			btrfs_err(fs_info, "unexpectedly large lzo segment len %u",
++					seg_len);
++			ret = -EIO;
++			goto out;
++		}
++
+ 		/* Copy the compressed segment payload into workspace */
+ 		copy_compressed_segment(cb, workspace->cbuf, seg_len, &cur_in);
+ 
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 7733e8ac0a698..51382d2be3d44 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -965,6 +965,7 @@ static int check_dev_item(struct extent_buffer *leaf,
+ 			  struct btrfs_key *key, int slot)
+ {
+ 	struct btrfs_dev_item *ditem;
++	const u32 item_size = btrfs_item_size_nr(leaf, slot);
+ 
+ 	if (unlikely(key->objectid != BTRFS_DEV_ITEMS_OBJECTID)) {
+ 		dev_item_err(leaf, slot,
+@@ -972,6 +973,13 @@ static int check_dev_item(struct extent_buffer *leaf,
+ 			     key->objectid, BTRFS_DEV_ITEMS_OBJECTID);
+ 		return -EUCLEAN;
+ 	}
++
++	if (unlikely(item_size != sizeof(*ditem))) {
++		dev_item_err(leaf, slot, "invalid item size: has %u expect %zu",
++			     item_size, sizeof(*ditem));
++		return -EUCLEAN;
++	}
++
+ 	ditem = btrfs_item_ptr(leaf, slot, struct btrfs_dev_item);
+ 	if (unlikely(btrfs_device_id(leaf, ditem) != key->offset)) {
+ 		dev_item_err(leaf, slot,
+@@ -1007,6 +1015,7 @@ static int check_inode_item(struct extent_buffer *leaf,
+ 	struct btrfs_inode_item *iitem;
+ 	u64 super_gen = btrfs_super_generation(fs_info->super_copy);
+ 	u32 valid_mask = (S_IFMT | S_ISUID | S_ISGID | S_ISVTX | 0777);
++	const u32 item_size = btrfs_item_size_nr(leaf, slot);
+ 	u32 mode;
+ 	int ret;
+ 	u32 flags;
+@@ -1016,6 +1025,12 @@ static int check_inode_item(struct extent_buffer *leaf,
+ 	if (unlikely(ret < 0))
+ 		return ret;
+ 
++	if (unlikely(item_size != sizeof(*iitem))) {
++		generic_err(leaf, slot, "invalid item size: has %u expect %zu",
++			    item_size, sizeof(*iitem));
++		return -EUCLEAN;
++	}
++
+ 	iitem = btrfs_item_ptr(leaf, slot, struct btrfs_inode_item);
+ 
+ 	/* Here we use super block generation + 1 to handle log tree */
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index d3cd2a94d1e8c..d1f9d26322027 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -34,6 +34,14 @@
+  */
+ DEFINE_SPINLOCK(configfs_dirent_lock);
+ 
++/*
++ * All of link_obj/unlink_obj/link_group/unlink_group require that
++ * subsys->su_mutex is held.
++ * But parent configfs_subsystem is NULL when config_item is root.
++ * Use this mutex when config_item is root.
++ */
++static DEFINE_MUTEX(configfs_subsystem_mutex);
++
+ static void configfs_d_iput(struct dentry * dentry,
+ 			    struct inode * inode)
+ {
+@@ -1859,7 +1867,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
+ 		group->cg_item.ci_name = group->cg_item.ci_namebuf;
+ 
+ 	sd = root->d_fsdata;
++	mutex_lock(&configfs_subsystem_mutex);
+ 	link_group(to_config_group(sd->s_element), group);
++	mutex_unlock(&configfs_subsystem_mutex);
+ 
+ 	inode_lock_nested(d_inode(root), I_MUTEX_PARENT);
+ 
+@@ -1884,7 +1894,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
+ 	inode_unlock(d_inode(root));
+ 
+ 	if (err) {
++		mutex_lock(&configfs_subsystem_mutex);
+ 		unlink_group(group);
++		mutex_unlock(&configfs_subsystem_mutex);
+ 		configfs_release_fs();
+ 	}
+ 	put_fragment(frag);
+@@ -1931,7 +1943,9 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
+ 
+ 	dput(dentry);
+ 
++	mutex_lock(&configfs_subsystem_mutex);
+ 	unlink_group(group);
++	mutex_unlock(&configfs_subsystem_mutex);
+ 	configfs_release_fs();
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index a92f276f21d9c..db724482cd117 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4477,6 +4477,7 @@ static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
+ 		} else {
+ 			list_add_tail(&buf->list, &(*head)->list);
+ 		}
++		cond_resched();
+ 	}
+ 
+ 	return i ? i : -ENOMEM;
+@@ -7633,7 +7634,7 @@ static int io_run_task_work_sig(void)
+ /* when returns >0, the caller should retry */
+ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 					  struct io_wait_queue *iowq,
+-					  signed long *timeout)
++					  ktime_t timeout)
+ {
+ 	int ret;
+ 
+@@ -7645,8 +7646,9 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 	if (test_bit(0, &ctx->check_cq_overflow))
+ 		return 1;
+ 
+-	*timeout = schedule_timeout(*timeout);
+-	return !*timeout ? -ETIME : 1;
++	if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
++		return -ETIME;
++	return 1;
+ }
+ 
+ /*
+@@ -7659,7 +7661,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ {
+ 	struct io_wait_queue iowq;
+ 	struct io_rings *rings = ctx->rings;
+-	signed long timeout = MAX_SCHEDULE_TIMEOUT;
++	ktime_t timeout = KTIME_MAX;
+ 	int ret;
+ 
+ 	do {
+@@ -7675,7 +7677,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 
+ 		if (get_timespec64(&ts, uts))
+ 			return -EFAULT;
+-		timeout = timespec64_to_jiffies(&ts);
++		timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
+ 	}
+ 
+ 	if (sig) {
+@@ -7707,7 +7709,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 		}
+ 		prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
+ 						TASK_INTERRUPTIBLE);
+-		ret = io_cqring_wait_schedule(ctx, &iowq, &timeout);
++		ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
+ 		finish_wait(&ctx->cq_wait, &iowq.wq);
+ 		cond_resched();
+ 	} while (ret > 0);
+@@ -7864,7 +7866,15 @@ static __cold int io_rsrc_ref_quiesce(struct io_rsrc_data *data,
+ 		ret = wait_for_completion_interruptible(&data->done);
+ 		if (!ret) {
+ 			mutex_lock(&ctx->uring_lock);
+-			break;
++			if (atomic_read(&data->refs) > 0) {
++				/*
++				 * it has been revived by another thread while
++				 * we were unlocked
++				 */
++				mutex_unlock(&ctx->uring_lock);
++			} else {
++				break;
++			}
+ 		}
+ 
+ 		atomic_inc(&data->refs);
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 3616839c5c4b6..f2625a372a3ae 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -264,7 +264,6 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ 			if (!gid_valid(gid))
+ 				return -EINVAL;
+ 			opts->gid = gid;
+-			set_gid(tracefs_mount->mnt_root, gid);
+ 			break;
+ 		case Opt_mode:
+ 			if (match_octal(&args[0], &option))
+@@ -291,7 +290,9 @@ static int tracefs_apply_options(struct super_block *sb)
+ 	inode->i_mode |= opts->mode;
+ 
+ 	inode->i_uid = opts->uid;
+-	inode->i_gid = opts->gid;
++
++	/* Set all the group ids to the mount option */
++	set_gid(sb->s_root, opts->gid);
+ 
+ 	return 0;
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 29b9b199c56bb..7078938ba235c 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -209,11 +209,9 @@ static inline bool map_value_has_timer(const struct bpf_map *map)
+ static inline void check_and_init_map_value(struct bpf_map *map, void *dst)
+ {
+ 	if (unlikely(map_value_has_spin_lock(map)))
+-		*(struct bpf_spin_lock *)(dst + map->spin_lock_off) =
+-			(struct bpf_spin_lock){};
++		memset(dst + map->spin_lock_off, 0, sizeof(struct bpf_spin_lock));
+ 	if (unlikely(map_value_has_timer(map)))
+-		*(struct bpf_timer *)(dst + map->timer_off) =
+-			(struct bpf_timer){};
++		memset(dst + map->timer_off, 0, sizeof(struct bpf_timer));
+ }
+ 
+ /* copy everything but bpf_spin_lock and bpf_timer. There could be one of each. */
+@@ -224,7 +222,8 @@ static inline void copy_map_value(struct bpf_map *map, void *dst, void *src)
+ 	if (unlikely(map_value_has_spin_lock(map))) {
+ 		s_off = map->spin_lock_off;
+ 		s_sz = sizeof(struct bpf_spin_lock);
+-	} else if (unlikely(map_value_has_timer(map))) {
++	}
++	if (unlikely(map_value_has_timer(map))) {
+ 		t_off = map->timer_off;
+ 		t_sz = sizeof(struct bpf_timer);
+ 	}
+diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
+index 98efb7b5660d9..c9a3ac9efeaa9 100644
+--- a/include/linux/nvmem-provider.h
++++ b/include/linux/nvmem-provider.h
+@@ -70,7 +70,8 @@ struct nvmem_keepout {
+  * @word_size:	Minimum read/write access granularity.
+  * @stride:	Minimum read/write access stride.
+  * @priv:	User context passed to read/write callbacks.
+- * @wp-gpio:   Write protect pin
++ * @wp-gpio:	Write protect pin
++ * @ignore_wp:  Write Protect pin is managed by the provider.
+  *
+  * Note: A default "nvmem<id>" name will be assigned to the device if
+  * no name is specified in its configuration. In such case "<id>" is
+@@ -92,6 +93,7 @@ struct nvmem_config {
+ 	enum nvmem_type		type;
+ 	bool			read_only;
+ 	bool			root_only;
++	bool			ignore_wp;
+ 	struct device_node	*of_node;
+ 	bool			no_of_node;
+ 	nvmem_reg_read_t	reg_read;
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 584d94be9c8b0..18a717fe62eb0 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -507,12 +507,6 @@ static inline bool sk_psock_strp_enabled(struct sk_psock *psock)
+ 	return !!psock->saved_data_ready;
+ }
+ 
+-static inline bool sk_is_tcp(const struct sock *sk)
+-{
+-	return sk->sk_type == SOCK_STREAM &&
+-	       sk->sk_protocol == IPPROTO_TCP;
+-}
+-
+ static inline bool sk_is_udp(const struct sock *sk)
+ {
+ 	return sk->sk_type == SOCK_DGRAM &&
+diff --git a/include/linux/slab.h b/include/linux/slab.h
+index 181045148b065..79c2ff9256d04 100644
+--- a/include/linux/slab.h
++++ b/include/linux/slab.h
+@@ -669,8 +669,7 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
+  * allocator where we care about the real place the memory allocation
+  * request comes from.
+  */
+-extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
+-				   __alloc_size(1);
++extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller);
+ #define kmalloc_track_caller(size, flags) \
+ 	__kmalloc_track_caller(size, flags, _RET_IP_)
+ 
+diff --git a/include/net/checksum.h b/include/net/checksum.h
+index 5b96d5bd6e545..d3b5d368a0caa 100644
+--- a/include/net/checksum.h
++++ b/include/net/checksum.h
+@@ -22,7 +22,7 @@
+ #include <asm/checksum.h>
+ 
+ #ifndef _HAVE_ARCH_COPY_AND_CSUM_FROM_USER
+-static inline
++static __always_inline
+ __wsum csum_and_copy_from_user (const void __user *src, void *dst,
+ 				      int len)
+ {
+@@ -33,7 +33,7 @@ __wsum csum_and_copy_from_user (const void __user *src, void *dst,
+ #endif
+ 
+ #ifndef HAVE_CSUM_COPY_USER
+-static __inline__ __wsum csum_and_copy_to_user
++static __always_inline __wsum csum_and_copy_to_user
+ (const void *src, void __user *dst, int len)
+ {
+ 	__wsum sum = csum_partial(src, len, ~0U);
+@@ -45,7 +45,7 @@ static __inline__ __wsum csum_and_copy_to_user
+ #endif
+ 
+ #ifndef _HAVE_ARCH_CSUM_AND_COPY
+-static inline __wsum
++static __always_inline __wsum
+ csum_partial_copy_nocheck(const void *src, void *dst, int len)
+ {
+ 	memcpy(dst, src, len);
+@@ -54,7 +54,7 @@ csum_partial_copy_nocheck(const void *src, void *dst, int len)
+ #endif
+ 
+ #ifndef HAVE_ARCH_CSUM_ADD
+-static inline __wsum csum_add(__wsum csum, __wsum addend)
++static __always_inline __wsum csum_add(__wsum csum, __wsum addend)
+ {
+ 	u32 res = (__force u32)csum;
+ 	res += (__force u32)addend;
+@@ -62,12 +62,12 @@ static inline __wsum csum_add(__wsum csum, __wsum addend)
+ }
+ #endif
+ 
+-static inline __wsum csum_sub(__wsum csum, __wsum addend)
++static __always_inline __wsum csum_sub(__wsum csum, __wsum addend)
+ {
+ 	return csum_add(csum, ~addend);
+ }
+ 
+-static inline __sum16 csum16_add(__sum16 csum, __be16 addend)
++static __always_inline __sum16 csum16_add(__sum16 csum, __be16 addend)
+ {
+ 	u16 res = (__force u16)csum;
+ 
+@@ -75,12 +75,12 @@ static inline __sum16 csum16_add(__sum16 csum, __be16 addend)
+ 	return (__force __sum16)(res + (res < (__force u16)addend));
+ }
+ 
+-static inline __sum16 csum16_sub(__sum16 csum, __be16 addend)
++static __always_inline __sum16 csum16_sub(__sum16 csum, __be16 addend)
+ {
+ 	return csum16_add(csum, ~addend);
+ }
+ 
+-static inline __wsum csum_shift(__wsum sum, int offset)
++static __always_inline __wsum csum_shift(__wsum sum, int offset)
+ {
+ 	/* rotate sum to align it with a 16b boundary */
+ 	if (offset & 1)
+@@ -88,42 +88,43 @@ static inline __wsum csum_shift(__wsum sum, int offset)
+ 	return sum;
+ }
+ 
+-static inline __wsum
++static __always_inline __wsum
+ csum_block_add(__wsum csum, __wsum csum2, int offset)
+ {
+ 	return csum_add(csum, csum_shift(csum2, offset));
+ }
+ 
+-static inline __wsum
++static __always_inline __wsum
+ csum_block_add_ext(__wsum csum, __wsum csum2, int offset, int len)
+ {
+ 	return csum_block_add(csum, csum2, offset);
+ }
+ 
+-static inline __wsum
++static __always_inline __wsum
+ csum_block_sub(__wsum csum, __wsum csum2, int offset)
+ {
+ 	return csum_block_add(csum, ~csum2, offset);
+ }
+ 
+-static inline __wsum csum_unfold(__sum16 n)
++static __always_inline __wsum csum_unfold(__sum16 n)
+ {
+ 	return (__force __wsum)n;
+ }
+ 
+-static inline __wsum csum_partial_ext(const void *buff, int len, __wsum sum)
++static __always_inline
++__wsum csum_partial_ext(const void *buff, int len, __wsum sum)
+ {
+ 	return csum_partial(buff, len, sum);
+ }
+ 
+ #define CSUM_MANGLED_0 ((__force __sum16)0xffff)
+ 
+-static inline void csum_replace_by_diff(__sum16 *sum, __wsum diff)
++static __always_inline void csum_replace_by_diff(__sum16 *sum, __wsum diff)
+ {
+ 	*sum = csum_fold(csum_add(diff, ~csum_unfold(*sum)));
+ }
+ 
+-static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
++static __always_inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
+ {
+ 	__wsum tmp = csum_sub(~csum_unfold(*sum), (__force __wsum)from);
+ 
+@@ -136,11 +137,16 @@ static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
+  *  m : old value of a 16bit field
+  *  m' : new value of a 16bit field
+  */
+-static inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new)
++static __always_inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new)
+ {
+ 	*sum = ~csum16_add(csum16_sub(~(*sum), old), new);
+ }
+ 
++static inline void csum_replace(__wsum *csum, __wsum old, __wsum new)
++{
++	*csum = csum_add(csum_sub(*csum, old), new);
++}
++
+ struct sk_buff;
+ void inet_proto_csum_replace4(__sum16 *sum, struct sk_buff *skb,
+ 			      __be32 from, __be32 to, bool pseudohdr);
+@@ -150,16 +156,16 @@ void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
+ void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
+ 				     __wsum diff, bool pseudohdr);
+ 
+-static inline void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
+-					    __be16 from, __be16 to,
+-					    bool pseudohdr)
++static __always_inline
++void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
++			      __be16 from, __be16 to, bool pseudohdr)
+ {
+ 	inet_proto_csum_replace4(sum, skb, (__force __be32)from,
+ 				 (__force __be32)to, pseudohdr);
+ }
+ 
+-static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
+-				    int start, int offset)
++static __always_inline __wsum remcsum_adjust(void *ptr, __wsum csum,
++					     int start, int offset)
+ {
+ 	__sum16 *psum = (__sum16 *)(ptr + offset);
+ 	__wsum delta;
+@@ -175,7 +181,7 @@ static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
+ 	return delta;
+ }
+ 
+-static inline void remcsum_unadjust(__sum16 *psum, __wsum delta)
++static __always_inline void remcsum_unadjust(__sum16 *psum, __wsum delta)
+ {
+ 	*psum = csum_fold(csum_sub(delta, (__force __wsum)*psum));
+ }
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index a0d9e0b47ab8f..1dbddde8364ab 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -889,9 +889,9 @@ struct nft_expr_ops {
+ 	int				(*offload)(struct nft_offload_ctx *ctx,
+ 						   struct nft_flow_rule *flow,
+ 						   const struct nft_expr *expr);
++	bool				(*offload_action)(const struct nft_expr *expr);
+ 	void				(*offload_stats)(struct nft_expr *expr,
+ 							 const struct flow_stats *stats);
+-	u32				offload_flags;
+ 	const struct nft_expr_type	*type;
+ 	void				*data;
+ };
+diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h
+index f9d95ff82df83..7971478439580 100644
+--- a/include/net/netfilter/nf_tables_offload.h
++++ b/include/net/netfilter/nf_tables_offload.h
+@@ -67,8 +67,6 @@ struct nft_flow_rule {
+ 	struct flow_rule	*rule;
+ };
+ 
+-#define NFT_OFFLOAD_F_ACTION	(1 << 0)
+-
+ void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
+ 				 enum flow_dissector_key_id addr_type);
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index d47e9658da285..cd69595949614 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -504,7 +504,7 @@ struct sock {
+ 	u16			sk_tsflags;
+ 	int			sk_bind_phc;
+ 	u8			sk_shutdown;
+-	u32			sk_tskey;
++	atomic_t		sk_tskey;
+ 	atomic_t		sk_zckey;
+ 
+ 	u8			sk_clockid;
+@@ -2636,7 +2636,7 @@ static inline void _sock_tx_timestamp(struct sock *sk, __u16 tsflags,
+ 		__sock_tx_timestamp(tsflags, tx_flags);
+ 		if (tsflags & SOF_TIMESTAMPING_OPT_ID && tskey &&
+ 		    tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK)
+-			*tskey = sk->sk_tskey++;
++			*tskey = atomic_inc_return(&sk->sk_tskey) - 1;
+ 	}
+ 	if (unlikely(sock_flag(sk, SOCK_WIFI_STATUS)))
+ 		*tx_flags |= SKBTX_WIFI_STATUS;
+@@ -2654,6 +2654,11 @@ static inline void skb_setup_tx_timestamp(struct sk_buff *skb, __u16 tsflags)
+ 			   &skb_shinfo(skb)->tskey);
+ }
+ 
++static inline bool sk_is_tcp(const struct sock *sk)
++{
++	return sk->sk_type == SOCK_STREAM && sk->sk_protocol == IPPROTO_TCP;
++}
++
+ /**
+  * sk_eat_skb - Release a skb if it is no longer needed
+  * @sk: socket to eat this skb from
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index d2ff8ba7ae58f..c9da250fee38c 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -5564,12 +5564,53 @@ static u32 *reg2btf_ids[__BPF_REG_TYPE_MAX] = {
+ #endif
+ };
+ 
++/* Returns true if struct is composed of scalars, 4 levels of nesting allowed */
++static bool __btf_type_is_scalar_struct(struct bpf_verifier_log *log,
++					const struct btf *btf,
++					const struct btf_type *t, int rec)
++{
++	const struct btf_type *member_type;
++	const struct btf_member *member;
++	u32 i;
++
++	if (!btf_type_is_struct(t))
++		return false;
++
++	for_each_member(i, t, member) {
++		const struct btf_array *array;
++
++		member_type = btf_type_skip_modifiers(btf, member->type, NULL);
++		if (btf_type_is_struct(member_type)) {
++			if (rec >= 3) {
++				bpf_log(log, "max struct nesting depth exceeded\n");
++				return false;
++			}
++			if (!__btf_type_is_scalar_struct(log, btf, member_type, rec + 1))
++				return false;
++			continue;
++		}
++		if (btf_type_is_array(member_type)) {
++			array = btf_type_array(member_type);
++			if (!array->nelems)
++				return false;
++			member_type = btf_type_skip_modifiers(btf, array->type, NULL);
++			if (!btf_type_is_scalar(member_type))
++				return false;
++			continue;
++		}
++		if (!btf_type_is_scalar(member_type))
++			return false;
++	}
++	return true;
++}
++
+ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
+ 				    const struct btf *btf, u32 func_id,
+ 				    struct bpf_reg_state *regs,
+ 				    bool ptr_to_mem_ok)
+ {
+ 	struct bpf_verifier_log *log = &env->log;
++	bool is_kfunc = btf_is_kernel(btf);
+ 	const char *func_name, *ref_tname;
+ 	const struct btf_type *t, *ref_t;
+ 	const struct btf_param *args;
+@@ -5622,7 +5663,21 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
+ 
+ 		ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
+ 		ref_tname = btf_name_by_offset(btf, ref_t->name_off);
+-		if (btf_is_kernel(btf)) {
++		if (btf_get_prog_ctx_type(log, btf, t,
++					  env->prog->type, i)) {
++			/* If function expects ctx type in BTF check that caller
++			 * is passing PTR_TO_CTX.
++			 */
++			if (reg->type != PTR_TO_CTX) {
++				bpf_log(log,
++					"arg#%d expected pointer to ctx, but got %s\n",
++					i, btf_type_str(t));
++				return -EINVAL;
++			}
++			if (check_ctx_reg(env, reg, regno))
++				return -EINVAL;
++		} else if (is_kfunc && (reg->type == PTR_TO_BTF_ID ||
++			   (reg2btf_ids[base_type(reg->type)] && !type_flag(reg->type)))) {
+ 			const struct btf_type *reg_ref_t;
+ 			const struct btf *reg_btf;
+ 			const char *reg_ref_tname;
+@@ -5638,14 +5693,9 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
+ 			if (reg->type == PTR_TO_BTF_ID) {
+ 				reg_btf = reg->btf;
+ 				reg_ref_id = reg->btf_id;
+-			} else if (reg2btf_ids[reg->type]) {
+-				reg_btf = btf_vmlinux;
+-				reg_ref_id = *reg2btf_ids[reg->type];
+ 			} else {
+-				bpf_log(log, "kernel function %s args#%d expected pointer to %s %s but R%d is not a pointer to btf_id\n",
+-					func_name, i,
+-					btf_type_str(ref_t), ref_tname, regno);
+-				return -EINVAL;
++				reg_btf = btf_vmlinux;
++				reg_ref_id = *reg2btf_ids[base_type(reg->type)];
+ 			}
+ 
+ 			reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id,
+@@ -5661,23 +5711,24 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
+ 					reg_ref_tname);
+ 				return -EINVAL;
+ 			}
+-		} else if (btf_get_prog_ctx_type(log, btf, t,
+-						 env->prog->type, i)) {
+-			/* If function expects ctx type in BTF check that caller
+-			 * is passing PTR_TO_CTX.
+-			 */
+-			if (reg->type != PTR_TO_CTX) {
+-				bpf_log(log,
+-					"arg#%d expected pointer to ctx, but got %s\n",
+-					i, btf_type_str(t));
+-				return -EINVAL;
+-			}
+-			if (check_ctx_reg(env, reg, regno))
+-				return -EINVAL;
+ 		} else if (ptr_to_mem_ok) {
+ 			const struct btf_type *resolve_ret;
+ 			u32 type_size;
+ 
++			if (is_kfunc) {
++				/* Permit pointer to mem, but only when argument
++				 * type is pointer to scalar, or struct composed
++				 * (recursively) of scalars.
++				 */
++				if (!btf_type_is_scalar(ref_t) &&
++				    !__btf_type_is_scalar_struct(log, btf, ref_t, 0)) {
++					bpf_log(log,
++						"arg#%d pointer type %s %s must point to scalar or struct with scalar\n",
++						i, btf_type_str(ref_t), ref_tname);
++					return -EINVAL;
++				}
++			}
++
+ 			resolve_ret = btf_resolve_size(btf, ref_t, &type_size);
+ 			if (IS_ERR(resolve_ret)) {
+ 				bpf_log(log,
+@@ -5690,6 +5741,8 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
+ 			if (check_mem_reg(env, reg, regno, type_size))
+ 				return -EINVAL;
+ 		} else {
++			bpf_log(log, "reg type unsupported for arg#%d %sfunction %s#%d\n", i,
++				is_kfunc ? "kernel " : "", func_name, func_id);
+ 			return -EINVAL;
+ 		}
+ 	}
+@@ -5739,7 +5792,7 @@ int btf_check_kfunc_arg_match(struct bpf_verifier_env *env,
+ 			      const struct btf *btf, u32 func_id,
+ 			      struct bpf_reg_state *regs)
+ {
+-	return btf_check_func_arg_match(env, btf, func_id, regs, false);
++	return btf_check_func_arg_match(env, btf, func_id, regs, true);
+ }
+ 
+ /* Convert BTF of a function into bpf_reg_state if possible
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 4c6c2c2137458..d2914cb9b7d18 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1354,6 +1354,7 @@ int generic_map_delete_batch(struct bpf_map *map,
+ 		maybe_wait_bpf_programs(map);
+ 		if (err)
+ 			break;
++		cond_resched();
+ 	}
+ 	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
+ 		err = -EFAULT;
+@@ -1411,6 +1412,7 @@ int generic_map_update_batch(struct bpf_map *map,
+ 
+ 		if (err)
+ 			break;
++		cond_resched();
+ 	}
+ 
+ 	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
+@@ -1508,6 +1510,7 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ 		swap(prev_key, key);
+ 		retry = MAP_LOOKUP_RETRIES;
+ 		cp++;
++		cond_resched();
+ 	}
+ 
+ 	if (err == -EFAULT)
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 0e877dbcfeea9..afc6c0e9c966e 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -546,6 +546,7 @@ static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
+ 					  char *buf, size_t nbytes, loff_t off)
+ {
+ 	struct cgroup *cgrp;
++	struct cgroup_file_ctx *ctx;
+ 
+ 	BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
+ 
+@@ -553,8 +554,9 @@ static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
+ 	 * Release agent gets called with all capabilities,
+ 	 * require capabilities to set release agent.
+ 	 */
+-	if ((of->file->f_cred->user_ns != &init_user_ns) ||
+-	    !capable(CAP_SYS_ADMIN))
++	ctx = of->priv;
++	if ((ctx->ns->user_ns != &init_user_ns) ||
++	    !file_ns_capable(of->file, &init_user_ns, CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	cgrp = cgroup_kn_lock_live(of->kn, false);
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index d729cbd2445af..df62527f5e0b1 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2269,6 +2269,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 	cgroup_taskset_first(tset, &css);
+ 	cs = css_cs(css);
+ 
++	cpus_read_lock();
+ 	percpu_down_write(&cpuset_rwsem);
+ 
+ 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+@@ -2322,6 +2323,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ 		wake_up(&cpuset_attach_wq);
+ 
+ 	percpu_up_write(&cpuset_rwsem);
++	cpus_read_unlock();
+ }
+ 
+ /* The various types of files and directories in a cpuset file system */
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 3d5c07239a2a8..67c7979c40c0b 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -955,6 +955,16 @@ traceon_trigger(struct event_trigger_data *data,
+ 		struct trace_buffer *buffer, void *rec,
+ 		struct ring_buffer_event *event)
+ {
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (tracer_tracing_is_on(file->tr))
++			return;
++
++		tracer_tracing_on(file->tr);
++		return;
++	}
++
+ 	if (tracing_is_on())
+ 		return;
+ 
+@@ -966,8 +976,15 @@ traceon_count_trigger(struct event_trigger_data *data,
+ 		      struct trace_buffer *buffer, void *rec,
+ 		      struct ring_buffer_event *event)
+ {
+-	if (tracing_is_on())
+-		return;
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (tracer_tracing_is_on(file->tr))
++			return;
++	} else {
++		if (tracing_is_on())
++			return;
++	}
+ 
+ 	if (!data->count)
+ 		return;
+@@ -975,7 +992,10 @@ traceon_count_trigger(struct event_trigger_data *data,
+ 	if (data->count != -1)
+ 		(data->count)--;
+ 
+-	tracing_on();
++	if (file)
++		tracer_tracing_on(file->tr);
++	else
++		tracing_on();
+ }
+ 
+ static void
+@@ -983,6 +1003,16 @@ traceoff_trigger(struct event_trigger_data *data,
+ 		 struct trace_buffer *buffer, void *rec,
+ 		 struct ring_buffer_event *event)
+ {
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (!tracer_tracing_is_on(file->tr))
++			return;
++
++		tracer_tracing_off(file->tr);
++		return;
++	}
++
+ 	if (!tracing_is_on())
+ 		return;
+ 
+@@ -994,8 +1024,15 @@ traceoff_count_trigger(struct event_trigger_data *data,
+ 		       struct trace_buffer *buffer, void *rec,
+ 		       struct ring_buffer_event *event)
+ {
+-	if (!tracing_is_on())
+-		return;
++	struct trace_event_file *file = data->private_data;
++
++	if (file) {
++		if (!tracer_tracing_is_on(file->tr))
++			return;
++	} else {
++		if (!tracing_is_on())
++			return;
++	}
+ 
+ 	if (!data->count)
+ 		return;
+@@ -1003,7 +1040,10 @@ traceoff_count_trigger(struct event_trigger_data *data,
+ 	if (data->count != -1)
+ 		(data->count)--;
+ 
+-	tracing_off();
++	if (file)
++		tracer_tracing_off(file->tr);
++	else
++		tracing_off();
+ }
+ 
+ static int
+@@ -1200,7 +1240,12 @@ stacktrace_trigger(struct event_trigger_data *data,
+ 		   struct trace_buffer *buffer,  void *rec,
+ 		   struct ring_buffer_event *event)
+ {
+-	trace_dump_stack(STACK_SKIP);
++	struct trace_event_file *file = data->private_data;
++
++	if (file)
++		__trace_stack(file->tr, tracing_gen_ctx(), STACK_SKIP);
++	else
++		trace_dump_stack(STACK_SKIP);
+ }
+ 
+ static void
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 39c4c46c61337..56b437eb85547 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2365,8 +2365,12 @@ static void filemap_get_read_batch(struct address_space *mapping,
+ 			break;
+ 		if (PageReadahead(head))
+ 			break;
+-		xas.xa_index = head->index + thp_nr_pages(head) - 1;
+-		xas.xa_offset = (xas.xa_index >> xas.xa_shift) & XA_CHUNK_MASK;
++		if (PageHead(head)) {
++			xas_set(&xas, head->index + thp_nr_pages(head));
++			/* Handle wrap correctly */
++			if (xas.xa_index - 1 >= max)
++				break;
++		}
+ 		continue;
+ put_page:
+ 		put_page(head);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a1baa198519a2..221239db6389a 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4159,10 +4159,10 @@ static int __init hugepages_setup(char *s)
+ 				pr_warn("HugeTLB: architecture can't support node specific alloc, ignoring!\n");
+ 				return 0;
+ 			}
++			if (tmp >= nr_online_nodes)
++				goto invalid;
+ 			node = tmp;
+ 			p += count + 1;
+-			if (node < 0 || node >= nr_online_nodes)
+-				goto invalid;
+ 			/* Parse hugepages */
+ 			if (sscanf(p, "%lu%n", &tmp, &count) != 1)
+ 				goto invalid;
+@@ -4851,14 +4851,13 @@ again:
+ }
+ 
+ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+-			  unsigned long new_addr, pte_t *src_pte)
++			  unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte)
+ {
+ 	struct hstate *h = hstate_vma(vma);
+ 	struct mm_struct *mm = vma->vm_mm;
+-	pte_t *dst_pte, pte;
+ 	spinlock_t *src_ptl, *dst_ptl;
++	pte_t pte;
+ 
+-	dst_pte = huge_pte_offset(mm, new_addr, huge_page_size(h));
+ 	dst_ptl = huge_pte_lock(h, mm, dst_pte);
+ 	src_ptl = huge_pte_lockptr(h, mm, src_pte);
+ 
+@@ -4917,7 +4916,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
+ 		if (!dst_pte)
+ 			break;
+ 
+-		move_huge_pte(vma, old_addr, new_addr, src_pte);
++		move_huge_pte(vma, old_addr, new_addr, src_pte, dst_pte);
+ 	}
+ 	flush_tlb_range(vma, old_end - len, old_end);
+ 	mmu_notifier_invalidate_range_end(&range);
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 1018e50566f35..b12a364f2766f 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -366,14 +366,20 @@ void __init memblock_discard(void)
+ 		addr = __pa(memblock.reserved.regions);
+ 		size = PAGE_ALIGN(sizeof(struct memblock_region) *
+ 				  memblock.reserved.max);
+-		memblock_free_late(addr, size);
++		if (memblock_reserved_in_slab)
++			kfree(memblock.reserved.regions);
++		else
++			memblock_free_late(addr, size);
+ 	}
+ 
+ 	if (memblock.memory.regions != memblock_memory_init_regions) {
+ 		addr = __pa(memblock.memory.regions);
+ 		size = PAGE_ALIGN(sizeof(struct memblock_region) *
+ 				  memblock.memory.max);
+-		memblock_free_late(addr, size);
++		if (memblock_memory_in_slab)
++			kfree(memblock.memory.regions);
++		else
++			memblock_free_late(addr, size);
+ 	}
+ 
+ 	memblock_memory = NULL;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index a271688780a2c..307ee1174a6e2 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -2006,7 +2006,7 @@ struct j1939_session *j1939_tp_send(struct j1939_priv *priv,
+ 		/* set the end-packet for broadcast */
+ 		session->pkt.last = session->pkt.total;
+ 
+-	skcb->tskey = session->sk->sk_tskey++;
++	skcb->tskey = atomic_inc_return(&session->sk->sk_tskey) - 1;
+ 	session->tskey = skcb->tskey;
+ 
+ 	return session;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 22bed067284fb..d4cdf11656b3f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2711,6 +2711,9 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ 	if (unlikely(flags))
+ 		return -EINVAL;
+ 
++	if (unlikely(len == 0))
++		return 0;
++
+ 	/* First find the starting scatterlist element */
+ 	i = msg->sg.start;
+ 	do {
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 909db87d7383d..f78969d8d8160 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -2254,7 +2254,7 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
+ 		/* Free pulled out fragments. */
+ 		while ((list = skb_shinfo(skb)->frag_list) != insp) {
+ 			skb_shinfo(skb)->frag_list = list->next;
+-			kfree_skb(list);
++			consume_skb(list);
+ 		}
+ 		/* And insert new clone at head. */
+ 		if (clone) {
+@@ -4849,9 +4849,8 @@ static void __skb_complete_tx_timestamp(struct sk_buff *skb,
+ 	serr->header.h4.iif = skb->dev ? skb->dev->ifindex : 0;
+ 	if (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) {
+ 		serr->ee.ee_data = skb_shinfo(skb)->tskey;
+-		if (sk->sk_protocol == IPPROTO_TCP &&
+-		    sk->sk_type == SOCK_STREAM)
+-			serr->ee.ee_data -= sk->sk_tskey;
++		if (sk_is_tcp(sk))
++			serr->ee.ee_data -= atomic_read(&sk->sk_tskey);
+ 	}
+ 
+ 	err = sock_queue_err_skb(sk, skb);
+@@ -4919,8 +4918,7 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ 	if (tsonly) {
+ #ifdef CONFIG_INET
+ 		if ((sk->sk_tsflags & SOF_TIMESTAMPING_OPT_STATS) &&
+-		    sk->sk_protocol == IPPROTO_TCP &&
+-		    sk->sk_type == SOCK_STREAM) {
++		    sk_is_tcp(sk)) {
+ 			skb = tcp_get_timestamping_opt_stats(sk, orig_skb,
+ 							     ack_skb);
+ 			opt_stats = true;
+@@ -6227,7 +6225,7 @@ static int pskb_carve_frag_list(struct sk_buff *skb,
+ 	/* Free pulled out fragments. */
+ 	while ((list = shinfo->frag_list) != insp) {
+ 		shinfo->frag_list = list->next;
+-		kfree_skb(list);
++		consume_skb(list);
+ 	}
+ 	/* And insert new clone at head. */
+ 	if (clone) {
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 7de234693a3bf..6613a864f7f5a 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -874,14 +874,13 @@ int sock_set_timestamping(struct sock *sk, int optname,
+ 
+ 	if (val & SOF_TIMESTAMPING_OPT_ID &&
+ 	    !(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)) {
+-		if (sk->sk_protocol == IPPROTO_TCP &&
+-		    sk->sk_type == SOCK_STREAM) {
++		if (sk_is_tcp(sk)) {
+ 			if ((1 << sk->sk_state) &
+ 			    (TCPF_CLOSE | TCPF_LISTEN))
+ 				return -EINVAL;
+-			sk->sk_tskey = tcp_sk(sk)->snd_una;
++			atomic_set(&sk->sk_tskey, tcp_sk(sk)->snd_una);
+ 		} else {
+-			sk->sk_tskey = 0;
++			atomic_set(&sk->sk_tskey, 0);
+ 		}
+ 	}
+ 
+@@ -1372,8 +1371,7 @@ set_sndbuf:
+ 
+ 	case SO_ZEROCOPY:
+ 		if (sk->sk_family == PF_INET || sk->sk_family == PF_INET6) {
+-			if (!((sk->sk_type == SOCK_STREAM &&
+-			       sk->sk_protocol == IPPROTO_TCP) ||
++			if (!(sk_is_tcp(sk) ||
+ 			      (sk->sk_type == SOCK_DGRAM &&
+ 			       sk->sk_protocol == IPPROTO_UDP)))
+ 				ret = -ENOTSUPP;
+diff --git a/net/dsa/master.c b/net/dsa/master.c
+index e8e19857621bd..b0ab3cbeff3ca 100644
+--- a/net/dsa/master.c
++++ b/net/dsa/master.c
+@@ -260,11 +260,16 @@ static void dsa_netdev_ops_set(struct net_device *dev,
+ 	dev->dsa_ptr->netdev_ops = ops;
+ }
+ 
++/* Keep the master always promiscuous if the tagging protocol requires that
++ * (garbles MAC DA) or if it doesn't support unicast filtering, case in which
++ * it would revert to promiscuous mode as soon as we call dev_uc_add() on it
++ * anyway.
++ */
+ static void dsa_master_set_promiscuity(struct net_device *dev, int inc)
+ {
+ 	const struct dsa_device_ops *ops = dev->dsa_ptr->tag_ops;
+ 
+-	if (!ops->promisc_on_master)
++	if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_master)
+ 		return;
+ 
+ 	rtnl_lock();
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index f6f12ad2b5251..6cc353b77681f 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -777,9 +777,15 @@ int dsa_port_host_fdb_add(struct dsa_port *dp, const unsigned char *addr,
+ 	struct dsa_port *cpu_dp = dp->cpu_dp;
+ 	int err;
+ 
+-	err = dev_uc_add(cpu_dp->master, addr);
+-	if (err)
+-		return err;
++	/* Avoid a call to __dev_set_promiscuity() on the master, which
++	 * requires rtnl_lock(), since we can't guarantee that is held here,
++	 * and we can't take it either.
++	 */
++	if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) {
++		err = dev_uc_add(cpu_dp->master, addr);
++		if (err)
++			return err;
++	}
+ 
+ 	return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_ADD, &info);
+ }
+@@ -796,9 +802,11 @@ int dsa_port_host_fdb_del(struct dsa_port *dp, const unsigned char *addr,
+ 	struct dsa_port *cpu_dp = dp->cpu_dp;
+ 	int err;
+ 
+-	err = dev_uc_del(cpu_dp->master, addr);
+-	if (err)
+-		return err;
++	if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) {
++		err = dev_uc_del(cpu_dp->master, addr);
++		if (err)
++			return err;
++	}
+ 
+ 	return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_DEL, &info);
+ }
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 5f70ffdae1b52..43dd5dd176c24 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1376,8 +1376,11 @@ struct sk_buff *inet_gso_segment(struct sk_buff *skb,
+ 	}
+ 
+ 	ops = rcu_dereference(inet_offloads[proto]);
+-	if (likely(ops && ops->callbacks.gso_segment))
++	if (likely(ops && ops->callbacks.gso_segment)) {
+ 		segs = ops->callbacks.gso_segment(skb, features);
++		if (!segs)
++			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
++	}
+ 
+ 	if (IS_ERR_OR_NULL(segs))
+ 		goto out;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index a4d2eb691cbc1..131066d0319a2 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -992,7 +992,7 @@ static int __ip_append_data(struct sock *sk,
+ 
+ 	if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP &&
+ 	    sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)
+-		tskey = sk->sk_tskey++;
++		tskey = atomic_inc_return(&sk->sk_tskey) - 1;
+ 
+ 	hh_len = LL_RESERVED_SPACE(rt->dst.dev);
+ 
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index e3a159c8f231e..36e89b6873876 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -187,7 +187,6 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
+ 			 (int)ident, &ipv6_hdr(skb)->daddr, dif);
+ #endif
+ 	} else {
+-		pr_err("ping: protocol(%x) is not supported\n", ntohs(skb->protocol));
+ 		return NULL;
+ 	}
+ 
+diff --git a/net/ipv4/udp_tunnel_nic.c b/net/ipv4/udp_tunnel_nic.c
+index b91003538d87a..bc3a043a5d5c7 100644
+--- a/net/ipv4/udp_tunnel_nic.c
++++ b/net/ipv4/udp_tunnel_nic.c
+@@ -846,7 +846,7 @@ udp_tunnel_nic_unregister(struct net_device *dev, struct udp_tunnel_nic *utn)
+ 		list_for_each_entry(node, &info->shared->devices, list)
+ 			if (node->dev == dev)
+ 				break;
+-		if (node->dev != dev)
++		if (list_entry_is_head(node, &info->shared->devices, list))
+ 			return;
+ 
+ 		list_del(&node->list);
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 1cbd49d5788dd..b2919a8e9c012 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -114,6 +114,8 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 	if (likely(ops && ops->callbacks.gso_segment)) {
+ 		skb_reset_transport_header(skb);
+ 		segs = ops->callbacks.gso_segment(skb, features);
++		if (!segs)
++			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
+ 	}
+ 
+ 	if (IS_ERR_OR_NULL(segs))
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index ff4e83e2a5068..22bf8fb617165 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1465,7 +1465,7 @@ static int __ip6_append_data(struct sock *sk,
+ 
+ 	if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP &&
+ 	    sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)
+-		tskey = sk->sk_tskey++;
++		tskey = atomic_inc_return(&sk->sk_tskey) - 1;
+ 
+ 	hh_len = LL_RESERVED_SPACE(rt->dst.dev);
+ 
+diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
+index 3240b72271a7f..7558802a14350 100644
+--- a/net/mptcp/mib.c
++++ b/net/mptcp/mib.c
+@@ -35,12 +35,14 @@ static const struct snmp_mib mptcp_snmp_list[] = {
+ 	SNMP_MIB_ITEM("AddAddr", MPTCP_MIB_ADDADDR),
+ 	SNMP_MIB_ITEM("EchoAdd", MPTCP_MIB_ECHOADD),
+ 	SNMP_MIB_ITEM("PortAdd", MPTCP_MIB_PORTADD),
++	SNMP_MIB_ITEM("AddAddrDrop", MPTCP_MIB_ADDADDRDROP),
+ 	SNMP_MIB_ITEM("MPJoinPortSynRx", MPTCP_MIB_JOINPORTSYNRX),
+ 	SNMP_MIB_ITEM("MPJoinPortSynAckRx", MPTCP_MIB_JOINPORTSYNACKRX),
+ 	SNMP_MIB_ITEM("MPJoinPortAckRx", MPTCP_MIB_JOINPORTACKRX),
+ 	SNMP_MIB_ITEM("MismatchPortSynRx", MPTCP_MIB_MISMATCHPORTSYNRX),
+ 	SNMP_MIB_ITEM("MismatchPortAckRx", MPTCP_MIB_MISMATCHPORTACKRX),
+ 	SNMP_MIB_ITEM("RmAddr", MPTCP_MIB_RMADDR),
++	SNMP_MIB_ITEM("RmAddrDrop", MPTCP_MIB_RMADDRDROP),
+ 	SNMP_MIB_ITEM("RmSubflow", MPTCP_MIB_RMSUBFLOW),
+ 	SNMP_MIB_ITEM("MPPrioTx", MPTCP_MIB_MPPRIOTX),
+ 	SNMP_MIB_ITEM("MPPrioRx", MPTCP_MIB_MPPRIORX),
+diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
+index ecd3d8b117e0b..2966fcb6548ba 100644
+--- a/net/mptcp/mib.h
++++ b/net/mptcp/mib.h
+@@ -28,12 +28,14 @@ enum linux_mptcp_mib_field {
+ 	MPTCP_MIB_ADDADDR,		/* Received ADD_ADDR with echo-flag=0 */
+ 	MPTCP_MIB_ECHOADD,		/* Received ADD_ADDR with echo-flag=1 */
+ 	MPTCP_MIB_PORTADD,		/* Received ADD_ADDR with a port-number */
++	MPTCP_MIB_ADDADDRDROP,		/* Dropped incoming ADD_ADDR */
+ 	MPTCP_MIB_JOINPORTSYNRX,	/* Received a SYN MP_JOIN with a different port-number */
+ 	MPTCP_MIB_JOINPORTSYNACKRX,	/* Received a SYNACK MP_JOIN with a different port-number */
+ 	MPTCP_MIB_JOINPORTACKRX,	/* Received an ACK MP_JOIN with a different port-number */
+ 	MPTCP_MIB_MISMATCHPORTSYNRX,	/* Received a SYN MP_JOIN with a mismatched port-number */
+ 	MPTCP_MIB_MISMATCHPORTACKRX,	/* Received an ACK MP_JOIN with a mismatched port-number */
+ 	MPTCP_MIB_RMADDR,		/* Received RM_ADDR */
++	MPTCP_MIB_RMADDRDROP,		/* Dropped incoming RM_ADDR */
+ 	MPTCP_MIB_RMSUBFLOW,		/* Remove a subflow */
+ 	MPTCP_MIB_MPPRIOTX,		/* Transmit a MP_PRIO */
+ 	MPTCP_MIB_MPPRIORX,		/* Received a MP_PRIO */
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 6ab386ff32944..d9790d6fbce9c 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -194,6 +194,8 @@ void mptcp_pm_add_addr_received(struct mptcp_sock *msk,
+ 		mptcp_pm_add_addr_send_ack(msk);
+ 	} else if (mptcp_pm_schedule_work(msk, MPTCP_PM_ADD_ADDR_RECEIVED)) {
+ 		pm->remote = *addr;
++	} else {
++		__MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_ADDADDRDROP);
+ 	}
+ 
+ 	spin_unlock_bh(&pm->lock);
+@@ -234,8 +236,10 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
+ 		mptcp_event_addr_removed(msk, rm_list->ids[i]);
+ 
+ 	spin_lock_bh(&pm->lock);
+-	mptcp_pm_schedule_work(msk, MPTCP_PM_RM_ADDR_RECEIVED);
+-	pm->rm_list_rx = *rm_list;
++	if (mptcp_pm_schedule_work(msk, MPTCP_PM_RM_ADDR_RECEIVED))
++		pm->rm_list_rx = *rm_list;
++	else
++		__MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_RMADDRDROP);
+ 	spin_unlock_bh(&pm->lock);
+ }
+ 
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 5eada95dd76b3..d57d507ef83f1 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -606,6 +606,7 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 	unsigned int add_addr_accept_max;
+ 	struct mptcp_addr_info remote;
+ 	unsigned int subflows_max;
++	bool reset_port = false;
+ 	int i, nr;
+ 
+ 	add_addr_accept_max = mptcp_pm_get_add_addr_accept_max(msk);
+@@ -615,15 +616,19 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 		 msk->pm.add_addr_accepted, add_addr_accept_max,
+ 		 msk->pm.remote.family);
+ 
+-	if (lookup_subflow_by_daddr(&msk->conn_list, &msk->pm.remote))
++	remote = msk->pm.remote;
++	if (lookup_subflow_by_daddr(&msk->conn_list, &remote))
+ 		goto add_addr_echo;
+ 
++	/* pick id 0 port, if none is provided the remote address */
++	if (!remote.port) {
++		reset_port = true;
++		remote.port = sk->sk_dport;
++	}
++
+ 	/* connect to the specified remote address, using whatever
+ 	 * local address the routing configuration will pick.
+ 	 */
+-	remote = msk->pm.remote;
+-	if (!remote.port)
+-		remote.port = sk->sk_dport;
+ 	nr = fill_local_addresses_vec(msk, addrs);
+ 
+ 	msk->pm.add_addr_accepted++;
+@@ -636,8 +641,12 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 		__mptcp_subflow_connect(sk, &addrs[i], &remote);
+ 	spin_lock_bh(&msk->pm.lock);
+ 
++	/* be sure to echo exactly the received address */
++	if (reset_port)
++		remote.port = 0;
++
+ add_addr_echo:
+-	mptcp_pm_announce_addr(msk, &msk->pm.remote, true);
++	mptcp_pm_announce_addr(msk, &remote, true);
+ 	mptcp_pm_nl_addr_send_ack(msk);
+ }
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index c207728226372..a65b530975f54 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -6535,12 +6535,15 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ {
+ 	struct nft_object *newobj;
+ 	struct nft_trans *trans;
+-	int err;
++	int err = -ENOMEM;
++
++	if (!try_module_get(type->owner))
++		return -ENOENT;
+ 
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ,
+ 				sizeof(struct nft_trans_obj));
+ 	if (!trans)
+-		return -ENOMEM;
++		goto err_trans;
+ 
+ 	newobj = nft_obj_init(ctx, type, attr);
+ 	if (IS_ERR(newobj)) {
+@@ -6557,6 +6560,8 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ 
+ err_free_trans:
+ 	kfree(trans);
++err_trans:
++	module_put(type->owner);
+ 	return err;
+ }
+ 
+@@ -8169,7 +8174,7 @@ static void nft_obj_commit_update(struct nft_trans *trans)
+ 	if (obj->ops->update)
+ 		obj->ops->update(obj, newobj);
+ 
+-	kfree(newobj);
++	nft_obj_destroy(&trans->ctx, newobj);
+ }
+ 
+ static void nft_commit_release(struct nft_trans *trans)
+@@ -8914,7 +8919,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			break;
+ 		case NFT_MSG_NEWOBJ:
+ 			if (nft_trans_obj_update(trans)) {
+-				kfree(nft_trans_obj_newobj(trans));
++				nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans));
+ 				nft_trans_destroy(trans);
+ 			} else {
+ 				trans->ctx.table->use--;
+@@ -9574,10 +9579,13 @@ EXPORT_SYMBOL_GPL(__nft_release_basechain);
+ 
+ static void __nft_release_hook(struct net *net, struct nft_table *table)
+ {
++	struct nft_flowtable *flowtable;
+ 	struct nft_chain *chain;
+ 
+ 	list_for_each_entry(chain, &table->chains, list)
+ 		nf_tables_unregister_hook(net, table, chain);
++	list_for_each_entry(flowtable, &table->flowtables, list)
++		nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list);
+ }
+ 
+ static void __nft_release_hooks(struct net *net)
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 9656c16462222..2d36952b13920 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -94,7 +94,8 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ 
+ 	expr = nft_expr_first(rule);
+ 	while (nft_expr_more(rule, expr)) {
+-		if (expr->ops->offload_flags & NFT_OFFLOAD_F_ACTION)
++		if (expr->ops->offload_action &&
++		    expr->ops->offload_action(expr))
+ 			num_actions++;
+ 
+ 		expr = nft_expr_next(expr);
+diff --git a/net/netfilter/nft_dup_netdev.c b/net/netfilter/nft_dup_netdev.c
+index bbf3fcba3df40..5b5c607fbf83f 100644
+--- a/net/netfilter/nft_dup_netdev.c
++++ b/net/netfilter/nft_dup_netdev.c
+@@ -67,6 +67,11 @@ static int nft_dup_netdev_offload(struct nft_offload_ctx *ctx,
+ 	return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_MIRRED, oif);
+ }
+ 
++static bool nft_dup_netdev_offload_action(const struct nft_expr *expr)
++{
++	return true;
++}
++
+ static struct nft_expr_type nft_dup_netdev_type;
+ static const struct nft_expr_ops nft_dup_netdev_ops = {
+ 	.type		= &nft_dup_netdev_type,
+@@ -75,6 +80,7 @@ static const struct nft_expr_ops nft_dup_netdev_ops = {
+ 	.init		= nft_dup_netdev_init,
+ 	.dump		= nft_dup_netdev_dump,
+ 	.offload	= nft_dup_netdev_offload,
++	.offload_action	= nft_dup_netdev_offload_action,
+ };
+ 
+ static struct nft_expr_type nft_dup_netdev_type __read_mostly = {
+diff --git a/net/netfilter/nft_fwd_netdev.c b/net/netfilter/nft_fwd_netdev.c
+index cd59afde5b2f8..7730409f6f091 100644
+--- a/net/netfilter/nft_fwd_netdev.c
++++ b/net/netfilter/nft_fwd_netdev.c
+@@ -77,6 +77,11 @@ static int nft_fwd_netdev_offload(struct nft_offload_ctx *ctx,
+ 	return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_REDIRECT, oif);
+ }
+ 
++static bool nft_fwd_netdev_offload_action(const struct nft_expr *expr)
++{
++	return true;
++}
++
+ struct nft_fwd_neigh {
+ 	u8			sreg_dev;
+ 	u8			sreg_addr;
+@@ -219,6 +224,7 @@ static const struct nft_expr_ops nft_fwd_netdev_ops = {
+ 	.dump		= nft_fwd_netdev_dump,
+ 	.validate	= nft_fwd_validate,
+ 	.offload	= nft_fwd_netdev_offload,
++	.offload_action	= nft_fwd_netdev_offload_action,
+ };
+ 
+ static const struct nft_expr_ops *
+diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c
+index 90c64d27ae532..d0f67d325bdfd 100644
+--- a/net/netfilter/nft_immediate.c
++++ b/net/netfilter/nft_immediate.c
+@@ -213,6 +213,16 @@ static int nft_immediate_offload(struct nft_offload_ctx *ctx,
+ 	return 0;
+ }
+ 
++static bool nft_immediate_offload_action(const struct nft_expr *expr)
++{
++	const struct nft_immediate_expr *priv = nft_expr_priv(expr);
++
++	if (priv->dreg == NFT_REG_VERDICT)
++		return true;
++
++	return false;
++}
++
+ static const struct nft_expr_ops nft_imm_ops = {
+ 	.type		= &nft_imm_type,
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_immediate_expr)),
+@@ -224,7 +234,7 @@ static const struct nft_expr_ops nft_imm_ops = {
+ 	.dump		= nft_immediate_dump,
+ 	.validate	= nft_immediate_validate,
+ 	.offload	= nft_immediate_offload,
+-	.offload_flags	= NFT_OFFLOAD_F_ACTION,
++	.offload_action	= nft_immediate_offload_action,
+ };
+ 
+ struct nft_expr_type nft_imm_type __read_mostly = {
+diff --git a/net/netfilter/xt_socket.c b/net/netfilter/xt_socket.c
+index 5e6459e116055..7013f55f05d1e 100644
+--- a/net/netfilter/xt_socket.c
++++ b/net/netfilter/xt_socket.c
+@@ -220,8 +220,10 @@ static void socket_mt_destroy(const struct xt_mtdtor_param *par)
+ {
+ 	if (par->family == NFPROTO_IPV4)
+ 		nf_defrag_ipv4_disable(par->net);
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
+ 	else if (par->family == NFPROTO_IPV6)
+-		nf_defrag_ipv4_disable(par->net);
++		nf_defrag_ipv6_disable(par->net);
++#endif
+ }
+ 
+ static struct xt_match socket_mt_reg[] __read_mostly = {
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 076774034bb96..780d9e2246f39 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -423,12 +423,43 @@ static void set_ipv6_addr(struct sk_buff *skb, u8 l4_proto,
+ 	memcpy(addr, new_addr, sizeof(__be32[4]));
+ }
+ 
+-static void set_ipv6_fl(struct ipv6hdr *nh, u32 fl, u32 mask)
++static void set_ipv6_dsfield(struct sk_buff *skb, struct ipv6hdr *nh, u8 ipv6_tclass, u8 mask)
+ {
++	u8 old_ipv6_tclass = ipv6_get_dsfield(nh);
++
++	ipv6_tclass = OVS_MASKED(old_ipv6_tclass, ipv6_tclass, mask);
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		csum_replace(&skb->csum, (__force __wsum)(old_ipv6_tclass << 12),
++			     (__force __wsum)(ipv6_tclass << 12));
++
++	ipv6_change_dsfield(nh, ~mask, ipv6_tclass);
++}
++
++static void set_ipv6_fl(struct sk_buff *skb, struct ipv6hdr *nh, u32 fl, u32 mask)
++{
++	u32 ofl;
++
++	ofl = nh->flow_lbl[0] << 16 |  nh->flow_lbl[1] << 8 |  nh->flow_lbl[2];
++	fl = OVS_MASKED(ofl, fl, mask);
++
+ 	/* Bits 21-24 are always unmasked, so this retains their values. */
+-	OVS_SET_MASKED(nh->flow_lbl[0], (u8)(fl >> 16), (u8)(mask >> 16));
+-	OVS_SET_MASKED(nh->flow_lbl[1], (u8)(fl >> 8), (u8)(mask >> 8));
+-	OVS_SET_MASKED(nh->flow_lbl[2], (u8)fl, (u8)mask);
++	nh->flow_lbl[0] = (u8)(fl >> 16);
++	nh->flow_lbl[1] = (u8)(fl >> 8);
++	nh->flow_lbl[2] = (u8)fl;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		csum_replace(&skb->csum, (__force __wsum)htonl(ofl), (__force __wsum)htonl(fl));
++}
++
++static void set_ipv6_ttl(struct sk_buff *skb, struct ipv6hdr *nh, u8 new_ttl, u8 mask)
++{
++	new_ttl = OVS_MASKED(nh->hop_limit, new_ttl, mask);
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		csum_replace(&skb->csum, (__force __wsum)(nh->hop_limit << 8),
++			     (__force __wsum)(new_ttl << 8));
++	nh->hop_limit = new_ttl;
+ }
+ 
+ static void set_ip_ttl(struct sk_buff *skb, struct iphdr *nh, u8 new_ttl,
+@@ -546,18 +577,17 @@ static int set_ipv6(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ 		}
+ 	}
+ 	if (mask->ipv6_tclass) {
+-		ipv6_change_dsfield(nh, ~mask->ipv6_tclass, key->ipv6_tclass);
++		set_ipv6_dsfield(skb, nh, key->ipv6_tclass, mask->ipv6_tclass);
+ 		flow_key->ip.tos = ipv6_get_dsfield(nh);
+ 	}
+ 	if (mask->ipv6_label) {
+-		set_ipv6_fl(nh, ntohl(key->ipv6_label),
++		set_ipv6_fl(skb, nh, ntohl(key->ipv6_label),
+ 			    ntohl(mask->ipv6_label));
+ 		flow_key->ipv6.label =
+ 		    *(__be32 *)nh & htonl(IPV6_FLOWINFO_FLOWLABEL);
+ 	}
+ 	if (mask->ipv6_hlimit) {
+-		OVS_SET_MASKED(nh->hop_limit, key->ipv6_hlimit,
+-			       mask->ipv6_hlimit);
++		set_ipv6_ttl(skb, nh, key->ipv6_hlimit, mask->ipv6_hlimit);
+ 		flow_key->ip.ttl = nh->hop_limit;
+ 	}
+ 	return 0;
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 2a17eb77c9049..4ffea1290ce1c 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -516,11 +516,6 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
+ 	struct nf_conn *ct;
+ 	u8 dir;
+ 
+-	/* Previously seen or loopback */
+-	ct = nf_ct_get(skb, &ctinfo);
+-	if ((ct && !nf_ct_is_template(ct)) || ctinfo == IP_CT_UNTRACKED)
+-		return false;
+-
+ 	switch (family) {
+ 	case NFPROTO_IPV4:
+ 		if (!tcf_ct_flow_table_fill_tuple_ipv4(skb, &tuple, &tcph))
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index 67e9d9fde0854..756b4dbadf36d 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -112,7 +112,7 @@ static int smc_pnet_remove_by_pnetid(struct net *net, char *pnet_name)
+ 	pnettable = &sn->pnettable;
+ 
+ 	/* remove table entry */
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist,
+ 				 list) {
+ 		if (!pnet_name ||
+@@ -130,7 +130,7 @@ static int smc_pnet_remove_by_pnetid(struct net *net, char *pnet_name)
+ 			rc = 0;
+ 		}
+ 	}
+-	write_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 
+ 	/* if this is not the initial namespace, stop here */
+ 	if (net != &init_net)
+@@ -191,7 +191,7 @@ static int smc_pnet_add_by_ndev(struct net_device *ndev)
+ 	sn = net_generic(net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
+ 		if (pnetelem->type == SMC_PNET_ETH && !pnetelem->ndev &&
+ 		    !strncmp(pnetelem->eth_name, ndev->name, IFNAMSIZ)) {
+@@ -205,7 +205,7 @@ static int smc_pnet_add_by_ndev(struct net_device *ndev)
+ 			break;
+ 		}
+ 	}
+-	write_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return rc;
+ }
+ 
+@@ -223,7 +223,7 @@ static int smc_pnet_remove_by_ndev(struct net_device *ndev)
+ 	sn = net_generic(net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
+ 		if (pnetelem->type == SMC_PNET_ETH && pnetelem->ndev == ndev) {
+ 			dev_put(pnetelem->ndev);
+@@ -236,7 +236,7 @@ static int smc_pnet_remove_by_ndev(struct net_device *ndev)
+ 			break;
+ 		}
+ 	}
+-	write_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return rc;
+ }
+ 
+@@ -371,7 +371,7 @@ static int smc_pnet_add_eth(struct smc_pnettable *pnettable, struct net *net,
+ 
+ 	rc = -EEXIST;
+ 	new_netdev = true;
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_ETH &&
+ 		    !strncmp(tmp_pe->eth_name, eth_name, IFNAMSIZ)) {
+@@ -381,9 +381,9 @@ static int smc_pnet_add_eth(struct smc_pnettable *pnettable, struct net *net,
+ 	}
+ 	if (new_netdev) {
+ 		list_add_tail(&new_pe->list, &pnettable->pnetlist);
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 	} else {
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 		kfree(new_pe);
+ 		goto out_put;
+ 	}
+@@ -444,7 +444,7 @@ static int smc_pnet_add_ib(struct smc_pnettable *pnettable, char *ib_name,
+ 	new_pe->ib_port = ib_port;
+ 
+ 	new_ibdev = true;
+-	write_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_IB &&
+ 		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) {
+@@ -454,9 +454,9 @@ static int smc_pnet_add_ib(struct smc_pnettable *pnettable, char *ib_name,
+ 	}
+ 	if (new_ibdev) {
+ 		list_add_tail(&new_pe->list, &pnettable->pnetlist);
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 	} else {
+-		write_unlock(&pnettable->lock);
++		mutex_unlock(&pnettable->lock);
+ 		kfree(new_pe);
+ 	}
+ 	return (new_ibdev) ? 0 : -EEXIST;
+@@ -601,7 +601,7 @@ static int _smc_pnet_dump(struct net *net, struct sk_buff *skb, u32 portid,
+ 	pnettable = &sn->pnettable;
+ 
+ 	/* dump pnettable entries */
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(pnetelem, &pnettable->pnetlist, list) {
+ 		if (pnetid && !smc_pnet_match(pnetelem->pnet_name, pnetid))
+ 			continue;
+@@ -616,7 +616,7 @@ static int _smc_pnet_dump(struct net *net, struct sk_buff *skb, u32 portid,
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return idx;
+ }
+ 
+@@ -860,7 +860,7 @@ int smc_pnet_net_init(struct net *net)
+ 	struct smc_pnetids_ndev *pnetids_ndev = &sn->pnetids_ndev;
+ 
+ 	INIT_LIST_HEAD(&pnettable->pnetlist);
+-	rwlock_init(&pnettable->lock);
++	mutex_init(&pnettable->lock);
+ 	INIT_LIST_HEAD(&pnetids_ndev->list);
+ 	rwlock_init(&pnetids_ndev->lock);
+ 
+@@ -940,7 +940,7 @@ static int smc_pnet_find_ndev_pnetid_by_table(struct net_device *ndev,
+ 	sn = net_generic(net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(pnetelem, &pnettable->pnetlist, list) {
+ 		if (pnetelem->type == SMC_PNET_ETH && ndev == pnetelem->ndev) {
+ 			/* get pnetid of netdev device */
+@@ -949,7 +949,7 @@ static int smc_pnet_find_ndev_pnetid_by_table(struct net_device *ndev,
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 	return rc;
+ }
+ 
+@@ -1141,7 +1141,7 @@ int smc_pnetid_by_table_ib(struct smc_ib_device *smcibdev, u8 ib_port)
+ 	sn = net_generic(&init_net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_IB &&
+ 		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX) &&
+@@ -1151,7 +1151,7 @@ int smc_pnetid_by_table_ib(struct smc_ib_device *smcibdev, u8 ib_port)
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 
+ 	return rc;
+ }
+@@ -1170,7 +1170,7 @@ int smc_pnetid_by_table_smcd(struct smcd_dev *smcddev)
+ 	sn = net_generic(&init_net, smc_net_id);
+ 	pnettable = &sn->pnettable;
+ 
+-	read_lock(&pnettable->lock);
++	mutex_lock(&pnettable->lock);
+ 	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
+ 		if (tmp_pe->type == SMC_PNET_IB &&
+ 		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) {
+@@ -1179,7 +1179,7 @@ int smc_pnetid_by_table_smcd(struct smcd_dev *smcddev)
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&pnettable->lock);
++	mutex_unlock(&pnettable->lock);
+ 
+ 	return rc;
+ }
+diff --git a/net/smc/smc_pnet.h b/net/smc/smc_pnet.h
+index 14039272f7e42..80a88eea49491 100644
+--- a/net/smc/smc_pnet.h
++++ b/net/smc/smc_pnet.h
+@@ -29,7 +29,7 @@ struct smc_link_group;
+  * @pnetlist: List of PNETIDs
+  */
+ struct smc_pnettable {
+-	rwlock_t lock;
++	struct mutex lock;
+ 	struct list_head pnetlist;
+ };
+ 
+diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
+index 01396dd1c899b..1d8ba233d0474 100644
+--- a/net/tipc/name_table.c
++++ b/net/tipc/name_table.c
+@@ -967,7 +967,7 @@ static int __tipc_nl_add_nametable_publ(struct tipc_nl_msg *msg,
+ 		list_for_each_entry(p, &sr->all_publ, all_publ)
+ 			if (p->key == *last_key)
+ 				break;
+-		if (p->key != *last_key)
++		if (list_entry_is_head(p, &sr->all_publ, all_publ))
+ 			return -EPIPE;
+ 	} else {
+ 		p = list_first_entry(&sr->all_publ,
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 3e63c83e641c5..7545321c3440b 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3749,7 +3749,7 @@ static int __tipc_nl_list_sk_publ(struct sk_buff *skb,
+ 			if (p->key == *last_publ)
+ 				break;
+ 		}
+-		if (p->key != *last_publ) {
++		if (list_entry_is_head(p, &tsk->publications, binding_sock)) {
+ 			/* We never set seq or call nl_dump_check_consistent()
+ 			 * this means that setting prev_seq here will cause the
+ 			 * consistence check to fail in the netlink callback
+diff --git a/security/selinux/ima.c b/security/selinux/ima.c
+index 727c4e43219d7..ff7aea6b3774a 100644
+--- a/security/selinux/ima.c
++++ b/security/selinux/ima.c
+@@ -77,7 +77,7 @@ void selinux_ima_measure_state_locked(struct selinux_state *state)
+ 	size_t policy_len;
+ 	int rc = 0;
+ 
+-	WARN_ON(!mutex_is_locked(&state->policy_mutex));
++	lockdep_assert_held(&state->policy_mutex);
+ 
+ 	state_str = selinux_ima_collect_state(state);
+ 	if (!state_str) {
+@@ -117,7 +117,7 @@ void selinux_ima_measure_state_locked(struct selinux_state *state)
+  */
+ void selinux_ima_measure_state(struct selinux_state *state)
+ {
+-	WARN_ON(mutex_is_locked(&state->policy_mutex));
++	lockdep_assert_not_held(&state->policy_mutex);
+ 
+ 	mutex_lock(&state->policy_mutex);
+ 	selinux_ima_measure_state_locked(state);
+diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
+index f5d260b1df4d1..15a4547d608ec 100644
+--- a/tools/perf/util/data.c
++++ b/tools/perf/util/data.c
+@@ -44,10 +44,6 @@ int perf_data__create_dir(struct perf_data *data, int nr)
+ 	if (!files)
+ 		return -ENOMEM;
+ 
+-	data->dir.version = PERF_DIR_VERSION;
+-	data->dir.files   = files;
+-	data->dir.nr      = nr;
+-
+ 	for (i = 0; i < nr; i++) {
+ 		struct perf_data_file *file = &files[i];
+ 
+@@ -62,6 +58,9 @@ int perf_data__create_dir(struct perf_data *data, int nr)
+ 		file->fd = ret;
+ 	}
+ 
++	data->dir.version = PERF_DIR_VERSION;
++	data->dir.files   = files;
++	data->dir.nr      = nr;
+ 	return 0;
+ 
+ out_err:
+diff --git a/tools/perf/util/evlist-hybrid.c b/tools/perf/util/evlist-hybrid.c
+index 7c554234b43d4..f39c8ffc5a111 100644
+--- a/tools/perf/util/evlist-hybrid.c
++++ b/tools/perf/util/evlist-hybrid.c
+@@ -153,8 +153,8 @@ int evlist__fix_hybrid_cpus(struct evlist *evlist, const char *cpu_list)
+ 		perf_cpu_map__put(matched_cpus);
+ 		perf_cpu_map__put(unmatched_cpus);
+ 	}
+-
+-	ret = (unmatched_count == events_nr) ? -1 : 0;
++	if (events_nr)
++		ret = (unmatched_count == events_nr) ? -1 : 0;
+ out:
+ 	perf_cpu_map__put(cpus);
+ 	return ret;
+diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h
+index 2966564b8497a..6c85b00f27b2e 100644
+--- a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h
++++ b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h
+@@ -235,7 +235,7 @@ SEC("sk_msg1")
+ int bpf_prog4(struct sk_msg_md *msg)
+ {
+ 	int *bytes, zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5;
+-	int *start, *end, *start_push, *end_push, *start_pop, *pop;
++	int *start, *end, *start_push, *end_push, *start_pop, *pop, err = 0;
+ 
+ 	bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
+ 	if (bytes)
+@@ -249,8 +249,11 @@ int bpf_prog4(struct sk_msg_md *msg)
+ 		bpf_msg_pull_data(msg, *start, *end, 0);
+ 	start_push = bpf_map_lookup_elem(&sock_bytes, &two);
+ 	end_push = bpf_map_lookup_elem(&sock_bytes, &three);
+-	if (start_push && end_push)
+-		bpf_msg_push_data(msg, *start_push, *end_push, 0);
++	if (start_push && end_push) {
++		err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
++		if (err)
++			return SK_DROP;
++	}
+ 	start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
+ 	pop = bpf_map_lookup_elem(&sock_bytes, &five);
+ 	if (start_pop && pop)
+@@ -263,6 +266,7 @@ int bpf_prog6(struct sk_msg_md *msg)
+ {
+ 	int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, key = 0;
+ 	int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop, *f;
++	int err = 0;
+ 	__u64 flags = 0;
+ 
+ 	bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
+@@ -279,8 +283,11 @@ int bpf_prog6(struct sk_msg_md *msg)
+ 
+ 	start_push = bpf_map_lookup_elem(&sock_bytes, &two);
+ 	end_push = bpf_map_lookup_elem(&sock_bytes, &three);
+-	if (start_push && end_push)
+-		bpf_msg_push_data(msg, *start_push, *end_push, 0);
++	if (start_push && end_push) {
++		err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
++		if (err)
++			return SK_DROP;
++	}
+ 
+ 	start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
+ 	pop = bpf_map_lookup_elem(&sock_bytes, &five);
+@@ -338,7 +345,7 @@ SEC("sk_msg5")
+ int bpf_prog10(struct sk_msg_md *msg)
+ {
+ 	int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop;
+-	int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5;
++	int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, err = 0;
+ 
+ 	bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
+ 	if (bytes)
+@@ -352,8 +359,11 @@ int bpf_prog10(struct sk_msg_md *msg)
+ 		bpf_msg_pull_data(msg, *start, *end, 0);
+ 	start_push = bpf_map_lookup_elem(&sock_bytes, &two);
+ 	end_push = bpf_map_lookup_elem(&sock_bytes, &three);
+-	if (start_push && end_push)
+-		bpf_msg_push_data(msg, *start_push, *end_push, 0);
++	if (start_push && end_push) {
++		err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
++		if (err)
++			return SK_PASS;
++	}
+ 	start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
+ 	pop = bpf_map_lookup_elem(&sock_bytes, &five);
+ 	if (start_pop && pop)
+diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
+index 2674ba20d5249..ff821025d3096 100755
+--- a/tools/testing/selftests/net/mptcp/diag.sh
++++ b/tools/testing/selftests/net/mptcp/diag.sh
+@@ -71,6 +71,36 @@ chk_msk_remote_key_nr()
+ 		__chk_nr "grep -c remote_key" $*
+ }
+ 
++# $1: ns, $2: port
++wait_local_port_listen()
++{
++	local listener_ns="${1}"
++	local port="${2}"
++
++	local port_hex i
++
++	port_hex="$(printf "%04X" "${port}")"
++	for i in $(seq 10); do
++		ip netns exec "${listener_ns}" cat /proc/net/tcp | \
++			awk "BEGIN {rc=1} {if (\$2 ~ /:${port_hex}\$/ && \$4 ~ /0A/) {rc=0; exit}} END {exit rc}" &&
++			break
++		sleep 0.1
++	done
++}
++
++wait_connected()
++{
++	local listener_ns="${1}"
++	local port="${2}"
++
++	local port_hex i
++
++	port_hex="$(printf "%04X" "${port}")"
++	for i in $(seq 10); do
++		ip netns exec ${listener_ns} grep -q " 0100007F:${port_hex} " /proc/net/tcp && break
++		sleep 0.1
++	done
++}
+ 
+ trap cleanup EXIT
+ ip netns add $ns
+@@ -81,15 +111,15 @@ echo "a" | \
+ 		ip netns exec $ns \
+ 			./mptcp_connect -p 10000 -l -t ${timeout_poll} \
+ 				0.0.0.0 >/dev/null &
+-sleep 0.1
++wait_local_port_listen $ns 10000
+ chk_msk_nr 0 "no msk on netns creation"
+ 
+ echo "b" | \
+ 	timeout ${timeout_test} \
+ 		ip netns exec $ns \
+-			./mptcp_connect -p 10000 -j -t ${timeout_poll} \
++			./mptcp_connect -p 10000 -r 0 -t ${timeout_poll} \
+ 				127.0.0.1 >/dev/null &
+-sleep 0.1
++wait_connected $ns 10000
+ chk_msk_nr 2 "after MPC handshake "
+ chk_msk_remote_key_nr 2 "....chk remote_key"
+ chk_msk_fallback_nr 0 "....chk no fallback"
+@@ -101,13 +131,13 @@ echo "a" | \
+ 		ip netns exec $ns \
+ 			./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} \
+ 				0.0.0.0 >/dev/null &
+-sleep 0.1
++wait_local_port_listen $ns 10001
+ echo "b" | \
+ 	timeout ${timeout_test} \
+ 		ip netns exec $ns \
+-			./mptcp_connect -p 10001 -j -t ${timeout_poll} \
++			./mptcp_connect -p 10001 -r 0 -t ${timeout_poll} \
+ 				127.0.0.1 >/dev/null &
+-sleep 0.1
++wait_connected $ns 10001
+ chk_msk_fallback_nr 1 "check fallback"
+ flush_pids
+ 
+@@ -119,7 +149,7 @@ for I in `seq 1 $NR_CLIENTS`; do
+ 				./mptcp_connect -p $((I+10001)) -l -w 10 \
+ 					-t ${timeout_poll} 0.0.0.0 >/dev/null &
+ done
+-sleep 0.1
++wait_local_port_listen $ns $((NR_CLIENTS + 10001))
+ 
+ for I in `seq 1 $NR_CLIENTS`; do
+ 	echo "b" | \
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index f06dc9dfe15eb..f4f0e3eb3b921 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -624,6 +624,7 @@ chk_join_nr()
+ 	local ack_nr=$4
+ 	local count
+ 	local dump_stats
++	local with_cookie
+ 
+ 	printf "%02u %-36s %s" "$TEST_COUNT" "$msg" "syn"
+ 	count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}'`
+@@ -637,12 +638,20 @@ chk_join_nr()
+ 	fi
+ 
+ 	echo -n " - synack"
++	with_cookie=`ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies`
+ 	count=`ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}'`
+ 	[ -z "$count" ] && count=0
+ 	if [ "$count" != "$syn_ack_nr" ]; then
+-		echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr"
+-		ret=1
+-		dump_stats=1
++		# simult connections exceeding the limit with cookie enabled could go up to
++		# synack validation as the conn limit can be enforced reliably only after
++		# the subflow creation
++		if [ "$with_cookie" = 2 ] && [ "$count" -gt "$syn_ack_nr" ] && [ "$count" -le "$syn_nr" ]; then
++			echo -n "[ ok ]"
++		else
++			echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr"
++			ret=1
++			dump_stats=1
++		fi
+ 	else
+ 		echo -n "[ ok ]"
+ 	fi



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-08 18:38 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-08 18:38 UTC (permalink / raw
  To: gentoo-commits

commit:     8ad26f78d86cf30ed07dbfbcad135da9564855a8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Mar  8 18:38:26 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Mar  8 18:38:26 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8ad26f78

Linux patch 5.16.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1012_linux-5.16.13.patch | 7542 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7546 insertions(+)

diff --git a/0000_README b/0000_README
index 0785204e..f1b481e1 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-5.16.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.12
 
+Patch:  1012_linux-5.16.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-5.16.13.patch b/1012_linux-5.16.13.patch
new file mode 100644
index 00000000..65f51e0e
--- /dev/null
+++ b/1012_linux-5.16.13.patch
@@ -0,0 +1,7542 @@
+diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
+index bfc28704856c6..6e2e416af7838 100644
+--- a/Documentation/admin-guide/mm/pagemap.rst
++++ b/Documentation/admin-guide/mm/pagemap.rst
+@@ -23,7 +23,7 @@ There are four components to pagemap:
+     * Bit  56    page exclusively mapped (since 4.2)
+     * Bit  57    pte is uffd-wp write-protected (since 5.13) (see
+       :ref:`Documentation/admin-guide/mm/userfaultfd.rst <userfaultfd>`)
+-    * Bits 57-60 zero
++    * Bits 58-60 zero
+     * Bit  61    page is file-page or shared-anon (since 3.5)
+     * Bit  62    page swapped
+     * Bit  63    page present
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 0ec7b7f1524b1..ea281dd755171 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -100,6 +100,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A510     | #2051678        | ARM64_ERRATUM_2051678       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A510     | #2077057        | ARM64_ERRATUM_2077057       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2119858        | ARM64_ERRATUM_2119858       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2054223        | ARM64_ERRATUM_2054223       |
+diff --git a/Documentation/trace/events.rst b/Documentation/trace/events.rst
+index 8ddb9b09451c8..c47f381d0c002 100644
+--- a/Documentation/trace/events.rst
++++ b/Documentation/trace/events.rst
+@@ -198,6 +198,15 @@ The glob (~) accepts a wild card character (\*,?) and character classes
+   prev_comm ~ "*sh*"
+   prev_comm ~ "ba*sh"
+ 
++If the field is a pointer that points into user space (for example
++"filename" from sys_enter_openat), then you have to append ".ustring" to the
++field name::
++
++  filename.ustring ~ "password"
++
++As the kernel will have to know how to retrieve the memory that the pointer
++is at from user space.
++
+ 5.2 Setting filters
+ -------------------
+ 
+@@ -230,6 +239,16 @@ Currently the caret ('^') for an error always appears at the beginning of
+ the filter string; the error message should still be useful though
+ even without more accurate position info.
+ 
++5.2.1 Filter limitations
++------------------------
++
++If a filter is placed on a string pointer ``(char *)`` that does not point
++to a string on the ring buffer, but instead points to kernel or user space
++memory, then, for safety reasons, at most 1024 bytes of the content is
++copied onto a temporary buffer to do the compare. If the copy of the memory
++faults (the pointer points to memory that should not be accessed), then the
++string compare will be treated as not matching.
++
+ 5.3 Clearing filters
+ --------------------
+ 
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index aeeb071c76881..9df9eadaeb5c2 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -1391,7 +1391,7 @@ documentation when it pops into existence).
+ -------------------
+ 
+ :Capability: KVM_CAP_ENABLE_CAP
+-:Architectures: mips, ppc, s390
++:Architectures: mips, ppc, s390, x86
+ :Type: vcpu ioctl
+ :Parameters: struct kvm_enable_cap (in)
+ :Returns: 0 on success; -1 on error
+diff --git a/Makefile b/Makefile
+index 09a9bb824afad..6702e44821eb0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/boot/dts/omap3-devkit8000-common.dtsi b/arch/arm/boot/dts/omap3-devkit8000-common.dtsi
+index 5e55198e45765..54cd37336be7a 100644
+--- a/arch/arm/boot/dts/omap3-devkit8000-common.dtsi
++++ b/arch/arm/boot/dts/omap3-devkit8000-common.dtsi
+@@ -158,6 +158,24 @@
+ 	status = "disabled";
+ };
+ 
++/* Unusable as clockevent because if unreliable oscillator, allow to idle */
++&timer1_target {
++	/delete-property/ti,no-reset-on-init;
++	/delete-property/ti,no-idle;
++	timer@0 {
++		/delete-property/ti,timer-alwon;
++	};
++};
++
++/* Preferred timer for clockevent */
++&timer12_target {
++	ti,no-reset-on-init;
++	ti,no-idle;
++	timer@0 {
++		/* Always clocked by secure_32k_fck */
++	};
++};
++
+ &twl_gpio {
+ 	ti,use-leds;
+ 	/*
+diff --git a/arch/arm/boot/dts/omap3-devkit8000.dts b/arch/arm/boot/dts/omap3-devkit8000.dts
+index c2995a280729d..162d0726b0080 100644
+--- a/arch/arm/boot/dts/omap3-devkit8000.dts
++++ b/arch/arm/boot/dts/omap3-devkit8000.dts
+@@ -14,36 +14,3 @@
+ 		display2 = &tv0;
+ 	};
+ };
+-
+-/* Unusable as clocksource because of unreliable oscillator */
+-&counter32k {
+-	status = "disabled";
+-};
+-
+-/* Unusable as clockevent because if unreliable oscillator, allow to idle */
+-&timer1_target {
+-	/delete-property/ti,no-reset-on-init;
+-	/delete-property/ti,no-idle;
+-	timer@0 {
+-		/delete-property/ti,timer-alwon;
+-	};
+-};
+-
+-/* Preferred always-on timer for clocksource */
+-&timer12_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		/* Always clocked by secure_32k_fck */
+-	};
+-};
+-
+-/* Preferred timer for clockevent */
+-&timer2_target {
+-	ti,no-reset-on-init;
+-	ti,no-idle;
+-	timer@0 {
+-		assigned-clocks = <&gpt2_fck>;
+-		assigned-clock-parents = <&sys_ck>;
+-	};
+-};
+diff --git a/arch/arm/boot/dts/tegra124-nyan-big.dts b/arch/arm/boot/dts/tegra124-nyan-big.dts
+index 1d2aac2cb6d03..fdc1d64dfff9d 100644
+--- a/arch/arm/boot/dts/tegra124-nyan-big.dts
++++ b/arch/arm/boot/dts/tegra124-nyan-big.dts
+@@ -13,12 +13,15 @@
+ 		     "google,nyan-big-rev1", "google,nyan-big-rev0",
+ 		     "google,nyan-big", "google,nyan", "nvidia,tegra124";
+ 
+-	panel: panel {
+-		compatible = "auo,b133xtn01";
+-
+-		power-supply = <&vdd_3v3_panel>;
+-		backlight = <&backlight>;
+-		ddc-i2c-bus = <&dpaux>;
++	host1x@50000000 {
++		dpaux@545c0000 {
++			aux-bus {
++				panel: panel {
++					compatible = "auo,b133xtn01";
++					backlight = <&backlight>;
++				};
++			};
++		};
+ 	};
+ 
+ 	mmc@700b0400 { /* SD Card on this bus */
+diff --git a/arch/arm/boot/dts/tegra124-nyan-blaze.dts b/arch/arm/boot/dts/tegra124-nyan-blaze.dts
+index 677babde6460e..abdf4456826f8 100644
+--- a/arch/arm/boot/dts/tegra124-nyan-blaze.dts
++++ b/arch/arm/boot/dts/tegra124-nyan-blaze.dts
+@@ -15,12 +15,15 @@
+ 		     "google,nyan-blaze-rev0", "google,nyan-blaze",
+ 		     "google,nyan", "nvidia,tegra124";
+ 
+-	panel: panel {
+-		compatible = "samsung,ltn140at29-301";
+-
+-		power-supply = <&vdd_3v3_panel>;
+-		backlight = <&backlight>;
+-		ddc-i2c-bus = <&dpaux>;
++	host1x@50000000 {
++		dpaux@545c0000 {
++			aux-bus {
++				panel: panel {
++					compatible = "samsung,ltn140at29-301";
++					backlight = <&backlight>;
++				};
++			};
++		};
+ 	};
+ 
+ 	sound {
+diff --git a/arch/arm/boot/dts/tegra124-venice2.dts b/arch/arm/boot/dts/tegra124-venice2.dts
+index e6b54ac1ebd1a..84e2d24065e9a 100644
+--- a/arch/arm/boot/dts/tegra124-venice2.dts
++++ b/arch/arm/boot/dts/tegra124-venice2.dts
+@@ -48,6 +48,13 @@
+ 		dpaux@545c0000 {
+ 			vdd-supply = <&vdd_3v3_panel>;
+ 			status = "okay";
++
++			aux-bus {
++				panel: panel {
++					compatible = "lg,lp129qe";
++					backlight = <&backlight>;
++				};
++			};
+ 		};
+ 	};
+ 
+@@ -1079,13 +1086,6 @@
+ 		};
+ 	};
+ 
+-	panel: panel {
+-		compatible = "lg,lp129qe";
+-		power-supply = <&vdd_3v3_panel>;
+-		backlight = <&backlight>;
+-		ddc-i2c-bus = <&dpaux>;
+-	};
+-
+ 	vdd_mux: regulator@0 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "+VDD_MUX";
+diff --git a/arch/arm/kernel/kgdb.c b/arch/arm/kernel/kgdb.c
+index 7bd30c0a4280d..22f937e6f3ffb 100644
+--- a/arch/arm/kernel/kgdb.c
++++ b/arch/arm/kernel/kgdb.c
+@@ -154,22 +154,38 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int instr)
+ 	return 0;
+ }
+ 
+-static struct undef_hook kgdb_brkpt_hook = {
++static struct undef_hook kgdb_brkpt_arm_hook = {
+ 	.instr_mask		= 0xffffffff,
+ 	.instr_val		= KGDB_BREAKINST,
+-	.cpsr_mask		= MODE_MASK,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
+ 	.cpsr_val		= SVC_MODE,
+ 	.fn			= kgdb_brk_fn
+ };
+ 
+-static struct undef_hook kgdb_compiled_brkpt_hook = {
++static struct undef_hook kgdb_brkpt_thumb_hook = {
++	.instr_mask		= 0xffff,
++	.instr_val		= KGDB_BREAKINST & 0xffff,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
++	.cpsr_val		= PSR_T_BIT | SVC_MODE,
++	.fn			= kgdb_brk_fn
++};
++
++static struct undef_hook kgdb_compiled_brkpt_arm_hook = {
+ 	.instr_mask		= 0xffffffff,
+ 	.instr_val		= KGDB_COMPILED_BREAK,
+-	.cpsr_mask		= MODE_MASK,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
+ 	.cpsr_val		= SVC_MODE,
+ 	.fn			= kgdb_compiled_brk_fn
+ };
+ 
++static struct undef_hook kgdb_compiled_brkpt_thumb_hook = {
++	.instr_mask		= 0xffff,
++	.instr_val		= KGDB_COMPILED_BREAK & 0xffff,
++	.cpsr_mask		= PSR_T_BIT | MODE_MASK,
++	.cpsr_val		= PSR_T_BIT | SVC_MODE,
++	.fn			= kgdb_compiled_brk_fn
++};
++
+ static int __kgdb_notify(struct die_args *args, unsigned long cmd)
+ {
+ 	struct pt_regs *regs = args->regs;
+@@ -210,8 +226,10 @@ int kgdb_arch_init(void)
+ 	if (ret != 0)
+ 		return ret;
+ 
+-	register_undef_hook(&kgdb_brkpt_hook);
+-	register_undef_hook(&kgdb_compiled_brkpt_hook);
++	register_undef_hook(&kgdb_brkpt_arm_hook);
++	register_undef_hook(&kgdb_brkpt_thumb_hook);
++	register_undef_hook(&kgdb_compiled_brkpt_arm_hook);
++	register_undef_hook(&kgdb_compiled_brkpt_thumb_hook);
+ 
+ 	return 0;
+ }
+@@ -224,8 +242,10 @@ int kgdb_arch_init(void)
+  */
+ void kgdb_arch_exit(void)
+ {
+-	unregister_undef_hook(&kgdb_brkpt_hook);
+-	unregister_undef_hook(&kgdb_compiled_brkpt_hook);
++	unregister_undef_hook(&kgdb_brkpt_arm_hook);
++	unregister_undef_hook(&kgdb_brkpt_thumb_hook);
++	unregister_undef_hook(&kgdb_compiled_brkpt_arm_hook);
++	unregister_undef_hook(&kgdb_compiled_brkpt_thumb_hook);
+ 	unregister_die_notifier(&kgdb_notifier);
+ }
+ 
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index 274e4f73fd33c..5e2be37a198e2 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -212,12 +212,14 @@ early_param("ecc", early_ecc);
+ static int __init early_cachepolicy(char *p)
+ {
+ 	pr_warn("cachepolicy kernel parameter not supported without cp15\n");
++	return 0;
+ }
+ early_param("cachepolicy", early_cachepolicy);
+ 
+ static int __init noalign_setup(char *__unused)
+ {
+ 	pr_warn("noalign kernel parameter not supported without cp15\n");
++	return 1;
+ }
+ __setup("noalign", noalign_setup);
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index ae0e93871ee5f..651bf217465e9 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -681,6 +681,22 @@ config ARM64_ERRATUM_2051678
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_2077057
++	bool "Cortex-A510: 2077057: workaround software-step corrupting SPSR_EL2"
++	help
++	  This option adds the workaround for ARM Cortex-A510 erratum 2077057.
++	  Affected Cortex-A510 may corrupt SPSR_EL2 when a step exception is
++	  expected, but a Pointer Authentication trap is taken instead. The
++	  erratum causes SPSR_EL1 to be copied to SPSR_EL2, which could allow
++	  EL1 to cause a return to EL2 with a guest controlled ELR_EL2.
++
++	  This can only happen when EL2 is stepping EL1.
++
++	  When these conditions occur, the SPSR_EL2 value is unchanged from the
++	  previous guest entry, and can be restored from the in-memory copy.
++
++	  If unsure, say Y.
++
+ config ARM64_ERRATUM_2119858
+ 	bool "Cortex-A710/X2: 2119858: workaround TRBE overwriting trace data in FILL mode"
+ 	default y
+diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
+index 6288e104a0893..a2635b14da309 100644
+--- a/arch/arm64/boot/dts/arm/juno-base.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
+@@ -543,8 +543,7 @@
+ 			 <0x02000000 0x00 0x50000000 0x00 0x50000000 0x0 0x08000000>,
+ 			 <0x42000000 0x40 0x00000000 0x40 0x00000000 0x1 0x00000000>;
+ 		/* Standard AXI Translation entries as programmed by EDK2 */
+-		dma-ranges = <0x02000000 0x0 0x2c1c0000 0x0 0x2c1c0000 0x0 0x00040000>,
+-			     <0x02000000 0x0 0x80000000 0x0 0x80000000 0x0 0x80000000>,
++		dma-ranges = <0x02000000 0x0 0x80000000 0x0 0x80000000 0x0 0x80000000>,
+ 			     <0x43000000 0x8 0x00000000 0x8 0x00000000 0x2 0x00000000>;
+ 		#interrupt-cells = <1>;
+ 		interrupt-map-mask = <0 0 0 7>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index c2f3f118f82e2..f13d31ebfcbd3 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -681,7 +681,6 @@
+ 						clocks = <&clk IMX8MM_CLK_VPU_DEC_ROOT>;
+ 						assigned-clocks = <&clk IMX8MM_CLK_VPU_BUS>;
+ 						assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_800M>;
+-						resets = <&src IMX8MQ_RESET_VPU_RESET>;
+ 					};
+ 
+ 					pgc_vpu_g1: power-domain@7 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+index 45a5ae5d2027f..162f08bca0d40 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+@@ -286,7 +286,7 @@
+ 
+ 	sound: sound {
+ 		compatible = "rockchip,rk3399-gru-sound";
+-		rockchip,cpu = <&i2s0 &i2s2>;
++		rockchip,cpu = <&i2s0 &spdif>;
+ 	};
+ };
+ 
+@@ -437,10 +437,6 @@ ap_i2c_audio: &i2c8 {
+ 	status = "okay";
+ };
+ 
+-&i2s2 {
+-	status = "okay";
+-};
+-
+ &io_domains {
+ 	status = "okay";
+ 
+@@ -537,6 +533,17 @@ ap_i2c_audio: &i2c8 {
+ 	vqmmc-supply = <&ppvar_sd_card_io>;
+ };
+ 
++&spdif {
++	status = "okay";
++
++	/*
++	 * SPDIF is routed internally to DP; we either don't use these pins, or
++	 * mux them to something else.
++	 */
++	/delete-property/ pinctrl-0;
++	/delete-property/ pinctrl-names;
++};
++
+ &spi1 {
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts b/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
+index 4d4b2a301b1a4..f6290538c8a4c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
+@@ -285,8 +285,6 @@
+ 			vcc_ddr: DCDC_REG3 {
+ 				regulator-always-on;
+ 				regulator-boot-on;
+-				regulator-min-microvolt = <1100000>;
+-				regulator-max-microvolt = <1100000>;
+ 				regulator-initial-mode = <0x2>;
+ 				regulator-name = "vcc_ddr";
+ 				regulator-state-mem {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568.dtsi b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+index 2fd313a295f8a..d91df1cde7363 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+@@ -32,13 +32,11 @@
+ 		clocks = <&cru SCLK_GMAC0>, <&cru SCLK_GMAC0_RX_TX>,
+ 			 <&cru SCLK_GMAC0_RX_TX>, <&cru CLK_MAC0_REFOUT>,
+ 			 <&cru ACLK_GMAC0>, <&cru PCLK_GMAC0>,
+-			 <&cru SCLK_GMAC0_RX_TX>, <&cru CLK_GMAC0_PTP_REF>,
+-			 <&cru PCLK_XPCS>;
++			 <&cru SCLK_GMAC0_RX_TX>, <&cru CLK_GMAC0_PTP_REF>;
+ 		clock-names = "stmmaceth", "mac_clk_rx",
+ 			      "mac_clk_tx", "clk_mac_refout",
+ 			      "aclk_mac", "pclk_mac",
+-			      "clk_mac_speed", "ptp_ref",
+-			      "pclk_xpcs";
++			      "clk_mac_speed", "ptp_ref";
+ 		resets = <&cru SRST_A_GMAC0>;
+ 		reset-names = "stmmaceth";
+ 		rockchip,grf = <&grf>;
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 066098198c248..b217941713a8d 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -600,6 +600,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus),
+ 	},
+ #endif
++#ifdef CONFIG_ARM64_ERRATUM_2077057
++	{
++		.desc = "ARM erratum 2077057",
++		.capability = ARM64_WORKAROUND_2077057,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2),
++	},
++#endif
+ #ifdef CONFIG_ARM64_ERRATUM_2064142
+ 	{
+ 		.desc = "ARM erratum 2064142",
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index 94f83cd44e507..0ee6bd390bd09 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -33,7 +33,7 @@
+  */
+ 
+ 
+-void start_backtrace(struct stackframe *frame, unsigned long fp,
++notrace void start_backtrace(struct stackframe *frame, unsigned long fp,
+ 		     unsigned long pc)
+ {
+ 	frame->fp = fp;
+@@ -55,6 +55,7 @@ void start_backtrace(struct stackframe *frame, unsigned long fp,
+ 	frame->prev_fp = 0;
+ 	frame->prev_type = STACK_TYPE_UNKNOWN;
+ }
++NOKPROBE_SYMBOL(start_backtrace);
+ 
+ /*
+  * Unwind from one frame record (A) to the next frame record (B).
+diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
+index adb67f8c9d7d3..3ae9c0b944878 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
++++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
+@@ -424,6 +424,24 @@ static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ 	return false;
+ }
+ 
++static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code)
++{
++	/*
++	 * Check for the conditions of Cortex-A510's #2077057. When these occur
++	 * SPSR_EL2 can't be trusted, but isn't needed either as it is
++	 * unchanged from the value in vcpu_gp_regs(vcpu)->pstate.
++	 * Are we single-stepping the guest, and took a PAC exception from the
++	 * active-not-pending state?
++	 */
++	if (cpus_have_final_cap(ARM64_WORKAROUND_2077057)		&&
++	    vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP			&&
++	    *vcpu_cpsr(vcpu) & DBG_SPSR_SS				&&
++	    ESR_ELx_EC(read_sysreg_el2(SYS_ESR)) == ESR_ELx_EC_PAC)
++		write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
++
++	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
++}
++
+ /*
+  * Return true when we were able to fixup the guest exit and should return to
+  * the guest, false when we should restore the host state and return to the
+@@ -435,7 +453,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ 	 * Save PSTATE early so that we can evaluate the vcpu mode
+ 	 * early on.
+ 	 */
+-	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
++	synchronize_vcpu_pstate(vcpu, exit_code);
+ 
+ 	/*
+ 	 * Check whether we want to repaint the state one way or
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
+index 48c6067fc5ecb..f972992682746 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio.c
+@@ -248,6 +248,8 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+ 						    IRQCHIP_STATE_PENDING,
+ 						    &val);
+ 			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
++		} else if (vgic_irq_is_mapped_level(irq)) {
++			val = vgic_get_phys_line_level(irq);
+ 		} else {
+ 			val = irq_is_pending(irq);
+ 		}
+diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
+index e7719e8f18def..9c65b1e25a965 100644
+--- a/arch/arm64/tools/cpucaps
++++ b/arch/arm64/tools/cpucaps
+@@ -55,9 +55,10 @@ WORKAROUND_1418040
+ WORKAROUND_1463225
+ WORKAROUND_1508412
+ WORKAROUND_1542419
+-WORKAROUND_2064142
+-WORKAROUND_2038923
+ WORKAROUND_1902691
++WORKAROUND_2038923
++WORKAROUND_2064142
++WORKAROUND_2077057
+ WORKAROUND_TRBE_OVERWRITE_FILL_MODE
+ WORKAROUND_TSB_FLUSH_FAILURE
+ WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index f979adfd4fc20..ef73ba1e0ec10 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -803,7 +803,7 @@ early_param("coherentio", setcoherentio);
+ 
+ static int __init setnocoherentio(char *str)
+ {
+-	dma_default_coherent = true;
++	dma_default_coherent = false;
+ 	pr_info("Software DMA cache coherency (command line)\n");
+ 	return 0;
+ }
+diff --git a/arch/mips/ralink/mt7621.c b/arch/mips/ralink/mt7621.c
+index bd71f5b142383..4c83786612193 100644
+--- a/arch/mips/ralink/mt7621.c
++++ b/arch/mips/ralink/mt7621.c
+@@ -20,31 +20,41 @@
+ 
+ #include "common.h"
+ 
+-static void *detect_magic __initdata = detect_memory_region;
++#define MT7621_MEM_TEST_PATTERN         0xaa5555aa
++
++static u32 detect_magic __initdata;
+ 
+ phys_addr_t mips_cpc_default_phys_base(void)
+ {
+ 	panic("Cannot detect cpc address");
+ }
+ 
++static bool __init mt7621_addr_wraparound_test(phys_addr_t size)
++{
++	void *dm = (void *)KSEG1ADDR(&detect_magic);
++
++	if (CPHYSADDR(dm + size) >= MT7621_LOWMEM_MAX_SIZE)
++		return true;
++	__raw_writel(MT7621_MEM_TEST_PATTERN, dm);
++	if (__raw_readl(dm) != __raw_readl(dm + size))
++		return false;
++	__raw_writel(~MT7621_MEM_TEST_PATTERN, dm);
++	return __raw_readl(dm) == __raw_readl(dm + size);
++}
++
+ static void __init mt7621_memory_detect(void)
+ {
+-	void *dm = &detect_magic;
+ 	phys_addr_t size;
+ 
+-	for (size = 32 * SZ_1M; size < 256 * SZ_1M; size <<= 1) {
+-		if (!__builtin_memcmp(dm, dm + size, sizeof(detect_magic)))
+-			break;
++	for (size = 32 * SZ_1M; size <= 256 * SZ_1M; size <<= 1) {
++		if (mt7621_addr_wraparound_test(size)) {
++			memblock_add(MT7621_LOWMEM_BASE, size);
++			return;
++		}
+ 	}
+ 
+-	if ((size == 256 * SZ_1M) &&
+-	    (CPHYSADDR(dm + size) < MT7621_LOWMEM_MAX_SIZE) &&
+-	    __builtin_memcmp(dm, dm + size, sizeof(detect_magic))) {
+-		memblock_add(MT7621_LOWMEM_BASE, MT7621_LOWMEM_MAX_SIZE);
+-		memblock_add(MT7621_HIGHMEM_BASE, MT7621_HIGHMEM_SIZE);
+-	} else {
+-		memblock_add(MT7621_LOWMEM_BASE, size);
+-	}
++	memblock_add(MT7621_LOWMEM_BASE, MT7621_LOWMEM_MAX_SIZE);
++	memblock_add(MT7621_HIGHMEM_BASE, MT7621_HIGHMEM_SIZE);
+ }
+ 
+ void __init ralink_of_remap(void)
+diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
+index 7ebaef10ea1b6..ac7a25298a04a 100644
+--- a/arch/riscv/mm/Makefile
++++ b/arch/riscv/mm/Makefile
+@@ -24,6 +24,9 @@ obj-$(CONFIG_KASAN)   += kasan_init.o
+ ifdef CONFIG_KASAN
+ KASAN_SANITIZE_kasan_init.o := n
+ KASAN_SANITIZE_init.o := n
++ifdef CONFIG_DEBUG_VIRTUAL
++KASAN_SANITIZE_physaddr.o := n
++endif
+ endif
+ 
+ obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
+diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
+index 54294f83513d1..e26e367a3d9ef 100644
+--- a/arch/riscv/mm/kasan_init.c
++++ b/arch/riscv/mm/kasan_init.c
+@@ -22,8 +22,7 @@ asmlinkage void __init kasan_early_init(void)
+ 
+ 	for (i = 0; i < PTRS_PER_PTE; ++i)
+ 		set_pte(kasan_early_shadow_pte + i,
+-			mk_pte(virt_to_page(kasan_early_shadow_page),
+-			       PAGE_KERNEL));
++			pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL));
+ 
+ 	for (i = 0; i < PTRS_PER_PMD; ++i)
+ 		set_pmd(kasan_early_shadow_pmd + i,
+diff --git a/arch/s390/include/asm/extable.h b/arch/s390/include/asm/extable.h
+index 16dc57dd90b30..8511f0e59290f 100644
+--- a/arch/s390/include/asm/extable.h
++++ b/arch/s390/include/asm/extable.h
+@@ -69,8 +69,13 @@ static inline void swap_ex_entry_fixup(struct exception_table_entry *a,
+ {
+ 	a->fixup = b->fixup + delta;
+ 	b->fixup = tmp.fixup - delta;
+-	a->handler = b->handler + delta;
+-	b->handler = tmp.handler - delta;
++	a->handler = b->handler;
++	if (a->handler)
++		a->handler += delta;
++	b->handler = tmp.handler;
++	if (b->handler)
++		b->handler -= delta;
+ }
++#define swap_ex_entry_fixup swap_ex_entry_fixup
+ 
+ #endif
+diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
+index 267f70f4393f7..6f80ec9c04be9 100644
+--- a/arch/s390/include/asm/ftrace.h
++++ b/arch/s390/include/asm/ftrace.h
+@@ -47,15 +47,17 @@ struct ftrace_regs {
+ 
+ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
+ {
+-	return &fregs->regs;
++	struct pt_regs *regs = &fregs->regs;
++
++	if (test_pt_regs_flag(regs, PIF_FTRACE_FULL_REGS))
++		return regs;
++	return NULL;
+ }
+ 
+ static __always_inline void ftrace_instruction_pointer_set(struct ftrace_regs *fregs,
+ 							   unsigned long ip)
+ {
+-	struct pt_regs *regs = arch_ftrace_get_regs(fregs);
+-
+-	regs->psw.addr = ip;
++	fregs->regs.psw.addr = ip;
+ }
+ 
+ /*
+diff --git a/arch/s390/include/asm/ptrace.h b/arch/s390/include/asm/ptrace.h
+index 4ffa8e7f0ed3a..ddb70fb13fbc9 100644
+--- a/arch/s390/include/asm/ptrace.h
++++ b/arch/s390/include/asm/ptrace.h
+@@ -15,11 +15,13 @@
+ #define PIF_EXECVE_PGSTE_RESTART	1	/* restart execve for PGSTE binaries */
+ #define PIF_SYSCALL_RET_SET		2	/* return value was set via ptrace */
+ #define PIF_GUEST_FAULT			3	/* indicates program check in sie64a */
++#define PIF_FTRACE_FULL_REGS		4	/* all register contents valid (ftrace) */
+ 
+ #define _PIF_SYSCALL			BIT(PIF_SYSCALL)
+ #define _PIF_EXECVE_PGSTE_RESTART	BIT(PIF_EXECVE_PGSTE_RESTART)
+ #define _PIF_SYSCALL_RET_SET		BIT(PIF_SYSCALL_RET_SET)
+ #define _PIF_GUEST_FAULT		BIT(PIF_GUEST_FAULT)
++#define _PIF_FTRACE_FULL_REGS		BIT(PIF_FTRACE_FULL_REGS)
+ 
+ #ifndef __ASSEMBLY__
+ 
+diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
+index 21d62d8b6b9af..89c0870d56792 100644
+--- a/arch/s390/kernel/ftrace.c
++++ b/arch/s390/kernel/ftrace.c
+@@ -159,9 +159,38 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
+ 	return 0;
+ }
+ 
++static struct ftrace_hotpatch_trampoline *ftrace_get_trampoline(struct dyn_ftrace *rec)
++{
++	struct ftrace_hotpatch_trampoline *trampoline;
++	struct ftrace_insn insn;
++	s64 disp;
++	u16 opc;
++
++	if (copy_from_kernel_nofault(&insn, (void *)rec->ip, sizeof(insn)))
++		return ERR_PTR(-EFAULT);
++	disp = (s64)insn.disp * 2;
++	trampoline = (void *)(rec->ip + disp);
++	if (get_kernel_nofault(opc, &trampoline->brasl_opc))
++		return ERR_PTR(-EFAULT);
++	if (opc != 0xc015)
++		return ERR_PTR(-EINVAL);
++	return trampoline;
++}
++
+ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ 		       unsigned long addr)
+ {
++	struct ftrace_hotpatch_trampoline *trampoline;
++	u64 old;
++
++	trampoline = ftrace_get_trampoline(rec);
++	if (IS_ERR(trampoline))
++		return PTR_ERR(trampoline);
++	if (get_kernel_nofault(old, &trampoline->interceptor))
++		return -EFAULT;
++	if (old != old_addr)
++		return -EINVAL;
++	s390_kernel_write(&trampoline->interceptor, &addr, sizeof(addr));
+ 	return 0;
+ }
+ 
+@@ -188,6 +217,12 @@ static void brcl_enable(void *brcl)
+ 
+ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ {
++	struct ftrace_hotpatch_trampoline *trampoline;
++
++	trampoline = ftrace_get_trampoline(rec);
++	if (IS_ERR(trampoline))
++		return PTR_ERR(trampoline);
++	s390_kernel_write(&trampoline->interceptor, &addr, sizeof(addr));
+ 	brcl_enable((void *)rec->ip);
+ 	return 0;
+ }
+@@ -291,7 +326,7 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
+ 
+ 	regs = ftrace_get_regs(fregs);
+ 	p = get_kprobe((kprobe_opcode_t *)ip);
+-	if (unlikely(!p) || kprobe_disabled(p))
++	if (!regs || unlikely(!p) || kprobe_disabled(p))
+ 		goto out;
+ 
+ 	if (kprobe_running()) {
+diff --git a/arch/s390/kernel/mcount.S b/arch/s390/kernel/mcount.S
+index 39bcc0e39a10d..a24177dcd12a8 100644
+--- a/arch/s390/kernel/mcount.S
++++ b/arch/s390/kernel/mcount.S
+@@ -27,6 +27,7 @@ ENDPROC(ftrace_stub)
+ #define STACK_PTREGS_GPRS	(STACK_PTREGS + __PT_GPRS)
+ #define STACK_PTREGS_PSW	(STACK_PTREGS + __PT_PSW)
+ #define STACK_PTREGS_ORIG_GPR2	(STACK_PTREGS + __PT_ORIG_GPR2)
++#define STACK_PTREGS_FLAGS	(STACK_PTREGS + __PT_FLAGS)
+ #ifdef __PACK_STACK
+ /* allocate just enough for r14, r15 and backchain */
+ #define TRACED_FUNC_FRAME_SIZE	24
+@@ -57,6 +58,14 @@ ENDPROC(ftrace_stub)
+ 	.if \allregs == 1
+ 	stg	%r14,(STACK_PTREGS_PSW)(%r15)
+ 	stosm	(STACK_PTREGS_PSW)(%r15),0
++#ifdef CONFIG_HAVE_MARCH_Z10_FEATURES
++	mvghi	STACK_PTREGS_FLAGS(%r15),_PIF_FTRACE_FULL_REGS
++#else
++	lghi	%r14,_PIF_FTRACE_FULL_REGS
++	stg	%r14,STACK_PTREGS_FLAGS(%r15)
++#endif
++	.else
++	xc	STACK_PTREGS_FLAGS(8,%r15),STACK_PTREGS_FLAGS(%r15)
+ 	.endif
+ 
+ 	lg	%r14,(__SF_GPRS+8*8)(%r1)	# restore original return address
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 225ab2d0a4c60..65a31cb0611f3 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -800,6 +800,8 @@ static void __init check_initrd(void)
+ static void __init reserve_kernel(void)
+ {
+ 	memblock_reserve(0, STARTUP_NORMAL_OFFSET);
++	memblock_reserve(OLDMEM_BASE, sizeof(unsigned long));
++	memblock_reserve(OLDMEM_SIZE, sizeof(unsigned long));
+ 	memblock_reserve(__amode31_base, __eamode31 - __samode31);
+ 	memblock_reserve(__pa(sclp_early_sccb), EXT_SCCB_READ_SCP);
+ 	memblock_reserve(__pa(_stext), _end - _stext);
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index 462dd8e9b03d5..f6ccf92203c58 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -239,6 +239,9 @@ static void __init kvmclock_init_mem(void)
+ 
+ static int __init kvm_setup_vsyscall_timeinfo(void)
+ {
++	if (!kvm_para_available())
++		return 0;
++
+ 	kvmclock_init_mem();
+ 
+ #ifdef CONFIG_X86_64
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 84e23b9864f4c..d70eee9e67163 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3573,7 +3573,7 @@ set_root_pgd:
+ out_unlock:
+ 	write_unlock(&vcpu->kvm->mmu_lock);
+ 
+-	return 0;
++	return r;
+ }
+ 
+ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0714fa0e7ede0..c6eb3e45e3d80 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4163,6 +4163,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ 	case KVM_CAP_SREGS2:
+ 	case KVM_CAP_EXIT_ON_EMULATION_FAILURE:
+ 	case KVM_CAP_VCPU_ATTRIBUTES:
++	case KVM_CAP_ENABLE_CAP:
+ 		r = 1;
+ 		break;
+ 	case KVM_CAP_EXIT_HYPERCALL:
+diff --git a/block/blk-map.c b/block/blk-map.c
+index 4526adde01564..c7f71d83eff18 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -446,7 +446,7 @@ static struct bio *bio_copy_kern(struct request_queue *q, void *data,
+ 		if (bytes > len)
+ 			bytes = len;
+ 
+-		page = alloc_page(GFP_NOIO | gfp_mask);
++		page = alloc_page(GFP_NOIO | __GFP_ZERO | gfp_mask);
+ 		if (!page)
+ 			goto cleanup;
+ 
+diff --git a/drivers/ata/pata_hpt37x.c b/drivers/ata/pata_hpt37x.c
+index ae8375e9d2681..9d371859e81ed 100644
+--- a/drivers/ata/pata_hpt37x.c
++++ b/drivers/ata/pata_hpt37x.c
+@@ -964,14 +964,14 @@ static int hpt37x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
+ 
+ 	if ((freq >> 12) != 0xABCDE) {
+ 		int i;
+-		u8 sr;
++		u16 sr;
+ 		u32 total = 0;
+ 
+ 		pr_warn("BIOS has not set timing clocks\n");
+ 
+ 		/* This is the process the HPT371 BIOS is reported to use */
+ 		for (i = 0; i < 128; i++) {
+-			pci_read_config_byte(dev, 0x78, &sr);
++			pci_read_config_word(dev, 0x78, &sr);
+ 			total += sr & 0x1FF;
+ 			udelay(15);
+ 		}
+diff --git a/drivers/auxdisplay/lcd2s.c b/drivers/auxdisplay/lcd2s.c
+index 38ba08628ccb3..2578b2d454397 100644
+--- a/drivers/auxdisplay/lcd2s.c
++++ b/drivers/auxdisplay/lcd2s.c
+@@ -238,7 +238,7 @@ static int lcd2s_redefine_char(struct charlcd *lcd, char *esc)
+ 	if (buf[1] > 7)
+ 		return 1;
+ 
+-	i = 0;
++	i = 2;
+ 	shift = 0;
+ 	value = 0;
+ 	while (*esc && i < LCD2S_CHARACTER_SIZE + 2) {
+@@ -298,6 +298,10 @@ static int lcd2s_i2c_probe(struct i2c_client *i2c,
+ 			I2C_FUNC_SMBUS_WRITE_BLOCK_DATA))
+ 		return -EIO;
+ 
++	lcd2s = devm_kzalloc(&i2c->dev, sizeof(*lcd2s), GFP_KERNEL);
++	if (!lcd2s)
++		return -ENOMEM;
++
+ 	/* Test, if the display is responding */
+ 	err = lcd2s_i2c_smbus_write_byte(i2c, LCD2S_CMD_DISPLAY_OFF);
+ 	if (err < 0)
+@@ -307,12 +311,6 @@ static int lcd2s_i2c_probe(struct i2c_client *i2c,
+ 	if (!lcd)
+ 		return -ENOMEM;
+ 
+-	lcd2s = kzalloc(sizeof(struct lcd2s_data), GFP_KERNEL);
+-	if (!lcd2s) {
+-		err = -ENOMEM;
+-		goto fail1;
+-	}
+-
+ 	lcd->drvdata = lcd2s;
+ 	lcd2s->i2c = i2c;
+ 	lcd2s->charlcd = lcd;
+@@ -321,26 +319,24 @@ static int lcd2s_i2c_probe(struct i2c_client *i2c,
+ 	err = device_property_read_u32(&i2c->dev, "display-height-chars",
+ 			&lcd->height);
+ 	if (err)
+-		goto fail2;
++		goto fail1;
+ 
+ 	err = device_property_read_u32(&i2c->dev, "display-width-chars",
+ 			&lcd->width);
+ 	if (err)
+-		goto fail2;
++		goto fail1;
+ 
+ 	lcd->ops = &lcd2s_ops;
+ 
+ 	err = charlcd_register(lcd2s->charlcd);
+ 	if (err)
+-		goto fail2;
++		goto fail1;
+ 
+ 	i2c_set_clientdata(i2c, lcd2s);
+ 	return 0;
+ 
+-fail2:
+-	kfree(lcd2s);
+ fail1:
+-	kfree(lcd);
++	charlcd_free(lcd2s->charlcd);
+ 	return err;
+ }
+ 
+@@ -349,7 +345,7 @@ static int lcd2s_i2c_remove(struct i2c_client *i2c)
+ 	struct lcd2s_data *lcd2s = i2c_get_clientdata(i2c);
+ 
+ 	charlcd_unregister(lcd2s->charlcd);
+-	kfree(lcd2s->charlcd);
++	charlcd_free(lcd2s->charlcd);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index c3a36cfaa855a..fdb4798cb0065 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -79,6 +79,7 @@
+ #include <linux/ioprio.h>
+ #include <linux/blk-cgroup.h>
+ #include <linux/sched/mm.h>
++#include <linux/statfs.h>
+ 
+ #include "loop.h"
+ 
+@@ -774,8 +775,13 @@ static void loop_config_discard(struct loop_device *lo)
+ 		granularity = 0;
+ 
+ 	} else {
++		struct kstatfs sbuf;
++
+ 		max_discard_sectors = UINT_MAX >> 9;
+-		granularity = inode->i_sb->s_blocksize;
++		if (!vfs_statfs(&file->f_path, &sbuf))
++			granularity = sbuf.f_bsize;
++		else
++			max_discard_sectors = 0;
+ 	}
+ 
+ 	if (max_discard_sectors) {
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 5c40ca1d4740e..1fccb457fcc54 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -241,8 +241,7 @@ static void __init dmtimer_systimer_assign_alwon(void)
+ 	bool quirk_unreliable_oscillator = false;
+ 
+ 	/* Quirk unreliable 32 KiHz oscillator with incomplete dts */
+-	if (of_machine_is_compatible("ti,omap3-beagle-ab4") ||
+-	    of_machine_is_compatible("timll,omap3-devkit8000")) {
++	if (of_machine_is_compatible("ti,omap3-beagle-ab4")) {
+ 		quirk_unreliable_oscillator = true;
+ 		counter_32k = -ENODEV;
+ 	}
+diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
+index 7f72b3f4cd1ae..19ac95c0098f0 100644
+--- a/drivers/dma/sh/shdma-base.c
++++ b/drivers/dma/sh/shdma-base.c
+@@ -115,8 +115,10 @@ static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
+ 		ret = pm_runtime_get(schan->dev);
+ 
+ 		spin_unlock_irq(&schan->chan_lock);
+-		if (ret < 0)
++		if (ret < 0) {
+ 			dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
++			pm_runtime_put(schan->dev);
++		}
+ 
+ 		pm_runtime_barrier(schan->dev);
+ 
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index b406b3f78f467..d76bab3aaac45 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -2112,7 +2112,7 @@ static void __exit scmi_driver_exit(void)
+ }
+ module_exit(scmi_driver_exit);
+ 
+-MODULE_ALIAS("platform: arm-scmi");
++MODULE_ALIAS("platform:arm-scmi");
+ MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+ MODULE_DESCRIPTION("ARM SCMI protocol driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/firmware/efi/libstub/riscv-stub.c b/drivers/firmware/efi/libstub/riscv-stub.c
+index 380e4e2513994..9c460843442f5 100644
+--- a/drivers/firmware/efi/libstub/riscv-stub.c
++++ b/drivers/firmware/efi/libstub/riscv-stub.c
+@@ -25,7 +25,7 @@ typedef void __noreturn (*jump_kernel_func)(unsigned int, unsigned long);
+ 
+ static u32 hartid;
+ 
+-static u32 get_boot_hartid_from_fdt(void)
++static int get_boot_hartid_from_fdt(void)
+ {
+ 	const void *fdt;
+ 	int chosen_node, len;
+@@ -33,23 +33,26 @@ static u32 get_boot_hartid_from_fdt(void)
+ 
+ 	fdt = get_efi_config_table(DEVICE_TREE_GUID);
+ 	if (!fdt)
+-		return U32_MAX;
++		return -EINVAL;
+ 
+ 	chosen_node = fdt_path_offset(fdt, "/chosen");
+ 	if (chosen_node < 0)
+-		return U32_MAX;
++		return -EINVAL;
+ 
+ 	prop = fdt_getprop((void *)fdt, chosen_node, "boot-hartid", &len);
+ 	if (!prop || len != sizeof(u32))
+-		return U32_MAX;
++		return -EINVAL;
+ 
+-	return fdt32_to_cpu(*prop);
++	hartid = fdt32_to_cpu(*prop);
++	return 0;
+ }
+ 
+ efi_status_t check_platform_features(void)
+ {
+-	hartid = get_boot_hartid_from_fdt();
+-	if (hartid == U32_MAX) {
++	int ret;
++
++	ret = get_boot_hartid_from_fdt();
++	if (ret) {
+ 		efi_err("/chosen/boot-hartid missing or invalid!\n");
+ 		return EFI_UNSUPPORTED;
+ 	}
+diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c
+index abdc8a6a39631..cae590bd08f27 100644
+--- a/drivers/firmware/efi/vars.c
++++ b/drivers/firmware/efi/vars.c
+@@ -742,6 +742,7 @@ int efivar_entry_set_safe(efi_char16_t *name, efi_guid_t vendor, u32 attributes,
+ {
+ 	const struct efivar_operations *ops;
+ 	efi_status_t status;
++	unsigned long varsize;
+ 
+ 	if (!__efivars)
+ 		return -EINVAL;
+@@ -764,15 +765,17 @@ int efivar_entry_set_safe(efi_char16_t *name, efi_guid_t vendor, u32 attributes,
+ 		return efivar_entry_set_nonblocking(name, vendor, attributes,
+ 						    size, data);
+ 
++	varsize = size + ucs2_strsize(name, 1024);
+ 	if (!block) {
+ 		if (down_trylock(&efivars_lock))
+ 			return -EBUSY;
++		status = check_var_size_nonblocking(attributes, varsize);
+ 	} else {
+ 		if (down_interruptible(&efivars_lock))
+ 			return -EINTR;
++		status = check_var_size(attributes, varsize);
+ 	}
+ 
+-	status = check_var_size(attributes, size + ucs2_strsize(name, 1024));
+ 	if (status != EFI_SUCCESS) {
+ 		up(&efivars_lock);
+ 		return -ENOSPC;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 0e7dc23f78e7f..e73c09d05b6ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -768,11 +768,17 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+  * Check if all VM PDs/PTs are ready for updates
+  *
+  * Returns:
+- * True if eviction list is empty.
++ * True if VM is not evicting.
+  */
+ bool amdgpu_vm_ready(struct amdgpu_vm *vm)
+ {
+-	return list_empty(&vm->evicted);
++	bool ret;
++
++	amdgpu_vm_eviction_lock(vm);
++	ret = !vm->evicting;
++	amdgpu_vm_eviction_unlock(vm);
++
++	return ret && list_empty(&vm->evicted);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 94e75199d9428..135ea1c422f2c 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -4454,7 +4454,9 @@ bool dp_retrieve_lttpr_cap(struct dc_link *link)
+ 				lttpr_dpcd_data,
+ 				sizeof(lttpr_dpcd_data));
+ 		if (status != DC_OK) {
+-			dm_error("%s: Read LTTPR caps data failed.\n", __func__);
++#if defined(CONFIG_DRM_AMD_DC_DCN)
++			DC_LOG_DP2("%s: Read LTTPR caps data failed.\n", __func__);
++#endif
+ 			return false;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index b55118388d2d7..59e1b92f0d27e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1261,21 +1261,37 @@ static int sienna_cichlid_populate_umd_state_clk(struct smu_context *smu)
+ 				&dpm_context->dpm_tables.soc_table;
+ 	struct smu_umd_pstate_table *pstate_table =
+ 				&smu->pstate_table;
++	struct amdgpu_device *adev = smu->adev;
+ 
+ 	pstate_table->gfxclk_pstate.min = gfx_table->min;
+ 	pstate_table->gfxclk_pstate.peak = gfx_table->max;
+-	if (gfx_table->max >= SIENNA_CICHLID_UMD_PSTATE_PROFILING_GFXCLK)
+-		pstate_table->gfxclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_GFXCLK;
+ 
+ 	pstate_table->uclk_pstate.min = mem_table->min;
+ 	pstate_table->uclk_pstate.peak = mem_table->max;
+-	if (mem_table->max >= SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK)
+-		pstate_table->uclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK;
+ 
+ 	pstate_table->socclk_pstate.min = soc_table->min;
+ 	pstate_table->socclk_pstate.peak = soc_table->max;
+-	if (soc_table->max >= SIENNA_CICHLID_UMD_PSTATE_PROFILING_SOCCLK)
++
++	switch (adev->asic_type) {
++	case CHIP_SIENNA_CICHLID:
++	case CHIP_NAVY_FLOUNDER:
++		pstate_table->gfxclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_GFXCLK;
++		pstate_table->uclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK;
+ 		pstate_table->socclk_pstate.standard = SIENNA_CICHLID_UMD_PSTATE_PROFILING_SOCCLK;
++		break;
++	case CHIP_DIMGREY_CAVEFISH:
++		pstate_table->gfxclk_pstate.standard = DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_GFXCLK;
++		pstate_table->uclk_pstate.standard = DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_MEMCLK;
++		pstate_table->socclk_pstate.standard = DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_SOCCLK;
++		break;
++	case CHIP_BEIGE_GOBY:
++		pstate_table->gfxclk_pstate.standard = BEIGE_GOBY_UMD_PSTATE_PROFILING_GFXCLK;
++		pstate_table->uclk_pstate.standard = BEIGE_GOBY_UMD_PSTATE_PROFILING_MEMCLK;
++		pstate_table->socclk_pstate.standard = BEIGE_GOBY_UMD_PSTATE_PROFILING_SOCCLK;
++		break;
++	default:
++		break;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.h b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.h
+index 38cd0ece24f6b..42f705c7a36f8 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.h
+@@ -33,6 +33,14 @@ typedef enum {
+ #define SIENNA_CICHLID_UMD_PSTATE_PROFILING_SOCCLK    960
+ #define SIENNA_CICHLID_UMD_PSTATE_PROFILING_MEMCLK    1000
+ 
++#define DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_GFXCLK 1950
++#define DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_SOCCLK 960
++#define DIMGREY_CAVEFISH_UMD_PSTATE_PROFILING_MEMCLK 676
++
++#define BEIGE_GOBY_UMD_PSTATE_PROFILING_GFXCLK 2200
++#define BEIGE_GOBY_UMD_PSTATE_PROFILING_SOCCLK 960
++#define BEIGE_GOBY_UMD_PSTATE_PROFILING_MEMCLK 1000
++
+ extern void sienna_cichlid_set_ppt_funcs(struct smu_context *smu);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 83d06c16d4d74..b7f74c146deeb 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -1474,6 +1474,7 @@ static inline void ti_sn_gpio_unregister(void) {}
+ 
+ static void ti_sn65dsi86_runtime_disable(void *data)
+ {
++	pm_runtime_dont_use_autosuspend(data);
+ 	pm_runtime_disable(data);
+ }
+ 
+@@ -1533,11 +1534,11 @@ static int ti_sn65dsi86_probe(struct i2c_client *client,
+ 				     "failed to get reference clock\n");
+ 
+ 	pm_runtime_enable(dev);
++	pm_runtime_set_autosuspend_delay(pdata->dev, 500);
++	pm_runtime_use_autosuspend(pdata->dev);
+ 	ret = devm_add_action_or_reset(dev, ti_sn65dsi86_runtime_disable, dev);
+ 	if (ret)
+ 		return ret;
+-	pm_runtime_set_autosuspend_delay(pdata->dev, 500);
+-	pm_runtime_use_autosuspend(pdata->dev);
+ 
+ 	ti_sn65dsi86_debugfs_init(pdata);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+index 65a3e7fdb2b2c..95ff630157b9c 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+@@ -133,7 +133,7 @@ static int guc_action_slpc_unset_param(struct intel_guc *guc, u8 id)
+ {
+ 	u32 request[] = {
+ 		GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST,
+-		SLPC_EVENT(SLPC_EVENT_PARAMETER_UNSET, 2),
++		SLPC_EVENT(SLPC_EVENT_PARAMETER_UNSET, 1),
+ 		id,
+ 	};
+ 
+diff --git a/drivers/gpu/drm/i915/intel_pch.c b/drivers/gpu/drm/i915/intel_pch.c
+index d1d4b97b86f59..287f5a3d0b354 100644
+--- a/drivers/gpu/drm/i915/intel_pch.c
++++ b/drivers/gpu/drm/i915/intel_pch.c
+@@ -108,6 +108,7 @@ intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
+ 		/* Comet Lake V PCH is based on KBP, which is SPT compatible */
+ 		return PCH_SPT;
+ 	case INTEL_PCH_ICP_DEVICE_ID_TYPE:
++	case INTEL_PCH_ICP2_DEVICE_ID_TYPE:
+ 		drm_dbg_kms(&dev_priv->drm, "Found Ice Lake PCH\n");
+ 		drm_WARN_ON(&dev_priv->drm, !IS_ICELAKE(dev_priv));
+ 		return PCH_ICP;
+@@ -123,7 +124,6 @@ intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
+ 			    !IS_GEN9_BC(dev_priv));
+ 		return PCH_TGP;
+ 	case INTEL_PCH_JSP_DEVICE_ID_TYPE:
+-	case INTEL_PCH_JSP2_DEVICE_ID_TYPE:
+ 		drm_dbg_kms(&dev_priv->drm, "Found Jasper Lake PCH\n");
+ 		drm_WARN_ON(&dev_priv->drm, !IS_JSL_EHL(dev_priv));
+ 		return PCH_JSP;
+diff --git a/drivers/gpu/drm/i915/intel_pch.h b/drivers/gpu/drm/i915/intel_pch.h
+index 7c0d83d292dcc..994c56fcb1991 100644
+--- a/drivers/gpu/drm/i915/intel_pch.h
++++ b/drivers/gpu/drm/i915/intel_pch.h
+@@ -50,11 +50,11 @@ enum intel_pch {
+ #define INTEL_PCH_CMP2_DEVICE_ID_TYPE		0x0680
+ #define INTEL_PCH_CMP_V_DEVICE_ID_TYPE		0xA380
+ #define INTEL_PCH_ICP_DEVICE_ID_TYPE		0x3480
++#define INTEL_PCH_ICP2_DEVICE_ID_TYPE		0x3880
+ #define INTEL_PCH_MCC_DEVICE_ID_TYPE		0x4B00
+ #define INTEL_PCH_TGP_DEVICE_ID_TYPE		0xA080
+ #define INTEL_PCH_TGP2_DEVICE_ID_TYPE		0x4380
+ #define INTEL_PCH_JSP_DEVICE_ID_TYPE		0x4D80
+-#define INTEL_PCH_JSP2_DEVICE_ID_TYPE		0x3880
+ #define INTEL_PCH_ADP_DEVICE_ID_TYPE		0x7A80
+ #define INTEL_PCH_ADP2_DEVICE_ID_TYPE		0x5180
+ #define INTEL_PCH_P2X_DEVICE_ID_TYPE		0x7100
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index d3f32ffe299a8..2d7fc2c860eaf 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -89,6 +89,44 @@ static void amd_stop_all_sensor_v2(struct amd_mp2_dev *privdata)
+ 	writel(cmd_base.ul, privdata->mmio + AMD_C2P_MSG0);
+ }
+ 
++static void amd_sfh_clear_intr_v2(struct amd_mp2_dev *privdata)
++{
++	if (readl(privdata->mmio + AMD_P2C_MSG(4))) {
++		writel(0, privdata->mmio + AMD_P2C_MSG(4));
++		writel(0xf, privdata->mmio + AMD_P2C_MSG(5));
++	}
++}
++
++static void amd_sfh_clear_intr(struct amd_mp2_dev *privdata)
++{
++	if (privdata->mp2_ops->clear_intr)
++		privdata->mp2_ops->clear_intr(privdata);
++}
++
++static irqreturn_t amd_sfh_irq_handler(int irq, void *data)
++{
++	amd_sfh_clear_intr(data);
++
++	return IRQ_HANDLED;
++}
++
++static int amd_sfh_irq_init_v2(struct amd_mp2_dev *privdata)
++{
++	int rc;
++
++	pci_intx(privdata->pdev, true);
++
++	rc = devm_request_irq(&privdata->pdev->dev, privdata->pdev->irq,
++			      amd_sfh_irq_handler, 0, DRIVER_NAME, privdata);
++	if (rc) {
++		dev_err(&privdata->pdev->dev, "failed to request irq %d err=%d\n",
++			privdata->pdev->irq, rc);
++		return rc;
++	}
++
++	return 0;
++}
++
+ void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info info)
+ {
+ 	union sfh_cmd_param cmd_param;
+@@ -193,6 +231,8 @@ static void amd_mp2_pci_remove(void *privdata)
+ 	struct amd_mp2_dev *mp2 = privdata;
+ 	amd_sfh_hid_client_deinit(privdata);
+ 	mp2->mp2_ops->stop_all(mp2);
++	pci_intx(mp2->pdev, false);
++	amd_sfh_clear_intr(mp2);
+ }
+ 
+ static const struct amd_mp2_ops amd_sfh_ops_v2 = {
+@@ -200,6 +240,8 @@ static const struct amd_mp2_ops amd_sfh_ops_v2 = {
+ 	.stop = amd_stop_sensor_v2,
+ 	.stop_all = amd_stop_all_sensor_v2,
+ 	.response = amd_sfh_wait_response_v2,
++	.clear_intr = amd_sfh_clear_intr_v2,
++	.init_intr = amd_sfh_irq_init_v2,
+ };
+ 
+ static const struct amd_mp2_ops amd_sfh_ops = {
+@@ -225,6 +267,14 @@ static void mp2_select_ops(struct amd_mp2_dev *privdata)
+ 	}
+ }
+ 
++static int amd_sfh_irq_init(struct amd_mp2_dev *privdata)
++{
++	if (privdata->mp2_ops->init_intr)
++		return privdata->mp2_ops->init_intr(privdata);
++
++	return 0;
++}
++
+ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ 	struct amd_mp2_dev *privdata;
+@@ -261,9 +311,20 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 
+ 	mp2_select_ops(privdata);
+ 
++	rc = amd_sfh_irq_init(privdata);
++	if (rc) {
++		dev_err(&pdev->dev, "amd_sfh_irq_init failed\n");
++		return rc;
++	}
++
+ 	rc = amd_sfh_hid_client_init(privdata);
+-	if (rc)
++	if (rc) {
++		amd_sfh_clear_intr(privdata);
++		dev_err(&pdev->dev, "amd_sfh_hid_client_init failed\n");
+ 		return rc;
++	}
++
++	amd_sfh_clear_intr(privdata);
+ 
+ 	return devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);
+ }
+@@ -290,6 +351,9 @@ static int __maybe_unused amd_mp2_pci_resume(struct device *dev)
+ 		}
+ 	}
+ 
++	schedule_delayed_work(&cl_data->work_buffer, msecs_to_jiffies(AMD_SFH_IDLE_LOOP));
++	amd_sfh_clear_intr(mp2);
++
+ 	return 0;
+ }
+ 
+@@ -312,6 +376,9 @@ static int __maybe_unused amd_mp2_pci_suspend(struct device *dev)
+ 		}
+ 	}
+ 
++	cancel_delayed_work_sync(&cl_data->work_buffer);
++	amd_sfh_clear_intr(mp2);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+index 8a9c544c27aef..97b99861fae25 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+@@ -141,5 +141,7 @@ struct amd_mp2_ops {
+ 	 void (*stop)(struct amd_mp2_dev *privdata, u16 sensor_idx);
+ 	 void (*stop_all)(struct amd_mp2_dev *privdata);
+ 	 int (*response)(struct amd_mp2_dev *mp2, u8 sid, u32 sensor_sts);
++	 void (*clear_intr)(struct amd_mp2_dev *privdata);
++	 int (*init_intr)(struct amd_mp2_dev *privdata);
+ };
+ #endif
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index 7a92e2a04a09d..cbd7d318f87cf 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -825,7 +825,9 @@ static const char *keys[KEY_MAX + 1] = {
+ 	[KEY_F22] = "F22",			[KEY_F23] = "F23",
+ 	[KEY_F24] = "F24",			[KEY_PLAYCD] = "PlayCD",
+ 	[KEY_PAUSECD] = "PauseCD",		[KEY_PROG3] = "Prog3",
+-	[KEY_PROG4] = "Prog4",			[KEY_SUSPEND] = "Suspend",
++	[KEY_PROG4] = "Prog4",
++	[KEY_ALL_APPLICATIONS] = "AllApplications",
++	[KEY_SUSPEND] = "Suspend",
+ 	[KEY_CLOSE] = "Close",			[KEY_PLAY] = "Play",
+ 	[KEY_FASTFORWARD] = "FastForward",	[KEY_BASSBOOST] = "BassBoost",
+ 	[KEY_PRINT] = "Print",			[KEY_HP] = "HP",
+@@ -934,6 +936,7 @@ static const char *keys[KEY_MAX + 1] = {
+ 	[KEY_ASSISTANT] = "Assistant",
+ 	[KEY_KBD_LAYOUT_NEXT] = "KbdLayoutNext",
+ 	[KEY_EMOJI_PICKER] = "EmojiPicker",
++	[KEY_DICTATE] = "Dictate",
+ 	[KEY_BRIGHTNESS_MIN] = "BrightnessMin",
+ 	[KEY_BRIGHTNESS_MAX] = "BrightnessMax",
+ 	[KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 87fee137ff65e..b05d41848cd95 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -994,6 +994,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 		case 0x0cd: map_key_clear(KEY_PLAYPAUSE);	break;
+ 		case 0x0cf: map_key_clear(KEY_VOICECOMMAND);	break;
+ 
++		case 0x0d8: map_key_clear(KEY_DICTATE);		break;
+ 		case 0x0d9: map_key_clear(KEY_EMOJI_PICKER);	break;
+ 
+ 		case 0x0e0: map_abs_clear(ABS_VOLUME);		break;
+@@ -1085,6 +1086,8 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 
+ 		case 0x29d: map_key_clear(KEY_KBD_LAYOUT_NEXT);	break;
+ 
++		case 0x2a2: map_key_clear(KEY_ALL_APPLICATIONS);	break;
++
+ 		case 0x2c7: map_key_clear(KEY_KBDINPUTASSIST_PREV);		break;
+ 		case 0x2c8: map_key_clear(KEY_KBDINPUTASSIST_NEXT);		break;
+ 		case 0x2c9: map_key_clear(KEY_KBDINPUTASSIST_PREVGROUP);		break;
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
+index dce3928390176..37233bb483a17 100644
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -488,7 +488,7 @@ config I2C_BRCMSTB
+ 
+ config I2C_CADENCE
+ 	tristate "Cadence I2C Controller"
+-	depends on ARCH_ZYNQ || ARM64 || XTENSA
++	depends on ARCH_ZYNQ || ARM64 || XTENSA || COMPILE_TEST
+ 	help
+ 	  Say yes here to select Cadence I2C Host Controller. This controller is
+ 	  e.g. used by Xilinx Zynq.
+@@ -680,7 +680,7 @@ config I2C_IMG
+ 
+ config I2C_IMX
+ 	tristate "IMX I2C interface"
+-	depends on ARCH_MXC || ARCH_LAYERSCAPE || COLDFIRE
++	depends on ARCH_MXC || ARCH_LAYERSCAPE || COLDFIRE || COMPILE_TEST
+ 	select I2C_SLAVE
+ 	help
+ 	  Say Y here if you want to use the IIC bus controller on
+@@ -935,7 +935,7 @@ config I2C_QCOM_GENI
+ 
+ config I2C_QUP
+ 	tristate "Qualcomm QUP based I2C controller"
+-	depends on ARCH_QCOM
++	depends on ARCH_QCOM || COMPILE_TEST
+ 	help
+ 	  If you say yes to this option, support will be included for the
+ 	  built-in I2C interface on the Qualcomm SoCs.
+diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c
+index 37443edbf7546..ad3b124a2e376 100644
+--- a/drivers/i2c/busses/i2c-bcm2835.c
++++ b/drivers/i2c/busses/i2c-bcm2835.c
+@@ -23,6 +23,11 @@
+ #define BCM2835_I2C_FIFO	0x10
+ #define BCM2835_I2C_DIV		0x14
+ #define BCM2835_I2C_DEL		0x18
++/*
++ * 16-bit field for the number of SCL cycles to wait after rising SCL
++ * before deciding the slave is not responding. 0 disables the
++ * timeout detection.
++ */
+ #define BCM2835_I2C_CLKT	0x1c
+ 
+ #define BCM2835_I2C_C_READ	BIT(0)
+@@ -477,6 +482,12 @@ static int bcm2835_i2c_probe(struct platform_device *pdev)
+ 	adap->dev.of_node = pdev->dev.of_node;
+ 	adap->quirks = of_device_get_match_data(&pdev->dev);
+ 
++	/*
++	 * Disable the hardware clock stretching timeout. SMBUS
++	 * specifies a limit for how long the device can stretch the
++	 * clock, but core I2C doesn't.
++	 */
++	bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_CLKT, 0);
+ 	bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_C, 0);
+ 
+ 	ret = i2c_add_adapter(adap);
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index ccaeb24263854..c3139bc2aa0db 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -2285,6 +2285,12 @@ int input_register_device(struct input_dev *dev)
+ 	/* KEY_RESERVED is not supposed to be transmitted to userspace. */
+ 	__clear_bit(KEY_RESERVED, dev->keybit);
+ 
++	/* Buttonpads should not map BTN_RIGHT and/or BTN_MIDDLE. */
++	if (test_bit(INPUT_PROP_BUTTONPAD, dev->propbit)) {
++		__clear_bit(BTN_RIGHT, dev->keybit);
++		__clear_bit(BTN_MIDDLE, dev->keybit);
++	}
++
+ 	/* Make sure that bitmasks not mentioned in dev->evbit are clean. */
+ 	input_cleanse_bitmasks(dev);
+ 
+diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
+index 0c607da9ee10d..9417ee0b1eff8 100644
+--- a/drivers/input/keyboard/Kconfig
++++ b/drivers/input/keyboard/Kconfig
+@@ -556,7 +556,7 @@ config KEYBOARD_PMIC8XXX
+ 
+ config KEYBOARD_SAMSUNG
+ 	tristate "Samsung keypad support"
+-	depends on HAVE_CLK
++	depends on HAS_IOMEM && HAVE_CLK
+ 	select INPUT_MATRIXKMAP
+ 	help
+ 	  Say Y here if you want to use the keypad on your Samsung mobile
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 47af62c122672..e1758d5ffe421 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -186,55 +186,21 @@ static int elan_get_fwinfo(u16 ic_type, u8 iap_version, u16 *validpage_count,
+ 	return 0;
+ }
+ 
+-static int elan_enable_power(struct elan_tp_data *data)
++static int elan_set_power(struct elan_tp_data *data, bool on)
+ {
+ 	int repeat = ETP_RETRY_COUNT;
+ 	int error;
+ 
+-	error = regulator_enable(data->vcc);
+-	if (error) {
+-		dev_err(&data->client->dev,
+-			"failed to enable regulator: %d\n", error);
+-		return error;
+-	}
+-
+ 	do {
+-		error = data->ops->power_control(data->client, true);
++		error = data->ops->power_control(data->client, on);
+ 		if (error >= 0)
+ 			return 0;
+ 
+ 		msleep(30);
+ 	} while (--repeat > 0);
+ 
+-	dev_err(&data->client->dev, "failed to enable power: %d\n", error);
+-	return error;
+-}
+-
+-static int elan_disable_power(struct elan_tp_data *data)
+-{
+-	int repeat = ETP_RETRY_COUNT;
+-	int error;
+-
+-	do {
+-		error = data->ops->power_control(data->client, false);
+-		if (!error) {
+-			error = regulator_disable(data->vcc);
+-			if (error) {
+-				dev_err(&data->client->dev,
+-					"failed to disable regulator: %d\n",
+-					error);
+-				/* Attempt to power the chip back up */
+-				data->ops->power_control(data->client, true);
+-				break;
+-			}
+-
+-			return 0;
+-		}
+-
+-		msleep(30);
+-	} while (--repeat > 0);
+-
+-	dev_err(&data->client->dev, "failed to disable power: %d\n", error);
++	dev_err(&data->client->dev, "failed to set power %s: %d\n",
++		on ? "on" : "off", error);
+ 	return error;
+ }
+ 
+@@ -1399,9 +1365,19 @@ static int __maybe_unused elan_suspend(struct device *dev)
+ 		/* Enable wake from IRQ */
+ 		data->irq_wake = (enable_irq_wake(client->irq) == 0);
+ 	} else {
+-		ret = elan_disable_power(data);
++		ret = elan_set_power(data, false);
++		if (ret)
++			goto err;
++
++		ret = regulator_disable(data->vcc);
++		if (ret) {
++			dev_err(dev, "error %d disabling regulator\n", ret);
++			/* Attempt to power the chip back up */
++			elan_set_power(data, true);
++		}
+ 	}
+ 
++err:
+ 	mutex_unlock(&data->sysfs_mutex);
+ 	return ret;
+ }
+@@ -1412,12 +1388,18 @@ static int __maybe_unused elan_resume(struct device *dev)
+ 	struct elan_tp_data *data = i2c_get_clientdata(client);
+ 	int error;
+ 
+-	if (device_may_wakeup(dev) && data->irq_wake) {
++	if (!device_may_wakeup(dev)) {
++		error = regulator_enable(data->vcc);
++		if (error) {
++			dev_err(dev, "error %d enabling regulator\n", error);
++			goto err;
++		}
++	} else if (data->irq_wake) {
+ 		disable_irq_wake(client->irq);
+ 		data->irq_wake = false;
+ 	}
+ 
+-	error = elan_enable_power(data);
++	error = elan_set_power(data, true);
+ 	if (error) {
+ 		dev_err(dev, "power up when resuming failed: %d\n", error);
+ 		goto err;
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index 416815a525d67..bb95edf74415b 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -14,6 +14,7 @@
+ extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
+ extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
+ extern void amd_iommu_apply_erratum_63(u16 devid);
++extern void amd_iommu_restart_event_logging(struct amd_iommu *iommu);
+ extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
+ extern int amd_iommu_init_devices(void);
+ extern void amd_iommu_uninit_devices(void);
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index ffc89c4fb1205..47108ed44fbba 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -110,6 +110,7 @@
+ #define PASID_MASK		0x0000ffff
+ 
+ /* MMIO status bits */
++#define MMIO_STATUS_EVT_OVERFLOW_INT_MASK	(1 << 0)
+ #define MMIO_STATUS_EVT_INT_MASK	(1 << 1)
+ #define MMIO_STATUS_COM_WAIT_INT_MASK	(1 << 2)
+ #define MMIO_STATUS_PPR_INT_MASK	(1 << 6)
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index ce952b0afbfd9..63146cbf19c7a 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -657,6 +657,16 @@ static int __init alloc_command_buffer(struct amd_iommu *iommu)
+ 	return iommu->cmd_buf ? 0 : -ENOMEM;
+ }
+ 
++/*
++ * This function restarts event logging in case the IOMMU experienced
++ * an event log buffer overflow.
++ */
++void amd_iommu_restart_event_logging(struct amd_iommu *iommu)
++{
++	iommu_feature_disable(iommu, CONTROL_EVT_LOG_EN);
++	iommu_feature_enable(iommu, CONTROL_EVT_LOG_EN);
++}
++
+ /*
+  * This function resets the command buffer if the IOMMU stopped fetching
+  * commands from it.
+diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
+index 182c93a43efd8..1eddf557636d7 100644
+--- a/drivers/iommu/amd/io_pgtable.c
++++ b/drivers/iommu/amd/io_pgtable.c
+@@ -519,12 +519,6 @@ static void v1_free_pgtable(struct io_pgtable *iop)
+ 
+ 	dom = container_of(pgtable, struct protection_domain, iop);
+ 
+-	/* Update data structure */
+-	amd_iommu_domain_clr_pt_root(dom);
+-
+-	/* Make changes visible to IOMMUs */
+-	amd_iommu_domain_update(dom);
+-
+ 	/* Page-table is not visible to IOMMU anymore, so free it */
+ 	BUG_ON(pgtable->mode < PAGE_MODE_NONE ||
+ 	       pgtable->mode > PAGE_MODE_6_LEVEL);
+@@ -532,6 +526,12 @@ static void v1_free_pgtable(struct io_pgtable *iop)
+ 	root = (unsigned long)pgtable->root;
+ 	freelist = free_sub_pt(root, pgtable->mode, freelist);
+ 
++	/* Update data structure */
++	amd_iommu_domain_clr_pt_root(dom);
++
++	/* Make changes visible to IOMMUs */
++	amd_iommu_domain_update(dom);
++
+ 	free_page_list(freelist);
+ }
+ 
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 461f1844ed1fb..a18b549951bb8 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -764,7 +764,8 @@ amd_iommu_set_pci_msi_domain(struct device *dev, struct amd_iommu *iommu) { }
+ #endif /* !CONFIG_IRQ_REMAP */
+ 
+ #define AMD_IOMMU_INT_MASK	\
+-	(MMIO_STATUS_EVT_INT_MASK | \
++	(MMIO_STATUS_EVT_OVERFLOW_INT_MASK | \
++	 MMIO_STATUS_EVT_INT_MASK | \
+ 	 MMIO_STATUS_PPR_INT_MASK | \
+ 	 MMIO_STATUS_GALOG_INT_MASK)
+ 
+@@ -774,7 +775,7 @@ irqreturn_t amd_iommu_int_thread(int irq, void *data)
+ 	u32 status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
+ 
+ 	while (status & AMD_IOMMU_INT_MASK) {
+-		/* Enable EVT and PPR and GA interrupts again */
++		/* Enable interrupt sources again */
+ 		writel(AMD_IOMMU_INT_MASK,
+ 			iommu->mmio_base + MMIO_STATUS_OFFSET);
+ 
+@@ -795,6 +796,11 @@ irqreturn_t amd_iommu_int_thread(int irq, void *data)
+ 		}
+ #endif
+ 
++		if (status & MMIO_STATUS_EVT_OVERFLOW_INT_MASK) {
++			pr_info_ratelimited("IOMMU event log overflow\n");
++			amd_iommu_restart_event_logging(iommu);
++		}
++
+ 		/*
+ 		 * Hardware bug: ERBT1312
+ 		 * When re-enabling interrupt (by writing 1
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index b6a8f3282411f..674ee1bc01161 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2773,7 +2773,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ 	spin_unlock_irqrestore(&device_domain_lock, flags);
+ 
+ 	/* PASID table is mandatory for a PCI device in scalable mode. */
+-	if (dev && dev_is_pci(dev) && sm_supported(iommu)) {
++	if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
+ 		ret = intel_pasid_alloc_table(dev);
+ 		if (ret) {
+ 			dev_err(dev, "PASID table allocation failed\n");
+diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
+index e900e3c46903b..2561ce8a2ce8f 100644
+--- a/drivers/iommu/tegra-smmu.c
++++ b/drivers/iommu/tegra-smmu.c
+@@ -808,8 +808,10 @@ static struct tegra_smmu *tegra_smmu_find(struct device_node *np)
+ 		return NULL;
+ 
+ 	mc = platform_get_drvdata(pdev);
+-	if (!mc)
++	if (!mc) {
++		put_device(&pdev->dev);
+ 		return NULL;
++	}
+ 
+ 	return mc->smmu;
+ }
+diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c
+index 6382e1937cca2..c580acb8b1d34 100644
+--- a/drivers/net/arcnet/com20020-pci.c
++++ b/drivers/net/arcnet/com20020-pci.c
+@@ -138,6 +138,9 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ 		return -ENOMEM;
+ 
+ 	ci = (struct com20020_pci_card_info *)id->driver_data;
++	if (!ci)
++		return -EINVAL;
++
+ 	priv->ci = ci;
+ 	mm = &ci->misc_map;
+ 
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
+index fb07c33ba0c3c..78d0a5947ba1e 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
+@@ -1787,7 +1787,7 @@ static int es58x_open(struct net_device *netdev)
+ 	struct es58x_device *es58x_dev = es58x_priv(netdev)->es58x_dev;
+ 	int ret;
+ 
+-	if (atomic_inc_return(&es58x_dev->opened_channel_cnt) == 1) {
++	if (!es58x_dev->opened_channel_cnt) {
+ 		ret = es58x_alloc_rx_urbs(es58x_dev);
+ 		if (ret)
+ 			return ret;
+@@ -1805,12 +1805,13 @@ static int es58x_open(struct net_device *netdev)
+ 	if (ret)
+ 		goto free_urbs;
+ 
++	es58x_dev->opened_channel_cnt++;
+ 	netif_start_queue(netdev);
+ 
+ 	return ret;
+ 
+  free_urbs:
+-	if (atomic_dec_and_test(&es58x_dev->opened_channel_cnt))
++	if (!es58x_dev->opened_channel_cnt)
+ 		es58x_free_urbs(es58x_dev);
+ 	netdev_err(netdev, "%s: Could not open the network device: %pe\n",
+ 		   __func__, ERR_PTR(ret));
+@@ -1845,7 +1846,8 @@ static int es58x_stop(struct net_device *netdev)
+ 
+ 	es58x_flush_pending_tx_msg(netdev);
+ 
+-	if (atomic_dec_and_test(&es58x_dev->opened_channel_cnt))
++	es58x_dev->opened_channel_cnt--;
++	if (!es58x_dev->opened_channel_cnt)
+ 		es58x_free_urbs(es58x_dev);
+ 
+ 	return 0;
+@@ -2214,7 +2216,6 @@ static struct es58x_device *es58x_init_es58x_dev(struct usb_interface *intf,
+ 	init_usb_anchor(&es58x_dev->tx_urbs_idle);
+ 	init_usb_anchor(&es58x_dev->tx_urbs_busy);
+ 	atomic_set(&es58x_dev->tx_urbs_idle_cnt, 0);
+-	atomic_set(&es58x_dev->opened_channel_cnt, 0);
+ 	usb_set_intfdata(intf, es58x_dev);
+ 
+ 	es58x_dev->rx_pipe = usb_rcvbulkpipe(es58x_dev->udev,
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.h b/drivers/net/can/usb/etas_es58x/es58x_core.h
+index 826a15871573a..e5033cb5e6959 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.h
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.h
+@@ -373,8 +373,6 @@ struct es58x_operators {
+  *	queue wake/stop logic should prevent this URB from getting
+  *	empty. Please refer to es58x_get_tx_urb() for more details.
+  * @tx_urbs_idle_cnt: number of urbs in @tx_urbs_idle.
+- * @opened_channel_cnt: number of channels opened (c.f. es58x_open()
+- *	and es58x_stop()).
+  * @ktime_req_ns: kernel timestamp when es58x_set_realtime_diff_ns()
+  *	was called.
+  * @realtime_diff_ns: difference in nanoseconds between the clocks of
+@@ -384,6 +382,10 @@ struct es58x_operators {
+  *	in RX branches.
+  * @rx_max_packet_size: Maximum length of bulk-in URB.
+  * @num_can_ch: Number of CAN channel (i.e. number of elements of @netdev).
++ * @opened_channel_cnt: number of channels opened. Free of race
++ *	conditions because its two users (net_device_ops:ndo_open()
++ *	and net_device_ops:ndo_close()) guarantee that the network
++ *	stack big kernel lock (a.k.a. rtnl_mutex) is held.
+  * @rx_cmd_buf_len: Length of @rx_cmd_buf.
+  * @rx_cmd_buf: The device might split the URB commands in an
+  *	arbitrary amount of pieces. This buffer is used to concatenate
+@@ -406,7 +408,6 @@ struct es58x_device {
+ 	struct usb_anchor tx_urbs_busy;
+ 	struct usb_anchor tx_urbs_idle;
+ 	atomic_t tx_urbs_idle_cnt;
+-	atomic_t opened_channel_cnt;
+ 
+ 	u64 ktime_req_ns;
+ 	s64 realtime_diff_ns;
+@@ -415,6 +416,7 @@ struct es58x_device {
+ 
+ 	u16 rx_max_packet_size;
+ 	u8 num_can_ch;
++	u8 opened_channel_cnt;
+ 
+ 	u16 rx_cmd_buf_len;
+ 	union es58x_urb_cmd rx_cmd_buf;
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index 4d43aca2ff568..d9e03a7fede9d 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -191,8 +191,8 @@ struct gs_can {
+ struct gs_usb {
+ 	struct gs_can *canch[GS_MAX_INTF];
+ 	struct usb_anchor rx_submitted;
+-	atomic_t active_channels;
+ 	struct usb_device *udev;
++	u8 active_channels;
+ };
+ 
+ /* 'allocate' a tx context.
+@@ -590,7 +590,7 @@ static int gs_can_open(struct net_device *netdev)
+ 	if (rc)
+ 		return rc;
+ 
+-	if (atomic_add_return(1, &parent->active_channels) == 1) {
++	if (!parent->active_channels) {
+ 		for (i = 0; i < GS_MAX_RX_URBS; i++) {
+ 			struct urb *urb;
+ 			u8 *buf;
+@@ -691,6 +691,7 @@ static int gs_can_open(struct net_device *netdev)
+ 
+ 	dev->can.state = CAN_STATE_ERROR_ACTIVE;
+ 
++	parent->active_channels++;
+ 	if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
+ 		netif_start_queue(netdev);
+ 
+@@ -706,7 +707,8 @@ static int gs_can_close(struct net_device *netdev)
+ 	netif_stop_queue(netdev);
+ 
+ 	/* Stop polling */
+-	if (atomic_dec_and_test(&parent->active_channels))
++	parent->active_channels--;
++	if (!parent->active_channels)
+ 		usb_kill_anchored_urbs(&parent->rx_submitted);
+ 
+ 	/* Stop sending URBs */
+@@ -985,8 +987,6 @@ static int gs_usb_probe(struct usb_interface *intf,
+ 
+ 	init_usb_anchor(&dev->rx_submitted);
+ 
+-	atomic_set(&dev->active_channels, 0);
+-
+ 	usb_set_intfdata(intf, dev);
+ 	dev->udev = interface_to_usbdev(intf);
+ 
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 8a04302018dce..7ab9ab58de652 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -26,7 +26,7 @@ void ksz_update_port_member(struct ksz_device *dev, int port)
+ 	struct dsa_switch *ds = dev->ds;
+ 	u8 port_member = 0, cpu_port;
+ 	const struct dsa_port *dp;
+-	int i;
++	int i, j;
+ 
+ 	if (!dsa_is_user_port(ds, port))
+ 		return;
+@@ -45,13 +45,33 @@ void ksz_update_port_member(struct ksz_device *dev, int port)
+ 			continue;
+ 		if (!dp->bridge_dev || dp->bridge_dev != other_dp->bridge_dev)
+ 			continue;
++		if (other_p->stp_state != BR_STATE_FORWARDING)
++			continue;
+ 
+-		if (other_p->stp_state == BR_STATE_FORWARDING &&
+-		    p->stp_state == BR_STATE_FORWARDING) {
++		if (p->stp_state == BR_STATE_FORWARDING) {
+ 			val |= BIT(port);
+ 			port_member |= BIT(i);
+ 		}
+ 
++		/* Retain port [i]'s relationship to other ports than [port] */
++		for (j = 0; j < ds->num_ports; j++) {
++			const struct dsa_port *third_dp;
++			struct ksz_port *third_p;
++
++			if (j == i)
++				continue;
++			if (j == port)
++				continue;
++			if (!dsa_is_user_port(ds, j))
++				continue;
++			third_p = &dev->ports[j];
++			if (third_p->stp_state != BR_STATE_FORWARDING)
++				continue;
++			third_dp = dsa_to_port(ds, j);
++			if (third_dp->bridge_dev == dp->bridge_dev)
++				val |= BIT(j);
++		}
++
+ 		dev->dev_ops->cfg_port_member(dev, i, val | cpu_port);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
+index da41eee2f25c7..a06003bfa04b9 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
+@@ -3613,6 +3613,8 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
+ 	    MAC_STATS_ACCUM_SECS : (MAC_STATS_ACCUM_SECS * 10);
+ 	adapter->params.pci.vpd_cap_addr =
+ 	    pci_find_capability(adapter->pdev, PCI_CAP_ID_VPD);
++	if (!adapter->params.pci.vpd_cap_addr)
++		return -ENODEV;
+ 	ret = get_vpd_params(adapter, &adapter->params.vpd);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 1c6bc69197a53..a8b65c072f64e 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -309,7 +309,7 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
+ 	if (adapter->fw_done_rc) {
+ 		dev_err(dev, "Couldn't map LTB, rc = %d\n",
+ 			adapter->fw_done_rc);
+-		rc = -1;
++		rc = -EIO;
+ 		goto out;
+ 	}
+ 	rc = 0;
+@@ -541,13 +541,15 @@ static int init_stats_token(struct ibmvnic_adapter *adapter)
+ {
+ 	struct device *dev = &adapter->vdev->dev;
+ 	dma_addr_t stok;
++	int rc;
+ 
+ 	stok = dma_map_single(dev, &adapter->stats,
+ 			      sizeof(struct ibmvnic_statistics),
+ 			      DMA_FROM_DEVICE);
+-	if (dma_mapping_error(dev, stok)) {
+-		dev_err(dev, "Couldn't map stats buffer\n");
+-		return -1;
++	rc = dma_mapping_error(dev, stok);
++	if (rc) {
++		dev_err(dev, "Couldn't map stats buffer, rc = %d\n", rc);
++		return rc;
+ 	}
+ 
+ 	adapter->stats_token = stok;
+@@ -656,7 +658,7 @@ static int init_rx_pools(struct net_device *netdev)
+ 	u64 num_pools;
+ 	u64 pool_size;		/* # of buffers in one pool */
+ 	u64 buff_size;
+-	int i, j;
++	int i, j, rc;
+ 
+ 	pool_size = adapter->req_rx_add_entries_per_subcrq;
+ 	num_pools = adapter->req_rx_queues;
+@@ -675,7 +677,7 @@ static int init_rx_pools(struct net_device *netdev)
+ 				   GFP_KERNEL);
+ 	if (!adapter->rx_pool) {
+ 		dev_err(dev, "Failed to allocate rx pools\n");
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Set num_active_rx_pools early. If we fail below after partial
+@@ -698,6 +700,7 @@ static int init_rx_pools(struct net_device *netdev)
+ 					    GFP_KERNEL);
+ 		if (!rx_pool->free_map) {
+ 			dev_err(dev, "Couldn't alloc free_map %d\n", i);
++			rc = -ENOMEM;
+ 			goto out_release;
+ 		}
+ 
+@@ -706,6 +709,7 @@ static int init_rx_pools(struct net_device *netdev)
+ 					   GFP_KERNEL);
+ 		if (!rx_pool->rx_buff) {
+ 			dev_err(dev, "Couldn't alloc rx buffers\n");
++			rc = -ENOMEM;
+ 			goto out_release;
+ 		}
+ 	}
+@@ -719,8 +723,9 @@ update_ltb:
+ 		dev_dbg(dev, "Updating LTB for rx pool %d [%d, %d]\n",
+ 			i, rx_pool->size, rx_pool->buff_size);
+ 
+-		if (alloc_long_term_buff(adapter, &rx_pool->long_term_buff,
+-					 rx_pool->size * rx_pool->buff_size))
++		rc = alloc_long_term_buff(adapter, &rx_pool->long_term_buff,
++					  rx_pool->size * rx_pool->buff_size);
++		if (rc)
+ 			goto out;
+ 
+ 		for (j = 0; j < rx_pool->size; ++j) {
+@@ -757,7 +762,7 @@ out:
+ 	/* We failed to allocate one or more LTBs or map them on the VIOS.
+ 	 * Hold onto the pools and any LTBs that we did allocate/map.
+ 	 */
+-	return -1;
++	return rc;
+ }
+ 
+ static void release_vpd_data(struct ibmvnic_adapter *adapter)
+@@ -818,13 +823,13 @@ static int init_one_tx_pool(struct net_device *netdev,
+ 				   sizeof(struct ibmvnic_tx_buff),
+ 				   GFP_KERNEL);
+ 	if (!tx_pool->tx_buff)
+-		return -1;
++		return -ENOMEM;
+ 
+ 	tx_pool->free_map = kcalloc(pool_size, sizeof(int), GFP_KERNEL);
+ 	if (!tx_pool->free_map) {
+ 		kfree(tx_pool->tx_buff);
+ 		tx_pool->tx_buff = NULL;
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 
+ 	for (i = 0; i < pool_size; i++)
+@@ -915,7 +920,7 @@ static int init_tx_pools(struct net_device *netdev)
+ 	adapter->tx_pool = kcalloc(num_pools,
+ 				   sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
+ 	if (!adapter->tx_pool)
+-		return -1;
++		return -ENOMEM;
+ 
+ 	adapter->tso_pool = kcalloc(num_pools,
+ 				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
+@@ -925,7 +930,7 @@ static int init_tx_pools(struct net_device *netdev)
+ 	if (!adapter->tso_pool) {
+ 		kfree(adapter->tx_pool);
+ 		adapter->tx_pool = NULL;
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Set num_active_tx_pools early. If we fail below after partial
+@@ -1114,7 +1119,7 @@ static int ibmvnic_login(struct net_device *netdev)
+ 		retry = false;
+ 		if (retry_count > retries) {
+ 			netdev_warn(netdev, "Login attempts exceeded\n");
+-			return -1;
++			return -EACCES;
+ 		}
+ 
+ 		adapter->init_done_rc = 0;
+@@ -1155,25 +1160,26 @@ static int ibmvnic_login(struct net_device *netdev)
+ 							 timeout)) {
+ 				netdev_warn(netdev,
+ 					    "Capabilities query timed out\n");
+-				return -1;
++				return -ETIMEDOUT;
+ 			}
+ 
+ 			rc = init_sub_crqs(adapter);
+ 			if (rc) {
+ 				netdev_warn(netdev,
+ 					    "SCRQ initialization failed\n");
+-				return -1;
++				return rc;
+ 			}
+ 
+ 			rc = init_sub_crq_irqs(adapter);
+ 			if (rc) {
+ 				netdev_warn(netdev,
+ 					    "SCRQ irq initialization failed\n");
+-				return -1;
++				return rc;
+ 			}
+ 		} else if (adapter->init_done_rc) {
+-			netdev_warn(netdev, "Adapter login failed\n");
+-			return -1;
++			netdev_warn(netdev, "Adapter login failed, init_done_rc = %d\n",
++				    adapter->init_done_rc);
++			return -EIO;
+ 		}
+ 	} while (retry);
+ 
+@@ -1232,7 +1238,7 @@ static int set_link_state(struct ibmvnic_adapter *adapter, u8 link_state)
+ 		if (!wait_for_completion_timeout(&adapter->init_done,
+ 						 timeout)) {
+ 			netdev_err(netdev, "timeout setting link state\n");
+-			return -1;
++			return -ETIMEDOUT;
+ 		}
+ 
+ 		if (adapter->init_done_rc == PARTIALSUCCESS) {
+@@ -2206,6 +2212,19 @@ static const char *reset_reason_to_string(enum ibmvnic_reset_reason reason)
+ 	return "UNKNOWN";
+ }
+ 
++/*
++ * Initialize the init_done completion and return code values. We
++ * can get a transport event just after registering the CRQ and the
++ * tasklet will use this to communicate the transport event. To ensure
++ * we don't miss the notification/error, initialize these _before_
++ * registering the CRQ.
++ */
++static inline void reinit_init_done(struct ibmvnic_adapter *adapter)
++{
++	reinit_completion(&adapter->init_done);
++	adapter->init_done_rc = 0;
++}
++
+ /*
+  * do_reset returns zero if we are able to keep processing reset events, or
+  * non-zero if we hit a fatal error and must halt.
+@@ -2293,7 +2312,7 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 				/* If someone else changed the adapter state
+ 				 * when we dropped the rtnl, fail the reset
+ 				 */
+-				rc = -1;
++				rc = -EAGAIN;
+ 				goto out;
+ 			}
+ 			adapter->state = VNIC_CLOSED;
+@@ -2312,6 +2331,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 		 */
+ 		adapter->state = VNIC_PROBED;
+ 
++		reinit_init_done(adapter);
++
+ 		if (adapter->reset_reason == VNIC_RESET_CHANGE_PARAM) {
+ 			rc = init_crq_queue(adapter);
+ 		} else if (adapter->reset_reason == VNIC_RESET_MOBILITY) {
+@@ -2335,10 +2356,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 		}
+ 
+ 		rc = ibmvnic_reset_init(adapter, true);
+-		if (rc) {
+-			rc = IBMVNIC_INIT_FAILED;
++		if (rc)
+ 			goto out;
+-		}
+ 
+ 		/* If the adapter was in PROBE or DOWN state prior to the reset,
+ 		 * exit here.
+@@ -2457,7 +2476,8 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
+ 	 */
+ 	adapter->state = VNIC_PROBED;
+ 
+-	reinit_completion(&adapter->init_done);
++	reinit_init_done(adapter);
++
+ 	rc = init_crq_queue(adapter);
+ 	if (rc) {
+ 		netdev_err(adapter->netdev,
+@@ -2598,23 +2618,82 @@ out:
+ static void __ibmvnic_reset(struct work_struct *work)
+ {
+ 	struct ibmvnic_adapter *adapter;
+-	bool saved_state = false;
++	unsigned int timeout = 5000;
+ 	struct ibmvnic_rwi *tmprwi;
++	bool saved_state = false;
+ 	struct ibmvnic_rwi *rwi;
+ 	unsigned long flags;
+-	u32 reset_state;
++	struct device *dev;
++	bool need_reset;
+ 	int num_fails = 0;
++	u32 reset_state;
+ 	int rc = 0;
+ 
+ 	adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset);
++	dev = &adapter->vdev->dev;
+ 
+-	if (test_and_set_bit_lock(0, &adapter->resetting)) {
++	/* Wait for ibmvnic_probe() to complete. If probe is taking too long
++	 * or if another reset is in progress, defer work for now. If probe
++	 * eventually fails it will flush and terminate our work.
++	 *
++	 * Three possibilities here:
++	 * 1. Adapter being removed - just return
++	 * 2. Timed out on probe or another reset in progress - delay the work
++	 * 3. Completed probe - perform any resets in queue
++	 */
++	if (adapter->state == VNIC_PROBING &&
++	    !wait_for_completion_timeout(&adapter->probe_done, timeout)) {
++		dev_err(dev, "Reset thread timed out on probe\n");
+ 		queue_delayed_work(system_long_wq,
+ 				   &adapter->ibmvnic_delayed_reset,
+ 				   IBMVNIC_RESET_DELAY);
+ 		return;
+ 	}
+ 
++	/* adapter is done with probe (i.e., state is never VNIC_PROBING now) */
++	if (adapter->state == VNIC_REMOVING)
++		return;
++
++	/* ->rwi_list is stable now (no one else is removing entries) */
++
++	/* ibmvnic_probe() may have purged the reset queue after we were
++	 * scheduled to process a reset, so there may be no resets to process.
++	 * Before setting the ->resetting bit though, we have to make sure
++	 * that there is in fact a reset to process. Otherwise we may race
++	 * with ibmvnic_open() and end up leaving the vnic down:
++	 *
++	 *	__ibmvnic_reset()	    ibmvnic_open()
++	 *	-----------------	    --------------
++	 *
++	 *  set ->resetting bit
++	 *  				find ->resetting bit is set
++	 *  				set ->state to IBMVNIC_OPEN (i.e
++	 *  				assume reset will open device)
++	 *  				return
++	 *  find reset queue empty
++	 *  return
++	 *
++	 *  	Neither performed vnic login/open and vnic stays down
++	 *
++	 * If we hold the lock and conditionally set the bit, either we
++	 * or ibmvnic_open() will complete the open.
++	 */
++	need_reset = false;
++	spin_lock(&adapter->rwi_lock);
++	if (!list_empty(&adapter->rwi_list)) {
++		if (test_and_set_bit_lock(0, &adapter->resetting)) {
++			queue_delayed_work(system_long_wq,
++					   &adapter->ibmvnic_delayed_reset,
++					   IBMVNIC_RESET_DELAY);
++		} else {
++			need_reset = true;
++		}
++	}
++	spin_unlock(&adapter->rwi_lock);
++
++	if (!need_reset)
++		return;
++
+ 	rwi = get_next_rwi(adapter);
+ 	while (rwi) {
+ 		spin_lock_irqsave(&adapter->state_lock, flags);
+@@ -2731,12 +2810,23 @@ static void __ibmvnic_delayed_reset(struct work_struct *work)
+ 	__ibmvnic_reset(&adapter->ibmvnic_reset);
+ }
+ 
++static void flush_reset_queue(struct ibmvnic_adapter *adapter)
++{
++	struct list_head *entry, *tmp_entry;
++
++	if (!list_empty(&adapter->rwi_list)) {
++		list_for_each_safe(entry, tmp_entry, &adapter->rwi_list) {
++			list_del(entry);
++			kfree(list_entry(entry, struct ibmvnic_rwi, list));
++		}
++	}
++}
++
+ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 			 enum ibmvnic_reset_reason reason)
+ {
+-	struct list_head *entry, *tmp_entry;
+-	struct ibmvnic_rwi *rwi, *tmp;
+ 	struct net_device *netdev = adapter->netdev;
++	struct ibmvnic_rwi *rwi, *tmp;
+ 	unsigned long flags;
+ 	int ret;
+ 
+@@ -2755,13 +2845,6 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 		goto err;
+ 	}
+ 
+-	if (adapter->state == VNIC_PROBING) {
+-		netdev_warn(netdev, "Adapter reset during probe\n");
+-		adapter->init_done_rc = -EAGAIN;
+-		ret = EAGAIN;
+-		goto err;
+-	}
+-
+ 	list_for_each_entry(tmp, &adapter->rwi_list, list) {
+ 		if (tmp->reset_reason == reason) {
+ 			netdev_dbg(netdev, "Skipping matching reset, reason=%s\n",
+@@ -2779,10 +2862,9 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 	/* if we just received a transport event,
+ 	 * flush reset queue and process this reset
+ 	 */
+-	if (adapter->force_reset_recovery && !list_empty(&adapter->rwi_list)) {
+-		list_for_each_safe(entry, tmp_entry, &adapter->rwi_list)
+-			list_del(entry);
+-	}
++	if (adapter->force_reset_recovery)
++		flush_reset_queue(adapter);
++
+ 	rwi->reset_reason = reason;
+ 	list_add_tail(&rwi->list, &adapter->rwi_list);
+ 	netdev_dbg(adapter->netdev, "Scheduling reset (reason %s)\n",
+@@ -3777,7 +3859,7 @@ static int init_sub_crqs(struct ibmvnic_adapter *adapter)
+ 
+ 	allqueues = kcalloc(total_queues, sizeof(*allqueues), GFP_KERNEL);
+ 	if (!allqueues)
+-		return -1;
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < total_queues; i++) {
+ 		allqueues[i] = init_sub_crq_queue(adapter);
+@@ -3846,7 +3928,7 @@ tx_failed:
+ 	for (i = 0; i < registered_queues; i++)
+ 		release_sub_crq_queue(adapter, allqueues[i], 1);
+ 	kfree(allqueues);
+-	return -1;
++	return -ENOMEM;
+ }
+ 
+ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
+@@ -4225,7 +4307,7 @@ static int send_login(struct ibmvnic_adapter *adapter)
+ 	if (!adapter->tx_scrq || !adapter->rx_scrq) {
+ 		netdev_err(adapter->netdev,
+ 			   "RX or TX queues are not allocated, device login failed\n");
+-		return -1;
++		return -ENOMEM;
+ 	}
+ 
+ 	release_login_buffer(adapter);
+@@ -4345,7 +4427,7 @@ buf_map_failed:
+ 	kfree(login_buffer);
+ 	adapter->login_buf = NULL;
+ buf_alloc_failed:
+-	return -1;
++	return -ENOMEM;
+ }
+ 
+ static int send_request_map(struct ibmvnic_adapter *adapter, dma_addr_t addr,
+@@ -5317,9 +5399,9 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
+ 			}
+ 
+ 			if (!completion_done(&adapter->init_done)) {
+-				complete(&adapter->init_done);
+ 				if (!adapter->init_done_rc)
+ 					adapter->init_done_rc = -EAGAIN;
++				complete(&adapter->init_done);
+ 			}
+ 
+ 			break;
+@@ -5342,6 +5424,13 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
+ 			adapter->fw_done_rc = -EIO;
+ 			complete(&adapter->fw_done);
+ 		}
++
++		/* if we got here during crq-init, retry crq-init */
++		if (!completion_done(&adapter->init_done)) {
++			adapter->init_done_rc = -EAGAIN;
++			complete(&adapter->init_done);
++		}
++
+ 		if (!completion_done(&adapter->stats_done))
+ 			complete(&adapter->stats_done);
+ 		if (test_bit(0, &adapter->resetting))
+@@ -5664,10 +5753,6 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
+ 
+ 	adapter->from_passive_init = false;
+ 
+-	if (reset)
+-		reinit_completion(&adapter->init_done);
+-
+-	adapter->init_done_rc = 0;
+ 	rc = ibmvnic_send_crq_init(adapter);
+ 	if (rc) {
+ 		dev_err(dev, "Send crq init failed with error %d\n", rc);
+@@ -5676,18 +5761,20 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
+ 
+ 	if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+ 		dev_err(dev, "Initialization sequence timed out\n");
+-		return -1;
++		return -ETIMEDOUT;
+ 	}
+ 
+ 	if (adapter->init_done_rc) {
+ 		release_crq_queue(adapter);
++		dev_err(dev, "CRQ-init failed, %d\n", adapter->init_done_rc);
+ 		return adapter->init_done_rc;
+ 	}
+ 
+ 	if (adapter->from_passive_init) {
+ 		adapter->state = VNIC_OPEN;
+ 		adapter->from_passive_init = false;
+-		return -1;
++		dev_err(dev, "CRQ-init failed, passive-init\n");
++		return -EINVAL;
+ 	}
+ 
+ 	if (reset &&
+@@ -5726,6 +5813,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 	struct ibmvnic_adapter *adapter;
+ 	struct net_device *netdev;
+ 	unsigned char *mac_addr_p;
++	unsigned long flags;
+ 	bool init_success;
+ 	int rc;
+ 
+@@ -5770,6 +5858,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 	spin_lock_init(&adapter->rwi_lock);
+ 	spin_lock_init(&adapter->state_lock);
+ 	mutex_init(&adapter->fw_lock);
++	init_completion(&adapter->probe_done);
+ 	init_completion(&adapter->init_done);
+ 	init_completion(&adapter->fw_done);
+ 	init_completion(&adapter->reset_done);
+@@ -5780,6 +5869,33 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 
+ 	init_success = false;
+ 	do {
++		reinit_init_done(adapter);
++
++		/* clear any failovers we got in the previous pass
++		 * since we are reinitializing the CRQ
++		 */
++		adapter->failover_pending = false;
++
++		/* If we had already initialized CRQ, we may have one or
++		 * more resets queued already. Discard those and release
++		 * the CRQ before initializing the CRQ again.
++		 */
++		release_crq_queue(adapter);
++
++		/* Since we are still in PROBING state, __ibmvnic_reset()
++		 * will not access the ->rwi_list and since we released CRQ,
++		 * we won't get _new_ transport events. But there may be an
++		 * ongoing ibmvnic_reset() call. So serialize access to
++		 * rwi_list. If we win the race, ibmvnic_reset() could add
++		 * a reset after we purged, but that's ok - we just may
++		 * end up with an extra reset (i.e., similar to having two
++		 * or more resets in the queue at once), and the extra pass
++		 * through the reset worker is harmless.
++		 */
++		spin_lock_irqsave(&adapter->rwi_lock, flags);
++		flush_reset_queue(adapter);
++		spin_unlock_irqrestore(&adapter->rwi_lock, flags);
++
+ 		rc = init_crq_queue(adapter);
+ 		if (rc) {
+ 			dev_err(&dev->dev, "Couldn't initialize crq. rc=%d\n",
+@@ -5811,12 +5927,6 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 		goto ibmvnic_dev_file_err;
+ 
+ 	netif_carrier_off(netdev);
+-	rc = register_netdev(netdev);
+-	if (rc) {
+-		dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc);
+-		goto ibmvnic_register_fail;
+-	}
+-	dev_info(&dev->dev, "ibmvnic registered\n");
+ 
+ 	if (init_success) {
+ 		adapter->state = VNIC_PROBED;
+@@ -5829,6 +5939,16 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 
+ 	adapter->wait_for_reset = false;
+ 	adapter->last_reset_time = jiffies;
++
++	rc = register_netdev(netdev);
++	if (rc) {
++		dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc);
++		goto ibmvnic_register_fail;
++	}
++	dev_info(&dev->dev, "ibmvnic registered\n");
++
++	complete(&adapter->probe_done);
++
+ 	return 0;
+ 
+ ibmvnic_register_fail:
+@@ -5843,6 +5963,17 @@ ibmvnic_stats_fail:
+ ibmvnic_init_fail:
+ 	release_sub_crqs(adapter, 1);
+ 	release_crq_queue(adapter);
++
++	/* cleanup worker thread after releasing CRQ so we don't get
++	 * transport events (i.e new work items for the worker thread).
++	 */
++	adapter->state = VNIC_REMOVING;
++	complete(&adapter->probe_done);
++	flush_work(&adapter->ibmvnic_reset);
++	flush_delayed_work(&adapter->ibmvnic_delayed_reset);
++
++	flush_reset_queue(adapter);
++
+ 	mutex_destroy(&adapter->fw_lock);
+ 	free_netdev(netdev);
+ 
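The reordering in ibmvnic_handle_crq() above (set init_done_rc *before* calling complete()) matters because the waiter reads the return code as soon as the completion fires. The following userspace sketch models that pattern with pthreads instead of the kernel completion API; the struct and function names are illustrative, not the driver's, and the mutex here makes the sketch safe regardless of ordering, while the write order shown mirrors what the patch enforces:

```c
#include <assert.h>
#include <pthread.h>

/* Userspace stand-in for a kernel completion carrying a return code. */
struct completion_rc {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
	int rc;	/* must be valid before 'done' is observed */
};

static void complete_with_rc(struct completion_rc *c, int rc)
{
	pthread_mutex_lock(&c->lock);
	c->rc = rc;	/* write the return code first... */
	c->done = 1;	/* ...then publish the completion */
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

static int wait_for_completion_rc(struct completion_rc *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	int rc = c->rc;	/* safe: rc was set before done was published */
	pthread_mutex_unlock(&c->lock);
	return rc;
}
```

In the lockless kernel case, completing first and assigning the code afterwards opens exactly the window the hunk at `ibmvnic_handle_crq` closes: the waiter can wake up and read a stale init_done_rc.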
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index b8e42f67d897e..549a9b7b1a706 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -933,6 +933,7 @@ struct ibmvnic_adapter {
+ 
+ 	struct ibmvnic_tx_pool *tx_pool;
+ 	struct ibmvnic_tx_pool *tso_pool;
++	struct completion probe_done;
+ 	struct completion init_done;
+ 	int init_done_rc;
+ 
+diff --git a/drivers/net/ethernet/intel/e1000e/hw.h b/drivers/net/ethernet/intel/e1000e/hw.h
+index bcf680e838113..13382df2f2eff 100644
+--- a/drivers/net/ethernet/intel/e1000e/hw.h
++++ b/drivers/net/ethernet/intel/e1000e/hw.h
+@@ -630,6 +630,7 @@ struct e1000_phy_info {
+ 	bool disable_polarity_correction;
+ 	bool is_mdix;
+ 	bool polarity_correction;
++	bool reset_disable;
+ 	bool speed_downgraded;
+ 	bool autoneg_wait_to_complete;
+ };
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index c908c84b86d22..d60e2016d03c6 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -2050,6 +2050,10 @@ static s32 e1000_check_reset_block_ich8lan(struct e1000_hw *hw)
+ 	bool blocked = false;
+ 	int i = 0;
+ 
++	/* Check the PHY (LCD) reset flag */
++	if (hw->phy.reset_disable)
++		return true;
++
+ 	while ((blocked = !(er32(FWSM) & E1000_ICH_FWSM_RSPCIPHY)) &&
+ 	       (i++ < 30))
+ 		usleep_range(10000, 11000);
+@@ -4136,9 +4140,9 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
+ 		return ret_val;
+ 
+ 	if (!(data & valid_csum_mask)) {
+-		e_dbg("NVM Checksum Invalid\n");
++		e_dbg("NVM Checksum valid bit not set\n");
+ 
+-		if (hw->mac.type < e1000_pch_cnp) {
++		if (hw->mac.type < e1000_pch_tgp) {
+ 			data |= valid_csum_mask;
+ 			ret_val = e1000_write_nvm(hw, word, 1, &data);
+ 			if (ret_val)
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index 2504b11c3169f..638a3ddd7ada8 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -271,6 +271,7 @@
+ #define I217_CGFREG_ENABLE_MTA_RESET	0x0002
+ #define I217_MEMPWR			PHY_REG(772, 26)
+ #define I217_MEMPWR_DISABLE_SMB_RELEASE	0x0010
++#define I217_MEMPWR_MOEM		0x1000
+ 
+ /* Receive Address Initial CRC Calculation */
+ #define E1000_PCH_RAICC(_n)	(0x05F50 + ((_n) * 4))
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index c063128ed1df3..f2db4ff0003f7 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -6991,8 +6991,21 @@ static __maybe_unused int e1000e_pm_suspend(struct device *dev)
+ 	struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
+ 	struct e1000_adapter *adapter = netdev_priv(netdev);
+ 	struct pci_dev *pdev = to_pci_dev(dev);
++	struct e1000_hw *hw = &adapter->hw;
++	u16 phy_data;
+ 	int rc;
+ 
++	if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
++	    hw->mac.type >= e1000_pch_adp) {
++		/* Mask OEM Bits / Gig Disable / Restart AN (772_26[12] = 1) */
++		e1e_rphy(hw, I217_MEMPWR, &phy_data);
++		phy_data |= I217_MEMPWR_MOEM;
++		e1e_wphy(hw, I217_MEMPWR, phy_data);
++
++		/* Disable LCD reset */
++		hw->phy.reset_disable = true;
++	}
++
+ 	e1000e_flush_lpic(pdev);
+ 
+ 	e1000e_pm_freeze(dev);
+@@ -7014,6 +7027,8 @@ static __maybe_unused int e1000e_pm_resume(struct device *dev)
+ 	struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
+ 	struct e1000_adapter *adapter = netdev_priv(netdev);
+ 	struct pci_dev *pdev = to_pci_dev(dev);
++	struct e1000_hw *hw = &adapter->hw;
++	u16 phy_data;
+ 	int rc;
+ 
+ 	/* Introduce S0ix implementation */
+@@ -7024,6 +7039,17 @@ static __maybe_unused int e1000e_pm_resume(struct device *dev)
+ 	if (rc)
+ 		return rc;
+ 
++	if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
++	    hw->mac.type >= e1000_pch_adp) {
++		/* Unmask OEM Bits / Gig Disable / Restart AN 772_26[12] = 0 */
++		e1e_rphy(hw, I217_MEMPWR, &phy_data);
++		phy_data &= ~I217_MEMPWR_MOEM;
++		e1e_wphy(hw, I217_MEMPWR, phy_data);
++
++		/* Enable LCD reset */
++		hw->phy.reset_disable = false;
++	}
++
+ 	return e1000e_pm_thaw(dev);
+ }
+ 
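The e1000e suspend/resume hunks above are a symmetric read-modify-write of one PHY register bit: bit 12 of 772_26 (I217_MEMPWR_MOEM, 0x1000) is set on suspend and cleared on resume, leaving the other bits intact. A sketch of just the bit arithmetic, with the real register access (e1e_rphy/e1e_wphy) abstracted away:

```c
#include <assert.h>
#include <stdint.h>

#define I217_MEMPWR_MOEM 0x1000	/* bit 12 of PHY register 772_26 */

/* Suspend path: set the MOEM mask bit, preserving all other bits. */
static uint16_t moem_mask(uint16_t phy_data)
{
	return (uint16_t)(phy_data | I217_MEMPWR_MOEM);
}

/* Resume path: clear it again. */
static uint16_t moem_unmask(uint16_t phy_data)
{
	return (uint16_t)(phy_data & ~I217_MEMPWR_MOEM);
}
```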
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 3789269ce741d..9a122aea69793 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -188,6 +188,10 @@ enum iavf_state_t {
+ 	__IAVF_RUNNING,		/* opened, working */
+ };
+ 
++enum iavf_critical_section_t {
++	__IAVF_IN_REMOVE_TASK,	/* device being removed */
++};
++
+ #define IAVF_CLOUD_FIELD_OMAC		0x01
+ #define IAVF_CLOUD_FIELD_IMAC		0x02
+ #define IAVF_CLOUD_FIELD_IVLAN	0x04
+@@ -233,7 +237,6 @@ struct iavf_adapter {
+ 	struct list_head mac_filter_list;
+ 	struct mutex crit_lock;
+ 	struct mutex client_lock;
+-	struct mutex remove_lock;
+ 	/* Lock to protect accesses to MAC and VLAN lists */
+ 	spinlock_t mac_vlan_list_lock;
+ 	char misc_vector_name[IFNAMSIZ + 9];
+@@ -271,6 +274,7 @@ struct iavf_adapter {
+ #define IAVF_FLAG_LEGACY_RX			BIT(15)
+ #define IAVF_FLAG_REINIT_ITR_NEEDED		BIT(16)
+ #define IAVF_FLAG_QUEUES_DISABLED		BIT(17)
++#define IAVF_FLAG_SETUP_NETDEV_FEATURES		BIT(18)
+ /* duplicates for common code */
+ #define IAVF_FLAG_DCB_ENABLED			0
+ 	/* flags for admin queue service task */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index e4439b0955338..138db07bdfa8e 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -302,8 +302,9 @@ static irqreturn_t iavf_msix_aq(int irq, void *data)
+ 	rd32(hw, IAVF_VFINT_ICR01);
+ 	rd32(hw, IAVF_VFINT_ICR0_ENA1);
+ 
+-	/* schedule work on the private workqueue */
+-	queue_work(iavf_wq, &adapter->adminq_task);
++	if (adapter->state != __IAVF_REMOVE)
++		/* schedule work on the private workqueue */
++		queue_work(iavf_wq, &adapter->adminq_task);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -1072,8 +1073,7 @@ void iavf_down(struct iavf_adapter *adapter)
+ 		rss->state = IAVF_ADV_RSS_DEL_REQUEST;
+ 	spin_unlock_bh(&adapter->adv_rss_lock);
+ 
+-	if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) &&
+-	    adapter->state != __IAVF_RESETTING) {
++	if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) {
+ 		/* cancel any current operation */
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ 		/* Schedule operations to close down the HW. Don't wait
+@@ -1981,17 +1981,22 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	u32 reg_val;
+ 
+-	if (!mutex_trylock(&adapter->crit_lock))
++	if (!mutex_trylock(&adapter->crit_lock)) {
++		if (adapter->state == __IAVF_REMOVE)
++			return;
++
+ 		goto restart_watchdog;
++	}
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ 		iavf_change_state(adapter, __IAVF_COMM_FAILED);
+ 
+-	if (adapter->flags & IAVF_FLAG_RESET_NEEDED &&
+-	    adapter->state != __IAVF_RESETTING) {
+-		iavf_change_state(adapter, __IAVF_RESETTING);
++	if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
++		mutex_unlock(&adapter->crit_lock);
++		queue_work(iavf_wq, &adapter->reset_task);
++		return;
+ 	}
+ 
+ 	switch (adapter->state) {
+@@ -2014,6 +2019,15 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 				   msecs_to_jiffies(1));
+ 		return;
+ 	case __IAVF_INIT_FAILED:
++		if (test_bit(__IAVF_IN_REMOVE_TASK,
++			     &adapter->crit_section)) {
++			/* Do not update the state and do not reschedule
++			 * watchdog task, iavf_remove should handle this state
++			 * as it can loop forever
++			 */
++			mutex_unlock(&adapter->crit_lock);
++			return;
++		}
+ 		if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
+ 			dev_err(&adapter->pdev->dev,
+ 				"Failed to communicate with PF; waiting before retry\n");
+@@ -2030,6 +2044,17 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ);
+ 		return;
+ 	case __IAVF_COMM_FAILED:
++		if (test_bit(__IAVF_IN_REMOVE_TASK,
++			     &adapter->crit_section)) {
++			/* Set state to __IAVF_INIT_FAILED and perform remove
++			 * steps. Remove IAVF_FLAG_PF_COMMS_FAILED so the task
++			 * doesn't bring the state back to __IAVF_COMM_FAILED.
++			 */
++			iavf_change_state(adapter, __IAVF_INIT_FAILED);
++			adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
++			mutex_unlock(&adapter->crit_lock);
++			return;
++		}
+ 		reg_val = rd32(hw, IAVF_VFGEN_RSTAT) &
+ 			  IAVF_VFGEN_RSTAT_VFR_STATE_MASK;
+ 		if (reg_val == VIRTCHNL_VFR_VFACTIVE ||
+@@ -2099,7 +2124,8 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 	schedule_delayed_work(&adapter->client_task, msecs_to_jiffies(5));
+ 	mutex_unlock(&adapter->crit_lock);
+ restart_watchdog:
+-	queue_work(iavf_wq, &adapter->adminq_task);
++	if (adapter->state >= __IAVF_DOWN)
++		queue_work(iavf_wq, &adapter->adminq_task);
+ 	if (adapter->aq_required)
+ 		queue_delayed_work(iavf_wq, &adapter->watchdog_task,
+ 				   msecs_to_jiffies(20));
+@@ -2193,13 +2219,13 @@ static void iavf_reset_task(struct work_struct *work)
+ 	/* When device is being removed it doesn't make sense to run the reset
+ 	 * task, just return in such a case.
+ 	 */
+-	if (mutex_is_locked(&adapter->remove_lock))
+-		return;
++	if (!mutex_trylock(&adapter->crit_lock)) {
++		if (adapter->state != __IAVF_REMOVE)
++			queue_work(iavf_wq, &adapter->reset_task);
+ 
+-	if (iavf_lock_timeout(&adapter->crit_lock, 200)) {
+-		schedule_work(&adapter->reset_task);
+ 		return;
+ 	}
++
+ 	while (!mutex_trylock(&adapter->client_lock))
+ 		usleep_range(500, 1000);
+ 	if (CLIENT_ENABLED(adapter)) {
+@@ -2254,6 +2280,7 @@ static void iavf_reset_task(struct work_struct *work)
+ 			reg_val);
+ 		iavf_disable_vf(adapter);
+ 		mutex_unlock(&adapter->client_lock);
++		mutex_unlock(&adapter->crit_lock);
+ 		return; /* Do not attempt to reinit. It's dead, Jim. */
+ 	}
+ 
+@@ -2262,8 +2289,7 @@ continue_reset:
+ 	 * ndo_open() returning, so we can't assume it means all our open
+ 	 * tasks have finished, since we're not holding the rtnl_lock here.
+ 	 */
+-	running = ((adapter->state == __IAVF_RUNNING) ||
+-		   (adapter->state == __IAVF_RESETTING));
++	running = adapter->state == __IAVF_RUNNING;
+ 
+ 	if (running) {
+ 		netdev->flags &= ~IFF_UP;
+@@ -2411,13 +2437,19 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ 		goto out;
+ 
++	if (!mutex_trylock(&adapter->crit_lock)) {
++		if (adapter->state == __IAVF_REMOVE)
++			return;
++
++		queue_work(iavf_wq, &adapter->adminq_task);
++		goto out;
++	}
++
+ 	event.buf_len = IAVF_MAX_AQ_BUF_SIZE;
+ 	event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);
+ 	if (!event.msg_buf)
+ 		goto out;
+ 
+-	if (iavf_lock_timeout(&adapter->crit_lock, 200))
+-		goto freedom;
+ 	do {
+ 		ret = iavf_clean_arq_element(hw, &event, &pending);
+ 		v_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
+@@ -2433,6 +2465,18 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	} while (pending);
+ 	mutex_unlock(&adapter->crit_lock);
+ 
++	if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES)) {
++		if (adapter->netdev_registered ||
++		    !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) {
++			struct net_device *netdev = adapter->netdev;
++
++			rtnl_lock();
++			netdev_update_features(netdev);
++			rtnl_unlock();
++		}
++
++		adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES;
++	}
+ 	if ((adapter->flags &
+ 	     (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) ||
+ 	    adapter->state == __IAVF_RESETTING)
+@@ -3385,11 +3429,12 @@ static int iavf_close(struct net_device *netdev)
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
+ 	int status;
+ 
+-	if (adapter->state <= __IAVF_DOWN_PENDING)
+-		return 0;
++	mutex_lock(&adapter->crit_lock);
+ 
+-	while (!mutex_trylock(&adapter->crit_lock))
+-		usleep_range(500, 1000);
++	if (adapter->state <= __IAVF_DOWN_PENDING) {
++		mutex_unlock(&adapter->crit_lock);
++		return 0;
++	}
+ 
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 	if (CLIENT_ENABLED(adapter))
+@@ -3436,8 +3481,11 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
+ 		iavf_notify_client_l2_params(&adapter->vsi);
+ 		adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
+ 	}
+-	adapter->flags |= IAVF_FLAG_RESET_NEEDED;
+-	queue_work(iavf_wq, &adapter->reset_task);
++
++	if (netif_running(netdev)) {
++		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
++		queue_work(iavf_wq, &adapter->reset_task);
++	}
+ 
+ 	return 0;
+ }
+@@ -3850,7 +3898,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	 */
+ 	mutex_init(&adapter->crit_lock);
+ 	mutex_init(&adapter->client_lock);
+-	mutex_init(&adapter->remove_lock);
+ 	mutex_init(&hw->aq.asq_mutex);
+ 	mutex_init(&hw->aq.arq_mutex);
+ 
+@@ -3966,7 +4013,6 @@ static int __maybe_unused iavf_resume(struct device *dev_d)
+ static void iavf_remove(struct pci_dev *pdev)
+ {
+ 	struct iavf_adapter *adapter = iavf_pdev_to_adapter(pdev);
+-	enum iavf_state_t prev_state = adapter->last_state;
+ 	struct net_device *netdev = adapter->netdev;
+ 	struct iavf_fdir_fltr *fdir, *fdirtmp;
+ 	struct iavf_vlan_filter *vlf, *vlftmp;
+@@ -3975,14 +4021,30 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	struct iavf_cloud_filter *cf, *cftmp;
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	int err;
+-	/* Indicate we are in remove and not to run reset_task */
+-	mutex_lock(&adapter->remove_lock);
+-	cancel_work_sync(&adapter->reset_task);
++
++	set_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section);
++	/* Wait until port initialization is complete.
++	 * There are flows where register/unregister netdev may race.
++	 */
++	while (1) {
++		mutex_lock(&adapter->crit_lock);
++		if (adapter->state == __IAVF_RUNNING ||
++		    adapter->state == __IAVF_DOWN ||
++		    adapter->state == __IAVF_INIT_FAILED) {
++			mutex_unlock(&adapter->crit_lock);
++			break;
++		}
++
++		mutex_unlock(&adapter->crit_lock);
++		usleep_range(500, 1000);
++	}
+ 	cancel_delayed_work_sync(&adapter->watchdog_task);
+-	cancel_delayed_work_sync(&adapter->client_task);
++
+ 	if (adapter->netdev_registered) {
+-		unregister_netdev(netdev);
++		rtnl_lock();
++		unregister_netdevice(netdev);
+ 		adapter->netdev_registered = false;
++		rtnl_unlock();
+ 	}
+ 	if (CLIENT_ALLOWED(adapter)) {
+ 		err = iavf_lan_del_device(adapter);
+@@ -3991,6 +4053,10 @@ static void iavf_remove(struct pci_dev *pdev)
+ 				 err);
+ 	}
+ 
++	mutex_lock(&adapter->crit_lock);
++	dev_info(&adapter->pdev->dev, "Remove device\n");
++	iavf_change_state(adapter, __IAVF_REMOVE);
++
+ 	iavf_request_reset(adapter);
+ 	msleep(50);
+ 	/* If the FW isn't responding, kick it once, but only once. */
+@@ -3998,36 +4064,24 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		iavf_request_reset(adapter);
+ 		msleep(50);
+ 	}
+-	if (iavf_lock_timeout(&adapter->crit_lock, 5000))
+-		dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
+ 
++	iavf_misc_irq_disable(adapter);
+ 	/* Shut down all the garbage mashers on the detention level */
+-	iavf_change_state(adapter, __IAVF_REMOVE);
++	cancel_work_sync(&adapter->reset_task);
++	cancel_delayed_work_sync(&adapter->watchdog_task);
++	cancel_work_sync(&adapter->adminq_task);
++	cancel_delayed_work_sync(&adapter->client_task);
++
+ 	adapter->aq_required = 0;
+ 	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 
+ 	iavf_free_all_tx_resources(adapter);
+ 	iavf_free_all_rx_resources(adapter);
+-	iavf_misc_irq_disable(adapter);
+ 	iavf_free_misc_irq(adapter);
+ 
+-	/* In case we enter iavf_remove from erroneous state, free traffic irqs
+-	 * here, so as to not cause a kernel crash, when calling
+-	 * iavf_reset_interrupt_capability.
+-	 */
+-	if ((adapter->last_state == __IAVF_RESETTING &&
+-	     prev_state != __IAVF_DOWN) ||
+-	    (adapter->last_state == __IAVF_RUNNING &&
+-	     !(netdev->flags & IFF_UP)))
+-		iavf_free_traffic_irqs(adapter);
+-
+ 	iavf_reset_interrupt_capability(adapter);
+ 	iavf_free_q_vectors(adapter);
+ 
+-	cancel_delayed_work_sync(&adapter->watchdog_task);
+-
+-	cancel_work_sync(&adapter->adminq_task);
+-
+ 	iavf_free_rss(adapter);
+ 
+ 	if (hw->aq.asq.count)
+@@ -4039,8 +4093,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	mutex_destroy(&adapter->client_lock);
+ 	mutex_unlock(&adapter->crit_lock);
+ 	mutex_destroy(&adapter->crit_lock);
+-	mutex_unlock(&adapter->remove_lock);
+-	mutex_destroy(&adapter->remove_lock);
+ 
+ 	iounmap(hw->hw_addr);
+ 	pci_release_regions(pdev);
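The reordering in iavf_remove() above follows a general quiesce rule: disable the event source (iavf_misc_irq_disable) *before* cancelling the worker tasks, otherwise a late interrupt can requeue work after the cancel. A simplified single-threaded model of that ordering, with atomics standing in for the IRQ path (all names here are invented for illustration):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool source_enabled = true;
static atomic_int work_items = 0;

/* Would run from interrupt context; a no-op once the source is off. */
static void irq_handler(void)
{
	if (atomic_load(&source_enabled))
		atomic_fetch_add(&work_items, 1);	/* queue_work(...) */
}

static void remove_device(void)
{
	atomic_store(&source_enabled, false);	/* 1. cut off the source  */
	atomic_store(&work_items, 0);		/* 2. cancel pending work */
	irq_handler();				/* late IRQ queues nothing */
}
```

Cancelling first and disabling second would leave a window in which step 2's work counter could go back up, which is the bug shape the hunk removes.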
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index d60bf7c212006..d3da65d24bd62 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -1752,19 +1752,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 
+ 		spin_unlock_bh(&adapter->mac_vlan_list_lock);
+ 		iavf_process_config(adapter);
+-
+-		/* unlock crit_lock before acquiring rtnl_lock as other
+-		 * processes holding rtnl_lock could be waiting for the same
+-		 * crit_lock
+-		 */
+-		mutex_unlock(&adapter->crit_lock);
+-		rtnl_lock();
+-		netdev_update_features(adapter->netdev);
+-		rtnl_unlock();
+-		if (iavf_lock_timeout(&adapter->crit_lock, 10000))
+-			dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n",
+-				 __FUNCTION__);
+-
++		adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES;
+ 		}
+ 		break;
+ 	case VIRTCHNL_OP_ENABLE_QUEUES:
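The iavf_virtchnl.c hunk above replaces an inline unlock/relock dance (drop crit_lock, take rtnl_lock, update features, reacquire crit_lock) with a deferral flag: the completion handler only sets IAVF_FLAG_SETUP_NETDEV_FEATURES, and the adminq task applies it later outside crit_lock. A sketch of that flag hand-off; the flag value matches the patch (BIT(18)), but the surrounding structure is a simplification:

```c
#include <assert.h>

#define IAVF_FLAG_SETUP_NETDEV_FEATURES 0x40000	/* BIT(18) */

struct vf {
	unsigned int flags;
	int features_updates;	/* stand-in for netdev_update_features() */
};

/* Completion handler: just record that an update is needed. */
static void virtchnl_get_offloads(struct vf *vf)
{
	vf->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES;
}

/* Adminq task: apply the deferred update exactly once, then clear it.
 * In the driver this is where rtnl_lock()/rtnl_unlock() bracket the call. */
static void adminq_task(struct vf *vf)
{
	if (vf->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) {
		vf->features_updates++;
		vf->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES;
	}
}
```

Deferring sidesteps the ABBA hazard: other tasks take rtnl_lock then crit_lock, so releasing crit_lock mid-handler just to grab rtnl_lock invited both deadlock and the state changing underneath the handler.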
+diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c
+index 5cad31c3c7b09..40dbf4b432345 100644
+--- a/drivers/net/ethernet/intel/igc/igc_phy.c
++++ b/drivers/net/ethernet/intel/igc/igc_phy.c
+@@ -746,8 +746,6 @@ s32 igc_write_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 data)
+ 		if (ret_val)
+ 			return ret_val;
+ 		ret_val = igc_write_phy_reg_mdic(hw, offset, data);
+-		if (ret_val)
+-			return ret_val;
+ 		hw->phy.ops.release(hw);
+ 	} else {
+ 		ret_val = igc_write_xmdio_reg(hw, (u16)offset, dev_addr,
+@@ -779,8 +777,6 @@ s32 igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data)
+ 		if (ret_val)
+ 			return ret_val;
+ 		ret_val = igc_read_phy_reg_mdic(hw, offset, data);
+-		if (ret_val)
+-			return ret_val;
+ 		hw->phy.ops.release(hw);
+ 	} else {
+ 		ret_val = igc_read_xmdio_reg(hw, (u16)offset, dev_addr,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index db2bc58dfcfd0..666ff2c07ab90 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -390,12 +390,14 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+ 	u32 cmd_type;
+ 
+ 	while (budget-- > 0) {
+-		if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
+-		    !netif_carrier_ok(xdp_ring->netdev)) {
++		if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
+ 			work_done = false;
+ 			break;
+ 		}
+ 
++		if (!netif_carrier_ok(xdp_ring->netdev))
++			break;
++
+ 		if (!xsk_tx_peek_desc(pool, &desc))
+ 			break;
+ 
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c
+index 4ce490a25f332..8e56ffa1c4f7a 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c
+@@ -58,16 +58,6 @@ int sparx5_vlan_vid_add(struct sparx5_port *port, u16 vid, bool pvid,
+ 	struct sparx5 *sparx5 = port->sparx5;
+ 	int ret;
+ 
+-	/* Make the port a member of the VLAN */
+-	set_bit(port->portno, sparx5->vlan_mask[vid]);
+-	ret = sparx5_vlant_set_mask(sparx5, vid);
+-	if (ret)
+-		return ret;
+-
+-	/* Default ingress vlan classification */
+-	if (pvid)
+-		port->pvid = vid;
+-
+ 	/* Untagged egress vlan classification */
+ 	if (untagged && port->vid != vid) {
+ 		if (port->vid) {
+@@ -79,6 +69,16 @@ int sparx5_vlan_vid_add(struct sparx5_port *port, u16 vid, bool pvid,
+ 		port->vid = vid;
+ 	}
+ 
++	/* Make the port a member of the VLAN */
++	set_bit(port->portno, sparx5->vlan_mask[vid]);
++	ret = sparx5_vlant_set_mask(sparx5, vid);
++	if (ret)
++		return ret;
++
++	/* Default ingress vlan classification */
++	if (pvid)
++		port->pvid = vid;
++
+ 	sparx5_vlan_port_apply(sparx5, port);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
+index 32161a56726c1..2881f5b2b5f49 100644
+--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
++++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
+@@ -2285,18 +2285,18 @@ static int __init sxgbe_cmdline_opt(char *str)
+ 	char *opt;
+ 
+ 	if (!str || !*str)
+-		return -EINVAL;
++		return 1;
+ 	while ((opt = strsep(&str, ",")) != NULL) {
+ 		if (!strncmp(opt, "eee_timer:", 10)) {
+ 			if (kstrtoint(opt + 10, 0, &eee_timer))
+ 				goto err;
+ 		}
+ 	}
+-	return 0;
++	return 1;
+ 
+ err:
+ 	pr_err("%s: ERROR broken module parameter conversion\n", __func__);
+-	return -EINVAL;
++	return 1;
+ }
+ 
+ __setup("sxgbeeth=", sxgbe_cmdline_opt);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index 873b9e3e5da25..05b5371ca036b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -334,8 +334,8 @@ void stmmac_set_ethtool_ops(struct net_device *netdev);
+ int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags);
+ void stmmac_ptp_register(struct stmmac_priv *priv);
+ void stmmac_ptp_unregister(struct stmmac_priv *priv);
+-int stmmac_open(struct net_device *dev);
+-int stmmac_release(struct net_device *dev);
++int stmmac_xdp_open(struct net_device *dev);
++void stmmac_xdp_release(struct net_device *dev);
+ int stmmac_resume(struct device *dev);
+ int stmmac_suspend(struct device *dev);
+ int stmmac_dvr_remove(struct device *dev);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index f4015579e8adc..e2468c3a9a437 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2272,6 +2272,23 @@ static void stmmac_stop_tx_dma(struct stmmac_priv *priv, u32 chan)
+ 	stmmac_stop_tx(priv, priv->ioaddr, chan);
+ }
+ 
++static void stmmac_enable_all_dma_irq(struct stmmac_priv *priv)
++{
++	u32 rx_channels_count = priv->plat->rx_queues_to_use;
++	u32 tx_channels_count = priv->plat->tx_queues_to_use;
++	u32 dma_csr_ch = max(rx_channels_count, tx_channels_count);
++	u32 chan;
++
++	for (chan = 0; chan < dma_csr_ch; chan++) {
++		struct stmmac_channel *ch = &priv->channel[chan];
++		unsigned long flags;
++
++		spin_lock_irqsave(&ch->lock, flags);
++		stmmac_enable_dma_irq(priv, priv->ioaddr, chan, 1, 1);
++		spin_unlock_irqrestore(&ch->lock, flags);
++	}
++}
++
+ /**
+  * stmmac_start_all_dma - start all RX and TX DMA channels
+  * @priv: driver private structure
+@@ -2911,8 +2928,10 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 		stmmac_axi(priv, priv->ioaddr, priv->plat->axi);
+ 
+ 	/* DMA CSR Channel configuration */
+-	for (chan = 0; chan < dma_csr_ch; chan++)
++	for (chan = 0; chan < dma_csr_ch; chan++) {
+ 		stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
++		stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 1);
++	}
+ 
+ 	/* DMA RX Channel Configuration */
+ 	for (chan = 0; chan < rx_channels_count; chan++) {
+@@ -3679,7 +3698,7 @@ static int stmmac_request_irq(struct net_device *dev)
+  *  0 on success and an appropriate (-)ve integer as defined in errno.h
+  *  file on failure.
+  */
+-int stmmac_open(struct net_device *dev)
++static int stmmac_open(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	int mode = priv->plat->phy_interface;
+@@ -3768,6 +3787,7 @@ int stmmac_open(struct net_device *dev)
+ 
+ 	stmmac_enable_all_queues(priv);
+ 	netif_tx_start_all_queues(priv->dev);
++	stmmac_enable_all_dma_irq(priv);
+ 
+ 	return 0;
+ 
+@@ -3803,7 +3823,7 @@ static void stmmac_fpe_stop_wq(struct stmmac_priv *priv)
+  *  Description:
+  *  This is the stop entry point of the driver.
+  */
+-int stmmac_release(struct net_device *dev)
++static int stmmac_release(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 	u32 chan;
+@@ -6473,6 +6493,143 @@ void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue)
+ 	spin_unlock_irqrestore(&ch->lock, flags);
+ }
+ 
++void stmmac_xdp_release(struct net_device *dev)
++{
++	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 chan;
++
++	/* Disable NAPI process */
++	stmmac_disable_all_queues(priv);
++
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		hrtimer_cancel(&priv->tx_queue[chan].txtimer);
++
++	/* Free the IRQ lines */
++	stmmac_free_irq(dev, REQ_IRQ_ERR_ALL, 0);
++
++	/* Stop TX/RX DMA channels */
++	stmmac_stop_all_dma(priv);
++
++	/* Release and free the Rx/Tx resources */
++	free_dma_desc_resources(priv);
++
++	/* Disable the MAC Rx/Tx */
++	stmmac_mac_set(priv, priv->ioaddr, false);
++
++	/* set trans_start so we don't get spurious
++	 * watchdogs during reset
++	 */
++	netif_trans_update(dev);
++	netif_carrier_off(dev);
++}
++
++int stmmac_xdp_open(struct net_device *dev)
++{
++	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 rx_cnt = priv->plat->rx_queues_to_use;
++	u32 tx_cnt = priv->plat->tx_queues_to_use;
++	u32 dma_csr_ch = max(rx_cnt, tx_cnt);
++	struct stmmac_rx_queue *rx_q;
++	struct stmmac_tx_queue *tx_q;
++	u32 buf_size;
++	bool sph_en;
++	u32 chan;
++	int ret;
++
++	ret = alloc_dma_desc_resources(priv);
++	if (ret < 0) {
++		netdev_err(dev, "%s: DMA descriptors allocation failed\n",
++			   __func__);
++		goto dma_desc_error;
++	}
++
++	ret = init_dma_desc_rings(dev, GFP_KERNEL);
++	if (ret < 0) {
++		netdev_err(dev, "%s: DMA descriptors initialization failed\n",
++			   __func__);
++		goto init_error;
++	}
++
++	/* DMA CSR Channel configuration */
++	for (chan = 0; chan < dma_csr_ch; chan++) {
++		stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
++		stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 1);
++	}
++
++	/* Adjust Split header */
++	sph_en = (priv->hw->rx_csum > 0) && priv->sph;
++
++	/* DMA RX Channel Configuration */
++	for (chan = 0; chan < rx_cnt; chan++) {
++		rx_q = &priv->rx_queue[chan];
++
++		stmmac_init_rx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
++				    rx_q->dma_rx_phy, chan);
++
++		rx_q->rx_tail_addr = rx_q->dma_rx_phy +
++				     (rx_q->buf_alloc_num *
++				      sizeof(struct dma_desc));
++		stmmac_set_rx_tail_ptr(priv, priv->ioaddr,
++				       rx_q->rx_tail_addr, chan);
++
++		if (rx_q->xsk_pool && rx_q->buf_alloc_num) {
++			buf_size = xsk_pool_get_rx_frame_size(rx_q->xsk_pool);
++			stmmac_set_dma_bfsize(priv, priv->ioaddr,
++					      buf_size,
++					      rx_q->queue_index);
++		} else {
++			stmmac_set_dma_bfsize(priv, priv->ioaddr,
++					      priv->dma_buf_sz,
++					      rx_q->queue_index);
++		}
++
++		stmmac_enable_sph(priv, priv->ioaddr, sph_en, chan);
++	}
++
++	/* DMA TX Channel Configuration */
++	for (chan = 0; chan < tx_cnt; chan++) {
++		tx_q = &priv->tx_queue[chan];
++
++		stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
++				    tx_q->dma_tx_phy, chan);
++
++		tx_q->tx_tail_addr = tx_q->dma_tx_phy;
++		stmmac_set_tx_tail_ptr(priv, priv->ioaddr,
++				       tx_q->tx_tail_addr, chan);
++
++		hrtimer_init(&tx_q->txtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++		tx_q->txtimer.function = stmmac_tx_timer;
++	}
++
++	/* Enable the MAC Rx/Tx */
++	stmmac_mac_set(priv, priv->ioaddr, true);
++
++	/* Start Rx & Tx DMA Channels */
++	stmmac_start_all_dma(priv);
++
++	ret = stmmac_request_irq(dev);
++	if (ret)
++		goto irq_error;
++
++	/* Enable NAPI process*/
++	stmmac_enable_all_queues(priv);
++	netif_carrier_on(dev);
++	netif_tx_start_all_queues(dev);
++	stmmac_enable_all_dma_irq(priv);
++
++	return 0;
++
++irq_error:
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		hrtimer_cancel(&priv->tx_queue[chan].txtimer);
++
++	stmmac_hw_teardown(dev);
++init_error:
++	free_dma_desc_resources(priv);
++dma_desc_error:
++	return ret;
++}
++
+ int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+@@ -7337,6 +7494,7 @@ int stmmac_resume(struct device *dev)
+ 	stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw);
+ 
+ 	stmmac_enable_all_queues(priv);
++	stmmac_enable_all_dma_irq(priv);
+ 
+ 	mutex_unlock(&priv->lock);
+ 	rtnl_unlock();
+@@ -7353,7 +7511,7 @@ static int __init stmmac_cmdline_opt(char *str)
+ 	char *opt;
+ 
+ 	if (!str || !*str)
+-		return -EINVAL;
++		return 1;
+ 	while ((opt = strsep(&str, ",")) != NULL) {
+ 		if (!strncmp(opt, "debug:", 6)) {
+ 			if (kstrtoint(opt + 6, 0, &debug))
+@@ -7384,11 +7542,11 @@ static int __init stmmac_cmdline_opt(char *str)
+ 				goto err;
+ 		}
+ 	}
+-	return 0;
++	return 1;
+ 
+ err:
+ 	pr_err("%s: ERROR broken module parameter conversion", __func__);
+-	return -EINVAL;
++	return 1;
+ }
+ 
+ __setup("stmmaceth=", stmmac_cmdline_opt);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
+index 2a616c6f7cd0e..9d4d8c3dad0a3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
+@@ -119,7 +119,7 @@ int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog,
+ 
+ 	need_update = !!priv->xdp_prog != !!prog;
+ 	if (if_running && need_update)
+-		stmmac_release(dev);
++		stmmac_xdp_release(dev);
+ 
+ 	old_prog = xchg(&priv->xdp_prog, prog);
+ 	if (old_prog)
+@@ -129,7 +129,7 @@ int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog,
+ 	priv->sph = priv->sph_cap && !stmmac_xdp_is_enabled(priv);
+ 
+ 	if (if_running && need_update)
+-		stmmac_open(dev);
++		stmmac_xdp_open(dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig
+index d037682fb7adb..6782c2cbf542f 100644
+--- a/drivers/net/ipa/Kconfig
++++ b/drivers/net/ipa/Kconfig
+@@ -2,7 +2,9 @@ config QCOM_IPA
+ 	tristate "Qualcomm IPA support"
+ 	depends on NET && QCOM_SMEM
+ 	depends on ARCH_QCOM || COMPILE_TEST
++	depends on INTERCONNECT
+ 	depends on QCOM_RPROC_COMMON || (QCOM_RPROC_COMMON=n && COMPILE_TEST)
++	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+ 	select QCOM_MDT_LOADER if ARCH_QCOM
+ 	select QCOM_SCM
+ 	select QCOM_QMI_HELPERS
+diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
+index 82bb5ed94c485..c0b8b4aa78f37 100644
+--- a/drivers/net/usb/cdc_mbim.c
++++ b/drivers/net/usb/cdc_mbim.c
+@@ -659,6 +659,11 @@ static const struct usb_device_id mbim_devs[] = {
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
+ 	},
+ 
++	/* Telit FN990 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1071, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
++	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
++	},
++
+ 	/* default entry */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index 64100e73b5bc6..a0d6194f8a52a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -5,6 +5,7 @@
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
+  */
+ #include <linux/vmalloc.h>
++#include <linux/err.h>
+ #include <linux/ieee80211.h>
+ #include <linux/netdevice.h>
+ 
+@@ -1849,7 +1850,6 @@ void iwl_mvm_sta_add_debugfs(struct ieee80211_hw *hw,
+ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
+ {
+ 	struct dentry *bcast_dir __maybe_unused;
+-	char buf[100];
+ 
+ 	spin_lock_init(&mvm->drv_stats_lock);
+ 
+@@ -1930,6 +1930,11 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
+ 	 * Create a symlink with mac80211. It will be removed when mac80211
+ 	 * exists (before the opmode exists which removes the target.)
+ 	 */
+-	snprintf(buf, 100, "../../%pd2", mvm->debugfs_dir->d_parent);
+-	debugfs_create_symlink("iwlwifi", mvm->hw->wiphy->debugfsdir, buf);
++	if (!IS_ERR(mvm->debugfs_dir)) {
++		char buf[100];
++
++		snprintf(buf, 100, "../../%pd2", mvm->debugfs_dir->d_parent);
++		debugfs_create_symlink("iwlwifi", mvm->hw->wiphy->debugfsdir,
++				       buf);
++	}
+ }
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 23219f3747f81..f7cfda9192de2 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -2336,6 +2336,15 @@ static void hw_scan_work(struct work_struct *work)
+ 			if (req->ie_len)
+ 				skb_put_data(probe, req->ie, req->ie_len);
+ 
++			if (!ieee80211_tx_prepare_skb(hwsim->hw,
++						      hwsim->hw_scan_vif,
++						      probe,
++						      hwsim->tmp_chan->band,
++						      NULL)) {
++				kfree_skb(probe);
++				continue;
++			}
++
+ 			local_bh_disable();
+ 			mac80211_hwsim_tx_frame(hwsim->hw, probe,
+ 						hwsim->tmp_chan);
+@@ -3770,6 +3779,10 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2,
+ 		}
+ 		txi->flags |= IEEE80211_TX_STAT_ACK;
+ 	}
++
++	if (hwsim_flags & HWSIM_TX_CTL_NO_ACK)
++		txi->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
++
+ 	ieee80211_tx_status_irqsafe(data2->hw, skb);
+ 	return 0;
+ out:
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index d514d96027a6f..039ca902c5bf2 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -842,6 +842,28 @@ static int xennet_close(struct net_device *dev)
+ 	return 0;
+ }
+ 
++static void xennet_destroy_queues(struct netfront_info *info)
++{
++	unsigned int i;
++
++	for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
++		struct netfront_queue *queue = &info->queues[i];
++
++		if (netif_running(info->netdev))
++			napi_disable(&queue->napi);
++		netif_napi_del(&queue->napi);
++	}
++
++	kfree(info->queues);
++	info->queues = NULL;
++}
++
++static void xennet_uninit(struct net_device *dev)
++{
++	struct netfront_info *np = netdev_priv(dev);
++	xennet_destroy_queues(np);
++}
++
+ static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
+ {
+ 	unsigned long flags;
+@@ -1611,6 +1633,7 @@ static int xennet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+ }
+ 
+ static const struct net_device_ops xennet_netdev_ops = {
++	.ndo_uninit          = xennet_uninit,
+ 	.ndo_open            = xennet_open,
+ 	.ndo_stop            = xennet_close,
+ 	.ndo_start_xmit      = xennet_start_xmit,
+@@ -2103,22 +2126,6 @@ error:
+ 	return err;
+ }
+ 
+-static void xennet_destroy_queues(struct netfront_info *info)
+-{
+-	unsigned int i;
+-
+-	for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
+-		struct netfront_queue *queue = &info->queues[i];
+-
+-		if (netif_running(info->netdev))
+-			napi_disable(&queue->napi);
+-		netif_napi_del(&queue->napi);
+-	}
+-
+-	kfree(info->queues);
+-	info->queues = NULL;
+-}
+-
+ 
+ 
+ static int xennet_create_page_pool(struct netfront_queue *queue)
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen4.c b/drivers/ntb/hw/intel/ntb_hw_gen4.c
+index fede05151f698..4081fc538ff45 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen4.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen4.c
+@@ -168,6 +168,18 @@ static enum ntb_topo gen4_ppd_topo(struct intel_ntb_dev *ndev, u32 ppd)
+ 	return NTB_TOPO_NONE;
+ }
+ 
++static enum ntb_topo spr_ppd_topo(struct intel_ntb_dev *ndev, u32 ppd)
++{
++	switch (ppd & SPR_PPD_TOPO_MASK) {
++	case SPR_PPD_TOPO_B2B_USD:
++		return NTB_TOPO_B2B_USD;
++	case SPR_PPD_TOPO_B2B_DSD:
++		return NTB_TOPO_B2B_DSD;
++	}
++
++	return NTB_TOPO_NONE;
++}
++
+ int gen4_init_dev(struct intel_ntb_dev *ndev)
+ {
+ 	struct pci_dev *pdev = ndev->ntb.pdev;
+@@ -183,7 +195,10 @@ int gen4_init_dev(struct intel_ntb_dev *ndev)
+ 	}
+ 
+ 	ppd1 = ioread32(ndev->self_mmio + GEN4_PPD1_OFFSET);
+-	ndev->ntb.topo = gen4_ppd_topo(ndev, ppd1);
++	if (pdev_is_ICX(pdev))
++		ndev->ntb.topo = gen4_ppd_topo(ndev, ppd1);
++	else if (pdev_is_SPR(pdev))
++		ndev->ntb.topo = spr_ppd_topo(ndev, ppd1);
+ 	dev_dbg(&pdev->dev, "ppd %#x topo %s\n", ppd1,
+ 		ntb_topo_string(ndev->ntb.topo));
+ 	if (ndev->ntb.topo == NTB_TOPO_NONE)
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen4.h b/drivers/ntb/hw/intel/ntb_hw_gen4.h
+index 3fcd3fdce9edf..f91323eaf5ce4 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen4.h
++++ b/drivers/ntb/hw/intel/ntb_hw_gen4.h
+@@ -49,10 +49,14 @@
+ #define GEN4_PPD_CLEAR_TRN		0x0001
+ #define GEN4_PPD_LINKTRN		0x0008
+ #define GEN4_PPD_CONN_MASK		0x0300
++#define SPR_PPD_CONN_MASK		0x0700
+ #define GEN4_PPD_CONN_B2B		0x0200
+ #define GEN4_PPD_DEV_MASK		0x1000
+ #define GEN4_PPD_DEV_DSD		0x1000
+ #define GEN4_PPD_DEV_USD		0x0000
++#define SPR_PPD_DEV_MASK		0x4000
++#define SPR_PPD_DEV_DSD 		0x4000
++#define SPR_PPD_DEV_USD 		0x0000
+ #define GEN4_LINK_CTRL_LINK_DISABLE	0x0010
+ 
+ #define GEN4_SLOTSTS			0xb05a
+@@ -62,6 +66,10 @@
+ #define GEN4_PPD_TOPO_B2B_USD	(GEN4_PPD_CONN_B2B | GEN4_PPD_DEV_USD)
+ #define GEN4_PPD_TOPO_B2B_DSD	(GEN4_PPD_CONN_B2B | GEN4_PPD_DEV_DSD)
+ 
++#define SPR_PPD_TOPO_MASK	(SPR_PPD_CONN_MASK | SPR_PPD_DEV_MASK)
++#define SPR_PPD_TOPO_B2B_USD	(GEN4_PPD_CONN_B2B | SPR_PPD_DEV_USD)
++#define SPR_PPD_TOPO_B2B_DSD	(GEN4_PPD_CONN_B2B | SPR_PPD_DEV_DSD)
++
+ #define GEN4_DB_COUNT			32
+ #define GEN4_DB_LINK			32
+ #define GEN4_DB_LINK_BIT		BIT_ULL(GEN4_DB_LINK)
+@@ -112,4 +120,12 @@ static inline int pdev_is_ICX(struct pci_dev *pdev)
+ 	return 0;
+ }
+ 
++static inline int pdev_is_SPR(struct pci_dev *pdev)
++{
++	if (pdev_is_gen4(pdev) &&
++	    pdev->revision > PCI_DEVICE_REVISION_ICX_MAX)
++		return 1;
++	return 0;
++}
++
+ #endif
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index 862c84efb718f..ce3f9ea415117 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -36,6 +36,13 @@
+ #include "../core.h"
+ #include "pinctrl-sunxi.h"
+ 
++/*
++ * These lock classes tell lockdep that GPIO IRQs are in a different
++ * category than their parents, so it won't report false recursion.
++ */
++static struct lock_class_key sunxi_pinctrl_irq_lock_class;
++static struct lock_class_key sunxi_pinctrl_irq_request_class;
++
+ static struct irq_chip sunxi_pinctrl_edge_irq_chip;
+ static struct irq_chip sunxi_pinctrl_level_irq_chip;
+ 
+@@ -1551,6 +1558,8 @@ int sunxi_pinctrl_init_with_variant(struct platform_device *pdev,
+ 	for (i = 0; i < (pctl->desc->irq_banks * IRQ_PER_BANK); i++) {
+ 		int irqno = irq_create_mapping(pctl->domain, i);
+ 
++		irq_set_lockdep_class(irqno, &sunxi_pinctrl_irq_lock_class,
++				      &sunxi_pinctrl_irq_request_class);
+ 		irq_set_chip_and_handler(irqno, &sunxi_pinctrl_edge_irq_chip,
+ 					 handle_edge_irq);
+ 		irq_set_chip_data(irqno, pctl);
+diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c
+index 8c74733530e3d..11d0f829302bf 100644
+--- a/drivers/platform/x86/amd-pmc.c
++++ b/drivers/platform/x86/amd-pmc.c
+@@ -21,6 +21,7 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_qos.h>
+ #include <linux/rtc.h>
+ #include <linux/suspend.h>
+ #include <linux/seq_file.h>
+@@ -79,6 +80,9 @@
+ #define PMC_MSG_DELAY_MIN_US		50
+ #define RESPONSE_REGISTER_LOOP_MAX	20000
+ 
++/* QoS request for letting CPUs in idle states, but not the deepest */
++#define AMD_PMC_MAX_IDLE_STATE_LATENCY	3
++
+ #define SOC_SUBSYSTEM_IP_MAX	12
+ #define DELAY_MIN_US		2000
+ #define DELAY_MAX_US		3000
+@@ -123,6 +127,7 @@ struct amd_pmc_dev {
+ 	u8 rev;
+ 	struct device *dev;
+ 	struct mutex lock; /* generic mutex lock */
++	struct pm_qos_request amd_pmc_pm_qos_req;
+ #if IS_ENABLED(CONFIG_DEBUG_FS)
+ 	struct dentry *dbgfs_dir;
+ #endif /* CONFIG_DEBUG_FS */
+@@ -459,6 +464,14 @@ static int amd_pmc_verify_czn_rtc(struct amd_pmc_dev *pdev, u32 *arg)
+ 	rc = rtc_alarm_irq_enable(rtc_device, 0);
+ 	dev_dbg(pdev->dev, "wakeup timer programmed for %lld seconds\n", duration);
+ 
++	/*
++	 * Prevent CPUs from getting into deep idle states while sending OS_HINT
++	 * which is otherwise generally safe to send when at least one of the CPUs
++	 * is not in deep idle states.
++	 */
++	cpu_latency_qos_update_request(&pdev->amd_pmc_pm_qos_req, AMD_PMC_MAX_IDLE_STATE_LATENCY);
++	wake_up_all_idle_cpus();
++
+ 	return rc;
+ }
+ 
+@@ -476,17 +489,24 @@ static int __maybe_unused amd_pmc_suspend(struct device *dev)
+ 	/* Activate CZN specific RTC functionality */
+ 	if (pdev->cpu_id == AMD_CPU_ID_CZN) {
+ 		rc = amd_pmc_verify_czn_rtc(pdev, &arg);
+-		if (rc < 0)
+-			return rc;
++		if (rc)
++			goto fail;
+ 	}
+ 
+ 	/* Dump the IdleMask before we send hint to SMU */
+ 	amd_pmc_idlemask_read(pdev, dev, NULL);
+ 	msg = amd_pmc_get_os_hint(pdev);
+ 	rc = amd_pmc_send_cmd(pdev, arg, NULL, msg, 0);
+-	if (rc)
++	if (rc) {
+ 		dev_err(pdev->dev, "suspend failed\n");
++		goto fail;
++	}
+ 
++	return 0;
++fail:
++	if (pdev->cpu_id == AMD_CPU_ID_CZN)
++		cpu_latency_qos_update_request(&pdev->amd_pmc_pm_qos_req,
++						PM_QOS_DEFAULT_VALUE);
+ 	return rc;
+ }
+ 
+@@ -507,7 +527,12 @@ static int __maybe_unused amd_pmc_resume(struct device *dev)
+ 	/* Dump the IdleMask to see the blockers */
+ 	amd_pmc_idlemask_read(pdev, dev, NULL);
+ 
+-	return 0;
++	/* Restore the QoS request back to defaults if it was set */
++	if (pdev->cpu_id == AMD_CPU_ID_CZN)
++		cpu_latency_qos_update_request(&pdev->amd_pmc_pm_qos_req,
++						PM_QOS_DEFAULT_VALUE);
++
++	return rc;
+ }
+ 
+ static const struct dev_pm_ops amd_pmc_pm_ops = {
+@@ -597,6 +622,7 @@ static int amd_pmc_probe(struct platform_device *pdev)
+ 	amd_pmc_get_smu_version(dev);
+ 	platform_set_drvdata(pdev, dev);
+ 	amd_pmc_dbgfs_register(dev);
++	cpu_latency_qos_add_request(&dev->amd_pmc_pm_qos_req, PM_QOS_DEFAULT_VALUE);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 0f1b5a7d2a89c..17ad5f0d13b2a 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -607,7 +607,7 @@ ptp_ocp_settime(struct ptp_clock_info *ptp_info, const struct timespec64 *ts)
+ }
+ 
+ static void
+-__ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u64 adj_val)
++__ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u32 adj_val)
+ {
+ 	u32 select, ctrl;
+ 
+@@ -615,7 +615,7 @@ __ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u64 adj_val)
+ 	iowrite32(OCP_SELECT_CLK_REG, &bp->reg->select);
+ 
+ 	iowrite32(adj_val, &bp->reg->offset_ns);
+-	iowrite32(adj_val & 0x7f, &bp->reg->offset_window_ns);
++	iowrite32(NSEC_PER_SEC, &bp->reg->offset_window_ns);
+ 
+ 	ctrl = OCP_CTRL_ADJUST_OFFSET | OCP_CTRL_ENABLE;
+ 	iowrite32(ctrl, &bp->reg->ctrl);
+@@ -624,6 +624,22 @@ __ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u64 adj_val)
+ 	iowrite32(select >> 16, &bp->reg->select);
+ }
+ 
++static void
++ptp_ocp_adjtime_coarse(struct ptp_ocp *bp, u64 delta_ns)
++{
++	struct timespec64 ts;
++	unsigned long flags;
++	int err;
++
++	spin_lock_irqsave(&bp->lock, flags);
++	err = __ptp_ocp_gettime_locked(bp, &ts, NULL);
++	if (likely(!err)) {
++		timespec64_add_ns(&ts, delta_ns);
++		__ptp_ocp_settime_locked(bp, &ts);
++	}
++	spin_unlock_irqrestore(&bp->lock, flags);
++}
++
+ static int
+ ptp_ocp_adjtime(struct ptp_clock_info *ptp_info, s64 delta_ns)
+ {
+@@ -631,6 +647,11 @@ ptp_ocp_adjtime(struct ptp_clock_info *ptp_info, s64 delta_ns)
+ 	unsigned long flags;
+ 	u32 adj_ns, sign;
+ 
++	if (delta_ns > NSEC_PER_SEC || -delta_ns > NSEC_PER_SEC) {
++		ptp_ocp_adjtime_coarse(bp, delta_ns);
++		return 0;
++	}
++
+ 	sign = delta_ns < 0 ? BIT(31) : 0;
+ 	adj_ns = sign ? -delta_ns : delta_ns;
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 86aa4141efa92..d2553970a67ba 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -6014,9 +6014,8 @@ core_initcall(regulator_init);
+ static int regulator_late_cleanup(struct device *dev, void *data)
+ {
+ 	struct regulator_dev *rdev = dev_to_rdev(dev);
+-	const struct regulator_ops *ops = rdev->desc->ops;
+ 	struct regulation_constraints *c = rdev->constraints;
+-	int enabled, ret;
++	int ret;
+ 
+ 	if (c && c->always_on)
+ 		return 0;
+@@ -6029,14 +6028,8 @@ static int regulator_late_cleanup(struct device *dev, void *data)
+ 	if (rdev->use_count)
+ 		goto unlock;
+ 
+-	/* If we can't read the status assume it's always on. */
+-	if (ops->is_enabled)
+-		enabled = ops->is_enabled(rdev);
+-	else
+-		enabled = 1;
+-
+-	/* But if reading the status failed, assume that it's off. */
+-	if (enabled <= 0)
++	/* If reading the status failed, assume that it's off. */
++	if (_regulator_is_enabled(rdev) <= 0)
+ 		goto unlock;
+ 
+ 	if (have_full_constraints()) {
+diff --git a/drivers/soc/fsl/guts.c b/drivers/soc/fsl/guts.c
+index 072473a16f4de..5ed2fc1c53a0e 100644
+--- a/drivers/soc/fsl/guts.c
++++ b/drivers/soc/fsl/guts.c
+@@ -28,7 +28,6 @@ struct fsl_soc_die_attr {
+ static struct guts *guts;
+ static struct soc_device_attribute soc_dev_attr;
+ static struct soc_device *soc_dev;
+-static struct device_node *root;
+ 
+ 
+ /* SoC die attribute definition for QorIQ platform */
+@@ -138,7 +137,7 @@ static u32 fsl_guts_get_svr(void)
+ 
+ static int fsl_guts_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
++	struct device_node *root, *np = pdev->dev.of_node;
+ 	struct device *dev = &pdev->dev;
+ 	const struct fsl_soc_die_attr *soc_die;
+ 	const char *machine;
+@@ -159,8 +158,14 @@ static int fsl_guts_probe(struct platform_device *pdev)
+ 	root = of_find_node_by_path("/");
+ 	if (of_property_read_string(root, "model", &machine))
+ 		of_property_read_string_index(root, "compatible", 0, &machine);
+-	if (machine)
+-		soc_dev_attr.machine = machine;
++	if (machine) {
++		soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL);
++		if (!soc_dev_attr.machine) {
++			of_node_put(root);
++			return -ENOMEM;
++		}
++	}
++	of_node_put(root);
+ 
+ 	svr = fsl_guts_get_svr();
+ 	soc_die = fsl_soc_die_match(svr, fsl_soc_die);
+@@ -195,7 +200,6 @@ static int fsl_guts_probe(struct platform_device *pdev)
+ static int fsl_guts_remove(struct platform_device *dev)
+ {
+ 	soc_device_unregister(soc_dev);
+-	of_node_put(root);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/fsl/qe/qe_io.c b/drivers/soc/fsl/qe/qe_io.c
+index e277c827bdf33..a5e2d0e5ab511 100644
+--- a/drivers/soc/fsl/qe/qe_io.c
++++ b/drivers/soc/fsl/qe/qe_io.c
+@@ -35,6 +35,8 @@ int par_io_init(struct device_node *np)
+ 	if (ret)
+ 		return ret;
+ 	par_io = ioremap(res.start, resource_size(&res));
++	if (!par_io)
++		return -ENOMEM;
+ 
+ 	if (!of_property_read_u32(np, "num-ports", &num_ports))
+ 		num_par_io_ports = num_ports;
+diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
+index 8176380b02e6e..4415f07e1fa97 100644
+--- a/drivers/soc/imx/gpcv2.c
++++ b/drivers/soc/imx/gpcv2.c
+@@ -382,7 +382,8 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
+ 	return 0;
+ 
+ out_clk_disable:
+-	clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
++	if (!domain->keep_clocks)
++		clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/thermal/thermal_netlink.c b/drivers/thermal/thermal_netlink.c
+index a16dd4d5d710e..73e68cce292e2 100644
+--- a/drivers/thermal/thermal_netlink.c
++++ b/drivers/thermal/thermal_netlink.c
+@@ -419,11 +419,12 @@ static int thermal_genl_cmd_tz_get_trip(struct param *p)
+ 	for (i = 0; i < tz->trips; i++) {
+ 
+ 		enum thermal_trip_type type;
+-		int temp, hyst;
++		int temp, hyst = 0;
+ 
+ 		tz->ops->get_trip_type(tz, i, &type);
+ 		tz->ops->get_trip_temp(tz, i, &temp);
+-		tz->ops->get_trip_hyst(tz, i, &hyst);
++		if (tz->ops->get_trip_hyst)
++			tz->ops->get_trip_hyst(tz, i, &hyst);
+ 
+ 		if (nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_TRIP_ID, i) ||
+ 		    nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_TRIP_TYPE, type) ||
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 2d3fbcbfaf108..93c2a5c956540 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -520,10 +520,22 @@ static void stm32_usart_transmit_chars(struct uart_port *port)
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	struct circ_buf *xmit = &port->state->xmit;
++	u32 isr;
++	int ret;
+ 
+ 	if (port->x_char) {
+ 		if (stm32_port->tx_dma_busy)
+ 			stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
++
++		/* Check that TDR is empty before filling FIFO */
++		ret =
++		readl_relaxed_poll_timeout_atomic(port->membase + ofs->isr,
++						  isr,
++						  (isr & USART_SR_TXE),
++						  10, 1000);
++		if (ret)
++			dev_warn(port->dev, "1 character may be erased\n");
++
+ 		writel_relaxed(port->x_char, port->membase + ofs->tdr);
+ 		port->x_char = 0;
+ 		port->icount.tx++;
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 3b58f4fc0a806..25c8809e0a38c 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -1826,8 +1826,9 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 	spin_lock_irq (&dev->lock);
+ 	value = -EINVAL;
+ 	if (dev->buf) {
++		spin_unlock_irq(&dev->lock);
+ 		kfree(kbuf);
+-		goto fail;
++		return value;
+ 	}
+ 	dev->buf = kbuf;
+ 
+@@ -1874,8 +1875,8 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 
+ 	value = usb_gadget_probe_driver(&gadgetfs_driver);
+ 	if (value != 0) {
+-		kfree (dev->buf);
+-		dev->buf = NULL;
++		spin_lock_irq(&dev->lock);
++		goto fail;
+ 	} else {
+ 		/* at this point "good" hardware has for the first time
+ 		 * let the USB the host see us.  alternatively, if users
+@@ -1892,6 +1893,9 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ 	return value;
+ 
+ fail:
++	dev->config = NULL;
++	dev->hs_config = NULL;
++	dev->dev = NULL;
+ 	spin_unlock_irq (&dev->lock);
+ 	pr_debug ("%s: %s fail %zd, %p\n", shortname, __func__, value, dev);
+ 	kfree (dev->buf);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index f8c7f26f1fbb3..d19762dc90fed 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1135,14 +1135,25 @@ out_free_interp:
+ 			 * is then page aligned.
+ 			 */
+ 			load_bias = ELF_PAGESTART(load_bias - vaddr);
+-		}
+ 
+-		/*
+-		 * Calculate the entire size of the ELF mapping (total_size).
+-		 * (Note that load_addr_set is set to true later once the
+-		 * initial mapping is performed.)
+-		 */
+-		if (!load_addr_set) {
++			/*
++			 * Calculate the entire size of the ELF mapping
++			 * (total_size), used for the initial mapping,
++			 * due to load_addr_set which is set to true later
++			 * once the initial mapping is performed.
++			 *
++			 * Note that this is only sensible when the LOAD
++			 * segments are contiguous (or overlapping). If
++			 * used for LOADs that are far apart, this would
++			 * cause the holes between LOADs to be mapped,
++			 * running the risk of having the mapping fail,
++			 * as it would be larger than the ELF file itself.
++			 *
++			 * As a result, only ET_DYN does this, since
++			 * some ET_EXEC (e.g. ia64) may have large virtual
++			 * memory holes between LOADs.
++			 *
++			 */
+ 			total_size = total_mapping_size(elf_phdata,
+ 							elf_ex->e_phnum);
+ 			if (!total_size) {
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 269094176b8b3..fd77eca085b41 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -601,6 +601,9 @@ enum {
+ 	/* Indicate whether there are any tree modification log users */
+ 	BTRFS_FS_TREE_MOD_LOG_USERS,
+ 
++	/* Indicate we have half completed snapshot deletions pending. */
++	BTRFS_FS_UNFINISHED_DROPS,
++
+ #if BITS_PER_LONG == 32
+ 	/* Indicate if we have error/warn message printed on 32bit systems */
+ 	BTRFS_FS_32BIT_ERROR,
+@@ -1110,8 +1113,15 @@ enum {
+ 	BTRFS_ROOT_HAS_LOG_TREE,
+ 	/* Qgroup flushing is in progress */
+ 	BTRFS_ROOT_QGROUP_FLUSHING,
++	/* This root has a drop operation that was started previously. */
++	BTRFS_ROOT_UNFINISHED_DROP,
+ };
+ 
++static inline void btrfs_wake_unfinished_drop(struct btrfs_fs_info *fs_info)
++{
++	clear_and_wake_up_bit(BTRFS_FS_UNFINISHED_DROPS, &fs_info->flags);
++}
++
+ /*
+  * Record swapped tree blocks of a subvolume tree for delayed subtree trace
+  * code. For detail check comment in fs/btrfs/qgroup.c.
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 5f0a879c10436..15ffb237a489f 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3665,6 +3665,10 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 
+ 	set_bit(BTRFS_FS_OPEN, &fs_info->flags);
+ 
++	/* Kick the cleaner thread so it'll start deleting snapshots. */
++	if (test_bit(BTRFS_FS_UNFINISHED_DROPS, &fs_info->flags))
++		wake_up_process(fs_info->cleaner_kthread);
++
+ clear_oneshot:
+ 	btrfs_clear_oneshot_options(fs_info);
+ 	return 0;
+@@ -4348,6 +4352,12 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ 	 */
+ 	kthread_park(fs_info->cleaner_kthread);
+ 
++	/*
++	 * If we had UNFINISHED_DROPS we could still be processing them, so
++	 * clear that bit and wake up relocation so it can stop.
++	 */
++	btrfs_wake_unfinished_drop(fs_info);
++
+ 	/* wait for the qgroup rescan worker to stop */
+ 	btrfs_qgroup_wait_for_completion(fs_info, false);
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 7b4ee1b2d5d83..852348189ecb1 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5604,6 +5604,7 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ 	int ret;
+ 	int level;
+ 	bool root_dropped = false;
++	bool unfinished_drop = false;
+ 
+ 	btrfs_debug(fs_info, "Drop subvolume %llu", root->root_key.objectid);
+ 
+@@ -5646,6 +5647,8 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ 	 * already dropped.
+ 	 */
+ 	set_bit(BTRFS_ROOT_DELETING, &root->state);
++	unfinished_drop = test_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state);
++
+ 	if (btrfs_disk_key_objectid(&root_item->drop_progress) == 0) {
+ 		level = btrfs_header_level(root->node);
+ 		path->nodes[level] = btrfs_lock_root_node(root);
+@@ -5820,6 +5823,13 @@ out_free:
+ 	kfree(wc);
+ 	btrfs_free_path(path);
+ out:
++	/*
++	 * We were an unfinished drop root, check to see if there are any
++	 * pending, and if not clear and wake up any waiters.
++	 */
++	if (!err && unfinished_drop)
++		btrfs_maybe_wake_unfinished_drop(fs_info);
++
+ 	/*
+ 	 * So if we need to stop dropping the snapshot for whatever reason we
+ 	 * need to make sure to add it back to the dead root list so that we
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 9234d96a7fd5c..5512d7028091f 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -6865,14 +6865,24 @@ static void assert_eb_page_uptodate(const struct extent_buffer *eb,
+ {
+ 	struct btrfs_fs_info *fs_info = eb->fs_info;
+ 
++	/*
++	 * If we are using the commit root we could potentially clear a page
++	 * Uptodate while we're using the extent buffer that we've previously
++	 * looked up.  We don't want to complain in this case, as the page was
++	 * valid before, we just didn't write it out.  Instead we want to catch
++	 * the case where we didn't actually read the block properly, which
++	 * would have !PageUptodate && !PageError, as we clear PageError before
++	 * reading.
++	 */
+ 	if (fs_info->sectorsize < PAGE_SIZE) {
+-		bool uptodate;
++		bool uptodate, error;
+ 
+ 		uptodate = btrfs_subpage_test_uptodate(fs_info, page,
+ 						       eb->start, eb->len);
+-		WARN_ON(!uptodate);
++		error = btrfs_subpage_test_error(fs_info, page, eb->start, eb->len);
++		WARN_ON(!uptodate && !error);
+ 	} else {
+-		WARN_ON(!PageUptodate(page));
++		WARN_ON(!PageUptodate(page) && !PageError(page));
+ 	}
+ }
+ 
+diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
+index 5a36add213053..c28ceddefae42 100644
+--- a/fs/btrfs/extent_map.c
++++ b/fs/btrfs/extent_map.c
+@@ -261,6 +261,7 @@ static void try_merge_map(struct extent_map_tree *tree, struct extent_map *em)
+ 			em->mod_len = (em->mod_len + em->mod_start) - merge->mod_start;
+ 			em->mod_start = merge->mod_start;
+ 			em->generation = max(em->generation, merge->generation);
++			set_bit(EXTENT_FLAG_MERGED, &em->flags);
+ 
+ 			rb_erase_cached(&merge->rb_node, &tree->map);
+ 			RB_CLEAR_NODE(&merge->rb_node);
+@@ -278,6 +279,7 @@ static void try_merge_map(struct extent_map_tree *tree, struct extent_map *em)
+ 		RB_CLEAR_NODE(&merge->rb_node);
+ 		em->mod_len = (merge->mod_start + merge->mod_len) - em->mod_start;
+ 		em->generation = max(em->generation, merge->generation);
++		set_bit(EXTENT_FLAG_MERGED, &em->flags);
+ 		free_extent_map(merge);
+ 	}
+ }
+diff --git a/fs/btrfs/extent_map.h b/fs/btrfs/extent_map.h
+index 8e217337dff90..d2fa32ffe3045 100644
+--- a/fs/btrfs/extent_map.h
++++ b/fs/btrfs/extent_map.h
+@@ -25,6 +25,8 @@ enum {
+ 	EXTENT_FLAG_FILLING,
+ 	/* filesystem extent mapping type */
+ 	EXTENT_FLAG_FS_MAPPING,
++	/* This em is merged from two or more physically adjacent ems */
++	EXTENT_FLAG_MERGED,
+ };
+ 
+ struct extent_map {
+@@ -40,6 +42,12 @@ struct extent_map {
+ 	u64 ram_bytes;
+ 	u64 block_start;
+ 	u64 block_len;
++
++	/*
++	 * Generation of the extent map, for merged em it's the highest
++	 * generation of all merged ems.
++	 * For non-merged extents, it's from btrfs_file_extent_item::generation.
++	 */
+ 	u64 generation;
+ 	unsigned long flags;
+ 	/* Used for chunk mappings, flag EXTENT_FLAG_FS_MAPPING must be set */
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 3be5735372ee8..cc957cce23a1a 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -61,8 +61,6 @@ struct btrfs_iget_args {
+ };
+ 
+ struct btrfs_dio_data {
+-	u64 reserve;
+-	loff_t length;
+ 	ssize_t submitted;
+ 	struct extent_changeset *data_reserved;
+ };
+@@ -7773,6 +7771,10 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ {
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ 	struct extent_map *em = *map;
++	int type;
++	u64 block_start, orig_start, orig_block_len, ram_bytes;
++	bool can_nocow = false;
++	bool space_reserved = false;
+ 	int ret = 0;
+ 
+ 	/*
+@@ -7787,9 +7789,6 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ 	if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags) ||
+ 	    ((BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW) &&
+ 	     em->block_start != EXTENT_MAP_HOLE)) {
+-		int type;
+-		u64 block_start, orig_start, orig_block_len, ram_bytes;
+-
+ 		if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
+ 			type = BTRFS_ORDERED_PREALLOC;
+ 		else
+@@ -7799,53 +7798,92 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ 
+ 		if (can_nocow_extent(inode, start, &len, &orig_start,
+ 				     &orig_block_len, &ram_bytes, false) == 1 &&
+-		    btrfs_inc_nocow_writers(fs_info, block_start)) {
+-			struct extent_map *em2;
++		    btrfs_inc_nocow_writers(fs_info, block_start))
++			can_nocow = true;
++	}
++
++	if (can_nocow) {
++		struct extent_map *em2;
+ 
+-			em2 = btrfs_create_dio_extent(BTRFS_I(inode), start, len,
+-						      orig_start, block_start,
+-						      len, orig_block_len,
+-						      ram_bytes, type);
++		/* We can NOCOW, so only need to reserve metadata space. */
++		ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), len);
++		if (ret < 0) {
++			/* Our caller expects us to free the input extent map. */
++			free_extent_map(em);
++			*map = NULL;
+ 			btrfs_dec_nocow_writers(fs_info, block_start);
+-			if (type == BTRFS_ORDERED_PREALLOC) {
+-				free_extent_map(em);
+-				*map = em = em2;
+-			}
++			goto out;
++		}
++		space_reserved = true;
+ 
+-			if (em2 && IS_ERR(em2)) {
+-				ret = PTR_ERR(em2);
+-				goto out;
+-			}
+-			/*
+-			 * For inode marked NODATACOW or extent marked PREALLOC,
+-			 * use the existing or preallocated extent, so does not
+-			 * need to adjust btrfs_space_info's bytes_may_use.
+-			 */
+-			btrfs_free_reserved_data_space_noquota(fs_info, len);
+-			goto skip_cow;
++		em2 = btrfs_create_dio_extent(BTRFS_I(inode), start, len,
++					      orig_start, block_start,
++					      len, orig_block_len,
++					      ram_bytes, type);
++		btrfs_dec_nocow_writers(fs_info, block_start);
++		if (type == BTRFS_ORDERED_PREALLOC) {
++			free_extent_map(em);
++			*map = em = em2;
+ 		}
+-	}
+ 
+-	/* this will cow the extent */
+-	free_extent_map(em);
+-	*map = em = btrfs_new_extent_direct(BTRFS_I(inode), start, len);
+-	if (IS_ERR(em)) {
+-		ret = PTR_ERR(em);
+-		goto out;
++		if (IS_ERR(em2)) {
++			ret = PTR_ERR(em2);
++			goto out;
++		}
++	} else {
++		const u64 prev_len = len;
++
++		/* Our caller expects us to free the input extent map. */
++		free_extent_map(em);
++		*map = NULL;
++
++		/* We have to COW, so need to reserve metadata and data space. */
++		ret = btrfs_delalloc_reserve_space(BTRFS_I(inode),
++						   &dio_data->data_reserved,
++						   start, len);
++		if (ret < 0)
++			goto out;
++		space_reserved = true;
++
++		em = btrfs_new_extent_direct(BTRFS_I(inode), start, len);
++		if (IS_ERR(em)) {
++			ret = PTR_ERR(em);
++			goto out;
++		}
++		*map = em;
++		len = min(len, em->len - (start - em->start));
++		if (len < prev_len)
++			btrfs_delalloc_release_space(BTRFS_I(inode),
++						     dio_data->data_reserved,
++						     start + len, prev_len - len,
++						     true);
+ 	}
+ 
+-	len = min(len, em->len - (start - em->start));
++	/*
++	 * We have created our ordered extent, so we can now release our reservation
++	 * for an outstanding extent.
++	 */
++	btrfs_delalloc_release_extents(BTRFS_I(inode), len);
+ 
+-skip_cow:
+ 	/*
+ 	 * Need to update the i_size under the extent lock so buffered
+ 	 * readers will get the updated i_size when we unlock.
+ 	 */
+ 	if (start + len > i_size_read(inode))
+ 		i_size_write(inode, start + len);
+-
+-	dio_data->reserve -= len;
+ out:
++	if (ret && space_reserved) {
++		btrfs_delalloc_release_extents(BTRFS_I(inode), len);
++		if (can_nocow) {
++			btrfs_delalloc_release_metadata(BTRFS_I(inode), len, true);
++		} else {
++			btrfs_delalloc_release_space(BTRFS_I(inode),
++						     dio_data->data_reserved,
++						     start, len, true);
++			extent_changeset_free(dio_data->data_reserved);
++			dio_data->data_reserved = NULL;
++		}
++	}
+ 	return ret;
+ }
+ 
+@@ -7887,18 +7925,6 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
+ 	if (!dio_data)
+ 		return -ENOMEM;
+ 
+-	dio_data->length = length;
+-	if (write) {
+-		dio_data->reserve = round_up(length, fs_info->sectorsize);
+-		ret = btrfs_delalloc_reserve_space(BTRFS_I(inode),
+-				&dio_data->data_reserved,
+-				start, dio_data->reserve);
+-		if (ret) {
+-			extent_changeset_free(dio_data->data_reserved);
+-			kfree(dio_data);
+-			return ret;
+-		}
+-	}
+ 	iomap->private = dio_data;
+ 
+ 
+@@ -7939,6 +7965,34 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
+ 	}
+ 
+ 	len = min(len, em->len - (start - em->start));
++
++	/*
++	 * If we have a NOWAIT request and the range contains multiple extents
++	 * (or a mix of extents and holes), then we return -EAGAIN to make the
++	 * caller fallback to a context where it can do a blocking (without
++	 * NOWAIT) request. This way we avoid doing partial IO and returning
++	 * success to the caller, which is not optimal for writes and for reads
++	 * it can result in unexpected behaviour for an application.
++	 *
++	 * When doing a read, because we use IOMAP_DIO_PARTIAL when calling
++	 * iomap_dio_rw(), we can end up returning less data than what the caller
++	 * asked for, resulting in an unexpected, and incorrect, short read.
++	 * That is, the caller asked to read N bytes and we return less than that,
++	 * which is wrong unless we are crossing EOF. This happens if we get a
++	 * page fault error when trying to fault in pages for the buffer that is
++	 * associated to the struct iov_iter passed to iomap_dio_rw(), and we
++	 * have previously submitted bios for other extents in the range, in
++	 * which case iomap_dio_rw() may return us EIOCBQUEUED if not all of
++	 * those bios have completed by the time we get the page fault error,
++	 * which we return back to our caller - we should only return EIOCBQUEUED
++	 * after we have submitted bios for all the extents in the range.
++	 */
++	if ((flags & IOMAP_NOWAIT) && len < length) {
++		free_extent_map(em);
++		ret = -EAGAIN;
++		goto unlock_err;
++	}
++
+ 	if (write) {
+ 		ret = btrfs_get_blocks_direct_write(&em, inode, dio_data,
+ 						    start, len);
+@@ -7991,14 +8045,8 @@ unlock_err:
+ 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend,
+ 			     &cached_state);
+ err:
+-	if (dio_data) {
+-		btrfs_delalloc_release_space(BTRFS_I(inode),
+-				dio_data->data_reserved, start,
+-				dio_data->reserve, true);
+-		btrfs_delalloc_release_extents(BTRFS_I(inode), dio_data->reserve);
+-		extent_changeset_free(dio_data->data_reserved);
+-		kfree(dio_data);
+-	}
++	kfree(dio_data);
++
+ 	return ret;
+ }
+ 
+@@ -8028,14 +8076,8 @@ static int btrfs_dio_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+ 		ret = -ENOTBLK;
+ 	}
+ 
+-	if (write) {
+-		if (dio_data->reserve)
+-			btrfs_delalloc_release_space(BTRFS_I(inode),
+-					dio_data->data_reserved, pos,
+-					dio_data->reserve, true);
+-		btrfs_delalloc_release_extents(BTRFS_I(inode), dio_data->length);
++	if (write)
+ 		extent_changeset_free(dio_data->data_reserved);
+-	}
+ out:
+ 	kfree(dio_data);
+ 	iomap->private = NULL;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 541a4fbfd79ec..3d3d6e2f110ae 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -986,8 +986,155 @@ out:
+ 	return ret;
+ }
+ 
++/*
++ * Defrag specific helper to get an extent map.
++ *
++ * Differences between this and btrfs_get_extent() are:
++ *
++ * - No extent_map will be added to inode->extent_tree
++ *   To reduce memory usage in the long run.
++ *
++ * - Extra optimization to skip file extents older than @newer_than
++ *   By using btrfs_search_forward() we can skip entire file ranges that
++ *   have extents created in past transactions, because btrfs_search_forward()
++ *   will not visit leaves and nodes with a generation smaller than given
++ *   minimal generation threshold (@newer_than).
++ *
++ * Return valid em if we find a file extent matching the requirement.
++ * Return NULL if we can not find a file extent matching the requirement.
++ *
++ * Return ERR_PTR() for error.
++ */
++static struct extent_map *defrag_get_extent(struct btrfs_inode *inode,
++					    u64 start, u64 newer_than)
++{
++	struct btrfs_root *root = inode->root;
++	struct btrfs_file_extent_item *fi;
++	struct btrfs_path path = { 0 };
++	struct extent_map *em;
++	struct btrfs_key key;
++	u64 ino = btrfs_ino(inode);
++	int ret;
++
++	em = alloc_extent_map();
++	if (!em) {
++		ret = -ENOMEM;
++		goto err;
++	}
++
++	key.objectid = ino;
++	key.type = BTRFS_EXTENT_DATA_KEY;
++	key.offset = start;
++
++	if (newer_than) {
++		ret = btrfs_search_forward(root, &key, &path, newer_than);
++		if (ret < 0)
++			goto err;
++		/* Can't find anything newer */
++		if (ret > 0)
++			goto not_found;
++	} else {
++		ret = btrfs_search_slot(NULL, root, &key, &path, 0, 0);
++		if (ret < 0)
++			goto err;
++	}
++	if (path.slots[0] >= btrfs_header_nritems(path.nodes[0])) {
++		/*
++		 * If btrfs_search_slot() makes the path point beyond nritems,
++		 * we should not have an empty leaf, as this inode must at
++		 * least have its INODE_ITEM.
++		 */
++		ASSERT(btrfs_header_nritems(path.nodes[0]));
++		path.slots[0] = btrfs_header_nritems(path.nodes[0]) - 1;
++	}
++	btrfs_item_key_to_cpu(path.nodes[0], &key, path.slots[0]);
++	/* Perfect match, no need to go one slot back */
++	if (key.objectid == ino && key.type == BTRFS_EXTENT_DATA_KEY &&
++	    key.offset == start)
++		goto iterate;
++
++	/* We didn't find a perfect match, need to go one slot back */
++	if (path.slots[0] > 0) {
++		btrfs_item_key_to_cpu(path.nodes[0], &key, path.slots[0]);
++		if (key.objectid == ino && key.type == BTRFS_EXTENT_DATA_KEY)
++			path.slots[0]--;
++	}
++
++iterate:
++	/* Iterate through the path to find a file extent covering @start */
++	while (true) {
++		u64 extent_end;
++
++		if (path.slots[0] >= btrfs_header_nritems(path.nodes[0]))
++			goto next;
++
++		btrfs_item_key_to_cpu(path.nodes[0], &key, path.slots[0]);
++
++		/*
++		 * We may go one slot back to INODE_REF/XATTR item, then
++		 * need to go forward until we reach an EXTENT_DATA.
++		 * But we should still has the correct ino as key.objectid.
++		 */
++		if (WARN_ON(key.objectid < ino) || key.type < BTRFS_EXTENT_DATA_KEY)
++			goto next;
++
++		/* It's beyond our target range, definitely no extent found */
++		if (key.objectid > ino || key.type > BTRFS_EXTENT_DATA_KEY)
++			goto not_found;
++
++		/*
++		 *	|	|<- File extent ->|
++		 *	\- start
++		 *
++		 * This means there is a hole between start and key.offset.
++		 */
++		if (key.offset > start) {
++			em->start = start;
++			em->orig_start = start;
++			em->block_start = EXTENT_MAP_HOLE;
++			em->len = key.offset - start;
++			break;
++		}
++
++		fi = btrfs_item_ptr(path.nodes[0], path.slots[0],
++				    struct btrfs_file_extent_item);
++		extent_end = btrfs_file_extent_end(&path);
++
++		/*
++		 *	|<- file extent ->|	|
++		 *				\- start
++		 *
++		 * We haven't reached start, search next slot.
++		 */
++		if (extent_end <= start)
++			goto next;
++
++		/* Now this extent covers @start, convert it to em */
++		btrfs_extent_item_to_extent_map(inode, &path, fi, false, em);
++		break;
++next:
++		ret = btrfs_next_item(root, &path);
++		if (ret < 0)
++			goto err;
++		if (ret > 0)
++			goto not_found;
++	}
++	btrfs_release_path(&path);
++	return em;
++
++not_found:
++	btrfs_release_path(&path);
++	free_extent_map(em);
++	return NULL;
++
++err:
++	btrfs_release_path(&path);
++	free_extent_map(em);
++	return ERR_PTR(ret);
++}
++
+ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start,
+-					       bool locked)
++					       u64 newer_than, bool locked)
+ {
+ 	struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
+ 	struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
+@@ -1002,6 +1149,20 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start,
+ 	em = lookup_extent_mapping(em_tree, start, sectorsize);
+ 	read_unlock(&em_tree->lock);
+ 
++	/*
++	 * We can get a merged extent, in that case, we need to re-search
++	 * tree to get the original em for defrag.
++	 *
++	 * If @newer_than is 0 or em::generation < newer_than, we can trust
++	 * this em, as either we don't care about the generation, or the
++	 * merged extent map will be rejected anyway.
++	 */
++	if (em && test_bit(EXTENT_FLAG_MERGED, &em->flags) &&
++	    newer_than && em->generation >= newer_than) {
++		free_extent_map(em);
++		em = NULL;
++	}
++
+ 	if (!em) {
+ 		struct extent_state *cached = NULL;
+ 		u64 end = start + sectorsize - 1;
+@@ -1009,7 +1170,7 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start,
+ 		/* get the big lock and read metadata off disk */
+ 		if (!locked)
+ 			lock_extent_bits(io_tree, start, end, &cached);
+-		em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, start, sectorsize);
++		em = defrag_get_extent(BTRFS_I(inode), start, newer_than);
+ 		if (!locked)
+ 			unlock_extent_cached(io_tree, start, end, &cached);
+ 
+@@ -1037,7 +1198,12 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+ 	if (em->start + em->len >= i_size_read(inode))
+ 		return false;
+ 
+-	next = defrag_lookup_extent(inode, em->start + em->len, locked);
++	/*
++	 * We want to check if the next extent can be merged with the current
++	 * one, which can be an extent created in a past generation, so we pass
++	 * a minimum generation of 0 to defrag_lookup_extent().
++	 */
++	next = defrag_lookup_extent(inode, em->start + em->len, 0, locked);
+ 	/* No more em or hole */
+ 	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+ 		goto out;
+@@ -1188,7 +1354,8 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 		u64 range_len;
+ 
+ 		last_is_target = false;
+-		em = defrag_lookup_extent(&inode->vfs_inode, cur, locked);
++		em = defrag_lookup_extent(&inode->vfs_inode, cur,
++					  newer_than, locked);
+ 		if (!em)
+ 			break;
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 26134b7476a2f..4ca809fa80eaf 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1196,6 +1196,14 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	if (!fs_info->quota_root)
+ 		goto out;
+ 
++	/*
++	 * Unlock the qgroup_ioctl_lock mutex before waiting for the rescan worker to
++	 * complete. Otherwise we can deadlock because btrfs_remove_qgroup() needs
++	 * to lock that mutex while holding a transaction handle and the rescan
++	 * worker needs to commit a transaction.
++	 */
++	mutex_unlock(&fs_info->qgroup_ioctl_lock);
++
+ 	/*
+ 	 * Request qgroup rescan worker to complete and wait for it. This wait
+ 	 * must be done before transaction start for quota disable since it may
+@@ -1203,7 +1211,6 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	 */
+ 	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+ 	btrfs_qgroup_wait_for_completion(fs_info, false);
+-	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+ 
+ 	/*
+ 	 * 1 For the root item
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 33a0ee7ac5906..f129e0718b110 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3971,6 +3971,19 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
+ 	int rw = 0;
+ 	int err = 0;
+ 
++	/*
++	 * This only gets set if we had a half-deleted snapshot on mount.  We
++	 * cannot allow relocation to start while we're still trying to clean up
++	 * these pending deletions.
++	 */
++	ret = wait_on_bit(&fs_info->flags, BTRFS_FS_UNFINISHED_DROPS, TASK_INTERRUPTIBLE);
++	if (ret)
++		return ret;
++
++	/* We may have been woken up by close_ctree, so bail if we're closing. */
++	if (btrfs_fs_closing(fs_info))
++		return -EINTR;
++
+ 	bg = btrfs_lookup_block_group(fs_info, group_start);
+ 	if (!bg)
+ 		return -ENOENT;
+diff --git a/fs/btrfs/root-tree.c b/fs/btrfs/root-tree.c
+index d201663365576..c6a173e81118c 100644
+--- a/fs/btrfs/root-tree.c
++++ b/fs/btrfs/root-tree.c
+@@ -278,6 +278,21 @@ int btrfs_find_orphan_roots(struct btrfs_fs_info *fs_info)
+ 
+ 		WARN_ON(!test_bit(BTRFS_ROOT_ORPHAN_ITEM_INSERTED, &root->state));
+ 		if (btrfs_root_refs(&root->root_item) == 0) {
++			struct btrfs_key drop_key;
++
++			btrfs_disk_key_to_cpu(&drop_key, &root->root_item.drop_progress);
++			/*
++			 * If we have a non-zero drop_progress then we know we
++			 * made it partly through deleting this snapshot, and
++			 * thus we need to make sure we block any balance from
++			 * happening until this snapshot is completely dropped.
++			 */
++			if (drop_key.objectid != 0 || drop_key.type != 0 ||
++			    drop_key.offset != 0) {
++				set_bit(BTRFS_FS_UNFINISHED_DROPS, &fs_info->flags);
++				set_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state);
++			}
++
+ 			set_bit(BTRFS_ROOT_DEAD_TREE, &root->state);
+ 			btrfs_add_dead_root(root);
+ 		}
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index 29bd8c7a7706b..ef7ae20d2b77b 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -736,7 +736,7 @@ void btrfs_page_unlock_writer(struct btrfs_fs_info *fs_info, struct page *page,
+ 	 * Since we own the page lock, no one else could touch subpage::writers
+ 	 * and we are safe to do several atomic operations without spinlock.
+ 	 */
+-	if (atomic_read(&subpage->writers))
++	if (atomic_read(&subpage->writers) == 0)
+ 		/* No writers, locked by plain lock_page() */
+ 		return unlock_page(page);
+ 
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 27b93a6c41bb4..a43c101c5525c 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -846,7 +846,37 @@ btrfs_attach_transaction_barrier(struct btrfs_root *root)
+ static noinline void wait_for_commit(struct btrfs_transaction *commit,
+ 				     const enum btrfs_trans_state min_state)
+ {
+-	wait_event(commit->commit_wait, commit->state >= min_state);
++	struct btrfs_fs_info *fs_info = commit->fs_info;
++	u64 transid = commit->transid;
++	bool put = false;
++
++	while (1) {
++		wait_event(commit->commit_wait, commit->state >= min_state);
++		if (put)
++			btrfs_put_transaction(commit);
++
++		if (min_state < TRANS_STATE_COMPLETED)
++			break;
++
++		/*
++		 * A transaction isn't really completed until all of the
++		 * previous transactions are completed, but with fsync we can
++		 * end up with SUPER_COMMITTED transactions before a COMPLETED
++		 * transaction. Wait for those.
++		 */
++
++		spin_lock(&fs_info->trans_lock);
++		commit = list_first_entry_or_null(&fs_info->trans_list,
++						  struct btrfs_transaction,
++						  list);
++		if (!commit || commit->transid > transid) {
++			spin_unlock(&fs_info->trans_lock);
++			break;
++		}
++		refcount_inc(&commit->use_count);
++		put = true;
++		spin_unlock(&fs_info->trans_lock);
++	}
+ }
+ 
+ int btrfs_wait_for_commit(struct btrfs_fs_info *fs_info, u64 transid)
+@@ -1309,6 +1339,32 @@ again:
+ 	return 0;
+ }
+ 
++/*
++ * If we had a pending drop we need to see if there are any others left in our
++ * dead roots list, and if not clear our bit and wake any waiters.
++ */
++void btrfs_maybe_wake_unfinished_drop(struct btrfs_fs_info *fs_info)
++{
++	/*
++	 * We put the drop in progress roots at the front of the list, so if the
++	 * first entry doesn't have UNFINISHED_DROP set we can wake everybody
++	 * up.
++	 */
++	spin_lock(&fs_info->trans_lock);
++	if (!list_empty(&fs_info->dead_roots)) {
++		struct btrfs_root *root = list_first_entry(&fs_info->dead_roots,
++							   struct btrfs_root,
++							   root_list);
++		if (test_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state)) {
++			spin_unlock(&fs_info->trans_lock);
++			return;
++		}
++	}
++	spin_unlock(&fs_info->trans_lock);
++
++	btrfs_wake_unfinished_drop(fs_info);
++}
++
+ /*
+  * dead roots are old snapshots that need to be deleted.  This allocates
+  * a dirty root struct and adds it into the list of dead roots that need to
+@@ -1321,7 +1377,12 @@ void btrfs_add_dead_root(struct btrfs_root *root)
+ 	spin_lock(&fs_info->trans_lock);
+ 	if (list_empty(&root->root_list)) {
+ 		btrfs_grab_root(root);
+-		list_add_tail(&root->root_list, &fs_info->dead_roots);
++
++		/* We want to process the partially complete drops first. */
++		if (test_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state))
++			list_add(&root->root_list, &fs_info->dead_roots);
++		else
++			list_add_tail(&root->root_list, &fs_info->dead_roots);
+ 	}
+ 	spin_unlock(&fs_info->trans_lock);
+ }
+@@ -2013,16 +2074,24 @@ static void btrfs_cleanup_pending_block_groups(struct btrfs_trans_handle *trans)
+ static inline int btrfs_start_delalloc_flush(struct btrfs_fs_info *fs_info)
+ {
+ 	/*
+-	 * We use writeback_inodes_sb here because if we used
++	 * We use try_to_writeback_inodes_sb() here because if we used
+ 	 * btrfs_start_delalloc_roots we would deadlock with fs freeze.
+ 	 * Currently are holding the fs freeze lock, if we do an async flush
+ 	 * we'll do btrfs_join_transaction() and deadlock because we need to
+ 	 * wait for the fs freeze lock.  Using the direct flushing we benefit
+ 	 * from already being in a transaction and our join_transaction doesn't
+ 	 * have to re-take the fs freeze lock.
++	 *
++	 * Note that try_to_writeback_inodes_sb() will only trigger writeback
++	 * if it can read lock sb->s_umount. It will always be able to lock it,
++	 * except when the filesystem is being unmounted or being frozen, but in
++	 * those cases sync_filesystem() is called, which results in calling
++	 * writeback_inodes_sb() while holding a write lock on sb->s_umount.
++	 * Note that we don't call writeback_inodes_sb() directly, because it
++	 * will emit a warning if sb->s_umount is not locked.
+ 	 */
+ 	if (btrfs_test_opt(fs_info, FLUSHONCOMMIT))
+-		writeback_inodes_sb(fs_info->sb, WB_REASON_SYNC);
++		try_to_writeback_inodes_sb(fs_info->sb, WB_REASON_SYNC);
+ 	return 0;
+ }
+ 
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index eba07b8119bbd..0ded32bbd001e 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -217,6 +217,7 @@ int btrfs_wait_for_commit(struct btrfs_fs_info *fs_info, u64 transid);
+ 
+ void btrfs_add_dead_root(struct btrfs_root *root);
+ int btrfs_defrag_root(struct btrfs_root *root);
++void btrfs_maybe_wake_unfinished_drop(struct btrfs_fs_info *fs_info);
+ int btrfs_clean_one_deleted_snapshot(struct btrfs_root *root);
+ int btrfs_commit_transaction(struct btrfs_trans_handle *trans);
+ int btrfs_commit_transaction_async(struct btrfs_trans_handle *trans);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 6993dcdba6f1a..f5371c4653cd6 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1357,6 +1357,15 @@ again:
+ 						 inode, name, namelen);
+ 			kfree(name);
+ 			iput(dir);
++			/*
++			 * Whenever we need to check if a name exists or not, we
++			 * check the subvolume tree. So after an unlink we must
++			 * run delayed items, so that future checks for a name
++			 * during log replay see that the name does not exists
++			 * anymore.
++			 */
++			if (!ret)
++				ret = btrfs_run_delayed_items(trans);
+ 			if (ret)
+ 				goto out;
+ 			goto again;
+@@ -1609,6 +1618,15 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 				 */
+ 				if (!ret && inode->i_nlink == 0)
+ 					inc_nlink(inode);
++				/*
++				 * Whenever we need to check if a name exists or
++				 * not, we check the subvolume tree. So after an
++				 * unlink we must run delayed items, so that future
++				 * checks for a name during log replay see that the
++				 * name does not exists anymore.
++				 */
++				if (!ret)
++					ret = btrfs_run_delayed_items(trans);
+ 			}
+ 			if (ret < 0)
+ 				goto out;
+@@ -4658,7 +4676,7 @@ static int log_one_extent(struct btrfs_trans_handle *trans,
+ 
+ /*
+  * Log all prealloc extents beyond the inode's i_size to make sure we do not
+- * lose them after doing a fast fsync and replaying the log. We scan the
++ * lose them after doing a full/fast fsync and replaying the log. We scan the
+  * subvolume's root instead of iterating the inode's extent map tree because
+  * otherwise we can log incorrect extent items based on extent map conversion.
+  * That can happen due to the fact that extent maps are merged when they
+@@ -5437,6 +5455,7 @@ static int copy_inode_items_to_log(struct btrfs_trans_handle *trans,
+ 				   struct btrfs_log_ctx *ctx,
+ 				   bool *need_log_inode_item)
+ {
++	const u64 i_size = i_size_read(&inode->vfs_inode);
+ 	struct btrfs_root *root = inode->root;
+ 	int ins_start_slot = 0;
+ 	int ins_nr = 0;
+@@ -5457,13 +5476,21 @@ again:
+ 		if (min_key->type > max_key->type)
+ 			break;
+ 
+-		if (min_key->type == BTRFS_INODE_ITEM_KEY)
++		if (min_key->type == BTRFS_INODE_ITEM_KEY) {
+ 			*need_log_inode_item = false;
+-
+-		if ((min_key->type == BTRFS_INODE_REF_KEY ||
+-		     min_key->type == BTRFS_INODE_EXTREF_KEY) &&
+-		    inode->generation == trans->transid &&
+-		    !recursive_logging) {
++		} else if (min_key->type == BTRFS_EXTENT_DATA_KEY &&
++			   min_key->offset >= i_size) {
++			/*
++			 * Extents at and beyond eof are logged with
++			 * btrfs_log_prealloc_extents().
++			 * Only regular files have BTRFS_EXTENT_DATA_KEY keys,
++			 * and no keys greater than that, so bail out.
++			 */
++			break;
++		} else if ((min_key->type == BTRFS_INODE_REF_KEY ||
++			    min_key->type == BTRFS_INODE_EXTREF_KEY) &&
++			   inode->generation == trans->transid &&
++			   !recursive_logging) {
+ 			u64 other_ino = 0;
+ 			u64 other_parent = 0;
+ 
+@@ -5494,10 +5521,8 @@ again:
+ 				btrfs_release_path(path);
+ 				goto next_key;
+ 			}
+-		}
+-
+-		/* Skip xattrs, we log them later with btrfs_log_all_xattrs() */
+-		if (min_key->type == BTRFS_XATTR_ITEM_KEY) {
++		} else if (min_key->type == BTRFS_XATTR_ITEM_KEY) {
++			/* Skip xattrs, logged later with btrfs_log_all_xattrs() */
+ 			if (ins_nr == 0)
+ 				goto next_slot;
+ 			ret = copy_items(trans, inode, dst_path, path,
+@@ -5550,9 +5575,21 @@ next_key:
+ 			break;
+ 		}
+ 	}
+-	if (ins_nr)
++	if (ins_nr) {
+ 		ret = copy_items(trans, inode, dst_path, path, ins_start_slot,
+ 				 ins_nr, inode_only, logged_isize);
++		if (ret)
++			return ret;
++	}
++
++	if (inode_only == LOG_INODE_ALL && S_ISREG(inode->vfs_inode.i_mode)) {
++		/*
++		 * Release the path because otherwise we might attempt to double
++		 * lock the same leaf with btrfs_log_prealloc_extents() below.
++		 */
++		btrfs_release_path(path);
++		ret = btrfs_log_prealloc_extents(trans, inode, dst_path);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c
+index ee3aab3dd4ac6..bf861fef2f0c3 100644
+--- a/fs/cifs/cifsacl.c
++++ b/fs/cifs/cifsacl.c
+@@ -949,6 +949,9 @@ static void populate_new_aces(char *nacl_base,
+ 		pnntace = (struct cifs_ace *) (nacl_base + nsize);
+ 		nsize += setup_special_mode_ACE(pnntace, nmode);
+ 		num_aces++;
++		pnntace = (struct cifs_ace *) (nacl_base + nsize);
++		nsize += setup_authusers_ACE(pnntace);
++		num_aces++;
+ 		goto set_size;
+ 	}
+ 
+@@ -1297,7 +1300,7 @@ static int build_sec_desc(struct cifs_ntsd *pntsd, struct cifs_ntsd *pnntsd,
+ 
+ 		if (uid_valid(uid)) { /* chown */
+ 			uid_t id;
+-			nowner_sid_ptr = kmalloc(sizeof(struct cifs_sid),
++			nowner_sid_ptr = kzalloc(sizeof(struct cifs_sid),
+ 								GFP_KERNEL);
+ 			if (!nowner_sid_ptr) {
+ 				rc = -ENOMEM;
+@@ -1326,7 +1329,7 @@ static int build_sec_desc(struct cifs_ntsd *pntsd, struct cifs_ntsd *pnntsd,
+ 		}
+ 		if (gid_valid(gid)) { /* chgrp */
+ 			gid_t id;
+-			ngroup_sid_ptr = kmalloc(sizeof(struct cifs_sid),
++			ngroup_sid_ptr = kzalloc(sizeof(struct cifs_sid),
+ 								GFP_KERNEL);
+ 			if (!ngroup_sid_ptr) {
+ 				rc = -ENOMEM;
+@@ -1613,7 +1616,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ 	nsecdesclen = secdesclen;
+ 	if (pnmode && *pnmode != NO_CHANGE_64) { /* chmod */
+ 		if (mode_from_sid)
+-			nsecdesclen += sizeof(struct cifs_ace);
++			nsecdesclen += 2 * sizeof(struct cifs_ace);
+ 		else /* cifsacl */
+ 			nsecdesclen += 5 * sizeof(struct cifs_ace);
+ 	} else { /* chown */
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index dca42aa87d305..99c51391a48da 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -908,6 +908,7 @@ cifs_smb3_do_mount(struct file_system_type *fs_type,
+ 
+ out_super:
+ 	deactivate_locked_super(sb);
++	return root;
+ out:
+ 	if (cifs_sb) {
+ 		kfree(cifs_sb->prepath);
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 6af0191b648f1..d890fd34bb2d0 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -110,8 +110,7 @@ int __exfat_truncate(struct inode *inode, loff_t new_size)
+ 	exfat_set_volume_dirty(sb);
+ 
+ 	num_clusters_new = EXFAT_B_TO_CLU_ROUND_UP(i_size_read(inode), sbi);
+-	num_clusters_phys =
+-		EXFAT_B_TO_CLU_ROUND_UP(EXFAT_I(inode)->i_size_ondisk, sbi);
++	num_clusters_phys = EXFAT_B_TO_CLU_ROUND_UP(ei->i_size_ondisk, sbi);
+ 
+ 	exfat_chain_set(&clu, ei->start_clu, num_clusters_phys, ei->flags);
+ 
+@@ -228,12 +227,13 @@ void exfat_truncate(struct inode *inode, loff_t size)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
++	struct exfat_inode_info *ei = EXFAT_I(inode);
+ 	unsigned int blocksize = i_blocksize(inode);
+ 	loff_t aligned_size;
+ 	int err;
+ 
+ 	mutex_lock(&sbi->s_lock);
+-	if (EXFAT_I(inode)->start_clu == 0) {
++	if (ei->start_clu == 0) {
+ 		/*
+ 		 * Empty start_clu != ~0 (not allocated)
+ 		 */
+@@ -251,8 +251,8 @@ void exfat_truncate(struct inode *inode, loff_t size)
+ 	else
+ 		mark_inode_dirty(inode);
+ 
+-	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1)) &
+-			~(sbi->cluster_size - 1)) >> inode->i_blkbits;
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
++				inode->i_blkbits;
+ write_size:
+ 	aligned_size = i_size_read(inode);
+ 	if (aligned_size & (blocksize - 1)) {
+@@ -260,11 +260,11 @@ write_size:
+ 		aligned_size++;
+ 	}
+ 
+-	if (EXFAT_I(inode)->i_size_ondisk > i_size_read(inode))
+-		EXFAT_I(inode)->i_size_ondisk = aligned_size;
++	if (ei->i_size_ondisk > i_size_read(inode))
++		ei->i_size_ondisk = aligned_size;
+ 
+-	if (EXFAT_I(inode)->i_size_aligned > i_size_read(inode))
+-		EXFAT_I(inode)->i_size_aligned = aligned_size;
++	if (ei->i_size_aligned > i_size_read(inode))
++		ei->i_size_aligned = aligned_size;
+ 	mutex_unlock(&sbi->s_lock);
+ }
+ 
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index 1c7aa1ea4724c..72a0ccfb616c3 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -114,10 +114,9 @@ static int exfat_map_cluster(struct inode *inode, unsigned int clu_offset,
+ 	unsigned int local_clu_offset = clu_offset;
+ 	unsigned int num_to_be_allocated = 0, num_clusters = 0;
+ 
+-	if (EXFAT_I(inode)->i_size_ondisk > 0)
++	if (ei->i_size_ondisk > 0)
+ 		num_clusters =
+-			EXFAT_B_TO_CLU_ROUND_UP(EXFAT_I(inode)->i_size_ondisk,
+-			sbi);
++			EXFAT_B_TO_CLU_ROUND_UP(ei->i_size_ondisk, sbi);
+ 
+ 	if (clu_offset >= num_clusters)
+ 		num_to_be_allocated = clu_offset - num_clusters + 1;
+@@ -416,10 +415,10 @@ static int exfat_write_end(struct file *file, struct address_space *mapping,
+ 
+ 	err = generic_write_end(file, mapping, pos, len, copied, pagep, fsdata);
+ 
+-	if (EXFAT_I(inode)->i_size_aligned < i_size_read(inode)) {
++	if (ei->i_size_aligned < i_size_read(inode)) {
+ 		exfat_fs_error(inode->i_sb,
+ 			"invalid size(size(%llu) > aligned(%llu)\n",
+-			i_size_read(inode), EXFAT_I(inode)->i_size_aligned);
++			i_size_read(inode), ei->i_size_aligned);
+ 		return -EIO;
+ 	}
+ 
+@@ -603,8 +602,8 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
+ 
+ 	exfat_save_attr(inode, info->attr);
+ 
+-	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1)) &
+-		~((loff_t)sbi->cluster_size - 1)) >> inode->i_blkbits;
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
++				inode->i_blkbits;
+ 	inode->i_mtime = info->mtime;
+ 	inode->i_ctime = info->mtime;
+ 	ei->i_crtime = info->crtime;
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 24b41103d1cc0..9d8ada781250b 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -395,9 +395,9 @@ static int exfat_find_empty_entry(struct inode *inode,
+ 
+ 		/* directory inode should be updated in here */
+ 		i_size_write(inode, size);
+-		EXFAT_I(inode)->i_size_ondisk += sbi->cluster_size;
+-		EXFAT_I(inode)->i_size_aligned += sbi->cluster_size;
+-		EXFAT_I(inode)->flags = p_dir->flags;
++		ei->i_size_ondisk += sbi->cluster_size;
++		ei->i_size_aligned += sbi->cluster_size;
++		ei->flags = p_dir->flags;
+ 		inode->i_blocks += 1 << sbi->sect_per_clus_bits;
+ 	}
+ 
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index 5539ffc20d164..4b5d02b1df585 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -364,11 +364,11 @@ static int exfat_read_root(struct inode *inode)
+ 	inode->i_op = &exfat_dir_inode_operations;
+ 	inode->i_fop = &exfat_dir_operations;
+ 
+-	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1))
+-			& ~(sbi->cluster_size - 1)) >> inode->i_blkbits;
+-	EXFAT_I(inode)->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
+-	EXFAT_I(inode)->i_size_aligned = i_size_read(inode);
+-	EXFAT_I(inode)->i_size_ondisk = i_size_read(inode);
++	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
++				inode->i_blkbits;
++	ei->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
++	ei->i_size_aligned = i_size_read(inode);
++	ei->i_size_ondisk = i_size_read(inode);
+ 
+ 	exfat_save_attr(inode, ATTR_SUBDIR);
+ 	inode->i_mtime = inode->i_atime = inode->i_ctime = ei->i_crtime =
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index d248a01132c3b..c2cc9d78915b0 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1725,9 +1725,9 @@ struct ext4_sb_info {
+ 	 */
+ 	struct work_struct s_error_work;
+ 
+-	/* Ext4 fast commit stuff */
++	/* Ext4 fast commit sub transaction ID */
+ 	atomic_t s_fc_subtid;
+-	atomic_t s_fc_ineligible_updates;
++
+ 	/*
+ 	 * After commit starts, the main queue gets locked, and the further
+ 	 * updates get added in the staging queue.
+@@ -1747,7 +1747,7 @@ struct ext4_sb_info {
+ 	spinlock_t s_fc_lock;
+ 	struct buffer_head *s_fc_bh;
+ 	struct ext4_fc_stats s_fc_stats;
+-	u64 s_fc_avg_commit_time;
++	tid_t s_fc_ineligible_tid;
+ #ifdef CONFIG_EXT4_DEBUG
+ 	int s_fc_debug_max_replay;
+ #endif
+@@ -1793,10 +1793,7 @@ static inline int ext4_valid_inum(struct super_block *sb, unsigned long ino)
+ enum {
+ 	EXT4_MF_MNTDIR_SAMPLED,
+ 	EXT4_MF_FS_ABORTED,	/* Fatal error detected */
+-	EXT4_MF_FC_INELIGIBLE,	/* Fast commit ineligible */
+-	EXT4_MF_FC_COMMITTING	/* File system underoing a fast
+-				 * commit.
+-				 */
++	EXT4_MF_FC_INELIGIBLE	/* Fast commit ineligible */
+ };
+ 
+ static inline void ext4_set_mount_flag(struct super_block *sb, int bit)
+@@ -2925,9 +2922,7 @@ void __ext4_fc_track_create(handle_t *handle, struct inode *inode,
+ 			    struct dentry *dentry);
+ void ext4_fc_track_create(handle_t *handle, struct dentry *dentry);
+ void ext4_fc_track_inode(handle_t *handle, struct inode *inode);
+-void ext4_fc_mark_ineligible(struct super_block *sb, int reason);
+-void ext4_fc_start_ineligible(struct super_block *sb, int reason);
+-void ext4_fc_stop_ineligible(struct super_block *sb);
++void ext4_fc_mark_ineligible(struct super_block *sb, int reason, handle_t *handle);
+ void ext4_fc_start_update(struct inode *inode);
+ void ext4_fc_stop_update(struct inode *inode);
+ void ext4_fc_del(struct inode *inode);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 9b37d16b24ffd..d2667189be7e5 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5342,7 +5342,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ 		ret = PTR_ERR(handle);
+ 		goto out_mmap;
+ 	}
+-	ext4_fc_start_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE);
++	ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE, handle);
+ 
+ 	down_write(&EXT4_I(inode)->i_data_sem);
+ 	ext4_discard_preallocations(inode, 0);
+@@ -5381,7 +5381,6 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ 
+ out_stop:
+ 	ext4_journal_stop(handle);
+-	ext4_fc_stop_ineligible(sb);
+ out_mmap:
+ 	filemap_invalidate_unlock(mapping);
+ out_mutex:
+@@ -5483,7 +5482,7 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ 		ret = PTR_ERR(handle);
+ 		goto out_mmap;
+ 	}
+-	ext4_fc_start_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE);
++	ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE, handle);
+ 
+ 	/* Expand file to avoid data loss if there is error while shifting */
+ 	inode->i_size += len;
+@@ -5558,7 +5557,6 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ 
+ out_stop:
+ 	ext4_journal_stop(handle);
+-	ext4_fc_stop_ineligible(sb);
+ out_mmap:
+ 	filemap_invalidate_unlock(mapping);
+ out_mutex:
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 3b79fb063c07a..aca8414706346 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -65,21 +65,11 @@
+  *
+  * Fast Commit Ineligibility
+  * -------------------------
+- * Not all operations are supported by fast commits today (e.g extended
+- * attributes). Fast commit ineligibility is marked by calling one of the
+- * two following functions:
+- *
+- * - ext4_fc_mark_ineligible(): This makes next fast commit operation to fall
+- *   back to full commit. This is useful in case of transient errors.
+  *
+- * - ext4_fc_start_ineligible() and ext4_fc_stop_ineligible() - This makes all
+- *   the fast commits happening between ext4_fc_start_ineligible() and
+- *   ext4_fc_stop_ineligible() and one fast commit after the call to
+- *   ext4_fc_stop_ineligible() to fall back to full commits. It is important to
+- *   make one more fast commit to fall back to full commit after stop call so
+- *   that it guaranteed that the fast commit ineligible operation contained
+- *   within ext4_fc_start_ineligible() and ext4_fc_stop_ineligible() is
+- *   followed by at least 1 full commit.
++ * Not all operations are supported by fast commits today (e.g extended
++ * attributes). Fast commit ineligibility is marked by calling
++ * ext4_fc_mark_ineligible(): This makes next fast commit operation to fall back
++ * to full commit.
+  *
+  * Atomicity of commits
+  * --------------------
+@@ -312,60 +302,36 @@ restart:
+ }
+ 
+ /*
+- * Mark file system as fast commit ineligible. This means that next commit
+- * operation would result in a full jbd2 commit.
++ * Mark file system as fast commit ineligible, and record latest
++ * ineligible transaction tid. This means until the recorded
++ * transaction, commit operation would result in a full jbd2 commit.
+  */
+-void ext4_fc_mark_ineligible(struct super_block *sb, int reason)
++void ext4_fc_mark_ineligible(struct super_block *sb, int reason, handle_t *handle)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	tid_t tid;
+ 
+ 	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
+ 	    (EXT4_SB(sb)->s_mount_state & EXT4_FC_REPLAY))
+ 		return;
+ 
+ 	ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
++	if (handle && !IS_ERR(handle))
++		tid = handle->h_transaction->t_tid;
++	else {
++		read_lock(&sbi->s_journal->j_state_lock);
++		tid = sbi->s_journal->j_running_transaction ?
++				sbi->s_journal->j_running_transaction->t_tid : 0;
++		read_unlock(&sbi->s_journal->j_state_lock);
++	}
++	spin_lock(&sbi->s_fc_lock);
++	if (sbi->s_fc_ineligible_tid < tid)
++		sbi->s_fc_ineligible_tid = tid;
++	spin_unlock(&sbi->s_fc_lock);
+ 	WARN_ON(reason >= EXT4_FC_REASON_MAX);
+ 	sbi->s_fc_stats.fc_ineligible_reason_count[reason]++;
+ }
+ 
+-/*
+- * Start a fast commit ineligible update. Any commits that happen while
+- * such an operation is in progress fall back to full commits.
+- */
+-void ext4_fc_start_ineligible(struct super_block *sb, int reason)
+-{
+-	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-
+-	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
+-	    (EXT4_SB(sb)->s_mount_state & EXT4_FC_REPLAY))
+-		return;
+-
+-	WARN_ON(reason >= EXT4_FC_REASON_MAX);
+-	sbi->s_fc_stats.fc_ineligible_reason_count[reason]++;
+-	atomic_inc(&sbi->s_fc_ineligible_updates);
+-}
+-
+-/*
+- * Stop a fast commit ineligible update. We set EXT4_MF_FC_INELIGIBLE flag here
+- * to ensure that after stopping the ineligible update, at least one full
+- * commit takes place.
+- */
+-void ext4_fc_stop_ineligible(struct super_block *sb)
+-{
+-	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
+-	    (EXT4_SB(sb)->s_mount_state & EXT4_FC_REPLAY))
+-		return;
+-
+-	ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+-	atomic_dec(&EXT4_SB(sb)->s_fc_ineligible_updates);
+-}
+-
+-static inline int ext4_fc_is_ineligible(struct super_block *sb)
+-{
+-	return (ext4_test_mount_flag(sb, EXT4_MF_FC_INELIGIBLE) ||
+-		atomic_read(&EXT4_SB(sb)->s_fc_ineligible_updates));
+-}
+-
+ /*
+  * Generic fast commit tracking function. If this is the first time this we are
+  * called after a full commit, we initialize fast commit fields and then call
+@@ -391,7 +357,7 @@ static int ext4_fc_track_template(
+ 	    (sbi->s_mount_state & EXT4_FC_REPLAY))
+ 		return -EOPNOTSUPP;
+ 
+-	if (ext4_fc_is_ineligible(inode->i_sb))
++	if (ext4_test_mount_flag(inode->i_sb, EXT4_MF_FC_INELIGIBLE))
+ 		return -EINVAL;
+ 
+ 	tid = handle->h_transaction->t_tid;
+@@ -411,7 +377,8 @@ static int ext4_fc_track_template(
+ 	spin_lock(&sbi->s_fc_lock);
+ 	if (list_empty(&EXT4_I(inode)->i_fc_list))
+ 		list_add_tail(&EXT4_I(inode)->i_fc_list,
+-				(ext4_test_mount_flag(inode->i_sb, EXT4_MF_FC_COMMITTING)) ?
++				(sbi->s_journal->j_flags & JBD2_FULL_COMMIT_ONGOING ||
++				 sbi->s_journal->j_flags & JBD2_FAST_COMMIT_ONGOING) ?
+ 				&sbi->s_fc_q[FC_Q_STAGING] :
+ 				&sbi->s_fc_q[FC_Q_MAIN]);
+ 	spin_unlock(&sbi->s_fc_lock);
+@@ -437,7 +404,7 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+ 	mutex_unlock(&ei->i_fc_lock);
+ 	node = kmem_cache_alloc(ext4_fc_dentry_cachep, GFP_NOFS);
+ 	if (!node) {
+-		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM);
++		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM, NULL);
+ 		mutex_lock(&ei->i_fc_lock);
+ 		return -ENOMEM;
+ 	}
+@@ -450,7 +417,7 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+ 		if (!node->fcd_name.name) {
+ 			kmem_cache_free(ext4_fc_dentry_cachep, node);
+ 			ext4_fc_mark_ineligible(inode->i_sb,
+-				EXT4_FC_REASON_NOMEM);
++				EXT4_FC_REASON_NOMEM, NULL);
+ 			mutex_lock(&ei->i_fc_lock);
+ 			return -ENOMEM;
+ 		}
+@@ -464,7 +431,8 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+ 	node->fcd_name.len = dentry->d_name.len;
+ 
+ 	spin_lock(&sbi->s_fc_lock);
+-	if (ext4_test_mount_flag(inode->i_sb, EXT4_MF_FC_COMMITTING))
++	if (sbi->s_journal->j_flags & JBD2_FULL_COMMIT_ONGOING ||
++		sbi->s_journal->j_flags & JBD2_FAST_COMMIT_ONGOING)
+ 		list_add_tail(&node->fcd_list,
+ 				&sbi->s_fc_dentry_q[FC_Q_STAGING]);
+ 	else
+@@ -552,7 +520,7 @@ void ext4_fc_track_inode(handle_t *handle, struct inode *inode)
+ 
+ 	if (ext4_should_journal_data(inode)) {
+ 		ext4_fc_mark_ineligible(inode->i_sb,
+-					EXT4_FC_REASON_INODE_JOURNAL_DATA);
++					EXT4_FC_REASON_INODE_JOURNAL_DATA, handle);
+ 		return;
+ 	}
+ 
+@@ -930,7 +898,6 @@ static int ext4_fc_submit_inode_data_all(journal_t *journal)
+ 	int ret = 0;
+ 
+ 	spin_lock(&sbi->s_fc_lock);
+-	ext4_set_mount_flag(sb, EXT4_MF_FC_COMMITTING);
+ 	list_for_each_entry(ei, &sbi->s_fc_q[FC_Q_MAIN], i_fc_list) {
+ 		ext4_set_inode_state(&ei->vfs_inode, EXT4_STATE_FC_COMMITTING);
+ 		while (atomic_read(&ei->i_fc_updates)) {
+@@ -1123,6 +1090,32 @@ out:
+ 	return ret;
+ }
+ 
++static void ext4_fc_update_stats(struct super_block *sb, int status,
++				 u64 commit_time, int nblks)
++{
++	struct ext4_fc_stats *stats = &EXT4_SB(sb)->s_fc_stats;
++
++	jbd_debug(1, "Fast commit ended with status = %d", status);
++	if (status == EXT4_FC_STATUS_OK) {
++		stats->fc_num_commits++;
++		stats->fc_numblks += nblks;
++		if (likely(stats->s_fc_avg_commit_time))
++			stats->s_fc_avg_commit_time =
++				(commit_time +
++				 stats->s_fc_avg_commit_time * 3) / 4;
++		else
++			stats->s_fc_avg_commit_time = commit_time;
++	} else if (status == EXT4_FC_STATUS_FAILED ||
++		   status == EXT4_FC_STATUS_INELIGIBLE) {
++		if (status == EXT4_FC_STATUS_FAILED)
++			stats->fc_failed_commits++;
++		stats->fc_ineligible_commits++;
++	} else {
++		stats->fc_skipped_commits++;
++	}
++	trace_ext4_fc_commit_stop(sb, nblks, status);
++}
++
+ /*
+  * The main commit entry point. Performs a fast commit for transaction
+  * commit_tid if needed. If it's not possible to perform a fast commit
+@@ -1135,18 +1128,15 @@ int ext4_fc_commit(journal_t *journal, tid_t commit_tid)
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	int nblks = 0, ret, bsize = journal->j_blocksize;
+ 	int subtid = atomic_read(&sbi->s_fc_subtid);
+-	int reason = EXT4_FC_REASON_OK, fc_bufs_before = 0;
++	int status = EXT4_FC_STATUS_OK, fc_bufs_before = 0;
+ 	ktime_t start_time, commit_time;
+ 
+ 	trace_ext4_fc_commit_start(sb);
+ 
+ 	start_time = ktime_get();
+ 
+-	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
+-		(ext4_fc_is_ineligible(sb))) {
+-		reason = EXT4_FC_REASON_INELIGIBLE;
+-		goto out;
+-	}
++	if (!test_opt2(sb, JOURNAL_FAST_COMMIT))
++		return jbd2_complete_transaction(journal, commit_tid);
+ 
+ restart_fc:
+ 	ret = jbd2_fc_begin_commit(journal, commit_tid);
+@@ -1155,74 +1145,59 @@ restart_fc:
+ 		if (atomic_read(&sbi->s_fc_subtid) <= subtid &&
+ 			commit_tid > journal->j_commit_sequence)
+ 			goto restart_fc;
+-		reason = EXT4_FC_REASON_ALREADY_COMMITTED;
+-		goto out;
++		ext4_fc_update_stats(sb, EXT4_FC_STATUS_SKIPPED, 0, 0);
++		return 0;
+ 	} else if (ret) {
+-		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
+-		reason = EXT4_FC_REASON_FC_START_FAILED;
+-		goto out;
++		/*
++		 * Commit couldn't start. Just update stats and perform a
++		 * full commit.
++		 */
++		ext4_fc_update_stats(sb, EXT4_FC_STATUS_FAILED, 0, 0);
++		return jbd2_complete_transaction(journal, commit_tid);
++	}
++
++	/*
++	 * After establishing journal barrier via jbd2_fc_begin_commit(), check
++	 * if we are fast commit ineligible.
++	 */
++	if (ext4_test_mount_flag(sb, EXT4_MF_FC_INELIGIBLE)) {
++		status = EXT4_FC_STATUS_INELIGIBLE;
++		goto fallback;
+ 	}
+ 
+ 	fc_bufs_before = (sbi->s_fc_bytes + bsize - 1) / bsize;
+ 	ret = ext4_fc_perform_commit(journal);
+ 	if (ret < 0) {
+-		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
+-		reason = EXT4_FC_REASON_FC_FAILED;
+-		goto out;
++		status = EXT4_FC_STATUS_FAILED;
++		goto fallback;
+ 	}
+ 	nblks = (sbi->s_fc_bytes + bsize - 1) / bsize - fc_bufs_before;
+ 	ret = jbd2_fc_wait_bufs(journal, nblks);
+ 	if (ret < 0) {
+-		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
+-		reason = EXT4_FC_REASON_FC_FAILED;
+-		goto out;
++		status = EXT4_FC_STATUS_FAILED;
++		goto fallback;
+ 	}
+ 	atomic_inc(&sbi->s_fc_subtid);
+-	jbd2_fc_end_commit(journal);
+-out:
+-	/* Has any ineligible update happened since we started? */
+-	if (reason == EXT4_FC_REASON_OK && ext4_fc_is_ineligible(sb)) {
+-		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
+-		reason = EXT4_FC_REASON_INELIGIBLE;
+-	}
+-
+-	spin_lock(&sbi->s_fc_lock);
+-	if (reason != EXT4_FC_REASON_OK &&
+-		reason != EXT4_FC_REASON_ALREADY_COMMITTED) {
+-		sbi->s_fc_stats.fc_ineligible_commits++;
+-	} else {
+-		sbi->s_fc_stats.fc_num_commits++;
+-		sbi->s_fc_stats.fc_numblks += nblks;
+-	}
+-	spin_unlock(&sbi->s_fc_lock);
+-	nblks = (reason == EXT4_FC_REASON_OK) ? nblks : 0;
+-	trace_ext4_fc_commit_stop(sb, nblks, reason);
+-	commit_time = ktime_to_ns(ktime_sub(ktime_get(), start_time));
++	ret = jbd2_fc_end_commit(journal);
+ 	/*
+-	 * weight the commit time higher than the average time so we don't
+-	 * react too strongly to vast changes in the commit time
++	 * weight the commit time higher than the average time so we
++	 * don't react too strongly to vast changes in the commit time
+ 	 */
+-	if (likely(sbi->s_fc_avg_commit_time))
+-		sbi->s_fc_avg_commit_time = (commit_time +
+-				sbi->s_fc_avg_commit_time * 3) / 4;
+-	else
+-		sbi->s_fc_avg_commit_time = commit_time;
+-	jbd_debug(1,
+-		"Fast commit ended with blks = %d, reason = %d, subtid - %d",
+-		nblks, reason, subtid);
+-	if (reason == EXT4_FC_REASON_FC_FAILED)
+-		return jbd2_fc_end_commit_fallback(journal);
+-	if (reason == EXT4_FC_REASON_FC_START_FAILED ||
+-		reason == EXT4_FC_REASON_INELIGIBLE)
+-		return jbd2_complete_transaction(journal, commit_tid);
+-	return 0;
++	commit_time = ktime_to_ns(ktime_sub(ktime_get(), start_time));
++	ext4_fc_update_stats(sb, status, commit_time, nblks);
++	return ret;
++
++fallback:
++	ret = jbd2_fc_end_commit_fallback(journal);
++	ext4_fc_update_stats(sb, status, 0, 0);
++	return ret;
+ }
+ 
+ /*
+  * Fast commit cleanup routine. This is called after every fast commit and
+  * full commit. full is true if we are called after a full commit.
+  */
+-static void ext4_fc_cleanup(journal_t *journal, int full)
++static void ext4_fc_cleanup(journal_t *journal, int full, tid_t tid)
+ {
+ 	struct super_block *sb = journal->j_private;
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+@@ -1240,7 +1215,8 @@ static void ext4_fc_cleanup(journal_t *journal, int full)
+ 		list_del_init(&iter->i_fc_list);
+ 		ext4_clear_inode_state(&iter->vfs_inode,
+ 				       EXT4_STATE_FC_COMMITTING);
+-		ext4_fc_reset_inode(&iter->vfs_inode);
++		if (iter->i_sync_tid <= tid)
++			ext4_fc_reset_inode(&iter->vfs_inode);
+ 		/* Make sure EXT4_STATE_FC_COMMITTING bit is clear */
+ 		smp_mb();
+ #if (BITS_PER_LONG < 64)
+@@ -1269,8 +1245,10 @@ static void ext4_fc_cleanup(journal_t *journal, int full)
+ 	list_splice_init(&sbi->s_fc_q[FC_Q_STAGING],
+ 				&sbi->s_fc_q[FC_Q_MAIN]);
+ 
+-	ext4_clear_mount_flag(sb, EXT4_MF_FC_COMMITTING);
+-	ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
++	if (tid >= sbi->s_fc_ineligible_tid) {
++		sbi->s_fc_ineligible_tid = 0;
++		ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
++	}
+ 
+ 	if (full)
+ 		sbi->s_fc_bytes = 0;
+@@ -2181,7 +2159,7 @@ int ext4_fc_info_show(struct seq_file *seq, void *v)
+ 		"fc stats:\n%ld commits\n%ld ineligible\n%ld numblks\n%lluus avg_commit_time\n",
+ 		   stats->fc_num_commits, stats->fc_ineligible_commits,
+ 		   stats->fc_numblks,
+-		   div_u64(sbi->s_fc_avg_commit_time, 1000));
++		   div_u64(stats->s_fc_avg_commit_time, 1000));
+ 	seq_puts(seq, "Ineligible reasons:\n");
+ 	for (i = 0; i < EXT4_FC_REASON_MAX; i++)
+ 		seq_printf(seq, "\"%s\":\t%d\n", fc_ineligible_reasons[i],
+diff --git a/fs/ext4/fast_commit.h b/fs/ext4/fast_commit.h
+index 937c381b4c85e..083ad1cb705a7 100644
+--- a/fs/ext4/fast_commit.h
++++ b/fs/ext4/fast_commit.h
+@@ -71,21 +71,19 @@ struct ext4_fc_tail {
+ };
+ 
+ /*
+- * Fast commit reason codes
++ * Fast commit status codes
++ */
++enum {
++	EXT4_FC_STATUS_OK = 0,
++	EXT4_FC_STATUS_INELIGIBLE,
++	EXT4_FC_STATUS_SKIPPED,
++	EXT4_FC_STATUS_FAILED,
++};
++
++/*
++ * Fast commit ineligiblity reasons:
+  */
+ enum {
+-	/*
+-	 * Commit status codes:
+-	 */
+-	EXT4_FC_REASON_OK = 0,
+-	EXT4_FC_REASON_INELIGIBLE,
+-	EXT4_FC_REASON_ALREADY_COMMITTED,
+-	EXT4_FC_REASON_FC_START_FAILED,
+-	EXT4_FC_REASON_FC_FAILED,
+-
+-	/*
+-	 * Fast commit ineligiblity reasons:
+-	 */
+ 	EXT4_FC_REASON_XATTR = 0,
+ 	EXT4_FC_REASON_CROSS_RENAME,
+ 	EXT4_FC_REASON_JOURNAL_FLAG_CHANGE,
+@@ -117,7 +115,10 @@ struct ext4_fc_stats {
+ 	unsigned int fc_ineligible_reason_count[EXT4_FC_REASON_MAX];
+ 	unsigned long fc_num_commits;
+ 	unsigned long fc_ineligible_commits;
++	unsigned long fc_failed_commits;
++	unsigned long fc_skipped_commits;
+ 	unsigned long fc_numblks;
++	u64 s_fc_avg_commit_time;
+ };
+ 
+ #define EXT4_FC_REPLAY_REALLOC_INCREMENT	4
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 3bdfe010e17f9..2f5686dfa30d5 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -337,7 +337,7 @@ stop_handle:
+ 	return;
+ no_delete:
+ 	if (!list_empty(&EXT4_I(inode)->i_fc_list))
+-		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM);
++		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM, NULL);
+ 	ext4_clear_inode(inode);	/* We must guarantee clearing of inode... */
+ }
+ 
+@@ -5983,7 +5983,7 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
+ 		return PTR_ERR(handle);
+ 
+ 	ext4_fc_mark_ineligible(inode->i_sb,
+-		EXT4_FC_REASON_JOURNAL_FLAG_CHANGE);
++		EXT4_FC_REASON_JOURNAL_FLAG_CHANGE, handle);
+ 	err = ext4_mark_inode_dirty(handle, inode);
+ 	ext4_handle_sync(handle);
+ 	ext4_journal_stop(handle);
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 220a4c8178b5e..f61b59045c6d3 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -169,7 +169,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 		err = -EINVAL;
+ 		goto err_out;
+ 	}
+-	ext4_fc_start_ineligible(sb, EXT4_FC_REASON_SWAP_BOOT);
++	ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_SWAP_BOOT, handle);
+ 
+ 	/* Protect extent tree against block allocations via delalloc */
+ 	ext4_double_down_write_data_sem(inode, inode_bl);
+@@ -252,7 +252,6 @@ revert:
+ 
+ err_out1:
+ 	ext4_journal_stop(handle);
+-	ext4_fc_stop_ineligible(sb);
+ 	ext4_double_up_write_data_sem(inode, inode_bl);
+ 
+ err_out:
+@@ -1076,7 +1075,7 @@ mext_out:
+ 
+ 		err = ext4_resize_fs(sb, n_blocks_count);
+ 		if (EXT4_SB(sb)->s_journal) {
+-			ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_RESIZE);
++			ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_RESIZE, NULL);
+ 			jbd2_journal_lock_updates(EXT4_SB(sb)->s_journal);
+ 			err2 = jbd2_journal_flush(EXT4_SB(sb)->s_journal, 0);
+ 			jbd2_journal_unlock_updates(EXT4_SB(sb)->s_journal);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 52c9bd154122a..47b9f87dbc6f7 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3889,7 +3889,7 @@ static int ext4_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
+ 		 * dirents in directories.
+ 		 */
+ 		ext4_fc_mark_ineligible(old.inode->i_sb,
+-			EXT4_FC_REASON_RENAME_DIR);
++			EXT4_FC_REASON_RENAME_DIR, handle);
+ 	} else {
+ 		if (new.inode)
+ 			ext4_fc_track_unlink(handle, new.dentry);
+@@ -4049,7 +4049,7 @@ static int ext4_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	if (unlikely(retval))
+ 		goto end_rename;
+ 	ext4_fc_mark_ineligible(new.inode->i_sb,
+-				EXT4_FC_REASON_CROSS_RENAME);
++				EXT4_FC_REASON_CROSS_RENAME, handle);
+ 	if (old.dir_bh) {
+ 		retval = ext4_rename_dir_finish(handle, &old, new.dir->i_ino);
+ 		if (retval)
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 24a7ad80353b5..32ca34403dcec 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -4620,14 +4620,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	/* Initialize fast commit stuff */
+ 	atomic_set(&sbi->s_fc_subtid, 0);
+-	atomic_set(&sbi->s_fc_ineligible_updates, 0);
+ 	INIT_LIST_HEAD(&sbi->s_fc_q[FC_Q_MAIN]);
+ 	INIT_LIST_HEAD(&sbi->s_fc_q[FC_Q_STAGING]);
+ 	INIT_LIST_HEAD(&sbi->s_fc_dentry_q[FC_Q_MAIN]);
+ 	INIT_LIST_HEAD(&sbi->s_fc_dentry_q[FC_Q_STAGING]);
+ 	sbi->s_fc_bytes = 0;
+ 	ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+-	ext4_clear_mount_flag(sb, EXT4_MF_FC_COMMITTING);
++	sbi->s_fc_ineligible_tid = 0;
+ 	spin_lock_init(&sbi->s_fc_lock);
+ 	memset(&sbi->s_fc_stats, 0, sizeof(sbi->s_fc_stats));
+ 	sbi->s_fc_replay_state.fc_regions = NULL;
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 1e0fc1ed845bf..0423253490986 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2408,7 +2408,7 @@ retry_inode:
+ 		if (IS_SYNC(inode))
+ 			ext4_handle_sync(handle);
+ 	}
+-	ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR);
++	ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR, handle);
+ 
+ cleanup:
+ 	brelse(is.iloc.bh);
+@@ -2486,7 +2486,7 @@ retry:
+ 		if (error == 0)
+ 			error = error2;
+ 	}
+-	ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR);
++	ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR, NULL);
+ 
+ 	return error;
+ }
+@@ -2920,7 +2920,7 @@ int ext4_xattr_delete_inode(handle_t *handle, struct inode *inode,
+ 					 error);
+ 			goto cleanup;
+ 		}
+-		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR);
++		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR, handle);
+ 	}
+ 	error = 0;
+ cleanup:
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 3cc4ab2ba7f4f..d188fa913a075 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -1170,7 +1170,7 @@ restart_loop:
+ 	if (journal->j_commit_callback)
+ 		journal->j_commit_callback(journal, commit_transaction);
+ 	if (journal->j_fc_cleanup_callback)
+-		journal->j_fc_cleanup_callback(journal, 1);
++		journal->j_fc_cleanup_callback(journal, 1, commit_transaction->t_tid);
+ 
+ 	trace_jbd2_end_commit(journal, commit_transaction);
+ 	jbd_debug(1, "JBD2: commit %d complete, head %d\n",
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index bd9ac98916043..1f8493ef181d6 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -769,7 +769,7 @@ EXPORT_SYMBOL(jbd2_fc_begin_commit);
+ static int __jbd2_fc_end_commit(journal_t *journal, tid_t tid, bool fallback)
+ {
+ 	if (journal->j_fc_cleanup_callback)
+-		journal->j_fc_cleanup_callback(journal, 0);
++		journal->j_fc_cleanup_callback(journal, 0, tid);
+ 	write_lock(&journal->j_state_lock);
+ 	journal->j_flags &= ~JBD2_FAST_COMMIT_ONGOING;
+ 	if (fallback)
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index cbb22354f3802..7df985e3cada0 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1586,7 +1586,8 @@ static const struct mm_walk_ops pagemap_ops = {
+  * Bits 5-54  swap offset if swapped
+  * Bit  55    pte is soft-dirty (see Documentation/admin-guide/mm/soft-dirty.rst)
+  * Bit  56    page exclusively mapped
+- * Bits 57-60 zero
++ * Bit  57    pte is uffd-wp write-protected
++ * Bits 58-60 zero
+  * Bit  61    page is file-page or shared-anon
+  * Bit  62    page swapped
+  * Bit  63    page present
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index fd933c45281af..d63b8106796e2 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1295,7 +1295,7 @@ struct journal_s
+ 	 * Clean-up after fast commit or full commit. JBD2 calls this function
+ 	 * after every commit operation.
+ 	 */
+-	void (*j_fc_cleanup_callback)(struct journal_s *journal, int);
++	void (*j_fc_cleanup_callback)(struct journal_s *journal, int full, tid_t tid);
+ 
+ 	/**
+ 	 * @j_fc_replay_callback:
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 058d7f371e25a..8db25fcc1eba7 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -54,8 +54,8 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
+ extern void init_idle(struct task_struct *idle, int cpu);
+ 
+ extern int sched_fork(unsigned long clone_flags, struct task_struct *p);
+-extern void sched_post_fork(struct task_struct *p,
+-			    struct kernel_clone_args *kargs);
++extern void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs);
++extern void sched_post_fork(struct task_struct *p);
+ extern void sched_dead(struct task_struct *p);
+ 
+ void __noreturn do_task_dead(void);
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 3271870fd85e3..a1093994e5e43 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -497,8 +497,7 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk,
+ 
+ 		tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom);
+ 		if (IS_ERR(tmp)) {
+-			kfree_skb(skb);
+-			return tmp;
++			return skb;
+ 		}
+ 
+ 		len -= tmp->len;
+diff --git a/include/net/ndisc.h b/include/net/ndisc.h
+index 04341d86585de..5e37e58586796 100644
+--- a/include/net/ndisc.h
++++ b/include/net/ndisc.h
+@@ -487,9 +487,9 @@ int igmp6_late_init(void);
+ void igmp6_cleanup(void);
+ void igmp6_late_cleanup(void);
+ 
+-int igmp6_event_query(struct sk_buff *skb);
++void igmp6_event_query(struct sk_buff *skb);
+ 
+-int igmp6_event_report(struct sk_buff *skb);
++void igmp6_event_report(struct sk_buff *skb);
+ 
+ 
+ #ifdef CONFIG_SYSCTL
+diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
+index 9eed51e920e87..980daa6e1e3aa 100644
+--- a/include/net/netfilter/nf_queue.h
++++ b/include/net/netfilter/nf_queue.h
+@@ -37,7 +37,7 @@ void nf_register_queue_handler(const struct nf_queue_handler *qh);
+ void nf_unregister_queue_handler(void);
+ void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict);
+ 
+-void nf_queue_entry_get_refs(struct nf_queue_entry *entry);
++bool nf_queue_entry_get_refs(struct nf_queue_entry *entry);
+ void nf_queue_entry_free(struct nf_queue_entry *entry);
+ 
+ static inline void init_hashrandom(u32 *jhash_initval)
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 2b1ce8534993c..301a164f17e9f 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1567,7 +1567,6 @@ void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
+ void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
+ u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
+ int xfrm_init_replay(struct xfrm_state *x);
+-u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload);
+ int xfrm_init_state(struct xfrm_state *x);
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index 225ec87d4f228..7989d9483ea75 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -278,7 +278,8 @@
+ #define KEY_PAUSECD		201
+ #define KEY_PROG3		202
+ #define KEY_PROG4		203
+-#define KEY_DASHBOARD		204	/* AL Dashboard */
++#define KEY_ALL_APPLICATIONS	204	/* AC Desktop Show All Applications */
++#define KEY_DASHBOARD		KEY_ALL_APPLICATIONS
+ #define KEY_SUSPEND		205
+ #define KEY_CLOSE		206	/* AC Close */
+ #define KEY_PLAY		207
+@@ -612,6 +613,7 @@
+ #define KEY_ASSISTANT		0x247	/* AL Context-aware desktop assistant */
+ #define KEY_KBD_LAYOUT_NEXT	0x248	/* AC Next Keyboard Layout Select */
+ #define KEY_EMOJI_PICKER	0x249	/* Show/hide emoji picker (HUTRR101) */
++#define KEY_DICTATE		0x24a	/* Start or Stop Voice Dictation Session (HUTRR99) */
+ 
+ #define KEY_BRIGHTNESS_MIN		0x250	/* Set Brightness to Minimum */
+ #define KEY_BRIGHTNESS_MAX		0x251	/* Set Brightness to Maximum */
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index 4e29d78518902..65e13a099b1a0 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -511,6 +511,12 @@ struct xfrm_user_offload {
+ 	int				ifindex;
+ 	__u8				flags;
+ };
++/* This flag was exposed without any kernel code that supports it.
++ * Unfortunately, strongswan has code that sets this flag,
++ * which makes it impossible to reuse this bit.
++ *
++ * So leave it here to make sure that it won't be reused by mistake.
++ */
+ #define XFRM_OFFLOAD_IPV6	1
+ #define XFRM_OFFLOAD_INBOUND	2
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 50d02e3103a57..d846dfde66915 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2293,6 +2293,17 @@ static __latent_entropy struct task_struct *copy_process(
+ 	if (retval)
+ 		goto bad_fork_put_pidfd;
+ 
++	/*
++	 * Now that the cgroups are pinned, re-clone the parent cgroup and put
++	 * the new task on the correct runqueue. All this *before* the task
++	 * becomes visible.
++	 *
++	 * This isn't part of ->can_fork() because while the re-cloning is
++	 * cgroup specific, it unconditionally needs to place the task on a
++	 * runqueue.
++	 */
++	sched_cgroup_fork(p, args);
++
+ 	/*
+ 	 * From this point on we must avoid any synchronous user-space
+ 	 * communication until we take the tasklist-lock. In particular, we do
+@@ -2402,7 +2413,7 @@ static __latent_entropy struct task_struct *copy_process(
+ 		fd_install(pidfd, pidfile);
+ 
+ 	proc_fork_connector(p);
+-	sched_post_fork(p, args);
++	sched_post_fork(p);
+ 	cgroup_post_fork(p, args);
+ 	perf_event_fork(p);
+ 
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d24823b3c3f9f..65082cab29069 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -4410,6 +4410,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 
+ 	init_entity_runnable_average(&p->se);
+ 
++
+ #ifdef CONFIG_SCHED_INFO
+ 	if (likely(sched_info_on()))
+ 		memset(&p->sched_info, 0, sizeof(p->sched_info));
+@@ -4425,18 +4426,23 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 	return 0;
+ }
+ 
+-void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
+ {
+ 	unsigned long flags;
+-#ifdef CONFIG_CGROUP_SCHED
+-	struct task_group *tg;
+-#endif
+ 
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
+ 	raw_spin_lock_irqsave(&p->pi_lock, flags);
+ #ifdef CONFIG_CGROUP_SCHED
+-	tg = container_of(kargs->cset->subsys[cpu_cgrp_id],
+-			  struct task_group, css);
+-	p->sched_task_group = autogroup_task_group(p, tg);
++	if (1) {
++		struct task_group *tg;
++		tg = container_of(kargs->cset->subsys[cpu_cgrp_id],
++				  struct task_group, css);
++		tg = autogroup_task_group(p, tg);
++		p->sched_task_group = tg;
++	}
+ #endif
+ 	rseq_migrate(p);
+ 	/*
+@@ -4447,7 +4453,10 @@ void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs)
+ 	if (p->sched_class->task_fork)
+ 		p->sched_class->task_fork(p);
+ 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
+ 
++void sched_post_fork(struct task_struct *p)
++{
+ 	uclamp_post_fork(p);
+ }
+ 
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 1183c88634aa6..3812a33b9fcb9 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -310,10 +310,20 @@ record_it:
+ 	local_irq_restore(flags);
+ }
+ 
+-static void blk_trace_free(struct blk_trace *bt)
++static void blk_trace_free(struct request_queue *q, struct blk_trace *bt)
+ {
+ 	relay_close(bt->rchan);
+-	debugfs_remove(bt->dir);
++
++	/*
++	 * If 'bt->dir' is not set, then both 'dropped' and 'msg' are created
++	 * under 'q->debugfs_dir', thus lookup and remove them.
++	 */
++	if (!bt->dir) {
++		debugfs_remove(debugfs_lookup("dropped", q->debugfs_dir));
++		debugfs_remove(debugfs_lookup("msg", q->debugfs_dir));
++	} else {
++		debugfs_remove(bt->dir);
++	}
+ 	free_percpu(bt->sequence);
+ 	free_percpu(bt->msg_data);
+ 	kfree(bt);
+@@ -335,10 +345,10 @@ static void put_probe_ref(void)
+ 	mutex_unlock(&blk_probe_mutex);
+ }
+ 
+-static void blk_trace_cleanup(struct blk_trace *bt)
++static void blk_trace_cleanup(struct request_queue *q, struct blk_trace *bt)
+ {
+ 	synchronize_rcu();
+-	blk_trace_free(bt);
++	blk_trace_free(q, bt);
+ 	put_probe_ref();
+ }
+ 
+@@ -352,7 +362,7 @@ static int __blk_trace_remove(struct request_queue *q)
+ 		return -EINVAL;
+ 
+ 	if (bt->trace_state != Blktrace_running)
+-		blk_trace_cleanup(bt);
++		blk_trace_cleanup(q, bt);
+ 
+ 	return 0;
+ }
+@@ -572,7 +582,7 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+ 	ret = 0;
+ err:
+ 	if (ret)
+-		blk_trace_free(bt);
++		blk_trace_free(q, bt);
+ 	return ret;
+ }
+ 
+@@ -1616,7 +1626,7 @@ static int blk_trace_remove_queue(struct request_queue *q)
+ 
+ 	put_probe_ref();
+ 	synchronize_rcu();
+-	blk_trace_free(bt);
++	blk_trace_free(q, bt);
+ 	return 0;
+ }
+ 
+@@ -1647,7 +1657,7 @@ static int blk_trace_setup_queue(struct request_queue *q,
+ 	return 0;
+ 
+ free_bt:
+-	blk_trace_free(bt);
++	blk_trace_free(q, bt);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index bb15059020445..24683115eade2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -235,7 +235,7 @@ static char trace_boot_options_buf[MAX_TRACER_SIZE] __initdata;
+ static int __init set_trace_boot_options(char *str)
+ {
+ 	strlcpy(trace_boot_options_buf, str, MAX_TRACER_SIZE);
+-	return 0;
++	return 1;
+ }
+ __setup("trace_options=", set_trace_boot_options);
+ 
+@@ -246,7 +246,7 @@ static int __init set_trace_boot_clock(char *str)
+ {
+ 	strlcpy(trace_boot_clock_buf, str, MAX_TRACER_SIZE);
+ 	trace_boot_clock = trace_boot_clock_buf;
+-	return 0;
++	return 1;
+ }
+ __setup("trace_clock=", set_trace_boot_clock);
+ 
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index c9124038b140f..06d6318ee5377 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -5,6 +5,7 @@
+  * Copyright (C) 2009 Tom Zanussi <tzanussi@gmail.com>
+  */
+ 
++#include <linux/uaccess.h>
+ #include <linux/module.h>
+ #include <linux/ctype.h>
+ #include <linux/mutex.h>
+@@ -654,6 +655,52 @@ DEFINE_EQUALITY_PRED(32);
+ DEFINE_EQUALITY_PRED(16);
+ DEFINE_EQUALITY_PRED(8);
+ 
++/* user space strings temp buffer */
++#define USTRING_BUF_SIZE	1024
++
++struct ustring_buffer {
++	char		buffer[USTRING_BUF_SIZE];
++};
++
++static __percpu struct ustring_buffer *ustring_per_cpu;
++
++static __always_inline char *test_string(char *str)
++{
++	struct ustring_buffer *ubuf;
++	char *kstr;
++
++	if (!ustring_per_cpu)
++		return NULL;
++
++	ubuf = this_cpu_ptr(ustring_per_cpu);
++	kstr = ubuf->buffer;
++
++	/* For safety, do not trust the string pointer */
++	if (!strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE))
++		return NULL;
++	return kstr;
++}
++
++static __always_inline char *test_ustring(char *str)
++{
++	struct ustring_buffer *ubuf;
++	char __user *ustr;
++	char *kstr;
++
++	if (!ustring_per_cpu)
++		return NULL;
++
++	ubuf = this_cpu_ptr(ustring_per_cpu);
++	kstr = ubuf->buffer;
++
++	/* user space address? */
++	ustr = (char __user *)str;
++	if (!strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE))
++		return NULL;
++
++	return kstr;
++}
++
+ /* Filter predicate for fixed sized arrays of characters */
+ static int filter_pred_string(struct filter_pred *pred, void *event)
+ {
+@@ -667,19 +714,43 @@ static int filter_pred_string(struct filter_pred *pred, void *event)
+ 	return match;
+ }
+ 
+-/* Filter predicate for char * pointers */
+-static int filter_pred_pchar(struct filter_pred *pred, void *event)
++static __always_inline int filter_pchar(struct filter_pred *pred, char *str)
+ {
+-	char **addr = (char **)(event + pred->offset);
+ 	int cmp, match;
+-	int len = strlen(*addr) + 1;	/* including tailing '\0' */
++	int len;
+ 
+-	cmp = pred->regex.match(*addr, &pred->regex, len);
++	len = strlen(str) + 1;	/* including trailing '\0' */
++	cmp = pred->regex.match(str, &pred->regex, len);
+ 
+ 	match = cmp ^ pred->not;
+ 
+ 	return match;
+ }
++/* Filter predicate for char * pointers */
++static int filter_pred_pchar(struct filter_pred *pred, void *event)
++{
++	char **addr = (char **)(event + pred->offset);
++	char *str;
++
++	str = test_string(*addr);
++	if (!str)
++		return 0;
++
++	return filter_pchar(pred, str);
++}
++
++/* Filter predicate for char * pointers in user space */
++static int filter_pred_pchar_user(struct filter_pred *pred, void *event)
++{
++	char **addr = (char **)(event + pred->offset);
++	char *str;
++
++	str = test_ustring(*addr);
++	if (!str)
++		return 0;
++
++	return filter_pchar(pred, str);
++}
+ 
+ /*
+  * Filter predicate for dynamic sized arrays of characters.
+@@ -1158,6 +1229,7 @@ static int parse_pred(const char *str, void *data,
+ 	struct filter_pred *pred = NULL;
+ 	char num_buf[24];	/* Big enough to hold an address */
+ 	char *field_name;
++	bool ustring = false;
+ 	char q;
+ 	u64 val;
+ 	int len;
+@@ -1192,6 +1264,12 @@ static int parse_pred(const char *str, void *data,
+ 		return -EINVAL;
+ 	}
+ 
++	/* See if the field is a user space string */
++	if ((len = str_has_prefix(str + i, ".ustring"))) {
++		ustring = true;
++		i += len;
++	}
++
+ 	while (isspace(str[i]))
+ 		i++;
+ 
+@@ -1320,8 +1398,20 @@ static int parse_pred(const char *str, void *data,
+ 
+ 		} else if (field->filter_type == FILTER_DYN_STRING)
+ 			pred->fn = filter_pred_strloc;
+-		else
+-			pred->fn = filter_pred_pchar;
++		else {
++
++			if (!ustring_per_cpu) {
++				/* Once allocated, keep it around for good */
++				ustring_per_cpu = alloc_percpu(struct ustring_buffer);
++				if (!ustring_per_cpu)
++					goto err_mem;
++			}
++
++			if (ustring)
++				pred->fn = filter_pred_pchar_user;
++			else
++				pred->fn = filter_pred_pchar;
++		}
+ 		/* go past the last quote */
+ 		i++;
+ 
+@@ -1387,6 +1477,9 @@ static int parse_pred(const char *str, void *data,
+ err_free:
+ 	kfree(pred);
+ 	return -EINVAL;
++err_mem:
++	kfree(pred);
++	return -ENOMEM;
+ }
+ 
+ enum {
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index e697bfedac2f5..8fa373d4de39d 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -2273,9 +2273,9 @@ parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
+ 			/*
+ 			 * For backward compatibility, if field_name
+ 			 * was "cpu", then we treat this the same as
+-			 * common_cpu.
++			 * common_cpu. This also works for "CPU".
+ 			 */
+-			if (strcmp(field_name, "cpu") == 0) {
++			if (field && field->filter_type == FILTER_CPU) {
+ 				*flags |= HIST_FIELD_FL_CPU;
+ 			} else {
+ 				hist_err(tr, HIST_ERR_FIELD_NOT_FOUND,
+@@ -4816,7 +4816,7 @@ static int create_tracing_map_fields(struct hist_trigger_data *hist_data)
+ 
+ 			if (hist_field->flags & HIST_FIELD_FL_STACKTRACE)
+ 				cmp_fn = tracing_map_cmp_none;
+-			else if (!field)
++			else if (!field || hist_field->flags & HIST_FIELD_FL_CPU)
+ 				cmp_fn = tracing_map_cmp_num(hist_field->size,
+ 							     hist_field->is_signed);
+ 			else if (is_string_field(field))
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 1bb85f7a4593a..bc2a2bc5a453f 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -31,7 +31,7 @@ static int __init set_kprobe_boot_events(char *str)
+ 	strlcpy(kprobe_boot_events_buf, str, COMMAND_LINE_SIZE);
+ 	disable_tracing_selftest("running kprobe events");
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("kprobe_event=", set_kprobe_boot_events);
+ 
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index 6b2e3ca7ee993..5481ba44a8d68 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -58,6 +58,18 @@ static void set_cred_user_ns(struct cred *cred, struct user_namespace *user_ns)
+ 	cred->user_ns = user_ns;
+ }
+ 
++static unsigned long enforced_nproc_rlimit(void)
++{
++	unsigned long limit = RLIM_INFINITY;
++
++	/* Is RLIMIT_NPROC currently enforced? */
++	if (!uid_eq(current_uid(), GLOBAL_ROOT_UID) ||
++	    (current_user_ns() != &init_user_ns))
++		limit = rlimit(RLIMIT_NPROC);
++
++	return limit;
++}
++
+ /*
+  * Create a new user namespace, deriving the creator from the user in the
+  * passed credentials, and replacing that user with the new root user for the
+@@ -122,7 +134,7 @@ int create_user_ns(struct cred *new)
+ 	for (i = 0; i < MAX_PER_NAMESPACE_UCOUNTS; i++) {
+ 		ns->ucount_max[i] = INT_MAX;
+ 	}
+-	set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_NPROC, rlimit(RLIMIT_NPROC));
++	set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_NPROC, enforced_nproc_rlimit());
+ 	set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_MSGQUEUE, rlimit(RLIMIT_MSGQUEUE));
+ 	set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_SIGPENDING, rlimit(RLIMIT_SIGPENDING));
+ 	set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_MEMLOCK, rlimit(RLIMIT_MEMLOCK));
+diff --git a/mm/memfd.c b/mm/memfd.c
+index 9f80f162791a5..08f5f8304746f 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -31,20 +31,28 @@
+ static void memfd_tag_pins(struct xa_state *xas)
+ {
+ 	struct page *page;
+-	unsigned int tagged = 0;
++	int latency = 0;
++	int cache_count;
+ 
+ 	lru_add_drain();
+ 
+ 	xas_lock_irq(xas);
+ 	xas_for_each(xas, page, ULONG_MAX) {
+-		if (xa_is_value(page))
+-			continue;
+-		page = find_subpage(page, xas->xa_index);
+-		if (page_count(page) - page_mapcount(page) > 1)
++		cache_count = 1;
++		if (!xa_is_value(page) &&
++		    PageTransHuge(page) && !PageHuge(page))
++			cache_count = HPAGE_PMD_NR;
++
++		if (!xa_is_value(page) &&
++		    page_count(page) - total_mapcount(page) != cache_count)
+ 			xas_set_mark(xas, MEMFD_TAG_PINNED);
++		if (cache_count != 1)
++			xas_set(xas, page->index + cache_count);
+ 
+-		if (++tagged % XA_CHECK_SCHED)
++		latency += cache_count;
++		if (latency < XA_CHECK_SCHED)
+ 			continue;
++		latency = 0;
+ 
+ 		xas_pause(xas);
+ 		xas_unlock_irq(xas);
+@@ -73,7 +81,8 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ 
+ 	error = 0;
+ 	for (scan = 0; scan <= LAST_SCAN; scan++) {
+-		unsigned int tagged = 0;
++		int latency = 0;
++		int cache_count;
+ 
+ 		if (!xas_marked(&xas, MEMFD_TAG_PINNED))
+ 			break;
+@@ -87,10 +96,14 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ 		xas_lock_irq(&xas);
+ 		xas_for_each_marked(&xas, page, ULONG_MAX, MEMFD_TAG_PINNED) {
+ 			bool clear = true;
+-			if (xa_is_value(page))
+-				continue;
+-			page = find_subpage(page, xas.xa_index);
+-			if (page_count(page) - page_mapcount(page) != 1) {
++
++			cache_count = 1;
++			if (!xa_is_value(page) &&
++			    PageTransHuge(page) && !PageHuge(page))
++				cache_count = HPAGE_PMD_NR;
++
++			if (!xa_is_value(page) && cache_count !=
++			    page_count(page) - total_mapcount(page)) {
+ 				/*
+ 				 * On the last scan, we clean up all those tags
+ 				 * we inserted; but make a note that we still
+@@ -103,8 +116,11 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ 			}
+ 			if (clear)
+ 				xas_clear_mark(&xas, MEMFD_TAG_PINNED);
+-			if (++tagged % XA_CHECK_SCHED)
++
++			latency += cache_count;
++			if (latency < XA_CHECK_SCHED)
+ 				continue;
++			latency = 0;
+ 
+ 			xas_pause(&xas);
+ 			xas_unlock_irq(&xas);
+diff --git a/mm/util.c b/mm/util.c
+index 741ba32a43ac4..c8261c4ca58b2 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -594,8 +594,10 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
+ 		return ret;
+ 
+ 	/* Don't even allow crazy sizes */
+-	if (WARN_ON_ONCE(size > INT_MAX))
++	if (unlikely(size > INT_MAX)) {
++		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
+ 		return NULL;
++	}
+ 
+ 	return __vmalloc_node(size, 1, flags, node,
+ 			__builtin_return_address(0));
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index 8a2b78f9c4b2c..35fadb9248498 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -149,22 +149,25 @@ static bool batadv_is_on_batman_iface(const struct net_device *net_dev)
+ 	struct net *net = dev_net(net_dev);
+ 	struct net_device *parent_dev;
+ 	struct net *parent_net;
++	int iflink;
+ 	bool ret;
+ 
+ 	/* check if this is a batman-adv mesh interface */
+ 	if (batadv_softif_is_valid(net_dev))
+ 		return true;
+ 
+-	/* no more parents..stop recursion */
+-	if (dev_get_iflink(net_dev) == 0 ||
+-	    dev_get_iflink(net_dev) == net_dev->ifindex)
++	iflink = dev_get_iflink(net_dev);
++	if (iflink == 0)
+ 		return false;
+ 
+ 	parent_net = batadv_getlink_net(net_dev, net);
+ 
++	/* iflink to itself, most likely physical device */
++	if (net == parent_net && iflink == net_dev->ifindex)
++		return false;
++
+ 	/* recurse over the parent device */
+-	parent_dev = __dev_get_by_index((struct net *)parent_net,
+-					dev_get_iflink(net_dev));
++	parent_dev = __dev_get_by_index((struct net *)parent_net, iflink);
+ 	/* if we got a NULL parent_dev there is something broken.. */
+ 	if (!parent_dev) {
+ 		pr_err("Cannot find parent device\n");
+@@ -214,14 +217,15 @@ static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
+ 	struct net_device *real_netdev = NULL;
+ 	struct net *real_net;
+ 	struct net *net;
+-	int ifindex;
++	int iflink;
+ 
+ 	ASSERT_RTNL();
+ 
+ 	if (!netdev)
+ 		return NULL;
+ 
+-	if (netdev->ifindex == dev_get_iflink(netdev)) {
++	iflink = dev_get_iflink(netdev);
++	if (iflink == 0) {
+ 		dev_hold(netdev);
+ 		return netdev;
+ 	}
+@@ -231,9 +235,16 @@ static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
+ 		goto out;
+ 
+ 	net = dev_net(hard_iface->soft_iface);
+-	ifindex = dev_get_iflink(netdev);
+ 	real_net = batadv_getlink_net(netdev, net);
+-	real_netdev = dev_get_by_index(real_net, ifindex);
++
++	/* iflink to itself, most likely physical device */
++	if (net == real_net && netdev->ifindex == iflink) {
++		real_netdev = netdev;
++		dev_hold(real_netdev);
++		goto out;
++	}
++
++	real_netdev = dev_get_by_index(real_net, iflink);
+ 
+ out:
+ 	batadv_hardif_put(hard_iface);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index f78969d8d8160..56e23333e7080 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3854,6 +3854,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 		list_skb = list_skb->next;
+ 
+ 		err = 0;
++		delta_truesize += nskb->truesize;
+ 		if (skb_shared(nskb)) {
+ 			tmp = skb_clone(nskb, GFP_ATOMIC);
+ 			if (tmp) {
+@@ -3878,7 +3879,6 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 		tail = nskb;
+ 
+ 		delta_len += nskb->len;
+-		delta_truesize += nskb->truesize;
+ 
+ 		skb_push(nskb, -skb_network_offset(nskb) + offset);
+ 
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 8eb671c827f90..929a2b096b04e 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -1153,7 +1153,7 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
+ 	struct sk_psock *psock;
+ 	struct bpf_prog *prog;
+ 	int ret = __SK_DROP;
+-	int len = skb->len;
++	int len = orig_len;
+ 
+ 	/* clone here so sk_eat_skb() in tcp_read_sock does not drop our data */
+ 	skb = skb_clone(skb, GFP_ATOMIC);
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index b441ab330fd34..dc4fb699b56c3 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -2073,8 +2073,52 @@ u8 dcb_ieee_getapp_default_prio_mask(const struct net_device *dev)
+ }
+ EXPORT_SYMBOL(dcb_ieee_getapp_default_prio_mask);
+ 
++static void dcbnl_flush_dev(struct net_device *dev)
++{
++	struct dcb_app_type *itr, *tmp;
++
++	spin_lock_bh(&dcb_lock);
++
++	list_for_each_entry_safe(itr, tmp, &dcb_app_list, list) {
++		if (itr->ifindex == dev->ifindex) {
++			list_del(&itr->list);
++			kfree(itr);
++		}
++	}
++
++	spin_unlock_bh(&dcb_lock);
++}
++
++static int dcbnl_netdevice_event(struct notifier_block *nb,
++				 unsigned long event, void *ptr)
++{
++	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++
++	switch (event) {
++	case NETDEV_UNREGISTER:
++		if (!dev->dcbnl_ops)
++			return NOTIFY_DONE;
++
++		dcbnl_flush_dev(dev);
++
++		return NOTIFY_OK;
++	default:
++		return NOTIFY_DONE;
++	}
++}
++
++static struct notifier_block dcbnl_nb __read_mostly = {
++	.notifier_call  = dcbnl_netdevice_event,
++};
++
+ static int __init dcbnl_init(void)
+ {
++	int err;
++
++	err = register_netdevice_notifier(&dcbnl_nb);
++	if (err)
++		return err;
++
+ 	rtnl_register(PF_UNSPEC, RTM_GETDCB, dcb_doit, NULL, 0);
+ 	rtnl_register(PF_UNSPEC, RTM_SETDCB, dcb_doit, NULL, 0);
+ 
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 851f542928a33..e1b1d080e908d 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -671,7 +671,7 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 6652d96329a0c..7c78e1215ae34 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3732,6 +3732,7 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
+ 	struct inet6_dev *idev;
+ 	struct inet6_ifaddr *ifa, *tmp;
+ 	bool keep_addr = false;
++	bool was_ready;
+ 	int state, i;
+ 
+ 	ASSERT_RTNL();
+@@ -3797,7 +3798,10 @@ restart:
+ 
+ 	addrconf_del_rs_timer(idev);
+ 
+-	/* Step 2: clear flags for stateless addrconf */
++	/* Step 2: clear flags for stateless addrconf, repeated down
++	 *         detection
++	 */
++	was_ready = idev->if_flags & IF_READY;
+ 	if (!unregister)
+ 		idev->if_flags &= ~(IF_RS_SENT|IF_RA_RCVD|IF_READY);
+ 
+@@ -3871,7 +3875,7 @@ restart:
+ 	if (unregister) {
+ 		ipv6_ac_destroy_dev(idev);
+ 		ipv6_mc_destroy_dev(idev);
+-	} else {
++	} else if (was_ready) {
+ 		ipv6_mc_down(idev);
+ 	}
+ 
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index f0bac6f7ab6bb..883b53fd78467 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -708,7 +708,7 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 22bf8fb617165..61970fd839c36 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1408,8 +1408,6 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
+ 		if (np->frag_size)
+ 			mtu = np->frag_size;
+ 	}
+-	if (mtu < IPV6_MIN_MTU)
+-		return -EINVAL;
+ 	cork->base.fragsize = mtu;
+ 	cork->base.gso_size = ipc6->gso_size;
+ 	cork->base.tx_flags = 0;
+@@ -1471,8 +1469,6 @@ static int __ip6_append_data(struct sock *sk,
+ 
+ 	fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len +
+ 			(opt ? opt->opt_nflen : 0);
+-	maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
+-		     sizeof(struct frag_hdr);
+ 
+ 	headersize = sizeof(struct ipv6hdr) +
+ 		     (opt ? opt->opt_flen + opt->opt_nflen : 0) +
+@@ -1480,6 +1476,13 @@ static int __ip6_append_data(struct sock *sk,
+ 		      sizeof(struct frag_hdr) : 0) +
+ 		     rt->rt6i_nfheader_len;
+ 
++	if (mtu < fragheaderlen ||
++	    ((mtu - fragheaderlen) & ~7) + fragheaderlen < sizeof(struct frag_hdr))
++		goto emsgsize;
++
++	maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
++		     sizeof(struct frag_hdr);
++
+ 	/* as per RFC 7112 section 5, the entire IPv6 Header Chain must fit
+ 	 * the first fragment
+ 	 */
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index a8861db52c187..909f937befd71 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1371,27 +1371,23 @@ static void mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld,
+ }
+ 
+ /* called with rcu_read_lock() */
+-int igmp6_event_query(struct sk_buff *skb)
++void igmp6_event_query(struct sk_buff *skb)
+ {
+ 	struct inet6_dev *idev = __in6_dev_get(skb->dev);
+ 
+-	if (!idev)
+-		return -EINVAL;
+-
+-	if (idev->dead) {
+-		kfree_skb(skb);
+-		return -ENODEV;
+-	}
++	if (!idev || idev->dead)
++		goto out;
+ 
+ 	spin_lock_bh(&idev->mc_query_lock);
+ 	if (skb_queue_len(&idev->mc_query_queue) < MLD_MAX_SKBS) {
+ 		__skb_queue_tail(&idev->mc_query_queue, skb);
+ 		if (!mod_delayed_work(mld_wq, &idev->mc_query_work, 0))
+ 			in6_dev_hold(idev);
++		skb = NULL;
+ 	}
+ 	spin_unlock_bh(&idev->mc_query_lock);
+-
+-	return 0;
++out:
++	kfree_skb(skb);
+ }
+ 
+ static void __mld_query_work(struct sk_buff *skb)
+@@ -1542,27 +1538,23 @@ static void mld_query_work(struct work_struct *work)
+ }
+ 
+ /* called with rcu_read_lock() */
+-int igmp6_event_report(struct sk_buff *skb)
++void igmp6_event_report(struct sk_buff *skb)
+ {
+ 	struct inet6_dev *idev = __in6_dev_get(skb->dev);
+ 
+-	if (!idev)
+-		return -EINVAL;
+-
+-	if (idev->dead) {
+-		kfree_skb(skb);
+-		return -ENODEV;
+-	}
++	if (!idev || idev->dead)
++		goto out;
+ 
+ 	spin_lock_bh(&idev->mc_report_lock);
+ 	if (skb_queue_len(&idev->mc_report_queue) < MLD_MAX_SKBS) {
+ 		__skb_queue_tail(&idev->mc_report_queue, skb);
+ 		if (!mod_delayed_work(mld_wq, &idev->mc_report_work, 0))
+ 			in6_dev_hold(idev);
++		skb = NULL;
+ 	}
+ 	spin_unlock_bh(&idev->mc_report_lock);
+-
+-	return 0;
++out:
++	kfree_skb(skb);
+ }
+ 
+ static void __mld_report_work(struct sk_buff *skb)
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 482c98ede19bb..3354a3b905b8f 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -376,7 +376,7 @@ struct ieee80211_mgd_auth_data {
+ 
+ 	u8 key[WLAN_KEY_LEN_WEP104];
+ 	u8 key_len, key_idx;
+-	bool done;
++	bool done, waiting;
+ 	bool peer_confirmed;
+ 	bool timeout_started;
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 311b4d9344959..404b846501618 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -37,6 +37,7 @@
+ #define IEEE80211_AUTH_TIMEOUT_SAE	(HZ * 2)
+ #define IEEE80211_AUTH_MAX_TRIES	3
+ #define IEEE80211_AUTH_WAIT_ASSOC	(HZ * 5)
++#define IEEE80211_AUTH_WAIT_SAE_RETRY	(HZ * 2)
+ #define IEEE80211_ASSOC_TIMEOUT		(HZ / 5)
+ #define IEEE80211_ASSOC_TIMEOUT_LONG	(HZ / 2)
+ #define IEEE80211_ASSOC_TIMEOUT_SHORT	(HZ / 10)
+@@ -3009,8 +3010,15 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 		    (status_code == WLAN_STATUS_ANTI_CLOG_REQUIRED ||
+ 		     (auth_transaction == 1 &&
+ 		      (status_code == WLAN_STATUS_SAE_HASH_TO_ELEMENT ||
+-		       status_code == WLAN_STATUS_SAE_PK))))
++		       status_code == WLAN_STATUS_SAE_PK)))) {
++			/* waiting for userspace now */
++			ifmgd->auth_data->waiting = true;
++			ifmgd->auth_data->timeout =
++				jiffies + IEEE80211_AUTH_WAIT_SAE_RETRY;
++			ifmgd->auth_data->timeout_started = true;
++			run_again(sdata, ifmgd->auth_data->timeout);
+ 			goto notify_driver;
++		}
+ 
+ 		sdata_info(sdata, "%pM denied authentication (status %d)\n",
+ 			   mgmt->sa, status_code);
+@@ -4597,10 +4605,10 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
+ 
+ 	if (ifmgd->auth_data && ifmgd->auth_data->timeout_started &&
+ 	    time_after(jiffies, ifmgd->auth_data->timeout)) {
+-		if (ifmgd->auth_data->done) {
++		if (ifmgd->auth_data->done || ifmgd->auth_data->waiting) {
+ 			/*
+-			 * ok ... we waited for assoc but userspace didn't,
+-			 * so let's just kill the auth data
++			 * ok ... we waited for assoc or continuation but
++			 * userspace didn't do it, so kill the auth data
+ 			 */
+ 			ieee80211_destroy_auth_data(sdata, false);
+ 		} else if (ieee80211_auth(sdata)) {
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index d2e8b84ed2836..2225b6b8689f0 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2602,7 +2602,8 @@ static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb,
+ 		 * address, so that the authenticator (e.g. hostapd) will see
+ 		 * the frame, but bridge won't forward it anywhere else. Note
+ 		 * that due to earlier filtering, the only other address can
+-		 * be the PAE group address.
++		 * be the PAE group address, unless the hardware allowed them
++		 * through in 802.3 offloaded mode.
+ 		 */
+ 		if (unlikely(skb->protocol == sdata->control_port_protocol &&
+ 			     !ether_addr_equal(ehdr->h_dest, sdata->vif.addr)))
+@@ -2917,13 +2918,13 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx)
+ 	    ether_addr_equal(sdata->vif.addr, hdr->addr3))
+ 		return RX_CONTINUE;
+ 
+-	ac = ieee80211_select_queue_80211(sdata, skb, hdr);
++	ac = ieee802_1d_to_ac[skb->priority];
+ 	q = sdata->vif.hw_queue[ac];
+ 	if (ieee80211_queue_stopped(&local->hw, q)) {
+ 		IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_congestion);
+ 		return RX_DROP_MONITOR;
+ 	}
+-	skb_set_queue_mapping(skb, q);
++	skb_set_queue_mapping(skb, ac);
+ 
+ 	if (!--mesh_hdr->ttl) {
+ 		if (!is_multicast_ether_addr(hdr->addr1))
+@@ -4509,12 +4510,7 @@ static void ieee80211_rx_8023(struct ieee80211_rx_data *rx,
+ 
+ 	/* deliver to local stack */
+ 	skb->protocol = eth_type_trans(skb, fast_rx->dev);
+-	memset(skb->cb, 0, sizeof(skb->cb));
+-	if (rx->list)
+-		list_add_tail(&skb->list, rx->list);
+-	else
+-		netif_receive_skb(skb);
+-
++	ieee80211_deliver_skb_to_local_stack(skb, rx);
+ }
+ 
+ static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx,
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 0cd55e4c30fab..7566c9dc66812 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -464,9 +464,12 @@ static bool mptcp_pending_data_fin(struct sock *sk, u64 *seq)
+ static void mptcp_set_datafin_timeout(const struct sock *sk)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
++	u32 retransmits;
+ 
+-	mptcp_sk(sk)->timer_ival = min(TCP_RTO_MAX,
+-				       TCP_RTO_MIN << icsk->icsk_retransmits);
++	retransmits = min_t(u32, icsk->icsk_retransmits,
++			    ilog2(TCP_RTO_MAX / TCP_RTO_MIN));
++
++	mptcp_sk(sk)->timer_ival = TCP_RTO_MIN << retransmits;
+ }
+ 
+ static void __mptcp_set_timeout(struct sock *sk, long tout)
+diff --git a/net/netfilter/core.c b/net/netfilter/core.c
+index 6dec9cd395f15..c3634744e37b4 100644
+--- a/net/netfilter/core.c
++++ b/net/netfilter/core.c
+@@ -428,14 +428,15 @@ static int __nf_register_net_hook(struct net *net, int pf,
+ 	p = nf_entry_dereference(*pp);
+ 	new_hooks = nf_hook_entries_grow(p, reg);
+ 
+-	if (!IS_ERR(new_hooks))
++	if (!IS_ERR(new_hooks)) {
++		hooks_validate(new_hooks);
+ 		rcu_assign_pointer(*pp, new_hooks);
++	}
+ 
+ 	mutex_unlock(&nf_hook_mutex);
+ 	if (IS_ERR(new_hooks))
+ 		return PTR_ERR(new_hooks);
+ 
+-	hooks_validate(new_hooks);
+ #ifdef CONFIG_NETFILTER_INGRESS
+ 	if (nf_ingress_hook(reg, pf))
+ 		net_inc_ingress_queue();
+diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
+index 6d12afabfe8a3..63d1516816b1f 100644
+--- a/net/netfilter/nf_queue.c
++++ b/net/netfilter/nf_queue.c
+@@ -46,6 +46,15 @@ void nf_unregister_queue_handler(void)
+ }
+ EXPORT_SYMBOL(nf_unregister_queue_handler);
+ 
++static void nf_queue_sock_put(struct sock *sk)
++{
++#ifdef CONFIG_INET
++	sock_gen_put(sk);
++#else
++	sock_put(sk);
++#endif
++}
++
+ static void nf_queue_entry_release_refs(struct nf_queue_entry *entry)
+ {
+ 	struct nf_hook_state *state = &entry->state;
+@@ -54,7 +63,7 @@ static void nf_queue_entry_release_refs(struct nf_queue_entry *entry)
+ 	dev_put(state->in);
+ 	dev_put(state->out);
+ 	if (state->sk)
+-		sock_put(state->sk);
++		nf_queue_sock_put(state->sk);
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+ 	dev_put(entry->physin);
+@@ -87,19 +96,21 @@ static void __nf_queue_entry_init_physdevs(struct nf_queue_entry *entry)
+ }
+ 
+ /* Bump dev refs so they don't vanish while packet is out */
+-void nf_queue_entry_get_refs(struct nf_queue_entry *entry)
++bool nf_queue_entry_get_refs(struct nf_queue_entry *entry)
+ {
+ 	struct nf_hook_state *state = &entry->state;
+ 
++	if (state->sk && !refcount_inc_not_zero(&state->sk->sk_refcnt))
++		return false;
++
+ 	dev_hold(state->in);
+ 	dev_hold(state->out);
+-	if (state->sk)
+-		sock_hold(state->sk);
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+ 	dev_hold(entry->physin);
+ 	dev_hold(entry->physout);
+ #endif
++	return true;
+ }
+ EXPORT_SYMBOL_GPL(nf_queue_entry_get_refs);
+ 
+@@ -169,6 +180,18 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
+ 		break;
+ 	}
+ 
++	if (skb_sk_is_prefetched(skb)) {
++		struct sock *sk = skb->sk;
++
++		if (!sk_is_refcounted(sk)) {
++			if (!refcount_inc_not_zero(&sk->sk_refcnt))
++				return -ENOTCONN;
++
++			/* drop refcount on skb_orphan */
++			skb->destructor = sock_edemux;
++		}
++	}
++
+ 	entry = kmalloc(sizeof(*entry) + route_key_size, GFP_ATOMIC);
+ 	if (!entry)
+ 		return -ENOMEM;
+@@ -187,7 +210,10 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
+ 
+ 	__nf_queue_entry_init_physdevs(entry);
+ 
+-	nf_queue_entry_get_refs(entry);
++	if (!nf_queue_entry_get_refs(entry)) {
++		kfree(entry);
++		return -ENOTCONN;
++	}
+ 
+ 	switch (entry->state.pf) {
+ 	case AF_INET:
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index a65b530975f54..2b2e0210a7f9e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4486,7 +4486,7 @@ static void nft_set_catchall_destroy(const struct nft_ctx *ctx,
+ 	list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
+ 		list_del_rcu(&catchall->list);
+ 		nft_set_elem_destroy(set, catchall->elem, true);
+-		kfree_rcu(catchall);
++		kfree_rcu(catchall, rcu);
+ 	}
+ }
+ 
+@@ -5653,7 +5653,7 @@ static void nft_setelem_catchall_remove(const struct net *net,
+ 	list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
+ 		if (catchall->elem == elem->priv) {
+ 			list_del_rcu(&catchall->list);
+-			kfree_rcu(catchall);
++			kfree_rcu(catchall, rcu);
+ 			break;
+ 		}
+ 	}
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index f0b9e21a24524..ff79c267d10c0 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -710,9 +710,15 @@ static struct nf_queue_entry *
+ nf_queue_entry_dup(struct nf_queue_entry *e)
+ {
+ 	struct nf_queue_entry *entry = kmemdup(e, e->size, GFP_ATOMIC);
+-	if (entry)
+-		nf_queue_entry_get_refs(entry);
+-	return entry;
++
++	if (!entry)
++		return NULL;
++
++	if (nf_queue_entry_get_refs(entry))
++		return entry;
++
++	kfree(entry);
++	return NULL;
+ }
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 10d2d81f93376..a0fb596459a36 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -183,7 +183,7 @@ static int smc_release(struct socket *sock)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct smc_sock *smc;
+-	int rc = 0;
++	int old_state, rc = 0;
+ 
+ 	if (!sk)
+ 		goto out;
+@@ -191,8 +191,10 @@ static int smc_release(struct socket *sock)
+ 	sock_hold(sk); /* sock_put below */
+ 	smc = smc_sk(sk);
+ 
++	old_state = sk->sk_state;
++
+ 	/* cleanup for a dangling non-blocking connect */
+-	if (smc->connect_nonblock && sk->sk_state == SMC_INIT)
++	if (smc->connect_nonblock && old_state == SMC_INIT)
+ 		tcp_abort(smc->clcsock->sk, ECONNABORTED);
+ 
+ 	if (cancel_work_sync(&smc->connect_work))
+@@ -206,6 +208,10 @@ static int smc_release(struct socket *sock)
+ 	else
+ 		lock_sock(sk);
+ 
++	if (old_state == SMC_INIT && sk->sk_state == SMC_ACTIVE &&
++	    !smc->use_fallback)
++		smc_close_active_abort(smc);
++
+ 	rc = __smc_release(smc);
+ 
+ 	/* detach socket */
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 6dddcfd6cf734..d1cdc891c2114 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1132,8 +1132,8 @@ void smc_conn_free(struct smc_connection *conn)
+ 			cancel_work_sync(&conn->abort_work);
+ 	}
+ 	if (!list_empty(&lgr->list)) {
+-		smc_lgr_unregister_conn(conn);
+ 		smc_buf_unuse(conn, lgr); /* allow buffer reuse */
++		smc_lgr_unregister_conn(conn);
+ 	}
+ 
+ 	if (!lgr->conns_num)
+@@ -1783,7 +1783,8 @@ int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
+ 		    (ini->smcd_version == SMC_V2 ||
+ 		     lgr->vlan_id == ini->vlan_id) &&
+ 		    (role == SMC_CLNT || ini->is_smcd ||
+-		     lgr->conns_num < SMC_RMBS_PER_LGR_MAX)) {
++		    (lgr->conns_num < SMC_RMBS_PER_LGR_MAX &&
++		      !bitmap_full(lgr->rtokens_used_mask, SMC_RMBS_PER_LGR_MAX)))) {
+ 			/* link group found */
+ 			ini->first_contact_local = 0;
+ 			conn->lgr = lgr;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index d293614d5fc65..b5074957e8812 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -2287,7 +2287,7 @@ static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr)
+ 	struct tipc_crypto *tx = tipc_net(rx->net)->crypto_tx;
+ 	struct tipc_aead_key *skey = NULL;
+ 	u16 key_gen = msg_key_gen(hdr);
+-	u16 size = msg_data_sz(hdr);
++	u32 size = msg_data_sz(hdr);
+ 	u8 *data = msg_data(hdr);
+ 	unsigned int keylen;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index a27b3b5fa210f..f732518287820 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -13379,6 +13379,9 @@ static int handle_nan_filter(struct nlattr *attr_filter,
+ 	i = 0;
+ 	nla_for_each_nested(attr, attr_filter, rem) {
+ 		filter[i].filter = nla_memdup(attr, GFP_KERNEL);
++		if (!filter[i].filter)
++			goto err;
++
+ 		filter[i].len = nla_len(attr);
+ 		i++;
+ 	}
+@@ -13391,6 +13394,15 @@ static int handle_nan_filter(struct nlattr *attr_filter,
+ 	}
+ 
+ 	return 0;
++
++err:
++	i = 0;
++	nla_for_each_nested(attr, attr_filter, rem) {
++		kfree(filter[i].filter);
++		i++;
++	}
++	kfree(filter);
++	return -ENOMEM;
+ }
+ 
+ static int nl80211_nan_add_func(struct sk_buff *skb,
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index e843b0d9e2a61..c255aac6b816b 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -223,6 +223,9 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 	if (x->encap || x->tfcpad)
+ 		return -EINVAL;
+ 
++	if (xuo->flags & ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND))
++		return -EINVAL;
++
+ 	dev = dev_get_by_index(net, xuo->ifindex);
+ 	if (!dev) {
+ 		if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
+@@ -261,7 +264,8 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 	xso->dev = dev;
+ 	xso->real_dev = dev;
+ 	xso->num_exthdrs = 1;
+-	xso->flags = xuo->flags;
++	/* Don't forward bit that is not implemented */
++	xso->flags = xuo->flags & ~XFRM_OFFLOAD_IPV6;
+ 
+ 	err = dev->xfrmdev_ops->xdo_dev_state_add(x);
+ 	if (err) {
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index 57448fc519fcd..4e3c62d1ad9e9 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -673,12 +673,12 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	struct net *net = xi->net;
+ 	struct xfrm_if_parms p = {};
+ 
++	xfrmi_netlink_parms(data, &p);
+ 	if (!p.if_id) {
+ 		NL_SET_ERR_MSG(extack, "if_id must be non zero");
+ 		return -EINVAL;
+ 	}
+ 
+-	xfrmi_netlink_parms(data, &p);
+ 	xi = xfrmi_locate(net, &p);
+ 	if (!xi) {
+ 		xi = netdev_priv(dev);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 78d51399a0f4b..100b4b3723e72 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2571,7 +2571,7 @@ void xfrm_state_delete_tunnel(struct xfrm_state *x)
+ }
+ EXPORT_SYMBOL(xfrm_state_delete_tunnel);
+ 
+-u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
++u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ {
+ 	const struct xfrm_type *type = READ_ONCE(x->type);
+ 	struct crypto_aead *aead;
+@@ -2602,17 +2602,7 @@ u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ 	return ((mtu - x->props.header_len - crypto_aead_authsize(aead) -
+ 		 net_adj) & ~(blksize - 1)) + net_adj - 2;
+ }
+-EXPORT_SYMBOL_GPL(__xfrm_state_mtu);
+-
+-u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
+-{
+-	mtu = __xfrm_state_mtu(x, mtu);
+-
+-	if (x->props.family == AF_INET6 && mtu < IPV6_MIN_MTU)
+-		return IPV6_MIN_MTU;
+-
+-	return mtu;
+-}
++EXPORT_SYMBOL_GPL(xfrm_state_mtu);
+ 
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
+ {
+diff --git a/sound/soc/codecs/cs4265.c b/sound/soc/codecs/cs4265.c
+index cffd6111afaca..b49cb92d7b9e8 100644
+--- a/sound/soc/codecs/cs4265.c
++++ b/sound/soc/codecs/cs4265.c
+@@ -150,7 +150,6 @@ static const struct snd_kcontrol_new cs4265_snd_controls[] = {
+ 	SOC_SINGLE("E to F Buffer Disable Switch", CS4265_SPDIF_CTL1,
+ 				6, 1, 0),
+ 	SOC_ENUM("C Data Access", cam_mode_enum),
+-	SOC_SINGLE("SPDIF Switch", CS4265_SPDIF_CTL2, 5, 1, 1),
+ 	SOC_SINGLE("Validity Bit Control Switch", CS4265_SPDIF_CTL2,
+ 				3, 1, 0),
+ 	SOC_ENUM("SPDIF Mono/Stereo", spdif_mono_stereo_enum),
+@@ -186,7 +185,7 @@ static const struct snd_soc_dapm_widget cs4265_dapm_widgets[] = {
+ 
+ 	SND_SOC_DAPM_SWITCH("Loopback", SND_SOC_NOPM, 0, 0,
+ 			&loopback_ctl),
+-	SND_SOC_DAPM_SWITCH("SPDIF", SND_SOC_NOPM, 0, 0,
++	SND_SOC_DAPM_SWITCH("SPDIF", CS4265_SPDIF_CTL2, 5, 1,
+ 			&spdif_switch),
+ 	SND_SOC_DAPM_SWITCH("DAC", CS4265_PWRCTL, 1, 1,
+ 			&dac_switch),
+diff --git a/sound/soc/codecs/rt5668.c b/sound/soc/codecs/rt5668.c
+index fb09715bf9328..5b12cbf2ba215 100644
+--- a/sound/soc/codecs/rt5668.c
++++ b/sound/soc/codecs/rt5668.c
+@@ -1022,11 +1022,13 @@ static void rt5668_jack_detect_handler(struct work_struct *work)
+ 		container_of(work, struct rt5668_priv, jack_detect_work.work);
+ 	int val, btn_type;
+ 
+-	while (!rt5668->component)
+-		usleep_range(10000, 15000);
+-
+-	while (!rt5668->component->card->instantiated)
+-		usleep_range(10000, 15000);
++	if (!rt5668->component || !rt5668->component->card ||
++	    !rt5668->component->card->instantiated) {
++		/* card not yet ready, try later */
++		mod_delayed_work(system_power_efficient_wq,
++				 &rt5668->jack_detect_work, msecs_to_jiffies(15));
++		return;
++	}
+ 
+ 	mutex_lock(&rt5668->calibrate_mutex);
+ 
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index 7e6e2f26accd0..e3643ae6de66d 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -1092,11 +1092,13 @@ void rt5682_jack_detect_handler(struct work_struct *work)
+ 	struct snd_soc_dapm_context *dapm;
+ 	int val, btn_type;
+ 
+-	while (!rt5682->component)
+-		usleep_range(10000, 15000);
+-
+-	while (!rt5682->component->card->instantiated)
+-		usleep_range(10000, 15000);
++	if (!rt5682->component || !rt5682->component->card ||
++	    !rt5682->component->card->instantiated) {
++		/* card not yet ready, try later */
++		mod_delayed_work(system_power_efficient_wq,
++				 &rt5682->jack_detect_work, msecs_to_jiffies(15));
++		return;
++	}
+ 
+ 	dapm = snd_soc_component_get_dapm(rt5682->component);
+ 
+diff --git a/sound/soc/codecs/rt5682s.c b/sound/soc/codecs/rt5682s.c
+index d49a4f68566d2..d79b548d23fa4 100644
+--- a/sound/soc/codecs/rt5682s.c
++++ b/sound/soc/codecs/rt5682s.c
+@@ -824,11 +824,13 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ 		container_of(work, struct rt5682s_priv, jack_detect_work.work);
+ 	int val, btn_type;
+ 
+-	while (!rt5682s->component)
+-		usleep_range(10000, 15000);
+-
+-	while (!rt5682s->component->card->instantiated)
+-		usleep_range(10000, 15000);
++	if (!rt5682s->component || !rt5682s->component->card ||
++	    !rt5682s->component->card->instantiated) {
++		/* card not yet ready, try later */
++		mod_delayed_work(system_power_efficient_wq,
++				 &rt5682s->jack_detect_work, msecs_to_jiffies(15));
++		return;
++	}
+ 
+ 	mutex_lock(&rt5682s->jdet_mutex);
+ 	mutex_lock(&rt5682s->calibrate_mutex);
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 53457a0d466d3..ee3782ecd7e3a 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -317,7 +317,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 		mask = BIT(sign_bit + 1) - 1;
+ 
+ 	val = ucontrol->value.integer.value[0];
+-	if (mc->platform_max && val > mc->platform_max)
++	if (mc->platform_max && ((int)val + min) > mc->platform_max)
+ 		return -EINVAL;
+ 	if (val > max - min)
+ 		return -EINVAL;
+@@ -330,7 +330,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ 	val = val << shift;
+ 	if (snd_soc_volsw_is_stereo(mc)) {
+ 		val2 = ucontrol->value.integer.value[1];
+-		if (mc->platform_max && val2 > mc->platform_max)
++		if (mc->platform_max && ((int)val2 + min) > mc->platform_max)
+ 			return -EINVAL;
+ 		if (val2 > max - min)
+ 			return -EINVAL;
+diff --git a/sound/x86/intel_hdmi_audio.c b/sound/x86/intel_hdmi_audio.c
+index 378826312abe6..7aa9472749002 100644
+--- a/sound/x86/intel_hdmi_audio.c
++++ b/sound/x86/intel_hdmi_audio.c
+@@ -1261,7 +1261,7 @@ static int had_pcm_mmap(struct snd_pcm_substream *substream,
+ {
+ 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ 	return remap_pfn_range(vma, vma->vm_start,
+-			substream->dma_buffer.addr >> PAGE_SHIFT,
++			substream->runtime->dma_addr >> PAGE_SHIFT,
+ 			vma->vm_end - vma->vm_start, vma->vm_page_prot);
+ }
+ 
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
+index bcb110e830cec..dea33dc937906 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
+@@ -50,8 +50,8 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
+ 			else
+ 				log_test "'$current_test' [$profile] overflow $target"
+ 			fi
++			RET_FIN=$(( RET_FIN || RET ))
+ 		done
+-		RET_FIN=$(( RET_FIN || RET ))
+ 	done
+ done
+ current_test=""
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh
+index 3e3e06ea5703c..86e787895f78b 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh
+@@ -60,7 +60,8 @@ __tc_police_test()
+ 
+ 	tc_police_rules_create $count $should_fail
+ 
+-	offload_count=$(tc filter show dev $swp1 ingress | grep in_hw | wc -l)
++	offload_count=$(tc -j filter show dev $swp1 ingress |
++			jq "[.[] | select(.options.in_hw == true)] | length")
+ 	((offload_count == count))
+ 	check_err_fail $should_fail $? "tc police offload count"
+ }
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+index e96e279e0533a..25432b8cd5bd2 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+@@ -19,7 +19,7 @@ fail() { # mesg
+ 
+ FILTER=set_ftrace_filter
+ FUNC1="schedule"
+-FUNC2="do_softirq"
++FUNC2="scheduler_tick"
+ 
+ ALL_FUNCS="#### all functions enabled ####"
+ 
+diff --git a/tools/testing/selftests/seccomp/Makefile b/tools/testing/selftests/seccomp/Makefile
+index 0ebfe8b0e147f..585f7a0c10cbe 100644
+--- a/tools/testing/selftests/seccomp/Makefile
++++ b/tools/testing/selftests/seccomp/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -Wl,-no-as-needed -Wall
++CFLAGS += -Wl,-no-as-needed -Wall -isystem ../../../../usr/include/
+ LDFLAGS += -lpthread
+ 
+ TEST_GEN_PROGS := seccomp_bpf seccomp_benchmark



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-11 12:04 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-11 12:04 UTC (permalink / raw)
  To: gentoo-commits

commit:     304ad425fa4af8e749b9913045f6262b59c0ca5b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 11 12:04:23 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 11 12:04:23 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=304ad425

Linux patch 5.16.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1013_linux-5.16.14.patch | 3836 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3840 insertions(+)

diff --git a/0000_README b/0000_README
index f1b481e1..c74c462d 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-5.16.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.13
 
+Patch:  1013_linux-5.16.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-5.16.14.patch b/1013_linux-5.16.14.patch
new file mode 100644
index 00000000..9286dde6
--- /dev/null
+++ b/1013_linux-5.16.14.patch
@@ -0,0 +1,3836 @@
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index a2b22d5640ec9..9e9556826450b 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -60,8 +60,8 @@ privileged data touched during the speculative execution.
+ Spectre variant 1 attacks take advantage of speculative execution of
+ conditional branches, while Spectre variant 2 attacks use speculative
+ execution of indirect branches to leak privileged memory.
+-See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
+-:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
++See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[6] <spec_ref6>`
++:ref:`[7] <spec_ref7>` :ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
+ 
+ Spectre variant 1 (Bounds Check Bypass)
+ ---------------------------------------
+@@ -131,6 +131,19 @@ steer its indirect branch speculations to gadget code, and measure the
+ speculative execution's side effects left in level 1 cache to infer the
+ victim's data.
+ 
++Yet another variant 2 attack vector is for the attacker to poison the
++Branch History Buffer (BHB) to speculatively steer an indirect branch
++to a specific Branch Target Buffer (BTB) entry, even if the entry isn't
++associated with the source address of the indirect branch. Specifically,
++the BHB might be shared across privilege levels even in the presence of
++Enhanced IBRS.
++
++Currently the only known real-world BHB attack vector is via
++unprivileged eBPF. Therefore, it's highly recommended to not enable
++unprivileged eBPF, especially when eIBRS is used (without retpolines).
++For a full mitigation against BHB attacks, it's recommended to use
++retpolines (or eIBRS combined with retpolines).
++
+ Attack scenarios
+ ----------------
+ 
+@@ -364,13 +377,15 @@ The possible values in this file are:
+ 
+   - Kernel status:
+ 
+-  ====================================  =================================
+-  'Not affected'                        The processor is not vulnerable
+-  'Vulnerable'                          Vulnerable, no mitigation
+-  'Mitigation: Full generic retpoline'  Software-focused mitigation
+-  'Mitigation: Full AMD retpoline'      AMD-specific software mitigation
+-  'Mitigation: Enhanced IBRS'           Hardware-focused mitigation
+-  ====================================  =================================
++  ========================================  =================================
++  'Not affected'                            The processor is not vulnerable
++  'Mitigation: None'                        Vulnerable, no mitigation
++  'Mitigation: Retpolines'                  Use Retpoline thunks
++  'Mitigation: LFENCE'                      Use LFENCE instructions
++  'Mitigation: Enhanced IBRS'               Hardware-focused mitigation
++  'Mitigation: Enhanced IBRS + Retpolines'  Hardware-focused + Retpolines
++  'Mitigation: Enhanced IBRS + LFENCE'      Hardware-focused + LFENCE
++  ========================================  =================================
+ 
+   - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
+     used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
+@@ -583,12 +598,13 @@ kernel command line.
+ 
+ 		Specific mitigations can also be selected manually:
+ 
+-		retpoline
+-					replace indirect branches
+-		retpoline,generic
+-					google's original retpoline
+-		retpoline,amd
+-					AMD-specific minimal thunk
++                retpoline               auto pick between generic,lfence
++                retpoline,generic       Retpolines
++                retpoline,lfence        LFENCE; indirect branch
++                retpoline,amd           alias for retpoline,lfence
++                eibrs                   enhanced IBRS
++                eibrs,retpoline         enhanced IBRS + Retpolines
++                eibrs,lfence            enhanced IBRS + LFENCE
+ 
+ 		Not specifying this option is equivalent to
+ 		spectre_v2=auto.
+@@ -599,7 +615,7 @@ kernel command line.
+ 		spectre_v2=off. Spectre variant 1 mitigations
+ 		cannot be disabled.
+ 
+-For spectre_v2_user see :doc:`/admin-guide/kernel-parameters`.
++For spectre_v2_user see Documentation/admin-guide/kernel-parameters.txt
+ 
+ Mitigation selection guide
+ --------------------------
+@@ -681,7 +697,7 @@ AMD white papers:
+ 
+ .. _spec_ref6:
+ 
+-[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
++[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf>`_.
+ 
+ ARM white papers:
+ 
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 2fba82431efbe..391b3f9055fe0 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5303,8 +5303,12 @@
+ 			Specific mitigations can also be selected manually:
+ 
+ 			retpoline	  - replace indirect branches
+-			retpoline,generic - google's original retpoline
+-			retpoline,amd     - AMD-specific minimal thunk
++			retpoline,generic - Retpolines
++			retpoline,lfence  - LFENCE; indirect branch
++			retpoline,amd     - alias for retpoline,lfence
++			eibrs		  - enhanced IBRS
++			eibrs,retpoline   - enhanced IBRS + Retpolines
++			eibrs,lfence      - enhanced IBRS + LFENCE
+ 
+ 			Not specifying this option is equivalent to
+ 			spectre_v2=auto.
+diff --git a/Documentation/arm64/cpu-feature-registers.rst b/Documentation/arm64/cpu-feature-registers.rst
+index 9f9b8fd060892..749ae970c3195 100644
+--- a/Documentation/arm64/cpu-feature-registers.rst
++++ b/Documentation/arm64/cpu-feature-registers.rst
+@@ -275,6 +275,23 @@ infrastructure:
+      | SVEVer                       | [3-0]   |    y    |
+      +------------------------------+---------+---------+
+ 
++  8) ID_AA64MMFR1_EL1 - Memory model feature register 1
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | AFP                          | [47-44] |    y    |
++     +------------------------------+---------+---------+
++
++  9) ID_AA64ISAR2_EL1 - Instruction set attribute register 2
++
++     +------------------------------+---------+---------+
++     | Name                         |  bits   | visible |
++     +------------------------------+---------+---------+
++     | RPRES                        | [7-4]   |    y    |
++     +------------------------------+---------+---------+
++
++
+ Appendix I: Example
+ -------------------
+ 
+diff --git a/Documentation/arm64/elf_hwcaps.rst b/Documentation/arm64/elf_hwcaps.rst
+index af106af8e1c00..b72ff17d600ae 100644
+--- a/Documentation/arm64/elf_hwcaps.rst
++++ b/Documentation/arm64/elf_hwcaps.rst
+@@ -251,6 +251,14 @@ HWCAP2_ECV
+ 
+     Functionality implied by ID_AA64MMFR0_EL1.ECV == 0b0001.
+ 
++HWCAP2_AFP
++
++    Functionality implied by ID_AA64MFR1_EL1.AFP == 0b0001.
++
++HWCAP2_RPRES
++
++    Functionality implied by ID_AA64ISAR2_EL1.RPRES == 0b0001.
++
+ 4. Unused AT_HWCAP bits
+ -----------------------
+ 
+diff --git a/Makefile b/Makefile
+index 6702e44821eb0..86835419075f6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index 6fe67963ba5a0..aee73ef5b3dc4 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -107,6 +107,16 @@
+ 	.endm
+ #endif
+ 
++#if __LINUX_ARM_ARCH__ < 7
++	.macro	dsb, args
++	mcr	p15, 0, r0, c7, c10, 4
++	.endm
++
++	.macro	isb, args
++	mcr	p15, 0, r0, c7, c5, 4
++	.endm
++#endif
++
+ 	.macro asm_trace_hardirqs_off, save=1
+ #if defined(CONFIG_TRACE_IRQFLAGS)
+ 	.if \save
+diff --git a/arch/arm/include/asm/spectre.h b/arch/arm/include/asm/spectre.h
+new file mode 100644
+index 0000000000000..d1fa5607d3aa3
+--- /dev/null
++++ b/arch/arm/include/asm/spectre.h
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++#ifndef __ASM_SPECTRE_H
++#define __ASM_SPECTRE_H
++
++enum {
++	SPECTRE_UNAFFECTED,
++	SPECTRE_MITIGATED,
++	SPECTRE_VULNERABLE,
++};
++
++enum {
++	__SPECTRE_V2_METHOD_BPIALL,
++	__SPECTRE_V2_METHOD_ICIALLU,
++	__SPECTRE_V2_METHOD_SMC,
++	__SPECTRE_V2_METHOD_HVC,
++	__SPECTRE_V2_METHOD_LOOP8,
++};
++
++enum {
++	SPECTRE_V2_METHOD_BPIALL = BIT(__SPECTRE_V2_METHOD_BPIALL),
++	SPECTRE_V2_METHOD_ICIALLU = BIT(__SPECTRE_V2_METHOD_ICIALLU),
++	SPECTRE_V2_METHOD_SMC = BIT(__SPECTRE_V2_METHOD_SMC),
++	SPECTRE_V2_METHOD_HVC = BIT(__SPECTRE_V2_METHOD_HVC),
++	SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
++};
++
++void spectre_v2_update_state(unsigned int state, unsigned int methods);
++
++int spectre_bhb_update_vectors(unsigned int method);
++
++#endif
+diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
+index 4a91428c324db..fad45c884e988 100644
+--- a/arch/arm/include/asm/vmlinux.lds.h
++++ b/arch/arm/include/asm/vmlinux.lds.h
+@@ -26,6 +26,19 @@
+ #define ARM_MMU_DISCARD(x)	x
+ #endif
+ 
++/*
++ * ld.lld does not support NOCROSSREFS:
++ * https://github.com/ClangBuiltLinux/linux/issues/1609
++ */
++#ifdef CONFIG_LD_IS_LLD
++#define NOCROSSREFS
++#endif
++
++/* Set start/end symbol names to the LMA for the section */
++#define ARM_LMA(sym, section)						\
++	sym##_start = LOADADDR(section);				\
++	sym##_end = LOADADDR(section) + SIZEOF(section)
++
+ #define PROC_INFO							\
+ 		. = ALIGN(4);						\
+ 		__proc_info_begin = .;					\
+@@ -110,19 +123,31 @@
+  * only thing that matters is their relative offsets
+  */
+ #define ARM_VECTORS							\
+-	__vectors_start = .;						\
+-	.vectors 0xffff0000 : AT(__vectors_start) {			\
+-		*(.vectors)						\
++	__vectors_lma = .;						\
++	OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) {		\
++		.vectors {						\
++			*(.vectors)					\
++		}							\
++		.vectors.bhb.loop8 {					\
++			*(.vectors.bhb.loop8)				\
++		}							\
++		.vectors.bhb.bpiall {					\
++			*(.vectors.bhb.bpiall)				\
++		}							\
+ 	}								\
+-	. = __vectors_start + SIZEOF(.vectors);				\
+-	__vectors_end = .;						\
++	ARM_LMA(__vectors, .vectors);					\
++	ARM_LMA(__vectors_bhb_loop8, .vectors.bhb.loop8);		\
++	ARM_LMA(__vectors_bhb_bpiall, .vectors.bhb.bpiall);		\
++	. = __vectors_lma + SIZEOF(.vectors) +				\
++		SIZEOF(.vectors.bhb.loop8) +				\
++		SIZEOF(.vectors.bhb.bpiall);				\
+ 									\
+-	__stubs_start = .;						\
+-	.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) {		\
++	__stubs_lma = .;						\
++	.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_lma) {		\
+ 		*(.stubs)						\
+ 	}								\
+-	. = __stubs_start + SIZEOF(.stubs);				\
+-	__stubs_end = .;						\
++	ARM_LMA(__stubs, .stubs);					\
++	. = __stubs_lma + SIZEOF(.stubs);				\
+ 									\
+ 	PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));
+ 
+diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
+index ae295a3bcfefd..6ef3b535b7bf7 100644
+--- a/arch/arm/kernel/Makefile
++++ b/arch/arm/kernel/Makefile
+@@ -106,4 +106,6 @@ endif
+ 
+ obj-$(CONFIG_HAVE_ARM_SMCCC)	+= smccc-call.o
+ 
++obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += spectre.o
++
+ extra-y := $(head-y) vmlinux.lds
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 5cd057859fe90..676703cbfe4b2 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -1002,12 +1002,11 @@ vector_\name:
+ 	sub	lr, lr, #\correction
+ 	.endif
+ 
+-	@
+-	@ Save r0, lr_<exception> (parent PC) and spsr_<exception>
+-	@ (parent CPSR)
+-	@
++	@ Save r0, lr_<exception> (parent PC)
+ 	stmia	sp, {r0, lr}		@ save r0, lr
+-	mrs	lr, spsr
++
++	@ Save spsr_<exception> (parent CPSR)
++2:	mrs	lr, spsr
+ 	str	lr, [sp, #8]		@ save spsr
+ 
+ 	@
+@@ -1028,6 +1027,44 @@ vector_\name:
+ 	movs	pc, lr			@ branch to handler in SVC mode
+ ENDPROC(vector_\name)
+ 
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++	.subsection 1
++	.align 5
++vector_bhb_loop8_\name:
++	.if \correction
++	sub	lr, lr, #\correction
++	.endif
++
++	@ Save r0, lr_<exception> (parent PC)
++	stmia	sp, {r0, lr}
++
++	@ bhb workaround
++	mov	r0, #8
++1:	b	. + 4
++	subs	r0, r0, #1
++	bne	1b
++	dsb
++	isb
++	b	2b
++ENDPROC(vector_bhb_loop8_\name)
++
++vector_bhb_bpiall_\name:
++	.if \correction
++	sub	lr, lr, #\correction
++	.endif
++
++	@ Save r0, lr_<exception> (parent PC)
++	stmia	sp, {r0, lr}
++
++	@ bhb workaround
++	mcr	p15, 0, r0, c7, c5, 6	@ BPIALL
++	@ isb not needed due to "movs pc, lr" in the vector stub
++	@ which gives a "context synchronisation".
++	b	2b
++ENDPROC(vector_bhb_bpiall_\name)
++	.previous
++#endif
++
+ 	.align	2
+ 	@ handler addresses follow this label
+ 1:
+@@ -1036,6 +1073,10 @@ ENDPROC(vector_\name)
+ 	.section .stubs, "ax", %progbits
+ 	@ This must be the first word
+ 	.word	vector_swi
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++	.word	vector_bhb_loop8_swi
++	.word	vector_bhb_bpiall_swi
++#endif
+ 
+ vector_rst:
+  ARM(	swi	SYS_ERROR0	)
+@@ -1150,8 +1191,10 @@ vector_addrexcptn:
+  * FIQ "NMI" handler
+  *-----------------------------------------------------------------------------
+  * Handle a FIQ using the SVC stack allowing FIQ act like NMI on x86
+- * systems.
++ * systems. This must be the last vector stub, so let's place it in its own
++ * subsection.
+  */
++	.subsection 2
+ 	vector_stub	fiq, FIQ_MODE, 4
+ 
+ 	.long	__fiq_usr			@  0  (USR_26 / USR_32)
+@@ -1184,6 +1227,30 @@ vector_addrexcptn:
+ 	W(b)	vector_irq
+ 	W(b)	vector_fiq
+ 
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++	.section .vectors.bhb.loop8, "ax", %progbits
++.L__vectors_bhb_loop8_start:
++	W(b)	vector_rst
++	W(b)	vector_bhb_loop8_und
++	W(ldr)	pc, .L__vectors_bhb_loop8_start + 0x1004
++	W(b)	vector_bhb_loop8_pabt
++	W(b)	vector_bhb_loop8_dabt
++	W(b)	vector_addrexcptn
++	W(b)	vector_bhb_loop8_irq
++	W(b)	vector_bhb_loop8_fiq
++
++	.section .vectors.bhb.bpiall, "ax", %progbits
++.L__vectors_bhb_bpiall_start:
++	W(b)	vector_rst
++	W(b)	vector_bhb_bpiall_und
++	W(ldr)	pc, .L__vectors_bhb_bpiall_start + 0x1008
++	W(b)	vector_bhb_bpiall_pabt
++	W(b)	vector_bhb_bpiall_dabt
++	W(b)	vector_addrexcptn
++	W(b)	vector_bhb_bpiall_irq
++	W(b)	vector_bhb_bpiall_fiq
++#endif
++
+ 	.data
+ 	.align	2
+ 
+diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
+index ac86c34682bb5..dbc1913ee30bc 100644
+--- a/arch/arm/kernel/entry-common.S
++++ b/arch/arm/kernel/entry-common.S
+@@ -153,6 +153,29 @@ ENDPROC(ret_from_fork)
+  *-----------------------------------------------------------------------------
+  */
+ 
++	.align	5
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++ENTRY(vector_bhb_loop8_swi)
++	sub	sp, sp, #PT_REGS_SIZE
++	stmia	sp, {r0 - r12}
++	mov	r8, #8
++1:	b	2f
++2:	subs	r8, r8, #1
++	bne	1b
++	dsb
++	isb
++	b	3f
++ENDPROC(vector_bhb_loop8_swi)
++
++	.align	5
++ENTRY(vector_bhb_bpiall_swi)
++	sub	sp, sp, #PT_REGS_SIZE
++	stmia	sp, {r0 - r12}
++	mcr	p15, 0, r8, c7, c5, 6	@ BPIALL
++	isb
++	b	3f
++ENDPROC(vector_bhb_bpiall_swi)
++#endif
+ 	.align	5
+ ENTRY(vector_swi)
+ #ifdef CONFIG_CPU_V7M
+@@ -160,6 +183,7 @@ ENTRY(vector_swi)
+ #else
+ 	sub	sp, sp, #PT_REGS_SIZE
+ 	stmia	sp, {r0 - r12}			@ Calling r0 - r12
++3:
+  ARM(	add	r8, sp, #S_PC		)
+  ARM(	stmdb	r8, {sp, lr}^		)	@ Calling sp, lr
+  THUMB(	mov	r8, sp			)
+diff --git a/arch/arm/kernel/spectre.c b/arch/arm/kernel/spectre.c
+new file mode 100644
+index 0000000000000..0dcefc36fb7a0
+--- /dev/null
++++ b/arch/arm/kernel/spectre.c
+@@ -0,0 +1,71 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include <linux/bpf.h>
++#include <linux/cpu.h>
++#include <linux/device.h>
++
++#include <asm/spectre.h>
++
++static bool _unprivileged_ebpf_enabled(void)
++{
++#ifdef CONFIG_BPF_SYSCALL
++	return !sysctl_unprivileged_bpf_disabled;
++#else
++	return false;
++#endif
++}
++
++ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
++			    char *buf)
++{
++	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
++}
++
++static unsigned int spectre_v2_state;
++static unsigned int spectre_v2_methods;
++
++void spectre_v2_update_state(unsigned int state, unsigned int method)
++{
++	if (state > spectre_v2_state)
++		spectre_v2_state = state;
++	spectre_v2_methods |= method;
++}
++
++ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
++			    char *buf)
++{
++	const char *method;
++
++	if (spectre_v2_state == SPECTRE_UNAFFECTED)
++		return sprintf(buf, "%s\n", "Not affected");
++
++	if (spectre_v2_state != SPECTRE_MITIGATED)
++		return sprintf(buf, "%s\n", "Vulnerable");
++
++	if (_unprivileged_ebpf_enabled())
++		return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n");
++
++	switch (spectre_v2_methods) {
++	case SPECTRE_V2_METHOD_BPIALL:
++		method = "Branch predictor hardening";
++		break;
++
++	case SPECTRE_V2_METHOD_ICIALLU:
++		method = "I-cache invalidation";
++		break;
++
++	case SPECTRE_V2_METHOD_SMC:
++	case SPECTRE_V2_METHOD_HVC:
++		method = "Firmware call";
++		break;
++
++	case SPECTRE_V2_METHOD_LOOP8:
++		method = "History overwrite";
++		break;
++
++	default:
++		method = "Multiple mitigations";
++		break;
++	}
++
++	return sprintf(buf, "Mitigation: %s\n", method);
++}
+diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
+index 195dff58bafc7..90c887aa67a44 100644
+--- a/arch/arm/kernel/traps.c
++++ b/arch/arm/kernel/traps.c
+@@ -30,6 +30,7 @@
+ #include <linux/atomic.h>
+ #include <asm/cacheflush.h>
+ #include <asm/exception.h>
++#include <asm/spectre.h>
+ #include <asm/unistd.h>
+ #include <asm/traps.h>
+ #include <asm/ptrace.h>
+@@ -787,10 +788,59 @@ static inline void __init kuser_init(void *vectors)
+ }
+ #endif
+ 
++#ifndef CONFIG_CPU_V7M
++static void copy_from_lma(void *vma, void *lma_start, void *lma_end)
++{
++	memcpy(vma, lma_start, lma_end - lma_start);
++}
++
++static void flush_vectors(void *vma, size_t offset, size_t size)
++{
++	unsigned long start = (unsigned long)vma + offset;
++	unsigned long end = start + size;
++
++	flush_icache_range(start, end);
++}
++
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++int spectre_bhb_update_vectors(unsigned int method)
++{
++	extern char __vectors_bhb_bpiall_start[], __vectors_bhb_bpiall_end[];
++	extern char __vectors_bhb_loop8_start[], __vectors_bhb_loop8_end[];
++	void *vec_start, *vec_end;
++
++	if (system_state >= SYSTEM_FREEING_INITMEM) {
++		pr_err("CPU%u: Spectre BHB workaround too late - system vulnerable\n",
++		       smp_processor_id());
++		return SPECTRE_VULNERABLE;
++	}
++
++	switch (method) {
++	case SPECTRE_V2_METHOD_LOOP8:
++		vec_start = __vectors_bhb_loop8_start;
++		vec_end = __vectors_bhb_loop8_end;
++		break;
++
++	case SPECTRE_V2_METHOD_BPIALL:
++		vec_start = __vectors_bhb_bpiall_start;
++		vec_end = __vectors_bhb_bpiall_end;
++		break;
++
++	default:
++		pr_err("CPU%u: unknown Spectre BHB state %d\n",
++		       smp_processor_id(), method);
++		return SPECTRE_VULNERABLE;
++	}
++
++	copy_from_lma(vectors_page, vec_start, vec_end);
++	flush_vectors(vectors_page, 0, vec_end - vec_start);
++
++	return SPECTRE_MITIGATED;
++}
++#endif
++
+ void __init early_trap_init(void *vectors_base)
+ {
+-#ifndef CONFIG_CPU_V7M
+-	unsigned long vectors = (unsigned long)vectors_base;
+ 	extern char __stubs_start[], __stubs_end[];
+ 	extern char __vectors_start[], __vectors_end[];
+ 	unsigned i;
+@@ -811,17 +861,20 @@ void __init early_trap_init(void *vectors_base)
+ 	 * into the vector page, mapped at 0xffff0000, and ensure these
+ 	 * are visible to the instruction stream.
+ 	 */
+-	memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start);
+-	memcpy((void *)vectors + 0x1000, __stubs_start, __stubs_end - __stubs_start);
++	copy_from_lma(vectors_base, __vectors_start, __vectors_end);
++	copy_from_lma(vectors_base + 0x1000, __stubs_start, __stubs_end);
+ 
+ 	kuser_init(vectors_base);
+ 
+-	flush_icache_range(vectors, vectors + PAGE_SIZE * 2);
++	flush_vectors(vectors_base, 0, PAGE_SIZE * 2);
++}
+ #else /* ifndef CONFIG_CPU_V7M */
++void __init early_trap_init(void *vectors_base)
++{
+ 	/*
+ 	 * on V7-M there is no need to copy the vector table to a dedicated
+ 	 * memory area. The address is configurable and so a table in the kernel
+ 	 * image can be used.
+ 	 */
+-#endif
+ }
++#endif
+diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
+index 58afba3467299..9724c16e90767 100644
+--- a/arch/arm/mm/Kconfig
++++ b/arch/arm/mm/Kconfig
+@@ -830,6 +830,7 @@ config CPU_BPREDICT_DISABLE
+ 
+ config CPU_SPECTRE
+ 	bool
++	select GENERIC_CPU_VULNERABILITIES
+ 
+ config HARDEN_BRANCH_PREDICTOR
+ 	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+@@ -850,6 +851,16 @@ config HARDEN_BRANCH_PREDICTOR
+ 
+ 	   If unsure, say Y.
+ 
++config HARDEN_BRANCH_HISTORY
++	bool "Harden Spectre style attacks against branch history" if EXPERT
++	depends on CPU_SPECTRE
++	default y
++	help
++	  Speculation attacks against some high-performance processors can
++	  make use of branch history to influence future speculation. When
++	  taking an exception, a sequence of branches overwrites the branch
++	  history, or branch history is invalidated.
++
+ config TLS_REG_EMUL
+ 	bool
+ 	select NEED_KUSER_HELPERS
+diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
+index 114c05ab4dd91..06dbfb968182d 100644
+--- a/arch/arm/mm/proc-v7-bugs.c
++++ b/arch/arm/mm/proc-v7-bugs.c
+@@ -6,8 +6,35 @@
+ #include <asm/cp15.h>
+ #include <asm/cputype.h>
+ #include <asm/proc-fns.h>
++#include <asm/spectre.h>
+ #include <asm/system_misc.h>
+ 
++#ifdef CONFIG_ARM_PSCI
++static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
++{
++	struct arm_smccc_res res;
++
++	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++			     ARM_SMCCC_ARCH_WORKAROUND_1, &res);
++
++	switch ((int)res.a0) {
++	case SMCCC_RET_SUCCESS:
++		return SPECTRE_MITIGATED;
++
++	case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
++		return SPECTRE_UNAFFECTED;
++
++	default:
++		return SPECTRE_VULNERABLE;
++	}
++}
++#else
++static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
++{
++	return SPECTRE_VULNERABLE;
++}
++#endif
++
+ #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+ 
+@@ -36,13 +63,61 @@ static void __maybe_unused call_hvc_arch_workaround_1(void)
+ 	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+ }
+ 
+-static void cpu_v7_spectre_init(void)
++static unsigned int spectre_v2_install_workaround(unsigned int method)
+ {
+ 	const char *spectre_v2_method = NULL;
+ 	int cpu = smp_processor_id();
+ 
+ 	if (per_cpu(harden_branch_predictor_fn, cpu))
+-		return;
++		return SPECTRE_MITIGATED;
++
++	switch (method) {
++	case SPECTRE_V2_METHOD_BPIALL:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			harden_branch_predictor_bpiall;
++		spectre_v2_method = "BPIALL";
++		break;
++
++	case SPECTRE_V2_METHOD_ICIALLU:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			harden_branch_predictor_iciallu;
++		spectre_v2_method = "ICIALLU";
++		break;
++
++	case SPECTRE_V2_METHOD_HVC:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			call_hvc_arch_workaround_1;
++		cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
++		spectre_v2_method = "hypervisor";
++		break;
++
++	case SPECTRE_V2_METHOD_SMC:
++		per_cpu(harden_branch_predictor_fn, cpu) =
++			call_smc_arch_workaround_1;
++		cpu_do_switch_mm = cpu_v7_smc_switch_mm;
++		spectre_v2_method = "firmware";
++		break;
++	}
++
++	if (spectre_v2_method)
++		pr_info("CPU%u: Spectre v2: using %s workaround\n",
++			smp_processor_id(), spectre_v2_method);
++
++	return SPECTRE_MITIGATED;
++}
++#else
++static unsigned int spectre_v2_install_workaround(unsigned int method)
++{
++	pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n",
++		smp_processor_id());
++
++	return SPECTRE_VULNERABLE;
++}
++#endif
++
++static void cpu_v7_spectre_v2_init(void)
++{
++	unsigned int state, method = 0;
+ 
+ 	switch (read_cpuid_part()) {
+ 	case ARM_CPU_PART_CORTEX_A8:
+@@ -51,69 +126,133 @@ static void cpu_v7_spectre_init(void)
+ 	case ARM_CPU_PART_CORTEX_A17:
+ 	case ARM_CPU_PART_CORTEX_A73:
+ 	case ARM_CPU_PART_CORTEX_A75:
+-		per_cpu(harden_branch_predictor_fn, cpu) =
+-			harden_branch_predictor_bpiall;
+-		spectre_v2_method = "BPIALL";
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_BPIALL;
+ 		break;
+ 
+ 	case ARM_CPU_PART_CORTEX_A15:
+ 	case ARM_CPU_PART_BRAHMA_B15:
+-		per_cpu(harden_branch_predictor_fn, cpu) =
+-			harden_branch_predictor_iciallu;
+-		spectre_v2_method = "ICIALLU";
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_ICIALLU;
+ 		break;
+ 
+-#ifdef CONFIG_ARM_PSCI
+ 	case ARM_CPU_PART_BRAHMA_B53:
+ 		/* Requires no workaround */
++		state = SPECTRE_UNAFFECTED;
+ 		break;
++
+ 	default:
+ 		/* Other ARM CPUs require no workaround */
+-		if (read_cpuid_implementor() == ARM_CPU_IMP_ARM)
++		if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) {
++			state = SPECTRE_UNAFFECTED;
+ 			break;
++		}
++
+ 		fallthrough;
+-		/* Cortex A57/A72 require firmware workaround */
+-	case ARM_CPU_PART_CORTEX_A57:
+-	case ARM_CPU_PART_CORTEX_A72: {
+-		struct arm_smccc_res res;
+ 
+-		arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+-				     ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+-		if ((int)res.a0 != 0)
+-			return;
++	/* Cortex A57/A72 require firmware workaround */
++	case ARM_CPU_PART_CORTEX_A57:
++	case ARM_CPU_PART_CORTEX_A72:
++		state = spectre_v2_get_cpu_fw_mitigation_state();
++		if (state != SPECTRE_MITIGATED)
++			break;
+ 
+ 		switch (arm_smccc_1_1_get_conduit()) {
+ 		case SMCCC_CONDUIT_HVC:
+-			per_cpu(harden_branch_predictor_fn, cpu) =
+-				call_hvc_arch_workaround_1;
+-			cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
+-			spectre_v2_method = "hypervisor";
++			method = SPECTRE_V2_METHOD_HVC;
+ 			break;
+ 
+ 		case SMCCC_CONDUIT_SMC:
+-			per_cpu(harden_branch_predictor_fn, cpu) =
+-				call_smc_arch_workaround_1;
+-			cpu_do_switch_mm = cpu_v7_smc_switch_mm;
+-			spectre_v2_method = "firmware";
++			method = SPECTRE_V2_METHOD_SMC;
+ 			break;
+ 
+ 		default:
++			state = SPECTRE_VULNERABLE;
+ 			break;
+ 		}
+ 	}
+-#endif
++
++	if (state == SPECTRE_MITIGATED)
++		state = spectre_v2_install_workaround(method);
++
++	spectre_v2_update_state(state, method);
++}
++
++#ifdef CONFIG_HARDEN_BRANCH_HISTORY
++static int spectre_bhb_method;
++
++static const char *spectre_bhb_method_name(int method)
++{
++	switch (method) {
++	case SPECTRE_V2_METHOD_LOOP8:
++		return "loop";
++
++	case SPECTRE_V2_METHOD_BPIALL:
++		return "BPIALL";
++
++	default:
++		return "unknown";
+ 	}
++}
+ 
+-	if (spectre_v2_method)
+-		pr_info("CPU%u: Spectre v2: using %s workaround\n",
+-			smp_processor_id(), spectre_v2_method);
++static int spectre_bhb_install_workaround(int method)
++{
++	if (spectre_bhb_method != method) {
++		if (spectre_bhb_method) {
++			pr_err("CPU%u: Spectre BHB: method disagreement, system vulnerable\n",
++			       smp_processor_id());
++
++			return SPECTRE_VULNERABLE;
++		}
++
++		if (spectre_bhb_update_vectors(method) == SPECTRE_VULNERABLE)
++			return SPECTRE_VULNERABLE;
++
++		spectre_bhb_method = method;
++	}
++
++	pr_info("CPU%u: Spectre BHB: using %s workaround\n",
++		smp_processor_id(), spectre_bhb_method_name(method));
++
++	return SPECTRE_MITIGATED;
+ }
+ #else
+-static void cpu_v7_spectre_init(void)
++static int spectre_bhb_install_workaround(int method)
+ {
++	return SPECTRE_VULNERABLE;
+ }
+ #endif
+ 
++static void cpu_v7_spectre_bhb_init(void)
++{
++	unsigned int state, method = 0;
++
++	switch (read_cpuid_part()) {
++	case ARM_CPU_PART_CORTEX_A15:
++	case ARM_CPU_PART_BRAHMA_B15:
++	case ARM_CPU_PART_CORTEX_A57:
++	case ARM_CPU_PART_CORTEX_A72:
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_LOOP8;
++		break;
++
++	case ARM_CPU_PART_CORTEX_A73:
++	case ARM_CPU_PART_CORTEX_A75:
++		state = SPECTRE_MITIGATED;
++		method = SPECTRE_V2_METHOD_BPIALL;
++		break;
++
++	default:
++		state = SPECTRE_UNAFFECTED;
++		break;
++	}
++
++	if (state == SPECTRE_MITIGATED)
++		state = spectre_bhb_install_workaround(method);
++
++	spectre_v2_update_state(state, method);
++}
++
+ static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
+ 						  u32 mask, const char *msg)
+ {
+@@ -142,16 +281,17 @@ static bool check_spectre_auxcr(bool *warned, u32 bit)
+ void cpu_v7_ca8_ibe(void)
+ {
+ 	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
+-		cpu_v7_spectre_init();
++		cpu_v7_spectre_v2_init();
+ }
+ 
+ void cpu_v7_ca15_ibe(void)
+ {
+ 	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
+-		cpu_v7_spectre_init();
++		cpu_v7_spectre_v2_init();
+ }
+ 
+ void cpu_v7_bugs_init(void)
+ {
+-	cpu_v7_spectre_init();
++	cpu_v7_spectre_v2_init();
++	cpu_v7_spectre_bhb_init();
+ }
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 651bf217465e9..0e2c31f7a9aa2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1395,6 +1395,15 @@ config UNMAP_KERNEL_AT_EL0
+ 
+ 	  If unsure, say Y.
+ 
++config MITIGATE_SPECTRE_BRANCH_HISTORY
++	bool "Mitigate Spectre style attacks against branch history" if EXPERT
++	default y
++	help
++	  Speculation attacks against some high-performance processors can
++	  make use of branch history to influence future speculation.
++	  When taking an exception from user-space, a sequence of branches
++	  or a firmware call overwrites the branch history.
++
+ config RODATA_FULL_DEFAULT_ENABLED
+ 	bool "Apply r/o permissions of VM areas also to their linear aliases"
+ 	default y
+diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
+index 136d13f3d6e92..5373a07724394 100644
+--- a/arch/arm64/include/asm/assembler.h
++++ b/arch/arm64/include/asm/assembler.h
+@@ -108,6 +108,13 @@
+ 	hint	#20
+ 	.endm
+ 
++/*
++ * Clear Branch History instruction
++ */
++	.macro clearbhb
++	hint	#22
++	.endm
++
+ /*
+  * Speculation barrier
+  */
+@@ -840,4 +847,50 @@ alternative_endif
+ 
+ #endif /* GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT */
+ 
++	.macro __mitigate_spectre_bhb_loop      tmp
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++alternative_cb  spectre_bhb_patch_loop_iter
++	mov	\tmp, #32		// Patched to correct the immediate
++alternative_cb_end
++.Lspectre_bhb_loop\@:
++	b	. + 4
++	subs	\tmp, \tmp, #1
++	b.ne	.Lspectre_bhb_loop\@
++	sb
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	.endm
++
++	.macro mitigate_spectre_bhb_loop	tmp
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++alternative_cb	spectre_bhb_patch_loop_mitigation_enable
++	b	.L_spectre_bhb_loop_done\@	// Patched to NOP
++alternative_cb_end
++	__mitigate_spectre_bhb_loop	\tmp
++.L_spectre_bhb_loop_done\@:
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	.endm
++
++	/* Save/restores x0-x3 to the stack */
++	.macro __mitigate_spectre_bhb_fw
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	stp	x0, x1, [sp, #-16]!
++	stp	x2, x3, [sp, #-16]!
++	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_3
++alternative_cb	smccc_patch_fw_mitigation_conduit
++	nop					// Patched to SMC/HVC #0
++alternative_cb_end
++	ldp	x2, x3, [sp], #16
++	ldp	x0, x1, [sp], #16
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	.endm
++
++	.macro mitigate_spectre_bhb_clear_insn
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++alternative_cb	spectre_bhb_patch_clearbhb
++	/* Patched to NOP when not supported */
++	clearbhb
++	isb
++alternative_cb_end
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	.endm
+ #endif	/* __ASM_ASSEMBLER_H */
+diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
+index 0f6d16faa5402..a58e366f0b074 100644
+--- a/arch/arm64/include/asm/cpu.h
++++ b/arch/arm64/include/asm/cpu.h
+@@ -51,6 +51,7 @@ struct cpuinfo_arm64 {
+ 	u64		reg_id_aa64dfr1;
+ 	u64		reg_id_aa64isar0;
+ 	u64		reg_id_aa64isar1;
++	u64		reg_id_aa64isar2;
+ 	u64		reg_id_aa64mmfr0;
+ 	u64		reg_id_aa64mmfr1;
+ 	u64		reg_id_aa64mmfr2;
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index ef6be92b1921a..a77b5f49b3a6c 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -637,6 +637,35 @@ static inline bool cpu_supports_mixed_endian_el0(void)
+ 	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
+ }
+ 
++
++static inline bool supports_csv2p3(int scope)
++{
++	u64 pfr0;
++	u8 csv2_val;
++
++	if (scope == SCOPE_LOCAL_CPU)
++		pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1);
++	else
++		pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
++
++	csv2_val = cpuid_feature_extract_unsigned_field(pfr0,
++							ID_AA64PFR0_CSV2_SHIFT);
++	return csv2_val == 3;
++}
++
++static inline bool supports_clearbhb(int scope)
++{
++	u64 isar2;
++
++	if (scope == SCOPE_LOCAL_CPU)
++		isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
++	else
++		isar2 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR2_EL1);
++
++	return cpuid_feature_extract_unsigned_field(isar2,
++						    ID_AA64ISAR2_CLEARBHB_SHIFT);
++}
++
+ const struct cpumask *system_32bit_el0_cpumask(void);
+ DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
+ 
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 999b9149f8568..bfbf0c4c7c5e5 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -73,10 +73,14 @@
+ #define ARM_CPU_PART_CORTEX_A76		0xD0B
+ #define ARM_CPU_PART_NEOVERSE_N1	0xD0C
+ #define ARM_CPU_PART_CORTEX_A77		0xD0D
++#define ARM_CPU_PART_NEOVERSE_V1	0xD40
++#define ARM_CPU_PART_CORTEX_A78		0xD41
++#define ARM_CPU_PART_CORTEX_X1		0xD44
+ #define ARM_CPU_PART_CORTEX_A510	0xD46
+ #define ARM_CPU_PART_CORTEX_A710	0xD47
+ #define ARM_CPU_PART_CORTEX_X2		0xD48
+ #define ARM_CPU_PART_NEOVERSE_N2	0xD49
++#define ARM_CPU_PART_CORTEX_A78C	0xD4B
+ 
+ #define APM_CPU_PART_POTENZA		0x000
+ 
+@@ -117,10 +121,14 @@
+ #define MIDR_CORTEX_A76	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
+ #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
+ #define MIDR_CORTEX_A77	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
++#define MIDR_NEOVERSE_V1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
++#define MIDR_CORTEX_A78	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
++#define MIDR_CORTEX_X1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
+ #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+ #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
+ #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
++#define MIDR_CORTEX_A78C	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
+ #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
+index 4335800201c97..daff882883f92 100644
+--- a/arch/arm64/include/asm/fixmap.h
++++ b/arch/arm64/include/asm/fixmap.h
+@@ -62,9 +62,11 @@ enum fixed_addresses {
+ #endif /* CONFIG_ACPI_APEI_GHES */
+ 
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++	FIX_ENTRY_TRAMP_TEXT3,
++	FIX_ENTRY_TRAMP_TEXT2,
++	FIX_ENTRY_TRAMP_TEXT1,
+ 	FIX_ENTRY_TRAMP_DATA,
+-	FIX_ENTRY_TRAMP_TEXT,
+-#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
++#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT1))
+ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+ 	__end_of_permanent_fixed_addresses,
+ 
+diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
+index b100e0055eab6..f68fbb2074730 100644
+--- a/arch/arm64/include/asm/hwcap.h
++++ b/arch/arm64/include/asm/hwcap.h
+@@ -106,6 +106,8 @@
+ #define KERNEL_HWCAP_BTI		__khwcap2_feature(BTI)
+ #define KERNEL_HWCAP_MTE		__khwcap2_feature(MTE)
+ #define KERNEL_HWCAP_ECV		__khwcap2_feature(ECV)
++#define KERNEL_HWCAP_AFP		__khwcap2_feature(AFP)
++#define KERNEL_HWCAP_RPRES		__khwcap2_feature(RPRES)
+ 
+ /*
+  * This yields a mask that user programs can use to figure out what
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index 6b776c8667b20..b02f0c328c8e4 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -65,6 +65,7 @@ enum aarch64_insn_hint_cr_op {
+ 	AARCH64_INSN_HINT_PSB  = 0x11 << 5,
+ 	AARCH64_INSN_HINT_TSB  = 0x12 << 5,
+ 	AARCH64_INSN_HINT_CSDB = 0x14 << 5,
++	AARCH64_INSN_HINT_CLEARBHB = 0x16 << 5,
+ 
+ 	AARCH64_INSN_HINT_BTI   = 0x20 << 5,
+ 	AARCH64_INSN_HINT_BTIC  = 0x22 << 5,
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 2a5f7f38006ff..d016f27af6da9 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -712,6 +712,11 @@ static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
+ 	ctxt_sys_reg(cpu_ctxt, MPIDR_EL1) = read_cpuid_mpidr();
+ }
+ 
++static inline bool kvm_system_needs_idmapped_vectors(void)
++{
++	return cpus_have_const_cap(ARM64_SPECTRE_V3A);
++}
++
+ void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+ 
+ static inline void kvm_arch_hardware_unsetup(void) {}
+diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
+index 1bce62fa908a3..56f7b1d4d54b9 100644
+--- a/arch/arm64/include/asm/rwonce.h
++++ b/arch/arm64/include/asm/rwonce.h
+@@ -5,7 +5,7 @@
+ #ifndef __ASM_RWONCE_H
+ #define __ASM_RWONCE_H
+ 
+-#ifdef CONFIG_LTO
++#if defined(CONFIG_LTO) && !defined(__ASSEMBLY__)
+ 
+ #include <linux/compiler_types.h>
+ #include <asm/alternative-macros.h>
+@@ -66,7 +66,7 @@
+ })
+ 
+ #endif	/* !BUILD_VDSO */
+-#endif	/* CONFIG_LTO */
++#endif	/* CONFIG_LTO && !__ASSEMBLY__ */
+ 
+ #include <asm-generic/rwonce.h>
+ 
+diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
+index 152cb35bf9df7..40971ac1303f9 100644
+--- a/arch/arm64/include/asm/sections.h
++++ b/arch/arm64/include/asm/sections.h
+@@ -23,4 +23,9 @@ extern char __mmuoff_data_start[], __mmuoff_data_end[];
+ extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+ extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[];
+ 
++static inline size_t entry_tramp_text_size(void)
++{
++	return __entry_tramp_text_end - __entry_tramp_text_start;
++}
++
+ #endif /* __ASM_SECTIONS_H */
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index f62ca39da6c5a..86e0cc9b9c685 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -93,5 +93,9 @@ void spectre_v4_enable_task_mitigation(struct task_struct *tsk);
+ 
+ enum mitigation_state arm64_get_meltdown_state(void);
+ 
++enum mitigation_state arm64_get_spectre_bhb_state(void);
++bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
++u8 spectre_bhb_loop_affected(int scope);
++void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
+ #endif	/* __ASSEMBLY__ */
+ #endif	/* __ASM_SPECTRE_H */
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 16b3f1a1d4688..2d9e7f5214f5a 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -182,6 +182,7 @@
+ 
+ #define SYS_ID_AA64ISAR0_EL1		sys_reg(3, 0, 0, 6, 0)
+ #define SYS_ID_AA64ISAR1_EL1		sys_reg(3, 0, 0, 6, 1)
++#define SYS_ID_AA64ISAR2_EL1		sys_reg(3, 0, 0, 6, 2)
+ 
+ #define SYS_ID_AA64MMFR0_EL1		sys_reg(3, 0, 0, 7, 0)
+ #define SYS_ID_AA64MMFR1_EL1		sys_reg(3, 0, 0, 7, 1)
+@@ -771,6 +772,21 @@
+ #define ID_AA64ISAR1_GPI_NI			0x0
+ #define ID_AA64ISAR1_GPI_IMP_DEF		0x1
+ 
++/* id_aa64isar2 */
++#define ID_AA64ISAR2_CLEARBHB_SHIFT	28
++#define ID_AA64ISAR2_RPRES_SHIFT	4
++#define ID_AA64ISAR2_WFXT_SHIFT		0
++
++#define ID_AA64ISAR2_RPRES_8BIT		0x0
++#define ID_AA64ISAR2_RPRES_12BIT	0x1
++/*
++ * Value 0x1 has been removed from the architecture, and is
++ * reserved, but has not yet been removed from the ARM ARM
++ * as of ARM DDI 0487G.b.
++ */
++#define ID_AA64ISAR2_WFXT_NI		0x0
++#define ID_AA64ISAR2_WFXT_SUPPORTED	0x2
++
+ /* id_aa64pfr0 */
+ #define ID_AA64PFR0_CSV3_SHIFT		60
+ #define ID_AA64PFR0_CSV2_SHIFT		56
+@@ -889,6 +905,8 @@
+ #endif
+ 
+ /* id_aa64mmfr1 */
++#define ID_AA64MMFR1_ECBHB_SHIFT	60
++#define ID_AA64MMFR1_AFP_SHIFT		44
+ #define ID_AA64MMFR1_ETS_SHIFT		36
+ #define ID_AA64MMFR1_TWED_SHIFT		32
+ #define ID_AA64MMFR1_XNX_SHIFT		28
+diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
+new file mode 100644
+index 0000000000000..f64613a96d530
+--- /dev/null
++++ b/arch/arm64/include/asm/vectors.h
+@@ -0,0 +1,73 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (C) 2022 ARM Ltd.
++ */
++#ifndef __ASM_VECTORS_H
++#define __ASM_VECTORS_H
++
++#include <linux/bug.h>
++#include <linux/percpu.h>
++
++#include <asm/fixmap.h>
++
++extern char vectors[];
++extern char tramp_vectors[];
++extern char __bp_harden_el1_vectors[];
++
++/*
++ * Note: the order of this enum corresponds to two arrays in entry.S:
++ * tramp_vecs and __bp_harden_el1_vectors. By default the canonical
++ * 'full fat' vectors are used directly.
++ */
++enum arm64_bp_harden_el1_vectors {
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	/*
++	 * Perform the BHB loop mitigation, before branching to the canonical
++	 * vectors.
++	 */
++	EL1_VECTOR_BHB_LOOP,
++
++	/*
++	 * Make the SMC call for firmware mitigation, before branching to the
++	 * canonical vectors.
++	 */
++	EL1_VECTOR_BHB_FW,
++
++	/*
++	 * Use the ClearBHB instruction, before branching to the canonical
++	 * vectors.
++	 */
++	EL1_VECTOR_BHB_CLEAR_INSN,
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++
++	/*
++	 * Remap the kernel before branching to the canonical vectors.
++	 */
++	EL1_VECTOR_KPTI,
++};
++
++#ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++#define EL1_VECTOR_BHB_LOOP		-1
++#define EL1_VECTOR_BHB_FW		-1
++#define EL1_VECTOR_BHB_CLEAR_INSN	-1
++#endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++
++/* The vectors to use on return from EL0. e.g. to remap the kernel */
++DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
++
++#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
++#define TRAMP_VALIAS	0
++#endif
++
++static inline const char *
++arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
++{
++	if (arm64_kernel_unmapped_at_el0())
++		return (char *)TRAMP_VALIAS + SZ_2K * slot;
++
++	WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);
++
++	return __bp_harden_el1_vectors + SZ_2K * slot;
++}
++
++#endif /* __ASM_VECTORS_H */
+diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
+index 7b23b16f21ce3..f03731847d9df 100644
+--- a/arch/arm64/include/uapi/asm/hwcap.h
++++ b/arch/arm64/include/uapi/asm/hwcap.h
+@@ -76,5 +76,7 @@
+ #define HWCAP2_BTI		(1 << 17)
+ #define HWCAP2_MTE		(1 << 18)
+ #define HWCAP2_ECV		(1 << 19)
++#define HWCAP2_AFP		(1 << 20)
++#define HWCAP2_RPRES		(1 << 21)
+ 
+ #endif /* _UAPI__ASM_HWCAP_H */
+diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
+index b3edde68bc3e0..323e251ed37bc 100644
+--- a/arch/arm64/include/uapi/asm/kvm.h
++++ b/arch/arm64/include/uapi/asm/kvm.h
+@@ -281,6 +281,11 @@ struct kvm_arm_copy_mte_tags {
+ #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED	3
+ #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED     	(1U << 4)
+ 
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3	KVM_REG_ARM_FW_REG(3)
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL		0
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL		1
++#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED	2
++
+ /* SVE registers */
+ #define KVM_REG_ARM64_SVE		(0x15 << KVM_REG_ARM_COPROC_SHIFT)
+ 
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index b217941713a8d..a401180e8d66c 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -502,6 +502,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		.matches = has_spectre_v4,
+ 		.cpu_enable = spectre_v4_enable_mitigation,
+ 	},
++	{
++		.desc = "Spectre-BHB",
++		.capability = ARM64_SPECTRE_BHB,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		.matches = is_spectre_bhb_affected,
++		.cpu_enable = spectre_bhb_enable_mitigation,
++	},
+ #ifdef CONFIG_ARM64_ERRATUM_1418040
+ 	{
+ 		.desc = "ARM erratum 1418040",
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index d18b953c078db..d33687673f6b8 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -73,6 +73,8 @@
+ #include <linux/mm.h>
+ #include <linux/cpu.h>
+ #include <linux/kasan.h>
++#include <linux/percpu.h>
++
+ #include <asm/cpu.h>
+ #include <asm/cpufeature.h>
+ #include <asm/cpu_ops.h>
+@@ -85,6 +87,7 @@
+ #include <asm/smp.h>
+ #include <asm/sysreg.h>
+ #include <asm/traps.h>
++#include <asm/vectors.h>
+ #include <asm/virt.h>
+ 
+ /* Kernel representation of AT_HWCAP and AT_HWCAP2 */
+@@ -110,6 +113,8 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
+ bool arm64_use_ng_mappings = false;
+ EXPORT_SYMBOL(arm64_use_ng_mappings);
+ 
++DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;
++
+ /*
+  * Permit PER_LINUX32 and execve() of 32-bit binaries even if not all CPUs
+  * support it?
+@@ -225,6 +230,12 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+ 	ARM64_FTR_END,
+ };
+ 
++static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
++	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0),
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0),
++	ARM64_FTR_END,
++};
++
+ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
+@@ -325,6 +336,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+ };
+ 
+ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_AFP_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
+ 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
+@@ -637,6 +649,7 @@ static const struct __ftr_reg_entry {
+ 	ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
+ 	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1,
+ 			       &id_aa64isar1_override),
++	ARM64_FTR_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2),
+ 
+ 	/* Op1 = 0, CRn = 0, CRm = 7 */
+ 	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
+@@ -933,6 +946,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
+ 	init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
+ 	init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
+ 	init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
++	init_cpu_ftr_reg(SYS_ID_AA64ISAR2_EL1, info->reg_id_aa64isar2);
+ 	init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
+ 	init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
+ 	init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
+@@ -1151,6 +1165,8 @@ void update_cpu_features(int cpu,
+ 				      info->reg_id_aa64isar0, boot->reg_id_aa64isar0);
+ 	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu,
+ 				      info->reg_id_aa64isar1, boot->reg_id_aa64isar1);
++	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR2_EL1, cpu,
++				      info->reg_id_aa64isar2, boot->reg_id_aa64isar2);
+ 
+ 	/*
+ 	 * Differing PARange support is fine as long as all peripherals and
+@@ -1272,6 +1288,7 @@ u64 __read_sysreg_by_encoding(u32 sys_id)
+ 	read_sysreg_case(SYS_ID_AA64MMFR2_EL1);
+ 	read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
+ 	read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
++	read_sysreg_case(SYS_ID_AA64ISAR2_EL1);
+ 
+ 	read_sysreg_case(SYS_CNTFRQ_EL0);
+ 	read_sysreg_case(SYS_CTR_EL0);
+@@ -1579,6 +1596,12 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+ 
+ 	int cpu = smp_processor_id();
+ 
++	if (__this_cpu_read(this_cpu_vector) == vectors) {
++		const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);
++
++		__this_cpu_write(this_cpu_vector, v);
++	}
++
+ 	/*
+ 	 * We don't need to rewrite the page-tables if either we've done
+ 	 * it already or we have KASLR enabled and therefore have not
+@@ -2479,6 +2502,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ 	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
+ #endif /* CONFIG_ARM64_MTE */
+ 	HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV),
++	HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_AFP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP),
++	HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_RPRES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES),
+ 	{},
+ };
+ 
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index 6e27b759056a6..591c18a889a56 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -95,6 +95,8 @@ static const char *const hwcap_str[] = {
+ 	[KERNEL_HWCAP_BTI]		= "bti",
+ 	[KERNEL_HWCAP_MTE]		= "mte",
+ 	[KERNEL_HWCAP_ECV]		= "ecv",
++	[KERNEL_HWCAP_AFP]		= "afp",
++	[KERNEL_HWCAP_RPRES]		= "rpres",
+ };
+ 
+ #ifdef CONFIG_COMPAT
+@@ -391,6 +393,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
+ 	info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
+ 	info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1);
+ 	info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1);
++	info->reg_id_aa64isar2 = read_cpuid(ID_AA64ISAR2_EL1);
+ 	info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+ 	info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
+ 	info->reg_id_aa64mmfr2 = read_cpuid(ID_AA64MMFR2_EL1);
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 2f69ae43941d8..41f9c0f072693 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -37,18 +37,21 @@
+ 
+ 	.macro kernel_ventry, el:req, ht:req, regsize:req, label:req
+ 	.align 7
+-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++.Lventry_start\@:
+ 	.if	\el == 0
+-alternative_if ARM64_UNMAP_KERNEL_AT_EL0
++	/*
++	 * This must be the first instruction of the EL0 vector entries. It is
++	 * skipped by the trampoline vectors, to trigger the cleanup.
++	 */
++	b	.Lskip_tramp_vectors_cleanup\@
+ 	.if	\regsize == 64
+ 	mrs	x30, tpidrro_el0
+ 	msr	tpidrro_el0, xzr
+ 	.else
+ 	mov	x30, xzr
+ 	.endif
+-alternative_else_nop_endif
++.Lskip_tramp_vectors_cleanup\@:
+ 	.endif
+-#endif
+ 
+ 	sub	sp, sp, #PT_REGS_SIZE
+ #ifdef CONFIG_VMAP_STACK
+@@ -95,11 +98,15 @@ alternative_else_nop_endif
+ 	mrs	x0, tpidrro_el0
+ #endif
+ 	b	el\el\ht\()_\regsize\()_\label
++.org .Lventry_start\@ + 128	// Did we overflow the ventry slot?
+ 	.endm
+ 
+-	.macro tramp_alias, dst, sym
++	.macro tramp_alias, dst, sym, tmp
+ 	mov_q	\dst, TRAMP_VALIAS
+-	add	\dst, \dst, #(\sym - .entry.tramp.text)
++	adr_l	\tmp, \sym
++	add	\dst, \dst, \tmp
++	adr_l	\tmp, .entry.tramp.text
++	sub	\dst, \dst, \tmp
+ 	.endm
+ 
+ 	/*
+@@ -116,7 +123,7 @@ alternative_cb_end
+ 	tbnz	\tmp2, #TIF_SSBD, .L__asm_ssbd_skip\@
+ 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
+ 	mov	w1, #\state
+-alternative_cb	spectre_v4_patch_fw_mitigation_conduit
++alternative_cb	smccc_patch_fw_mitigation_conduit
+ 	nop					// Patched to SMC/HVC #0
+ alternative_cb_end
+ .L__asm_ssbd_skip\@:
+@@ -413,21 +420,26 @@ alternative_else_nop_endif
+ 	ldp	x24, x25, [sp, #16 * 12]
+ 	ldp	x26, x27, [sp, #16 * 13]
+ 	ldp	x28, x29, [sp, #16 * 14]
+-	ldr	lr, [sp, #S_LR]
+-	add	sp, sp, #PT_REGS_SIZE		// restore sp
+ 
+ 	.if	\el == 0
+-alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
++alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
++	ldr	lr, [sp, #S_LR]
++	add	sp, sp, #PT_REGS_SIZE		// restore sp
++	eret
++alternative_else_nop_endif
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ 	bne	4f
+-	msr	far_el1, x30
+-	tramp_alias	x30, tramp_exit_native
++	msr	far_el1, x29
++	tramp_alias	x30, tramp_exit_native, x29
+ 	br	x30
+ 4:
+-	tramp_alias	x30, tramp_exit_compat
++	tramp_alias	x30, tramp_exit_compat, x29
+ 	br	x30
+ #endif
+ 	.else
++	ldr	lr, [sp, #S_LR]
++	add	sp, sp, #PT_REGS_SIZE		// restore sp
++
+ 	/* Ensure any device/NC reads complete */
+ 	alternative_insn nop, "dmb sy", ARM64_WORKAROUND_1508412
+ 
+@@ -594,12 +606,6 @@ SYM_CODE_END(ret_to_user)
+ 
+ 	.popsection				// .entry.text
+ 
+-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+-/*
+- * Exception vectors trampoline.
+- */
+-	.pushsection ".entry.tramp.text", "ax"
+-
+ 	// Move from tramp_pg_dir to swapper_pg_dir
+ 	.macro tramp_map_kernel, tmp
+ 	mrs	\tmp, ttbr1_el1
+@@ -633,12 +639,47 @@ alternative_else_nop_endif
+ 	 */
+ 	.endm
+ 
+-	.macro tramp_ventry, regsize = 64
++	.macro tramp_data_page	dst
++	adr_l	\dst, .entry.tramp.text
++	sub	\dst, \dst, PAGE_SIZE
++	.endm
++
++	.macro tramp_data_read_var	dst, var
++#ifdef CONFIG_RANDOMIZE_BASE
++	tramp_data_page		\dst
++	add	\dst, \dst, #:lo12:__entry_tramp_data_\var
++	ldr	\dst, [\dst]
++#else
++	ldr	\dst, =\var
++#endif
++	.endm
++
++#define BHB_MITIGATION_NONE	0
++#define BHB_MITIGATION_LOOP	1
++#define BHB_MITIGATION_FW	2
++#define BHB_MITIGATION_INSN	3
++
++	.macro tramp_ventry, vector_start, regsize, kpti, bhb
+ 	.align	7
+ 1:
+ 	.if	\regsize == 64
+ 	msr	tpidrro_el0, x30	// Restored in kernel_ventry
+ 	.endif
++
++	.if	\bhb == BHB_MITIGATION_LOOP
++	/*
++	 * This sequence must appear before the first indirect branch. i.e. the
++	 * ret out of tramp_ventry. It appears here because x30 is free.
++	 */
++	__mitigate_spectre_bhb_loop	x30
++	.endif // \bhb == BHB_MITIGATION_LOOP
++
++	.if	\bhb == BHB_MITIGATION_INSN
++	clearbhb
++	isb
++	.endif // \bhb == BHB_MITIGATION_INSN
++
++	.if	\kpti == 1
+ 	/*
+ 	 * Defend against branch aliasing attacks by pushing a dummy
+ 	 * entry onto the return stack and using a RET instruction to
+@@ -648,46 +689,75 @@ alternative_else_nop_endif
+ 	b	.
+ 2:
+ 	tramp_map_kernel	x30
+-#ifdef CONFIG_RANDOMIZE_BASE
+-	adr	x30, tramp_vectors + PAGE_SIZE
+ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
+-	ldr	x30, [x30]
+-#else
+-	ldr	x30, =vectors
+-#endif
++	tramp_data_read_var	x30, vectors
+ alternative_if_not ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM
+-	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
++	prfm	plil1strm, [x30, #(1b - \vector_start)]
+ alternative_else_nop_endif
++
+ 	msr	vbar_el1, x30
+-	add	x30, x30, #(1b - tramp_vectors)
+ 	isb
++	.else
++	ldr	x30, =vectors
++	.endif // \kpti == 1
++
++	.if	\bhb == BHB_MITIGATION_FW
++	/*
++	 * The firmware sequence must appear before the first indirect branch.
++	 * i.e. the ret out of tramp_ventry. But it also needs the stack to be
++	 * mapped to save/restore the registers the SMC clobbers.
++	 */
++	__mitigate_spectre_bhb_fw
++	.endif // \bhb == BHB_MITIGATION_FW
++
++	add	x30, x30, #(1b - \vector_start + 4)
+ 	ret
++.org 1b + 128	// Did we overflow the ventry slot?
+ 	.endm
+ 
+ 	.macro tramp_exit, regsize = 64
+-	adr	x30, tramp_vectors
++	tramp_data_read_var	x30, this_cpu_vector
++	get_this_cpu_offset x29
++	ldr	x30, [x30, x29]
++
+ 	msr	vbar_el1, x30
+-	tramp_unmap_kernel	x30
++	ldr	lr, [sp, #S_LR]
++	tramp_unmap_kernel	x29
+ 	.if	\regsize == 64
+-	mrs	x30, far_el1
++	mrs	x29, far_el1
+ 	.endif
++	add	sp, sp, #PT_REGS_SIZE		// restore sp
+ 	eret
+ 	sb
+ 	.endm
+ 
+-	.align	11
+-SYM_CODE_START_NOALIGN(tramp_vectors)
++	.macro	generate_tramp_vector,	kpti, bhb
++.Lvector_start\@:
+ 	.space	0x400
+ 
+-	tramp_ventry
+-	tramp_ventry
+-	tramp_ventry
+-	tramp_ventry
++	.rept	4
++	tramp_ventry	.Lvector_start\@, 64, \kpti, \bhb
++	.endr
++	.rept	4
++	tramp_ventry	.Lvector_start\@, 32, \kpti, \bhb
++	.endr
++	.endm
+ 
+-	tramp_ventry	32
+-	tramp_ventry	32
+-	tramp_ventry	32
+-	tramp_ventry	32
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++/*
++ * Exception vectors trampoline.
++ * The order must match __bp_harden_el1_vectors and the
++ * arm64_bp_harden_el1_vectors enum.
++ */
++	.pushsection ".entry.tramp.text", "ax"
++	.align	11
++SYM_CODE_START_NOALIGN(tramp_vectors)
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_LOOP
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_FW
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_INSN
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++	generate_tramp_vector	kpti=1, bhb=BHB_MITIGATION_NONE
+ SYM_CODE_END(tramp_vectors)
+ 
+ SYM_CODE_START(tramp_exit_native)
+@@ -704,12 +774,56 @@ SYM_CODE_END(tramp_exit_compat)
+ 	.pushsection ".rodata", "a"
+ 	.align PAGE_SHIFT
+ SYM_DATA_START(__entry_tramp_data_start)
++__entry_tramp_data_vectors:
+ 	.quad	vectors
++#ifdef CONFIG_ARM_SDE_INTERFACE
++__entry_tramp_data___sdei_asm_handler:
++	.quad	__sdei_asm_handler
++#endif /* CONFIG_ARM_SDE_INTERFACE */
++__entry_tramp_data_this_cpu_vector:
++	.quad	this_cpu_vector
+ SYM_DATA_END(__entry_tramp_data_start)
+ 	.popsection				// .rodata
+ #endif /* CONFIG_RANDOMIZE_BASE */
+ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+ 
++/*
++ * Exception vectors for spectre mitigations on entry from EL1 when
++ * kpti is not in use.
++ */
++	.macro generate_el1_vector, bhb
++.Lvector_start\@:
++	kernel_ventry	1, t, 64, sync		// Synchronous EL1t
++	kernel_ventry	1, t, 64, irq		// IRQ EL1t
++	kernel_ventry	1, t, 64, fiq		// FIQ EL1h
++	kernel_ventry	1, t, 64, error		// Error EL1t
++
++	kernel_ventry	1, h, 64, sync		// Synchronous EL1h
++	kernel_ventry	1, h, 64, irq		// IRQ EL1h
++	kernel_ventry	1, h, 64, fiq		// FIQ EL1h
++	kernel_ventry	1, h, 64, error		// Error EL1h
++
++	.rept	4
++	tramp_ventry	.Lvector_start\@, 64, 0, \bhb
++	.endr
++	.rept 4
++	tramp_ventry	.Lvector_start\@, 32, 0, \bhb
++	.endr
++	.endm
++
++/* The order must match tramp_vecs and the arm64_bp_harden_el1_vectors enum. */
++	.pushsection ".entry.text", "ax"
++	.align	11
++SYM_CODE_START(__bp_harden_el1_vectors)
++#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
++	generate_el1_vector	bhb=BHB_MITIGATION_LOOP
++	generate_el1_vector	bhb=BHB_MITIGATION_FW
++	generate_el1_vector	bhb=BHB_MITIGATION_INSN
++#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */
++SYM_CODE_END(__bp_harden_el1_vectors)
++	.popsection
++
++
+ /*
+  * Register switch for AArch64. The callee-saved registers need to be saved
+  * and restored. On entry:
+@@ -835,14 +949,7 @@ SYM_CODE_START(__sdei_asm_entry_trampoline)
+ 	 * Remember whether to unmap the kernel on exit.
+ 	 */
+ 1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
+-
+-#ifdef CONFIG_RANDOMIZE_BASE
+-	adr	x4, tramp_vectors + PAGE_SIZE
+-	add	x4, x4, #:lo12:__sdei_asm_trampoline_next_handler
+-	ldr	x4, [x4]
+-#else
+-	ldr	x4, =__sdei_asm_handler
+-#endif
++	tramp_data_read_var     x4, __sdei_asm_handler
+ 	br	x4
+ SYM_CODE_END(__sdei_asm_entry_trampoline)
+ NOKPROBE(__sdei_asm_entry_trampoline)
+@@ -865,13 +972,6 @@ SYM_CODE_END(__sdei_asm_exit_trampoline)
+ NOKPROBE(__sdei_asm_exit_trampoline)
+ 	.ltorg
+ .popsection		// .entry.tramp.text
+-#ifdef CONFIG_RANDOMIZE_BASE
+-.pushsection ".rodata", "a"
+-SYM_DATA_START(__sdei_asm_trampoline_next_handler)
+-	.quad	__sdei_asm_handler
+-SYM_DATA_END(__sdei_asm_trampoline_next_handler)
+-.popsection		// .rodata
+-#endif /* CONFIG_RANDOMIZE_BASE */
+ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+ 
+ /*
+@@ -979,7 +1079,7 @@ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
+ alternative_else_nop_endif
+ 
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+-	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline
++	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline, tmp=x3
+ 	br	x5
+ #endif
+ SYM_CODE_END(__sdei_asm_handler)
+diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
+index c96a9a0043bf4..e03e60f9482b4 100644
+--- a/arch/arm64/kernel/image-vars.h
++++ b/arch/arm64/kernel/image-vars.h
+@@ -66,6 +66,10 @@ KVM_NVHE_ALIAS(kvm_patch_vector_branch);
+ KVM_NVHE_ALIAS(kvm_update_va_mask);
+ KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
+ KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
++KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter);
++KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable);
++KVM_NVHE_ALIAS(spectre_bhb_patch_wa3);
++KVM_NVHE_ALIAS(spectre_bhb_patch_clearbhb);
+ 
+ /* Global kernel state accessed by nVHE hyp code. */
+ KVM_NVHE_ALIAS(kvm_vgic_global_state);
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 902e4084c4775..6d45c63c64548 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -18,15 +18,18 @@
+  */
+ 
+ #include <linux/arm-smccc.h>
++#include <linux/bpf.h>
+ #include <linux/cpu.h>
+ #include <linux/device.h>
+ #include <linux/nospec.h>
+ #include <linux/prctl.h>
+ #include <linux/sched/task_stack.h>
+ 
++#include <asm/debug-monitors.h>
+ #include <asm/insn.h>
+ #include <asm/spectre.h>
+ #include <asm/traps.h>
++#include <asm/vectors.h>
+ #include <asm/virt.h>
+ 
+ /*
+@@ -96,14 +99,51 @@ static bool spectre_v2_mitigations_off(void)
+ 	return ret;
+ }
+ 
++static const char *get_bhb_affected_string(enum mitigation_state bhb_state)
++{
++	switch (bhb_state) {
++	case SPECTRE_UNAFFECTED:
++		return "";
++	default:
++	case SPECTRE_VULNERABLE:
++		return ", but not BHB";
++	case SPECTRE_MITIGATED:
++		return ", BHB";
++	}
++}
++
++static bool _unprivileged_ebpf_enabled(void)
++{
++#ifdef CONFIG_BPF_SYSCALL
++	return !sysctl_unprivileged_bpf_disabled;
++#else
++	return false;
++#endif
++}
++
+ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+ 			    char *buf)
+ {
++	enum mitigation_state bhb_state = arm64_get_spectre_bhb_state();
++	const char *bhb_str = get_bhb_affected_string(bhb_state);
++	const char *v2_str = "Branch predictor hardening";
++
+ 	switch (spectre_v2_state) {
+ 	case SPECTRE_UNAFFECTED:
+-		return sprintf(buf, "Not affected\n");
++		if (bhb_state == SPECTRE_UNAFFECTED)
++			return sprintf(buf, "Not affected\n");
++
++		/*
++		 * Platforms affected by Spectre-BHB can't report
++		 * "Not affected" for Spectre-v2.
++		 */
++		v2_str = "CSV2";
++		fallthrough;
+ 	case SPECTRE_MITIGATED:
+-		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
++		if (bhb_state == SPECTRE_MITIGATED && _unprivileged_ebpf_enabled())
++			return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n");
++
++		return sprintf(buf, "Mitigation: %s%s\n", v2_str, bhb_str);
+ 	case SPECTRE_VULNERABLE:
+ 		fallthrough;
+ 	default:
+@@ -554,9 +594,9 @@ void __init spectre_v4_patch_fw_mitigation_enable(struct alt_instr *alt,
+  * Patch a NOP in the Spectre-v4 mitigation code with an SMC/HVC instruction
+  * to call into firmware to adjust the mitigation state.
+  */
+-void __init spectre_v4_patch_fw_mitigation_conduit(struct alt_instr *alt,
+-						   __le32 *origptr,
+-						   __le32 *updptr, int nr_inst)
++void __init smccc_patch_fw_mitigation_conduit(struct alt_instr *alt,
++					       __le32 *origptr,
++					       __le32 *updptr, int nr_inst)
+ {
+ 	u32 insn;
+ 
+@@ -770,3 +810,344 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+ 		return -ENODEV;
+ 	}
+ }
++
++/*
++ * Spectre BHB.
++ *
++ * A CPU is either:
++ * - Mitigated by a branchy loop a CPU specific number of times, and listed
++ *   in our "loop mitigated list".
++ * - Mitigated in software by the firmware Spectre v2 call.
++ * - Has the ClearBHB instruction to perform the mitigation.
++ * - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no
++ *   software mitigation in the vectors is needed.
++ * - Has CSV2.3, so is unaffected.
++ */
++static enum mitigation_state spectre_bhb_state;
++
++enum mitigation_state arm64_get_spectre_bhb_state(void)
++{
++	return spectre_bhb_state;
++}
++
++enum bhb_mitigation_bits {
++	BHB_LOOP,
++	BHB_FW,
++	BHB_HW,
++	BHB_INSN,
++};
++static unsigned long system_bhb_mitigations;
++
++/*
++ * This must be called with SCOPE_LOCAL_CPU for each type of CPU, before any
++ * SCOPE_SYSTEM call will give the right answer.
++ */
++u8 spectre_bhb_loop_affected(int scope)
++{
++	u8 k = 0;
++	static u8 max_bhb_k;
++
++	if (scope == SCOPE_LOCAL_CPU) {
++		static const struct midr_range spectre_bhb_k32_list[] = {
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
++			MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++			MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
++			{},
++		};
++		static const struct midr_range spectre_bhb_k24_list[] = {
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
++			MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
++			{},
++		};
++		static const struct midr_range spectre_bhb_k8_list[] = {
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
++			{},
++		};
++
++		if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
++			k = 32;
++		else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
++			k = 24;
++		else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
++			k =  8;
++
++		max_bhb_k = max(max_bhb_k, k);
++	} else {
++		k = max_bhb_k;
++	}
++
++	return k;
++}
++
++static enum mitigation_state spectre_bhb_get_cpu_fw_mitigation_state(void)
++{
++	int ret;
++	struct arm_smccc_res res;
++
++	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++			     ARM_SMCCC_ARCH_WORKAROUND_3, &res);
++
++	ret = res.a0;
++	switch (ret) {
++	case SMCCC_RET_SUCCESS:
++		return SPECTRE_MITIGATED;
++	case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
++		return SPECTRE_UNAFFECTED;
++	default:
++		fallthrough;
++	case SMCCC_RET_NOT_SUPPORTED:
++		return SPECTRE_VULNERABLE;
++	}
++}
++
++static bool is_spectre_bhb_fw_affected(int scope)
++{
++	static bool system_affected;
++	enum mitigation_state fw_state;
++	bool has_smccc = arm_smccc_1_1_get_conduit() != SMCCC_CONDUIT_NONE;
++	static const struct midr_range spectre_bhb_firmware_mitigated_list[] = {
++		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
++		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
++		{},
++	};
++	bool cpu_in_list = is_midr_in_range_list(read_cpuid_id(),
++					 spectre_bhb_firmware_mitigated_list);
++
++	if (scope != SCOPE_LOCAL_CPU)
++		return system_affected;
++
++	fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
++	if (cpu_in_list || (has_smccc && fw_state == SPECTRE_MITIGATED)) {
++		system_affected = true;
++		return true;
++	}
++
++	return false;
++}
++
++static bool supports_ecbhb(int scope)
++{
++	u64 mmfr1;
++
++	if (scope == SCOPE_LOCAL_CPU)
++		mmfr1 = read_sysreg_s(SYS_ID_AA64MMFR1_EL1);
++	else
++		mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
++
++	return cpuid_feature_extract_unsigned_field(mmfr1,
++						    ID_AA64MMFR1_ECBHB_SHIFT);
++}
++
++bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
++			     int scope)
++{
++	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
++
++	if (supports_csv2p3(scope))
++		return false;
++
++	if (supports_clearbhb(scope))
++		return true;
++
++	if (spectre_bhb_loop_affected(scope))
++		return true;
++
++	if (is_spectre_bhb_fw_affected(scope))
++		return true;
++
++	return false;
++}
++
++static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
++{
++	const char *v = arm64_get_bp_hardening_vector(slot);
++
++	if (slot < 0)
++		return;
++
++	__this_cpu_write(this_cpu_vector, v);
++
++	/*
++	 * When KPTI is in use, the vectors are switched when exiting to
++	 * user-space.
++	 */
++	if (arm64_kernel_unmapped_at_el0())
++		return;
++
++	write_sysreg(v, vbar_el1);
++	isb();
++}
++
++void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
++{
++	bp_hardening_cb_t cpu_cb;
++	enum mitigation_state fw_state, state = SPECTRE_VULNERABLE;
++	struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data);
++
++	if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU))
++		return;
++
++	if (arm64_get_spectre_v2_state() == SPECTRE_VULNERABLE) {
++		/* No point mitigating Spectre-BHB alone. */
++	} else if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY)) {
++		pr_info_once("spectre-bhb mitigation disabled by compile time option\n");
++	} else if (cpu_mitigations_off()) {
++		pr_info_once("spectre-bhb mitigation disabled by command line option\n");
++	} else if (supports_ecbhb(SCOPE_LOCAL_CPU)) {
++		state = SPECTRE_MITIGATED;
++		set_bit(BHB_HW, &system_bhb_mitigations);
++	} else if (supports_clearbhb(SCOPE_LOCAL_CPU)) {
++		/*
++		 * Ensure KVM uses the indirect vector which will have ClearBHB
++		 * added.
++		 */
++		if (!data->slot)
++			data->slot = HYP_VECTOR_INDIRECT;
++
++		this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN);
++		state = SPECTRE_MITIGATED;
++		set_bit(BHB_INSN, &system_bhb_mitigations);
++	} else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
++		/*
++		 * Ensure KVM uses the indirect vector which will have the
++		 * branchy-loop added. A57/A72-r0 will already have selected
++		 * the spectre-indirect vector, which is sufficient for BHB
++		 * too.
++		 */
++		if (!data->slot)
++			data->slot = HYP_VECTOR_INDIRECT;
++
++		this_cpu_set_vectors(EL1_VECTOR_BHB_LOOP);
++		state = SPECTRE_MITIGATED;
++		set_bit(BHB_LOOP, &system_bhb_mitigations);
++	} else if (is_spectre_bhb_fw_affected(SCOPE_LOCAL_CPU)) {
++		fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
++		if (fw_state == SPECTRE_MITIGATED) {
++			/*
++			 * Ensure KVM uses one of the spectre bp_hardening
++			 * vectors. The indirect vector doesn't include the EL3
++			 * call, so needs upgrading to
++			 * HYP_VECTOR_SPECTRE_INDIRECT.
++			 */
++			if (!data->slot || data->slot == HYP_VECTOR_INDIRECT)
++				data->slot += 1;
++
++			this_cpu_set_vectors(EL1_VECTOR_BHB_FW);
++
++			/*
++			 * The WA3 call in the vectors supersedes the WA1 call
++			 * made during context-switch. Uninstall any firmware
++			 * bp_hardening callback.
++			 */
++			cpu_cb = spectre_v2_get_sw_mitigation_cb();
++			if (__this_cpu_read(bp_hardening_data.fn) != cpu_cb)
++				__this_cpu_write(bp_hardening_data.fn, NULL);
++
++			state = SPECTRE_MITIGATED;
++			set_bit(BHB_FW, &system_bhb_mitigations);
++		}
++	}
++
++	update_mitigation_state(&spectre_bhb_state, state);
++}
++
++/* Patched to NOP when enabled */
++void noinstr spectre_bhb_patch_loop_mitigation_enable(struct alt_instr *alt,
++						     __le32 *origptr,
++						      __le32 *updptr, int nr_inst)
++{
++	BUG_ON(nr_inst != 1);
++
++	if (test_bit(BHB_LOOP, &system_bhb_mitigations))
++		*updptr++ = cpu_to_le32(aarch64_insn_gen_nop());
++}
++
++/* Patched to NOP when enabled */
++void noinstr spectre_bhb_patch_fw_mitigation_enabled(struct alt_instr *alt,
++						   __le32 *origptr,
++						   __le32 *updptr, int nr_inst)
++{
++	BUG_ON(nr_inst != 1);
++
++	if (test_bit(BHB_FW, &system_bhb_mitigations))
++		*updptr++ = cpu_to_le32(aarch64_insn_gen_nop());
++}
++
++/* Patched to correct the immediate */
++void noinstr spectre_bhb_patch_loop_iter(struct alt_instr *alt,
++				   __le32 *origptr, __le32 *updptr, int nr_inst)
++{
++	u8 rd;
++	u32 insn;
++	u16 loop_count = spectre_bhb_loop_affected(SCOPE_SYSTEM);
++
++	BUG_ON(nr_inst != 1); /* MOV -> MOV */
++
++	if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY))
++		return;
++
++	insn = le32_to_cpu(*origptr);
++	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn);
++	insn = aarch64_insn_gen_movewide(rd, loop_count, 0,
++					 AARCH64_INSN_VARIANT_64BIT,
++					 AARCH64_INSN_MOVEWIDE_ZERO);
++	*updptr++ = cpu_to_le32(insn);
++}
++
++/* Patched to mov WA3 when supported */
++void noinstr spectre_bhb_patch_wa3(struct alt_instr *alt,
++				   __le32 *origptr, __le32 *updptr, int nr_inst)
++{
++	u8 rd;
++	u32 insn;
++
++	BUG_ON(nr_inst != 1); /* MOV -> MOV */
++
++	if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY) ||
++	    !test_bit(BHB_FW, &system_bhb_mitigations))
++		return;
++
++	insn = le32_to_cpu(*origptr);
++	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn);
++
++	insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_ORR,
++						  AARCH64_INSN_VARIANT_32BIT,
++						  AARCH64_INSN_REG_ZR, rd,
++						  ARM_SMCCC_ARCH_WORKAROUND_3);
++	if (WARN_ON_ONCE(insn == AARCH64_BREAK_FAULT))
++		return;
++
++	*updptr++ = cpu_to_le32(insn);
++}
++
++/* Patched to NOP when not supported */
++void __init spectre_bhb_patch_clearbhb(struct alt_instr *alt,
++				   __le32 *origptr, __le32 *updptr, int nr_inst)
++{
++	BUG_ON(nr_inst != 2);
++
++	if (test_bit(BHB_INSN, &system_bhb_mitigations))
++		return;
++
++	*updptr++ = cpu_to_le32(aarch64_insn_gen_nop());
++	*updptr++ = cpu_to_le32(aarch64_insn_gen_nop());
++}
++
++#ifdef CONFIG_BPF_SYSCALL
++#define EBPF_WARN "Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!\n"
++void unpriv_ebpf_notify(int new_state)
++{
++	if (spectre_v2_state == SPECTRE_VULNERABLE ||
++	    spectre_bhb_state != SPECTRE_MITIGATED)
++		return;
++
++	if (!new_state)
++		pr_err("WARNING: %s", EBPF_WARN);
++}
++#endif
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 50bab186c49b5..edaf0faf766f0 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -341,7 +341,7 @@ ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
+ 	<= SZ_4K, "Hibernate exit text too big or misaligned")
+ #endif
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+-ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
++ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) <= 3*PAGE_SIZE,
+ 	"Entry trampoline text too big")
+ #endif
+ #ifdef CONFIG_KVM
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index b2222d8eb0b55..1eadf9088880a 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1464,10 +1464,7 @@ static int kvm_init_vector_slots(void)
+ 	base = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
+ 	kvm_init_vector_slot(base, HYP_VECTOR_SPECTRE_DIRECT);
+ 
+-	if (!cpus_have_const_cap(ARM64_SPECTRE_V3A))
+-		return 0;
+-
+-	if (!has_vhe()) {
++	if (kvm_system_needs_idmapped_vectors() && !has_vhe()) {
+ 		err = create_hyp_exec_mappings(__pa_symbol(__bp_harden_hyp_vecs),
+ 					       __BP_HARDEN_HYP_VECS_SZ, &base);
+ 		if (err)
+diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
+index b6b6801d96d5a..7839d075729b1 100644
+--- a/arch/arm64/kvm/hyp/hyp-entry.S
++++ b/arch/arm64/kvm/hyp/hyp-entry.S
+@@ -62,6 +62,10 @@ el1_sync:				// Guest trapped into EL2
+ 	/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
+ 	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
+ 			  ARM_SMCCC_ARCH_WORKAROUND_2)
++	cbz	w1, wa_epilogue
++
++	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_2 ^ \
++			  ARM_SMCCC_ARCH_WORKAROUND_3)
+ 	cbnz	w1, el1_trap
+ 
+ wa_epilogue:
+@@ -192,7 +196,10 @@ SYM_CODE_END(__kvm_hyp_vector)
+ 	sub	sp, sp, #(8 * 4)
+ 	stp	x2, x3, [sp, #(8 * 0)]
+ 	stp	x0, x1, [sp, #(8 * 2)]
++	alternative_cb spectre_bhb_patch_wa3
++	/* Patched to mov WA3 when supported */
+ 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_1
++	alternative_cb_end
+ 	smc	#0
+ 	ldp	x2, x3, [sp, #(8 * 0)]
+ 	add	sp, sp, #(8 * 2)
+@@ -205,6 +212,8 @@ SYM_CODE_END(__kvm_hyp_vector)
+ 	spectrev2_smccc_wa1_smc
+ 	.else
+ 	stp	x0, x1, [sp, #-16]!
++	mitigate_spectre_bhb_loop	x0
++	mitigate_spectre_bhb_clear_insn
+ 	.endif
+ 	.if \indirect != 0
+ 	alternative_cb  kvm_patch_vector_branch
+diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
+index 2fabeceb889a9..5146fb1705054 100644
+--- a/arch/arm64/kvm/hyp/nvhe/mm.c
++++ b/arch/arm64/kvm/hyp/nvhe/mm.c
+@@ -146,8 +146,10 @@ int hyp_map_vectors(void)
+ 	phys_addr_t phys;
+ 	void *bp_base;
+ 
+-	if (!cpus_have_const_cap(ARM64_SPECTRE_V3A))
++	if (!kvm_system_needs_idmapped_vectors()) {
++		__hyp_bp_vect_base = __bp_harden_hyp_vecs;
+ 		return 0;
++	}
+ 
+ 	phys = __hyp_pa(__bp_harden_hyp_vecs);
+ 	bp_base = (void *)__pkvm_create_private_mapping(phys,
+diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
+index fbb26b93c3477..54af47005e456 100644
+--- a/arch/arm64/kvm/hyp/vhe/switch.c
++++ b/arch/arm64/kvm/hyp/vhe/switch.c
+@@ -10,6 +10,7 @@
+ #include <linux/kvm_host.h>
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
++#include <linux/percpu.h>
+ #include <uapi/linux/psci.h>
+ 
+ #include <kvm/arm_psci.h>
+@@ -25,6 +26,7 @@
+ #include <asm/debug-monitors.h>
+ #include <asm/processor.h>
+ #include <asm/thread_info.h>
++#include <asm/vectors.h>
+ 
+ /* VHE specific context */
+ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
+@@ -68,7 +70,7 @@ NOKPROBE_SYMBOL(__activate_traps);
+ 
+ static void __deactivate_traps(struct kvm_vcpu *vcpu)
+ {
+-	extern char vectors[];	/* kernel exception vectors */
++	const char *host_vectors = vectors;
+ 
+ 	___deactivate_traps(vcpu);
+ 
+@@ -82,7 +84,10 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
+ 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
+ 
+ 	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
+-	write_sysreg(vectors, vbar_el1);
++
++	if (!arm64_kernel_unmapped_at_el0())
++		host_vectors = __this_cpu_read(this_cpu_vector);
++	write_sysreg(host_vectors, vbar_el1);
+ }
+ NOKPROBE_SYMBOL(__deactivate_traps);
+ 
+diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
+index 30da78f72b3b3..202b8c455724b 100644
+--- a/arch/arm64/kvm/hypercalls.c
++++ b/arch/arm64/kvm/hypercalls.c
+@@ -107,6 +107,18 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
+ 				break;
+ 			}
+ 			break;
++		case ARM_SMCCC_ARCH_WORKAROUND_3:
++			switch (arm64_get_spectre_bhb_state()) {
++			case SPECTRE_VULNERABLE:
++				break;
++			case SPECTRE_MITIGATED:
++				val[0] = SMCCC_RET_SUCCESS;
++				break;
++			case SPECTRE_UNAFFECTED:
++				val[0] = SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED;
++				break;
++			}
++			break;
+ 		case ARM_SMCCC_HV_PV_TIME_FEATURES:
+ 			val[0] = SMCCC_RET_SUCCESS;
+ 			break;
+diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
+index 74c47d4202534..44efe12dfc066 100644
+--- a/arch/arm64/kvm/psci.c
++++ b/arch/arm64/kvm/psci.c
+@@ -406,7 +406,7 @@ int kvm_psci_call(struct kvm_vcpu *vcpu)
+ 
+ int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
+ {
+-	return 3;		/* PSCI version and two workaround registers */
++	return 4;		/* PSCI version and three workaround registers */
+ }
+ 
+ int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+@@ -420,6 +420,9 @@ int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+ 	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++))
+ 		return -EFAULT;
+ 
++	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3, uindices++))
++		return -EFAULT;
++
+ 	return 0;
+ }
+ 
+@@ -459,6 +462,17 @@ static int get_kernel_wa_level(u64 regid)
+ 		case SPECTRE_VULNERABLE:
+ 			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
+ 		}
++		break;
++	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
++		switch (arm64_get_spectre_bhb_state()) {
++		case SPECTRE_VULNERABLE:
++			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL;
++		case SPECTRE_MITIGATED:
++			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL;
++		case SPECTRE_UNAFFECTED:
++			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED;
++		}
++		return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL;
+ 	}
+ 
+ 	return -EINVAL;
+@@ -475,6 +489,7 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 		break;
+ 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+ 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
++	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
+ 		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
+ 		break;
+ 	default:
+@@ -520,6 +535,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	}
+ 
+ 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
++	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
+ 		if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
+ 			return -EINVAL;
+ 
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index e3ec1a44f94d8..4dc2fba316fff 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1525,7 +1525,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ 	/* CRm=6 */
+ 	ID_SANITISED(ID_AA64ISAR0_EL1),
+ 	ID_SANITISED(ID_AA64ISAR1_EL1),
+-	ID_UNALLOCATED(6,2),
++	ID_SANITISED(ID_AA64ISAR2_EL1),
+ 	ID_UNALLOCATED(6,3),
+ 	ID_UNALLOCATED(6,4),
+ 	ID_UNALLOCATED(6,5),
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index acfae9b41cc8c..49abbf43bf355 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -617,6 +617,8 @@ early_param("rodata", parse_rodata);
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ static int __init map_entry_trampoline(void)
+ {
++	int i;
++
+ 	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
+ 	phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
+ 
+@@ -625,11 +627,15 @@ static int __init map_entry_trampoline(void)
+ 
+ 	/* Map only the text into the trampoline page table */
+ 	memset(tramp_pg_dir, 0, PGD_SIZE);
+-	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+-			     prot, __pgd_pgtable_alloc, 0);
++	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS,
++			     entry_tramp_text_size(), prot,
++			     __pgd_pgtable_alloc, NO_BLOCK_MAPPINGS);
+ 
+ 	/* Map both the text and data into the kernel page table */
+-	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
++	for (i = 0; i < DIV_ROUND_UP(entry_tramp_text_size(), PAGE_SIZE); i++)
++		__set_fixmap(FIX_ENTRY_TRAMP_TEXT1 - i,
++			     pa_start + i * PAGE_SIZE, prot);
++
+ 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ 		extern char __entry_tramp_data_start[];
+ 
+diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
+index 9c65b1e25a965..cea7533cb3043 100644
+--- a/arch/arm64/tools/cpucaps
++++ b/arch/arm64/tools/cpucaps
+@@ -44,6 +44,7 @@ MTE_ASYMM
+ SPECTRE_V2
+ SPECTRE_V3A
+ SPECTRE_V4
++SPECTRE_BHB
+ SSBS
+ SVE
+ UNMAP_KERNEL_AT_EL0
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index d5b5f2ab87a0b..cc8570f738982 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -204,7 +204,7 @@
+ /* FREE!                                ( 7*32+10) */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+ #define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+-#define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
+ #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
+ #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
+ #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index cc74dc584836e..acbaeaf83b61a 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -84,7 +84,7 @@
+ #ifdef CONFIG_RETPOLINE
+ 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \
+ 		      __stringify(jmp __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \
+-		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_AMD
++		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE
+ #else
+ 	jmp	*%\reg
+ #endif
+@@ -94,7 +94,7 @@
+ #ifdef CONFIG_RETPOLINE
+ 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *%\reg), \
+ 		      __stringify(call __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \
+-		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_AMD
++		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_LFENCE
+ #else
+ 	call	*%\reg
+ #endif
+@@ -146,7 +146,7 @@ extern retpoline_thunk_t __x86_indirect_thunk_array[];
+ 	"lfence;\n"						\
+ 	ANNOTATE_RETPOLINE_SAFE					\
+ 	"call *%[thunk_target]\n",				\
+-	X86_FEATURE_RETPOLINE_AMD)
++	X86_FEATURE_RETPOLINE_LFENCE)
+ 
+ # define THUNK_TARGET(addr) [thunk_target] "r" (addr)
+ 
+@@ -176,7 +176,7 @@ extern retpoline_thunk_t __x86_indirect_thunk_array[];
+ 	"lfence;\n"						\
+ 	ANNOTATE_RETPOLINE_SAFE					\
+ 	"call *%[thunk_target]\n",				\
+-	X86_FEATURE_RETPOLINE_AMD)
++	X86_FEATURE_RETPOLINE_LFENCE)
+ 
+ # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
+ #endif
+@@ -188,9 +188,11 @@ extern retpoline_thunk_t __x86_indirect_thunk_array[];
+ /* The Spectre V2 mitigation variants */
+ enum spectre_v2_mitigation {
+ 	SPECTRE_V2_NONE,
+-	SPECTRE_V2_RETPOLINE_GENERIC,
+-	SPECTRE_V2_RETPOLINE_AMD,
+-	SPECTRE_V2_IBRS_ENHANCED,
++	SPECTRE_V2_RETPOLINE,
++	SPECTRE_V2_LFENCE,
++	SPECTRE_V2_EIBRS,
++	SPECTRE_V2_EIBRS_RETPOLINE,
++	SPECTRE_V2_EIBRS_LFENCE,
+ };
+ 
+ /* The indirect branch speculation control variants */
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 23fb4d51a5da6..563a3de72477f 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -389,7 +389,7 @@ static int emit_indirect(int op, int reg, u8 *bytes)
+  *
+  *   CALL *%\reg
+  *
+- * It also tries to inline spectre_v2=retpoline,amd when size permits.
++ * It also tries to inline spectre_v2=retpoline,lfence when size permits.
+  */
+ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
+ {
+@@ -407,7 +407,7 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
+ 	BUG_ON(reg == 4);
+ 
+ 	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE) &&
+-	    !cpu_feature_enabled(X86_FEATURE_RETPOLINE_AMD))
++	    !cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE))
+ 		return -1;
+ 
+ 	op = insn->opcode.bytes[0];
+@@ -438,9 +438,9 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
+ 	}
+ 
+ 	/*
+-	 * For RETPOLINE_AMD: prepend the indirect CALL/JMP with an LFENCE.
++	 * For RETPOLINE_LFENCE: prepend the indirect CALL/JMP with an LFENCE.
+ 	 */
+-	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_AMD)) {
++	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
+ 		bytes[i++] = 0x0f;
+ 		bytes[i++] = 0xae;
+ 		bytes[i++] = 0xe8; /* LFENCE */
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 1c1f218a701d3..6296e1ebed1db 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -16,6 +16,7 @@
+ #include <linux/prctl.h>
+ #include <linux/sched/smt.h>
+ #include <linux/pgtable.h>
++#include <linux/bpf.h>
+ 
+ #include <asm/spec-ctrl.h>
+ #include <asm/cmdline.h>
+@@ -650,6 +651,32 @@ static inline const char *spectre_v2_module_string(void)
+ static inline const char *spectre_v2_module_string(void) { return ""; }
+ #endif
+ 
++#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
++#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
++#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
++
++#ifdef CONFIG_BPF_SYSCALL
++void unpriv_ebpf_notify(int new_state)
++{
++	if (new_state)
++		return;
++
++	/* Unprivileged eBPF is enabled */
++
++	switch (spectre_v2_enabled) {
++	case SPECTRE_V2_EIBRS:
++		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
++		break;
++	case SPECTRE_V2_EIBRS_LFENCE:
++		if (sched_smt_active())
++			pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
++		break;
++	default:
++		break;
++	}
++}
++#endif
++
+ static inline bool match_option(const char *arg, int arglen, const char *opt)
+ {
+ 	int len = strlen(opt);
+@@ -664,7 +691,10 @@ enum spectre_v2_mitigation_cmd {
+ 	SPECTRE_V2_CMD_FORCE,
+ 	SPECTRE_V2_CMD_RETPOLINE,
+ 	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
+-	SPECTRE_V2_CMD_RETPOLINE_AMD,
++	SPECTRE_V2_CMD_RETPOLINE_LFENCE,
++	SPECTRE_V2_CMD_EIBRS,
++	SPECTRE_V2_CMD_EIBRS_RETPOLINE,
++	SPECTRE_V2_CMD_EIBRS_LFENCE,
+ };
+ 
+ enum spectre_v2_user_cmd {
+@@ -737,6 +767,13 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	return SPECTRE_V2_USER_CMD_AUTO;
+ }
+ 
++static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
++{
++	return (mode == SPECTRE_V2_EIBRS ||
++		mode == SPECTRE_V2_EIBRS_RETPOLINE ||
++		mode == SPECTRE_V2_EIBRS_LFENCE);
++}
++
+ static void __init
+ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ {
+@@ -804,7 +841,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ 	 */
+ 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ 	    !smt_possible ||
+-	    spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
++	    spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ 		return;
+ 
+ 	/*
+@@ -824,9 +861,11 @@ set_mode:
+ 
+ static const char * const spectre_v2_strings[] = {
+ 	[SPECTRE_V2_NONE]			= "Vulnerable",
+-	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+-	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
+-	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
++	[SPECTRE_V2_RETPOLINE]			= "Mitigation: Retpolines",
++	[SPECTRE_V2_LFENCE]			= "Mitigation: LFENCE",
++	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced IBRS",
++	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced IBRS + LFENCE",
++	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced IBRS + Retpolines",
+ };
+ 
+ static const struct {
+@@ -837,8 +876,12 @@ static const struct {
+ 	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
+ 	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
+ 	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
+-	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
++	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_LFENCE,  false },
++	{ "retpoline,lfence",	SPECTRE_V2_CMD_RETPOLINE_LFENCE,  false },
+ 	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
++	{ "eibrs",		SPECTRE_V2_CMD_EIBRS,		  false },
++	{ "eibrs,lfence",	SPECTRE_V2_CMD_EIBRS_LFENCE,	  false },
++	{ "eibrs,retpoline",	SPECTRE_V2_CMD_EIBRS_RETPOLINE,	  false },
+ 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
+ };
+ 
+@@ -875,10 +918,30 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	}
+ 
+ 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
+-	     cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
+-	     cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
++	     cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
+ 	    !IS_ENABLED(CONFIG_RETPOLINE)) {
+-		pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option);
++		pr_err("%s selected but not compiled in. Switching to AUTO select\n",
++		       mitigation_options[i].option);
++		return SPECTRE_V2_CMD_AUTO;
++	}
++
++	if ((cmd == SPECTRE_V2_CMD_EIBRS ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
++	    !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
++		pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
++		       mitigation_options[i].option);
++		return SPECTRE_V2_CMD_AUTO;
++	}
++
++	if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
++	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) &&
++	    !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
++		pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n",
++		       mitigation_options[i].option);
+ 		return SPECTRE_V2_CMD_AUTO;
+ 	}
+ 
+@@ -887,6 +950,16 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
++static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
++{
++	if (!IS_ENABLED(CONFIG_RETPOLINE)) {
++		pr_err("Kernel not compiled with retpoline; no mitigation available!");
++		return SPECTRE_V2_NONE;
++	}
++
++	return SPECTRE_V2_RETPOLINE;
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -907,49 +980,64 @@ static void __init spectre_v2_select_mitigation(void)
+ 	case SPECTRE_V2_CMD_FORCE:
+ 	case SPECTRE_V2_CMD_AUTO:
+ 		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
+-			mode = SPECTRE_V2_IBRS_ENHANCED;
+-			/* Force it so VMEXIT will restore correctly */
+-			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+-			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+-			goto specv2_set_mode;
++			mode = SPECTRE_V2_EIBRS;
++			break;
+ 		}
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_auto;
++
++		mode = spectre_v2_select_retpoline();
+ 		break;
+-	case SPECTRE_V2_CMD_RETPOLINE_AMD:
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_amd;
++
++	case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
++		pr_err(SPECTRE_V2_LFENCE_MSG);
++		mode = SPECTRE_V2_LFENCE;
+ 		break;
++
+ 	case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_generic;
++		mode = SPECTRE_V2_RETPOLINE;
+ 		break;
++
+ 	case SPECTRE_V2_CMD_RETPOLINE:
+-		if (IS_ENABLED(CONFIG_RETPOLINE))
+-			goto retpoline_auto;
++		mode = spectre_v2_select_retpoline();
++		break;
++
++	case SPECTRE_V2_CMD_EIBRS:
++		mode = SPECTRE_V2_EIBRS;
++		break;
++
++	case SPECTRE_V2_CMD_EIBRS_LFENCE:
++		mode = SPECTRE_V2_EIBRS_LFENCE;
++		break;
++
++	case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
++		mode = SPECTRE_V2_EIBRS_RETPOLINE;
+ 		break;
+ 	}
+-	pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
+-	return;
+ 
+-retpoline_auto:
+-	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+-	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+-	retpoline_amd:
+-		if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
+-			pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
+-			goto retpoline_generic;
+-		}
+-		mode = SPECTRE_V2_RETPOLINE_AMD;
+-		setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
+-		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+-	} else {
+-	retpoline_generic:
+-		mode = SPECTRE_V2_RETPOLINE_GENERIC;
++	if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
++		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
++
++	if (spectre_v2_in_eibrs_mode(mode)) {
++		/* Force it so VMEXIT will restore correctly */
++		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
++		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++	}
++
++	switch (mode) {
++	case SPECTRE_V2_NONE:
++	case SPECTRE_V2_EIBRS:
++		break;
++
++	case SPECTRE_V2_LFENCE:
++	case SPECTRE_V2_EIBRS_LFENCE:
++		setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
++		fallthrough;
++
++	case SPECTRE_V2_RETPOLINE:
++	case SPECTRE_V2_EIBRS_RETPOLINE:
+ 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
++		break;
+ 	}
+ 
+-specv2_set_mode:
+ 	spectre_v2_enabled = mode;
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+@@ -975,7 +1063,7 @@ specv2_set_mode:
+ 	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
+ 	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
++	if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
+@@ -1045,6 +1133,10 @@ void cpu_bugs_smt_update(void)
+ {
+ 	mutex_lock(&spec_ctrl_mutex);
+ 
++	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
++	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
++		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
++
+ 	switch (spectre_v2_user_stibp) {
+ 	case SPECTRE_V2_USER_NONE:
+ 		break;
+@@ -1684,7 +1776,7 @@ static ssize_t tsx_async_abort_show_state(char *buf)
+ 
+ static char *stibp_state(void)
+ {
+-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
++	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ 		return "";
+ 
+ 	switch (spectre_v2_user_stibp) {
+@@ -1714,6 +1806,27 @@ static char *ibpb_state(void)
+ 	return "";
+ }
+ 
++static ssize_t spectre_v2_show_state(char *buf)
++{
++	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
++		return sprintf(buf, "Vulnerable: LFENCE\n");
++
++	if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
++		return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
++
++	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
++	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
++		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
++
++	return sprintf(buf, "%s%s%s%s%s%s\n",
++		       spectre_v2_strings[spectre_v2_enabled],
++		       ibpb_state(),
++		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
++		       stibp_state(),
++		       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
++		       spectre_v2_module_string());
++}
++
+ static ssize_t srbds_show_state(char *buf)
+ {
+ 	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
+@@ -1739,12 +1852,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
+ 
+ 	case X86_BUG_SPECTRE_V2:
+-		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+-			       ibpb_state(),
+-			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+-			       stibp_state(),
+-			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+-			       spectre_v2_module_string());
++		return spectre_v2_show_state(buf);
+ 
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index cf0b39f97adc8..6565c6f861607 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -34,7 +34,7 @@ SYM_INNER_LABEL(__x86_indirect_thunk_\reg, SYM_L_GLOBAL)
+ 
+ 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \
+ 		      __stringify(RETPOLINE \reg), X86_FEATURE_RETPOLINE, \
+-		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_AMD
++		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE
+ 
+ .endm
+ 
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index b87d98efd2240..cd3959c0be50e 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -394,7 +394,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
+ 	u8 *prog = *pprog;
+ 
+ #ifdef CONFIG_RETPOLINE
+-	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_AMD)) {
++	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
+ 		EMIT_LFENCE();
+ 		EMIT2(0xFF, 0xE0 + reg);
+ 	} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 1712990bf2ad8..b9c44e6c5e400 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2051,16 +2051,6 @@ bool acpi_ec_dispatch_gpe(void)
+ 	if (acpi_any_gpe_status_set(first_ec->gpe))
+ 		return true;
+ 
+-	/*
+-	 * Cancel the SCI wakeup and process all pending events in case there
+-	 * are any wakeup ones in there.
+-	 *
+-	 * Note that if any non-EC GPEs are active at this point, the SCI will
+-	 * retrigger after the rearming in acpi_s2idle_wake(), so no events
+-	 * should be missed by canceling the wakeup here.
+-	 */
+-	pm_system_cancel_wakeup();
+-
+ 	/*
+ 	 * Dispatch the EC GPE in-band, but do not report wakeup in any case
+ 	 * to allow the caller to process events properly after that.
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 8513410ca2fc2..9f237dc9d45fc 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -739,15 +739,21 @@ bool acpi_s2idle_wake(void)
+ 			return true;
+ 		}
+ 
+-		/*
+-		 * Check non-EC GPE wakeups and if there are none, cancel the
+-		 * SCI-related wakeup and dispatch the EC GPE.
+-		 */
++		/* Check non-EC GPE wakeups and dispatch the EC GPE. */
+ 		if (acpi_ec_dispatch_gpe()) {
+ 			pm_pr_dbg("ACPI non-EC GPE wakeup\n");
+ 			return true;
+ 		}
+ 
++		/*
++		 * Cancel the SCI wakeup and process all pending events in case
++		 * there are any wakeup ones in there.
++		 *
++		 * Note that if any non-EC GPEs are active at this point, the
++		 * SCI will retrigger after the rearming below, so no events
++		 * should be missed by canceling the wakeup here.
++		 */
++		pm_system_cancel_wakeup();
+ 		acpi_os_wait_events_complete();
+ 
+ 		/*
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 2a5b14230986a..7d798f8216be6 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1291,7 +1291,8 @@ free_shadow:
+ 			rinfo->ring_ref[i] = GRANT_INVALID_REF;
+ 		}
+ 	}
+-	free_pages((unsigned long)rinfo->ring.sring, get_order(info->nr_ring_pages * XEN_PAGE_SIZE));
++	free_pages_exact(rinfo->ring.sring,
++			 info->nr_ring_pages * XEN_PAGE_SIZE);
+ 	rinfo->ring.sring = NULL;
+ 
+ 	if (rinfo->irq)
+@@ -1375,9 +1376,15 @@ static int blkif_get_final_status(enum blk_req_status s1,
+ 	return BLKIF_RSP_OKAY;
+ }
+ 
+-static bool blkif_completion(unsigned long *id,
+-			     struct blkfront_ring_info *rinfo,
+-			     struct blkif_response *bret)
++/*
++ * Return values:
++ *  1 response processed.
++ *  0 missing further responses.
++ * -1 error while processing.
++ */
++static int blkif_completion(unsigned long *id,
++			    struct blkfront_ring_info *rinfo,
++			    struct blkif_response *bret)
+ {
+ 	int i = 0;
+ 	struct scatterlist *sg;
+@@ -1400,7 +1407,7 @@ static bool blkif_completion(unsigned long *id,
+ 
+ 		/* Wait the second response if not yet here. */
+ 		if (s2->status < REQ_DONE)
+-			return false;
++			return 0;
+ 
+ 		bret->status = blkif_get_final_status(s->status,
+ 						      s2->status);
+@@ -1451,42 +1458,43 @@ static bool blkif_completion(unsigned long *id,
+ 	}
+ 	/* Add the persistent grant into the list of free grants */
+ 	for (i = 0; i < num_grant; i++) {
+-		if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
++		if (!gnttab_try_end_foreign_access(s->grants_used[i]->gref)) {
+ 			/*
+ 			 * If the grant is still mapped by the backend (the
+ 			 * backend has chosen to make this grant persistent)
+ 			 * we add it at the head of the list, so it will be
+ 			 * reused first.
+ 			 */
+-			if (!info->feature_persistent)
+-				pr_alert_ratelimited("backed has not unmapped grant: %u\n",
+-						     s->grants_used[i]->gref);
++			if (!info->feature_persistent) {
++				pr_alert("backed has not unmapped grant: %u\n",
++					 s->grants_used[i]->gref);
++				return -1;
++			}
+ 			list_add(&s->grants_used[i]->node, &rinfo->grants);
+ 			rinfo->persistent_gnts_c++;
+ 		} else {
+ 			/*
+-			 * If the grant is not mapped by the backend we end the
+-			 * foreign access and add it to the tail of the list,
+-			 * so it will not be picked again unless we run out of
+-			 * persistent grants.
++			 * If the grant is not mapped by the backend we add it
++			 * to the tail of the list, so it will not be picked
++			 * again unless we run out of persistent grants.
+ 			 */
+-			gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
+ 			s->grants_used[i]->gref = GRANT_INVALID_REF;
+ 			list_add_tail(&s->grants_used[i]->node, &rinfo->grants);
+ 		}
+ 	}
+ 	if (s->req.operation == BLKIF_OP_INDIRECT) {
+ 		for (i = 0; i < INDIRECT_GREFS(num_grant); i++) {
+-			if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) {
+-				if (!info->feature_persistent)
+-					pr_alert_ratelimited("backed has not unmapped grant: %u\n",
+-							     s->indirect_grants[i]->gref);
++			if (!gnttab_try_end_foreign_access(s->indirect_grants[i]->gref)) {
++				if (!info->feature_persistent) {
++					pr_alert("backed has not unmapped grant: %u\n",
++						 s->indirect_grants[i]->gref);
++					return -1;
++				}
+ 				list_add(&s->indirect_grants[i]->node, &rinfo->grants);
+ 				rinfo->persistent_gnts_c++;
+ 			} else {
+ 				struct page *indirect_page;
+ 
+-				gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
+ 				/*
+ 				 * Add the used indirect page back to the list of
+ 				 * available pages for indirect grefs.
+@@ -1501,7 +1509,7 @@ static bool blkif_completion(unsigned long *id,
+ 		}
+ 	}
+ 
+-	return true;
++	return 1;
+ }
+ 
+ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+@@ -1567,12 +1575,17 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 		}
+ 
+ 		if (bret.operation != BLKIF_OP_DISCARD) {
++			int ret;
++
+ 			/*
+ 			 * We may need to wait for an extra response if the
+ 			 * I/O request is split in 2
+ 			 */
+-			if (!blkif_completion(&id, rinfo, &bret))
++			ret = blkif_completion(&id, rinfo, &bret);
++			if (!ret)
+ 				continue;
++			if (unlikely(ret < 0))
++				goto err;
+ 		}
+ 
+ 		if (add_id_to_freelist(rinfo, id)) {
+@@ -1679,8 +1692,7 @@ static int setup_blkring(struct xenbus_device *dev,
+ 	for (i = 0; i < info->nr_ring_pages; i++)
+ 		rinfo->ring_ref[i] = GRANT_INVALID_REF;
+ 
+-	sring = (struct blkif_sring *)__get_free_pages(GFP_NOIO | __GFP_HIGH,
+-						       get_order(ring_size));
++	sring = alloc_pages_exact(ring_size, GFP_NOIO);
+ 	if (!sring) {
+ 		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
+ 		return -ENOMEM;
+@@ -1690,7 +1702,7 @@ static int setup_blkring(struct xenbus_device *dev,
+ 
+ 	err = xenbus_grant_ring(dev, rinfo->ring.sring, info->nr_ring_pages, gref);
+ 	if (err < 0) {
+-		free_pages((unsigned long)sring, get_order(ring_size));
++		free_pages_exact(sring, ring_size);
+ 		rinfo->ring.sring = NULL;
+ 		goto fail;
+ 	}
+@@ -2536,11 +2548,10 @@ static void purge_persistent_grants(struct blkfront_info *info)
+ 		list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants,
+ 					 node) {
+ 			if (gnt_list_entry->gref == GRANT_INVALID_REF ||
+-			    gnttab_query_foreign_access(gnt_list_entry->gref))
++			    !gnttab_try_end_foreign_access(gnt_list_entry->gref))
+ 				continue;
+ 
+ 			list_del(&gnt_list_entry->node);
+-			gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL);
+ 			rinfo->persistent_gnts_c--;
+ 			gnt_list_entry->gref = GRANT_INVALID_REF;
+ 			list_add_tail(&gnt_list_entry->node, &rinfo->grants);
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 039ca902c5bf2..d2d753960c4db 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -424,14 +424,12 @@ static bool xennet_tx_buf_gc(struct netfront_queue *queue)
+ 			queue->tx_link[id] = TX_LINK_NONE;
+ 			skb = queue->tx_skbs[id];
+ 			queue->tx_skbs[id] = NULL;
+-			if (unlikely(gnttab_query_foreign_access(
+-				queue->grant_tx_ref[id]) != 0)) {
++			if (unlikely(!gnttab_end_foreign_access_ref(
++				queue->grant_tx_ref[id], GNTMAP_readonly))) {
+ 				dev_alert(dev,
+ 					  "Grant still in use by backend domain\n");
+ 				goto err;
+ 			}
+-			gnttab_end_foreign_access_ref(
+-				queue->grant_tx_ref[id], GNTMAP_readonly);
+ 			gnttab_release_grant_reference(
+ 				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+ 			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+@@ -990,7 +988,6 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ 	struct device *dev = &queue->info->netdev->dev;
+ 	struct bpf_prog *xdp_prog;
+ 	struct xdp_buff xdp;
+-	unsigned long ret;
+ 	int slots = 1;
+ 	int err = 0;
+ 	u32 verdict;
+@@ -1032,8 +1029,13 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ 			goto next;
+ 		}
+ 
+-		ret = gnttab_end_foreign_access_ref(ref, 0);
+-		BUG_ON(!ret);
++		if (!gnttab_end_foreign_access_ref(ref, 0)) {
++			dev_alert(dev,
++				  "Grant still in use by backend domain\n");
++			queue->info->broken = true;
++			dev_alert(dev, "Disabled for further use\n");
++			return -EINVAL;
++		}
+ 
+ 		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
+ 
+@@ -1254,6 +1256,10 @@ static int xennet_poll(struct napi_struct *napi, int budget)
+ 					   &need_xdp_flush);
+ 
+ 		if (unlikely(err)) {
++			if (queue->info->broken) {
++				spin_unlock(&queue->rx_lock);
++				return 0;
++			}
+ err:
+ 			while ((skb = __skb_dequeue(&tmpq)))
+ 				__skb_queue_tail(&errq, skb);
+@@ -1918,7 +1924,7 @@ static int setup_netfront(struct xenbus_device *dev,
+ 			struct netfront_queue *queue, unsigned int feature_split_evtchn)
+ {
+ 	struct xen_netif_tx_sring *txs;
+-	struct xen_netif_rx_sring *rxs;
++	struct xen_netif_rx_sring *rxs = NULL;
+ 	grant_ref_t gref;
+ 	int err;
+ 
+@@ -1938,21 +1944,21 @@ static int setup_netfront(struct xenbus_device *dev,
+ 
+ 	err = xenbus_grant_ring(dev, txs, 1, &gref);
+ 	if (err < 0)
+-		goto grant_tx_ring_fail;
++		goto fail;
+ 	queue->tx_ring_ref = gref;
+ 
+ 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
+ 	if (!rxs) {
+ 		err = -ENOMEM;
+ 		xenbus_dev_fatal(dev, err, "allocating rx ring page");
+-		goto alloc_rx_ring_fail;
++		goto fail;
+ 	}
+ 	SHARED_RING_INIT(rxs);
+ 	FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
+ 
+ 	err = xenbus_grant_ring(dev, rxs, 1, &gref);
+ 	if (err < 0)
+-		goto grant_rx_ring_fail;
++		goto fail;
+ 	queue->rx_ring_ref = gref;
+ 
+ 	if (feature_split_evtchn)
+@@ -1965,22 +1971,28 @@ static int setup_netfront(struct xenbus_device *dev,
+ 		err = setup_netfront_single(queue);
+ 
+ 	if (err)
+-		goto alloc_evtchn_fail;
++		goto fail;
+ 
+ 	return 0;
+ 
+ 	/* If we fail to setup netfront, it is safe to just revoke access to
+ 	 * granted pages because backend is not accessing it at this point.
+ 	 */
+-alloc_evtchn_fail:
+-	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
+-grant_rx_ring_fail:
+-	free_page((unsigned long)rxs);
+-alloc_rx_ring_fail:
+-	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
+-grant_tx_ring_fail:
+-	free_page((unsigned long)txs);
+-fail:
++ fail:
++	if (queue->rx_ring_ref != GRANT_INVALID_REF) {
++		gnttab_end_foreign_access(queue->rx_ring_ref, 0,
++					  (unsigned long)rxs);
++		queue->rx_ring_ref = GRANT_INVALID_REF;
++	} else {
++		free_page((unsigned long)rxs);
++	}
++	if (queue->tx_ring_ref != GRANT_INVALID_REF) {
++		gnttab_end_foreign_access(queue->tx_ring_ref, 0,
++					  (unsigned long)txs);
++		queue->tx_ring_ref = GRANT_INVALID_REF;
++	} else {
++		free_page((unsigned long)txs);
++	}
+ 	return err;
+ }
+ 
+diff --git a/drivers/scsi/xen-scsifront.c b/drivers/scsi/xen-scsifront.c
+index 12c10a5e3d93a..7f421600cb66d 100644
+--- a/drivers/scsi/xen-scsifront.c
++++ b/drivers/scsi/xen-scsifront.c
+@@ -233,12 +233,11 @@ static void scsifront_gnttab_done(struct vscsifrnt_info *info,
+ 		return;
+ 
+ 	for (i = 0; i < shadow->nr_grants; i++) {
+-		if (unlikely(gnttab_query_foreign_access(shadow->gref[i]))) {
++		if (unlikely(!gnttab_try_end_foreign_access(shadow->gref[i]))) {
+ 			shost_printk(KERN_ALERT, info->host, KBUILD_MODNAME
+ 				     "grant still in use by backend\n");
+ 			BUG();
+ 		}
+-		gnttab_end_foreign_access(shadow->gref[i], 0, 0UL);
+ 	}
+ 
+ 	kfree(shadow->sg);
+diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
+index 3fa40c723e8e9..edb0acd0b8323 100644
+--- a/drivers/xen/gntalloc.c
++++ b/drivers/xen/gntalloc.c
+@@ -169,20 +169,14 @@ undo:
+ 		__del_gref(gref);
+ 	}
+ 
+-	/* It's possible for the target domain to map the just-allocated grant
+-	 * references by blindly guessing their IDs; if this is done, then
+-	 * __del_gref will leave them in the queue_gref list. They need to be
+-	 * added to the global list so that we can free them when they are no
+-	 * longer referenced.
+-	 */
+-	if (unlikely(!list_empty(&queue_gref)))
+-		list_splice_tail(&queue_gref, &gref_list);
+ 	mutex_unlock(&gref_mutex);
+ 	return rc;
+ }
+ 
+ static void __del_gref(struct gntalloc_gref *gref)
+ {
++	unsigned long addr;
++
+ 	if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
+ 		uint8_t *tmp = kmap(gref->page);
+ 		tmp[gref->notify.pgoff] = 0;
+@@ -196,21 +190,16 @@ static void __del_gref(struct gntalloc_gref *gref)
+ 	gref->notify.flags = 0;
+ 
+ 	if (gref->gref_id) {
+-		if (gnttab_query_foreign_access(gref->gref_id))
+-			return;
+-
+-		if (!gnttab_end_foreign_access_ref(gref->gref_id, 0))
+-			return;
+-
+-		gnttab_free_grant_reference(gref->gref_id);
++		if (gref->page) {
++			addr = (unsigned long)page_to_virt(gref->page);
++			gnttab_end_foreign_access(gref->gref_id, 0, addr);
++		} else
++			gnttab_free_grant_reference(gref->gref_id);
+ 	}
+ 
+ 	gref_size--;
+ 	list_del(&gref->next_gref);
+ 
+-	if (gref->page)
+-		__free_page(gref->page);
+-
+ 	kfree(gref);
+ }
+ 
+diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
+index 3729bea0c9895..5c83d41766c85 100644
+--- a/drivers/xen/grant-table.c
++++ b/drivers/xen/grant-table.c
+@@ -134,12 +134,9 @@ struct gnttab_ops {
+ 	 */
+ 	unsigned long (*end_foreign_transfer_ref)(grant_ref_t ref);
+ 	/*
+-	 * Query the status of a grant entry. Ref parameter is reference of
+-	 * queried grant entry, return value is the status of queried entry.
+-	 * Detailed status(writing/reading) can be gotten from the return value
+-	 * by bit operations.
++	 * Read the frame number related to a given grant reference.
+ 	 */
+-	int (*query_foreign_access)(grant_ref_t ref);
++	unsigned long (*read_frame)(grant_ref_t ref);
+ };
+ 
+ struct unmap_refs_callback_data {
+@@ -284,22 +281,6 @@ int gnttab_grant_foreign_access(domid_t domid, unsigned long frame,
+ }
+ EXPORT_SYMBOL_GPL(gnttab_grant_foreign_access);
+ 
+-static int gnttab_query_foreign_access_v1(grant_ref_t ref)
+-{
+-	return gnttab_shared.v1[ref].flags & (GTF_reading|GTF_writing);
+-}
+-
+-static int gnttab_query_foreign_access_v2(grant_ref_t ref)
+-{
+-	return grstatus[ref] & (GTF_reading|GTF_writing);
+-}
+-
+-int gnttab_query_foreign_access(grant_ref_t ref)
+-{
+-	return gnttab_interface->query_foreign_access(ref);
+-}
+-EXPORT_SYMBOL_GPL(gnttab_query_foreign_access);
+-
+ static int gnttab_end_foreign_access_ref_v1(grant_ref_t ref, int readonly)
+ {
+ 	u16 flags, nflags;
+@@ -353,6 +334,16 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly)
+ }
+ EXPORT_SYMBOL_GPL(gnttab_end_foreign_access_ref);
+ 
++static unsigned long gnttab_read_frame_v1(grant_ref_t ref)
++{
++	return gnttab_shared.v1[ref].frame;
++}
++
++static unsigned long gnttab_read_frame_v2(grant_ref_t ref)
++{
++	return gnttab_shared.v2[ref].full_page.frame;
++}
++
+ struct deferred_entry {
+ 	struct list_head list;
+ 	grant_ref_t ref;
+@@ -382,12 +373,9 @@ static void gnttab_handle_deferred(struct timer_list *unused)
+ 		spin_unlock_irqrestore(&gnttab_list_lock, flags);
+ 		if (_gnttab_end_foreign_access_ref(entry->ref, entry->ro)) {
+ 			put_free_entry(entry->ref);
+-			if (entry->page) {
+-				pr_debug("freeing g.e. %#x (pfn %#lx)\n",
+-					 entry->ref, page_to_pfn(entry->page));
+-				put_page(entry->page);
+-			} else
+-				pr_info("freeing g.e. %#x\n", entry->ref);
++			pr_debug("freeing g.e. %#x (pfn %#lx)\n",
++				 entry->ref, page_to_pfn(entry->page));
++			put_page(entry->page);
+ 			kfree(entry);
+ 			entry = NULL;
+ 		} else {
+@@ -412,9 +400,18 @@ static void gnttab_handle_deferred(struct timer_list *unused)
+ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
+ 				struct page *page)
+ {
+-	struct deferred_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
++	struct deferred_entry *entry;
++	gfp_t gfp = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
+ 	const char *what = KERN_WARNING "leaking";
+ 
++	entry = kmalloc(sizeof(*entry), gfp);
++	if (!page) {
++		unsigned long gfn = gnttab_interface->read_frame(ref);
++
++		page = pfn_to_page(gfn_to_pfn(gfn));
++		get_page(page);
++	}
++
+ 	if (entry) {
+ 		unsigned long flags;
+ 
+@@ -435,11 +432,21 @@ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
+ 	       what, ref, page ? page_to_pfn(page) : -1);
+ }
+ 
++int gnttab_try_end_foreign_access(grant_ref_t ref)
++{
++	int ret = _gnttab_end_foreign_access_ref(ref, 0);
++
++	if (ret)
++		put_free_entry(ref);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(gnttab_try_end_foreign_access);
++
+ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
+ 			       unsigned long page)
+ {
+-	if (gnttab_end_foreign_access_ref(ref, readonly)) {
+-		put_free_entry(ref);
++	if (gnttab_try_end_foreign_access(ref)) {
+ 		if (page != 0)
+ 			put_page(virt_to_page(page));
+ 	} else
+@@ -1417,7 +1424,7 @@ static const struct gnttab_ops gnttab_v1_ops = {
+ 	.update_entry			= gnttab_update_entry_v1,
+ 	.end_foreign_access_ref		= gnttab_end_foreign_access_ref_v1,
+ 	.end_foreign_transfer_ref	= gnttab_end_foreign_transfer_ref_v1,
+-	.query_foreign_access		= gnttab_query_foreign_access_v1,
++	.read_frame			= gnttab_read_frame_v1,
+ };
+ 
+ static const struct gnttab_ops gnttab_v2_ops = {
+@@ -1429,7 +1436,7 @@ static const struct gnttab_ops gnttab_v2_ops = {
+ 	.update_entry			= gnttab_update_entry_v2,
+ 	.end_foreign_access_ref		= gnttab_end_foreign_access_ref_v2,
+ 	.end_foreign_transfer_ref	= gnttab_end_foreign_transfer_ref_v2,
+-	.query_foreign_access		= gnttab_query_foreign_access_v2,
++	.read_frame			= gnttab_read_frame_v2,
+ };
+ 
+ static bool gnttab_need_v2(void)
+diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
+index 3c9ae156b597f..0ca351f30a6d2 100644
+--- a/drivers/xen/pvcalls-front.c
++++ b/drivers/xen/pvcalls-front.c
+@@ -337,8 +337,8 @@ static void free_active_ring(struct sock_mapping *map)
+ 	if (!map->active.ring)
+ 		return;
+ 
+-	free_pages((unsigned long)map->active.data.in,
+-			map->active.ring->ring_order);
++	free_pages_exact(map->active.data.in,
++			 PAGE_SIZE << map->active.ring->ring_order);
+ 	free_page((unsigned long)map->active.ring);
+ }
+ 
+@@ -352,8 +352,8 @@ static int alloc_active_ring(struct sock_mapping *map)
+ 		goto out;
+ 
+ 	map->active.ring->ring_order = PVCALLS_RING_ORDER;
+-	bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+-					PVCALLS_RING_ORDER);
++	bytes = alloc_pages_exact(PAGE_SIZE << PVCALLS_RING_ORDER,
++				  GFP_KERNEL | __GFP_ZERO);
+ 	if (!bytes)
+ 		goto out;
+ 
+diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
+index e8bed1cb76ba2..df68906812315 100644
+--- a/drivers/xen/xenbus/xenbus_client.c
++++ b/drivers/xen/xenbus/xenbus_client.c
+@@ -379,7 +379,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
+ 		      unsigned int nr_pages, grant_ref_t *grefs)
+ {
+ 	int err;
+-	int i, j;
++	unsigned int i;
++	grant_ref_t gref_head;
++
++	err = gnttab_alloc_grant_references(nr_pages, &gref_head);
++	if (err) {
++		xenbus_dev_fatal(dev, err, "granting access to ring page");
++		return err;
++	}
+ 
+ 	for (i = 0; i < nr_pages; i++) {
+ 		unsigned long gfn;
+@@ -389,23 +396,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
+ 		else
+ 			gfn = virt_to_gfn(vaddr);
+ 
+-		err = gnttab_grant_foreign_access(dev->otherend_id, gfn, 0);
+-		if (err < 0) {
+-			xenbus_dev_fatal(dev, err,
+-					 "granting access to ring page");
+-			goto fail;
+-		}
+-		grefs[i] = err;
++		grefs[i] = gnttab_claim_grant_reference(&gref_head);
++		gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id,
++						gfn, 0);
+ 
+ 		vaddr = vaddr + XEN_PAGE_SIZE;
+ 	}
+ 
+ 	return 0;
+-
+-fail:
+-	for (j = 0; j < i; j++)
+-		gnttab_end_foreign_access_ref(grefs[j], 0);
+-	return err;
+ }
+ EXPORT_SYMBOL_GPL(xenbus_grant_ring);
+ 
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index 63ccb52521902..220c8c60e021a 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -92,6 +92,11 @@
+ 			   ARM_SMCCC_SMC_32,				\
+ 			   0, 0x7fff)
+ 
++#define ARM_SMCCC_ARCH_WORKAROUND_3					\
++	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
++			   ARM_SMCCC_SMC_32,				\
++			   0, 0x3fff)
++
+ #define ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID				\
+ 	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+ 			   ARM_SMCCC_SMC_32,				\
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 7078938ba235c..28e5f31c287db 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1774,6 +1774,12 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
+ const struct btf_func_model *
+ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
+ 			 const struct bpf_insn *insn);
++
++static inline bool unprivileged_ebpf_enabled(void)
++{
++	return !sysctl_unprivileged_bpf_disabled;
++}
++
+ #else /* !CONFIG_BPF_SYSCALL */
+ static inline struct bpf_prog *bpf_prog_get(u32 ufd)
+ {
+@@ -1993,6 +1999,12 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
+ {
+ 	return NULL;
+ }
++
++static inline bool unprivileged_ebpf_enabled(void)
++{
++	return false;
++}
++
+ #endif /* CONFIG_BPF_SYSCALL */
+ 
+ void __bpf_free_used_btfs(struct bpf_prog_aux *aux,
+diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
+index cb854df031ce0..c9fea9389ebec 100644
+--- a/include/xen/grant_table.h
++++ b/include/xen/grant_table.h
+@@ -104,17 +104,32 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly);
+  * access has been ended, free the given page too.  Access will be ended
+  * immediately iff the grant entry is not in use, otherwise it will happen
+  * some time later.  page may be 0, in which case no freeing will occur.
++ * Note that the granted page might still be accessed (read or write) by the
++ * other side after gnttab_end_foreign_access() returns, so even if page was
++ * specified as 0 it is not allowed to just reuse the page for other
++ * purposes immediately. gnttab_end_foreign_access() will take an additional
++ * reference to the granted page in this case, which is dropped only after
++ * the grant is no longer in use.
++ * This requires that multi page allocations for areas subject to
++ * gnttab_end_foreign_access() are done via alloc_pages_exact() (and freeing
++ * via free_pages_exact()) in order to avoid high order pages.
+  */
+ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
+ 			       unsigned long page);
+ 
++/*
++ * End access through the given grant reference, iff the grant entry is
++ * no longer in use.  In case of success ending foreign access, the
++ * grant reference is deallocated.
++ * Return 1 if the grant entry was freed, 0 if it is still in use.
++ */
++int gnttab_try_end_foreign_access(grant_ref_t ref);
++
+ int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn);
+ 
+ unsigned long gnttab_end_foreign_transfer_ref(grant_ref_t ref);
+ unsigned long gnttab_end_foreign_transfer(grant_ref_t ref);
+ 
+-int gnttab_query_foreign_access(grant_ref_t ref);
+-
+ /*
+  * operations on reserved batches of grant references
+  */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 083be6af29d70..0586047f73234 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -228,6 +228,10 @@ static int bpf_stats_handler(struct ctl_table *table, int write,
+ 	return ret;
+ }
+ 
++void __weak unpriv_ebpf_notify(int new_state)
++{
++}
++
+ static int bpf_unpriv_handler(struct ctl_table *table, int write,
+ 			      void *buffer, size_t *lenp, loff_t *ppos)
+ {
+@@ -245,6 +249,9 @@ static int bpf_unpriv_handler(struct ctl_table *table, int write,
+ 			return -EPERM;
+ 		*(int *)table->data = unpriv_enable;
+ 	}
++
++	unpriv_ebpf_notify(unpriv_enable);
++
+ 	return ret;
+ }
+ #endif /* CONFIG_BPF_SYSCALL && CONFIG_SYSCTL */
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 2418fa0b58f36..cf2685ae5ffeb 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -281,9 +281,9 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+ 				ref = priv->rings[i].intf->ref[j];
+ 				gnttab_end_foreign_access(ref, 0, 0);
+ 			}
+-			free_pages((unsigned long)priv->rings[i].data.in,
+-				   priv->rings[i].intf->ring_order -
+-				   (PAGE_SHIFT - XEN_PAGE_SHIFT));
++			free_pages_exact(priv->rings[i].data.in,
++				   1UL << (priv->rings[i].intf->ring_order +
++					   XEN_PAGE_SHIFT));
+ 		}
+ 		gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
+ 		free_page((unsigned long)priv->rings[i].intf);
+@@ -322,8 +322,8 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
+ 	if (ret < 0)
+ 		goto out;
+ 	ring->ref = ret;
+-	bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+-			order - (PAGE_SHIFT - XEN_PAGE_SHIFT));
++	bytes = alloc_pages_exact(1UL << (order + XEN_PAGE_SHIFT),
++				  GFP_KERNEL | __GFP_ZERO);
+ 	if (!bytes) {
+ 		ret = -ENOMEM;
+ 		goto out;
+@@ -354,9 +354,7 @@ out:
+ 	if (bytes) {
+ 		for (i--; i >= 0; i--)
+ 			gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
+-		free_pages((unsigned long)bytes,
+-			   ring->intf->ring_order -
+-			   (PAGE_SHIFT - XEN_PAGE_SHIFT));
++		free_pages_exact(bytes, 1UL << (order + XEN_PAGE_SHIFT));
+ 	}
+ 	gnttab_end_foreign_access(ring->ref, 0, 0);
+ 	free_page((unsigned long)ring->intf);
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index d5b5f2ab87a0b..9f2b79a8e3eec 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -204,7 +204,7 @@
+ /* FREE!                                ( 7*32+10) */
+ #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
+ #define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+-#define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */
+ #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
+ #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
+ #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-16 13:57 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-16 13:57 UTC (permalink / raw
  To: gentoo-commits

commit:     033ef9b48af763aea8a07fb752fc46a77e6f5ddc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 16 13:57:31 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 16 13:57:31 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=033ef9b4

Linux patch 5.16.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1014_linux-5.16.15.patch | 5599 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5603 insertions(+)

diff --git a/0000_README b/0000_README
index c74c462d..3ef28034 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-5.16.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.14
 
+Patch:  1014_linux-5.16.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-5.16.15.patch b/1014_linux-5.16.15.patch
new file mode 100644
index 00000000..bf4e11e9
--- /dev/null
+++ b/1014_linux-5.16.15.patch
@@ -0,0 +1,5599 @@
+diff --git a/Makefile b/Makefile
+index 86835419075f6..8675dd2a9cc85 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+index 6dde51c2aed3f..e4775bbceecc6 100644
+--- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+@@ -118,7 +118,7 @@
+ 	};
+ 
+ 	pinctrl_fwqspid_default: fwqspid_default {
+-		function = "FWQSPID";
++		function = "FWSPID";
+ 		groups = "FWQSPID";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index dff18fc9a9065..21294f775a20f 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -290,6 +290,7 @@
+ 
+ 		hvs: hvs@7e400000 {
+ 			compatible = "brcm,bcm2711-hvs";
++			reg = <0x7e400000 0x8000>;
+ 			interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+diff --git a/arch/arm/include/asm/spectre.h b/arch/arm/include/asm/spectre.h
+index d1fa5607d3aa3..85f9e538fb325 100644
+--- a/arch/arm/include/asm/spectre.h
++++ b/arch/arm/include/asm/spectre.h
+@@ -25,7 +25,13 @@ enum {
+ 	SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
+ };
+ 
++#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+ void spectre_v2_update_state(unsigned int state, unsigned int methods);
++#else
++static inline void spectre_v2_update_state(unsigned int state,
++					   unsigned int methods)
++{}
++#endif
+ 
+ int spectre_bhb_update_vectors(unsigned int method);
+ 
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 676703cbfe4b2..ee3f7a599181e 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -1040,9 +1040,9 @@ vector_bhb_loop8_\name:
+ 
+ 	@ bhb workaround
+ 	mov	r0, #8
+-1:	b	. + 4
++3:	b	. + 4
+ 	subs	r0, r0, #1
+-	bne	1b
++	bne	3b
+ 	dsb
+ 	isb
+ 	b	2b
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 0e2c31f7a9aa2..d05d94d2b28b9 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1264,9 +1264,6 @@ config HW_PERF_EVENTS
+ 	def_bool y
+ 	depends on ARM_PMU
+ 
+-config ARCH_HAS_FILTER_PGPROT
+-	def_bool y
+-
+ # Supported by clang >= 7.0
+ config CC_HAVE_SHADOW_CALL_STACK
+ 	def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index 04da07ae44208..1cee26479bfec 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -18,6 +18,7 @@
+ 
+ 	aliases {
+ 		spi0 = &spi0;
++		ethernet0 = &eth0;
+ 		ethernet1 = &eth1;
+ 		mmc0 = &sdhci0;
+ 		mmc1 = &sdhci1;
+@@ -138,7 +139,9 @@
+ 	/*
+ 	 * U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property
+ 	 * contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
+-	 * 2 size cells and also expects that the second range starts at 16 MB offset. If these
++	 * 2 size cells and also expects that the second range starts at 16 MB offset. Also it
++	 * expects that first range uses same address for PCI (child) and CPU (parent) cells (so
++	 * no remapping) and that this address is the lowest from all specified ranges. If these
+ 	 * conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address
+ 	 * space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window
+ 	 * for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB.
+@@ -147,6 +150,9 @@
+ 	 * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
+ 	 * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
+ 	 * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
++	 * Bug related to requirement of same child and parent addresses for first range is fixed
++	 * in U-Boot version 2022.04 by following commit:
++	 * https://source.denx.de/u-boot/u-boot/-/commit/1fd54253bca7d43d046bba4853fe5fafd034bc17
+ 	 */
+ 	#address-cells = <3>;
+ 	#size-cells = <2>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 9acc5d2b5a002..0adc194e46d15 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -497,7 +497,7 @@
+ 			 * (totaling 127 MiB) for MEM.
+ 			 */
+ 			ranges = <0x82000000 0 0xe8000000   0 0xe8000000   0 0x07f00000   /* Port 0 MEM */
+-				  0x81000000 0 0xefff0000   0 0xefff0000   0 0x00010000>; /* Port 0 IO */
++				  0x81000000 0 0x00000000   0 0xefff0000   0 0x00010000>; /* Port 0 IO */
+ 			interrupt-map-mask = <0 0 0 7>;
+ 			interrupt-map = <0 0 0 1 &pcie_intc 0>,
+ 					<0 0 0 2 &pcie_intc 1>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index c13858cf50dd2..1a70a70ed056e 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -34,6 +34,24 @@
+ 			clock-frequency = <32000>;
+ 			#clock-cells = <0>;
+ 		};
++
++		ufs_phy_rx_symbol_0_clk: ufs-phy-rx-symbol-0 {
++			compatible = "fixed-clock";
++			clock-frequency = <1000>;
++			#clock-cells = <0>;
++		};
++
++		ufs_phy_rx_symbol_1_clk: ufs-phy-rx-symbol-1 {
++			compatible = "fixed-clock";
++			clock-frequency = <1000>;
++			#clock-cells = <0>;
++		};
++
++		ufs_phy_tx_symbol_0_clk: ufs-phy-tx-symbol-0 {
++			compatible = "fixed-clock";
++			clock-frequency = <1000>;
++			#clock-cells = <0>;
++		};
+ 	};
+ 
+ 	cpus {
+@@ -583,8 +601,30 @@
+ 			#clock-cells = <1>;
+ 			#reset-cells = <1>;
+ 			#power-domain-cells = <1>;
+-			clock-names = "bi_tcxo", "sleep_clk";
+-			clocks = <&rpmhcc RPMH_CXO_CLK>, <&sleep_clk>;
++			clock-names = "bi_tcxo",
++				      "sleep_clk",
++				      "pcie_0_pipe_clk",
++				      "pcie_1_pipe_clk",
++				      "ufs_card_rx_symbol_0_clk",
++				      "ufs_card_rx_symbol_1_clk",
++				      "ufs_card_tx_symbol_0_clk",
++				      "ufs_phy_rx_symbol_0_clk",
++				      "ufs_phy_rx_symbol_1_clk",
++				      "ufs_phy_tx_symbol_0_clk",
++				      "usb3_phy_wrapper_gcc_usb30_pipe_clk",
++				      "usb3_uni_phy_sec_gcc_usb30_pipe_clk";
++			clocks = <&rpmhcc RPMH_CXO_CLK>,
++				 <&sleep_clk>,
++				 <0>,
++				 <0>,
++				 <0>,
++				 <0>,
++				 <0>,
++				 <&ufs_phy_rx_symbol_0_clk>,
++				 <&ufs_phy_rx_symbol_1_clk>,
++				 <&ufs_phy_tx_symbol_0_clk>,
++				 <0>,
++				 <0>;
+ 		};
+ 
+ 		ipcc: mailbox@408000 {
+@@ -1205,8 +1245,8 @@
+ 				<75000000 300000000>,
+ 				<0 0>,
+ 				<0 0>,
+-				<75000000 300000000>,
+-				<75000000 300000000>;
++				<0 0>,
++				<0 0>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
+index e4704a403237e..a857bcacf0fe1 100644
+--- a/arch/arm64/include/asm/mte-kasan.h
++++ b/arch/arm64/include/asm/mte-kasan.h
+@@ -5,6 +5,7 @@
+ #ifndef __ASM_MTE_KASAN_H
+ #define __ASM_MTE_KASAN_H
+ 
++#include <asm/compiler.h>
+ #include <asm/mte-def.h>
+ 
+ #ifndef __ASSEMBLY__
+diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
+index 7032f04c8ac6e..b1e1b74d993c3 100644
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -92,7 +92,7 @@ extern bool arm64_use_ng_mappings;
+ #define __P001  PAGE_READONLY
+ #define __P010  PAGE_READONLY
+ #define __P011  PAGE_READONLY
+-#define __P100  PAGE_EXECONLY
++#define __P100  PAGE_READONLY_EXEC	/* PAGE_EXECONLY if Enhanced PAN */
+ #define __P101  PAGE_READONLY_EXEC
+ #define __P110  PAGE_READONLY_EXEC
+ #define __P111  PAGE_READONLY_EXEC
+@@ -101,7 +101,7 @@ extern bool arm64_use_ng_mappings;
+ #define __S001  PAGE_READONLY
+ #define __S010  PAGE_SHARED
+ #define __S011  PAGE_SHARED
+-#define __S100  PAGE_EXECONLY
++#define __S100  PAGE_READONLY_EXEC	/* PAGE_EXECONLY if Enhanced PAN */
+ #define __S101  PAGE_READONLY_EXEC
+ #define __S110  PAGE_SHARED_EXEC
+ #define __S111  PAGE_SHARED_EXEC
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index c4ba047a82d26..94e147e5456ca 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -1017,17 +1017,6 @@ static inline bool arch_wants_old_prefaulted_pte(void)
+ }
+ #define arch_wants_old_prefaulted_pte	arch_wants_old_prefaulted_pte
+ 
+-static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
+-{
+-	if (cpus_have_const_cap(ARM64_HAS_EPAN))
+-		return prot;
+-
+-	if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
+-		return prot;
+-
+-	return PAGE_READONLY_EXEC;
+-}
+-
+ static inline bool pud_sect_supported(void)
+ {
+ 	return PAGE_SIZE == SZ_4K;
+diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
+index a38f54cd638c2..77ada00280d93 100644
+--- a/arch/arm64/mm/mmap.c
++++ b/arch/arm64/mm/mmap.c
+@@ -7,8 +7,10 @@
+ 
+ #include <linux/io.h>
+ #include <linux/memblock.h>
++#include <linux/mm.h>
+ #include <linux/types.h>
+ 
++#include <asm/cpufeature.h>
+ #include <asm/page.h>
+ 
+ /*
+@@ -38,3 +40,18 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
+ {
+ 	return !(((pfn << PAGE_SHIFT) + size) & ~PHYS_MASK);
+ }
++
++static int __init adjust_protection_map(void)
++{
++	/*
++	 * With Enhanced PAN we can honour the execute-only permissions as
++	 * there is no PAN override with such mappings.
++	 */
++	if (cpus_have_const_cap(ARM64_HAS_EPAN)) {
++		protection_map[VM_EXEC] = PAGE_EXECONLY;
++		protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
++	}
++
++	return 0;
++}
++arch_initcall(adjust_protection_map);
+diff --git a/arch/riscv/Kconfig.erratas b/arch/riscv/Kconfig.erratas
+index b44d6ecdb46e5..0aacd7052585b 100644
+--- a/arch/riscv/Kconfig.erratas
++++ b/arch/riscv/Kconfig.erratas
+@@ -2,6 +2,7 @@ menu "CPU errata selection"
+ 
+ config RISCV_ERRATA_ALTERNATIVE
+ 	bool "RISC-V alternative scheme"
++	depends on !XIP_KERNEL
+ 	default y
+ 	help
+ 	  This Kconfig allows the kernel to automatically patch the
+diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
+index 30676ebb16ebd..46a534f047931 100644
+--- a/arch/riscv/Kconfig.socs
++++ b/arch/riscv/Kconfig.socs
+@@ -14,8 +14,8 @@ config SOC_SIFIVE
+ 	select CLK_SIFIVE
+ 	select CLK_SIFIVE_PRCI
+ 	select SIFIVE_PLIC
+-	select RISCV_ERRATA_ALTERNATIVE
+-	select ERRATA_SIFIVE
++	select RISCV_ERRATA_ALTERNATIVE if !XIP_KERNEL
++	select ERRATA_SIFIVE if !XIP_KERNEL
+ 	help
+ 	  This enables support for SiFive SoC platform hardware.
+ 
+diff --git a/arch/riscv/boot/dts/canaan/k210.dtsi b/arch/riscv/boot/dts/canaan/k210.dtsi
+index 5e8ca81424821..780416d489aa7 100644
+--- a/arch/riscv/boot/dts/canaan/k210.dtsi
++++ b/arch/riscv/boot/dts/canaan/k210.dtsi
+@@ -113,7 +113,8 @@
+ 			compatible = "canaan,k210-plic", "sifive,plic-1.0.0";
+ 			reg = <0xC000000 0x4000000>;
+ 			interrupt-controller;
+-			interrupts-extended = <&cpu0_intc 11 &cpu1_intc 11>;
++			interrupts-extended = <&cpu0_intc 11>, <&cpu0_intc 9>,
++					      <&cpu1_intc 11>, <&cpu1_intc 9>;
+ 			riscv,ndev = <65>;
+ 		};
+ 
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 68a9e3d1fe16a..4a48287513c37 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -13,6 +13,19 @@
+ #include <linux/pgtable.h>
+ #include <asm/sections.h>
+ 
++/*
++ * The auipc+jalr instruction pair can reach any PC-relative offset
++ * in the range [-2^31 - 2^11, 2^31 - 2^11)
++ */
++static bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
++{
++#ifdef CONFIG_32BIT
++	return true;
++#else
++	return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11));
++#endif
++}
++
+ static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
+ {
+ 	if (v != (u32)v) {
+@@ -95,7 +108,7 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
+ 	ptrdiff_t offset = (void *)v - (void *)location;
+ 	s32 hi20;
+ 
+-	if (offset != (s32)offset) {
++	if (!riscv_insn_valid_32bit_offset(offset)) {
+ 		pr_err(
+ 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+ 		  me->name, (long long)v, location);
+@@ -197,10 +210,9 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
+ 				       Elf_Addr v)
+ {
+ 	ptrdiff_t offset = (void *)v - (void *)location;
+-	s32 fill_v = offset;
+ 	u32 hi20, lo12;
+ 
+-	if (offset != fill_v) {
++	if (!riscv_insn_valid_32bit_offset(offset)) {
+ 		/* Only emit the plt entry if offset over 32-bit range */
+ 		if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
+ 			offset = module_emit_plt_entry(me, v);
+@@ -224,10 +236,9 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
+ 				   Elf_Addr v)
+ {
+ 	ptrdiff_t offset = (void *)v - (void *)location;
+-	s32 fill_v = offset;
+ 	u32 hi20, lo12;
+ 
+-	if (offset != fill_v) {
++	if (!riscv_insn_valid_32bit_offset(offset)) {
+ 		pr_err(
+ 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+ 		  me->name, (long long)v, location);
+diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
+index 48afe96ae0f0f..7c63a1911fae9 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.c
++++ b/arch/x86/kernel/cpu/sgx/encl.c
+@@ -12,6 +12,30 @@
+ #include "encls.h"
+ #include "sgx.h"
+ 
++/*
++ * Calculate byte offset of a PCMD struct associated with an enclave page. PCMD's
++ * follow right after the EPC data in the backing storage. In addition to the
++ * visible enclave pages, there's one extra page slot for SECS, before PCMD
++ * structs.
++ */
++static inline pgoff_t sgx_encl_get_backing_page_pcmd_offset(struct sgx_encl *encl,
++							    unsigned long page_index)
++{
++	pgoff_t epc_end_off = encl->size + sizeof(struct sgx_secs);
++
++	return epc_end_off + page_index * sizeof(struct sgx_pcmd);
++}
++
++/*
++ * Free a page from the backing storage in the given page index.
++ */
++static inline void sgx_encl_truncate_backing_page(struct sgx_encl *encl, unsigned long page_index)
++{
++	struct inode *inode = file_inode(encl->backing);
++
++	shmem_truncate_range(inode, PFN_PHYS(page_index), PFN_PHYS(page_index) + PAGE_SIZE - 1);
++}
++
+ /*
+  * ELDU: Load an EPC page as unblocked. For more info, see "OS Management of EPC
+  * Pages" in the SDM.
+@@ -22,9 +46,11 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ {
+ 	unsigned long va_offset = encl_page->desc & SGX_ENCL_PAGE_VA_OFFSET_MASK;
+ 	struct sgx_encl *encl = encl_page->encl;
++	pgoff_t page_index, page_pcmd_off;
+ 	struct sgx_pageinfo pginfo;
+ 	struct sgx_backing b;
+-	pgoff_t page_index;
++	bool pcmd_page_empty;
++	u8 *pcmd_page;
+ 	int ret;
+ 
+ 	if (secs_page)
+@@ -32,14 +58,16 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ 	else
+ 		page_index = PFN_DOWN(encl->size);
+ 
++	page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
++
+ 	ret = sgx_encl_get_backing(encl, page_index, &b);
+ 	if (ret)
+ 		return ret;
+ 
+ 	pginfo.addr = encl_page->desc & PAGE_MASK;
+ 	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
+-	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) +
+-			  b.pcmd_offset;
++	pcmd_page = kmap_atomic(b.pcmd);
++	pginfo.metadata = (unsigned long)pcmd_page + b.pcmd_offset;
+ 
+ 	if (secs_page)
+ 		pginfo.secs = (u64)sgx_get_epc_virt_addr(secs_page);
+@@ -55,11 +83,24 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ 		ret = -EFAULT;
+ 	}
+ 
+-	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
++	memset(pcmd_page + b.pcmd_offset, 0, sizeof(struct sgx_pcmd));
++
++	/*
++	 * The area for the PCMD in the page was zeroed above.  Check if the
++	 * whole page is now empty meaning that all PCMD's have been zeroed:
++	 */
++	pcmd_page_empty = !memchr_inv(pcmd_page, 0, PAGE_SIZE);
++
++	kunmap_atomic(pcmd_page);
+ 	kunmap_atomic((void *)(unsigned long)pginfo.contents);
+ 
+ 	sgx_encl_put_backing(&b, false);
+ 
++	sgx_encl_truncate_backing_page(encl, page_index);
++
++	if (pcmd_page_empty)
++		sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
++
+ 	return ret;
+ }
+ 
+@@ -579,7 +620,7 @@ static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
+ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ 			 struct sgx_backing *backing)
+ {
+-	pgoff_t pcmd_index = PFN_DOWN(encl->size) + 1 + (page_index >> 5);
++	pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
+ 	struct page *contents;
+ 	struct page *pcmd;
+ 
+@@ -587,7 +628,7 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ 	if (IS_ERR(contents))
+ 		return PTR_ERR(contents);
+ 
+-	pcmd = sgx_encl_get_backing_page(encl, pcmd_index);
++	pcmd = sgx_encl_get_backing_page(encl, PFN_DOWN(page_pcmd_off));
+ 	if (IS_ERR(pcmd)) {
+ 		put_page(contents);
+ 		return PTR_ERR(pcmd);
+@@ -596,9 +637,7 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ 	backing->page_index = page_index;
+ 	backing->contents = contents;
+ 	backing->pcmd = pcmd;
+-	backing->pcmd_offset =
+-		(page_index & (PAGE_SIZE / sizeof(struct sgx_pcmd) - 1)) *
+-		sizeof(struct sgx_pcmd);
++	backing->pcmd_offset = page_pcmd_off & (PAGE_SIZE - 1);
+ 
+ 	return 0;
+ }
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index bc0657f0deedf..f267205f2d5a4 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -995,8 +995,10 @@ early_param("memmap", parse_memmap_opt);
+  */
+ void __init e820__reserve_setup_data(void)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+-	u64 pa_data;
++	u64 pa_data, pa_next;
++	u32 len;
+ 
+ 	pa_data = boot_params.hdr.setup_data;
+ 	if (!pa_data)
+@@ -1004,6 +1006,14 @@ void __init e820__reserve_setup_data(void)
+ 
+ 	while (pa_data) {
+ 		data = early_memremap(pa_data, sizeof(*data));
++		if (!data) {
++			pr_warn("e820: failed to memremap setup_data entry\n");
++			return;
++		}
++
++		len = sizeof(*data);
++		pa_next = data->next;
++
+ 		e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+ 
+ 		/*
+@@ -1015,18 +1025,27 @@ void __init e820__reserve_setup_data(void)
+ 						 sizeof(*data) + data->len,
+ 						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+ 
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-			e820__range_update(((struct setup_indirect *)data->data)->addr,
+-					   ((struct setup_indirect *)data->data)->len,
+-					   E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+-			e820__range_update_kexec(((struct setup_indirect *)data->data)->addr,
+-						 ((struct setup_indirect *)data->data)->len,
+-						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
++		if (data->type == SETUP_INDIRECT) {
++			len += data->len;
++			early_memunmap(data, sizeof(*data));
++			data = early_memremap(pa_data, len);
++			if (!data) {
++				pr_warn("e820: failed to memremap indirect setup_data\n");
++				return;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				e820__range_update(indirect->addr, indirect->len,
++						   E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
++				e820__range_update_kexec(indirect->addr, indirect->len,
++							 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
++			}
+ 		}
+ 
+-		pa_data = data->next;
+-		early_memunmap(data, sizeof(*data));
++		pa_data = pa_next;
++		early_memunmap(data, len);
+ 	}
+ 
+ 	e820__update_table(e820_table);
+diff --git a/arch/x86/kernel/kdebugfs.c b/arch/x86/kernel/kdebugfs.c
+index 64b6da95af984..e2e89bebcbc32 100644
+--- a/arch/x86/kernel/kdebugfs.c
++++ b/arch/x86/kernel/kdebugfs.c
+@@ -88,11 +88,13 @@ create_setup_data_node(struct dentry *parent, int no,
+ 
+ static int __init create_setup_data_nodes(struct dentry *parent)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data_node *node;
+ 	struct setup_data *data;
+-	int error;
++	u64 pa_data, pa_next;
+ 	struct dentry *d;
+-	u64 pa_data;
++	int error;
++	u32 len;
+ 	int no = 0;
+ 
+ 	d = debugfs_create_dir("setup_data", parent);
+@@ -112,12 +114,29 @@ static int __init create_setup_data_nodes(struct dentry *parent)
+ 			error = -ENOMEM;
+ 			goto err_dir;
+ 		}
+-
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-			node->paddr = ((struct setup_indirect *)data->data)->addr;
+-			node->type  = ((struct setup_indirect *)data->data)->type;
+-			node->len   = ((struct setup_indirect *)data->data)->len;
++		pa_next = data->next;
++
++		if (data->type == SETUP_INDIRECT) {
++			len = sizeof(*data) + data->len;
++			memunmap(data);
++			data = memremap(pa_data, len, MEMREMAP_WB);
++			if (!data) {
++				kfree(node);
++				error = -ENOMEM;
++				goto err_dir;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				node->paddr = indirect->addr;
++				node->type  = indirect->type;
++				node->len   = indirect->len;
++			} else {
++				node->paddr = pa_data;
++				node->type  = data->type;
++				node->len   = data->len;
++			}
+ 		} else {
+ 			node->paddr = pa_data;
+ 			node->type  = data->type;
+@@ -125,7 +144,7 @@ static int __init create_setup_data_nodes(struct dentry *parent)
+ 		}
+ 
+ 		create_setup_data_node(d, no, node);
+-		pa_data = data->next;
++		pa_data = pa_next;
+ 
+ 		memunmap(data);
+ 		no++;
+diff --git a/arch/x86/kernel/ksysfs.c b/arch/x86/kernel/ksysfs.c
+index d0a19121c6a4f..257892fcefa79 100644
+--- a/arch/x86/kernel/ksysfs.c
++++ b/arch/x86/kernel/ksysfs.c
+@@ -91,26 +91,41 @@ static int get_setup_data_paddr(int nr, u64 *paddr)
+ 
+ static int __init get_setup_data_size(int nr, size_t *size)
+ {
+-	int i = 0;
++	u64 pa_data = boot_params.hdr.setup_data, pa_next;
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+-	u64 pa_data = boot_params.hdr.setup_data;
++	int i = 0;
++	u32 len;
+ 
+ 	while (pa_data) {
+ 		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
+ 		if (!data)
+ 			return -ENOMEM;
++		pa_next = data->next;
++
+ 		if (nr == i) {
+-			if (data->type == SETUP_INDIRECT &&
+-			    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT)
+-				*size = ((struct setup_indirect *)data->data)->len;
+-			else
++			if (data->type == SETUP_INDIRECT) {
++				len = sizeof(*data) + data->len;
++				memunmap(data);
++				data = memremap(pa_data, len, MEMREMAP_WB);
++				if (!data)
++					return -ENOMEM;
++
++				indirect = (struct setup_indirect *)data->data;
++
++				if (indirect->type != SETUP_INDIRECT)
++					*size = indirect->len;
++				else
++					*size = data->len;
++			} else {
+ 				*size = data->len;
++			}
+ 
+ 			memunmap(data);
+ 			return 0;
+ 		}
+ 
+-		pa_data = data->next;
++		pa_data = pa_next;
+ 		memunmap(data);
+ 		i++;
+ 	}
+@@ -120,9 +135,11 @@ static int __init get_setup_data_size(int nr, size_t *size)
+ static ssize_t type_show(struct kobject *kobj,
+ 			 struct kobj_attribute *attr, char *buf)
+ {
++	struct setup_indirect *indirect;
++	struct setup_data *data;
+ 	int nr, ret;
+ 	u64 paddr;
+-	struct setup_data *data;
++	u32 len;
+ 
+ 	ret = kobj_to_setup_data_nr(kobj, &nr);
+ 	if (ret)
+@@ -135,10 +152,20 @@ static ssize_t type_show(struct kobject *kobj,
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+-	if (data->type == SETUP_INDIRECT)
+-		ret = sprintf(buf, "0x%x\n", ((struct setup_indirect *)data->data)->type);
+-	else
++	if (data->type == SETUP_INDIRECT) {
++		len = sizeof(*data) + data->len;
++		memunmap(data);
++		data = memremap(paddr, len, MEMREMAP_WB);
++		if (!data)
++			return -ENOMEM;
++
++		indirect = (struct setup_indirect *)data->data;
++
++		ret = sprintf(buf, "0x%x\n", indirect->type);
++	} else {
+ 		ret = sprintf(buf, "0x%x\n", data->type);
++	}
++
+ 	memunmap(data);
+ 	return ret;
+ }
+@@ -149,9 +176,10 @@ static ssize_t setup_data_data_read(struct file *fp,
+ 				    char *buf,
+ 				    loff_t off, size_t count)
+ {
++	struct setup_indirect *indirect;
++	struct setup_data *data;
+ 	int nr, ret = 0;
+ 	u64 paddr, len;
+-	struct setup_data *data;
+ 	void *p;
+ 
+ 	ret = kobj_to_setup_data_nr(kobj, &nr);
+@@ -165,10 +193,27 @@ static ssize_t setup_data_data_read(struct file *fp,
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+-	if (data->type == SETUP_INDIRECT &&
+-	    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-		paddr = ((struct setup_indirect *)data->data)->addr;
+-		len = ((struct setup_indirect *)data->data)->len;
++	if (data->type == SETUP_INDIRECT) {
++		len = sizeof(*data) + data->len;
++		memunmap(data);
++		data = memremap(paddr, len, MEMREMAP_WB);
++		if (!data)
++			return -ENOMEM;
++
++		indirect = (struct setup_indirect *)data->data;
++
++		if (indirect->type != SETUP_INDIRECT) {
++			paddr = indirect->addr;
++			len = indirect->len;
++		} else {
++			/*
++			 * Even though this is technically undefined, return
++			 * the data as though it is a normal setup_data struct.
++			 * This will at least allow it to be inspected.
++			 */
++			paddr += sizeof(*data);
++			len = data->len;
++		}
+ 	} else {
+ 		paddr += sizeof(*data);
+ 		len = data->len;
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 59abbdad7729c..ff3db164e52cb 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -462,19 +462,22 @@ static bool pv_tlb_flush_supported(void)
+ {
+ 	return (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) &&
+ 		!kvm_para_has_hint(KVM_HINTS_REALTIME) &&
+-		kvm_para_has_feature(KVM_FEATURE_STEAL_TIME));
++		kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) &&
++		(num_possible_cpus() != 1));
+ }
+ 
+ static bool pv_ipi_supported(void)
+ {
+-	return kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI);
++	return (kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI) &&
++	       (num_possible_cpus() != 1));
+ }
+ 
+ static bool pv_sched_yield_supported(void)
+ {
+ 	return (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) &&
+ 		!kvm_para_has_hint(KVM_HINTS_REALTIME) &&
+-	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME));
++	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) &&
++	    (num_possible_cpus() != 1));
+ }
+ 
+ #define KVM_IPI_CLUSTER_SIZE	(2 * BITS_PER_LONG)
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index 95fa745e310a5..96d7c27b7093d 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -273,6 +273,14 @@ int module_finalize(const Elf_Ehdr *hdr,
+ 			retpolines = s;
+ 	}
+ 
++	/*
++	 * See alternative_instructions() for the ordering rules between the
++	 * various patching types.
++	 */
++	if (para) {
++		void *pseg = (void *)para->sh_addr;
++		apply_paravirt(pseg, pseg + para->sh_size);
++	}
+ 	if (retpolines) {
+ 		void *rseg = (void *)retpolines->sh_addr;
+ 		apply_retpolines(rseg, rseg + retpolines->sh_size);
+@@ -290,11 +298,6 @@ int module_finalize(const Elf_Ehdr *hdr,
+ 					    tseg, tseg + text->sh_size);
+ 	}
+ 
+-	if (para) {
+-		void *pseg = (void *)para->sh_addr;
+-		apply_paravirt(pseg, pseg + para->sh_size);
+-	}
+-
+ 	/* make jump label nops */
+ 	jump_label_apply_nops(me);
+ 
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index e04f5e6eb33f4..1782b3fb9320e 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -368,21 +368,41 @@ static void __init parse_setup_data(void)
+ 
+ static void __init memblock_x86_reserve_range_setup_data(void)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+-	u64 pa_data;
++	u64 pa_data, pa_next;
++	u32 len;
+ 
+ 	pa_data = boot_params.hdr.setup_data;
+ 	while (pa_data) {
+ 		data = early_memremap(pa_data, sizeof(*data));
++		if (!data) {
++			pr_warn("setup: failed to memremap setup_data entry\n");
++			return;
++		}
++
++		len = sizeof(*data);
++		pa_next = data->next;
++
+ 		memblock_reserve(pa_data, sizeof(*data) + data->len);
+ 
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT)
+-			memblock_reserve(((struct setup_indirect *)data->data)->addr,
+-					 ((struct setup_indirect *)data->data)->len);
++		if (data->type == SETUP_INDIRECT) {
++			len += data->len;
++			early_memunmap(data, sizeof(*data));
++			data = early_memremap(pa_data, len);
++			if (!data) {
++				pr_warn("setup: failed to memremap indirect setup_data\n");
++				return;
++			}
+ 
+-		pa_data = data->next;
+-		early_memunmap(data, sizeof(*data));
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT)
++				memblock_reserve(indirect->addr, indirect->len);
++		}
++
++		pa_data = pa_next;
++		early_memunmap(data, len);
+ 	}
+ }
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index c9d566dcf89a0..8143693a7ea6e 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -659,6 +659,7 @@ static bool do_int3(struct pt_regs *regs)
+ 
+ 	return res == NOTIFY_STOP;
+ }
++NOKPROBE_SYMBOL(do_int3);
+ 
+ static void do_int3_user(struct pt_regs *regs)
+ {
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c6eb3e45e3d80..e8f495b9ae103 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8770,6 +8770,13 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
+ 	if (clock_type != KVM_CLOCK_PAIRING_WALLCLOCK)
+ 		return -KVM_EOPNOTSUPP;
+ 
++	/*
++	 * When tsc is in permanent catchup mode guests won't be able to use
++	 * pvclock_read_retry loop to get consistent view of pvclock
++	 */
++	if (vcpu->arch.tsc_always_catchup)
++		return -KVM_EOPNOTSUPP;
++
+ 	if (!kvm_get_walltime_and_clockread(&ts, &cycle))
+ 		return -KVM_EOPNOTSUPP;
+ 
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index 026031b3b7829..17a492c273069 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -615,6 +615,7 @@ static bool memremap_is_efi_data(resource_size_t phys_addr,
+ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ 				   unsigned long size)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+ 	u64 paddr, paddr_next;
+ 
+@@ -627,6 +628,10 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ 
+ 		data = memremap(paddr, sizeof(*data),
+ 				MEMREMAP_WB | MEMREMAP_DEC);
++		if (!data) {
++			pr_warn("failed to memremap setup_data entry\n");
++			return false;
++		}
+ 
+ 		paddr_next = data->next;
+ 		len = data->len;
+@@ -636,10 +641,21 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ 			return true;
+ 		}
+ 
+-		if (data->type == SETUP_INDIRECT &&
+-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+-			paddr = ((struct setup_indirect *)data->data)->addr;
+-			len = ((struct setup_indirect *)data->data)->len;
++		if (data->type == SETUP_INDIRECT) {
++			memunmap(data);
++			data = memremap(paddr, sizeof(*data) + len,
++					MEMREMAP_WB | MEMREMAP_DEC);
++			if (!data) {
++				pr_warn("failed to memremap indirect setup_data\n");
++				return false;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				paddr = indirect->addr;
++				len = indirect->len;
++			}
+ 		}
+ 
+ 		memunmap(data);
+@@ -660,22 +676,51 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
+ 						unsigned long size)
+ {
++	struct setup_indirect *indirect;
+ 	struct setup_data *data;
+ 	u64 paddr, paddr_next;
+ 
+ 	paddr = boot_params.hdr.setup_data;
+ 	while (paddr) {
+-		unsigned int len;
++		unsigned int len, size;
+ 
+ 		if (phys_addr == paddr)
+ 			return true;
+ 
+ 		data = early_memremap_decrypted(paddr, sizeof(*data));
++		if (!data) {
++			pr_warn("failed to early memremap setup_data entry\n");
++			return false;
++		}
++
++		size = sizeof(*data);
+ 
+ 		paddr_next = data->next;
+ 		len = data->len;
+ 
+-		early_memunmap(data, sizeof(*data));
++		if ((phys_addr > paddr) && (phys_addr < (paddr + len))) {
++			early_memunmap(data, sizeof(*data));
++			return true;
++		}
++
++		if (data->type == SETUP_INDIRECT) {
++			size += len;
++			early_memunmap(data, sizeof(*data));
++			data = early_memremap_decrypted(paddr, size);
++			if (!data) {
++				pr_warn("failed to early memremap indirect setup_data\n");
++				return false;
++			}
++
++			indirect = (struct setup_indirect *)data->data;
++
++			if (indirect->type != SETUP_INDIRECT) {
++				paddr = indirect->addr;
++				len = indirect->len;
++			}
++		}
++
++		early_memunmap(data, size);
+ 
+ 		if ((phys_addr > paddr) && (phys_addr < (paddr + len)))
+ 			return true;
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 6ae38776e30e5..b3df5e5452a7f 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -76,9 +76,6 @@ struct virtio_blk {
+ 	 */
+ 	refcount_t refs;
+ 
+-	/* What host tells us, plus 2 for header & tailer. */
+-	unsigned int sg_elems;
+-
+ 	/* Ida index - used to track minor number allocations. */
+ 	int index;
+ 
+@@ -322,8 +319,6 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	blk_status_t status;
+ 	int err;
+ 
+-	BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
+-
+ 	status = virtblk_setup_cmd(vblk->vdev, req, vbr);
+ 	if (unlikely(status))
+ 		return status;
+@@ -783,8 +778,6 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 	/* Prevent integer overflows and honor max vq size */
+ 	sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
+ 
+-	/* We need extra sg elements at head and tail. */
+-	sg_elems += 2;
+ 	vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
+ 	if (!vblk) {
+ 		err = -ENOMEM;
+@@ -796,7 +789,6 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 	mutex_init(&vblk->vdev_mutex);
+ 
+ 	vblk->vdev = vdev;
+-	vblk->sg_elems = sg_elems;
+ 
+ 	INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
+ 
+@@ -854,7 +846,7 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 		set_disk_ro(vblk->disk, 1);
+ 
+ 	/* We can handle whatever the host told us to handle. */
+-	blk_queue_max_segments(q, vblk->sg_elems-2);
++	blk_queue_max_segments(q, sg_elems);
+ 
+ 	/* No real sector limit. */
+ 	blk_queue_max_hw_sectors(q, -1U);
+@@ -926,9 +918,15 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 
+ 		virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
+ 			     &v);
++
++		/*
++		 * max_discard_seg == 0 is out of spec but we always
++		 * handled it.
++		 */
++		if (!v)
++			v = sg_elems;
+ 		blk_queue_max_discard_segments(q,
+-					       min_not_zero(v,
+-							    MAX_DISCARD_SEGMENTS));
++					       min(v, MAX_DISCARD_SEGMENTS));
+ 
+ 		blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
+ 	}
+diff --git a/drivers/clk/qcom/dispcc-sc7180.c b/drivers/clk/qcom/dispcc-sc7180.c
+index 538e4963c9152..5d2ae297e7413 100644
+--- a/drivers/clk/qcom/dispcc-sc7180.c
++++ b/drivers/clk/qcom/dispcc-sc7180.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2019, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2019, 2022, The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <linux/clk-provider.h>
+@@ -625,6 +625,9 @@ static struct clk_branch disp_cc_mdss_vsync_clk = {
+ 
+ static struct gdsc mdss_gdsc = {
+ 	.gdscr = 0x3000,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "mdss_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/dispcc-sc7280.c b/drivers/clk/qcom/dispcc-sc7280.c
+index 4ef4ae231794b..ad596d567f6ab 100644
+--- a/drivers/clk/qcom/dispcc-sc7280.c
++++ b/drivers/clk/qcom/dispcc-sc7280.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2021, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2021-2022, The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <linux/clk-provider.h>
+@@ -787,6 +787,9 @@ static struct clk_branch disp_cc_sleep_clk = {
+ 
+ static struct gdsc disp_cc_mdss_core_gdsc = {
+ 	.gdscr = 0x1004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "disp_cc_mdss_core_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/dispcc-sm8250.c b/drivers/clk/qcom/dispcc-sm8250.c
+index 566fdfa0a15bb..db9379634fb22 100644
+--- a/drivers/clk/qcom/dispcc-sm8250.c
++++ b/drivers/clk/qcom/dispcc-sm8250.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+- * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2018-2020, 2022, The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <linux/clk-provider.h>
+@@ -1126,6 +1126,9 @@ static struct clk_branch disp_cc_mdss_vsync_clk = {
+ 
+ static struct gdsc mdss_gdsc = {
+ 	.gdscr = 0x3000,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "mdss_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index 7e1dd8ccfa384..44520efc6c72b 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <linux/bitops.h>
+@@ -35,9 +35,14 @@
+ #define CFG_GDSCR_OFFSET		0x4
+ 
+ /* Wait 2^n CXO cycles between all states. Here, n=2 (4 cycles). */
+-#define EN_REST_WAIT_VAL	(0x2 << 20)
+-#define EN_FEW_WAIT_VAL		(0x8 << 16)
+-#define CLK_DIS_WAIT_VAL	(0x2 << 12)
++#define EN_REST_WAIT_VAL	0x2
++#define EN_FEW_WAIT_VAL		0x8
++#define CLK_DIS_WAIT_VAL	0x2
++
++/* Transition delay shifts */
++#define EN_REST_WAIT_SHIFT	20
++#define EN_FEW_WAIT_SHIFT	16
++#define CLK_DIS_WAIT_SHIFT	12
+ 
+ #define RETAIN_MEM		BIT(14)
+ #define RETAIN_PERIPH		BIT(13)
+@@ -380,7 +385,18 @@ static int gdsc_init(struct gdsc *sc)
+ 	 */
+ 	mask = HW_CONTROL_MASK | SW_OVERRIDE_MASK |
+ 	       EN_REST_WAIT_MASK | EN_FEW_WAIT_MASK | CLK_DIS_WAIT_MASK;
+-	val = EN_REST_WAIT_VAL | EN_FEW_WAIT_VAL | CLK_DIS_WAIT_VAL;
++
++	if (!sc->en_rest_wait_val)
++		sc->en_rest_wait_val = EN_REST_WAIT_VAL;
++	if (!sc->en_few_wait_val)
++		sc->en_few_wait_val = EN_FEW_WAIT_VAL;
++	if (!sc->clk_dis_wait_val)
++		sc->clk_dis_wait_val = CLK_DIS_WAIT_VAL;
++
++	val = sc->en_rest_wait_val << EN_REST_WAIT_SHIFT |
++		sc->en_few_wait_val << EN_FEW_WAIT_SHIFT |
++		sc->clk_dis_wait_val << CLK_DIS_WAIT_SHIFT;
++
+ 	ret = regmap_update_bits(sc->regmap, sc->gdscr, mask, val);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/clk/qcom/gdsc.h b/drivers/clk/qcom/gdsc.h
+index d7cc4c21a9d48..ad313d7210bd3 100644
+--- a/drivers/clk/qcom/gdsc.h
++++ b/drivers/clk/qcom/gdsc.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved.
+  */
+ 
+ #ifndef __QCOM_GDSC_H__
+@@ -22,6 +22,9 @@ struct reset_controller_dev;
+  * @cxcs: offsets of branch registers to toggle mem/periph bits in
+  * @cxc_count: number of @cxcs
+  * @pwrsts: Possible powerdomain power states
++ * @en_rest_wait_val: transition delay value for receiving enr ack signal
++ * @en_few_wait_val: transition delay value for receiving enf ack signal
++ * @clk_dis_wait_val: transition delay value for halting clock
+  * @resets: ids of resets associated with this gdsc
+  * @reset_count: number of @resets
+  * @rcdev: reset controller
+@@ -36,6 +39,9 @@ struct gdsc {
+ 	unsigned int			clamp_io_ctrl;
+ 	unsigned int			*cxcs;
+ 	unsigned int			cxc_count;
++	unsigned int			en_rest_wait_val;
++	unsigned int			en_few_wait_val;
++	unsigned int			clk_dis_wait_val;
+ 	const u8			pwrsts;
+ /* Powerdomain allowable state bitfields */
+ #define PWRSTS_OFF		BIT(0)
+diff --git a/drivers/gpio/gpio-ts4900.c b/drivers/gpio/gpio-ts4900.c
+index d885032cf814d..d918d2df4de2c 100644
+--- a/drivers/gpio/gpio-ts4900.c
++++ b/drivers/gpio/gpio-ts4900.c
+@@ -1,7 +1,7 @@
+ /*
+  * Digital I/O driver for Technologic Systems I2C FPGA Core
+  *
+- * Copyright (C) 2015 Technologic Systems
++ * Copyright (C) 2015, 2018 Technologic Systems
+  * Copyright (C) 2016 Savoir-Faire Linux
+  *
+  * This program is free software; you can redistribute it and/or
+@@ -55,19 +55,33 @@ static int ts4900_gpio_direction_input(struct gpio_chip *chip,
+ {
+ 	struct ts4900_gpio_priv *priv = gpiochip_get_data(chip);
+ 
+-	/*
+-	 * This will clear the output enable bit, the other bits are
+-	 * dontcare when this is cleared
++	/* Only clear the OE bit here, requires a RMW. Prevents potential issue
++	 * with OE and data getting to the physical pin at different times.
+ 	 */
+-	return regmap_write(priv->regmap, offset, 0);
++	return regmap_update_bits(priv->regmap, offset, TS4900_GPIO_OE, 0);
+ }
+ 
+ static int ts4900_gpio_direction_output(struct gpio_chip *chip,
+ 					unsigned int offset, int value)
+ {
+ 	struct ts4900_gpio_priv *priv = gpiochip_get_data(chip);
++	unsigned int reg;
+ 	int ret;
+ 
++	/* If changing from an input to an output, we need to first set the
++	 * proper data bit to what is requested and then set OE bit. This
++	 * prevents a glitch that can occur on the IO line
++	 */
++	regmap_read(priv->regmap, offset, &reg);
++	if (!(reg & TS4900_GPIO_OE)) {
++		if (value)
++			reg = TS4900_GPIO_OUT;
++		else
++			reg &= ~TS4900_GPIO_OUT;
++
++		regmap_write(priv->regmap, offset, reg);
++	}
++
+ 	if (value)
+ 		ret = regmap_write(priv->regmap, offset, TS4900_GPIO_OE |
+ 							 TS4900_GPIO_OUT);
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index feb8157d2d672..c49b3b5334cdc 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -308,7 +308,8 @@ static struct gpio_desc *acpi_request_own_gpiod(struct gpio_chip *chip,
+ 	if (IS_ERR(desc))
+ 		return desc;
+ 
+-	ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout);
++	/* ACPI uses hundredths of milliseconds units */
++	ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout * 10);
+ 	if (ret)
+ 		dev_warn(chip->parent,
+ 			 "Failed to set debounce-timeout for pin 0x%04X, err %d\n",
+@@ -1049,7 +1050,8 @@ int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int ind
+ 			if (ret < 0)
+ 				return ret;
+ 
+-			ret = gpio_set_debounce_timeout(desc, info.debounce);
++			/* ACPI uses hundredths of milliseconds units */
++			ret = gpio_set_debounce_timeout(desc, info.debounce * 10);
+ 			if (ret)
+ 				return ret;
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index abfbf546d1599..dcb0dca651ac7 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -2191,6 +2191,16 @@ static int gpio_set_bias(struct gpio_desc *desc)
+ 	return gpio_set_config_with_argument_optional(desc, bias, arg);
+ }
+ 
++/**
++ * gpio_set_debounce_timeout() - Set debounce timeout
++ * @desc:	GPIO descriptor to set the debounce timeout
++ * @debounce:	Debounce timeout in microseconds
++ *
++ * The function calls the certain GPIO driver to set debounce timeout
++ * in the hardware.
++ *
++ * Returns 0 on success, or negative error code otherwise.
++ */
+ int gpio_set_debounce_timeout(struct gpio_desc *desc, unsigned int debounce)
+ {
+ 	return gpio_set_config_with_argument_optional(desc,
+@@ -3111,6 +3121,16 @@ int gpiod_to_irq(const struct gpio_desc *desc)
+ 
+ 		return retirq;
+ 	}
++#ifdef CONFIG_GPIOLIB_IRQCHIP
++	if (gc->irq.chip) {
++		/*
++		 * Avoid race condition with other code, which tries to lookup
++		 * an IRQ before the irqchip has been properly registered,
++		 * i.e. while gpiochip is still being brought up.
++		 */
++		return -EPROBE_DEFER;
++	}
++#endif
+ 	return -ENXIO;
+ }
+ EXPORT_SYMBOL_GPL(gpiod_to_irq);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index dc50c05f23fc2..5c08047adb594 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -1145,7 +1145,7 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!dev->mode_config.allow_fb_modifiers) {
++	if (!dev->mode_config.allow_fb_modifiers && !adev->enable_virtual_display) {
+ 		drm_WARN_ONCE(dev, adev->family >= AMDGPU_FAMILY_AI,
+ 			      "GFX9+ requires FB check based on format modifier\n");
+ 		ret = check_tiling_flags_gfx6(rfb);
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 7a205fd5023bb..3ba8b717e176f 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -1400,6 +1400,13 @@ static inline u32 man_trk_ctl_single_full_frame_bit_get(struct drm_i915_private
+ 	       PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME;
+ }
+ 
++static inline u32 man_trk_ctl_partial_frame_bit_get(struct drm_i915_private *dev_priv)
++{
++	return IS_ALDERLAKE_P(dev_priv) ?
++	       ADLP_PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE :
++	       PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE;
++}
++
+ static void psr_force_hw_tracking_exit(struct intel_dp *intel_dp)
+ {
+ 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+@@ -1495,7 +1502,13 @@ static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
+ {
+ 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+-	u32 val = PSR2_MAN_TRK_CTL_ENABLE;
++	u32 val = 0;
++
++	if (!IS_ALDERLAKE_P(dev_priv))
++		val = PSR2_MAN_TRK_CTL_ENABLE;
++
++	/* SF partial frame enable has to be set even on full update */
++	val |= man_trk_ctl_partial_frame_bit_get(dev_priv);
+ 
+ 	if (full_update) {
+ 		/*
+@@ -1515,7 +1528,6 @@ static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
+ 	} else {
+ 		drm_WARN_ON(crtc_state->uapi.crtc->dev, clip->y1 % 4 || clip->y2 % 4);
+ 
+-		val |= PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE;
+ 		val |= PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(clip->y1 / 4 + 1);
+ 		val |= PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(clip->y2 / 4 + 1);
+ 	}
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 14ce8809efdd5..e927776ae1832 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -4738,6 +4738,7 @@ enum {
+ #define  ADLP_PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(val)	REG_FIELD_PREP(ADLP_PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR_MASK, val)
+ #define  ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR_MASK		REG_GENMASK(12, 0)
+ #define  ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(val)		REG_FIELD_PREP(ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR_MASK, val)
++#define  ADLP_PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE		REG_BIT(31)
+ #define  ADLP_PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME		REG_BIT(14)
+ #define  ADLP_PSR2_MAN_TRK_CTL_SF_CONTINUOS_FULL_FRAME		REG_BIT(13)
+ 
+diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig
+index cfc8d644cedfd..0d3798354e6a7 100644
+--- a/drivers/gpu/drm/panel/Kconfig
++++ b/drivers/gpu/drm/panel/Kconfig
+@@ -95,6 +95,7 @@ config DRM_PANEL_EDP
+ 	depends on PM
+ 	select VIDEOMODE_HELPERS
+ 	select DRM_DP_AUX_BUS
++	select DRM_DP_HELPER
+ 	help
+ 	  DRM panel driver for dumb eDP panels that need at most a regulator and
+ 	  a GPIO to be powered up. Optionally a backlight can be attached so
+diff --git a/drivers/gpu/drm/sun4i/sun8i_mixer.h b/drivers/gpu/drm/sun4i/sun8i_mixer.h
+index 145833a9d82d4..5b3fbee186713 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_mixer.h
++++ b/drivers/gpu/drm/sun4i/sun8i_mixer.h
+@@ -111,10 +111,10 @@
+ /* format 13 is semi-planar YUV411 VUVU */
+ #define SUN8I_MIXER_FBFMT_YUV411	14
+ /* format 15 doesn't exist */
+-/* format 16 is P010 YVU */
+-#define SUN8I_MIXER_FBFMT_P010_YUV	17
+-/* format 18 is P210 YVU */
+-#define SUN8I_MIXER_FBFMT_P210_YUV	19
++#define SUN8I_MIXER_FBFMT_P010_YUV	16
++/* format 17 is P010 YVU */
++#define SUN8I_MIXER_FBFMT_P210_YUV	18
++/* format 19 is P210 YVU */
+ /* format 20 is packed YVU444 10-bit */
+ /* format 21 is packed YUV444 10-bit */
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 24f11c07bc3c7..2f53ba54b81ac 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1522,6 +1522,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ 		dev_err(dev, "Couldn't register the HDMI codec: %ld\n", PTR_ERR(codec_pdev));
+ 		return PTR_ERR(codec_pdev);
+ 	}
++	vc4_hdmi->audio.codec_pdev = codec_pdev;
+ 
+ 	dai_link->cpus		= &vc4_hdmi->audio.cpu;
+ 	dai_link->codecs	= &vc4_hdmi->audio.codec;
+@@ -1561,6 +1562,12 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ 
+ }
+ 
++static void vc4_hdmi_audio_exit(struct vc4_hdmi *vc4_hdmi)
++{
++	platform_device_unregister(vc4_hdmi->audio.codec_pdev);
++	vc4_hdmi->audio.codec_pdev = NULL;
++}
++
+ static irqreturn_t vc4_hdmi_hpd_irq_thread(int irq, void *priv)
+ {
+ 	struct vc4_hdmi *vc4_hdmi = priv;
+@@ -2299,6 +2306,7 @@ static void vc4_hdmi_unbind(struct device *dev, struct device *master,
+ 	kfree(vc4_hdmi->hdmi_regset.regs);
+ 	kfree(vc4_hdmi->hd_regset.regs);
+ 
++	vc4_hdmi_audio_exit(vc4_hdmi);
+ 	vc4_hdmi_cec_exit(vc4_hdmi);
+ 	vc4_hdmi_hotplug_exit(vc4_hdmi);
+ 	vc4_hdmi_connector_destroy(&vc4_hdmi->connector);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.h b/drivers/gpu/drm/vc4/vc4_hdmi.h
+index 33e9f665ab8e4..c0492da736833 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.h
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.h
+@@ -113,6 +113,7 @@ struct vc4_hdmi_audio {
+ 	struct snd_soc_dai_link_component platform;
+ 	struct snd_dmaengine_dai_dma_data dma_data;
+ 	struct hdmi_audio_infoframe infoframe;
++	struct platform_device *codec_pdev;
+ 	bool streaming;
+ };
+ 
+diff --git a/drivers/hid/hid-elo.c b/drivers/hid/hid-elo.c
+index 9b42b0cdeef06..2876cb6a7dcab 100644
+--- a/drivers/hid/hid-elo.c
++++ b/drivers/hid/hid-elo.c
+@@ -228,7 +228,6 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ {
+ 	struct elo_priv *priv;
+ 	int ret;
+-	struct usb_device *udev;
+ 
+ 	if (!hid_is_usb(hdev))
+ 		return -EINVAL;
+@@ -238,8 +237,7 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 		return -ENOMEM;
+ 
+ 	INIT_DELAYED_WORK(&priv->work, elo_work);
+-	udev = interface_to_usbdev(to_usb_interface(hdev->dev.parent));
+-	priv->usbdev = usb_get_dev(udev);
++	priv->usbdev = interface_to_usbdev(to_usb_interface(hdev->dev.parent));
+ 
+ 	hid_set_drvdata(hdev, priv);
+ 
+@@ -262,7 +260,6 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	return 0;
+ err_free:
+-	usb_put_dev(udev);
+ 	kfree(priv);
+ 	return ret;
+ }
+@@ -271,8 +268,6 @@ static void elo_remove(struct hid_device *hdev)
+ {
+ 	struct elo_priv *priv = hid_get_drvdata(hdev);
+ 
+-	usb_put_dev(priv->usbdev);
+-
+ 	hid_hw_stop(hdev);
+ 	cancel_delayed_work_sync(&priv->work);
+ 	kfree(priv);
+diff --git a/drivers/hid/hid-nintendo.c b/drivers/hid/hid-nintendo.c
+index b6a9a0f3966ee..2204de889739f 100644
+--- a/drivers/hid/hid-nintendo.c
++++ b/drivers/hid/hid-nintendo.c
+@@ -2128,6 +2128,10 @@ static int nintendo_hid_probe(struct hid_device *hdev,
+ 	spin_lock_init(&ctlr->lock);
+ 	ctlr->rumble_queue = alloc_workqueue("hid-nintendo-rumble_wq",
+ 					     WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);
++	if (!ctlr->rumble_queue) {
++		ret = -ENOMEM;
++		goto err;
++	}
+ 	INIT_WORK(&ctlr->rumble_worker, joycon_rumble_worker);
+ 
+ 	ret = hid_parse(hdev);
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index 03b935ff02d56..9da4240530dd7 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -158,6 +158,12 @@ static void thrustmaster_interrupts(struct hid_device *hdev)
+ 		return;
+ 	}
+ 
++	if (usbif->cur_altsetting->desc.bNumEndpoints < 2) {
++		kfree(send_buf);
++		hid_err(hdev, "Wrong number of endpoints?\n");
++		return;
++	}
++
+ 	ep = &usbif->cur_altsetting->endpoint[1];
+ 	b_ep = ep->desc.bEndpointAddress;
+ 
+diff --git a/drivers/hid/hid-vivaldi.c b/drivers/hid/hid-vivaldi.c
+index 576518e704ee6..d57ec17670379 100644
+--- a/drivers/hid/hid-vivaldi.c
++++ b/drivers/hid/hid-vivaldi.c
+@@ -143,7 +143,7 @@ out:
+ static int vivaldi_input_configured(struct hid_device *hdev,
+ 				    struct hid_input *hidinput)
+ {
+-	return sysfs_create_group(&hdev->dev.kobj, &input_attribute_group);
++	return devm_device_add_group(&hdev->dev, &input_attribute_group);
+ }
+ 
+ static const struct hid_device_id vivaldi_table[] = {
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 776ee2237be20..ac2fbee1ba9c0 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -911,6 +911,11 @@ static int pmbus_get_boolean(struct i2c_client *client, struct pmbus_boolean *b,
+ 		pmbus_update_sensor_data(client, s2);
+ 
+ 	regval = status & mask;
++	if (regval) {
++		ret = pmbus_write_byte_data(client, page, reg, regval);
++		if (ret)
++			goto unlock;
++	}
+ 	if (s1 && s2) {
+ 		s64 v1, v2;
+ 
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index bd087cca1c1d2..af17459c1a5c0 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -2005,7 +2005,11 @@ setup_hw(struct hfc_pci *hc)
+ 	}
+ 	/* Allocate memory for FIFOS */
+ 	/* the memory needs to be on a 32k boundary within the first 4G */
+-	dma_set_mask(&hc->pdev->dev, 0xFFFF8000);
++	if (dma_set_mask(&hc->pdev->dev, 0xFFFF8000)) {
++		printk(KERN_WARNING
++		       "HFC-PCI: No usable DMA configuration!\n");
++		return -EIO;
++	}
+ 	buffer = dma_alloc_coherent(&hc->pdev->dev, 0x8000, &hc->hw.dmahandle,
+ 				    GFP_KERNEL);
+ 	/* We silently assume the address is okay if nonzero */
+diff --git a/drivers/isdn/mISDN/dsp_pipeline.c b/drivers/isdn/mISDN/dsp_pipeline.c
+index e11ca6bbc7f41..c3b2c99b5cd5c 100644
+--- a/drivers/isdn/mISDN/dsp_pipeline.c
++++ b/drivers/isdn/mISDN/dsp_pipeline.c
+@@ -192,7 +192,7 @@ void dsp_pipeline_destroy(struct dsp_pipeline *pipeline)
+ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
+ {
+ 	int found = 0;
+-	char *dup, *tok, *name, *args;
++	char *dup, *next, *tok, *name, *args;
+ 	struct dsp_element_entry *entry, *n;
+ 	struct dsp_pipeline_entry *pipeline_entry;
+ 	struct mISDN_dsp_element *elem;
+@@ -203,10 +203,10 @@ int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg)
+ 	if (!list_empty(&pipeline->list))
+ 		_dsp_pipeline_destroy(pipeline);
+ 
+-	dup = kstrdup(cfg, GFP_ATOMIC);
++	dup = next = kstrdup(cfg, GFP_ATOMIC);
+ 	if (!dup)
+ 		return 0;
+-	while ((tok = strsep(&dup, "|"))) {
++	while ((tok = strsep(&next, "|"))) {
+ 		if (!strlen(tok))
+ 			continue;
+ 		name = strsep(&tok, "(");
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 8f36536cb1b6d..58ab9d90bc8b9 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -173,6 +173,8 @@ struct meson_host {
+ 	int irq;
+ 
+ 	bool vqmmc_enabled;
++	bool needs_pre_post_req;
++
+ };
+ 
+ #define CMD_CFG_LENGTH_MASK GENMASK(8, 0)
+@@ -663,6 +665,8 @@ static void meson_mmc_request_done(struct mmc_host *mmc,
+ 	struct meson_host *host = mmc_priv(mmc);
+ 
+ 	host->cmd = NULL;
++	if (host->needs_pre_post_req)
++		meson_mmc_post_req(mmc, mrq, 0);
+ 	mmc_request_done(host->mmc, mrq);
+ }
+ 
+@@ -880,7 +884,7 @@ static int meson_mmc_validate_dram_access(struct mmc_host *mmc, struct mmc_data
+ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ {
+ 	struct meson_host *host = mmc_priv(mmc);
+-	bool needs_pre_post_req = mrq->data &&
++	host->needs_pre_post_req = mrq->data &&
+ 			!(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);
+ 
+ 	/*
+@@ -896,22 +900,19 @@ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		}
+ 	}
+ 
+-	if (needs_pre_post_req) {
++	if (host->needs_pre_post_req) {
+ 		meson_mmc_get_transfer_mode(mmc, mrq);
+ 		if (!meson_mmc_desc_chain_mode(mrq->data))
+-			needs_pre_post_req = false;
++			host->needs_pre_post_req = false;
+ 	}
+ 
+-	if (needs_pre_post_req)
++	if (host->needs_pre_post_req)
+ 		meson_mmc_pre_req(mmc, mrq);
+ 
+ 	/* Stop execution */
+ 	writel(0, host->regs + SD_EMMC_START);
+ 
+ 	meson_mmc_start_cmd(mmc, mrq->sbc ?: mrq->cmd);
+-
+-	if (needs_pre_post_req)
+-		meson_mmc_post_req(mmc, mrq, 0);
+ }
+ 
+ static void meson_mmc_read_resp(struct mmc_host *mmc, struct mmc_command *cmd)
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index fb59efc7f9266..14bf1828cbba3 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2928,7 +2928,7 @@ mt753x_phylink_validate(struct dsa_switch *ds, int port,
+ 
+ 	phylink_set_port_modes(mask);
+ 
+-	if (state->interface != PHY_INTERFACE_MODE_TRGMII ||
++	if (state->interface != PHY_INTERFACE_MODE_TRGMII &&
+ 	    !phy_interface_mode_is_8023z(state->interface)) {
+ 		phylink_set(mask, 10baseT_Half);
+ 		phylink_set(mask, 10baseT_Full);
+diff --git a/drivers/net/ethernet/arc/emac_mdio.c b/drivers/net/ethernet/arc/emac_mdio.c
+index 9acf589b1178b..87f40c2ba9040 100644
+--- a/drivers/net/ethernet/arc/emac_mdio.c
++++ b/drivers/net/ethernet/arc/emac_mdio.c
+@@ -132,6 +132,7 @@ int arc_mdio_probe(struct arc_emac_priv *priv)
+ {
+ 	struct arc_emac_mdio_bus_data *data = &priv->bus_data;
+ 	struct device_node *np = priv->dev->of_node;
++	const char *name = "Synopsys MII Bus";
+ 	struct mii_bus *bus;
+ 	int error;
+ 
+@@ -142,7 +143,7 @@ int arc_mdio_probe(struct arc_emac_priv *priv)
+ 	priv->bus = bus;
+ 	bus->priv = priv;
+ 	bus->parent = priv->dev;
+-	bus->name = "Synopsys MII Bus";
++	bus->name = name;
+ 	bus->read = &arc_mdio_read;
+ 	bus->write = &arc_mdio_write;
+ 	bus->reset = &arc_mdio_reset;
+@@ -167,7 +168,7 @@ int arc_mdio_probe(struct arc_emac_priv *priv)
+ 	if (error) {
+ 		mdiobus_free(bus);
+ 		return dev_err_probe(priv->dev, error,
+-				     "cannot register MDIO bus %s\n", bus->name);
++				     "cannot register MDIO bus %s\n", name);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index e31a5a397f114..f55d9d9c01a85 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -40,6 +40,13 @@
+ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
++	struct device *kdev = &priv->pdev->dev;
++
++	if (!device_can_wakeup(kdev)) {
++		wol->supported = 0;
++		wol->wolopts = 0;
++		return;
++	}
+ 
+ 	wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
+ 	wol->wolopts = priv->wolopts;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index aac1b27bfc7bf..12678cef58bc5 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1614,7 +1614,14 @@ static int macb_poll(struct napi_struct *napi, int budget)
+ 	if (work_done < budget) {
+ 		napi_complete_done(napi, work_done);
+ 
+-		/* Packets received while interrupts were disabled */
++		/* RSR bits only seem to propagate to raise interrupts when
++		 * interrupts are enabled at the time, so if bits are already
++		 * set due to packets received while interrupts were disabled,
++		 * they will not cause another interrupt to be generated when
++		 * interrupts are re-enabled.
++		 * Check for this case here. This has been seen to happen
++		 * around 30% of the time under heavy network load.
++		 */
+ 		status = macb_readl(bp, RSR);
+ 		if (status) {
+ 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+@@ -1622,6 +1629,22 @@ static int macb_poll(struct napi_struct *napi, int budget)
+ 			napi_reschedule(napi);
+ 		} else {
+ 			queue_writel(queue, IER, bp->rx_intr_mask);
++
++			/* In rare cases, packets could have been received in
++			 * the window between the check above and re-enabling
++			 * interrupts. Therefore, a double-check is required
++			 * to avoid losing a wakeup. This can potentially race
++			 * with the interrupt handler doing the same actions
++			 * if an interrupt is raised just after enabling them,
++			 * but this should be harmless.
++			 */
++			status = macb_readl(bp, RSR);
++			if (unlikely(status)) {
++				queue_writel(queue, IDR, bp->rx_intr_mask);
++				if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
++					queue_writel(queue, ISR, MACB_BIT(RCOMP));
++				napi_schedule(napi);
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+index 7b32ed29bf4cb..8c17fe5d66ed4 100644
+--- a/drivers/net/ethernet/freescale/gianfar_ethtool.c
++++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+@@ -1460,6 +1460,7 @@ static int gfar_get_ts_info(struct net_device *dev,
+ 	ptp_node = of_find_compatible_node(NULL, NULL, "fsl,etsec-ptp");
+ 	if (ptp_node) {
+ 		ptp_dev = of_find_device_by_node(ptp_node);
++		of_node_put(ptp_node);
+ 		if (ptp_dev)
+ 			ptp = platform_get_drvdata(ptp_dev);
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index 1e57cc8c47d7b..9db5001297c7e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -742,10 +742,8 @@ static void i40e_dbg_dump_vf(struct i40e_pf *pf, int vf_id)
+ 		vsi = pf->vsi[vf->lan_vsi_idx];
+ 		dev_info(&pf->pdev->dev, "vf %2d: VSI id=%d, seid=%d, qps=%d\n",
+ 			 vf_id, vf->lan_vsi_id, vsi->seid, vf->num_queue_pairs);
+-		dev_info(&pf->pdev->dev, "       num MDD=%lld, invalid msg=%lld, valid msg=%lld\n",
+-			 vf->num_mdd_events,
+-			 vf->num_invalid_msgs,
+-			 vf->num_valid_msgs);
++		dev_info(&pf->pdev->dev, "       num MDD=%lld\n",
++			 vf->num_mdd_events);
+ 	} else {
+ 		dev_info(&pf->pdev->dev, "invalid VF id %d\n", vf_id);
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index c6f643e54c4f7..babf8b7fa7678 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1917,19 +1917,17 @@ sriov_configure_out:
+ /***********************virtual channel routines******************/
+ 
+ /**
+- * i40e_vc_send_msg_to_vf_ex
++ * i40e_vc_send_msg_to_vf
+  * @vf: pointer to the VF info
+  * @v_opcode: virtual channel opcode
+  * @v_retval: virtual channel return value
+  * @msg: pointer to the msg buffer
+  * @msglen: msg length
+- * @is_quiet: true for not printing unsuccessful return values, false otherwise
+  *
+  * send msg to VF
+  **/
+-static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
+-				     u32 v_retval, u8 *msg, u16 msglen,
+-				     bool is_quiet)
++static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
++				  u32 v_retval, u8 *msg, u16 msglen)
+ {
+ 	struct i40e_pf *pf;
+ 	struct i40e_hw *hw;
+@@ -1944,25 +1942,6 @@ static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
+ 	hw = &pf->hw;
+ 	abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ 
+-	/* single place to detect unsuccessful return values */
+-	if (v_retval && !is_quiet) {
+-		vf->num_invalid_msgs++;
+-		dev_info(&pf->pdev->dev, "VF %d failed opcode %d, retval: %d\n",
+-			 vf->vf_id, v_opcode, v_retval);
+-		if (vf->num_invalid_msgs >
+-		    I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED) {
+-			dev_err(&pf->pdev->dev,
+-				"Number of invalid messages exceeded for VF %d\n",
+-				vf->vf_id);
+-			dev_err(&pf->pdev->dev, "Use PF Control I/F to enable the VF\n");
+-			set_bit(I40E_VF_STATE_DISABLED, &vf->vf_states);
+-		}
+-	} else {
+-		vf->num_valid_msgs++;
+-		/* reset the invalid counter, if a valid message is received. */
+-		vf->num_invalid_msgs = 0;
+-	}
+-
+ 	aq_ret = i40e_aq_send_msg_to_vf(hw, abs_vf_id,	v_opcode, v_retval,
+ 					msg, msglen, NULL);
+ 	if (aq_ret) {
+@@ -1975,23 +1954,6 @@ static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode,
+ 	return 0;
+ }
+ 
+-/**
+- * i40e_vc_send_msg_to_vf
+- * @vf: pointer to the VF info
+- * @v_opcode: virtual channel opcode
+- * @v_retval: virtual channel return value
+- * @msg: pointer to the msg buffer
+- * @msglen: msg length
+- *
+- * send msg to VF
+- **/
+-static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
+-				  u32 v_retval, u8 *msg, u16 msglen)
+-{
+-	return i40e_vc_send_msg_to_vf_ex(vf, v_opcode, v_retval,
+-					 msg, msglen, false);
+-}
+-
+ /**
+  * i40e_vc_send_resp_to_vf
+  * @vf: pointer to the VF info
+@@ -2813,7 +2775,6 @@ error_param:
+  * i40e_check_vf_permission
+  * @vf: pointer to the VF info
+  * @al: MAC address list from virtchnl
+- * @is_quiet: set true for printing msg without opcode info, false otherwise
+  *
+  * Check that the given list of MAC addresses is allowed. Will return -EPERM
+  * if any address in the list is not valid. Checks the following conditions:
+@@ -2828,15 +2789,13 @@ error_param:
+  * addresses might not be accurate.
+  **/
+ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+-					   struct virtchnl_ether_addr_list *al,
+-					   bool *is_quiet)
++					   struct virtchnl_ether_addr_list *al)
+ {
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
+ 	int mac2add_cnt = 0;
+ 	int i;
+ 
+-	*is_quiet = false;
+ 	for (i = 0; i < al->num_elements; i++) {
+ 		struct i40e_mac_filter *f;
+ 		u8 *addr = al->list[i].addr;
+@@ -2860,7 +2819,6 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		    !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+ 			dev_err(&pf->pdev->dev,
+ 				"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
+-			*is_quiet = true;
+ 			return -EPERM;
+ 		}
+ 
+@@ -2897,7 +2855,6 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	    (struct virtchnl_ether_addr_list *)msg;
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = NULL;
+-	bool is_quiet = false;
+ 	i40e_status ret = 0;
+ 	int i;
+ 
+@@ -2914,7 +2871,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 	 */
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+ 
+-	ret = i40e_check_vf_permission(vf, al, &is_quiet);
++	ret = i40e_check_vf_permission(vf, al);
+ 	if (ret) {
+ 		spin_unlock_bh(&vsi->mac_filter_hash_lock);
+ 		goto error_param;
+@@ -2952,8 +2909,8 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 
+ error_param:
+ 	/* send the response to the VF */
+-	return i40e_vc_send_msg_to_vf_ex(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
+-				       ret, NULL, 0, is_quiet);
++	return i40e_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_ETH_ADDR,
++				      ret, NULL, 0);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 03c42fd0fea19..a554d0a0b09bd 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -10,8 +10,6 @@
+ 
+ #define I40E_VIRTCHNL_SUPPORTED_QTYPES 2
+ 
+-#define I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED	10
+-
+ #define I40E_VLAN_PRIORITY_SHIFT	13
+ #define I40E_VLAN_MASK			0xFFF
+ #define I40E_PRIORITY_MASK		0xE000
+@@ -92,9 +90,6 @@ struct i40e_vf {
+ 	u8 num_queue_pairs;	/* num of qps assigned to VF vsis */
+ 	u8 num_req_queues;	/* num of requested qps */
+ 	u64 num_mdd_events;	/* num of mdd events detected */
+-	/* num of continuous malformed or invalid msgs detected */
+-	u64 num_invalid_msgs;
+-	u64 num_valid_msgs;	/* num of valid msgs detected */
+ 
+ 	unsigned long vf_caps;	/* vf's adv. capabilities */
+ 	unsigned long vf_states;	/* vf's runtime states */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index d3da65d24bd62..c83ac6adeeb77 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -1460,6 +1460,22 @@ void iavf_request_reset(struct iavf_adapter *adapter)
+ 	adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ }
+ 
++/**
++ * iavf_netdev_features_vlan_strip_set - update vlan strip status
++ * @netdev: ptr to netdev being adjusted
++ * @enable: enable or disable vlan strip
++ *
++ * Helper function to change vlan strip status in netdev->features.
++ */
++static void iavf_netdev_features_vlan_strip_set(struct net_device *netdev,
++						const bool enable)
++{
++	if (enable)
++		netdev->features |= NETIF_F_HW_VLAN_CTAG_RX;
++	else
++		netdev->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
++}
++
+ /**
+  * iavf_virtchnl_completion
+  * @adapter: adapter structure
+@@ -1683,8 +1699,18 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 			}
+ 			break;
+ 		case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
++			dev_warn(&adapter->pdev->dev, "Changing VLAN Stripping is not allowed when Port VLAN is configured\n");
++			/* Vlan stripping could not be enabled by ethtool.
++			 * Disable it in netdev->features.
++			 */
++			iavf_netdev_features_vlan_strip_set(netdev, false);
++			break;
+ 		case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+ 			dev_warn(&adapter->pdev->dev, "Changing VLAN Stripping is not allowed when Port VLAN is configured\n");
++			/* Vlan stripping could not be disabled by ethtool.
++			 * Enable it in netdev->features.
++			 */
++			iavf_netdev_features_vlan_strip_set(netdev, true);
+ 			break;
+ 		default:
+ 			dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n",
+@@ -1918,6 +1944,20 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		spin_unlock_bh(&adapter->adv_rss_lock);
+ 		}
+ 		break;
++	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
++		/* PF enabled vlan strip on this VF.
++		 * Update netdev->features if needed to be in sync with ethtool.
++		 */
++		if (!v_retval)
++			iavf_netdev_features_vlan_strip_set(netdev, true);
++		break;
++	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
++		/* PF disabled vlan strip on this VF.
++		 * Update netdev->features if needed to be in sync with ethtool.
++		 */
++		if (!v_retval)
++			iavf_netdev_features_vlan_strip_set(netdev, false);
++		break;
+ 	default:
+ 		if (adapter->current_op && (v_opcode != adapter->current_op))
+ 			dev_warn(&adapter->pdev->dev, "Expected response %d from PF, received %d\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index b067dd9c71e78..fa91896ae699d 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -483,6 +483,7 @@ enum ice_pf_flags {
+ 	ICE_FLAG_MDD_AUTO_RESET_VF,
+ 	ICE_FLAG_LINK_LENIENT_MODE_ENA,
+ 	ICE_FLAG_PLUG_AUX_DEV,
++	ICE_FLAG_MTU_CHANGED,
+ 	ICE_PF_FLAGS_NBITS		/* must be last */
+ };
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 572519e402f46..b05a5029b61ff 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -2314,7 +2314,7 @@ ice_set_link_ksettings(struct net_device *netdev,
+ 		goto done;
+ 	}
+ 
+-	curr_link_speed = pi->phy.link_info.link_speed;
++	curr_link_speed = pi->phy.curr_user_speed_req;
+ 	adv_link_speed = ice_ksettings_find_adv_link_speed(ks);
+ 
+ 	/* If speed didn't get set, set it to what it currently is.
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 8ee778aaa8000..676e837d48cfc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2240,6 +2240,17 @@ static void ice_service_task(struct work_struct *work)
+ 	if (test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
+ 		ice_plug_aux_dev(pf);
+ 
++	if (test_and_clear_bit(ICE_FLAG_MTU_CHANGED, pf->flags)) {
++		struct iidc_event *event;
++
++		event = kzalloc(sizeof(*event), GFP_KERNEL);
++		if (event) {
++			set_bit(IIDC_EVENT_AFTER_MTU_CHANGE, event->type);
++			ice_send_event_to_aux(pf, event);
++			kfree(event);
++		}
++	}
++
+ 	ice_clean_adminq_subtask(pf);
+ 	ice_check_media_subtask(pf);
+ 	ice_check_for_hang_subtask(pf);
+@@ -3005,7 +3016,7 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
+ 		struct iidc_event *event;
+ 
+ 		ena_mask &= ~ICE_AUX_CRIT_ERR;
+-		event = kzalloc(sizeof(*event), GFP_KERNEL);
++		event = kzalloc(sizeof(*event), GFP_ATOMIC);
+ 		if (event) {
+ 			set_bit(IIDC_EVENT_CRIT_ERR, event->type);
+ 			/* report the entire OICR value to AUX driver */
+@@ -6822,7 +6833,6 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 	struct ice_vsi *vsi = np->vsi;
+ 	struct ice_pf *pf = vsi->back;
+-	struct iidc_event *event;
+ 	u8 count = 0;
+ 	int err = 0;
+ 
+@@ -6857,14 +6867,6 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
+ 		return -EBUSY;
+ 	}
+ 
+-	event = kzalloc(sizeof(*event), GFP_KERNEL);
+-	if (!event)
+-		return -ENOMEM;
+-
+-	set_bit(IIDC_EVENT_BEFORE_MTU_CHANGE, event->type);
+-	ice_send_event_to_aux(pf, event);
+-	clear_bit(IIDC_EVENT_BEFORE_MTU_CHANGE, event->type);
+-
+ 	netdev->mtu = (unsigned int)new_mtu;
+ 
+ 	/* if VSI is up, bring it down and then back up */
+@@ -6872,21 +6874,18 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
+ 		err = ice_down(vsi);
+ 		if (err) {
+ 			netdev_err(netdev, "change MTU if_down err %d\n", err);
+-			goto event_after;
++			return err;
+ 		}
+ 
+ 		err = ice_up(vsi);
+ 		if (err) {
+ 			netdev_err(netdev, "change MTU if_up err %d\n", err);
+-			goto event_after;
++			return err;
+ 		}
+ 	}
+ 
+ 	netdev_dbg(netdev, "changed MTU to %d\n", new_mtu);
+-event_after:
+-	set_bit(IIDC_EVENT_AFTER_MTU_CHANGE, event->type);
+-	ice_send_event_to_aux(pf, event);
+-	kfree(event);
++	set_bit(ICE_FLAG_MTU_CHANGED, pf->flags);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index a12cc305c4619..e17813fb71a1b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -2297,24 +2297,6 @@ ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode,
+ 
+ 	dev = ice_pf_to_dev(pf);
+ 
+-	/* single place to detect unsuccessful return values */
+-	if (v_retval) {
+-		vf->num_inval_msgs++;
+-		dev_info(dev, "VF %d failed opcode %d, retval: %d\n", vf->vf_id,
+-			 v_opcode, v_retval);
+-		if (vf->num_inval_msgs > ICE_DFLT_NUM_INVAL_MSGS_ALLOWED) {
+-			dev_err(dev, "Number of invalid messages exceeded for VF %d\n",
+-				vf->vf_id);
+-			dev_err(dev, "Use PF Control I/F to enable the VF\n");
+-			set_bit(ICE_VF_STATE_DIS, vf->vf_states);
+-			return -EIO;
+-		}
+-	} else {
+-		vf->num_valid_msgs++;
+-		/* reset the invalid counter, if a valid message is received. */
+-		vf->num_inval_msgs = 0;
+-	}
+-
+ 	aq_ret = ice_aq_send_msg_to_vf(&pf->hw, vf->vf_id, v_opcode, v_retval,
+ 				       msg, msglen, NULL);
+ 	if (aq_ret && pf->hw.mailboxq.sq_last_status != ICE_AQ_RC_ENOSYS) {
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+index 7e28ecbbe7af0..f33c0889a5d40 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+@@ -14,7 +14,6 @@
+ #define ICE_MAX_MACADDR_PER_VF		18
+ 
+ /* Malicious Driver Detection */
+-#define ICE_DFLT_NUM_INVAL_MSGS_ALLOWED		10
+ #define ICE_MDD_EVENTS_THRESHOLD		30
+ 
+ /* Static VF transaction/status register def */
+@@ -134,8 +133,6 @@ struct ice_vf {
+ 	unsigned int max_tx_rate;	/* Maximum Tx bandwidth limit in Mbps */
+ 	DECLARE_BITMAP(vf_states, ICE_VF_STATES_NBITS);	/* VF runtime states */
+ 
+-	u64 num_inval_msgs;		/* number of continuous invalid msgs */
+-	u64 num_valid_msgs;		/* number of valid msgs detected */
+ 	unsigned long vf_caps;		/* VF's adv. capabilities */
+ 	u8 num_req_qs;			/* num of queue pairs requested by VF */
+ 	u16 num_mac;
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_main.c b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+index c687dc9aa9737..36c5b1eba30de 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_main.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+@@ -553,6 +553,7 @@ static int prestera_switch_set_base_mac_addr(struct prestera_switch *sw)
+ 		dev_info(prestera_dev(sw), "using random base mac address\n");
+ 	}
+ 	of_node_put(base_mac_np);
++	of_node_put(np);
+ 
+ 	return prestera_hw_switch_mac_set(sw, sw->base_mac);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 17fe058096533..3eacd87399294 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -131,11 +131,8 @@ static int cmd_alloc_index(struct mlx5_cmd *cmd)
+ 
+ static void cmd_free_index(struct mlx5_cmd *cmd, int idx)
+ {
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&cmd->alloc_lock, flags);
++	lockdep_assert_held(&cmd->alloc_lock);
+ 	set_bit(idx, &cmd->bitmask);
+-	spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+ }
+ 
+ static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)
+@@ -145,17 +142,21 @@ static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)
+ 
+ static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
+ {
++	struct mlx5_cmd *cmd = ent->cmd;
++	unsigned long flags;
++
++	spin_lock_irqsave(&cmd->alloc_lock, flags);
+ 	if (!refcount_dec_and_test(&ent->refcnt))
+-		return;
++		goto out;
+ 
+ 	if (ent->idx >= 0) {
+-		struct mlx5_cmd *cmd = ent->cmd;
+-
+ 		cmd_free_index(cmd, ent->idx);
+ 		up(ent->page_queue ? &cmd->pages_sem : &cmd->sem);
+ 	}
+ 
+ 	cmd_free_ent(ent);
++out:
++	spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+ }
+ 
+ static struct mlx5_cmd_layout *get_inst(struct mlx5_cmd *cmd, int idx)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
+index da169b8166650..d4239e3b3c88e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
+@@ -88,9 +88,6 @@ void mlx5e_tir_builder_build_packet_merge(struct mlx5e_tir_builder *builder,
+ 			 (MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ - rough_max_l2_l3_hdr_sz) >> 8);
+ 		MLX5_SET(tirc, tirc, lro_timeout_period_usecs, pkt_merge_param->timeout);
+ 		break;
+-	case MLX5E_PACKET_MERGE_SHAMPO:
+-		MLX5_SET(tirc, tirc, packet_merge_mask, MLX5_TIRC_PACKET_MERGE_MASK_SHAMPO);
+-		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d92b82cdfd4e1..22de7327c5a82 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3592,8 +3592,7 @@ static int set_feature_hw_gro(struct net_device *netdev, bool enable)
+ 		goto out;
+ 	}
+ 
+-	err = mlx5e_safe_switch_params(priv, &new_params,
+-				       mlx5e_modify_tirs_packet_merge_ctx, NULL, reset);
++	err = mlx5e_safe_switch_params(priv, &new_params, NULL, NULL, reset);
+ out:
+ 	mutex_unlock(&priv->state_lock);
+ 	return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
+index 1ca01a5b6cdd8..626aa60b6099b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
+@@ -126,6 +126,10 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+ 		return;
+ 	}
+ 
++	/* Handle multipath entry with lower priority value */
++	if (mp->mfi && mp->mfi != fi && fi->fib_priority >= mp->mfi->fib_priority)
++		return;
++
+ 	/* Handle add/replace event */
+ 	nhs = fib_info_num_path(fi);
+ 	if (nhs == 1) {
+@@ -135,12 +139,13 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+ 			int i = mlx5_lag_dev_get_netdev_idx(ldev, nh_dev);
+ 
+ 			if (i < 0)
+-				i = MLX5_LAG_NORMAL_AFFINITY;
+-			else
+-				++i;
++				return;
+ 
++			i++;
+ 			mlx5_lag_set_port_affinity(ldev, i);
+ 		}
++
++		mp->mfi = fi;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+index 1e8ec4f236b28..df58cba37930a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+@@ -121,9 +121,6 @@ u32 mlx5_chains_get_nf_ft_chain(struct mlx5_fs_chains *chains)
+ 
+ u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains)
+ {
+-	if (!mlx5_chains_prios_supported(chains))
+-		return 1;
+-
+ 	if (mlx5_chains_ignore_flow_level_supported(chains))
+ 		return UINT_MAX;
+ 
+diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
+index bc39558fe82be..756f97dce85b2 100644
+--- a/drivers/net/ethernet/nxp/lpc_eth.c
++++ b/drivers/net/ethernet/nxp/lpc_eth.c
+@@ -1471,6 +1471,7 @@ static int lpc_eth_drv_resume(struct platform_device *pdev)
+ {
+ 	struct net_device *ndev = platform_get_drvdata(pdev);
+ 	struct netdata_local *pldat;
++	int ret;
+ 
+ 	if (device_may_wakeup(&pdev->dev))
+ 		disable_irq_wake(ndev->irq);
+@@ -1480,7 +1481,9 @@ static int lpc_eth_drv_resume(struct platform_device *pdev)
+ 			pldat = netdev_priv(ndev);
+ 
+ 			/* Enable interface clock */
+-			clk_enable(pldat->clk);
++			ret = clk_enable(pldat->clk);
++			if (ret)
++				return ret;
+ 
+ 			/* Reset and initialize */
+ 			__lpc_eth_reset(pldat);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index 8ac38828ba459..48cf4355bc47a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -3806,11 +3806,11 @@ bool qed_iov_mark_vf_flr(struct qed_hwfn *p_hwfn, u32 *p_disabled_vfs)
+ 	return found;
+ }
+ 
+-static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
+-			     u16 vfid,
+-			     struct qed_mcp_link_params *p_params,
+-			     struct qed_mcp_link_state *p_link,
+-			     struct qed_mcp_link_capabilities *p_caps)
++static int qed_iov_get_link(struct qed_hwfn *p_hwfn,
++			    u16 vfid,
++			    struct qed_mcp_link_params *p_params,
++			    struct qed_mcp_link_state *p_link,
++			    struct qed_mcp_link_capabilities *p_caps)
+ {
+ 	struct qed_vf_info *p_vf = qed_iov_get_vf_info(p_hwfn,
+ 						       vfid,
+@@ -3818,7 +3818,7 @@ static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
+ 	struct qed_bulletin_content *p_bulletin;
+ 
+ 	if (!p_vf)
+-		return;
++		return -EINVAL;
+ 
+ 	p_bulletin = p_vf->bulletin.p_virt;
+ 
+@@ -3828,6 +3828,7 @@ static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
+ 		__qed_vf_get_link_state(p_hwfn, p_link, p_bulletin);
+ 	if (p_caps)
+ 		__qed_vf_get_link_caps(p_hwfn, p_caps, p_bulletin);
++	return 0;
+ }
+ 
+ static int
+@@ -4686,6 +4687,7 @@ static int qed_get_vf_config(struct qed_dev *cdev,
+ 	struct qed_public_vf_info *vf_info;
+ 	struct qed_mcp_link_state link;
+ 	u32 tx_rate;
++	int ret;
+ 
+ 	/* Sanitize request */
+ 	if (IS_VF(cdev))
+@@ -4699,7 +4701,9 @@ static int qed_get_vf_config(struct qed_dev *cdev,
+ 
+ 	vf_info = qed_iov_get_public_vf_info(hwfn, vf_id, true);
+ 
+-	qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL);
++	ret = qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL);
++	if (ret)
++		return ret;
+ 
+ 	/* Fill information about VF */
+ 	ivi->vf = vf_id;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.c b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+index 597cd9cd57b54..7b0e390c0b07d 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_vf.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+@@ -513,6 +513,9 @@ int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)
+ 						    p_iov->bulletin.size,
+ 						    &p_iov->bulletin.phys,
+ 						    GFP_KERNEL);
++	if (!p_iov->bulletin.p_virt)
++		goto free_pf2vf_reply;
++
+ 	DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ 		   "VF's bulletin Board [%p virt 0x%llx phys 0x%08x bytes]\n",
+ 		   p_iov->bulletin.p_virt,
+@@ -552,6 +555,10 @@ int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)
+ 
+ 	return rc;
+ 
++free_pf2vf_reply:
++	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
++			  sizeof(union pfvf_tlvs),
++			  p_iov->pf2vf_reply, p_iov->pf2vf_reply_phys);
+ free_vf2pf_request:
+ 	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ 			  sizeof(union vfpf_tlvs),
+diff --git a/drivers/net/ethernet/ti/cpts.c b/drivers/net/ethernet/ti/cpts.c
+index dc70a6bfaa6a1..92ca739fac010 100644
+--- a/drivers/net/ethernet/ti/cpts.c
++++ b/drivers/net/ethernet/ti/cpts.c
+@@ -568,7 +568,9 @@ int cpts_register(struct cpts *cpts)
+ 	for (i = 0; i < CPTS_MAX_EVENTS; i++)
+ 		list_add(&cpts->pool_data[i].list, &cpts->pool);
+ 
+-	clk_enable(cpts->refclk);
++	err = clk_enable(cpts->refclk);
++	if (err)
++		return err;
+ 
+ 	cpts_write32(cpts, CPTS_EN, control);
+ 	cpts_write32(cpts, TS_PEND_EN, int_enable);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 0815de581c7f5..7ae67b054191b 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -1186,7 +1186,7 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
+ 	if (rc) {
+ 		dev_err(dev,
+ 			"Cannot register network device, aborting\n");
+-		goto error;
++		goto put_node;
+ 	}
+ 
+ 	dev_info(dev,
+@@ -1194,6 +1194,8 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
+ 		 (unsigned long __force)ndev->mem_start, lp->base_addr, ndev->irq);
+ 	return 0;
+ 
++put_node:
++	of_node_put(lp->phy_node);
+ error:
+ 	free_netdev(ndev);
+ 	return rc;
+diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
+index 8a19a06b505d1..ff2bb3d80fac8 100644
+--- a/drivers/net/hamradio/6pack.c
++++ b/drivers/net/hamradio/6pack.c
+@@ -668,11 +668,11 @@ static void sixpack_close(struct tty_struct *tty)
+ 	 */
+ 	netif_stop_queue(sp->dev);
+ 
++	unregister_netdev(sp->dev);
++
+ 	del_timer_sync(&sp->tx_t);
+ 	del_timer_sync(&sp->resync_t);
+ 
+-	unregister_netdev(sp->dev);
+-
+ 	/* Free all 6pack frame buffers after unreg. */
+ 	kfree(sp->rbuff);
+ 	kfree(sp->xbuff);
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index 211b5476a6f51..ce17b2af3218f 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -274,7 +274,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 		if (err < 0)
+ 			return err;
+ 
+-		err = phy_write(phydev, MII_DP83822_MISR1, 0);
++		err = phy_write(phydev, MII_DP83822_MISR2, 0);
+ 		if (err < 0)
+ 			return err;
+ 
+diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
+index 7e7904fee1d97..73f7962a37d33 100644
+--- a/drivers/net/phy/meson-gxl.c
++++ b/drivers/net/phy/meson-gxl.c
+@@ -30,8 +30,12 @@
+ #define  INTSRC_LINK_DOWN	BIT(4)
+ #define  INTSRC_REMOTE_FAULT	BIT(5)
+ #define  INTSRC_ANEG_COMPLETE	BIT(6)
++#define  INTSRC_ENERGY_DETECT	BIT(7)
+ #define INTSRC_MASK	30
+ 
++#define INT_SOURCES (INTSRC_LINK_DOWN | INTSRC_ANEG_COMPLETE | \
++		     INTSRC_ENERGY_DETECT)
++
+ #define BANK_ANALOG_DSP		0
+ #define BANK_WOL		1
+ #define BANK_BIST		3
+@@ -200,7 +204,6 @@ static int meson_gxl_ack_interrupt(struct phy_device *phydev)
+ 
+ static int meson_gxl_config_intr(struct phy_device *phydev)
+ {
+-	u16 val;
+ 	int ret;
+ 
+ 	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+@@ -209,16 +212,9 @@ static int meson_gxl_config_intr(struct phy_device *phydev)
+ 		if (ret)
+ 			return ret;
+ 
+-		val = INTSRC_ANEG_PR
+-			| INTSRC_PARALLEL_FAULT
+-			| INTSRC_ANEG_LP_ACK
+-			| INTSRC_LINK_DOWN
+-			| INTSRC_REMOTE_FAULT
+-			| INTSRC_ANEG_COMPLETE;
+-		ret = phy_write(phydev, INTSRC_MASK, val);
++		ret = phy_write(phydev, INTSRC_MASK, INT_SOURCES);
+ 	} else {
+-		val = 0;
+-		ret = phy_write(phydev, INTSRC_MASK, val);
++		ret = phy_write(phydev, INTSRC_MASK, 0);
+ 
+ 		/* Ack any pending IRQ */
+ 		ret = meson_gxl_ack_interrupt(phydev);
+@@ -237,10 +233,23 @@ static irqreturn_t meson_gxl_handle_interrupt(struct phy_device *phydev)
+ 		return IRQ_NONE;
+ 	}
+ 
++	irq_status &= INT_SOURCES;
++
+ 	if (irq_status == 0)
+ 		return IRQ_NONE;
+ 
+-	phy_trigger_machine(phydev);
++	/* Aneg-complete interrupt is used for link-up detection */
++	if (phydev->autoneg == AUTONEG_ENABLE &&
++	    irq_status == INTSRC_ENERGY_DETECT)
++		return IRQ_HANDLED;
++
++	/* Give PHY some time before MAC starts sending data. This works
++	 * around an issue where network doesn't come up properly.
++	 */
++	if (!(irq_status & INTSRC_LINK_DOWN))
++		phy_queue_state_machine(phydev, msecs_to_jiffies(100));
++	else
++		phy_trigger_machine(phydev);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index bc1e3dd67c04c..a0f29482294d4 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -84,9 +84,10 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ 	ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
+ 		 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 		 0, index, &buf, 4);
+-	if (unlikely(ret < 0)) {
+-		netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
+-			    index, ret);
++	if (ret < 0) {
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
++				    index, ret);
+ 		return ret;
+ 	}
+ 
+@@ -116,7 +117,7 @@ static int __must_check __smsc95xx_write_reg(struct usbnet *dev, u32 index,
+ 	ret = fn(dev, USB_VENDOR_REQUEST_WRITE_REGISTER, USB_DIR_OUT
+ 		 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 		 0, index, &buf, 4);
+-	if (unlikely(ret < 0))
++	if (ret < 0 && ret != -ENODEV)
+ 		netdev_warn(dev->net, "Failed to write reg index 0x%08x: %d\n",
+ 			    index, ret);
+ 
+@@ -159,6 +160,9 @@ static int __must_check __smsc95xx_phy_wait_not_busy(struct usbnet *dev,
+ 	do {
+ 		ret = __smsc95xx_read_reg(dev, MII_ADDR, &val, in_pm);
+ 		if (ret < 0) {
++			/* Ignore -ENODEV error during disconnect() */
++			if (ret == -ENODEV)
++				return 0;
+ 			netdev_warn(dev->net, "Error reading MII_ACCESS\n");
+ 			return ret;
+ 		}
+@@ -194,7 +198,8 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
+ 	addr = mii_address_cmd(phy_id, idx, MII_READ_ | MII_BUSY_);
+ 	ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error writing MII_ADDR\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error writing MII_ADDR\n");
+ 		goto done;
+ 	}
+ 
+@@ -206,7 +211,8 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
+ 
+ 	ret = __smsc95xx_read_reg(dev, MII_DATA, &val, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error reading MII_DATA\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error reading MII_DATA\n");
+ 		goto done;
+ 	}
+ 
+@@ -214,6 +220,10 @@ static int __smsc95xx_mdio_read(struct usbnet *dev, int phy_id, int idx,
+ 
+ done:
+ 	mutex_unlock(&dev->phy_mutex);
++
++	/* Ignore -ENODEV error during disconnect() */
++	if (ret == -ENODEV)
++		return 0;
+ 	return ret;
+ }
+ 
+@@ -235,7 +245,8 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
+ 	val = regval;
+ 	ret = __smsc95xx_write_reg(dev, MII_DATA, val, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error writing MII_DATA\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error writing MII_DATA\n");
+ 		goto done;
+ 	}
+ 
+@@ -243,7 +254,8 @@ static void __smsc95xx_mdio_write(struct usbnet *dev, int phy_id,
+ 	addr = mii_address_cmd(phy_id, idx, MII_WRITE_ | MII_BUSY_);
+ 	ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm);
+ 	if (ret < 0) {
+-		netdev_warn(dev->net, "Error writing MII_ADDR\n");
++		if (ret != -ENODEV)
++			netdev_warn(dev->net, "Error writing MII_ADDR\n");
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index d24b7a7993aa0..990360d75cb64 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -256,6 +256,7 @@ static void backend_disconnect(struct backend_info *be)
+ 		unsigned int queue_index;
+ 
+ 		xen_unregister_watchers(vif);
++		xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
+ #ifdef CONFIG_DEBUG_FS
+ 		xenvif_debugfs_delif(vif);
+ #endif /* CONFIG_DEBUG_FS */
+@@ -675,7 +676,6 @@ static void hotplug_status_changed(struct xenbus_watch *watch,
+ 
+ 		/* Not interested in this watch anymore. */
+ 		unregister_hotplug_status_watch(be);
+-		xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
+ 	}
+ 	kfree(str);
+ }
+@@ -824,15 +824,11 @@ static void connect(struct backend_info *be)
+ 	xenvif_carrier_on(be->vif);
+ 
+ 	unregister_hotplug_status_watch(be);
+-	if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
+-		err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
+-					   NULL, hotplug_status_changed,
+-					   "%s/%s", dev->nodename,
+-					   "hotplug-status");
+-		if (err)
+-			goto err;
++	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
++				   hotplug_status_changed,
++				   "%s/%s", dev->nodename, "hotplug-status");
++	if (!err)
+ 		be->have_hotplug_status_watch = 1;
+-	}
+ 
+ 	netif_tx_wake_all_queues(be->vif->dev);
+ 
+diff --git a/drivers/nfc/port100.c b/drivers/nfc/port100.c
+index d7db1a0e6be12..00d8ea6dcb5db 100644
+--- a/drivers/nfc/port100.c
++++ b/drivers/nfc/port100.c
+@@ -1612,7 +1612,9 @@ free_nfc_dev:
+ 	nfc_digital_free_device(dev->nfc_digital_dev);
+ 
+ error:
++	usb_kill_urb(dev->in_urb);
+ 	usb_free_urb(dev->in_urb);
++	usb_kill_urb(dev->out_urb);
+ 	usb_free_urb(dev->out_urb);
+ 	usb_put_dev(dev->udev);
+ 
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 891a36d02e7c7..65e00c64a588b 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -44,6 +44,8 @@ struct nvme_tcp_request {
+ 	u32			data_len;
+ 	u32			pdu_len;
+ 	u32			pdu_sent;
++	u32			h2cdata_left;
++	u32			h2cdata_offset;
+ 	u16			ttag;
+ 	__le16			status;
+ 	struct list_head	entry;
+@@ -95,6 +97,7 @@ struct nvme_tcp_queue {
+ 	struct nvme_tcp_request *request;
+ 
+ 	int			queue_size;
++	u32			maxh2cdata;
+ 	size_t			cmnd_capsule_len;
+ 	struct nvme_tcp_ctrl	*ctrl;
+ 	unsigned long		flags;
+@@ -572,23 +575,26 @@ static int nvme_tcp_handle_comp(struct nvme_tcp_queue *queue,
+ 	return ret;
+ }
+ 
+-static void nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+-		struct nvme_tcp_r2t_pdu *pdu)
++static void nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req)
+ {
+ 	struct nvme_tcp_data_pdu *data = req->pdu;
+ 	struct nvme_tcp_queue *queue = req->queue;
+ 	struct request *rq = blk_mq_rq_from_pdu(req);
++	u32 h2cdata_sent = req->pdu_len;
+ 	u8 hdgst = nvme_tcp_hdgst_len(queue);
+ 	u8 ddgst = nvme_tcp_ddgst_len(queue);
+ 
+ 	req->state = NVME_TCP_SEND_H2C_PDU;
+ 	req->offset = 0;
+-	req->pdu_len = le32_to_cpu(pdu->r2t_length);
++	req->pdu_len = min(req->h2cdata_left, queue->maxh2cdata);
+ 	req->pdu_sent = 0;
++	req->h2cdata_left -= req->pdu_len;
++	req->h2cdata_offset += h2cdata_sent;
+ 
+ 	memset(data, 0, sizeof(*data));
+ 	data->hdr.type = nvme_tcp_h2c_data;
+-	data->hdr.flags = NVME_TCP_F_DATA_LAST;
++	if (!req->h2cdata_left)
++		data->hdr.flags = NVME_TCP_F_DATA_LAST;
+ 	if (queue->hdr_digest)
+ 		data->hdr.flags |= NVME_TCP_F_HDGST;
+ 	if (queue->data_digest)
+@@ -597,9 +603,9 @@ static void nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+ 	data->hdr.pdo = data->hdr.hlen + hdgst;
+ 	data->hdr.plen =
+ 		cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst);
+-	data->ttag = pdu->ttag;
++	data->ttag = req->ttag;
+ 	data->command_id = nvme_cid(rq);
+-	data->data_offset = pdu->r2t_offset;
++	data->data_offset = cpu_to_le32(req->h2cdata_offset);
+ 	data->data_length = cpu_to_le32(req->pdu_len);
+ }
+ 
+@@ -609,6 +615,7 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue,
+ 	struct nvme_tcp_request *req;
+ 	struct request *rq;
+ 	u32 r2t_length = le32_to_cpu(pdu->r2t_length);
++	u32 r2t_offset = le32_to_cpu(pdu->r2t_offset);
+ 
+ 	rq = nvme_find_rq(nvme_tcp_tagset(queue), pdu->command_id);
+ 	if (!rq) {
+@@ -633,14 +640,19 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue,
+ 		return -EPROTO;
+ 	}
+ 
+-	if (unlikely(le32_to_cpu(pdu->r2t_offset) < req->data_sent)) {
++	if (unlikely(r2t_offset < req->data_sent)) {
+ 		dev_err(queue->ctrl->ctrl.device,
+ 			"req %d unexpected r2t offset %u (expected %zu)\n",
+-			rq->tag, le32_to_cpu(pdu->r2t_offset), req->data_sent);
++			rq->tag, r2t_offset, req->data_sent);
+ 		return -EPROTO;
+ 	}
+ 
+-	nvme_tcp_setup_h2c_data_pdu(req, pdu);
++	req->pdu_len = 0;
++	req->h2cdata_left = r2t_length;
++	req->h2cdata_offset = r2t_offset;
++	req->ttag = pdu->ttag;
++
++	nvme_tcp_setup_h2c_data_pdu(req);
+ 	nvme_tcp_queue_request(req, false, true);
+ 
+ 	return 0;
+@@ -928,6 +940,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ {
+ 	struct nvme_tcp_queue *queue = req->queue;
+ 	int req_data_len = req->data_len;
++	u32 h2cdata_left = req->h2cdata_left;
+ 
+ 	while (true) {
+ 		struct page *page = nvme_tcp_req_cur_page(req);
+@@ -972,7 +985,10 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 				req->state = NVME_TCP_SEND_DDGST;
+ 				req->offset = 0;
+ 			} else {
+-				nvme_tcp_done_send_req(queue);
++				if (h2cdata_left)
++					nvme_tcp_setup_h2c_data_pdu(req);
++				else
++					nvme_tcp_done_send_req(queue);
+ 			}
+ 			return 1;
+ 		}
+@@ -1030,9 +1046,14 @@ static int nvme_tcp_try_send_data_pdu(struct nvme_tcp_request *req)
+ 	if (queue->hdr_digest && !req->offset)
+ 		nvme_tcp_hdgst(queue->snd_hash, pdu, sizeof(*pdu));
+ 
+-	ret = kernel_sendpage(queue->sock, virt_to_page(pdu),
+-			offset_in_page(pdu) + req->offset, len,
+-			MSG_DONTWAIT | MSG_MORE | MSG_SENDPAGE_NOTLAST);
++	if (!req->h2cdata_left)
++		ret = kernel_sendpage(queue->sock, virt_to_page(pdu),
++				offset_in_page(pdu) + req->offset, len,
++				MSG_DONTWAIT | MSG_MORE | MSG_SENDPAGE_NOTLAST);
++	else
++		ret = sock_no_sendpage(queue->sock, virt_to_page(pdu),
++				offset_in_page(pdu) + req->offset, len,
++				MSG_DONTWAIT | MSG_MORE);
+ 	if (unlikely(ret <= 0))
+ 		return ret;
+ 
+@@ -1052,6 +1073,7 @@ static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req)
+ {
+ 	struct nvme_tcp_queue *queue = req->queue;
+ 	size_t offset = req->offset;
++	u32 h2cdata_left = req->h2cdata_left;
+ 	int ret;
+ 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
+ 	struct kvec iov = {
+@@ -1069,7 +1091,10 @@ static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req)
+ 		return ret;
+ 
+ 	if (offset + ret == NVME_TCP_DIGEST_LENGTH) {
+-		nvme_tcp_done_send_req(queue);
++		if (h2cdata_left)
++			nvme_tcp_setup_h2c_data_pdu(req);
++		else
++			nvme_tcp_done_send_req(queue);
+ 		return 1;
+ 	}
+ 
+@@ -1261,6 +1286,7 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ 	struct msghdr msg = {};
+ 	struct kvec iov;
+ 	bool ctrl_hdgst, ctrl_ddgst;
++	u32 maxh2cdata;
+ 	int ret;
+ 
+ 	icreq = kzalloc(sizeof(*icreq), GFP_KERNEL);
+@@ -1344,6 +1370,14 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ 		goto free_icresp;
+ 	}
+ 
++	maxh2cdata = le32_to_cpu(icresp->maxdata);
++	if ((maxh2cdata % 4) || (maxh2cdata < NVME_TCP_MIN_MAXH2CDATA)) {
++		pr_err("queue %d: invalid maxh2cdata returned %u\n",
++		       nvme_tcp_queue_id(queue), maxh2cdata);
++		goto free_icresp;
++	}
++	queue->maxh2cdata = maxh2cdata;
++
+ 	ret = 0;
+ free_icresp:
+ 	kfree(icresp);
+@@ -2329,6 +2363,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
+ 	req->data_sent = 0;
+ 	req->pdu_len = 0;
+ 	req->pdu_sent = 0;
++	req->h2cdata_left = 0;
+ 	req->data_len = blk_rq_nr_phys_segments(rq) ?
+ 				blk_rq_payload_bytes(rq) : 0;
+ 	req->curr_bio = rq->bio;
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 7e868e5995b7e..f66abb496ed16 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -644,8 +644,8 @@ void __init early_init_fdt_scan_reserved_mem(void)
+ 	}
+ 
+ 	fdt_scan_reserved_mem();
+-	fdt_init_reserved_mem();
+ 	fdt_reserve_elfcorehdr();
++	fdt_init_reserved_mem();
+ }
+ 
+ /**
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 20a9326907384..db864bf634a3e 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5344,11 +5344,6 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
+  */
+ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ {
+-	if ((pdev->device == 0x7312 && pdev->revision != 0x00) ||
+-	    (pdev->device == 0x7340 && pdev->revision != 0xc5) ||
+-	    (pdev->device == 0x7341 && pdev->revision != 0x00))
+-		return;
+-
+ 	if (pdev->device == 0x15d8) {
+ 		if (pdev->revision == 0xcf &&
+ 		    pdev->subsystem_vendor == 0xea50 &&
+@@ -5370,10 +5365,19 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_amd_harvest_no_ats);
+ /* AMD Iceland dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats);
+ /* AMD Navi10 dGPU */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7310, quirk_amd_harvest_no_ats);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7318, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7319, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731a, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731b, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731e, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x731f, quirk_amd_harvest_no_ats);
+ /* AMD Navi14 dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7341, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7347, quirk_amd_harvest_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x734f, quirk_amd_harvest_no_ats);
+ /* AMD Raven platform iGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x15d8, quirk_amd_harvest_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+diff --git a/drivers/pinctrl/intel/pinctrl-tigerlake.c b/drivers/pinctrl/intel/pinctrl-tigerlake.c
+index 0bcd19597e4ad..3ddaeffc04150 100644
+--- a/drivers/pinctrl/intel/pinctrl-tigerlake.c
++++ b/drivers/pinctrl/intel/pinctrl-tigerlake.c
+@@ -749,7 +749,6 @@ static const struct acpi_device_id tgl_pinctrl_acpi_match[] = {
+ 	{ "INT34C5", (kernel_ulong_t)&tgllp_soc_data },
+ 	{ "INT34C6", (kernel_ulong_t)&tglh_soc_data },
+ 	{ "INTC1055", (kernel_ulong_t)&tgllp_soc_data },
+-	{ "INTC1057", (kernel_ulong_t)&tgllp_soc_data },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(acpi, tgl_pinctrl_acpi_match);
+diff --git a/drivers/soc/mediatek/mt8192-mmsys.h b/drivers/soc/mediatek/mt8192-mmsys.h
+index 6f0a57044a7bd..6aae0b12b6ff9 100644
+--- a/drivers/soc/mediatek/mt8192-mmsys.h
++++ b/drivers/soc/mediatek/mt8192-mmsys.h
+@@ -53,7 +53,8 @@ static const struct mtk_mmsys_routes mmsys_mt8192_routing_table[] = {
+ 		MT8192_AAL0_SEL_IN_CCORR0
+ 	}, {
+ 		DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0,
+-		MT8192_DISP_DSI0_SEL_IN, MT8192_DSI0_SEL_IN_DITHER0
++		MT8192_DISP_DSI0_SEL_IN, MT8192_DSI0_SEL_IN_DITHER0,
++		MT8192_DSI0_SEL_IN_DITHER0
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
+ 		MT8192_DISP_RDMA0_SOUT_SEL, MT8192_RDMA0_SOUT_COLOR0,
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 553b6b9d02222..c6a1bb09be056 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -585,6 +585,12 @@ static int rockchip_spi_slave_abort(struct spi_controller *ctlr)
+ {
+ 	struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+ 
++	if (atomic_read(&rs->state) & RXDMA)
++		dmaengine_terminate_sync(ctlr->dma_rx);
++	if (atomic_read(&rs->state) & TXDMA)
++		dmaengine_terminate_sync(ctlr->dma_tx);
++	atomic_set(&rs->state, 0);
++	spi_enable_chip(rs, false);
+ 	rs->slave_abort = true;
+ 	spi_finalize_current_transfer(ctlr);
+ 
+@@ -654,7 +660,7 @@ static int rockchip_spi_probe(struct platform_device *pdev)
+ 	struct spi_controller *ctlr;
+ 	struct resource *mem;
+ 	struct device_node *np = pdev->dev.of_node;
+-	u32 rsd_nsecs;
++	u32 rsd_nsecs, num_cs;
+ 	bool slave_mode;
+ 
+ 	slave_mode = of_property_read_bool(np, "spi-slave");
+@@ -764,8 +770,9 @@ static int rockchip_spi_probe(struct platform_device *pdev)
+ 		 * rk spi0 has two native cs, spi1..5 one cs only
+ 		 * if num-cs is missing in the dts, default to 1
+ 		 */
+-		if (of_property_read_u16(np, "num-cs", &ctlr->num_chipselect))
+-			ctlr->num_chipselect = 1;
++		if (of_property_read_u32(np, "num-cs", &num_cs))
++			num_cs = 1;
++		ctlr->num_chipselect = num_cs;
+ 		ctlr->use_gpio_descriptors = true;
+ 	}
+ 	ctlr->dev.of_node = pdev->dev.of_node;
+diff --git a/drivers/staging/gdm724x/gdm_lte.c b/drivers/staging/gdm724x/gdm_lte.c
+index 493ed4821515b..0d8d8fed283d0 100644
+--- a/drivers/staging/gdm724x/gdm_lte.c
++++ b/drivers/staging/gdm724x/gdm_lte.c
+@@ -76,14 +76,15 @@ static void tx_complete(void *arg)
+ 
+ static int gdm_lte_rx(struct sk_buff *skb, struct nic *nic, int nic_type)
+ {
+-	int ret;
++	int ret, len;
+ 
++	len = skb->len + ETH_HLEN;
+ 	ret = netif_rx_ni(skb);
+ 	if (ret == NET_RX_DROP) {
+ 		nic->stats.rx_dropped++;
+ 	} else {
+ 		nic->stats.rx_packets++;
+-		nic->stats.rx_bytes += skb->len + ETH_HLEN;
++		nic->stats.rx_bytes += len;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+index 0f82f5031c434..49a3f45cb771b 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
++++ b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+@@ -5907,6 +5907,7 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 	struct sta_info *psta_bmc;
+ 	struct list_head *xmitframe_plist, *xmitframe_phead, *tmp;
+ 	struct xmit_frame *pxmitframe = NULL;
++	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 	struct sta_priv  *pstapriv = &padapter->stapriv;
+ 
+ 	/* for BC/MC Frames */
+@@ -5917,7 +5918,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 	if ((pstapriv->tim_bitmap&BIT(0)) && (psta_bmc->sleepq_len > 0)) {
+ 		msleep(10);/*  10ms, ATIM(HIQ) Windows */
+ 
+-		spin_lock_bh(&psta_bmc->sleep_q.lock);
++		/* spin_lock_bh(&psta_bmc->sleep_q.lock); */
++		spin_lock_bh(&pxmitpriv->lock);
+ 
+ 		xmitframe_phead = get_list_head(&psta_bmc->sleep_q);
+ 		list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) {
+@@ -5940,7 +5942,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *padapter, unsigned char *pbuf)
+ 			rtw_hal_xmitframe_enqueue(padapter, pxmitframe);
+ 		}
+ 
+-		spin_unlock_bh(&psta_bmc->sleep_q.lock);
++		/* spin_unlock_bh(&psta_bmc->sleep_q.lock); */
++		spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 		/* check hi queue and bmc_sleepq */
+ 		rtw_chk_hi_queue_cmd(padapter);
+diff --git a/drivers/staging/rtl8723bs/core/rtw_recv.c b/drivers/staging/rtl8723bs/core/rtw_recv.c
+index 41bfca549c641..105fe0e3482a2 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_recv.c
++++ b/drivers/staging/rtl8723bs/core/rtw_recv.c
+@@ -957,8 +957,10 @@ static signed int validate_recv_ctrl_frame(struct adapter *padapter, union recv_
+ 		if ((psta->state&WIFI_SLEEP_STATE) && (pstapriv->sta_dz_bitmap&BIT(psta->aid))) {
+ 			struct list_head	*xmitframe_plist, *xmitframe_phead;
+ 			struct xmit_frame *pxmitframe = NULL;
++			struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+-			spin_lock_bh(&psta->sleep_q.lock);
++			/* spin_lock_bh(&psta->sleep_q.lock); */
++			spin_lock_bh(&pxmitpriv->lock);
+ 
+ 			xmitframe_phead = get_list_head(&psta->sleep_q);
+ 			xmitframe_plist = get_next(xmitframe_phead);
+@@ -989,10 +991,12 @@ static signed int validate_recv_ctrl_frame(struct adapter *padapter, union recv_
+ 					update_beacon(padapter, WLAN_EID_TIM, NULL, true);
+ 				}
+ 
+-				spin_unlock_bh(&psta->sleep_q.lock);
++				/* spin_unlock_bh(&psta->sleep_q.lock); */
++				spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 			} else {
+-				spin_unlock_bh(&psta->sleep_q.lock);
++				/* spin_unlock_bh(&psta->sleep_q.lock); */
++				spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 				if (pstapriv->tim_bitmap&BIT(psta->aid)) {
+ 					if (psta->sleepq_len == 0) {
+diff --git a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
+index 0c9ea1520fd05..beb11d89db186 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
++++ b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
+@@ -293,48 +293,46 @@ u32 rtw_free_stainfo(struct adapter *padapter, struct sta_info *psta)
+ 
+ 	/* list_del_init(&psta->wakeup_list); */
+ 
+-	spin_lock_bh(&psta->sleep_q.lock);
++	spin_lock_bh(&pxmitpriv->lock);
++
+ 	rtw_free_xmitframe_queue(pxmitpriv, &psta->sleep_q);
+ 	psta->sleepq_len = 0;
+-	spin_unlock_bh(&psta->sleep_q.lock);
+-
+-	spin_lock_bh(&pxmitpriv->lock);
+ 
+ 	/* vo */
+-	spin_lock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->vo_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vo_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->vo_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits;
+ 	phwxmit->accnt -= pstaxmitpriv->vo_q.qcnt;
+ 	pstaxmitpriv->vo_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->vo_pending.lock)); */
+ 
+ 	/* vi */
+-	spin_lock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->vi_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vi_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->vi_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+1;
+ 	phwxmit->accnt -= pstaxmitpriv->vi_q.qcnt;
+ 	pstaxmitpriv->vi_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->vi_pending.lock)); */
+ 
+ 	/* be */
+-	spin_lock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->be_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->be_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->be_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+2;
+ 	phwxmit->accnt -= pstaxmitpriv->be_q.qcnt;
+ 	pstaxmitpriv->be_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->be_pending.lock)); */
+ 
+ 	/* bk */
+-	spin_lock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
++	/* spin_lock_bh(&(pxmitpriv->bk_pending.lock)); */
+ 	rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->bk_q.sta_pending);
+ 	list_del_init(&(pstaxmitpriv->bk_q.tx_pending));
+ 	phwxmit = pxmitpriv->hwxmits+3;
+ 	phwxmit->accnt -= pstaxmitpriv->bk_q.qcnt;
+ 	pstaxmitpriv->bk_q.qcnt = 0;
+-	spin_unlock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
++	/* spin_unlock_bh(&(pxmitpriv->bk_pending.lock)); */
+ 
+ 	spin_unlock_bh(&pxmitpriv->lock);
+ 
+diff --git a/drivers/staging/rtl8723bs/core/rtw_xmit.c b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+index 13b8bd5ffabc4..f466bfd248fb6 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_xmit.c
++++ b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+@@ -1734,12 +1734,15 @@ void rtw_free_xmitframe_queue(struct xmit_priv *pxmitpriv, struct __queue *pfram
+ 	struct list_head *plist, *phead, *tmp;
+ 	struct	xmit_frame	*pxmitframe;
+ 
++	spin_lock_bh(&pframequeue->lock);
++
+ 	phead = get_list_head(pframequeue);
+ 	list_for_each_safe(plist, tmp, phead) {
+ 		pxmitframe = list_entry(plist, struct xmit_frame, list);
+ 
+ 		rtw_free_xmitframe(pxmitpriv, pxmitframe);
+ 	}
++	spin_unlock_bh(&pframequeue->lock);
+ }
+ 
+ s32 rtw_xmitframe_enqueue(struct adapter *padapter, struct xmit_frame *pxmitframe)
+@@ -1794,7 +1797,6 @@ s32 rtw_xmit_classifier(struct adapter *padapter, struct xmit_frame *pxmitframe)
+ 	struct sta_info *psta;
+ 	struct tx_servq	*ptxservq;
+ 	struct pkt_attrib	*pattrib = &pxmitframe->attrib;
+-	struct xmit_priv *xmit_priv = &padapter->xmitpriv;
+ 	struct hw_xmit	*phwxmits =  padapter->xmitpriv.hwxmits;
+ 	signed int res = _SUCCESS;
+ 
+@@ -1812,14 +1814,12 @@ s32 rtw_xmit_classifier(struct adapter *padapter, struct xmit_frame *pxmitframe)
+ 
+ 	ptxservq = rtw_get_sta_pending(padapter, psta, pattrib->priority, (u8 *)(&ac_index));
+ 
+-	spin_lock_bh(&xmit_priv->lock);
+ 	if (list_empty(&ptxservq->tx_pending))
+ 		list_add_tail(&ptxservq->tx_pending, get_list_head(phwxmits[ac_index].sta_queue));
+ 
+ 	list_add_tail(&pxmitframe->list, get_list_head(&ptxservq->sta_pending));
+ 	ptxservq->qcnt++;
+ 	phwxmits[ac_index].accnt++;
+-	spin_unlock_bh(&xmit_priv->lock);
+ 
+ exit:
+ 
+@@ -2202,10 +2202,11 @@ void wakeup_sta_to_xmit(struct adapter *padapter, struct sta_info *psta)
+ 	struct list_head *xmitframe_plist, *xmitframe_phead, *tmp;
+ 	struct xmit_frame *pxmitframe = NULL;
+ 	struct sta_priv *pstapriv = &padapter->stapriv;
++	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+ 	psta_bmc = rtw_get_bcmc_stainfo(padapter);
+ 
+-	spin_lock_bh(&psta->sleep_q.lock);
++	spin_lock_bh(&pxmitpriv->lock);
+ 
+ 	xmitframe_phead = get_list_head(&psta->sleep_q);
+ 	list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) {
+@@ -2306,7 +2307,7 @@ void wakeup_sta_to_xmit(struct adapter *padapter, struct sta_info *psta)
+ 
+ _exit:
+ 
+-	spin_unlock_bh(&psta->sleep_q.lock);
++	spin_unlock_bh(&pxmitpriv->lock);
+ 
+ 	if (update_mask)
+ 		update_beacon(padapter, WLAN_EID_TIM, NULL, true);
+@@ -2318,8 +2319,9 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst
+ 	struct list_head *xmitframe_plist, *xmitframe_phead, *tmp;
+ 	struct xmit_frame *pxmitframe = NULL;
+ 	struct sta_priv *pstapriv = &padapter->stapriv;
++	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ 
+-	spin_lock_bh(&psta->sleep_q.lock);
++	spin_lock_bh(&pxmitpriv->lock);
+ 
+ 	xmitframe_phead = get_list_head(&psta->sleep_q);
+ 	list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) {
+@@ -2372,7 +2374,7 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst
+ 		}
+ 	}
+ 
+-	spin_unlock_bh(&psta->sleep_q.lock);
++	spin_unlock_bh(&pxmitpriv->lock);
+ }
+ 
+ void enqueue_pending_xmitbuf(struct xmit_priv *pxmitpriv, struct xmit_buf *pxmitbuf)
+diff --git a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
+index 7fe3df863fe13..2b9a41b12d1f3 100644
+--- a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
++++ b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
+@@ -507,7 +507,9 @@ s32 rtl8723bs_hal_xmit(
+ 			rtw_issue_addbareq_cmd(padapter, pxmitframe);
+ 	}
+ 
++	spin_lock_bh(&pxmitpriv->lock);
+ 	err = rtw_xmitframe_enqueue(padapter, pxmitframe);
++	spin_unlock_bh(&pxmitpriv->lock);
+ 	if (err != _SUCCESS) {
+ 		rtw_free_xmitframe(pxmitpriv, pxmitframe);
+ 
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 1ecedbb1684c8..06d0e88ec8af9 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -43,6 +43,7 @@
+ #define PCI_DEVICE_ID_INTEL_ADLP		0x51ee
+ #define PCI_DEVICE_ID_INTEL_ADLM		0x54ee
+ #define PCI_DEVICE_ID_INTEL_ADLS		0x7ae1
++#define PCI_DEVICE_ID_INTEL_RPLS		0x7a61
+ #define PCI_DEVICE_ID_INTEL_TGL			0x9a15
+ #define PCI_DEVICE_ID_AMD_MR			0x163a
+ 
+@@ -420,6 +421,9 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADLS),
+ 	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ 
++	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPLS),
++	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
++
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL),
+ 	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ 
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index ef6da39ccb3f9..7b4ab7cfc3599 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -1571,11 +1571,27 @@ static virtio_net_ctrl_ack handle_ctrl_mq(struct mlx5_vdpa_dev *mvdev, u8 cmd)
+ 
+ 	switch (cmd) {
+ 	case VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET:
++		/* This mq feature check aligns with pre-existing userspace
++		 * implementation.
++		 *
++		 * Without it, an untrusted driver could fake a multiqueue config
++		 * request down to a non-mq device that may cause kernel to
++		 * panic due to uninitialized resources for extra vqs. Even with
++		 * a well behaving guest driver, it is not expected to allow
++		 * changing the number of vqs on a non-mq device.
++		 */
++		if (!MLX5_FEATURE(mvdev, VIRTIO_NET_F_MQ))
++			break;
++
+ 		read = vringh_iov_pull_iotlb(&cvq->vring, &cvq->riov, (void *)&mq, sizeof(mq));
+ 		if (read != sizeof(mq))
+ 			break;
+ 
+ 		newqps = mlx5vdpa16_to_cpu(mvdev, mq.virtqueue_pairs);
++		if (newqps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
++		    newqps > mlx5_vdpa_max_qps(mvdev->max_vqs))
++			break;
++
+ 		if (ndev->cur_num_vqs == 2 * newqps) {
+ 			status = VIRTIO_NET_OK;
+ 			break;
+diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
+index 1daae26088609..0678c25141973 100644
+--- a/drivers/vdpa/vdpa_user/iova_domain.c
++++ b/drivers/vdpa/vdpa_user/iova_domain.c
+@@ -302,7 +302,7 @@ vduse_domain_alloc_iova(struct iova_domain *iovad,
+ 		iova_len = roundup_pow_of_two(iova_len);
+ 	iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true);
+ 
+-	return iova_pfn << shift;
++	return (dma_addr_t)iova_pfn << shift;
+ }
+ 
+ static void vduse_domain_free_iova(struct iova_domain *iovad,
+diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
+index e3ff7875e1234..fab1619611603 100644
+--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
++++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
+@@ -525,8 +525,8 @@ static void vp_vdpa_remove(struct pci_dev *pdev)
+ {
+ 	struct vp_vdpa *vp_vdpa = pci_get_drvdata(pdev);
+ 
+-	vdpa_unregister_device(&vp_vdpa->vdpa);
+ 	vp_modern_remove(&vp_vdpa->mdev);
++	vdpa_unregister_device(&vp_vdpa->vdpa);
+ }
+ 
+ static struct pci_driver vp_vdpa_driver = {
+diff --git a/drivers/vhost/iotlb.c b/drivers/vhost/iotlb.c
+index 670d56c879e50..40b098320b2a7 100644
+--- a/drivers/vhost/iotlb.c
++++ b/drivers/vhost/iotlb.c
+@@ -57,6 +57,17 @@ int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
+ 	if (last < start)
+ 		return -EFAULT;
+ 
++	/* If the range being mapped is [0, ULONG_MAX], split it into two entries
++	 * otherwise its size would overflow u64.
++	 */
++	if (start == 0 && last == ULONG_MAX) {
++		u64 mid = last / 2;
++
++		vhost_iotlb_add_range_ctx(iotlb, start, mid, addr, perm, opaque);
++		addr += mid + 1;
++		start = mid + 1;
++	}
++
+ 	if (iotlb->limit &&
+ 	    iotlb->nmaps == iotlb->limit &&
+ 	    iotlb->flags & VHOST_IOTLB_FLAG_RETIRE) {
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 59edb5a1ffe28..6942472cffb0f 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -1170,6 +1170,13 @@ ssize_t vhost_chr_write_iter(struct vhost_dev *dev,
+ 		goto done;
+ 	}
+ 
++	if ((msg.type == VHOST_IOTLB_UPDATE ||
++	     msg.type == VHOST_IOTLB_INVALIDATE) &&
++	     msg.size == 0) {
++		ret = -EINVAL;
++		goto done;
++	}
++
+ 	if (dev->msg_handler)
+ 		ret = dev->msg_handler(dev, &msg);
+ 	else
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 236081afe9a2a..c2b733ef95b0d 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -166,14 +166,13 @@ void virtio_add_status(struct virtio_device *dev, unsigned int status)
+ }
+ EXPORT_SYMBOL_GPL(virtio_add_status);
+ 
+-int virtio_finalize_features(struct virtio_device *dev)
++/* Do some validation, then set FEATURES_OK */
++static int virtio_features_ok(struct virtio_device *dev)
+ {
+-	int ret = dev->config->finalize_features(dev);
+ 	unsigned status;
++	int ret;
+ 
+ 	might_sleep();
+-	if (ret)
+-		return ret;
+ 
+ 	ret = arch_has_restricted_virtio_memory_access();
+ 	if (ret) {
+@@ -202,7 +201,6 @@ int virtio_finalize_features(struct virtio_device *dev)
+ 	}
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(virtio_finalize_features);
+ 
+ static int virtio_dev_probe(struct device *_d)
+ {
+@@ -239,17 +237,6 @@ static int virtio_dev_probe(struct device *_d)
+ 		driver_features_legacy = driver_features;
+ 	}
+ 
+-	/*
+-	 * Some devices detect legacy solely via F_VERSION_1. Write
+-	 * F_VERSION_1 to force LE config space accesses before FEATURES_OK for
+-	 * these when needed.
+-	 */
+-	if (drv->validate && !virtio_legacy_is_little_endian()
+-			  && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) {
+-		dev->features = BIT_ULL(VIRTIO_F_VERSION_1);
+-		dev->config->finalize_features(dev);
+-	}
+-
+ 	if (device_features & (1ULL << VIRTIO_F_VERSION_1))
+ 		dev->features = driver_features & device_features;
+ 	else
+@@ -260,13 +247,26 @@ static int virtio_dev_probe(struct device *_d)
+ 		if (device_features & (1ULL << i))
+ 			__virtio_set_bit(dev, i);
+ 
++	err = dev->config->finalize_features(dev);
++	if (err)
++		goto err;
++
+ 	if (drv->validate) {
++		u64 features = dev->features;
++
+ 		err = drv->validate(dev);
+ 		if (err)
+ 			goto err;
++
++		/* Did validation change any features? Then write them again. */
++		if (features != dev->features) {
++			err = dev->config->finalize_features(dev);
++			if (err)
++				goto err;
++		}
+ 	}
+ 
+-	err = virtio_finalize_features(dev);
++	err = virtio_features_ok(dev);
+ 	if (err)
+ 		goto err;
+ 
+@@ -490,7 +490,11 @@ int virtio_device_restore(struct virtio_device *dev)
+ 	/* We have a driver! */
+ 	virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER);
+ 
+-	ret = virtio_finalize_features(dev);
++	ret = dev->config->finalize_features(dev);
++	if (ret)
++		goto err;
++
++	ret = virtio_features_ok(dev);
+ 	if (ret)
+ 		goto err;
+ 
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index b67c965725ea6..c4c8db2ffa843 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1508,7 +1508,6 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 		container_of(work, struct btrfs_fs_info, reclaim_bgs_work);
+ 	struct btrfs_block_group *bg;
+ 	struct btrfs_space_info *space_info;
+-	LIST_HEAD(again_list);
+ 
+ 	if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
+ 		return;
+@@ -1585,18 +1584,14 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 				div64_u64(zone_unusable * 100, bg->length));
+ 		trace_btrfs_reclaim_block_group(bg);
+ 		ret = btrfs_relocate_chunk(fs_info, bg->start);
+-		if (ret && ret != -EAGAIN)
++		if (ret)
+ 			btrfs_err(fs_info, "error relocating chunk %llu",
+ 				  bg->start);
+ 
+ next:
++		btrfs_put_block_group(bg);
+ 		spin_lock(&fs_info->unused_bgs_lock);
+-		if (ret == -EAGAIN && list_empty(&bg->bg_list))
+-			list_add_tail(&bg->bg_list, &again_list);
+-		else
+-			btrfs_put_block_group(bg);
+ 	}
+-	list_splice_tail(&again_list, &fs_info->reclaim_bgs);
+ 	spin_unlock(&fs_info->unused_bgs_lock);
+ 	mutex_unlock(&fs_info->reclaim_bgs_lock);
+ 	btrfs_exclop_finish(fs_info);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 35660791e084a..6987a347483e3 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1568,32 +1568,13 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ 							struct btrfs_path *p,
+ 							int write_lock_level)
+ {
+-	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct extent_buffer *b;
+ 	int root_lock = 0;
+ 	int level = 0;
+ 
+ 	if (p->search_commit_root) {
+-		/*
+-		 * The commit roots are read only so we always do read locks,
+-		 * and we always must hold the commit_root_sem when doing
+-		 * searches on them, the only exception is send where we don't
+-		 * want to block transaction commits for a long time, so
+-		 * we need to clone the commit root in order to avoid races
+-		 * with transaction commits that create a snapshot of one of
+-		 * the roots used by a send operation.
+-		 */
+-		if (p->need_commit_sem) {
+-			down_read(&fs_info->commit_root_sem);
+-			b = btrfs_clone_extent_buffer(root->commit_root);
+-			up_read(&fs_info->commit_root_sem);
+-			if (!b)
+-				return ERR_PTR(-ENOMEM);
+-
+-		} else {
+-			b = root->commit_root;
+-			atomic_inc(&b->refs);
+-		}
++		b = root->commit_root;
++		atomic_inc(&b->refs);
+ 		level = btrfs_header_level(b);
+ 		/*
+ 		 * Ensure that all callers have set skip_locking when
+@@ -1659,6 +1640,42 @@ out:
+ 	return b;
+ }
+ 
++/*
++ * Replace the extent buffer at the lowest level of the path with a cloned
++ * version. The purpose is to be able to use it safely, after releasing the
++ * commit root semaphore, even if relocation is happening in parallel, the
++ * transaction used for relocation is committed and the extent buffer is
++ * reallocated in the next transaction.
++ *
++ * This is used in a context where the caller does not prevent transaction
++ * commits from happening, either by holding a transaction handle or holding
++ * some lock, while it's doing searches through a commit root.
++ * At the moment it's only used for send operations.
++ */
++static int finish_need_commit_sem_search(struct btrfs_path *path)
++{
++	const int i = path->lowest_level;
++	const int slot = path->slots[i];
++	struct extent_buffer *lowest = path->nodes[i];
++	struct extent_buffer *clone;
++
++	ASSERT(path->need_commit_sem);
++
++	if (!lowest)
++		return 0;
++
++	lockdep_assert_held_read(&lowest->fs_info->commit_root_sem);
++
++	clone = btrfs_clone_extent_buffer(lowest);
++	if (!clone)
++		return -ENOMEM;
++
++	btrfs_release_path(path);
++	path->nodes[i] = clone;
++	path->slots[i] = slot;
++
++	return 0;
++}
+ 
+ /*
+  * btrfs_search_slot - look for a key in a tree and perform necessary
+@@ -1695,6 +1712,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 		      const struct btrfs_key *key, struct btrfs_path *p,
+ 		      int ins_len, int cow)
+ {
++	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct extent_buffer *b;
+ 	int slot;
+ 	int ret;
+@@ -1736,6 +1754,11 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 
+ 	min_write_lock_level = write_lock_level;
+ 
++	if (p->need_commit_sem) {
++		ASSERT(p->search_commit_root);
++		down_read(&fs_info->commit_root_sem);
++	}
++
+ again:
+ 	prev_cmp = -1;
+ 	b = btrfs_search_slot_get_root(root, p, write_lock_level);
+@@ -1930,6 +1953,16 @@ cow_done:
+ done:
+ 	if (ret < 0 && !p->skip_release_on_error)
+ 		btrfs_release_path(p);
++
++	if (p->need_commit_sem) {
++		int ret2;
++
++		ret2 = finish_need_commit_sem_search(p);
++		up_read(&fs_info->commit_root_sem);
++		if (ret2)
++			ret = ret2;
++	}
++
+ 	return ret;
+ }
+ ALLOW_ERROR_INJECTION(btrfs_search_slot, ERRNO);
+@@ -4414,7 +4447,9 @@ int btrfs_next_old_leaf(struct btrfs_root *root, struct btrfs_path *path,
+ 	int level;
+ 	struct extent_buffer *c;
+ 	struct extent_buffer *next;
++	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct btrfs_key key;
++	bool need_commit_sem = false;
+ 	u32 nritems;
+ 	int ret;
+ 	int i;
+@@ -4431,14 +4466,20 @@ again:
+ 
+ 	path->keep_locks = 1;
+ 
+-	if (time_seq)
++	if (time_seq) {
+ 		ret = btrfs_search_old_slot(root, &key, path, time_seq);
+-	else
++	} else {
++		if (path->need_commit_sem) {
++			path->need_commit_sem = 0;
++			need_commit_sem = true;
++			down_read(&fs_info->commit_root_sem);
++		}
+ 		ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
++	}
+ 	path->keep_locks = 0;
+ 
+ 	if (ret < 0)
+-		return ret;
++		goto done;
+ 
+ 	nritems = btrfs_header_nritems(path->nodes[0]);
+ 	/*
+@@ -4561,6 +4602,15 @@ again:
+ 	ret = 0;
+ done:
+ 	unlock_up(path, 0, 1, 0, NULL);
++	if (need_commit_sem) {
++		int ret2;
++
++		path->need_commit_sem = 1;
++		ret2 = finish_need_commit_sem_search(path);
++		up_read(&fs_info->commit_root_sem);
++		if (ret2)
++			ret = ret2;
++	}
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index fd77eca085b41..b18939863c29c 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -576,7 +576,6 @@ enum {
+ 	/*
+ 	 * Indicate that relocation of a chunk has started, it's set per chunk
+ 	 * and is toggled between chunks.
+-	 * Set, tested and cleared while holding fs_info::send_reloc_lock.
+ 	 */
+ 	BTRFS_FS_RELOC_RUNNING,
+ 
+@@ -676,6 +675,12 @@ struct btrfs_fs_info {
+ 
+ 	u64 generation;
+ 	u64 last_trans_committed;
++	/*
++	 * Generation of the last transaction used for block group relocation
++	 * since the filesystem was last mounted (or 0 if none happened yet).
++	 * Must be written and read while holding btrfs_fs_info::commit_root_sem.
++	 */
++	u64 last_reloc_trans;
+ 	u64 avg_delayed_ref_runtime;
+ 
+ 	/*
+@@ -1006,13 +1011,6 @@ struct btrfs_fs_info {
+ 
+ 	struct crypto_shash *csum_shash;
+ 
+-	spinlock_t send_reloc_lock;
+-	/*
+-	 * Number of send operations in progress.
+-	 * Updated while holding fs_info::send_reloc_lock.
+-	 */
+-	int send_in_progress;
+-
+ 	/* Type of exclusive operation running, protected by super_lock */
+ 	enum btrfs_exclusive_operation exclusive_operation;
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 15ffb237a489f..0980025331101 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2858,6 +2858,7 @@ static int __cold init_tree_roots(struct btrfs_fs_info *fs_info)
+ 		/* All successful */
+ 		fs_info->generation = generation;
+ 		fs_info->last_trans_committed = generation;
++		fs_info->last_reloc_trans = 0;
+ 
+ 		/* Always begin writing backup roots after the one being used */
+ 		if (backup_index < 0) {
+@@ -2993,9 +2994,6 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
+ 	spin_lock_init(&fs_info->swapfile_pins_lock);
+ 	fs_info->swapfile_pins = RB_ROOT;
+ 
+-	spin_lock_init(&fs_info->send_reloc_lock);
+-	fs_info->send_in_progress = 0;
+-
+ 	fs_info->bg_reclaim_threshold = BTRFS_DEFAULT_RECLAIM_THRESH;
+ 	INIT_WORK(&fs_info->reclaim_bgs_work, btrfs_reclaim_bgs_work);
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index f129e0718b110..be41c92ed837e 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3858,25 +3858,14 @@ out:
+  *   0             success
+  *   -EINPROGRESS  operation is already in progress, that's probably a bug
+  *   -ECANCELED    cancellation request was set before the operation started
+- *   -EAGAIN       can not start because there are ongoing send operations
+  */
+ static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
+ {
+-	spin_lock(&fs_info->send_reloc_lock);
+-	if (fs_info->send_in_progress) {
+-		btrfs_warn_rl(fs_info,
+-"cannot run relocation while send operations are in progress (%d in progress)",
+-			      fs_info->send_in_progress);
+-		spin_unlock(&fs_info->send_reloc_lock);
+-		return -EAGAIN;
+-	}
+ 	if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
+ 		/* This should not happen */
+-		spin_unlock(&fs_info->send_reloc_lock);
+ 		btrfs_err(fs_info, "reloc already running, cannot start");
+ 		return -EINPROGRESS;
+ 	}
+-	spin_unlock(&fs_info->send_reloc_lock);
+ 
+ 	if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
+ 		btrfs_info(fs_info, "chunk relocation canceled on start");
+@@ -3898,9 +3887,7 @@ static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
+ 	/* Requested after start, clear bit first so any waiters can continue */
+ 	if (atomic_read(&fs_info->reloc_cancel_req) > 0)
+ 		btrfs_info(fs_info, "chunk relocation canceled during operation");
+-	spin_lock(&fs_info->send_reloc_lock);
+ 	clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
+-	spin_unlock(&fs_info->send_reloc_lock);
+ 	atomic_set(&fs_info->reloc_cancel_req, 0);
+ }
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 7e1159474a4e6..ce72d29404ea9 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -24,6 +24,7 @@
+ #include "transaction.h"
+ #include "compression.h"
+ #include "xattr.h"
++#include "print-tree.h"
+ 
+ /*
+  * Maximum number of references an extent can have in order for us to attempt to
+@@ -97,6 +98,15 @@ struct send_ctx {
+ 	struct btrfs_path *right_path;
+ 	struct btrfs_key *cmp_key;
+ 
++	/*
++	 * Keep track of the generation of the last transaction that was used
++	 * for relocating a block group. This is periodically checked in order
++	 * to detect if a relocation happened since the last check, so that we
++	 * don't operate on stale extent buffers for nodes (level >= 1) or on
++	 * stale disk_bytenr values of file extent items.
++	 */
++	u64 last_reloc_trans;
++
+ 	/*
+ 	 * infos of the currently processed inode. In case of deleted inodes,
+ 	 * these are the values from the deleted inode.
+@@ -1427,6 +1437,26 @@ static int find_extent_clone(struct send_ctx *sctx,
+ 	if (ret < 0)
+ 		goto out;
+ 
++	down_read(&fs_info->commit_root_sem);
++	if (fs_info->last_reloc_trans > sctx->last_reloc_trans) {
++		/*
++		 * A transaction commit for a transaction in which block group
++		 * relocation was done just happened.
++		 * The disk_bytenr of the file extent item we processed is
++		 * possibly stale, referring to the extent's location before
++		 * relocation. So act as if we haven't found any clone sources
++		 * and fallback to write commands, which will read the correct
++		 * data from the new extent location. Otherwise we will fail
++		 * below because we haven't found our own back reference or we
++		 * could be getting incorrect sources in case the old extent
++		 * was already reallocated after the relocation.
++		 */
++		up_read(&fs_info->commit_root_sem);
++		ret = -ENOENT;
++		goto out;
++	}
++	up_read(&fs_info->commit_root_sem);
++
+ 	if (!backref_ctx.found_itself) {
+ 		/* found a bug in backref code? */
+ 		ret = -EIO;
+@@ -6601,6 +6631,50 @@ static int changed_cb(struct btrfs_path *left_path,
+ {
+ 	int ret = 0;
+ 
++	/*
++	 * We can not hold the commit root semaphore here. This is because in
++	 * the case of sending and receiving to the same filesystem, using a
++	 * pipe, could result in a deadlock:
++	 *
++	 * 1) The task running send blocks on the pipe because it's full;
++	 *
++	 * 2) The task running receive, which is the only consumer of the pipe,
++	 *    is waiting for a transaction commit (for example due to a space
++	 *    reservation when doing a write or triggering a transaction commit
++	 *    when creating a subvolume);
++	 *
++	 * 3) The transaction is waiting to write lock the commit root semaphore,
++	 *    but can not acquire it since it's being held at 1).
++	 *
++	 * Down this call chain we write to the pipe through kernel_write().
++	 * The same type of problem can also happen when sending to a file that
++	 * is stored in the same filesystem - when reserving space for a write
++	 * into the file, we can trigger a transaction commit.
++	 *
++	 * Our caller has supplied us with clones of leaves from the send and
++	 * parent roots, so we're safe here from a concurrent relocation and
++	 * further reallocation of metadata extents while we are here. Below we
++	 * also assert that the leaves are clones.
++	 */
++	lockdep_assert_not_held(&sctx->send_root->fs_info->commit_root_sem);
++
++	/*
++	 * We always have a send root, so left_path is never NULL. We will not
++	 * have a leaf when we have reached the end of the send root but have
++	 * not yet reached the end of the parent root.
++	 */
++	if (left_path->nodes[0])
++		ASSERT(test_bit(EXTENT_BUFFER_UNMAPPED,
++				&left_path->nodes[0]->bflags));
++	/*
++	 * When doing a full send we don't have a parent root, so right_path is
++	 * NULL. When doing an incremental send, we may have reached the end of
++	 * the parent root already, so we don't have a leaf at right_path.
++	 */
++	if (right_path && right_path->nodes[0])
++		ASSERT(test_bit(EXTENT_BUFFER_UNMAPPED,
++				&right_path->nodes[0]->bflags));
++
+ 	if (result == BTRFS_COMPARE_TREE_SAME) {
+ 		if (key->type == BTRFS_INODE_REF_KEY ||
+ 		    key->type == BTRFS_INODE_EXTREF_KEY) {
+@@ -6647,14 +6721,46 @@ out:
+ 	return ret;
+ }
+ 
++static int search_key_again(const struct send_ctx *sctx,
++			    struct btrfs_root *root,
++			    struct btrfs_path *path,
++			    const struct btrfs_key *key)
++{
++	int ret;
++
++	if (!path->need_commit_sem)
++		lockdep_assert_held_read(&root->fs_info->commit_root_sem);
++
++	/*
++	 * Roots used for send operations are readonly and no one can add,
++	 * update or remove keys from them, so we should be able to find our
++	 * key again. The only exception is deduplication, which can operate on
++	 * readonly roots and add, update or remove keys to/from them - but at
++	 * the moment we don't allow it to run in parallel with send.
++	 */
++	ret = btrfs_search_slot(NULL, root, key, path, 0, 0);
++	ASSERT(ret <= 0);
++	if (ret > 0) {
++		btrfs_print_tree(path->nodes[path->lowest_level], false);
++		btrfs_err(root->fs_info,
++"send: key (%llu %u %llu) not found in %s root %llu, lowest_level %d, slot %d",
++			  key->objectid, key->type, key->offset,
++			  (root == sctx->parent_root ? "parent" : "send"),
++			  root->root_key.objectid, path->lowest_level,
++			  path->slots[path->lowest_level]);
++		return -EUCLEAN;
++	}
++
++	return ret;
++}
++
+ static int full_send_tree(struct send_ctx *sctx)
+ {
+ 	int ret;
+ 	struct btrfs_root *send_root = sctx->send_root;
+ 	struct btrfs_key key;
++	struct btrfs_fs_info *fs_info = send_root->fs_info;
+ 	struct btrfs_path *path;
+-	struct extent_buffer *eb;
+-	int slot;
+ 
+ 	path = alloc_path_for_send();
+ 	if (!path)
+@@ -6665,6 +6771,10 @@ static int full_send_tree(struct send_ctx *sctx)
+ 	key.type = BTRFS_INODE_ITEM_KEY;
+ 	key.offset = 0;
+ 
++	down_read(&fs_info->commit_root_sem);
++	sctx->last_reloc_trans = fs_info->last_reloc_trans;
++	up_read(&fs_info->commit_root_sem);
++
+ 	ret = btrfs_search_slot_for_read(send_root, &key, path, 1, 0);
+ 	if (ret < 0)
+ 		goto out;
+@@ -6672,15 +6782,35 @@ static int full_send_tree(struct send_ctx *sctx)
+ 		goto out_finish;
+ 
+ 	while (1) {
+-		eb = path->nodes[0];
+-		slot = path->slots[0];
+-		btrfs_item_key_to_cpu(eb, &key, slot);
++		btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
+ 
+ 		ret = changed_cb(path, NULL, &key,
+ 				 BTRFS_COMPARE_TREE_NEW, sctx);
+ 		if (ret < 0)
+ 			goto out;
+ 
++		down_read(&fs_info->commit_root_sem);
++		if (fs_info->last_reloc_trans > sctx->last_reloc_trans) {
++			sctx->last_reloc_trans = fs_info->last_reloc_trans;
++			up_read(&fs_info->commit_root_sem);
++			/*
++			 * A transaction used for relocating a block group was
++			 * committed or is about to finish its commit. Release
++			 * our path (leaf) and restart the search, so that we
++			 * avoid operating on any file extent items that are
++			 * stale, with a disk_bytenr that reflects a pre
++			 * relocation value. This way we avoid, as much as
++			 * possible, falling back to regular writes when
++			 * checking if we can clone file ranges.
++			 */
++			btrfs_release_path(path);
++			ret = search_key_again(sctx, send_root, path, &key);
++			if (ret < 0)
++				goto out;
++		} else {
++			up_read(&fs_info->commit_root_sem);
++		}
++
+ 		ret = btrfs_next_item(send_root, path);
+ 		if (ret < 0)
+ 			goto out;
+@@ -6698,6 +6828,20 @@ out:
+ 	return ret;
+ }
+ 
++static int replace_node_with_clone(struct btrfs_path *path, int level)
++{
++	struct extent_buffer *clone;
++
++	clone = btrfs_clone_extent_buffer(path->nodes[level]);
++	if (!clone)
++		return -ENOMEM;
++
++	free_extent_buffer(path->nodes[level]);
++	path->nodes[level] = clone;
++
++	return 0;
++}
++
+ static int tree_move_down(struct btrfs_path *path, int *level, u64 reada_min_gen)
+ {
+ 	struct extent_buffer *eb;
+@@ -6707,6 +6851,8 @@ static int tree_move_down(struct btrfs_path *path, int *level, u64 reada_min_gen
+ 	u64 reada_max;
+ 	u64 reada_done = 0;
+ 
++	lockdep_assert_held_read(&parent->fs_info->commit_root_sem);
++
+ 	BUG_ON(*level == 0);
+ 	eb = btrfs_read_node_slot(parent, slot);
+ 	if (IS_ERR(eb))
+@@ -6730,6 +6876,10 @@ static int tree_move_down(struct btrfs_path *path, int *level, u64 reada_min_gen
+ 	path->nodes[*level - 1] = eb;
+ 	path->slots[*level - 1] = 0;
+ 	(*level)--;
++
++	if (*level == 0)
++		return replace_node_with_clone(path, 0);
++
+ 	return 0;
+ }
+ 
+@@ -6743,8 +6893,10 @@ static int tree_move_next_or_upnext(struct btrfs_path *path,
+ 	path->slots[*level]++;
+ 
+ 	while (path->slots[*level] >= nritems) {
+-		if (*level == root_level)
++		if (*level == root_level) {
++			path->slots[*level] = nritems - 1;
+ 			return -1;
++		}
+ 
+ 		/* move upnext */
+ 		path->slots[*level] = 0;
+@@ -6776,14 +6928,20 @@ static int tree_advance(struct btrfs_path *path,
+ 	} else {
+ 		ret = tree_move_down(path, level, reada_min_gen);
+ 	}
+-	if (ret >= 0) {
+-		if (*level == 0)
+-			btrfs_item_key_to_cpu(path->nodes[*level], key,
+-					path->slots[*level]);
+-		else
+-			btrfs_node_key_to_cpu(path->nodes[*level], key,
+-					path->slots[*level]);
+-	}
++
++	/*
++	 * Even if we have reached the end of a tree (ret is -1), update the
++	 * key anyway, so that in case we need to restart due to a block group
++	 * relocation, we can assert that the last key of the root node still
++	 * exists in the tree.
++	 */
++	if (*level == 0)
++		btrfs_item_key_to_cpu(path->nodes[*level], key,
++				      path->slots[*level]);
++	else
++		btrfs_node_key_to_cpu(path->nodes[*level], key,
++				      path->slots[*level]);
++
+ 	return ret;
+ }
+ 
+@@ -6812,6 +6970,97 @@ static int tree_compare_item(struct btrfs_path *left_path,
+ 	return 0;
+ }
+ 
++/*
++ * A transaction used for relocating a block group was committed or is about to
++ * finish its commit. Release our paths and restart the search, so that we are
++ * not using stale extent buffers:
++ *
++ * 1) For levels > 0, we are only holding references of extent buffers, without
++ *    any locks on them, which does not prevent them from having been relocated
++ *    and reallocated after the last time we released the commit root semaphore.
++ *    The exception are the root nodes, for which we always have a clone, see
++ *    the comment at btrfs_compare_trees();
++ *
++ * 2) For leaves, level 0, we are holding copies (clones) of extent buffers, so
++ *    we are safe from the concurrent relocation and reallocation. However they
++ *    can have file extent items with a pre relocation disk_bytenr value, so we
++ *    restart the search from the current commit roots and clone the new leaves so
++ *    that we get the post relocation disk_bytenr values. Not doing so, could
++ *    make us clone the wrong data in case there are new extents using the old
++ *    disk_bytenr that happen to be shared.
++ */
++static int restart_after_relocation(struct btrfs_path *left_path,
++				    struct btrfs_path *right_path,
++				    const struct btrfs_key *left_key,
++				    const struct btrfs_key *right_key,
++				    int left_level,
++				    int right_level,
++				    const struct send_ctx *sctx)
++{
++	int root_level;
++	int ret;
++
++	lockdep_assert_held_read(&sctx->send_root->fs_info->commit_root_sem);
++
++	btrfs_release_path(left_path);
++	btrfs_release_path(right_path);
++
++	/*
++	 * Since keys can not be added or removed to/from our roots because they
++	 * are readonly and we do not allow deduplication to run in parallel
++	 * (which can add, remove or change keys), the layout of the trees should
++	 * not change.
++	 */
++	left_path->lowest_level = left_level;
++	ret = search_key_again(sctx, sctx->send_root, left_path, left_key);
++	if (ret < 0)
++		return ret;
++
++	right_path->lowest_level = right_level;
++	ret = search_key_again(sctx, sctx->parent_root, right_path, right_key);
++	if (ret < 0)
++		return ret;
++
++	/*
++	 * If the lowest level nodes are leaves, clone them so that they can be
++	 * safely used by changed_cb() while not under the protection of the
++	 * commit root semaphore, even if relocation and reallocation happens in
++	 * parallel.
++	 */
++	if (left_level == 0) {
++		ret = replace_node_with_clone(left_path, 0);
++		if (ret < 0)
++			return ret;
++	}
++
++	if (right_level == 0) {
++		ret = replace_node_with_clone(right_path, 0);
++		if (ret < 0)
++			return ret;
++	}
++
++	/*
++	 * Now clone the root nodes (unless they happen to be the leaves we have
++	 * already cloned). This is to protect against concurrent snapshotting of
++	 * the send and parent roots (see the comment at btrfs_compare_trees()).
++	 */
++	root_level = btrfs_header_level(sctx->send_root->commit_root);
++	if (root_level > 0) {
++		ret = replace_node_with_clone(left_path, root_level);
++		if (ret < 0)
++			return ret;
++	}
++
++	root_level = btrfs_header_level(sctx->parent_root->commit_root);
++	if (root_level > 0) {
++		ret = replace_node_with_clone(right_path, root_level);
++		if (ret < 0)
++			return ret;
++	}
++
++	return 0;
++}
++
+ /*
+  * This function compares two trees and calls the provided callback for
+  * every changed/new/deleted item it finds.
+@@ -6840,10 +7089,10 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 	int right_root_level;
+ 	int left_level;
+ 	int right_level;
+-	int left_end_reached;
+-	int right_end_reached;
+-	int advance_left;
+-	int advance_right;
++	int left_end_reached = 0;
++	int right_end_reached = 0;
++	int advance_left = 0;
++	int advance_right = 0;
+ 	u64 left_blockptr;
+ 	u64 right_blockptr;
+ 	u64 left_gen;
+@@ -6911,12 +7160,18 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 	down_read(&fs_info->commit_root_sem);
+ 	left_level = btrfs_header_level(left_root->commit_root);
+ 	left_root_level = left_level;
++	/*
++	 * We clone the root node of the send and parent roots to prevent races
++	 * with snapshot creation of these roots. Snapshot creation COWs the
++	 * root node of a tree, so after the transaction is committed the old
++	 * extent can be reallocated while this send operation is still ongoing.
++	 * So we clone them, under the commit root semaphore, to be race free.
++	 */
+ 	left_path->nodes[left_level] =
+ 			btrfs_clone_extent_buffer(left_root->commit_root);
+ 	if (!left_path->nodes[left_level]) {
+-		up_read(&fs_info->commit_root_sem);
+ 		ret = -ENOMEM;
+-		goto out;
++		goto out_unlock;
+ 	}
+ 
+ 	right_level = btrfs_header_level(right_root->commit_root);
+@@ -6924,9 +7179,8 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 	right_path->nodes[right_level] =
+ 			btrfs_clone_extent_buffer(right_root->commit_root);
+ 	if (!right_path->nodes[right_level]) {
+-		up_read(&fs_info->commit_root_sem);
+ 		ret = -ENOMEM;
+-		goto out;
++		goto out_unlock;
+ 	}
+ 	/*
+ 	 * Our right root is the parent root, while the left root is the "send"
+@@ -6936,7 +7190,6 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 	 * will need to read them at some point.
+ 	 */
+ 	reada_min_gen = btrfs_header_generation(right_root->commit_root);
+-	up_read(&fs_info->commit_root_sem);
+ 
+ 	if (left_level == 0)
+ 		btrfs_item_key_to_cpu(left_path->nodes[left_level],
+@@ -6951,11 +7204,26 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 		btrfs_node_key_to_cpu(right_path->nodes[right_level],
+ 				&right_key, right_path->slots[right_level]);
+ 
+-	left_end_reached = right_end_reached = 0;
+-	advance_left = advance_right = 0;
++	sctx->last_reloc_trans = fs_info->last_reloc_trans;
+ 
+ 	while (1) {
+-		cond_resched();
++		if (need_resched() ||
++		    rwsem_is_contended(&fs_info->commit_root_sem)) {
++			up_read(&fs_info->commit_root_sem);
++			cond_resched();
++			down_read(&fs_info->commit_root_sem);
++		}
++
++		if (fs_info->last_reloc_trans > sctx->last_reloc_trans) {
++			ret = restart_after_relocation(left_path, right_path,
++						       &left_key, &right_key,
++						       left_level, right_level,
++						       sctx);
++			if (ret < 0)
++				goto out_unlock;
++			sctx->last_reloc_trans = fs_info->last_reloc_trans;
++		}
++
+ 		if (advance_left && !left_end_reached) {
+ 			ret = tree_advance(left_path, &left_level,
+ 					left_root_level,
+@@ -6964,7 +7232,7 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 			if (ret == -1)
+ 				left_end_reached = ADVANCE;
+ 			else if (ret < 0)
+-				goto out;
++				goto out_unlock;
+ 			advance_left = 0;
+ 		}
+ 		if (advance_right && !right_end_reached) {
+@@ -6975,54 +7243,55 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 			if (ret == -1)
+ 				right_end_reached = ADVANCE;
+ 			else if (ret < 0)
+-				goto out;
++				goto out_unlock;
+ 			advance_right = 0;
+ 		}
+ 
+ 		if (left_end_reached && right_end_reached) {
+ 			ret = 0;
+-			goto out;
++			goto out_unlock;
+ 		} else if (left_end_reached) {
+ 			if (right_level == 0) {
++				up_read(&fs_info->commit_root_sem);
+ 				ret = changed_cb(left_path, right_path,
+ 						&right_key,
+ 						BTRFS_COMPARE_TREE_DELETED,
+ 						sctx);
+ 				if (ret < 0)
+ 					goto out;
++				down_read(&fs_info->commit_root_sem);
+ 			}
+ 			advance_right = ADVANCE;
+ 			continue;
+ 		} else if (right_end_reached) {
+ 			if (left_level == 0) {
++				up_read(&fs_info->commit_root_sem);
+ 				ret = changed_cb(left_path, right_path,
+ 						&left_key,
+ 						BTRFS_COMPARE_TREE_NEW,
+ 						sctx);
+ 				if (ret < 0)
+ 					goto out;
++				down_read(&fs_info->commit_root_sem);
+ 			}
+ 			advance_left = ADVANCE;
+ 			continue;
+ 		}
+ 
+ 		if (left_level == 0 && right_level == 0) {
++			up_read(&fs_info->commit_root_sem);
+ 			cmp = btrfs_comp_cpu_keys(&left_key, &right_key);
+ 			if (cmp < 0) {
+ 				ret = changed_cb(left_path, right_path,
+ 						&left_key,
+ 						BTRFS_COMPARE_TREE_NEW,
+ 						sctx);
+-				if (ret < 0)
+-					goto out;
+ 				advance_left = ADVANCE;
+ 			} else if (cmp > 0) {
+ 				ret = changed_cb(left_path, right_path,
+ 						&right_key,
+ 						BTRFS_COMPARE_TREE_DELETED,
+ 						sctx);
+-				if (ret < 0)
+-					goto out;
+ 				advance_right = ADVANCE;
+ 			} else {
+ 				enum btrfs_compare_tree_result result;
+@@ -7036,11 +7305,13 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 					result = BTRFS_COMPARE_TREE_SAME;
+ 				ret = changed_cb(left_path, right_path,
+ 						 &left_key, result, sctx);
+-				if (ret < 0)
+-					goto out;
+ 				advance_left = ADVANCE;
+ 				advance_right = ADVANCE;
+ 			}
++
++			if (ret < 0)
++				goto out;
++			down_read(&fs_info->commit_root_sem);
+ 		} else if (left_level == right_level) {
+ 			cmp = btrfs_comp_cpu_keys(&left_key, &right_key);
+ 			if (cmp < 0) {
+@@ -7080,6 +7351,8 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
+ 		}
+ 	}
+ 
++out_unlock:
++	up_read(&fs_info->commit_root_sem);
+ out:
+ 	btrfs_free_path(left_path);
+ 	btrfs_free_path(right_path);
+@@ -7429,21 +7702,7 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 	if (ret)
+ 		goto out;
+ 
+-	spin_lock(&fs_info->send_reloc_lock);
+-	if (test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
+-		spin_unlock(&fs_info->send_reloc_lock);
+-		btrfs_warn_rl(fs_info,
+-		"cannot run send because a relocation operation is in progress");
+-		ret = -EAGAIN;
+-		goto out;
+-	}
+-	fs_info->send_in_progress++;
+-	spin_unlock(&fs_info->send_reloc_lock);
+-
+ 	ret = send_subvol(sctx);
+-	spin_lock(&fs_info->send_reloc_lock);
+-	fs_info->send_in_progress--;
+-	spin_unlock(&fs_info->send_reloc_lock);
+ 	if (ret < 0)
+ 		goto out;
+ 
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index a43c101c5525c..23cb3642c6675 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -163,6 +163,10 @@ static noinline void switch_commit_roots(struct btrfs_trans_handle *trans)
+ 	struct btrfs_caching_control *caching_ctl, *next;
+ 
+ 	down_write(&fs_info->commit_root_sem);
++
++	if (test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags))
++		fs_info->last_reloc_trans = trans->transid;
++
+ 	list_for_each_entry_safe(root, tmp, &cur_trans->switch_commits,
+ 				 dirty_list) {
+ 		list_del_init(&root->dirty_list);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index cd54a529460da..592730fd6e424 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -941,7 +941,17 @@ static int fuse_copy_page(struct fuse_copy_state *cs, struct page **pagep,
+ 
+ 	while (count) {
+ 		if (cs->write && cs->pipebufs && page) {
+-			return fuse_ref_page(cs, page, offset, count);
++			/*
++			 * Can't control lifetime of pipe buffers, so always
++			 * copy user pages.
++			 */
++			if (cs->req->args->user_pages) {
++				err = fuse_copy_fill(cs);
++				if (err)
++					return err;
++			} else {
++				return fuse_ref_page(cs, page, offset, count);
++			}
+ 		} else if (!cs->len) {
+ 			if (cs->move_pages && page &&
+ 			    offset == 0 && count == PAGE_SIZE) {
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index df81768c81a73..032b0b2cc8d99 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1413,6 +1413,7 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ 			(PAGE_SIZE - ret) & (PAGE_SIZE - 1);
+ 	}
+ 
++	ap->args.user_pages = true;
+ 	if (write)
+ 		ap->args.in_pages = true;
+ 	else
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 198637b41e194..62fc7e2f94c81 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -256,6 +256,7 @@ struct fuse_args {
+ 	bool nocreds:1;
+ 	bool in_pages:1;
+ 	bool out_pages:1;
++	bool user_pages:1;
+ 	bool out_argvar:1;
+ 	bool page_zeroing:1;
+ 	bool page_replace:1;
+diff --git a/fs/fuse/ioctl.c b/fs/fuse/ioctl.c
+index fbc09dab1f851..df58966bc8745 100644
+--- a/fs/fuse/ioctl.c
++++ b/fs/fuse/ioctl.c
+@@ -394,9 +394,12 @@ static int fuse_priv_ioctl(struct inode *inode, struct fuse_file *ff,
+ 	args.out_args[1].value = ptr;
+ 
+ 	err = fuse_simple_request(fm, &args);
+-	if (!err && outarg.flags & FUSE_IOCTL_RETRY)
+-		err = -EIO;
+-
++	if (!err) {
++		if (outarg.result < 0)
++			err = outarg.result;
++		else if (outarg.flags & FUSE_IOCTL_RETRY)
++			err = -EIO;
++	}
+ 	return err;
+ }
+ 
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 6d4342bad9f15..751d5b36c84bb 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -252,7 +252,8 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ 	 */
+ 	was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage);
+ 	for (;;) {
+-		unsigned int head = pipe->head;
++		/* Read ->head with a barrier vs post_one_notification() */
++		unsigned int head = smp_load_acquire(&pipe->head);
+ 		unsigned int tail = pipe->tail;
+ 		unsigned int mask = pipe->ring_size - 1;
+ 
+@@ -830,10 +831,8 @@ void free_pipe_info(struct pipe_inode_info *pipe)
+ 	int i;
+ 
+ #ifdef CONFIG_WATCH_QUEUE
+-	if (pipe->watch_queue) {
++	if (pipe->watch_queue)
+ 		watch_queue_clear(pipe->watch_queue);
+-		put_watch_queue(pipe->watch_queue);
+-	}
+ #endif
+ 
+ 	(void) account_pipe_buffers(pipe->user, pipe->nr_accounted, 0);
+@@ -843,6 +842,10 @@ void free_pipe_info(struct pipe_inode_info *pipe)
+ 		if (buf->ops)
+ 			pipe_buf_release(pipe, buf);
+ 	}
++#ifdef CONFIG_WATCH_QUEUE
++	if (pipe->watch_queue)
++		put_watch_queue(pipe->watch_queue);
++#endif
+ 	if (pipe->tmp_page)
+ 		__free_page(pipe->tmp_page);
+ 	kfree(pipe->bufs);
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index fbaab440a4846..66522bc56a0b1 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -3410,7 +3410,6 @@ enum {
+ enum {
+ 	MLX5_TIRC_PACKET_MERGE_MASK_IPV4_LRO  = BIT(0),
+ 	MLX5_TIRC_PACKET_MERGE_MASK_IPV6_LRO  = BIT(1),
+-	MLX5_TIRC_PACKET_MERGE_MASK_SHAMPO    = BIT(2),
+ };
+ 
+ enum {
+@@ -9875,8 +9874,8 @@ struct mlx5_ifc_bufferx_reg_bits {
+ 	u8         reserved_at_0[0x6];
+ 	u8         lossy[0x1];
+ 	u8         epsb[0x1];
+-	u8         reserved_at_8[0xc];
+-	u8         size[0xc];
++	u8         reserved_at_8[0x8];
++	u8         size[0x10];
+ 
+ 	u8         xoff_threshold[0x10];
+ 	u8         xon_threshold[0x10];
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 049858c671efa..7500ac08c9ba2 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3007,7 +3007,6 @@ struct net_device *dev_get_by_napi_id(unsigned int napi_id);
+ int netdev_get_name(struct net *net, char *name, int ifindex);
+ int dev_restart(struct net_device *dev);
+ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb);
+-int skb_gro_receive_list(struct sk_buff *p, struct sk_buff *skb);
+ 
+ static inline unsigned int skb_gro_offset(const struct sk_buff *skb)
+ {
+diff --git a/include/linux/nvme-tcp.h b/include/linux/nvme-tcp.h
+index 959e0bd9a913e..75470159a194d 100644
+--- a/include/linux/nvme-tcp.h
++++ b/include/linux/nvme-tcp.h
+@@ -12,6 +12,7 @@
+ #define NVME_TCP_DISC_PORT	8009
+ #define NVME_TCP_ADMIN_CCSZ	SZ_8K
+ #define NVME_TCP_DIGEST_LENGTH	4
++#define NVME_TCP_MIN_MAXH2CDATA 4096
+ 
+ enum nvme_tcp_pfv {
+ 	NVME_TCP_PFV_1_0 = 0x0,
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index 41edbc01ffa40..1af8d65d4c8f7 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -133,7 +133,6 @@ bool is_virtio_device(struct device *dev);
+ void virtio_break_device(struct virtio_device *dev);
+ 
+ void virtio_config_changed(struct virtio_device *dev);
+-int virtio_finalize_features(struct virtio_device *dev);
+ #ifdef CONFIG_PM_SLEEP
+ int virtio_device_freeze(struct virtio_device *dev);
+ int virtio_device_restore(struct virtio_device *dev);
+diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
+index 4d107ad311490..dafdc7f48c01b 100644
+--- a/include/linux/virtio_config.h
++++ b/include/linux/virtio_config.h
+@@ -64,8 +64,9 @@ struct virtio_shm_region {
+  *	Returns the first 64 feature bits (all we currently need).
+  * @finalize_features: confirm what device features we'll be using.
+  *	vdev: the virtio_device
+- *	This gives the final feature bits for the device: it can change
++ *	This sends the driver feature bits to the device: it can change
+  *	the dev->feature bits if it wants.
++ * Note: despite the name this can be called any number of times.
+  *	Returns 0 on success or error status
+  * @bus_name: return the bus name associated with the device (optional)
+  *	vdev: the virtio_device
+diff --git a/include/linux/watch_queue.h b/include/linux/watch_queue.h
+index c994d1b2cdbaa..3b9a40ae8bdba 100644
+--- a/include/linux/watch_queue.h
++++ b/include/linux/watch_queue.h
+@@ -28,7 +28,8 @@ struct watch_type_filter {
+ struct watch_filter {
+ 	union {
+ 		struct rcu_head	rcu;
+-		unsigned long	type_filter[2];	/* Bitmask of accepted types */
++		/* Bitmask of accepted types */
++		DECLARE_BITMAP(type_filter, WATCH_TYPE__NR);
+ 	};
+ 	u32			nr_filters;	/* Number of filters */
+ 	struct watch_type_filter filters[];
+diff --git a/include/net/esp.h b/include/net/esp.h
+index 9c5637d41d951..90cd02ff77ef6 100644
+--- a/include/net/esp.h
++++ b/include/net/esp.h
+@@ -4,6 +4,8 @@
+ 
+ #include <linux/skbuff.h>
+ 
++#define ESP_SKB_FRAG_MAXSIZE (PAGE_SIZE << SKB_FRAG_PAGE_ORDER)
++
+ struct ip_esp_hdr;
+ 
+ static inline struct ip_esp_hdr *ip_esp_hdr(const struct sk_buff *skb)
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 8e840fbbed7c7..e7f138c7636cb 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -581,9 +581,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
+ 	for (i = 0; i < nr_slots(alloc_size + offset); i++)
+ 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
+ 	tlb_addr = slot_addr(mem->start, index) + offset;
+-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+-		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
++	/*
++	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
++	 * to the tlb buffer, if we knew for sure the device will
++	 * overwrite the entire current content. But we don't. Thus
++	 * unconditional bounce may prevent leaking swiotlb content (i.e.
++	 * kernel memory) to user-space.
++	 */
++	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+ 	return tlb_addr;
+ }
+ 
+@@ -650,10 +655,13 @@ void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+ void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+ 		size_t size, enum dma_data_direction dir)
+ {
+-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+-		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+-	else
+-		BUG_ON(dir != DMA_FROM_DEVICE);
++	/*
++	 * Unconditional bounce is necessary to avoid corruption on
++	 * sync_*_for_cpu or dma_unmap_* when the device didn't overwrite
++	 * the whole length of the bounce buffer.
++	 */
++	swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
++	BUG_ON(!valid_dma_direction(dir));
+ }
+ 
+ void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 24683115eade2..5816ad79cce85 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1472,10 +1472,12 @@ static int __init set_buf_size(char *str)
+ 	if (!str)
+ 		return 0;
+ 	buf_size = memparse(str, &str);
+-	/* nr_entries can not be zero */
+-	if (buf_size == 0)
+-		return 0;
+-	trace_buf_size = buf_size;
++	/*
++	 * nr_entries can not be zero and the startup
++	 * tests require some buffer space. Therefore
++	 * ensure we have at least 4096 bytes of buffer.
++	 */
++	trace_buf_size = max(4096UL, buf_size);
+ 	return 1;
+ }
+ __setup("trace_buf_size=", set_buf_size);
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index b58674e8644a6..b6da7bd2383d5 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1387,6 +1387,26 @@ static int run_osnoise(void)
+ 					osnoise_stop_tracing();
+ 		}
+ 
++		/*
++		 * In some cases, notably when running on a nohz_full CPU with
++		 * a stopped tick PREEMPT_RCU has no way to account for QSs.
++		 * This will eventually cause unwarranted noise as PREEMPT_RCU
++		 * will force preemption as the means of ending the current
++		 * grace period. We avoid this problem by calling
++		 * rcu_momentary_dyntick_idle(), which performs a zero duration
++		 * EQS allowing PREEMPT_RCU to end the current grace period.
++		 * This call shouldn't be wrapped inside an RCU critical
++		 * section.
++		 *
++		 * Note that in non PREEMPT_RCU kernels QSs are handled through
++		 * cond_resched()
++		 */
++		if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++			local_irq_disable();
++			rcu_momentary_dyntick_idle();
++			local_irq_enable();
++		}
++
+ 		/*
+ 		 * For the non-preemptive kernel config: let threads runs, if
+ 		 * they so wish.
+@@ -1437,6 +1457,37 @@ out:
+ static struct cpumask osnoise_cpumask;
+ static struct cpumask save_cpumask;
+ 
++/*
++ * osnoise_sleep - sleep until the next period
++ */
++static void osnoise_sleep(void)
++{
++	u64 interval;
++	ktime_t wake_time;
++
++	mutex_lock(&interface_lock);
++	interval = osnoise_data.sample_period - osnoise_data.sample_runtime;
++	mutex_unlock(&interface_lock);
++
++	/*
++	 * differently from hwlat_detector, the osnoise tracer can run
++	 * without a pause because preemption is on.
++	 */
++	if (!interval) {
++		/* Let synchronize_rcu_tasks() make progress */
++		cond_resched_tasks_rcu_qs();
++		return;
++	}
++
++	wake_time = ktime_add_us(ktime_get(), interval);
++	__set_current_state(TASK_INTERRUPTIBLE);
++
++	while (schedule_hrtimeout_range(&wake_time, 0, HRTIMER_MODE_ABS)) {
++		if (kthread_should_stop())
++			break;
++	}
++}
++
+ /*
+  * osnoise_main - The osnoise detection kernel thread
+  *
+@@ -1445,30 +1496,10 @@ static struct cpumask save_cpumask;
+  */
+ static int osnoise_main(void *data)
+ {
+-	u64 interval;
+ 
+ 	while (!kthread_should_stop()) {
+-
+ 		run_osnoise();
+-
+-		mutex_lock(&interface_lock);
+-		interval = osnoise_data.sample_period - osnoise_data.sample_runtime;
+-		mutex_unlock(&interface_lock);
+-
+-		do_div(interval, USEC_PER_MSEC);
+-
+-		/*
+-		 * differently from hwlat_detector, the osnoise tracer can run
+-		 * without a pause because preemption is on.
+-		 */
+-		if (interval < 1) {
+-			/* Let synchronize_rcu_tasks() make progress */
+-			cond_resched_tasks_rcu_qs();
+-			continue;
+-		}
+-
+-		if (msleep_interruptible(interval))
+-			break;
++		osnoise_sleep();
+ 	}
+ 
+ 	return 0;
+@@ -2191,6 +2222,17 @@ static void osnoise_workload_stop(void)
+ 	if (osnoise_has_registered_instances())
+ 		return;
+ 
++	/*
++	 * If callbacks were already disabled in a previous stop
++	 * call, there is no need to disable them again.
++	 *
++	 * For instance, this happens when tracing is stopped via:
++	 * echo 0 > tracing_on
++	 * echo nop > current_tracer.
++	 */
++	if (!trace_osnoise_callback_enabled)
++		return;
++
+ 	trace_osnoise_callback_enabled = false;
+ 	/*
+ 	 * Make sure that ftrace_nmi_enter/exit() see
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index afd937a46496e..abcadbe933bb7 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -784,9 +784,7 @@ static struct fgraph_ops fgraph_ops __initdata  = {
+ 	.retfunc		= &trace_graph_return,
+ };
+ 
+-#if defined(CONFIG_DYNAMIC_FTRACE) && \
+-    defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS)
+-#define TEST_DIRECT_TRAMP
++#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ noinline __noclone static void trace_direct_tramp(void) { }
+ #endif
+ 
+@@ -849,7 +847,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
+ 		goto out;
+ 	}
+ 
+-#ifdef TEST_DIRECT_TRAMP
++#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ 	tracing_reset_online_cpus(&tr->array_buffer);
+ 	set_graph_array(tr);
+ 
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index 9c9eb20dd2c50..055bc20ecdda5 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -54,6 +54,7 @@ static void watch_queue_pipe_buf_release(struct pipe_inode_info *pipe,
+ 	bit += page->index;
+ 
+ 	set_bit(bit, wqueue->notes_bitmap);
++	generic_pipe_buf_release(pipe, buf);
+ }
+ 
+ // No try_steal function => no stealing
+@@ -112,7 +113,7 @@ static bool post_one_notification(struct watch_queue *wqueue,
+ 	buf->offset = offset;
+ 	buf->len = len;
+ 	buf->flags = PIPE_BUF_FLAG_WHOLE;
+-	pipe->head = head + 1;
++	smp_store_release(&pipe->head, head + 1); /* vs pipe_read() */
+ 
+ 	if (!test_and_clear_bit(note, wqueue->notes_bitmap)) {
+ 		spin_unlock_irq(&pipe->rd_wait.lock);
+@@ -243,7 +244,8 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ 		goto error;
+ 	}
+ 
+-	ret = pipe_resize_ring(pipe, nr_notes);
++	nr_notes = nr_pages * WATCH_QUEUE_NOTES_PER_PAGE;
++	ret = pipe_resize_ring(pipe, roundup_pow_of_two(nr_notes));
+ 	if (ret < 0)
+ 		goto error;
+ 
+@@ -268,7 +270,7 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ 	wqueue->notes = pages;
+ 	wqueue->notes_bitmap = bitmap;
+ 	wqueue->nr_pages = nr_pages;
+-	wqueue->nr_notes = nr_pages * WATCH_QUEUE_NOTES_PER_PAGE;
++	wqueue->nr_notes = nr_notes;
+ 	return 0;
+ 
+ error_p:
+@@ -320,7 +322,7 @@ long watch_queue_set_filter(struct pipe_inode_info *pipe,
+ 		    tf[i].info_mask & WATCH_INFO_LENGTH)
+ 			goto err_filter;
+ 		/* Ignore any unknown types */
+-		if (tf[i].type >= sizeof(wfilter->type_filter) * 8)
++		if (tf[i].type >= WATCH_TYPE__NR)
+ 			continue;
+ 		nr_filter++;
+ 	}
+@@ -336,7 +338,7 @@ long watch_queue_set_filter(struct pipe_inode_info *pipe,
+ 
+ 	q = wfilter->filters;
+ 	for (i = 0; i < filter.nr_filters; i++) {
+-		if (tf[i].type >= sizeof(wfilter->type_filter) * BITS_PER_LONG)
++		if (tf[i].type >= WATCH_TYPE__NR)
+ 			continue;
+ 
+ 		q->type			= tf[i].type;
+@@ -371,6 +373,7 @@ static void __put_watch_queue(struct kref *kref)
+ 
+ 	for (i = 0; i < wqueue->nr_pages; i++)
+ 		__free_page(wqueue->notes[i]);
++	bitmap_free(wqueue->notes_bitmap);
+ 
+ 	wfilter = rcu_access_pointer(wqueue->filter);
+ 	if (wfilter)
+@@ -566,7 +569,7 @@ void watch_queue_clear(struct watch_queue *wqueue)
+ 	rcu_read_lock();
+ 	spin_lock_bh(&wqueue->lock);
+ 
+-	/* Prevent new additions and prevent notifications from happening */
++	/* Prevent new notifications from being stored. */
+ 	wqueue->defunct = true;
+ 
+ 	while (!hlist_empty(&wqueue->watches)) {
+diff --git a/mm/gup.c b/mm/gup.c
+index 37087529bb954..b7e5e80538c9f 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1723,11 +1723,11 @@ EXPORT_SYMBOL(fault_in_writeable);
+  * @uaddr: start of address range
+  * @size: length of address range
+  *
+- * Faults in an address range using get_user_pages, i.e., without triggering
+- * hardware page faults.  This is primarily useful when we already know that
+- * some or all of the pages in the address range aren't in memory.
++ * Faults in an address range for writing.  This is primarily useful when we
++ * already know that some or all of the pages in the address range aren't in
++ * memory.
+  *
+- * Other than fault_in_writeable(), this function is non-destructive.
++ * Unlike fault_in_writeable(), this function is non-destructive.
+  *
+  * Note that we don't pin or otherwise hold the pages referenced that we fault
+  * in.  There's no guarantee that they'll stay in memory for any duration of
+@@ -1738,46 +1738,27 @@ EXPORT_SYMBOL(fault_in_writeable);
+  */
+ size_t fault_in_safe_writeable(const char __user *uaddr, size_t size)
+ {
+-	unsigned long start = (unsigned long)untagged_addr(uaddr);
+-	unsigned long end, nstart, nend;
++	unsigned long start = (unsigned long)uaddr, end;
+ 	struct mm_struct *mm = current->mm;
+-	struct vm_area_struct *vma = NULL;
+-	int locked = 0;
++	bool unlocked = false;
+ 
+-	nstart = start & PAGE_MASK;
++	if (unlikely(size == 0))
++		return 0;
+ 	end = PAGE_ALIGN(start + size);
+-	if (end < nstart)
++	if (end < start)
+ 		end = 0;
+-	for (; nstart != end; nstart = nend) {
+-		unsigned long nr_pages;
+-		long ret;
+ 
+-		if (!locked) {
+-			locked = 1;
+-			mmap_read_lock(mm);
+-			vma = find_vma(mm, nstart);
+-		} else if (nstart >= vma->vm_end)
+-			vma = vma->vm_next;
+-		if (!vma || vma->vm_start >= end)
+-			break;
+-		nend = end ? min(end, vma->vm_end) : vma->vm_end;
+-		if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+-			continue;
+-		if (nstart < vma->vm_start)
+-			nstart = vma->vm_start;
+-		nr_pages = (nend - nstart) / PAGE_SIZE;
+-		ret = __get_user_pages_locked(mm, nstart, nr_pages,
+-					      NULL, NULL, &locked,
+-					      FOLL_TOUCH | FOLL_WRITE);
+-		if (ret <= 0)
++	mmap_read_lock(mm);
++	do {
++		if (fixup_user_fault(mm, start, FAULT_FLAG_WRITE, &unlocked))
+ 			break;
+-		nend = nstart + ret * PAGE_SIZE;
+-	}
+-	if (locked)
+-		mmap_read_unlock(mm);
+-	if (nstart == end)
+-		return 0;
+-	return size - min_t(size_t, nstart - start, size);
++		start = (start + PAGE_SIZE) & PAGE_MASK;
++	} while (start != end);
++	mmap_read_unlock(mm);
++
++	if (size > (unsigned long)uaddr - start)
++		return size - ((unsigned long)uaddr - start);
++	return 0;
+ }
+ EXPORT_SYMBOL(fault_in_safe_writeable);
+ 
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 44a8730c26acc..00bb087c2ca8b 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -87,6 +87,13 @@ again:
+ 	ax25_for_each(s, &ax25_list) {
+ 		if (s->ax25_dev == ax25_dev) {
+ 			sk = s->sk;
++			if (!sk) {
++				spin_unlock_bh(&ax25_list_lock);
++				s->ax25_dev = NULL;
++				ax25_disconnect(s, ENETUNREACH);
++				spin_lock_bh(&ax25_list_lock);
++				goto again;
++			}
+ 			sock_hold(sk);
+ 			spin_unlock_bh(&ax25_list_lock);
+ 			lock_sock(sk);
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index d7f9ee830d34c..9e5657f632453 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -213,7 +213,7 @@ static ssize_t speed_show(struct device *dev,
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+-	if (netif_running(netdev)) {
++	if (netif_running(netdev) && netif_device_present(netdev)) {
+ 		struct ethtool_link_ksettings cmd;
+ 
+ 		if (!__ethtool_get_link_ksettings(netdev, &cmd))
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 56e23333e7080..f1e3d70e89870 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3919,32 +3919,6 @@ err_linearize:
+ }
+ EXPORT_SYMBOL_GPL(skb_segment_list);
+ 
+-int skb_gro_receive_list(struct sk_buff *p, struct sk_buff *skb)
+-{
+-	if (unlikely(p->len + skb->len >= 65536))
+-		return -E2BIG;
+-
+-	if (NAPI_GRO_CB(p)->last == p)
+-		skb_shinfo(p)->frag_list = skb;
+-	else
+-		NAPI_GRO_CB(p)->last->next = skb;
+-
+-	skb_pull(skb, skb_gro_offset(skb));
+-
+-	NAPI_GRO_CB(p)->last = skb;
+-	NAPI_GRO_CB(p)->count++;
+-	p->data_len += skb->len;
+-
+-	/* sk owenrship - if any - completely transferred to the aggregated packet */
+-	skb->destructor = NULL;
+-	p->truesize += skb->truesize;
+-	p->len += skb->len;
+-
+-	NAPI_GRO_CB(skb)->same_flow = 1;
+-
+-	return 0;
+-}
+-
+ /**
+  *	skb_segment - Perform protocol segmentation on skb.
+  *	@head_skb: buffer to segment
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index e1b1d080e908d..70e6c87fbe3df 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -446,6 +446,7 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 	struct page *page;
+ 	struct sk_buff *trailer;
+ 	int tailen = esp->tailen;
++	unsigned int allocsz;
+ 
+ 	/* this is non-NULL only with TCP/UDP Encapsulation */
+ 	if (x->encap) {
+@@ -455,6 +456,10 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 			return err;
+ 	}
+ 
++	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
++	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++		goto cow;
++
+ 	if (!skb_cloned(skb)) {
+ 		if (tailen <= skb_tailroom(skb)) {
+ 			nfrags = 1;
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 8e4e9aa12130d..dad5d29a6a8db 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -159,6 +159,9 @@ static struct sk_buff *xfrm4_beet_gso_segment(struct xfrm_state *x,
+ 			skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4;
+ 	}
+ 
++	if (proto == IPPROTO_IPV6)
++		skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP4;
++
+ 	__skb_pull(skb, skb_transport_offset(skb));
+ 	ops = rcu_dereference(inet_offloads[proto]);
+ 	if (likely(ops && ops->callbacks.gso_segment))
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 86d32a1e62ac9..c2398f9e46f06 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -424,6 +424,33 @@ out:
+ 	return segs;
+ }
+ 
++static int skb_gro_receive_list(struct sk_buff *p, struct sk_buff *skb)
++{
++	if (unlikely(p->len + skb->len >= 65536))
++		return -E2BIG;
++
++	if (NAPI_GRO_CB(p)->last == p)
++		skb_shinfo(p)->frag_list = skb;
++	else
++		NAPI_GRO_CB(p)->last->next = skb;
++
++	skb_pull(skb, skb_gro_offset(skb));
++
++	NAPI_GRO_CB(p)->last = skb;
++	NAPI_GRO_CB(p)->count++;
++	p->data_len += skb->len;
++
++	/* sk owenrship - if any - completely transferred to the aggregated packet */
++	skb->destructor = NULL;
++	p->truesize += skb->truesize;
++	p->len += skb->len;
++
++	NAPI_GRO_CB(skb)->same_flow = 1;
++
++	return 0;
++}
++
++
+ #define UDP_GRO_CNT_MAX 64
+ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
+ 					       struct sk_buff *skb)
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 7c78e1215ae34..e92ca415756a5 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5002,6 +5002,7 @@ static int inet6_fill_ifaddr(struct sk_buff *skb, struct inet6_ifaddr *ifa,
+ 	    nla_put_s32(skb, IFA_TARGET_NETNSID, args->netnsid))
+ 		goto error;
+ 
++	spin_lock_bh(&ifa->lock);
+ 	if (!((ifa->flags&IFA_F_PERMANENT) &&
+ 	      (ifa->prefered_lft == INFINITY_LIFE_TIME))) {
+ 		preferred = ifa->prefered_lft;
+@@ -5023,6 +5024,7 @@ static int inet6_fill_ifaddr(struct sk_buff *skb, struct inet6_ifaddr *ifa,
+ 		preferred = INFINITY_LIFE_TIME;
+ 		valid = INFINITY_LIFE_TIME;
+ 	}
++	spin_unlock_bh(&ifa->lock);
+ 
+ 	if (!ipv6_addr_any(&ifa->peer_addr)) {
+ 		if (nla_put_in6_addr(skb, IFA_LOCAL, &ifa->addr) < 0 ||
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 883b53fd78467..b7b573085bd51 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -483,6 +483,7 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 	struct page *page;
+ 	struct sk_buff *trailer;
+ 	int tailen = esp->tailen;
++	unsigned int allocsz;
+ 
+ 	if (x->encap) {
+ 		int err = esp6_output_encap(x, skb, esp);
+@@ -491,6 +492,10 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 			return err;
+ 	}
+ 
++	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
++	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++		goto cow;
++
+ 	if (!skb_cloned(skb)) {
+ 		if (tailen <= skb_tailroom(skb)) {
+ 			nfrags = 1;
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index a349d47980776..302170882382a 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -198,6 +198,9 @@ static struct sk_buff *xfrm6_beet_gso_segment(struct xfrm_state *x,
+ 			ipv6_skip_exthdr(skb, 0, &proto, &frag);
+ 	}
+ 
++	if (proto == IPPROTO_IPIP)
++		skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP6;
++
+ 	__skb_pull(skb, skb_transport_offset(skb));
+ 	ops = rcu_dereference(inet6_offloads[proto]);
+ 	if (likely(ops && ops->callbacks.gso_segment))
+diff --git a/net/sctp/diag.c b/net/sctp/diag.c
+index 034e2c74497df..d9c6d8f30f093 100644
+--- a/net/sctp/diag.c
++++ b/net/sctp/diag.c
+@@ -61,10 +61,6 @@ static void inet_diag_msg_sctpasoc_fill(struct inet_diag_msg *r,
+ 		r->idiag_timer = SCTP_EVENT_TIMEOUT_T3_RTX;
+ 		r->idiag_retrans = asoc->rtx_data_chunks;
+ 		r->idiag_expires = jiffies_to_msecs(t3_rtx->expires - jiffies);
+-	} else {
+-		r->idiag_timer = 0;
+-		r->idiag_retrans = 0;
+-		r->idiag_expires = 0;
+ 	}
+ }
+ 
+@@ -144,13 +140,14 @@ static int inet_sctp_diag_fill(struct sock *sk, struct sctp_association *asoc,
+ 	r = nlmsg_data(nlh);
+ 	BUG_ON(!sk_fullsock(sk));
+ 
++	r->idiag_timer = 0;
++	r->idiag_retrans = 0;
++	r->idiag_expires = 0;
+ 	if (asoc) {
+ 		inet_diag_msg_sctpasoc_fill(r, sk, asoc);
+ 	} else {
+ 		inet_diag_msg_common_fill(r, sk);
+ 		r->idiag_state = sk->sk_state;
+-		r->idiag_timer = 0;
+-		r->idiag_retrans = 0;
+ 	}
+ 
+ 	if (inet_diag_msg_attrs_fill(sk, skb, r, ext, user_ns, net_admin))
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 60bc74b76adc5..1cb5907d90d8e 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -352,16 +352,18 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ 		goto rejected;
+ 	}
+ 
+-	test_and_set_bit_lock(0, &b->up);
+-	rcu_assign_pointer(tn->bearer_list[bearer_id], b);
+-	if (skb)
+-		tipc_bearer_xmit_skb(net, bearer_id, skb, &b->bcast_addr);
+-
++	/* Create monitoring data before accepting activate messages */
+ 	if (tipc_mon_create(net, bearer_id)) {
+ 		bearer_disable(net, b);
++		kfree_skb(skb);
+ 		return -ENOMEM;
+ 	}
+ 
++	test_and_set_bit_lock(0, &b->up);
++	rcu_assign_pointer(tn->bearer_list[bearer_id], b);
++	if (skb)
++		tipc_bearer_xmit_skb(net, bearer_id, skb, &b->bcast_addr);
++
+ 	pr_info("Enabled bearer <%s>, priority %u\n", name, prio);
+ 
+ 	return res;
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 4e7936d9b4424..115a4a7950f50 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -2285,6 +2285,11 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 		break;
+ 
+ 	case STATE_MSG:
++		/* Validate Gap ACK blocks, drop if invalid */
++		glen = tipc_get_gap_ack_blks(&ga, l, hdr, true);
++		if (glen > dlen)
++			break;
++
+ 		l->rcv_nxt_state = msg_seqno(hdr) + 1;
+ 
+ 		/* Update own tolerance if peer indicates a non-zero value */
+@@ -2310,10 +2315,6 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 			break;
+ 		}
+ 
+-		/* Receive Gap ACK blocks from peer if any */
+-		glen = tipc_get_gap_ack_blks(&ga, l, hdr, true);
+-		if(glen > dlen)
+-			break;
+ 		tipc_mon_rcv(l->net, data + glen, dlen - glen, l->addr,
+ 			     &l->mon_state, l->bearer_id);
+ 
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index ba74fdf74af91..61eee68190c6d 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -1648,6 +1648,7 @@ int parse_events_multi_pmu_add(struct parse_events_state *parse_state,
+ {
+ 	struct parse_events_term *term;
+ 	struct list_head *list = NULL;
++	struct list_head *orig_head = NULL;
+ 	struct perf_pmu *pmu = NULL;
+ 	int ok = 0;
+ 	char *config;
+@@ -1674,7 +1675,6 @@ int parse_events_multi_pmu_add(struct parse_events_state *parse_state,
+ 	}
+ 	list_add_tail(&term->list, head);
+ 
+-
+ 	/* Add it for all PMUs that support the alias */
+ 	list = malloc(sizeof(struct list_head));
+ 	if (!list)
+@@ -1687,13 +1687,15 @@ int parse_events_multi_pmu_add(struct parse_events_state *parse_state,
+ 
+ 		list_for_each_entry(alias, &pmu->aliases, list) {
+ 			if (!strcasecmp(alias->name, str)) {
++				parse_events_copy_term_list(head, &orig_head);
+ 				if (!parse_events_add_pmu(parse_state, list,
+-							  pmu->name, head,
++							  pmu->name, orig_head,
+ 							  true, true)) {
+ 					pr_debug("%s -> %s/%s/\n", str,
+ 						 pmu->name, alias->str);
+ 					ok++;
+ 				}
++				parse_events_terms__delete(orig_head);
+ 			}
+ 		}
+ 	}
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_crash.c b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+new file mode 100644
+index 0000000000000..f74b82305da8c
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+@@ -0,0 +1,32 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <test_progs.h>
++#include "timer_crash.skel.h"
++
++enum {
++	MODE_ARRAY,
++	MODE_HASH,
++};
++
++static void test_timer_crash_mode(int mode)
++{
++	struct timer_crash *skel;
++
++	skel = timer_crash__open_and_load();
++	if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load"))
++		return;
++	skel->bss->pid = getpid();
++	skel->bss->crash_map = mode;
++	if (!ASSERT_OK(timer_crash__attach(skel), "timer_crash__attach"))
++		goto end;
++	usleep(1);
++end:
++	timer_crash__destroy(skel);
++}
++
++void test_timer_crash(void)
++{
++	if (test__start_subtest("array"))
++		test_timer_crash_mode(MODE_ARRAY);
++	if (test__start_subtest("hash"))
++		test_timer_crash_mode(MODE_HASH);
++}
+diff --git a/tools/testing/selftests/bpf/progs/timer_crash.c b/tools/testing/selftests/bpf/progs/timer_crash.c
+new file mode 100644
+index 0000000000000..f8f7944e70dae
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/timer_crash.c
+@@ -0,0 +1,54 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <vmlinux.h>
++#include <bpf/bpf_tracing.h>
++#include <bpf/bpf_helpers.h>
++
++struct map_elem {
++	struct bpf_timer timer;
++	struct bpf_spin_lock lock;
++};
++
++struct {
++	__uint(type, BPF_MAP_TYPE_ARRAY);
++	__uint(max_entries, 1);
++	__type(key, int);
++	__type(value, struct map_elem);
++} amap SEC(".maps");
++
++struct {
++	__uint(type, BPF_MAP_TYPE_HASH);
++	__uint(max_entries, 1);
++	__type(key, int);
++	__type(value, struct map_elem);
++} hmap SEC(".maps");
++
++int pid = 0;
++int crash_map = 0; /* 0 for amap, 1 for hmap */
++
++SEC("fentry/do_nanosleep")
++int sys_enter(void *ctx)
++{
++	struct map_elem *e, value = {};
++	void *map = crash_map ? (void *)&hmap : (void *)&amap;
++
++	if (bpf_get_current_task_btf()->tgid != pid)
++		return 0;
++
++	*(void **)&value = (void *)0xdeadcaf3;
++
++	bpf_map_update_elem(map, &(int){0}, &value, 0);
++	/* For array map, doing bpf_map_update_elem will do a
++	 * check_and_free_timer_in_array, which will trigger the crash if timer
++	 * pointer was overwritten, for hmap we need to use bpf_timer_cancel.
++	 */
++	if (crash_map == 1) {
++		e = bpf_map_lookup_elem(map, &(int){0});
++		if (!e)
++			return 0;
++		bpf_timer_cancel(&e->timer);
++	}
++	return 0;
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/memfd/memfd_test.c b/tools/testing/selftests/memfd/memfd_test.c
+index 192a2899bae8f..94df2692e6e4a 100644
+--- a/tools/testing/selftests/memfd/memfd_test.c
++++ b/tools/testing/selftests/memfd/memfd_test.c
+@@ -455,6 +455,7 @@ static void mfd_fail_write(int fd)
+ 			printf("mmap()+mprotect() didn't fail as expected\n");
+ 			abort();
+ 		}
++		munmap(p, mfd_def_size);
+ 	}
+ 
+ 	/* verify PUNCH_HOLE fails */
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 543ad7513a8e9..694732e4b3448 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -374,6 +374,16 @@ run_cmd() {
+ 	return $rc
+ }
+ 
++run_cmd_bg() {
++	cmd="$*"
++
++	if [ "$VERBOSE" = "1" ]; then
++		printf "    COMMAND: %s &\n" "${cmd}"
++	fi
++
++	$cmd 2>&1 &
++}
++
+ # Find the auto-generated name for this namespace
+ nsname() {
+ 	eval echo \$NS_$1
+@@ -670,10 +680,10 @@ setup_nettest_xfrm() {
+ 	[ ${1} -eq 6 ] && proto="-6" || proto=""
+ 	port=${2}
+ 
+-	run_cmd ${ns_a} nettest ${proto} -q -D -s -x -p ${port} -t 5 &
++	run_cmd_bg "${ns_a}" nettest "${proto}" -q -D -s -x -p "${port}" -t 5
+ 	nettest_pids="${nettest_pids} $!"
+ 
+-	run_cmd ${ns_b} nettest ${proto} -q -D -s -x -p ${port} -t 5 &
++	run_cmd_bg "${ns_b}" nettest "${proto}" -q -D -s -x -p "${port}" -t 5
+ 	nettest_pids="${nettest_pids} $!"
+ }
+ 
+@@ -865,7 +875,6 @@ setup_ovs_bridge() {
+ setup() {
+ 	[ "$(id -u)" -ne 0 ] && echo "  need to run as root" && return $ksft_skip
+ 
+-	cleanup
+ 	for arg do
+ 		eval setup_${arg} || { echo "  ${arg} not supported"; return 1; }
+ 	done
+@@ -876,7 +885,7 @@ trace() {
+ 
+ 	for arg do
+ 		[ "${ns_cmd}" = "" ] && ns_cmd="${arg}" && continue
+-		${ns_cmd} tcpdump -s 0 -i "${arg}" -w "${name}_${arg}.pcap" 2> /dev/null &
++		${ns_cmd} tcpdump --immediate-mode -s 0 -i "${arg}" -w "${name}_${arg}.pcap" 2> /dev/null &
+ 		tcpdump_pids="${tcpdump_pids} $!"
+ 		ns_cmd=
+ 	done
+@@ -1836,6 +1845,10 @@ run_test() {
+ 
+ 	unset IFS
+ 
++	# Since cleanup() relies on variables modified by this subshell, it
++	# has to run in this context.
++	trap cleanup EXIT
++
+ 	if [ "$VERBOSE" = "1" ]; then
+ 		printf "\n##########################################################################\n\n"
+ 	fi
+diff --git a/tools/testing/selftests/vm/map_fixed_noreplace.c b/tools/testing/selftests/vm/map_fixed_noreplace.c
+index d91bde5112686..eed44322d1a63 100644
+--- a/tools/testing/selftests/vm/map_fixed_noreplace.c
++++ b/tools/testing/selftests/vm/map_fixed_noreplace.c
+@@ -17,9 +17,6 @@
+ #define MAP_FIXED_NOREPLACE 0x100000
+ #endif
+ 
+-#define BASE_ADDRESS	(256ul * 1024 * 1024)
+-
+-
+ static void dump_maps(void)
+ {
+ 	char cmd[32];
+@@ -28,18 +25,46 @@ static void dump_maps(void)
+ 	system(cmd);
+ }
+ 
++static unsigned long find_base_addr(unsigned long size)
++{
++	void *addr;
++	unsigned long flags;
++
++	flags = MAP_PRIVATE | MAP_ANONYMOUS;
++	addr = mmap(NULL, size, PROT_NONE, flags, -1, 0);
++	if (addr == MAP_FAILED) {
++		printf("Error: couldn't map the space we need for the test\n");
++		return 0;
++	}
++
++	if (munmap(addr, size) != 0) {
++		printf("Error: couldn't map the space we need for the test\n");
++		return 0;
++	}
++	return (unsigned long)addr;
++}
++
+ int main(void)
+ {
++	unsigned long base_addr;
+ 	unsigned long flags, addr, size, page_size;
+ 	char *p;
+ 
+ 	page_size = sysconf(_SC_PAGE_SIZE);
+ 
++	//let's find a base addr that is free before we start the tests
++	size = 5 * page_size;
++	base_addr = find_base_addr(size);
++	if (!base_addr) {
++		printf("Error: couldn't map the space we need for the test\n");
++		return 1;
++	}
++
+ 	flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE;
+ 
+ 	// Check we can map all the areas we need below
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 5 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 
+@@ -60,7 +85,7 @@ int main(void)
+ 	printf("unmap() successful\n");
+ 
+ 	errno = 0;
+-	addr = BASE_ADDRESS + page_size;
++	addr = base_addr + page_size;
+ 	size = 3 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -80,7 +105,7 @@ int main(void)
+ 	 *     +4 |  free  | new
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 5 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -101,7 +126,7 @@ int main(void)
+ 	 *     +4 |  free  |
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS + (2 * page_size);
++	addr = base_addr + (2 * page_size);
+ 	size = page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -121,7 +146,7 @@ int main(void)
+ 	 *     +4 |  free  | new
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS + (3 * page_size);
++	addr = base_addr + (3 * page_size);
+ 	size = 2 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -141,7 +166,7 @@ int main(void)
+ 	 *     +4 |  free  |
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 2 * page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -161,7 +186,7 @@ int main(void)
+ 	 *     +4 |  free  |
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -181,7 +206,7 @@ int main(void)
+ 	 *     +4 |  free  |  new
+ 	 */
+ 	errno = 0;
+-	addr = BASE_ADDRESS + (4 * page_size);
++	addr = base_addr + (4 * page_size);
+ 	size = page_size;
+ 	p = mmap((void *)addr, size, PROT_NONE, flags, -1, 0);
+ 	printf("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
+@@ -192,7 +217,7 @@ int main(void)
+ 		return 1;
+ 	}
+ 
+-	addr = BASE_ADDRESS;
++	addr = base_addr;
+ 	size = 5 * page_size;
+ 	if (munmap((void *)addr, size) != 0) {
+ 		dump_maps();
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 71ddc7a8bc302..6ae9e04d0585e 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -5347,9 +5347,7 @@ static int kvm_suspend(void)
+ static void kvm_resume(void)
+ {
+ 	if (kvm_usage_count) {
+-#ifdef CONFIG_LOCKDEP
+-		WARN_ON(lockdep_is_held(&kvm_count_lock));
+-#endif
++		lockdep_assert_not_held(&kvm_count_lock);
+ 		hardware_enable_nolock(NULL);
+ 	}
+ }


* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-19 13:17 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-19 13:17 UTC (permalink / raw
  To: gentoo-commits

commit:     82b5551e1b75cffbf5cfdf3d18bcd3f89a65891c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 19 13:17:33 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 19 13:17:33 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=82b5551e

Linux patch 5.16.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1015_linux-5.16.16.patch | 763 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 767 insertions(+)

diff --git a/0000_README b/0000_README
index 3ef28034..2ecef651 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-5.16.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.15
 
+Patch:  1015_linux-5.16.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.16.16.patch b/1015_linux-5.16.16.patch
new file mode 100644
index 00000000..88d8453e
--- /dev/null
+++ b/1015_linux-5.16.16.patch
@@ -0,0 +1,763 @@
+diff --git a/Makefile b/Makefile
+index 8675dd2a9cc85..d625d3aeab2e9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm/boot/dts/rk322x.dtsi b/arch/arm/boot/dts/rk322x.dtsi
+index 8eed9e3a92e90..5868eb512f69f 100644
+--- a/arch/arm/boot/dts/rk322x.dtsi
++++ b/arch/arm/boot/dts/rk322x.dtsi
+@@ -718,8 +718,8 @@
+ 		interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
+ 		assigned-clocks = <&cru SCLK_HDMI_PHY>;
+ 		assigned-clock-parents = <&hdmi_phy>;
+-		clocks = <&cru SCLK_HDMI_HDCP>, <&cru PCLK_HDMI_CTRL>, <&cru SCLK_HDMI_CEC>;
+-		clock-names = "isfr", "iahb", "cec";
++		clocks = <&cru PCLK_HDMI_CTRL>, <&cru SCLK_HDMI_HDCP>, <&cru SCLK_HDMI_CEC>;
++		clock-names = "iahb", "isfr", "cec";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&hdmii2c_xfer &hdmi_hpd &hdmi_cec>;
+ 		resets = <&cru SRST_HDMI_P>;
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index aaaa61875701d..45a9d9b908d2a 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -971,7 +971,7 @@
+ 		status = "disabled";
+ 	};
+ 
+-	crypto: cypto-controller@ff8a0000 {
++	crypto: crypto@ff8a0000 {
+ 		compatible = "rockchip,rk3288-crypto";
+ 		reg = <0x0 0xff8a0000 0x0 0x4000>;
+ 		interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
+index 0dd2d2ee765aa..f4270cf189962 100644
+--- a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
++++ b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
+@@ -502,7 +502,7 @@
+ 		};
+ 
+ 		usb0: usb@ffb00000 {
+-			compatible = "snps,dwc2";
++			compatible = "intel,socfpga-agilex-hsotg", "snps,dwc2";
+ 			reg = <0xffb00000 0x40000>;
+ 			interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>;
+ 			phys = <&usbphy0>;
+@@ -515,7 +515,7 @@
+ 		};
+ 
+ 		usb1: usb@ffb40000 {
+-			compatible = "snps,dwc2";
++			compatible = "intel,socfpga-agilex-hsotg", "snps,dwc2";
+ 			reg = <0xffb40000 0x40000>;
+ 			interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
+ 			phys = <&usbphy0>;
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index 00f50b05d55a3..b72874c16a712 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -711,7 +711,7 @@
+ 		clock-names = "pclk", "timer";
+ 	};
+ 
+-	dmac: dmac@ff240000 {
++	dmac: dma-controller@ff240000 {
+ 		compatible = "arm,pl330", "arm,primecell";
+ 		reg = <0x0 0xff240000 0x0 0x4000>;
+ 		interrupts = <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index 39db0b85b4da2..b822533dc7f19 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -489,7 +489,7 @@
+ 		status = "disabled";
+ 	};
+ 
+-	dmac: dmac@ff1f0000 {
++	dmac: dma-controller@ff1f0000 {
+ 		compatible = "arm,pl330", "arm,primecell";
+ 		reg = <0x0 0xff1f0000 0x0 0x4000>;
+ 		interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+index 292bb7e80cf35..3ae5d727e3674 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+@@ -232,6 +232,7 @@
+ 
+ &usbdrd_dwc3_0 {
+ 	dr_mode = "otg";
++	extcon = <&extcon_usb3>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index fb67db4619ea0..08fa00364b42f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -25,6 +25,13 @@
+ 		};
+ 	};
+ 
++	extcon_usb3: extcon-usb3 {
++		compatible = "linux,extcon-usb-gpio";
++		id-gpio = <&gpio1 RK_PC2 GPIO_ACTIVE_HIGH>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&usb3_id>;
++	};
++
+ 	clkin_gmac: external-gmac-clock {
+ 		compatible = "fixed-clock";
+ 		clock-frequency = <125000000>;
+@@ -422,9 +429,22 @@
+ 			  <4 RK_PA3 RK_FUNC_GPIO &pcfg_pull_none>;
+ 		};
+ 	};
++
++	usb3 {
++		usb3_id: usb3-id {
++			rockchip,pins =
++			  <1 RK_PC2 RK_FUNC_GPIO &pcfg_pull_none>;
++		};
++	};
+ };
+ 
+ &sdhci {
++	/*
++	 * Signal integrity isn't great at 200MHz but 100MHz has proven stable
++	 * enough.
++	 */
++	max-frequency = <100000000>;
++
+ 	bus-width = <8>;
+ 	mmc-hs400-1_8v;
+ 	mmc-hs400-enhanced-strobe;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index d3cdf6f42a303..080457a68e3c7 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1881,10 +1881,10 @@
+ 		interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH 0>;
+ 		clocks = <&cru PCLK_HDMI_CTRL>,
+ 			 <&cru SCLK_HDMI_SFR>,
+-			 <&cru PLL_VPLL>,
++			 <&cru SCLK_HDMI_CEC>,
+ 			 <&cru PCLK_VIO_GRF>,
+-			 <&cru SCLK_HDMI_CEC>;
+-		clock-names = "iahb", "isfr", "vpll", "grf", "cec";
++			 <&cru PLL_VPLL>;
++		clock-names = "iahb", "isfr", "cec", "grf", "vpll";
+ 		power-domains = <&power RK3399_PD_HDCP>;
+ 		reg-io-width = <4>;
+ 		rockchip,grf = <&grf>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+index 46d9552f60284..688e3585525a9 100644
+--- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+@@ -647,7 +647,7 @@
+ 		status = "disabled";
+ 	};
+ 
+-	dmac0: dmac@fe530000 {
++	dmac0: dma-controller@fe530000 {
+ 		compatible = "arm,pl330", "arm,primecell";
+ 		reg = <0x0 0xfe530000 0x0 0x4000>;
+ 		interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
+@@ -658,7 +658,7 @@
+ 		#dma-cells = <1>;
+ 	};
+ 
+-	dmac1: dmac@fe550000 {
++	dmac1: dma-controller@fe550000 {
+ 		compatible = "arm,pl330", "arm,primecell";
+ 		reg = <0x0 0xfe550000 0x0 0x4000>;
+ 		interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
+index d542fb7af3ba2..1986d13094100 100644
+--- a/arch/mips/kernel/smp.c
++++ b/arch/mips/kernel/smp.c
+@@ -351,6 +351,9 @@ asmlinkage void start_secondary(void)
+ 	cpu = smp_processor_id();
+ 	cpu_data[cpu].udelay_val = loops_per_jiffy;
+ 
++	set_cpu_sibling_map(cpu);
++	set_cpu_core_map(cpu);
++
+ 	cpumask_set_cpu(cpu, &cpu_coherent_mask);
+ 	notify_cpu_starting(cpu);
+ 
+@@ -362,9 +365,6 @@ asmlinkage void start_secondary(void)
+ 	/* The CPU is running and counters synchronised, now mark it online */
+ 	set_cpu_online(cpu, true);
+ 
+-	set_cpu_sibling_map(cpu);
+-	set_cpu_core_map(cpu);
+-
+ 	calculate_cpu_foreign_map();
+ 
+ 	/*
+diff --git a/drivers/atm/firestream.c b/drivers/atm/firestream.c
+index 3bc3c314a467b..4f67404fe64c7 100644
+--- a/drivers/atm/firestream.c
++++ b/drivers/atm/firestream.c
+@@ -1676,6 +1676,8 @@ static int fs_init(struct fs_dev *dev)
+ 	dev->hw_base = pci_resource_start(pci_dev, 0);
+ 
+ 	dev->base = ioremap(dev->hw_base, 0x1000);
++	if (!dev->base)
++		return 1;
+ 
+ 	reset_chip (dev);
+   
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 52e20c68813b1..6ae26e7d3dece 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -2275,6 +2275,9 @@ EXPORT_SYMBOL(drm_connector_atomic_hdr_metadata_equal);
+ void drm_connector_set_vrr_capable_property(
+ 		struct drm_connector *connector, bool capable)
+ {
++	if (!connector->vrr_capable_property)
++		return;
++
+ 	drm_object_property_set_value(&connector->base,
+ 				      connector->vrr_capable_property,
+ 				      capable);
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index aaa3c455e01ea..d3136842b717b 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -18,6 +18,7 @@
+ #include <linux/delay.h>
+ #include <linux/irq.h>
+ #include <linux/interrupt.h>
++#include <linux/platform_data/x86/soc.h>
+ #include <linux/slab.h>
+ #include <linux/acpi.h>
+ #include <linux/of.h>
+@@ -686,21 +687,6 @@ static int goodix_reset(struct goodix_ts_data *ts)
+ }
+ 
+ #ifdef ACPI_GPIO_SUPPORT
+-#include <asm/cpu_device_id.h>
+-#include <asm/intel-family.h>
+-
+-static const struct x86_cpu_id baytrail_cpu_ids[] = {
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT, X86_FEATURE_ANY, },
+-	{}
+-};
+-
+-static inline bool is_byt(void)
+-{
+-	const struct x86_cpu_id *id = x86_match_cpu(baytrail_cpu_ids);
+-
+-	return !!id;
+-}
+-
+ static const struct acpi_gpio_params first_gpio = { 0, 0, false };
+ static const struct acpi_gpio_params second_gpio = { 1, 0, false };
+ 
+@@ -759,7 +745,7 @@ static int goodix_add_acpi_gpio_mappings(struct goodix_ts_data *ts)
+ 	const struct acpi_gpio_mapping *gpio_mapping = NULL;
+ 	struct device *dev = &ts->client->dev;
+ 	LIST_HEAD(resources);
+-	int ret;
++	int irq, ret;
+ 
+ 	ts->gpio_count = 0;
+ 	ts->gpio_int_idx = -1;
+@@ -772,6 +758,20 @@ static int goodix_add_acpi_gpio_mappings(struct goodix_ts_data *ts)
+ 
+ 	acpi_dev_free_resource_list(&resources);
+ 
++	/*
++	 * CHT devices should have a GpioInt + a regular GPIO ACPI resource.
++	 * Some CHT devices have a bug (where the also is bogus Interrupt
++	 * resource copied from a previous BYT based generation). i2c-core-acpi
++	 * will use the non-working Interrupt resource, fix this up.
++	 */
++	if (soc_intel_is_cht() && ts->gpio_count == 2 && ts->gpio_int_idx != -1) {
++		irq = acpi_dev_gpio_irq_get(ACPI_COMPANION(dev), 0);
++		if (irq > 0 && irq != ts->client->irq) {
++			dev_warn(dev, "Overriding IRQ %d -> %d\n", ts->client->irq, irq);
++			ts->client->irq = irq;
++		}
++	}
++
+ 	if (ts->gpio_count == 2 && ts->gpio_int_idx == 0) {
+ 		ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_GPIO;
+ 		gpio_mapping = acpi_goodix_int_first_gpios;
+@@ -784,7 +784,7 @@ static int goodix_add_acpi_gpio_mappings(struct goodix_ts_data *ts)
+ 		dev_info(dev, "Using ACPI INTI and INTO methods for IRQ pin access\n");
+ 		ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_METHOD;
+ 		gpio_mapping = acpi_goodix_reset_only_gpios;
+-	} else if (is_byt() && ts->gpio_count == 2 && ts->gpio_int_idx == -1) {
++	} else if (soc_intel_is_byt() && ts->gpio_count == 2 && ts->gpio_int_idx == -1) {
+ 		dev_info(dev, "No ACPI GpioInt resource, assuming that the GPIO order is reset, int\n");
+ 		ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_GPIO;
+ 		gpio_mapping = acpi_goodix_int_last_gpios;
+diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
+index 137eea4c7bad8..4871428859fdb 100644
+--- a/drivers/net/can/rcar/rcar_canfd.c
++++ b/drivers/net/can/rcar/rcar_canfd.c
+@@ -1716,15 +1716,15 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
+ 
+ 	netif_napi_add(ndev, &priv->napi, rcar_canfd_rx_poll,
+ 		       RCANFD_NAPI_WEIGHT);
++	spin_lock_init(&priv->tx_lock);
++	devm_can_led_init(ndev);
++	gpriv->ch[priv->channel] = priv;
+ 	err = register_candev(ndev);
+ 	if (err) {
+ 		dev_err(&pdev->dev,
+ 			"register_candev() failed, error %d\n", err);
+ 		goto fail_candev;
+ 	}
+-	spin_lock_init(&priv->tx_lock);
+-	devm_can_led_init(ndev);
+-	gpriv->ch[priv->channel] = priv;
+ 	dev_info(&pdev->dev, "device registered (channel %u)\n", priv->channel);
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
+index babc955ba64e2..b47a8237c6dd8 100644
+--- a/drivers/net/ethernet/broadcom/bnx2.c
++++ b/drivers/net/ethernet/broadcom/bnx2.c
+@@ -8212,7 +8212,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
+ 		rc = dma_set_coherent_mask(&pdev->dev, persist_dma_mask);
+ 		if (rc) {
+ 			dev_err(&pdev->dev,
+-				"pci_set_consistent_dma_mask failed, aborting\n");
++				"dma_set_coherent_mask failed, aborting\n");
+ 			goto err_out_unmap;
+ 		}
+ 	} else if ((rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) != 0) {
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index fa91896ae699d..8093346f163b1 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -891,7 +891,16 @@ static inline void ice_set_rdma_cap(struct ice_pf *pf)
+  */
+ static inline void ice_clear_rdma_cap(struct ice_pf *pf)
+ {
+-	ice_unplug_aux_dev(pf);
++	/* We can directly unplug aux device here only if the flag bit
++	 * ICE_FLAG_PLUG_AUX_DEV is not set because ice_unplug_aux_dev()
++	 * could race with ice_plug_aux_dev() called from
++	 * ice_service_task(). In this case we only clear that bit now and
++	 * aux device will be unplugged later once ice_plug_aux_device()
++	 * called from ice_service_task() finishes (see ice_service_task()).
++	 */
++	if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
++		ice_unplug_aux_dev(pf);
++
+ 	clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
+ 	clear_bit(ICE_FLAG_AUX_ENA, pf->flags);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 676e837d48cfc..8a6c3716cdabd 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2237,9 +2237,19 @@ static void ice_service_task(struct work_struct *work)
+ 		return;
+ 	}
+ 
+-	if (test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
++	if (test_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) {
++		/* Plug aux device per request */
+ 		ice_plug_aux_dev(pf);
+ 
++		/* Mark plugging as done but check whether unplug was
++		 * requested during ice_plug_aux_dev() call
++		 * (e.g. from ice_clear_rdma_cap()) and if so then
++		 * plug aux device.
++		 */
++		if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
++			ice_unplug_aux_dev(pf);
++	}
++
+ 	if (test_and_clear_bit(ICE_FLAG_MTU_CHANGED, pf->flags)) {
+ 		struct iidc_event *event;
+ 
+diff --git a/drivers/net/ethernet/sfc/mcdi.c b/drivers/net/ethernet/sfc/mcdi.c
+index be6bfd6b7ec75..50baf62b2cbc6 100644
+--- a/drivers/net/ethernet/sfc/mcdi.c
++++ b/drivers/net/ethernet/sfc/mcdi.c
+@@ -163,9 +163,9 @@ static void efx_mcdi_send_request(struct efx_nic *efx, unsigned cmd,
+ 	/* Serialise with efx_mcdi_ev_cpl() and efx_mcdi_ev_death() */
+ 	spin_lock_bh(&mcdi->iface_lock);
+ 	++mcdi->seqno;
++	seqno = mcdi->seqno & SEQ_MASK;
+ 	spin_unlock_bh(&mcdi->iface_lock);
+ 
+-	seqno = mcdi->seqno & SEQ_MASK;
+ 	xflags = 0;
+ 	if (mcdi->mode == MCDI_MODE_EVENTS)
+ 		xflags |= MCDI_HEADER_XFLAGS_EVREQ;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index f470f9aea50f5..c97798f6290ae 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -552,8 +552,7 @@ static const struct ieee80211_sband_iftype_data iwl_he_capa[] = {
+ 			.has_he = true,
+ 			.he_cap_elem = {
+ 				.mac_cap_info[0] =
+-					IEEE80211_HE_MAC_CAP0_HTC_HE |
+-					IEEE80211_HE_MAC_CAP0_TWT_REQ,
++					IEEE80211_HE_MAC_CAP0_HTC_HE,
+ 				.mac_cap_info[1] =
+ 					IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US |
+ 					IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index cde3d2ce0b855..a65024fc96dd6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -223,7 +223,6 @@ static const u8 he_if_types_ext_capa_sta[] = {
+ 	 [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ 	 [2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT,
+ 	 [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+-	 [9] = WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT,
+ };
+ 
+ static const struct wiphy_iftype_ext_capab he_iftypes_ext_capa[] = {
+diff --git a/include/linux/netfilter_netdev.h b/include/linux/netfilter_netdev.h
+index b4dd96e4dc8dc..e6487a6911360 100644
+--- a/include/linux/netfilter_netdev.h
++++ b/include/linux/netfilter_netdev.h
+@@ -101,7 +101,11 @@ static inline struct sk_buff *nf_hook_egress(struct sk_buff *skb, int *rc,
+ 	nf_hook_state_init(&state, NF_NETDEV_EGRESS,
+ 			   NFPROTO_NETDEV, dev, NULL, NULL,
+ 			   dev_net(dev), NULL);
++
++	/* nf assumes rcu_read_lock, not just read_lock_bh */
++	rcu_read_lock();
+ 	ret = nf_hook_slow(skb, &state, e, 0);
++	rcu_read_unlock();
+ 
+ 	if (ret == 1) {
+ 		return skb;
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 301a164f17e9f..358dfe6fefefc 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1679,14 +1679,15 @@ int km_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 	       const struct xfrm_migrate *m, int num_bundles,
+ 	       const struct xfrm_kmaddress *k,
+ 	       const struct xfrm_encap_tmpl *encap);
+-struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net);
++struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net,
++						u32 if_id);
+ struct xfrm_state *xfrm_state_migrate(struct xfrm_state *x,
+ 				      struct xfrm_migrate *m,
+ 				      struct xfrm_encap_tmpl *encap);
+ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 		 struct xfrm_migrate *m, int num_bundles,
+ 		 struct xfrm_kmaddress *k, struct net *net,
+-		 struct xfrm_encap_tmpl *encap);
++		 struct xfrm_encap_tmpl *encap, u32 if_id);
+ #endif
+ 
+ int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport);
+diff --git a/lib/Kconfig b/lib/Kconfig
+index 5e7165e6a346c..fa4b10322efcd 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -45,7 +45,6 @@ config BITREVERSE
+ config HAVE_ARCH_BITREVERSE
+ 	bool
+ 	default n
+-	depends on BITREVERSE
+ 	help
+ 	  This option enables the use of hardware bit-reversal instructions on
+ 	  architectures which support such operations.
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 6c00ce302f095..1c8fb27b155a3 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3969,6 +3969,7 @@ void hci_release_dev(struct hci_dev *hdev)
+ 	hci_dev_unlock(hdev);
+ 
+ 	ida_simple_remove(&hci_index_ida, hdev->id);
++	kfree_skb(hdev->sent_cmd);
+ 	kfree(hdev);
+ }
+ EXPORT_SYMBOL(hci_release_dev);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 28abb0bb1c515..38f9367851797 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1653,11 +1653,13 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ 				if (!copied)
+ 					copied = used;
+ 				break;
+-			} else if (used <= len) {
+-				seq += used;
+-				copied += used;
+-				offset += used;
+ 			}
++			if (WARN_ON_ONCE(used > len))
++				used = len;
++			seq += used;
++			copied += used;
++			offset += used;
++
+ 			/* If recv_actor drops the lock (e.g. TCP splice
+ 			 * receive) the skb pointer might be invalid when
+ 			 * getting here: tcp_collapse might have deleted it
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index de24a7d474dfd..9bf52a09b5ff3 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2623,7 +2623,7 @@ static int pfkey_migrate(struct sock *sk, struct sk_buff *skb,
+ 	}
+ 
+ 	return xfrm_migrate(&sel, dir, XFRM_POLICY_TYPE_MAIN, m, i,
+-			    kma ? &k : NULL, net, NULL);
++			    kma ? &k : NULL, net, NULL, 0);
+ 
+  out:
+ 	return err;
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 74a878f213d3e..1deb3d874a4b9 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2021 Intel Corporation
++ * Copyright (C) 2018 - 2022 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -626,6 +626,14 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid,
+ 		return -EINVAL;
+ 	}
+ 
++	if (test_sta_flag(sta, WLAN_STA_MFP) &&
++	    !test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
++		ht_dbg(sdata,
++		       "MFP STA not authorized - deny BA session request %pM tid %d\n",
++		       sta->sta.addr, tid);
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * 802.11n-2009 11.5.1.1: If the initiating STA is an HT STA, is a
+ 	 * member of an IBSS, and has no other existing Block Ack agreement
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index f732518287820..9b4bb1460cef9 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -17757,7 +17757,8 @@ void cfg80211_ch_switch_notify(struct net_device *dev,
+ 	wdev->chandef = *chandef;
+ 	wdev->preset_chandef = *chandef;
+ 
+-	if (wdev->iftype == NL80211_IFTYPE_STATION &&
++	if ((wdev->iftype == NL80211_IFTYPE_STATION ||
++	     wdev->iftype == NL80211_IFTYPE_P2P_CLIENT) &&
+ 	    !WARN_ON(!wdev->current_bss))
+ 		cfg80211_update_assoc_bss_entry(wdev, chandef->chan);
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 4924b9135c6ec..fbc0b2798184b 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -4257,7 +4257,7 @@ static bool xfrm_migrate_selector_match(const struct xfrm_selector *sel_cmp,
+ }
+ 
+ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *sel,
+-						    u8 dir, u8 type, struct net *net)
++						    u8 dir, u8 type, struct net *net, u32 if_id)
+ {
+ 	struct xfrm_policy *pol, *ret = NULL;
+ 	struct hlist_head *chain;
+@@ -4266,7 +4266,8 @@ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *
+ 	spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+ 	chain = policy_hash_direct(net, &sel->daddr, &sel->saddr, sel->family, dir);
+ 	hlist_for_each_entry(pol, chain, bydst) {
+-		if (xfrm_migrate_selector_match(sel, &pol->selector) &&
++		if ((if_id == 0 || pol->if_id == if_id) &&
++		    xfrm_migrate_selector_match(sel, &pol->selector) &&
+ 		    pol->type == type) {
+ 			ret = pol;
+ 			priority = ret->priority;
+@@ -4278,7 +4279,8 @@ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *
+ 		if ((pol->priority >= priority) && ret)
+ 			break;
+ 
+-		if (xfrm_migrate_selector_match(sel, &pol->selector) &&
++		if ((if_id == 0 || pol->if_id == if_id) &&
++		    xfrm_migrate_selector_match(sel, &pol->selector) &&
+ 		    pol->type == type) {
+ 			ret = pol;
+ 			break;
+@@ -4394,7 +4396,7 @@ static int xfrm_migrate_check(const struct xfrm_migrate *m, int num_migrate)
+ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 		 struct xfrm_migrate *m, int num_migrate,
+ 		 struct xfrm_kmaddress *k, struct net *net,
+-		 struct xfrm_encap_tmpl *encap)
++		 struct xfrm_encap_tmpl *encap, u32 if_id)
+ {
+ 	int i, err, nx_cur = 0, nx_new = 0;
+ 	struct xfrm_policy *pol = NULL;
+@@ -4413,14 +4415,14 @@ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
+ 	}
+ 
+ 	/* Stage 1 - find policy */
+-	if ((pol = xfrm_migrate_policy_find(sel, dir, type, net)) == NULL) {
++	if ((pol = xfrm_migrate_policy_find(sel, dir, type, net, if_id)) == NULL) {
+ 		err = -ENOENT;
+ 		goto out;
+ 	}
+ 
+ 	/* Stage 2 - find and update state(s) */
+ 	for (i = 0, mp = m; i < num_migrate; i++, mp++) {
+-		if ((x = xfrm_migrate_state_find(mp, net))) {
++		if ((x = xfrm_migrate_state_find(mp, net, if_id))) {
+ 			x_cur[nx_cur] = x;
+ 			nx_cur++;
+ 			xc = xfrm_state_migrate(x, mp, encap);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 100b4b3723e72..f7bfa19169688 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1578,9 +1578,6 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ 	memcpy(&x->mark, &orig->mark, sizeof(x->mark));
+ 	memcpy(&x->props.smark, &orig->props.smark, sizeof(x->props.smark));
+ 
+-	if (xfrm_init_state(x) < 0)
+-		goto error;
+-
+ 	x->props.flags = orig->props.flags;
+ 	x->props.extra_flags = orig->props.extra_flags;
+ 
+@@ -1605,7 +1602,8 @@ out:
+ 	return NULL;
+ }
+ 
+-struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net)
++struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net,
++						u32 if_id)
+ {
+ 	unsigned int h;
+ 	struct xfrm_state *x = NULL;
+@@ -1621,6 +1619,8 @@ struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *n
+ 				continue;
+ 			if (m->reqid && x->props.reqid != m->reqid)
+ 				continue;
++			if (if_id != 0 && x->if_id != if_id)
++				continue;
+ 			if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr,
+ 					     m->old_family) ||
+ 			    !xfrm_addr_equal(&x->props.saddr, &m->old_saddr,
+@@ -1636,6 +1636,8 @@ struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *n
+ 			if (x->props.mode != m->mode ||
+ 			    x->id.proto != m->proto)
+ 				continue;
++			if (if_id != 0 && x->if_id != if_id)
++				continue;
+ 			if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr,
+ 					     m->old_family) ||
+ 			    !xfrm_addr_equal(&x->props.saddr, &m->old_saddr,
+@@ -1662,6 +1664,11 @@ struct xfrm_state *xfrm_state_migrate(struct xfrm_state *x,
+ 	if (!xc)
+ 		return NULL;
+ 
++	xc->props.family = m->new_family;
++
++	if (xfrm_init_state(xc) < 0)
++		goto error;
++
+ 	memcpy(&xc->id.daddr, &m->new_daddr, sizeof(xc->id.daddr));
+ 	memcpy(&xc->props.saddr, &m->new_saddr, sizeof(xc->props.saddr));
+ 
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index c60441be883a8..a8c142bd12638 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -629,13 +629,8 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 
+ 	xfrm_smark_init(attrs, &x->props.smark);
+ 
+-	if (attrs[XFRMA_IF_ID]) {
++	if (attrs[XFRMA_IF_ID])
+ 		x->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+-		if (!x->if_id) {
+-			err = -EINVAL;
+-			goto error;
+-		}
+-	}
+ 
+ 	err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV]);
+ 	if (err)
+@@ -1431,13 +1426,8 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	mark = xfrm_mark_get(attrs, &m);
+ 
+-	if (attrs[XFRMA_IF_ID]) {
++	if (attrs[XFRMA_IF_ID])
+ 		if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+-		if (!if_id) {
+-			err = -EINVAL;
+-			goto out_noput;
+-		}
+-	}
+ 
+ 	if (p->info.seq) {
+ 		x = xfrm_find_acq_byseq(net, mark, p->info.seq);
+@@ -1750,13 +1740,8 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net, struct xfrm_us
+ 
+ 	xfrm_mark_get(attrs, &xp->mark);
+ 
+-	if (attrs[XFRMA_IF_ID]) {
++	if (attrs[XFRMA_IF_ID])
+ 		xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+-		if (!xp->if_id) {
+-			err = -EINVAL;
+-			goto error;
+-		}
+-	}
+ 
+ 	return xp;
+  error:
+@@ -2607,6 +2592,7 @@ static int xfrm_do_migrate(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	int n = 0;
+ 	struct net *net = sock_net(skb->sk);
+ 	struct xfrm_encap_tmpl  *encap = NULL;
++	u32 if_id = 0;
+ 
+ 	if (attrs[XFRMA_MIGRATE] == NULL)
+ 		return -EINVAL;
+@@ -2631,7 +2617,10 @@ static int xfrm_do_migrate(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 			return -ENOMEM;
+ 	}
+ 
+-	err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap);
++	if (attrs[XFRMA_IF_ID])
++		if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
++
++	err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap, if_id);
+ 
+ 	kfree(encap);
+ 
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index 9354a5e0321ce..3f7f5bd4e4c0d 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -46,6 +46,7 @@
+ #include <signal.h>
+ #include <poll.h>
+ #include <string.h>
++#include <linux/mman.h>
+ #include <sys/mman.h>
+ #include <sys/syscall.h>
+ #include <sys/ioctl.h>



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-23 11:51 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-23 11:51 UTC (permalink / raw
  To: gentoo-commits

commit:     4a77e532969e53f6bbef0b498eb7472e6901d0d5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 23 11:51:33 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 23 11:51:33 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4a77e532

Linux patch 5.16.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1016_linux-5.16.17.patch | 1244 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1248 insertions(+)

diff --git a/0000_README b/0000_README
index 2ecef651..dc43bdfb 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-5.16.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.16
 
+Patch:  1016_linux-5.16.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.16.17.patch b/1016_linux-5.16.17.patch
new file mode 100644
index 00000000..bca5ed0b
--- /dev/null
+++ b/1016_linux-5.16.17.patch
@@ -0,0 +1,1244 @@
+diff --git a/Makefile b/Makefile
+index d625d3aeab2e9..0ca39c16b3bfd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi
+index f891ef6a3754c..6050723172436 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1088a.dtsi
+@@ -241,18 +241,18 @@
+ 				interrupt-controller;
+ 				reg = <0x14 4>;
+ 				interrupt-map =
+-					<0 0 &gic 0 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+-					<1 0 &gic 0 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+-					<2 0 &gic 0 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
+-					<3 0 &gic 0 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
+-					<4 0 &gic 0 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
+-					<5 0 &gic 0 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+-					<6 0 &gic 0 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
+-					<7 0 &gic 0 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+-					<8 0 &gic 0 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+-					<9 0 &gic 0 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
+-					<10 0 &gic 0 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
+-					<11 0 &gic 0 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
++					<0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
++					<1 0 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
++					<2 0 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
++					<3 0 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
++					<4 0 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
++					<5 0 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
++					<6 0 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
++					<7 0 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
++					<8 0 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
++					<9 0 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
++					<10 0 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
++					<11 0 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-map-mask = <0xffffffff 0x0>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+index 3cb9c21d2775a..1282b61da8a55 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+@@ -293,18 +293,18 @@
+ 				interrupt-controller;
+ 				reg = <0x14 4>;
+ 				interrupt-map =
+-					<0 0 &gic 0 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+-					<1 0 &gic 0 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+-					<2 0 &gic 0 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
+-					<3 0 &gic 0 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
+-					<4 0 &gic 0 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
+-					<5 0 &gic 0 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+-					<6 0 &gic 0 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
+-					<7 0 &gic 0 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+-					<8 0 &gic 0 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+-					<9 0 &gic 0 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
+-					<10 0 &gic 0 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
+-					<11 0 &gic 0 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
++					<0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
++					<1 0 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
++					<2 0 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
++					<3 0 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
++					<4 0 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
++					<5 0 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
++					<6 0 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
++					<7 0 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
++					<8 0 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
++					<9 0 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
++					<10 0 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
++					<11 0 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-map-mask = <0xffffffff 0x0>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi b/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
+index 2433e6f2eda8b..51c4f61007cdb 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
+@@ -680,18 +680,18 @@
+ 				interrupt-controller;
+ 				reg = <0x14 4>;
+ 				interrupt-map =
+-					<0 0 &gic 0 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+-					<1 0 &gic 0 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+-					<2 0 &gic 0 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
+-					<3 0 &gic 0 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
+-					<4 0 &gic 0 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
+-					<5 0 &gic 0 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+-					<6 0 &gic 0 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
+-					<7 0 &gic 0 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+-					<8 0 &gic 0 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+-					<9 0 &gic 0 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
+-					<10 0 &gic 0 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
+-					<11 0 &gic 0 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
++					<0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
++					<1 0 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
++					<2 0 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
++					<3 0 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
++					<4 0 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
++					<5 0 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
++					<6 0 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
++					<7 0 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
++					<8 0 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
++					<9 0 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
++					<10 0 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
++					<11 0 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-map-mask = <0xffffffff 0x0>;
+ 			};
+ 		};
+diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
+index f64613a96d530..bc9a2145f4194 100644
+--- a/arch/arm64/include/asm/vectors.h
++++ b/arch/arm64/include/asm/vectors.h
+@@ -56,14 +56,14 @@ enum arm64_bp_harden_el1_vectors {
+ DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
+ 
+ #ifndef CONFIG_UNMAP_KERNEL_AT_EL0
+-#define TRAMP_VALIAS	0
++#define TRAMP_VALIAS	0ul
+ #endif
+ 
+ static inline const char *
+ arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
+ {
+ 	if (arm64_kernel_unmapped_at_el0())
+-		return (char *)TRAMP_VALIAS + SZ_2K * slot;
++		return (char *)(TRAMP_VALIAS + SZ_2K * slot);
+ 
+ 	WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);
+ 
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index a401180e8d66c..146fa2e76834d 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -611,7 +611,6 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 	{
+ 		.desc = "ARM erratum 2077057",
+ 		.capability = ARM64_WORKAROUND_2077057,
+-		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ 		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2),
+ 	},
+ #endif
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 5adca3a9cebea..94a551e7eed7f 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -51,6 +51,7 @@
+ #include "blk-mq-sched.h"
+ #include "blk-pm.h"
+ #include "blk-throttle.h"
++#include "blk-rq-qos.h"
+ 
+ struct dentry *blk_debugfs_root;
+ 
+@@ -354,6 +355,9 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 */
+ 	blk_freeze_queue(q);
+ 
++	/* cleanup rq qos structures for queue without disk */
++	rq_qos_exit(q);
++
+ 	blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
+ 
+ 	blk_sync_queue(q);
+diff --git a/drivers/atm/eni.c b/drivers/atm/eni.c
+index 422753d52244b..a31ffe16e626f 100644
+--- a/drivers/atm/eni.c
++++ b/drivers/atm/eni.c
+@@ -1112,6 +1112,8 @@ DPRINTK("iovcnt = %d\n",skb_shinfo(skb)->nr_frags);
+ 	skb_data3 = skb->data[3];
+ 	paddr = dma_map_single(&eni_dev->pci_dev->dev,skb->data,skb->len,
+ 			       DMA_TO_DEVICE);
++	if (dma_mapping_error(&eni_dev->pci_dev->dev, paddr))
++		return enq_next;
+ 	ENI_PRV_PADDR(skb) = paddr;
+ 	/* prepare DMA queue entries */
+ 	j = 0;
+diff --git a/drivers/crypto/qcom-rng.c b/drivers/crypto/qcom-rng.c
+index 99ba8d51d1020..11f30fd48c141 100644
+--- a/drivers/crypto/qcom-rng.c
++++ b/drivers/crypto/qcom-rng.c
+@@ -8,6 +8,7 @@
+ #include <linux/clk.h>
+ #include <linux/crypto.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+@@ -43,16 +44,19 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
+ {
+ 	unsigned int currsize = 0;
+ 	u32 val;
++	int ret;
+ 
+ 	/* read random data from hardware */
+ 	do {
+-		val = readl_relaxed(rng->base + PRNG_STATUS);
+-		if (!(val & PRNG_STATUS_DATA_AVAIL))
+-			break;
++		ret = readl_poll_timeout(rng->base + PRNG_STATUS, val,
++					 val & PRNG_STATUS_DATA_AVAIL,
++					 200, 10000);
++		if (ret)
++			return ret;
+ 
+ 		val = readl_relaxed(rng->base + PRNG_DATA_OUT);
+ 		if (!val)
+-			break;
++			return -EINVAL;
+ 
+ 		if ((max - currsize) >= WORD_SZ) {
+ 			memcpy(data, &val, WORD_SZ);
+@@ -61,11 +65,10 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
+ 		} else {
+ 			/* copy only remaining bytes */
+ 			memcpy(data, &val, max - currsize);
+-			break;
+ 		}
+ 	} while (currsize < max);
+ 
+-	return currsize;
++	return 0;
+ }
+ 
+ static int qcom_rng_generate(struct crypto_rng *tfm,
+@@ -87,7 +90,7 @@ static int qcom_rng_generate(struct crypto_rng *tfm,
+ 	mutex_unlock(&rng->lock);
+ 	clk_disable_unprepare(rng->clk);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int qcom_rng_seed(struct crypto_rng *tfm, const u8 *seed,
+diff --git a/drivers/firmware/efi/apple-properties.c b/drivers/firmware/efi/apple-properties.c
+index 4c3201e290e29..ea84108035eb0 100644
+--- a/drivers/firmware/efi/apple-properties.c
++++ b/drivers/firmware/efi/apple-properties.c
+@@ -24,7 +24,7 @@ static bool dump_properties __initdata;
+ static int __init dump_properties_enable(char *arg)
+ {
+ 	dump_properties = true;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("dump_apple_properties", dump_properties_enable);
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 7de3f5b6e8d0a..5502e176d51be 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -212,7 +212,7 @@ static int __init efivar_ssdt_setup(char *str)
+ 		memcpy(efivar_ssdt, str, strlen(str));
+ 	else
+ 		pr_warn("efivar_ssdt: name too long: %s\n", str);
+-	return 0;
++	return 1;
+ }
+ __setup("efivar_ssdt=", efivar_ssdt_setup);
+ 
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index 431b6e12a81fe..68ec45abc1fbf 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -8,7 +8,6 @@ config DRM_BRIDGE
+ config DRM_PANEL_BRIDGE
+ 	def_bool y
+ 	depends on DRM_BRIDGE
+-	depends on DRM_KMS_HELPER
+ 	select DRM_PANEL
+ 	help
+ 	  DRM bridge wrapper of DRM panels
+@@ -30,6 +29,7 @@ config DRM_CDNS_DSI
+ config DRM_CHIPONE_ICN6211
+ 	tristate "Chipone ICN6211 MIPI-DSI/RGB Converter bridge"
+ 	depends on OF
++	select DRM_KMS_HELPER
+ 	select DRM_MIPI_DSI
+ 	select DRM_PANEL_BRIDGE
+ 	help
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index a8aba0141ce71..06cb1a59b9bcd 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -217,14 +217,6 @@ static int imx_pd_bridge_atomic_check(struct drm_bridge *bridge,
+ 	if (!imx_pd_format_supported(bus_fmt))
+ 		return -EINVAL;
+ 
+-	if (bus_flags &
+-	    ~(DRM_BUS_FLAG_DE_LOW | DRM_BUS_FLAG_DE_HIGH |
+-	      DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE |
+-	      DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)) {
+-		dev_warn(imxpd->dev, "invalid bus_flags (%x)\n", bus_flags);
+-		return -EINVAL;
+-	}
+-
+ 	bridge_state->output_bus_cfg.flags = bus_flags;
+ 	bridge_state->input_bus_cfg.flags = bus_flags;
+ 	imx_crtc_state->bus_flags = bus_flags;
+diff --git a/drivers/gpu/drm/mgag200/mgag200_pll.c b/drivers/gpu/drm/mgag200/mgag200_pll.c
+index e9ae22b4f8138..52be08b744ade 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_pll.c
++++ b/drivers/gpu/drm/mgag200/mgag200_pll.c
+@@ -404,9 +404,9 @@ mgag200_pixpll_update_g200wb(struct mgag200_pll *pixpll, const struct mgag200_pl
+ 		udelay(50);
+ 
+ 		/* program pixel pll register */
+-		WREG_DAC(MGA1064_PIX_PLLC_N, xpixpllcn);
+-		WREG_DAC(MGA1064_PIX_PLLC_M, xpixpllcm);
+-		WREG_DAC(MGA1064_PIX_PLLC_P, xpixpllcp);
++		WREG_DAC(MGA1064_WB_PIX_PLLC_N, xpixpllcn);
++		WREG_DAC(MGA1064_WB_PIX_PLLC_M, xpixpllcm);
++		WREG_DAC(MGA1064_WB_PIX_PLLC_P, xpixpllcp);
+ 
+ 		udelay(50);
+ 
+diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig
+index 0d3798354e6a7..42011d8842023 100644
+--- a/drivers/gpu/drm/panel/Kconfig
++++ b/drivers/gpu/drm/panel/Kconfig
+@@ -96,6 +96,7 @@ config DRM_PANEL_EDP
+ 	select VIDEOMODE_HELPERS
+ 	select DRM_DP_AUX_BUS
+ 	select DRM_DP_HELPER
++	select DRM_KMS_HELPER
+ 	help
+ 	  DRM panel driver for dumb eDP panels that need at most a regulator and
+ 	  a GPIO to be powered up. Optionally a backlight can be attached so
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 87f30bced7b7e..6a820b698f5a3 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2017,7 +2017,7 @@ static const struct display_timing innolux_g070y2_l01_timing = {
+ static const struct panel_desc innolux_g070y2_l01 = {
+ 	.timings = &innolux_g070y2_l01_timing,
+ 	.num_timings = 1,
+-	.bpc = 6,
++	.bpc = 8,
+ 	.size = {
+ 		.width = 152,
+ 		.height = 91,
+diff --git a/drivers/input/tablet/aiptek.c b/drivers/input/tablet/aiptek.c
+index fcb1b646436a5..1581f6ef09279 100644
+--- a/drivers/input/tablet/aiptek.c
++++ b/drivers/input/tablet/aiptek.c
+@@ -1787,15 +1787,13 @@ aiptek_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	input_set_abs_params(inputdev, ABS_TILT_Y, AIPTEK_TILT_MIN, AIPTEK_TILT_MAX, 0, 0);
+ 	input_set_abs_params(inputdev, ABS_WHEEL, AIPTEK_WHEEL_MIN, AIPTEK_WHEEL_MAX - 1, 0, 0);
+ 
+-	/* Verify that a device really has an endpoint */
+-	if (intf->cur_altsetting->desc.bNumEndpoints < 1) {
++	err = usb_find_common_endpoints(intf->cur_altsetting,
++					NULL, NULL, &endpoint, NULL);
++	if (err) {
+ 		dev_err(&intf->dev,
+-			"interface has %d endpoints, but must have minimum 1\n",
+-			intf->cur_altsetting->desc.bNumEndpoints);
+-		err = -EINVAL;
++			"interface has no int in endpoints, but must have minimum 1\n");
+ 		goto fail3;
+ 	}
+-	endpoint = &intf->cur_altsetting->endpoint[0].desc;
+ 
+ 	/* Go set up our URB, which is called when the tablet receives
+ 	 * input.
+diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
+index 4ad3fc72e74e3..a89b93cb4e26d 100644
+--- a/drivers/net/ethernet/atheros/alx/main.c
++++ b/drivers/net/ethernet/atheros/alx/main.c
+@@ -1181,8 +1181,11 @@ static int alx_change_mtu(struct net_device *netdev, int mtu)
+ 	alx->hw.mtu = mtu;
+ 	alx->rxbuf_size = max(max_frame, ALX_DEF_RXBUF_SIZE);
+ 	netdev_update_features(netdev);
+-	if (netif_running(netdev))
++	if (netif_running(netdev)) {
++		mutex_lock(&alx->mtx);
+ 		alx_reinit(alx);
++		mutex_unlock(&alx->mtx);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index a19dd6797070c..2209d99b34047 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -2533,6 +2533,4 @@ void bnx2x_register_phc(struct bnx2x *bp);
+  * Meant for implicit re-load flows.
+  */
+ int bnx2x_vlan_reconfigure_vid(struct bnx2x *bp);
+-int bnx2x_init_firmware(struct bnx2x *bp);
+-void bnx2x_release_firmware(struct bnx2x *bp);
+ #endif /* bnx2x.h */
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index e57fe0034ce2a..b1ad627748971 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -2363,24 +2363,30 @@ int bnx2x_compare_fw_ver(struct bnx2x *bp, u32 load_code, bool print_err)
+ 	/* is another pf loaded on this engine? */
+ 	if (load_code != FW_MSG_CODE_DRV_LOAD_COMMON_CHIP &&
+ 	    load_code != FW_MSG_CODE_DRV_LOAD_COMMON) {
+-		/* build my FW version dword */
+-		u32 my_fw = (bp->fw_major) + (bp->fw_minor << 8) +
+-				(bp->fw_rev << 16) + (bp->fw_eng << 24);
++		u8 loaded_fw_major, loaded_fw_minor, loaded_fw_rev, loaded_fw_eng;
++		u32 loaded_fw;
+ 
+ 		/* read loaded FW from chip */
+-		u32 loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
++		loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
+ 
+-		DP(BNX2X_MSG_SP, "loaded fw %x, my fw %x\n",
+-		   loaded_fw, my_fw);
++		loaded_fw_major = loaded_fw & 0xff;
++		loaded_fw_minor = (loaded_fw >> 8) & 0xff;
++		loaded_fw_rev = (loaded_fw >> 16) & 0xff;
++		loaded_fw_eng = (loaded_fw >> 24) & 0xff;
++
++		DP(BNX2X_MSG_SP, "loaded fw 0x%x major 0x%x minor 0x%x rev 0x%x eng 0x%x\n",
++		   loaded_fw, loaded_fw_major, loaded_fw_minor, loaded_fw_rev, loaded_fw_eng);
+ 
+ 		/* abort nic load if version mismatch */
+-		if (my_fw != loaded_fw) {
++		if (loaded_fw_major != BCM_5710_FW_MAJOR_VERSION ||
++		    loaded_fw_minor != BCM_5710_FW_MINOR_VERSION ||
++		    loaded_fw_eng != BCM_5710_FW_ENGINEERING_VERSION ||
++		    loaded_fw_rev < BCM_5710_FW_REVISION_VERSION_V15) {
+ 			if (print_err)
+-				BNX2X_ERR("bnx2x with FW %x was already loaded which mismatches my %x FW. Aborting\n",
+-					  loaded_fw, my_fw);
++				BNX2X_ERR("loaded FW incompatible. Aborting\n");
+ 			else
+-				BNX2X_DEV_INFO("bnx2x with FW %x was already loaded which mismatches my %x FW, possibly due to MF UNDI\n",
+-					       loaded_fw, my_fw);
++				BNX2X_DEV_INFO("loaded FW incompatible, possibly due to MF UNDI\n");
++
+ 			return -EBUSY;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 4ce596daeaae3..569004c961d47 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -12319,15 +12319,6 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 	bnx2x_read_fwinfo(bp);
+ 
+-	if (IS_PF(bp)) {
+-		rc = bnx2x_init_firmware(bp);
+-
+-		if (rc) {
+-			bnx2x_free_mem_bp(bp);
+-			return rc;
+-		}
+-	}
+-
+ 	func = BP_FUNC(bp);
+ 
+ 	/* need to reset chip if undi was active */
+@@ -12340,7 +12331,6 @@ static int bnx2x_init_bp(struct bnx2x *bp)
+ 
+ 		rc = bnx2x_prev_unload(bp);
+ 		if (rc) {
+-			bnx2x_release_firmware(bp);
+ 			bnx2x_free_mem_bp(bp);
+ 			return rc;
+ 		}
+@@ -13420,7 +13410,7 @@ do {									\
+ 	     (u8 *)bp->arr, len);					\
+ } while (0)
+ 
+-int bnx2x_init_firmware(struct bnx2x *bp)
++static int bnx2x_init_firmware(struct bnx2x *bp)
+ {
+ 	const char *fw_file_name, *fw_file_name_v15;
+ 	struct bnx2x_fw_file_hdr *fw_hdr;
+@@ -13520,7 +13510,7 @@ request_firmware_exit:
+ 	return rc;
+ }
+ 
+-void bnx2x_release_firmware(struct bnx2x *bp)
++static void bnx2x_release_firmware(struct bnx2x *bp)
+ {
+ 	kfree(bp->init_ops_offsets);
+ 	kfree(bp->init_ops);
+@@ -14037,7 +14027,6 @@ static int bnx2x_init_one(struct pci_dev *pdev,
+ 	return 0;
+ 
+ init_one_freemem:
+-	bnx2x_release_firmware(bp);
+ 	bnx2x_free_mem_bp(bp);
+ 
+ init_one_exit:
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 87f1056e29ff2..2da804f84b480 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2287,8 +2287,10 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
+ 		dma_length_status = status->length_status;
+ 		if (dev->features & NETIF_F_RXCSUM) {
+ 			rx_csum = (__force __be16)(status->rx_csum & 0xffff);
+-			skb->csum = (__force __wsum)ntohs(rx_csum);
+-			skb->ip_summed = CHECKSUM_COMPLETE;
++			if (rx_csum) {
++				skb->csum = (__force __wsum)ntohs(rx_csum);
++				skb->ip_summed = CHECKSUM_COMPLETE;
++			}
+ 		}
+ 
+ 		/* DMA flags and length are still valid no matter how
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 138db07bdfa8e..42e6675078cd1 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2133,6 +2133,13 @@ restart_watchdog:
+ 		queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ * 2);
+ }
+ 
++/**
++ * iavf_disable_vf - disable VF
++ * @adapter: board private structure
++ *
++ * Set communication failed flag and free all resources.
++ * NOTE: This function is expected to be called with crit_lock being held.
++ **/
+ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ {
+ 	struct iavf_mac_filter *f, *ftmp;
+@@ -2187,7 +2194,6 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ 	memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
+ 	iavf_shutdown_adminq(&adapter->hw);
+ 	adapter->netdev->flags &= ~IFF_UP;
+-	mutex_unlock(&adapter->crit_lock);
+ 	adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
+ 	iavf_change_state(adapter, __IAVF_DOWN);
+ 	wake_up(&adapter->down_waitqueue);
+@@ -4022,6 +4028,13 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	int err;
+ 
++	/* When reboot/shutdown is in progress no need to do anything
++	 * as the adapter is already REMOVE state that was set during
++	 * iavf_shutdown() callback.
++	 */
++	if (adapter->state == __IAVF_REMOVE)
++		return;
++
+ 	set_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section);
+ 	/* Wait until port initialization is complete.
+ 	 * There are flows where register/unregister netdev may race.
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 8a6c3716cdabd..b449b3408a1cb 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5972,8 +5972,9 @@ ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi,
+ 		u64 pkts = 0, bytes = 0;
+ 
+ 		ring = READ_ONCE(rings[i]);
+-		if (ring)
+-			ice_fetch_u64_stats_per_ring(&ring->syncp, ring->stats, &pkts, &bytes);
++		if (!ring)
++			continue;
++		ice_fetch_u64_stats_per_ring(&ring->syncp, ring->stats, &pkts, &bytes);
+ 		vsi_stats->tx_packets += pkts;
+ 		vsi_stats->tx_bytes += bytes;
+ 		vsi->tx_restart += ring->tx_stats.restart_q;
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index 738dd2be79dcf..a3f1f1fbaf7dc 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -54,6 +54,12 @@ static int ocelot_chain_to_block(int chain, bool ingress)
+  */
+ static int ocelot_chain_to_lookup(int chain)
+ {
++	/* Backwards compatibility with older, single-chain tc-flower
++	 * offload support in Ocelot
++	 */
++	if (chain == 0)
++		return 0;
++
+ 	return (chain / VCAP_LOOKUP) % 10;
+ }
+ 
+@@ -62,7 +68,15 @@ static int ocelot_chain_to_lookup(int chain)
+  */
+ static int ocelot_chain_to_pag(int chain)
+ {
+-	int lookup = ocelot_chain_to_lookup(chain);
++	int lookup;
++
++	/* Backwards compatibility with older, single-chain tc-flower
++	 * offload support in Ocelot
++	 */
++	if (chain == 0)
++		return 0;
++
++	lookup = ocelot_chain_to_lookup(chain);
+ 
+ 	/* calculate PAG value as chain index relative to the first PAG */
+ 	return chain - VCAP_IS2_CHAIN(lookup, 0);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 7e66ae1d2a59b..7117187dfbd7a 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -1587,6 +1587,9 @@ static void netvsc_get_ethtool_stats(struct net_device *dev,
+ 	pcpu_sum = kvmalloc_array(num_possible_cpus(),
+ 				  sizeof(struct netvsc_ethtool_pcpu_stats),
+ 				  GFP_KERNEL);
++	if (!pcpu_sum)
++		return;
++
+ 	netvsc_get_pcpu_stats(dev, pcpu_sum);
+ 	for_each_present_cpu(cpu) {
+ 		struct netvsc_ethtool_pcpu_stats *this_sum = &pcpu_sum[cpu];
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index cfda625dbea57..4d726ee03ce20 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -1693,8 +1693,8 @@ static int marvell_suspend(struct phy_device *phydev)
+ 	int err;
+ 
+ 	/* Suspend the fiber mode first */
+-	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+-			       phydev->supported)) {
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
++			      phydev->supported)) {
+ 		err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE);
+ 		if (err < 0)
+ 			goto error;
+@@ -1728,8 +1728,8 @@ static int marvell_resume(struct phy_device *phydev)
+ 	int err;
+ 
+ 	/* Resume the fiber mode first */
+-	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+-			       phydev->supported)) {
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
++			      phydev->supported)) {
+ 		err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE);
+ 		if (err < 0)
+ 			goto error;
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index ebfeeb3c67c1d..7e3017e7a1c03 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -2685,3 +2685,6 @@ MODULE_DEVICE_TABLE(mdio, vsc85xx_tbl);
+ MODULE_DESCRIPTION("Microsemi VSC85xx PHY driver");
+ MODULE_AUTHOR("Nagaraju Lakkaraju");
+ MODULE_LICENSE("Dual MIT/GPL");
++
++MODULE_FIRMWARE(MSCC_VSC8584_REVB_INT8051_FW);
++MODULE_FIRMWARE(MSCC_VSC8574_REVB_INT8051_FW);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 4733fd7fb169e..7c1c2658cb5f8 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -2611,30 +2611,9 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
+ 		ath10k_mac_handle_beacon(ar, skb);
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control) ||
+-	    ieee80211_is_probe_resp(hdr->frame_control)) {
+-		struct ieee80211_mgmt *mgmt = (void *)skb->data;
+-		u8 *ies;
+-		int ies_ch;
+-
++	    ieee80211_is_probe_resp(hdr->frame_control))
+ 		status->boottime_ns = ktime_get_boottime_ns();
+ 
+-		if (!ar->scan_channel)
+-			goto drop;
+-
+-		ies = mgmt->u.beacon.variable;
+-
+-		ies_ch = cfg80211_get_ies_channel_number(mgmt->u.beacon.variable,
+-							 skb_tail_pointer(skb) - ies,
+-							 sband->band);
+-
+-		if (ies_ch > 0 && ies_ch != channel) {
+-			ath10k_dbg(ar, ATH10K_DBG_MGMT,
+-				   "channel mismatched ds channel %d scan channel %d\n",
+-				   ies_ch, channel);
+-			goto drop;
+-		}
+-	}
+-
+ 	ath10k_dbg(ar, ATH10K_DBG_MGMT,
+ 		   "event mgmt rx skb %pK len %d ftype %02x stype %02x\n",
+ 		   skb, skb->len,
+@@ -2648,10 +2627,6 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
+ 	ieee80211_rx_ni(ar->hw, skb);
+ 
+ 	return 0;
+-
+-drop:
+-	dev_kfree_skb(skb);
+-	return 0;
+ }
+ 
+ static int freq_to_idx(struct ath10k *ar, int freq)
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 091a0ca16361c..496d775c67707 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -1233,44 +1233,6 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
+ }
+ CONFIGFS_ATTR(nvmet_subsys_, attr_model);
+ 
+-static ssize_t nvmet_subsys_attr_discovery_nqn_show(struct config_item *item,
+-			char *page)
+-{
+-	return snprintf(page, PAGE_SIZE, "%s\n",
+-			nvmet_disc_subsys->subsysnqn);
+-}
+-
+-static ssize_t nvmet_subsys_attr_discovery_nqn_store(struct config_item *item,
+-			const char *page, size_t count)
+-{
+-	struct nvmet_subsys *subsys = to_subsys(item);
+-	char *subsysnqn;
+-	int len;
+-
+-	len = strcspn(page, "\n");
+-	if (!len)
+-		return -EINVAL;
+-
+-	subsysnqn = kmemdup_nul(page, len, GFP_KERNEL);
+-	if (!subsysnqn)
+-		return -ENOMEM;
+-
+-	/*
+-	 * The discovery NQN must be different from subsystem NQN.
+-	 */
+-	if (!strcmp(subsysnqn, subsys->subsysnqn)) {
+-		kfree(subsysnqn);
+-		return -EBUSY;
+-	}
+-	down_write(&nvmet_config_sem);
+-	kfree(nvmet_disc_subsys->subsysnqn);
+-	nvmet_disc_subsys->subsysnqn = subsysnqn;
+-	up_write(&nvmet_config_sem);
+-
+-	return count;
+-}
+-CONFIGFS_ATTR(nvmet_subsys_, attr_discovery_nqn);
+-
+ #ifdef CONFIG_BLK_DEV_INTEGRITY
+ static ssize_t nvmet_subsys_attr_pi_enable_show(struct config_item *item,
+ 						char *page)
+@@ -1300,7 +1262,6 @@ static struct configfs_attribute *nvmet_subsys_attrs[] = {
+ 	&nvmet_subsys_attr_attr_cntlid_min,
+ 	&nvmet_subsys_attr_attr_cntlid_max,
+ 	&nvmet_subsys_attr_attr_model,
+-	&nvmet_subsys_attr_attr_discovery_nqn,
+ #ifdef CONFIG_BLK_DEV_INTEGRITY
+ 	&nvmet_subsys_attr_attr_pi_enable,
+ #endif
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 5119c687de683..626caf6f1e4b4 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1493,8 +1493,7 @@ static struct nvmet_subsys *nvmet_find_get_subsys(struct nvmet_port *port,
+ 	if (!port)
+ 		return NULL;
+ 
+-	if (!strcmp(NVME_DISC_SUBSYS_NAME, subsysnqn) ||
+-	    !strcmp(nvmet_disc_subsys->subsysnqn, subsysnqn)) {
++	if (!strcmp(NVME_DISC_SUBSYS_NAME, subsysnqn)) {
+ 		if (!kref_get_unless_zero(&nvmet_disc_subsys->ref))
+ 			return NULL;
+ 		return nvmet_disc_subsys;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 81dab9b82f79f..0d37c4aca175c 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -2011,9 +2011,10 @@ mpt3sas_base_sync_reply_irqs(struct MPT3SAS_ADAPTER *ioc, u8 poll)
+ 				enable_irq(reply_q->os_irq);
+ 			}
+ 		}
++
++		if (poll)
++			_base_process_reply_queue(reply_q);
+ 	}
+-	if (poll)
+-		_base_process_reply_queue(reply_q);
+ }
+ 
+ /**
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 73f419adce610..4bb6d304eb4b2 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -1919,6 +1919,7 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 	struct usbtmc_ctrlrequest request;
+ 	u8 *buffer = NULL;
+ 	int rv;
++	unsigned int is_in, pipe;
+ 	unsigned long res;
+ 
+ 	res = copy_from_user(&request, arg, sizeof(struct usbtmc_ctrlrequest));
+@@ -1928,12 +1929,14 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 	if (request.req.wLength > USBTMC_BUFSIZE)
+ 		return -EMSGSIZE;
+ 
++	is_in = request.req.bRequestType & USB_DIR_IN;
++
+ 	if (request.req.wLength) {
+ 		buffer = kmalloc(request.req.wLength, GFP_KERNEL);
+ 		if (!buffer)
+ 			return -ENOMEM;
+ 
+-		if ((request.req.bRequestType & USB_DIR_IN) == 0) {
++		if (!is_in) {
+ 			/* Send control data to device */
+ 			res = copy_from_user(buffer, request.data,
+ 					     request.req.wLength);
+@@ -1944,8 +1947,12 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 		}
+ 	}
+ 
++	if (is_in)
++		pipe = usb_rcvctrlpipe(data->usb_dev, 0);
++	else
++		pipe = usb_sndctrlpipe(data->usb_dev, 0);
+ 	rv = usb_control_msg(data->usb_dev,
+-			usb_rcvctrlpipe(data->usb_dev, 0),
++			pipe,
+ 			request.req.bRequest,
+ 			request.req.bRequestType,
+ 			request.req.wValue,
+@@ -1957,7 +1964,7 @@ static int usbtmc_ioctl_request(struct usbtmc_device_data *data,
+ 		goto exit;
+ 	}
+ 
+-	if (rv && (request.req.bRequestType & USB_DIR_IN)) {
++	if (rv && is_in) {
+ 		/* Read control data from device */
+ 		res = copy_to_user(request.data, buffer, rv);
+ 		if (res)
+diff --git a/drivers/usb/gadget/function/rndis.c b/drivers/usb/gadget/function/rndis.c
+index 0f14c5291af07..4150de96b937a 100644
+--- a/drivers/usb/gadget/function/rndis.c
++++ b/drivers/usb/gadget/function/rndis.c
+@@ -640,6 +640,7 @@ static int rndis_set_response(struct rndis_params *params,
+ 	BufLength = le32_to_cpu(buf->InformationBufferLength);
+ 	BufOffset = le32_to_cpu(buf->InformationBufferOffset);
+ 	if ((BufLength > RNDIS_MAX_TOTAL_SIZE) ||
++	    (BufOffset > RNDIS_MAX_TOTAL_SIZE) ||
+ 	    (BufOffset + 8 >= RNDIS_MAX_TOTAL_SIZE))
+ 		    return -EINVAL;
+ 
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 568534a0d17c8..c109b069f511f 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1436,7 +1436,6 @@ static void usb_gadget_remove_driver(struct usb_udc *udc)
+ 	usb_gadget_udc_stop(udc);
+ 
+ 	udc->driver = NULL;
+-	udc->dev.driver = NULL;
+ 	udc->gadget->dev.driver = NULL;
+ }
+ 
+@@ -1498,7 +1497,6 @@ static int udc_bind_to_driver(struct usb_udc *udc, struct usb_gadget_driver *dri
+ 			driver->function);
+ 
+ 	udc->driver = driver;
+-	udc->dev.driver = &driver->driver;
+ 	udc->gadget->dev.driver = &driver->driver;
+ 
+ 	usb_gadget_udc_set_speed(udc, driver->max_speed);
+@@ -1521,7 +1519,6 @@ err1:
+ 		dev_err(&udc->dev, "failed to start %s: %d\n",
+ 			udc->driver->function, ret);
+ 	udc->driver = NULL;
+-	udc->dev.driver = NULL;
+ 	udc->gadget->dev.driver = NULL;
+ 	return ret;
+ }
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index 37f0b4274113c..e6c9d41db1de7 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -753,7 +753,8 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
+ 
+ 	/* Iterating over all connections for all CIDs to find orphans is
+ 	 * inefficient.  Room for improvement here. */
+-	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
++	vsock_for_each_connected_socket(&vhost_transport.transport,
++					vhost_vsock_reset_orphans);
+ 
+ 	/* Don't check the owner, because we are in the release path, so we
+ 	 * need to stop the vsock device in any case.
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index c4c8db2ffa843..05c606fb39685 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -124,7 +124,16 @@ void btrfs_put_block_group(struct btrfs_block_group *cache)
+ {
+ 	if (refcount_dec_and_test(&cache->refs)) {
+ 		WARN_ON(cache->pinned > 0);
+-		WARN_ON(cache->reserved > 0);
++		/*
++		 * If there was a failure to cleanup a log tree, very likely due
++		 * to an IO failure on a writeback attempt of one or more of its
++		 * extent buffers, we could not do proper (and cheap) unaccounting
++		 * of their reserved space, so don't warn on reserved > 0 in that
++		 * case.
++		 */
++		if (!(cache->flags & BTRFS_BLOCK_GROUP_METADATA) ||
++		    !BTRFS_FS_LOG_CLEANUP_ERROR(cache->fs_info))
++			WARN_ON(cache->reserved > 0);
+ 
+ 		/*
+ 		 * A block_group shouldn't be on the discard_list anymore.
+@@ -3987,9 +3996,22 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
+ 		 * important and indicates a real bug if this happens.
+ 		 */
+ 		if (WARN_ON(space_info->bytes_pinned > 0 ||
+-			    space_info->bytes_reserved > 0 ||
+ 			    space_info->bytes_may_use > 0))
+ 			btrfs_dump_space_info(info, space_info, 0, 0);
++
++		/*
++		 * If there was a failure to cleanup a log tree, very likely due
++		 * to an IO failure on a writeback attempt of one or more of its
++		 * extent buffers, we could not do proper (and cheap) unaccounting
++		 * of their reserved space, so don't warn on bytes_reserved > 0 in
++		 * that case.
++		 */
++		if (!(space_info->flags & BTRFS_BLOCK_GROUP_METADATA) ||
++		    !BTRFS_FS_LOG_CLEANUP_ERROR(info)) {
++			if (WARN_ON(space_info->bytes_reserved > 0))
++				btrfs_dump_space_info(info, space_info, 0, 0);
++		}
++
+ 		WARN_ON(space_info->reclaim_size > 0);
+ 		list_del(&space_info->list);
+ 		btrfs_sysfs_remove_space_info(space_info);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index b18939863c29c..cd8e0f9ae82ca 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -143,6 +143,9 @@ enum {
+ 	BTRFS_FS_STATE_DEV_REPLACING,
+ 	/* The btrfs_fs_info created for self-tests */
+ 	BTRFS_FS_STATE_DUMMY_FS_INFO,
++
++	/* Indicates there was an error cleaning up a log tree. */
++	BTRFS_FS_STATE_LOG_CLEANUP_ERROR,
+ };
+ 
+ #define BTRFS_BACKREF_REV_MAX		256
+@@ -3585,6 +3588,10 @@ static inline void btrfs_print_v0_err(struct btrfs_fs_info *fs_info)
+ "Unsupported V0 extent filesystem detected. Aborting. Please re-create your filesystem with a newer kernel");
+ }
+ 
++#define BTRFS_FS_LOG_CLEANUP_ERROR(fs_info)				\
++	(unlikely(test_bit(BTRFS_FS_STATE_LOG_CLEANUP_ERROR,		\
++			   &(fs_info)->fs_state)))
++
+ __printf(5, 6)
+ __cold
+ void __btrfs_handle_fs_error(struct btrfs_fs_info *fs_info, const char *function,
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f5371c4653cd6..90d5aa11e4178 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3449,6 +3449,29 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
+ 	if (log->node) {
+ 		ret = walk_log_tree(trans, log, &wc);
+ 		if (ret) {
++			/*
++			 * We weren't able to traverse the entire log tree, the
++			 * typical scenario is getting an -EIO when reading an
++			 * extent buffer of the tree, due to a previous writeback
++			 * failure of it.
++			 */
++			set_bit(BTRFS_FS_STATE_LOG_CLEANUP_ERROR,
++				&log->fs_info->fs_state);
++
++			/*
++			 * Some extent buffers of the log tree may still be dirty
++			 * and not yet written back to storage, because we may
++			 * have updates to a log tree without syncing a log tree,
++			 * such as during rename and link operations. So flush
++			 * them out and wait for their writeback to complete, so
++			 * that we properly cleanup their state and pages.
++			 */
++			btrfs_write_marked_extents(log->fs_info,
++						   &log->dirty_log_pages,
++						   EXTENT_DIRTY | EXTENT_NEW);
++			btrfs_wait_tree_log_extents(log,
++						    EXTENT_DIRTY | EXTENT_NEW);
++
+ 			if (trans)
+ 				btrfs_abort_transaction(trans, ret);
+ 			else
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 1286b88b6fa17..35c674041337d 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -1106,17 +1106,6 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto read_super_error;
+ 	}
+ 
+-	root = d_make_root(inode);
+-	if (!root) {
+-		status = -ENOMEM;
+-		mlog_errno(status);
+-		goto read_super_error;
+-	}
+-
+-	sb->s_root = root;
+-
+-	ocfs2_complete_mount_recovery(osb);
+-
+ 	osb->osb_dev_kset = kset_create_and_add(sb->s_id, NULL,
+ 						&ocfs2_kset->kobj);
+ 	if (!osb->osb_dev_kset) {
+@@ -1134,6 +1123,17 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto read_super_error;
+ 	}
+ 
++	root = d_make_root(inode);
++	if (!root) {
++		status = -ENOMEM;
++		mlog_errno(status);
++		goto read_super_error;
++	}
++
++	sb->s_root = root;
++
++	ocfs2_complete_mount_recovery(osb);
++
+ 	if (ocfs2_mount_local(osb))
+ 		snprintf(nodestr, sizeof(nodestr), "local");
+ 	else
+diff --git a/include/linux/if_arp.h b/include/linux/if_arp.h
+index b712217f70304..1ed52441972f9 100644
+--- a/include/linux/if_arp.h
++++ b/include/linux/if_arp.h
+@@ -52,6 +52,7 @@ static inline bool dev_is_mac_header_xmit(const struct net_device *dev)
+ 	case ARPHRD_VOID:
+ 	case ARPHRD_NONE:
+ 	case ARPHRD_RAWIP:
++	case ARPHRD_PIMREG:
+ 		return false;
+ 	default:
+ 		return true;
+diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
+index ab207677e0a8b..f742e50207fbd 100644
+--- a/include/net/af_vsock.h
++++ b/include/net/af_vsock.h
+@@ -205,7 +205,8 @@ struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr);
+ struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
+ 					 struct sockaddr_vm *dst);
+ void vsock_remove_sock(struct vsock_sock *vsk);
+-void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
++void vsock_for_each_connected_socket(struct vsock_transport *transport,
++				     void (*fn)(struct sock *sk));
+ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk);
+ bool vsock_find_cid(unsigned int cid);
+ 
+diff --git a/mm/swap_state.c b/mm/swap_state.c
+index 8d41042421000..ee67164531c06 100644
+--- a/mm/swap_state.c
++++ b/mm/swap_state.c
+@@ -478,7 +478,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
+ 		 * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
+ 		 * in swap_map, but not yet added its page to swap cache.
+ 		 */
+-		cond_resched();
++		schedule_timeout_uninterruptible(1);
+ 	}
+ 
+ 	/*
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 7578b1350f182..e64ef196a984a 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -1327,6 +1327,7 @@ static int dsa_port_parse_of(struct dsa_port *dp, struct device_node *dn)
+ 		const char *user_protocol;
+ 
+ 		master = of_find_net_device_by_node(ethernet);
++		of_node_put(ethernet);
+ 		if (!master)
+ 			return -EPROBE_DEFER;
+ 
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index b7b573085bd51..5023f59a5b968 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -813,8 +813,7 @@ int esp6_input_done2(struct sk_buff *skb, int err)
+ 		struct tcphdr *th;
+ 
+ 		offset = ipv6_skip_exthdr(skb, offset, &nexthdr, &frag_off);
+-
+-		if (offset < 0) {
++		if (offset == -1) {
+ 			err = -EINVAL;
+ 			goto out;
+ 		}
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index fe9b4c04744a2..92d180a0db5a5 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2316,8 +2316,11 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 					copy_skb = skb_get(skb);
+ 					skb_head = skb->data;
+ 				}
+-				if (copy_skb)
++				if (copy_skb) {
++					memset(&PACKET_SKB_CB(copy_skb)->sa.ll, 0,
++					       sizeof(PACKET_SKB_CB(copy_skb)->sa.ll));
+ 					skb_set_owner_r(copy_skb, sk);
++				}
+ 			}
+ 			snaplen = po->rx_ring.frame_size - macoff;
+ 			if ((int)snaplen < 0) {
+@@ -3469,6 +3472,8 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	sock_recv_ts_and_drops(msg, sk, skb);
+ 
+ 	if (msg->msg_name) {
++		const size_t max_len = min(sizeof(skb->cb),
++					   sizeof(struct sockaddr_storage));
+ 		int copy_len;
+ 
+ 		/* If the address length field is there to be filled
+@@ -3491,6 +3496,10 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 				msg->msg_namelen = sizeof(struct sockaddr_ll);
+ 			}
+ 		}
++		if (WARN_ON_ONCE(copy_len > max_len)) {
++			copy_len = max_len;
++			msg->msg_namelen = copy_len;
++		}
+ 		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa, copy_len);
+ 	}
+ 
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 8e4f2a1346be6..782aa4209718e 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -333,7 +333,8 @@ void vsock_remove_sock(struct vsock_sock *vsk)
+ }
+ EXPORT_SYMBOL_GPL(vsock_remove_sock);
+ 
+-void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
++void vsock_for_each_connected_socket(struct vsock_transport *transport,
++				     void (*fn)(struct sock *sk))
+ {
+ 	int i;
+ 
+@@ -342,8 +343,12 @@ void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
+ 	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
+ 		struct vsock_sock *vsk;
+ 		list_for_each_entry(vsk, &vsock_connected_table[i],
+-				    connected_table)
++				    connected_table) {
++			if (vsk->transport != transport)
++				continue;
++
+ 			fn(sk_vsock(vsk));
++		}
+ 	}
+ 
+ 	spin_unlock_bh(&vsock_table_lock);
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 4f7c99dfd16cf..dad9ca65f4f9d 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -24,6 +24,7 @@
+ static struct workqueue_struct *virtio_vsock_workqueue;
+ static struct virtio_vsock __rcu *the_virtio_vsock;
+ static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
++static struct virtio_transport virtio_transport; /* forward declaration */
+ 
+ struct virtio_vsock {
+ 	struct virtio_device *vdev;
+@@ -384,7 +385,8 @@ static void virtio_vsock_event_handle(struct virtio_vsock *vsock,
+ 	switch (le32_to_cpu(event->id)) {
+ 	case VIRTIO_VSOCK_EVENT_TRANSPORT_RESET:
+ 		virtio_vsock_update_guest_cid(vsock);
+-		vsock_for_each_connected_socket(virtio_vsock_reset_sock);
++		vsock_for_each_connected_socket(&virtio_transport.transport,
++						virtio_vsock_reset_sock);
+ 		break;
+ 	}
+ }
+@@ -662,7 +664,8 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
+ 	synchronize_rcu();
+ 
+ 	/* Reset all connected sockets when the device disappear */
+-	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
++	vsock_for_each_connected_socket(&virtio_transport.transport,
++					virtio_vsock_reset_sock);
+ 
+ 	/* Stop all work handlers to make sure no one is accessing the device,
+ 	 * so we can safely call vdev->config->reset().
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index 7aef34e32bdf8..b17dc9745188e 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -75,6 +75,8 @@ static u32 vmci_transport_qp_resumed_sub_id = VMCI_INVALID_ID;
+ 
+ static int PROTOCOL_OVERRIDE = -1;
+ 
++static struct vsock_transport vmci_transport; /* forward declaration */
++
+ /* Helper function to convert from a VMCI error code to a VSock error code. */
+ 
+ static s32 vmci_transport_error_to_vsock_error(s32 vmci_error)
+@@ -882,7 +884,8 @@ static void vmci_transport_qp_resumed_cb(u32 sub_id,
+ 					 const struct vmci_event_data *e_data,
+ 					 void *client_data)
+ {
+-	vsock_for_each_connected_socket(vmci_transport_handle_detach);
++	vsock_for_each_connected_socket(&vmci_transport,
++					vmci_transport_handle_detach);
+ }
+ 
+ static void vmci_transport_recv_pkt_work(struct work_struct *work)
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index b2ed3140a1faa..dfde9eada224a 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -231,7 +231,7 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
+ 		prev = curr;
+ 		curr = rb_entry(nd, struct symbol, rb_node);
+ 
+-		if (prev->end == prev->start && prev->end != curr->start)
++		if (prev->end == prev->start || prev->end != curr->start)
+ 			arch__symbols__fixup_end(prev, curr);
+ 	}
+ 



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-28 10:56 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-28 10:56 UTC (permalink / raw
  To: gentoo-commits

commit:     36e11aa7db4425eba57fe4d4db0ebe08713f85fe
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 28 10:56:32 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 28 10:56:32 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=36e11aa7

Linux patch 5.16.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1017_linux-5.16.18.patch | 1327 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1331 insertions(+)

diff --git a/0000_README b/0000_README
index dc43bdfb..ccfb3dab 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-5.16.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.17
 
+Patch:  1017_linux-5.16.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-5.16.18.patch b/1017_linux-5.16.18.patch
new file mode 100644
index 00000000..38a460db
--- /dev/null
+++ b/1017_linux-5.16.18.patch
@@ -0,0 +1,1327 @@
+diff --git a/Makefile b/Makefile
+index 0ca39c16b3bfd..f47cecba0618b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/csky/include/asm/uaccess.h b/arch/csky/include/asm/uaccess.h
+index c40f06ee8d3ef..ac5a54f57d407 100644
+--- a/arch/csky/include/asm/uaccess.h
++++ b/arch/csky/include/asm/uaccess.h
+@@ -3,14 +3,13 @@
+ #ifndef __ASM_CSKY_UACCESS_H
+ #define __ASM_CSKY_UACCESS_H
+ 
+-#define user_addr_max() \
+-	(uaccess_kernel() ? KERNEL_DS.seg : get_fs().seg)
++#define user_addr_max() (current_thread_info()->addr_limit.seg)
+ 
+ static inline int __access_ok(unsigned long addr, unsigned long size)
+ {
+-	unsigned long limit = current_thread_info()->addr_limit.seg;
++	unsigned long limit = user_addr_max();
+ 
+-	return ((addr < limit) && ((addr + size) < limit));
++	return (size <= limit) && (addr <= (limit - size));
+ }
+ #define __access_ok __access_ok
+ 
+diff --git a/arch/hexagon/include/asm/uaccess.h b/arch/hexagon/include/asm/uaccess.h
+index ef5bfef8d490c..719ba3f3c45cd 100644
+--- a/arch/hexagon/include/asm/uaccess.h
++++ b/arch/hexagon/include/asm/uaccess.h
+@@ -25,17 +25,17 @@
+  * Returns true (nonzero) if the memory block *may* be valid, false (zero)
+  * if it is definitely invalid.
+  *
+- * User address space in Hexagon, like x86, goes to 0xbfffffff, so the
+- * simple MSB-based tests used by MIPS won't work.  Some further
+- * optimization is probably possible here, but for now, keep it
+- * reasonably simple and not *too* slow.  After all, we've got the
+- * MMU for backup.
+  */
++#define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
++#define user_addr_max() (uaccess_kernel() ? ~0UL : TASK_SIZE)
+ 
+-#define __access_ok(addr, size) \
+-	((get_fs().seg == KERNEL_DS.seg) || \
+-	(((unsigned long)addr < get_fs().seg) && \
+-	  (unsigned long)size < (get_fs().seg - (unsigned long)addr)))
++static inline int __access_ok(unsigned long addr, unsigned long size)
++{
++	unsigned long limit = TASK_SIZE;
++
++	return (size <= limit) && (addr <= (limit - size));
++}
++#define __access_ok __access_ok
+ 
+ /*
+  * When a kernel-mode page fault is taken, the faulting instruction
+diff --git a/arch/m68k/include/asm/uaccess.h b/arch/m68k/include/asm/uaccess.h
+index ba670523885c8..60b786eb2254e 100644
+--- a/arch/m68k/include/asm/uaccess.h
++++ b/arch/m68k/include/asm/uaccess.h
+@@ -12,14 +12,17 @@
+ #include <asm/extable.h>
+ 
+ /* We let the MMU do all checking */
+-static inline int access_ok(const void __user *addr,
++static inline int access_ok(const void __user *ptr,
+ 			    unsigned long size)
+ {
+-	/*
+-	 * XXX: for !CONFIG_CPU_HAS_ADDRESS_SPACES this really needs to check
+-	 * for TASK_SIZE!
+-	 */
+-	return 1;
++	unsigned long limit = TASK_SIZE;
++	unsigned long addr = (unsigned long)ptr;
++
++	if (IS_ENABLED(CONFIG_CPU_HAS_ADDRESS_SPACES) ||
++	    !IS_ENABLED(CONFIG_MMU))
++		return 1;
++
++	return (size <= limit) && (addr <= (limit - size));
+ }
+ 
+ /*
+diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
+index d2a8ef9f89787..5b6e0e7788f44 100644
+--- a/arch/microblaze/include/asm/uaccess.h
++++ b/arch/microblaze/include/asm/uaccess.h
+@@ -39,24 +39,13 @@
+ 
+ # define uaccess_kernel()	(get_fs().seg == KERNEL_DS.seg)
+ 
+-static inline int access_ok(const void __user *addr, unsigned long size)
++static inline int __access_ok(unsigned long addr, unsigned long size)
+ {
+-	if (!size)
+-		goto ok;
++	unsigned long limit = user_addr_max();
+ 
+-	if ((get_fs().seg < ((unsigned long)addr)) ||
+-			(get_fs().seg < ((unsigned long)addr + size - 1))) {
+-		pr_devel("ACCESS fail at 0x%08x (size 0x%x), seg 0x%08x\n",
+-			(__force u32)addr, (u32)size,
+-			(u32)get_fs().seg);
+-		return 0;
+-	}
+-ok:
+-	pr_devel("ACCESS OK at 0x%08x (size 0x%x), seg 0x%08x\n",
+-			(__force u32)addr, (u32)size,
+-			(u32)get_fs().seg);
+-	return 1;
++	return (size <= limit) && (addr <= (limit - size));
+ }
++#define access_ok(addr, size) __access_ok((unsigned long)addr, size)
+ 
+ # define __FIXUP_SECTION	".section .fixup,\"ax\"\n"
+ # define __EX_TABLE_SECTION	".section __ex_table,\"a\"\n"
+diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h
+index d4cbf069dc224..37a40981deb3b 100644
+--- a/arch/nds32/include/asm/uaccess.h
++++ b/arch/nds32/include/asm/uaccess.h
+@@ -70,9 +70,7 @@ static inline void set_fs(mm_segment_t fs)
+  * versions are void (ie, don't return a value as such).
+  */
+ 
+-#define get_user	__get_user					\
+-
+-#define __get_user(x, ptr)						\
++#define get_user(x, ptr)						\
+ ({									\
+ 	long __gu_err = 0;						\
+ 	__get_user_check((x), (ptr), __gu_err);				\
+@@ -85,6 +83,14 @@ static inline void set_fs(mm_segment_t fs)
+ 	(void)0;							\
+ })
+ 
++#define __get_user(x, ptr)						\
++({									\
++	long __gu_err = 0;						\
++	const __typeof__(*(ptr)) __user *__p = (ptr);			\
++	__get_user_err((x), __p, (__gu_err));				\
++	__gu_err;							\
++})
++
+ #define __get_user_check(x, ptr, err)					\
+ ({									\
+ 	const __typeof__(*(ptr)) __user *__p = (ptr);			\
+@@ -165,12 +171,18 @@ do {									\
+ 		: "r"(addr), "i"(-EFAULT)				\
+ 		: "cc")
+ 
+-#define put_user	__put_user					\
++#define put_user(x, ptr)						\
++({									\
++	long __pu_err = 0;						\
++	__put_user_check((x), (ptr), __pu_err);				\
++	__pu_err;							\
++})
+ 
+ #define __put_user(x, ptr)						\
+ ({									\
+ 	long __pu_err = 0;						\
+-	__put_user_err((x), (ptr), __pu_err);				\
++	__typeof__(*(ptr)) __user *__p = (ptr);				\
++	__put_user_err((x), __p, __pu_err);				\
+ 	__pu_err;							\
+ })
+ 
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 5b6d1a95776f0..0d01e7f5078c2 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -1328,6 +1328,17 @@ static int __init disable_acpi_pci(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
++static int __init disable_acpi_xsdt(const struct dmi_system_id *d)
++{
++	if (!acpi_force) {
++		pr_notice("%s detected: force use of acpi=rsdt\n", d->ident);
++		acpi_gbl_do_not_use_xsdt = TRUE;
++	} else {
++		pr_notice("Warning: DMI blacklist says broken, but acpi XSDT forced\n");
++	}
++	return 0;
++}
++
+ static int __init dmi_disable_acpi(const struct dmi_system_id *d)
+ {
+ 	if (!acpi_force) {
+@@ -1451,6 +1462,19 @@ static const struct dmi_system_id acpi_dmi_table[] __initconst = {
+ 		     DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 360"),
+ 		     },
+ 	 },
++	/*
++	 * Boxes that need ACPI XSDT use disabled due to corrupted tables
++	 */
++	{
++	 .callback = disable_acpi_xsdt,
++	 .ident = "Advantech DAC-BJ01",
++	 .matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "NEC"),
++		     DMI_MATCH(DMI_PRODUCT_NAME, "Bearlake CRB Board"),
++		     DMI_MATCH(DMI_BIOS_VERSION, "V1.12"),
++		     DMI_MATCH(DMI_BIOS_DATE, "02/01/2011"),
++		     },
++	 },
+ 	{}
+ };
+ 
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index ead0114f27c9f..56db7b4da5140 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -60,6 +60,10 @@ MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
+ 
+ static const struct acpi_device_id battery_device_ids[] = {
+ 	{"PNP0C0A", 0},
++
++	/* Microsoft Surface Go 3 */
++	{"MSHW0146", 0},
++
+ 	{"", 0},
+ };
+ 
+@@ -1177,6 +1181,14 @@ static const struct dmi_system_id bat_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"),
+ 		},
+ 	},
++	{
++		/* Microsoft Surface Go 3 */
++		.callback = battery_notification_delay_quirk,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 068e393ea0c67..be2ecdcedd9f0 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -417,6 +417,81 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "GA503"),
+ 		},
+ 	},
++	/*
++	 * Clevo NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2 have both a
++	 * working native and video interface. However the default detection
++	 * mechanism first registers the video interface before unregistering
++	 * it again and switching to the native interface during boot. This
++	 * results in a dangling SBIOS request for backlight change for some
++	 * reason, causing the backlight to switch to ~2% once per boot on the
++	 * first power cord connect or disconnect event. Setting the native
++	 * interface explicitly circumvents this buggy behaviour, by avoiding
++	 * the unregistering process.
++	 */
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xRU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xNU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xNU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		},
++	},
++	{
++	.callback = video_detect_force_native,
++	.ident = "Clevo NL5xNU",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++		DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++		},
++	},
+ 
+ 	/*
+ 	 * Desktops which falsely report a backlight and which our heuristics
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index ea72afb7abead..c1f1237ddc76f 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -404,6 +404,8 @@ static const struct usb_device_id blacklist_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Realtek 8852AE Bluetooth devices */
++	{ USB_DEVICE(0x0bda, 0x2852), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0bda, 0xc852), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0bda, 0x385a), .driver_info = BTUSB_REALTEK |
+@@ -481,6 +483,8 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Additional Realtek 8761BU Bluetooth devices */
+ 	{ USB_DEVICE(0x0b05, 0x190e), .driver_info = BTUSB_REALTEK |
+ 	  					     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x2550, 0x8761), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Additional Realtek 8821AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index c08cbb306636b..dc4c0a0a51290 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -69,7 +69,13 @@ static void tpm_dev_async_work(struct work_struct *work)
+ 	ret = tpm_dev_transmit(priv->chip, priv->space, priv->data_buffer,
+ 			       sizeof(priv->data_buffer));
+ 	tpm_put_ops(priv->chip);
+-	if (ret > 0) {
++
++	/*
++	 * If ret is > 0 then tpm_dev_transmit returned the size of the
++	 * response. If ret is < 0 then tpm_dev_transmit failed and
++	 * returned an error code.
++	 */
++	if (ret != 0) {
+ 		priv->response_length = ret;
+ 		mod_timer(&priv->user_read_timer, jiffies + (120 * HZ));
+ 	}
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 97e916856cf3e..d2225020e4d2c 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -58,12 +58,12 @@ int tpm2_init_space(struct tpm_space *space, unsigned int buf_size)
+ 
+ void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space)
+ {
+-	mutex_lock(&chip->tpm_mutex);
+-	if (!tpm_chip_start(chip)) {
++
++	if (tpm_try_get_ops(chip) == 0) {
+ 		tpm2_flush_sessions(chip, space);
+-		tpm_chip_stop(chip);
++		tpm_put_ops(chip);
+ 	}
+-	mutex_unlock(&chip->tpm_mutex);
++
+ 	kfree(space->context_buf);
+ 	kfree(space->session_buf);
+ }
+diff --git a/drivers/crypto/qat/qat_4xxx/adf_drv.c b/drivers/crypto/qat/qat_4xxx/adf_drv.c
+index 71ef065914b22..c8dfc32084ec5 100644
+--- a/drivers/crypto/qat/qat_4xxx/adf_drv.c
++++ b/drivers/crypto/qat/qat_4xxx/adf_drv.c
+@@ -52,6 +52,13 @@ static int adf_crypto_dev_config(struct adf_accel_dev *accel_dev)
+ 	if (ret)
+ 		goto err;
+ 
++	/* Temporarily set the number of crypto instances to zero to avoid
++	 * registering the crypto algorithms.
++	 * This will be removed when the algorithms will support the
++	 * CRYPTO_TFM_REQ_MAY_BACKLOG flag
++	 */
++	instances = 0;
++
+ 	for (i = 0; i < instances; i++) {
+ 		val = i;
+ 		bank = i * 2;
+diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c
+index ece6776fbd53d..3efbb38836010 100644
+--- a/drivers/crypto/qat/qat_common/qat_crypto.c
++++ b/drivers/crypto/qat/qat_common/qat_crypto.c
+@@ -136,6 +136,13 @@ int qat_crypto_dev_config(struct adf_accel_dev *accel_dev)
+ 	if (ret)
+ 		goto err;
+ 
++	/* Temporarily set the number of crypto instances to zero to avoid
++	 * registering the crypto algorithms.
++	 * This will be removed when the algorithms will support the
++	 * CRYPTO_TFM_REQ_MAY_BACKLOG flag
++	 */
++	instances = 0;
++
+ 	for (i = 0; i < instances; i++) {
+ 		val = i;
+ 		snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_BANK_NUM, i);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
+index 2de61b63ef91d..48d3c9955f0dd 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
++++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
+@@ -248,6 +248,9 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs)
+ {
+ 	u32 i;
+ 
++	if (!objs)
++		return;
++
+ 	for (i = 0; i < objs->nents; i++)
+ 		drm_gem_object_put(objs->objs[i]);
+ 	virtio_gpu_array_free(objs);
+diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+index 220dc42af31ae..9c593f8e05f8d 100644
+--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
++++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+@@ -696,6 +696,12 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
+ 	buf_pool->rx_skb[skb_index] = NULL;
+ 
+ 	datalen = xgene_enet_get_data_len(le64_to_cpu(raw_desc->m1));
++
++	/* strip off CRC as HW isn't doing this */
++	nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
++	if (!nv)
++		datalen -= 4;
++
+ 	skb_put(skb, datalen);
+ 	prefetch(skb->data - NET_IP_ALIGN);
+ 	skb->protocol = eth_type_trans(skb, ndev);
+@@ -717,12 +723,8 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
+ 		}
+ 	}
+ 
+-	nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
+-	if (!nv) {
+-		/* strip off CRC as HW isn't doing this */
+-		datalen -= 4;
++	if (!nv)
+ 		goto skip_jumbo;
+-	}
+ 
+ 	slots = page_pool->slots - 1;
+ 	head = page_pool->head;
+diff --git a/drivers/net/wireless/ath/regd.c b/drivers/net/wireless/ath/regd.c
+index b2400e2417a55..f15e7bd690b5b 100644
+--- a/drivers/net/wireless/ath/regd.c
++++ b/drivers/net/wireless/ath/regd.c
+@@ -667,14 +667,14 @@ ath_regd_init_wiphy(struct ath_regulatory *reg,
+ 
+ /*
+  * Some users have reported their EEPROM programmed with
+- * 0x8000 or 0x0 set, this is not a supported regulatory
+- * domain but since we have more than one user with it we
+- * need a solution for them. We default to 0x64, which is
+- * the default Atheros world regulatory domain.
++ * 0x8000 set, this is not a supported regulatory domain
++ * but since we have more than one user with it we need
++ * a solution for them. We default to 0x64, which is the
++ * default Atheros world regulatory domain.
+  */
+ static void ath_regd_sanitize(struct ath_regulatory *reg)
+ {
+-	if (reg->current_rd != COUNTRY_ERD_FLAG && reg->current_rd != 0)
++	if (reg->current_rd != COUNTRY_ERD_FLAG)
+ 		return;
+ 	printk(KERN_DEBUG "ath: EEPROM regdomain sanitized\n");
+ 	reg->current_rd = 0x64;
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 0747c27f3bd75..6213aa7d75215 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -1483,6 +1483,9 @@ static int wcn36xx_platform_get_resources(struct wcn36xx *wcn,
+ 	if (iris_node) {
+ 		if (of_device_is_compatible(iris_node, "qcom,wcn3620"))
+ 			wcn->rf_id = RF_IRIS_WCN3620;
++		if (of_device_is_compatible(iris_node, "qcom,wcn3660") ||
++		    of_device_is_compatible(iris_node, "qcom,wcn3660b"))
++			wcn->rf_id = RF_IRIS_WCN3660;
+ 		if (of_device_is_compatible(iris_node, "qcom,wcn3680"))
+ 			wcn->rf_id = RF_IRIS_WCN3680;
+ 		of_node_put(iris_node);
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index fbd0558c2c196..5d3f8f56e5681 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -97,6 +97,7 @@ enum wcn36xx_ampdu_state {
+ 
+ #define RF_UNKNOWN	0x0000
+ #define RF_IRIS_WCN3620	0x3620
++#define RF_IRIS_WCN3660	0x3660
+ #define RF_IRIS_WCN3680	0x3680
+ 
+ static inline void buff_to_be(u32 *buf, size_t len)
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index a43fc4117fa57..c922f10d0d7b9 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -316,6 +316,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ 			return -ENOMEM;
+ 
+ 		transaction->aid_len = skb->data[1];
++
++		/* Checking if the length of the AID is valid */
++		if (transaction->aid_len > sizeof(transaction->aid))
++			return -EINVAL;
++
+ 		memcpy(transaction->aid, &skb->data[2],
+ 		       transaction->aid_len);
+ 
+@@ -325,6 +330,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ 			return -EPROTO;
+ 
+ 		transaction->params_len = skb->data[transaction->aid_len + 3];
++
++		/* Total size is allocated (skb->len - 2) minus fixed array members */
++		if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction)))
++			return -EINVAL;
++
+ 		memcpy(transaction->params, skb->data +
+ 		       transaction->aid_len + 4, transaction->params_len);
+ 
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 33451f8ff755b..2018b7512d3d5 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -398,6 +398,7 @@ struct snd_pcm_runtime {
+ 	wait_queue_head_t tsleep;	/* transfer sleep */
+ 	struct fasync_struct *fasync;
+ 	bool stop_operating;		/* sync_stop will be called */
++	struct mutex buffer_mutex;	/* protect for buffer changes */
+ 
+ 	/* -- private section -- */
+ 	void *private_data;
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 5199559fbbf0c..674555be5a6ca 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -554,16 +554,16 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
+ 			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 		}
+ 
+-		/* Unboost if we were boosted. */
+-		if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
+-			rt_mutex_futex_unlock(&rnp->boost_mtx.rtmutex);
+-
+ 		/*
+ 		 * If this was the last task on the expedited lists,
+ 		 * then we need to report up the rcu_node hierarchy.
+ 		 */
+ 		if (!empty_exp && empty_exp_now)
+ 			rcu_report_exp_rnp(rnp, true);
++
++		/* Unboost if we were boosted. */
++		if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
++			rt_mutex_futex_unlock(&rnp->boost_mtx.rtmutex);
+ 	} else {
+ 		local_irq_restore(flags);
+ 	}
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 61970fd839c36..8aaf9cf3d74a7 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1476,8 +1476,8 @@ static int __ip6_append_data(struct sock *sk,
+ 		      sizeof(struct frag_hdr) : 0) +
+ 		     rt->rt6i_nfheader_len;
+ 
+-	if (mtu < fragheaderlen ||
+-	    ((mtu - fragheaderlen) & ~7) + fragheaderlen < sizeof(struct frag_hdr))
++	if (mtu <= fragheaderlen ||
++	    ((mtu - fragheaderlen) & ~7) + fragheaderlen <= sizeof(struct frag_hdr))
+ 		goto emsgsize;
+ 
+ 	maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 3086f4a6ae683..99305aadaa087 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -275,6 +275,7 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct llc_sock *llc = llc_sk(sk);
++	struct net_device *dev = NULL;
+ 	struct llc_sap *sap;
+ 	int rc = -EINVAL;
+ 
+@@ -286,14 +287,14 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ 		goto out;
+ 	rc = -ENODEV;
+ 	if (sk->sk_bound_dev_if) {
+-		llc->dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
+-		if (llc->dev && addr->sllc_arphrd != llc->dev->type) {
+-			dev_put(llc->dev);
+-			llc->dev = NULL;
++		dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
++		if (dev && addr->sllc_arphrd != dev->type) {
++			dev_put(dev);
++			dev = NULL;
+ 		}
+ 	} else
+-		llc->dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
+-	if (!llc->dev)
++		dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
++	if (!dev)
+ 		goto out;
+ 	rc = -EUSERS;
+ 	llc->laddr.lsap = llc_ui_autoport();
+@@ -303,6 +304,11 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ 	sap = llc_sap_open(llc->laddr.lsap, NULL);
+ 	if (!sap)
+ 		goto out;
++
++	/* Note: We do not expect errors from this point. */
++	llc->dev = dev;
++	dev = NULL;
++
+ 	memcpy(llc->laddr.mac, llc->dev->dev_addr, IFHWADDRLEN);
+ 	memcpy(&llc->addr, addr, sizeof(llc->addr));
+ 	/* assign new connection to its SAP */
+@@ -310,6 +316,7 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ 	sock_reset_flag(sk, SOCK_ZAPPED);
+ 	rc = 0;
+ out:
++	dev_put(dev);
+ 	return rc;
+ }
+ 
+@@ -332,6 +339,7 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ 	struct sockaddr_llc *addr = (struct sockaddr_llc *)uaddr;
+ 	struct sock *sk = sock->sk;
+ 	struct llc_sock *llc = llc_sk(sk);
++	struct net_device *dev = NULL;
+ 	struct llc_sap *sap;
+ 	int rc = -EINVAL;
+ 
+@@ -347,25 +355,27 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ 	rc = -ENODEV;
+ 	rcu_read_lock();
+ 	if (sk->sk_bound_dev_if) {
+-		llc->dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
+-		if (llc->dev) {
++		dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
++		if (dev) {
+ 			if (is_zero_ether_addr(addr->sllc_mac))
+-				memcpy(addr->sllc_mac, llc->dev->dev_addr,
++				memcpy(addr->sllc_mac, dev->dev_addr,
+ 				       IFHWADDRLEN);
+-			if (addr->sllc_arphrd != llc->dev->type ||
++			if (addr->sllc_arphrd != dev->type ||
+ 			    !ether_addr_equal(addr->sllc_mac,
+-					      llc->dev->dev_addr)) {
++					      dev->dev_addr)) {
+ 				rc = -EINVAL;
+-				llc->dev = NULL;
++				dev = NULL;
+ 			}
+ 		}
+-	} else
+-		llc->dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
++	} else {
++		dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
+ 					   addr->sllc_mac);
+-	dev_hold(llc->dev);
++	}
++	dev_hold(dev);
+ 	rcu_read_unlock();
+-	if (!llc->dev)
++	if (!dev)
+ 		goto out;
++
+ 	if (!addr->sllc_sap) {
+ 		rc = -EUSERS;
+ 		addr->sllc_sap = llc_ui_autoport();
+@@ -397,6 +407,11 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ 			goto out_put;
+ 		}
+ 	}
++
++	/* Note: We do not expect errors from this point. */
++	llc->dev = dev;
++	dev = NULL;
++
+ 	llc->laddr.lsap = addr->sllc_sap;
+ 	memcpy(llc->laddr.mac, addr->sllc_mac, IFHWADDRLEN);
+ 	memcpy(&llc->addr, addr, sizeof(llc->addr));
+@@ -407,6 +422,7 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ out_put:
+ 	llc_sap_put(sap);
+ out:
++	dev_put(dev);
+ 	release_sock(sk);
+ 	return rc;
+ }
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 2d0dd69f9753c..19d257f657ead 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2148,14 +2148,12 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
+ 		const struct mesh_setup *setup)
+ {
+ 	u8 *new_ie;
+-	const u8 *old_ie;
+ 	struct ieee80211_sub_if_data *sdata = container_of(ifmsh,
+ 					struct ieee80211_sub_if_data, u.mesh);
+ 	int i;
+ 
+ 	/* allocate information elements */
+ 	new_ie = NULL;
+-	old_ie = ifmsh->ie;
+ 
+ 	if (setup->ie_len) {
+ 		new_ie = kmemdup(setup->ie, setup->ie_len,
+@@ -2165,7 +2163,6 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
+ 	}
+ 	ifmsh->ie_len = setup->ie_len;
+ 	ifmsh->ie = new_ie;
+-	kfree(old_ie);
+ 
+ 	/* now copy the rest of the setup parameters */
+ 	ifmsh->mesh_id_len = setup->mesh_id_len;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 2b2e0210a7f9e..3e7f97a707217 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -9208,17 +9208,23 @@ int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest)
+ }
+ EXPORT_SYMBOL_GPL(nft_parse_u32_check);
+ 
+-static unsigned int nft_parse_register(const struct nlattr *attr)
++static unsigned int nft_parse_register(const struct nlattr *attr, u32 *preg)
+ {
+ 	unsigned int reg;
+ 
+ 	reg = ntohl(nla_get_be32(attr));
+ 	switch (reg) {
+ 	case NFT_REG_VERDICT...NFT_REG_4:
+-		return reg * NFT_REG_SIZE / NFT_REG32_SIZE;
++		*preg = reg * NFT_REG_SIZE / NFT_REG32_SIZE;
++		break;
++	case NFT_REG32_00...NFT_REG32_15:
++		*preg = reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
++		break;
+ 	default:
+-		return reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
++		return -ERANGE;
+ 	}
++
++	return 0;
+ }
+ 
+ /**
+@@ -9260,7 +9266,10 @@ int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len)
+ 	u32 reg;
+ 	int err;
+ 
+-	reg = nft_parse_register(attr);
++	err = nft_parse_register(attr, &reg);
++	if (err < 0)
++		return err;
++
+ 	err = nft_validate_register_load(reg, len);
+ 	if (err < 0)
+ 		return err;
+@@ -9315,7 +9324,10 @@ int nft_parse_register_store(const struct nft_ctx *ctx,
+ 	int err;
+ 	u32 reg;
+ 
+-	reg = nft_parse_register(attr);
++	err = nft_parse_register(attr, &reg);
++	if (err < 0)
++		return err;
++
+ 	err = nft_validate_register_store(ctx, reg, data, type, len);
+ 	if (err < 0)
+ 		return err;
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index adc3480560767..d4d8f613af512 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -162,7 +162,7 @@ nft_do_chain(struct nft_pktinfo *pkt, void *priv)
+ 	struct nft_rule *const *rules;
+ 	const struct nft_rule *rule;
+ 	const struct nft_expr *expr, *last;
+-	struct nft_regs regs;
++	struct nft_regs regs = {};
+ 	unsigned int stackptr = 0;
+ 	struct nft_jumpstack jumpstack[NFT_JUMP_STACK_SIZE];
+ 	bool genbit = READ_ONCE(net->nft.gencursor);
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 3ee9edf858156..f158f0abd25d8 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -774,6 +774,11 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+ 
+ 	if (oss_period_size < 16)
+ 		return -EINVAL;
++
++	/* don't allocate too large period; 1MB period must be enough */
++	if (oss_period_size > 1024 * 1024)
++		return -ENOMEM;
++
+ 	runtime->oss.period_bytes = oss_period_size;
+ 	runtime->oss.period_frames = 1;
+ 	runtime->oss.periods = oss_periods;
+@@ -1043,10 +1048,9 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ 			goto failure;
+ 	}
+ #endif
+-	oss_period_size *= oss_frame_size;
+-
+-	oss_buffer_size = oss_period_size * runtime->oss.periods;
+-	if (oss_buffer_size < 0) {
++	oss_period_size = array_size(oss_period_size, oss_frame_size);
++	oss_buffer_size = array_size(oss_period_size, runtime->oss.periods);
++	if (oss_buffer_size <= 0) {
+ 		err = -EINVAL;
+ 		goto failure;
+ 	}
+diff --git a/sound/core/oss/pcm_plugin.c b/sound/core/oss/pcm_plugin.c
+index 061ba06bc9262..82e180c776ae1 100644
+--- a/sound/core/oss/pcm_plugin.c
++++ b/sound/core/oss/pcm_plugin.c
+@@ -62,7 +62,10 @@ static int snd_pcm_plugin_alloc(struct snd_pcm_plugin *plugin, snd_pcm_uframes_t
+ 	width = snd_pcm_format_physical_width(format->format);
+ 	if (width < 0)
+ 		return width;
+-	size = frames * format->channels * width;
++	size = array3_size(frames, format->channels, width);
++	/* check for too large period size once again */
++	if (size > 1024 * 1024)
++		return -ENOMEM;
+ 	if (snd_BUG_ON(size % 8))
+ 		return -ENXIO;
+ 	size /= 8;
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index ba4a987ed1c62..edd9849210f2d 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -969,6 +969,7 @@ int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream,
+ 	init_waitqueue_head(&runtime->tsleep);
+ 
+ 	runtime->status->state = SNDRV_PCM_STATE_OPEN;
++	mutex_init(&runtime->buffer_mutex);
+ 
+ 	substream->runtime = runtime;
+ 	substream->private_data = pcm->private_data;
+@@ -1002,6 +1003,7 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
+ 	} else {
+ 		substream->runtime = NULL;
+ 	}
++	mutex_destroy(&runtime->buffer_mutex);
+ 	kfree(runtime);
+ 	put_pid(substream->pid);
+ 	substream->pid = NULL;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 4f4b4739f9871..27ee0a0bee04a 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1906,9 +1906,11 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ 		if (avail >= runtime->twake)
+ 			break;
+ 		snd_pcm_stream_unlock_irq(substream);
++		mutex_unlock(&runtime->buffer_mutex);
+ 
+ 		tout = schedule_timeout(wait_time);
+ 
++		mutex_lock(&runtime->buffer_mutex);
+ 		snd_pcm_stream_lock_irq(substream);
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		switch (runtime->status->state) {
+@@ -2202,6 +2204,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 
+ 	nonblock = !!(substream->f_flags & O_NONBLOCK);
+ 
++	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	err = pcm_accessible_state(runtime);
+ 	if (err < 0)
+@@ -2293,6 +2296,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 	if (xfer > 0 && err >= 0)
+ 		snd_pcm_update_state(substream, runtime);
+ 	snd_pcm_stream_unlock_irq(substream);
++	mutex_unlock(&runtime->buffer_mutex);
+ 	return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
+ }
+ EXPORT_SYMBOL(__snd_pcm_lib_xfer);
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index b70ce3b69ab4d..8848d2f3160d8 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -163,19 +163,20 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ 	size_t size;
+ 	struct snd_dma_buffer new_dmab;
+ 
++	mutex_lock(&substream->pcm->open_mutex);
+ 	if (substream->runtime) {
+ 		buffer->error = -EBUSY;
+-		return;
++		goto unlock;
+ 	}
+ 	if (!snd_info_get_line(buffer, line, sizeof(line))) {
+ 		snd_info_get_str(str, line, sizeof(str));
+ 		size = simple_strtoul(str, NULL, 10) * 1024;
+ 		if ((size != 0 && size < 8192) || size > substream->dma_max) {
+ 			buffer->error = -EINVAL;
+-			return;
++			goto unlock;
+ 		}
+ 		if (substream->dma_buffer.bytes == size)
+-			return;
++			goto unlock;
+ 		memset(&new_dmab, 0, sizeof(new_dmab));
+ 		new_dmab.dev = substream->dma_buffer.dev;
+ 		if (size > 0) {
+@@ -189,7 +190,7 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ 					 substream->pcm->card->number, substream->pcm->device,
+ 					 substream->stream ? 'c' : 'p', substream->number,
+ 					 substream->pcm->name, size);
+-				return;
++				goto unlock;
+ 			}
+ 			substream->buffer_bytes_max = size;
+ 		} else {
+@@ -201,6 +202,8 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ 	} else {
+ 		buffer->error = -EINVAL;
+ 	}
++ unlock:
++	mutex_unlock(&substream->pcm->open_mutex);
+ }
+ 
+ static inline void preallocate_info_init(struct snd_pcm_substream *substream)
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 621883e711949..3cfae44c6b722 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -672,33 +672,40 @@ static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
+ 	return 0;
+ }
+ 
++#if IS_ENABLED(CONFIG_SND_PCM_OSS)
++#define is_oss_stream(substream)	((substream)->oss.oss)
++#else
++#define is_oss_stream(substream)	false
++#endif
++
+ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 			     struct snd_pcm_hw_params *params)
+ {
+ 	struct snd_pcm_runtime *runtime;
+-	int err, usecs;
++	int err = 0, usecs;
+ 	unsigned int bits;
+ 	snd_pcm_uframes_t frames;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
++	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_OPEN:
+ 	case SNDRV_PCM_STATE_SETUP:
+ 	case SNDRV_PCM_STATE_PREPARED:
++		if (!is_oss_stream(substream) &&
++		    atomic_read(&substream->mmap_count))
++			err = -EBADFD;
+ 		break;
+ 	default:
+-		snd_pcm_stream_unlock_irq(substream);
+-		return -EBADFD;
++		err = -EBADFD;
++		break;
+ 	}
+ 	snd_pcm_stream_unlock_irq(substream);
+-#if IS_ENABLED(CONFIG_SND_PCM_OSS)
+-	if (!substream->oss.oss)
+-#endif
+-		if (atomic_read(&substream->mmap_count))
+-			return -EBADFD;
++	if (err)
++		goto unlock;
+ 
+ 	snd_pcm_sync_stop(substream, true);
+ 
+@@ -786,16 +793,21 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 	if (usecs >= 0)
+ 		cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+ 					    usecs);
+-	return 0;
++	err = 0;
+  _error:
+-	/* hardware might be unusable from this time,
+-	   so we force application to retry to set
+-	   the correct hardware parameter settings */
+-	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+-	if (substream->ops->hw_free != NULL)
+-		substream->ops->hw_free(substream);
+-	if (substream->managed_buffer_alloc)
+-		snd_pcm_lib_free_pages(substream);
++	if (err) {
++		/* hardware might be unusable from this time,
++		 * so we force application to retry to set
++		 * the correct hardware parameter settings
++		 */
++		snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
++		if (substream->ops->hw_free != NULL)
++			substream->ops->hw_free(substream);
++		if (substream->managed_buffer_alloc)
++			snd_pcm_lib_free_pages(substream);
++	}
++ unlock:
++	mutex_unlock(&runtime->buffer_mutex);
+ 	return err;
+ }
+ 
+@@ -835,26 +847,31 @@ static int do_hw_free(struct snd_pcm_substream *substream)
+ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime;
+-	int result;
++	int result = 0;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
++	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_SETUP:
+ 	case SNDRV_PCM_STATE_PREPARED:
++		if (atomic_read(&substream->mmap_count))
++			result = -EBADFD;
+ 		break;
+ 	default:
+-		snd_pcm_stream_unlock_irq(substream);
+-		return -EBADFD;
++		result = -EBADFD;
++		break;
+ 	}
+ 	snd_pcm_stream_unlock_irq(substream);
+-	if (atomic_read(&substream->mmap_count))
+-		return -EBADFD;
++	if (result)
++		goto unlock;
+ 	result = do_hw_free(substream);
+ 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+ 	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
++ unlock:
++	mutex_unlock(&runtime->buffer_mutex);
+ 	return result;
+ }
+ 
+@@ -1160,15 +1177,17 @@ struct action_ops {
+ static int snd_pcm_action_group(const struct action_ops *ops,
+ 				struct snd_pcm_substream *substream,
+ 				snd_pcm_state_t state,
+-				bool do_lock)
++				bool stream_lock)
+ {
+ 	struct snd_pcm_substream *s = NULL;
+ 	struct snd_pcm_substream *s1;
+ 	int res = 0, depth = 1;
+ 
+ 	snd_pcm_group_for_each_entry(s, substream) {
+-		if (do_lock && s != substream) {
+-			if (s->pcm->nonatomic)
++		if (s != substream) {
++			if (!stream_lock)
++				mutex_lock_nested(&s->runtime->buffer_mutex, depth);
++			else if (s->pcm->nonatomic)
+ 				mutex_lock_nested(&s->self_group.mutex, depth);
+ 			else
+ 				spin_lock_nested(&s->self_group.lock, depth);
+@@ -1196,18 +1215,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
+ 		ops->post_action(s, state);
+ 	}
+  _unlock:
+-	if (do_lock) {
+-		/* unlock streams */
+-		snd_pcm_group_for_each_entry(s1, substream) {
+-			if (s1 != substream) {
+-				if (s1->pcm->nonatomic)
+-					mutex_unlock(&s1->self_group.mutex);
+-				else
+-					spin_unlock(&s1->self_group.lock);
+-			}
+-			if (s1 == s)	/* end */
+-				break;
++	/* unlock streams */
++	snd_pcm_group_for_each_entry(s1, substream) {
++		if (s1 != substream) {
++			if (!stream_lock)
++				mutex_unlock(&s1->runtime->buffer_mutex);
++			else if (s1->pcm->nonatomic)
++				mutex_unlock(&s1->self_group.mutex);
++			else
++				spin_unlock(&s1->self_group.lock);
+ 		}
++		if (s1 == s)	/* end */
++			break;
+ 	}
+ 	return res;
+ }
+@@ -1337,10 +1356,12 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
+ 
+ 	/* Guarantee the group members won't change during non-atomic action */
+ 	down_read(&snd_pcm_link_rwsem);
++	mutex_lock(&substream->runtime->buffer_mutex);
+ 	if (snd_pcm_stream_linked(substream))
+ 		res = snd_pcm_action_group(ops, substream, state, false);
+ 	else
+ 		res = snd_pcm_action_single(ops, substream, state);
++	mutex_unlock(&substream->runtime->buffer_mutex);
+ 	up_read(&snd_pcm_link_rwsem);
+ 	return res;
+ }
+@@ -1830,11 +1851,13 @@ static int snd_pcm_do_reset(struct snd_pcm_substream *substream,
+ 	int err = snd_pcm_ops_ioctl(substream, SNDRV_PCM_IOCTL1_RESET, NULL);
+ 	if (err < 0)
+ 		return err;
++	snd_pcm_stream_lock_irq(substream);
+ 	runtime->hw_ptr_base = 0;
+ 	runtime->hw_ptr_interrupt = runtime->status->hw_ptr -
+ 		runtime->status->hw_ptr % runtime->period_size;
+ 	runtime->silence_start = runtime->status->hw_ptr;
+ 	runtime->silence_filled = 0;
++	snd_pcm_stream_unlock_irq(substream);
+ 	return 0;
+ }
+ 
+@@ -1842,10 +1865,12 @@ static void snd_pcm_post_reset(struct snd_pcm_substream *substream,
+ 			       snd_pcm_state_t state)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
++	snd_pcm_stream_lock_irq(substream);
+ 	runtime->control->appl_ptr = runtime->status->hw_ptr;
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
+ 	    runtime->silence_size > 0)
+ 		snd_pcm_playback_silence(substream, ULONG_MAX);
++	snd_pcm_stream_unlock_irq(substream);
+ }
+ 
+ static const struct action_ops snd_pcm_action_reset = {
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 01f296d524ce6..cb60a07d39a8e 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -938,8 +938,8 @@ static int snd_ac97_ad18xx_pcm_get_volume(struct snd_kcontrol *kcontrol, struct
+ 	int codec = kcontrol->private_value & 3;
+ 	
+ 	mutex_lock(&ac97->page_mutex);
+-	ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
+-	ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
++	ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
++	ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
+ 	mutex_unlock(&ac97->page_mutex);
+ 	return 0;
+ }
+diff --git a/sound/pci/cmipci.c b/sound/pci/cmipci.c
+index 9a678b5cf2855..dab801d9d3b48 100644
+--- a/sound/pci/cmipci.c
++++ b/sound/pci/cmipci.c
+@@ -298,7 +298,6 @@ MODULE_PARM_DESC(joystick_port, "Joystick port address.");
+ #define CM_MICGAINZ		0x01	/* mic boost */
+ #define CM_MICGAINZ_SHIFT	0
+ 
+-#define CM_REG_MIXER3		0x24
+ #define CM_REG_AUX_VOL		0x26
+ #define CM_VAUXL_MASK		0xf0
+ #define CM_VAUXR_MASK		0x0f
+@@ -3265,7 +3264,7 @@ static int snd_cmipci_probe(struct pci_dev *pci,
+  */
+ static const unsigned char saved_regs[] = {
+ 	CM_REG_FUNCTRL1, CM_REG_CHFORMAT, CM_REG_LEGACY_CTRL, CM_REG_MISC_CTRL,
+-	CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_MIXER3, CM_REG_PLL,
++	CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_AUX_VOL, CM_REG_PLL,
+ 	CM_REG_CH0_FRAME1, CM_REG_CH0_FRAME2,
+ 	CM_REG_CH1_FRAME1, CM_REG_CH1_FRAME2, CM_REG_EXT_MISC,
+ 	CM_REG_INT_STATUS, CM_REG_INT_HLDCLR, CM_REG_FUNCTRL0,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 83b56c1ba3996..08bf8a77a3e4d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8866,6 +8866,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+@@ -8949,6 +8950,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x8561, "Clevo NH[57][0-9][ER][ACDH]Q", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[57][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x866d, "Clevo NP5[05]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x867d, "Clevo NP7[01]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
+ 	SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -10909,6 +10912,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
++	SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
+diff --git a/sound/soc/sti/uniperif_player.c b/sound/soc/sti/uniperif_player.c
+index 2ed92c990b97c..dd9013c476649 100644
+--- a/sound/soc/sti/uniperif_player.c
++++ b/sound/soc/sti/uniperif_player.c
+@@ -91,7 +91,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ 			SET_UNIPERIF_ITM_BCLR_FIFO_ERROR(player);
+ 
+ 			/* Stop the player */
+-			snd_pcm_stop_xrun(player->substream);
++			snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+ 		}
+ 
+ 		ret = IRQ_HANDLED;
+@@ -105,7 +105,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ 		SET_UNIPERIF_ITM_BCLR_DMA_ERROR(player);
+ 
+ 		/* Stop the player */
+-		snd_pcm_stop_xrun(player->substream);
++		snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+ 
+ 		ret = IRQ_HANDLED;
+ 	}
+@@ -138,7 +138,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ 		dev_err(player->dev, "Underflow recovery failed\n");
+ 
+ 		/* Stop the player */
+-		snd_pcm_stop_xrun(player->substream);
++		snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+ 
+ 		ret = IRQ_HANDLED;
+ 	}
+diff --git a/sound/soc/sti/uniperif_reader.c b/sound/soc/sti/uniperif_reader.c
+index 136059331211d..065c5f0d1f5f0 100644
+--- a/sound/soc/sti/uniperif_reader.c
++++ b/sound/soc/sti/uniperif_reader.c
+@@ -65,7 +65,7 @@ static irqreturn_t uni_reader_irq_handler(int irq, void *dev_id)
+ 	if (unlikely(status & UNIPERIF_ITS_FIFO_ERROR_MASK(reader))) {
+ 		dev_err(reader->dev, "FIFO error detected\n");
+ 
+-		snd_pcm_stop_xrun(reader->substream);
++		snd_pcm_stop(reader->substream, SNDRV_PCM_STATE_XRUN);
+ 
+ 		ret = IRQ_HANDLED;
+ 	}
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 55eea90ee993f..6ffd23f2ee65d 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -536,6 +536,16 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x1b1c, 0x0a41),
+ 		.map = corsair_virtuoso_map,
+ 	},
++	{
++		/* Corsair Virtuoso SE Latest (wired mode) */
++		.id = USB_ID(0x1b1c, 0x0a3f),
++		.map = corsair_virtuoso_map,
++	},
++	{
++		/* Corsair Virtuoso SE Latest (wireless mode) */
++		.id = USB_ID(0x1b1c, 0x0a40),
++		.map = corsair_virtuoso_map,
++	},
+ 	{
+ 		/* Corsair Virtuoso (wireless mode) */
+ 		.id = USB_ID(0x1b1c, 0x0a42),
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index d48729e6a3b0a..d12b87e52d22a 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3362,9 +3362,10 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ 		if (unitid == 7 && cval->control == UAC_FU_VOLUME)
+ 			snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ 		break;
+-	/* lowest playback value is muted on C-Media devices */
+-	case USB_ID(0x0d8c, 0x000c):
+-	case USB_ID(0x0d8c, 0x0014):
++	/* lowest playback value is muted on some devices */
++	case USB_ID(0x0d8c, 0x000c): /* C-Media */
++	case USB_ID(0x0d8c, 0x0014): /* C-Media */
++	case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
+ 		if (strstr(kctl->id.name, "Playback"))
+ 			cval->min_mute = 1;
+ 		break;


^ permalink raw reply related	[flat|nested] 35+ messages in thread
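[Editorial sketch, not part of the original commit] The `pcm_oss.c` hunk in the patch above replaces raw multiplications with the kernel's `array_size()`/`array3_size()` helpers, which saturate instead of wrapping on overflow so the subsequent range checks (`<= 0`, `> 1024 * 1024`) reliably reject oversized requests. A minimal userspace model of that saturating multiply (the name `array_size_sketch` is illustrative, not the kernel implementation):

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's array_size() helper: on overflow
 * it returns SIZE_MAX rather than a wrapped small value, so a later
 * bounds check such as "size > 1024 * 1024" still catches the request. */
static size_t array_size_sketch(size_t a, size_t b)
{
	if (b != 0 && a > SIZE_MAX / b)
		return SIZE_MAX;	/* saturate on overflow */
	return a * b;
}
```

This is why the patch can change the check from `oss_buffer_size < 0` to `oss_buffer_size <= 0`: a wrapped product can no longer masquerade as a small positive size.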

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-03-28 22:32 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-03-28 22:32 UTC (permalink / raw
  To: gentoo-commits

commit:     395cbb149a7874a22ba7417cab583e4d61d7098f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 28 22:32:07 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 28 22:32:07 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=395cbb14

Revert swiotlb: rework fix info leak with DMA_FROM_DEVICE

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 ...rework-fix-info-leak-with-dma_from_device.patch | 187 +++++++++++++++++++++
 2 files changed, 191 insertions(+)

diff --git a/0000_README b/0000_README
index ccfb3dab..a4e3e3b3 100644
--- a/0000_README
+++ b/0000_README
@@ -131,6 +131,10 @@ Patch:  2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
 From:   https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
 Desc:   mt76: mt7921e: fix possible probe failure after reboot
 
+Patch:  2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
+Desc:   Revert swiotlb: rework fix info leak with DMA_FROM_DEVICE
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch b/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
new file mode 100644
index 00000000..69476ab1
--- /dev/null
+++ b/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
@@ -0,0 +1,187 @@
+From bddac7c1e02ba47f0570e494c9289acea3062cc1 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds@linux-foundation.org>
+Date: Sat, 26 Mar 2022 10:42:04 -0700
+Subject: Revert "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Linus Torvalds <torvalds@linux-foundation.org>
+
+commit bddac7c1e02ba47f0570e494c9289acea3062cc1 upstream.
+
+This reverts commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13.
+
+It turns out this breaks at least the ath9k wireless driver, and
+possibly others.
+
+What the ath9k driver does on packet receive is to set up the DMA
+transfer with:
+
+  int ath_rx_init(..)
+  ..
+                bf->bf_buf_addr = dma_map_single(sc->dev, skb->data,
+                                                 common->rx_bufsize,
+                                                 DMA_FROM_DEVICE);
+
+and then the receive logic (through ath_rx_tasklet()) will fetch
+incoming packets
+
+  static bool ath_edma_get_buffers(..)
+  ..
+        dma_sync_single_for_cpu(sc->dev, bf->bf_buf_addr,
+                                common->rx_bufsize, DMA_FROM_DEVICE);
+
+        ret = ath9k_hw_process_rxdesc_edma(ah, rs, skb->data);
+        if (ret == -EINPROGRESS) {
+                /*let device gain the buffer again*/
+                dma_sync_single_for_device(sc->dev, bf->bf_buf_addr,
+                                common->rx_bufsize, DMA_FROM_DEVICE);
+                return false;
+        }
+
+and it's worth noting how that first DMA sync:
+
+    dma_sync_single_for_cpu(..DMA_FROM_DEVICE);
+
+is there to make sure the CPU can read the DMA buffer (possibly by
+copying it from the bounce buffer area, or by doing some cache flush).
+The iommu correctly turns that into a "copy from bounce bufer" so that
+the driver can look at the state of the packets.
+
+In the meantime, the device may continue to write to the DMA buffer, but
+we at least have a snapshot of the state due to that first DMA sync.
+
+But that _second_ DMA sync:
+
+    dma_sync_single_for_device(..DMA_FROM_DEVICE);
+
+is telling the DMA mapping that the CPU wasn't interested in the area
+because the packet wasn't there.  In the case of a DMA bounce buffer,
+that is a no-op.
+
+Note how it's not a sync for the CPU (the "for_device()" part), and it's
+not a sync for data written by the CPU (the "DMA_FROM_DEVICE" part).
+
+Or rather, it _should_ be a no-op.  That's what commit aa6f8dcbab47
+broke: it made the code bounce the buffer unconditionally, and changed
+the DMA_FROM_DEVICE to just unconditionally and illogically be
+DMA_TO_DEVICE.
+
+[ Side note: purely within the confines of the swiotlb driver it wasn't
+  entirely illogical: The reason it did that odd DMA_FROM_DEVICE ->
+  DMA_TO_DEVICE conversion thing is because inside the swiotlb driver,
+  it uses just a swiotlb_bounce() helper that doesn't care about the
+  whole distinction of who the sync is for - only which direction to
+  bounce.
+
+  So it took the "sync for device" to mean that the CPU must have been
+  the one writing, and thought it meant DMA_TO_DEVICE. ]
+
+Also note how the commentary in that commit was wrong, probably due to
+that whole confusion, claiming that the commit makes the swiotlb code
+
+                                  "bounce unconditionally (that is, also
+    when dir == DMA_TO_DEVICE) in order do avoid synchronising back stale
+    data from the swiotlb buffer"
+
+which is nonsensical for two reasons:
+
+ - that "also when dir == DMA_TO_DEVICE" is nonsensical, as that was
+   exactly when it always did - and should do - the bounce.
+
+ - since this is a sync for the device (not for the CPU), we're clearly
+   fundamentally not coping back stale data from the bounce buffers at
+   all, because we'd be copying *to* the bounce buffers.
+
+So that commit was just very confused.  It confused the direction of the
+synchronization (to the device, not the cpu) with the direction of the
+DMA (from the device).
+
+Reported-and-bisected-by: Oleksandr Natalenko <oleksandr@natalenko.name>
+Reported-by: Olha Cherevyk <olha.cherevyk@gmail.com>
+Cc: Halil Pasic <pasic@linux.ibm.com>
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: Kalle Valo <kvalo@kernel.org>
+Cc: Robin Murphy <robin.murphy@arm.com>
+Cc: Toke Høiland-Jørgensen <toke@toke.dk>
+Cc: Maxime Bizon <mbizon@freebox.fr>
+Cc: Johannes Berg <johannes@sipsolutions.net>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/core-api/dma-attributes.rst |    8 ++++++++
+ include/linux/dma-mapping.h               |    8 ++++++++
+ kernel/dma/swiotlb.c                      |   23 ++++++++---------------
+ 3 files changed, 24 insertions(+), 15 deletions(-)
+
+--- a/Documentation/core-api/dma-attributes.rst
++++ b/Documentation/core-api/dma-attributes.rst
+@@ -130,3 +130,11 @@ accesses to DMA buffers in both privileg
+ subsystem that the buffer is fully accessible at the elevated privilege
+ level (and ideally inaccessible or at least read-only at the
+ lesser-privileged levels).
++
++DMA_ATTR_OVERWRITE
++------------------
++
++This is a hint to the DMA-mapping subsystem that the device is expected to
++overwrite the entire mapped size, thus the caller does not require any of the
++previous buffer contents to be preserved. This allows bounce-buffering
++implementations to optimise DMA_FROM_DEVICE transfers.
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -62,6 +62,14 @@
+ #define DMA_ATTR_PRIVILEGED		(1UL << 9)
+ 
+ /*
++ * This is a hint to the DMA-mapping subsystem that the device is expected
++ * to overwrite the entire mapped size, thus the caller does not require any
++ * of the previous buffer contents to be preserved. This allows
++ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
++ */
++#define DMA_ATTR_OVERWRITE		(1UL << 10)
++
++/*
+  * A dma_addr_t can hold any valid DMA or bus address for the platform.  It can
+  * be given to a device to use as a DMA source or target.  It is specific to a
+  * given device and there may be a translation between the CPU physical address
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -627,14 +627,10 @@ phys_addr_t swiotlb_tbl_map_single(struc
+ 	for (i = 0; i < nr_slots(alloc_size + offset); i++)
+ 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
+ 	tlb_addr = slot_addr(mem->start, index) + offset;
+-	/*
+-	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+-	 * to the tlb buffer, if we knew for sure the device will
+-	 * overwirte the entire current content. But we don't. Thus
+-	 * unconditional bounce may prevent leaking swiotlb content (i.e.
+-	 * kernel memory) to user-space.
+-	 */
+-	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
++	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
++	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
++	    dir == DMA_BIDIRECTIONAL))
++		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+ 	return tlb_addr;
+ }
+ 
+@@ -701,13 +697,10 @@ void swiotlb_tbl_unmap_single(struct dev
+ void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+ 		size_t size, enum dma_data_direction dir)
+ {
+-	/*
+-	 * Unconditional bounce is necessary to avoid corruption on
+-	 * sync_*_for_cpu or dma_ummap_* when the device didn't overwrite
+-	 * the whole lengt of the bounce buffer.
+-	 */
+-	swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+-	BUG_ON(!valid_dma_direction(dir));
++	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
++		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
++	else
++		BUG_ON(dir != DMA_FROM_DEVICE);
+ }
+ 
+ void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,


^ permalink raw reply related	[flat|nested] 35+ messages in thread
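[Editorial sketch, not part of the original commit] The reverted-commit message above hinges on the asymmetry between the two sync directions for a bounce buffer: `sync_for_cpu(DMA_FROM_DEVICE)` copies bounce slot to driver buffer so the CPU can read device data, while `sync_for_device(DMA_FROM_DEVICE)` must be a no-op, since bouncing there would overwrite data the device may still be writing. A toy userspace model of those semantics (structure and function names here are illustrative, not the swiotlb code):

```c
#include <string.h>

enum dma_dir { DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL };

struct bounce_buf {
	char orig[8];	/* the driver's buffer */
	char tlb[8];	/* the bounce slot the device writes into */
};

/* "for CPU" sync: make device writes visible -> copy bounce to original. */
static void sync_for_cpu(struct bounce_buf *b, enum dma_dir dir)
{
	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
		memcpy(b->orig, b->tlb, sizeof(b->orig));
}

/* "for device" sync: flush CPU writes -> copy original to bounce.
 * With DMA_FROM_DEVICE the CPU wrote nothing, so nothing is bounced;
 * copying here would clobber data the device may still be producing,
 * which is the bug the revert above removes. */
static void sync_for_device(struct bounce_buf *b, enum dma_dir dir)
{
	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
		memcpy(b->tlb, b->orig, sizeof(b->tlb));
}
```

In the ath9k pattern quoted in the commit message, the driver's second `dma_sync_single_for_device(.., DMA_FROM_DEVICE)` relies on exactly this no-op behaviour to hand the buffer back to the device untouched.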

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-04-08 12:55 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-04-08 12:55 UTC (permalink / raw
  To: gentoo-commits

commit:     28583fc64420f9c22874ceab8b5c5c6c909bf7ca
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr  8 12:54:41 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr  8 12:54:41 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28583fc6

Linux patch 5.16.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1018_linux-5.16.19.patch | 52834 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 52838 insertions(+)

diff --git a/0000_README b/0000_README
index a4e3e3b3..2a86553a 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-5.16.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.18
 
+Patch:  1018_linux-5.16.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-5.16.19.patch b/1018_linux-5.16.19.patch
new file mode 100644
index 00000000..ad2cfcf2
--- /dev/null
+++ b/1018_linux-5.16.19.patch
@@ -0,0 +1,52834 @@
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index b268e3e18b4a3..91e2b549f8172 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -425,6 +425,7 @@ Description:	Show status of f2fs superblock in real time.
+ 		0x800  SBI_QUOTA_SKIP_FLUSH  skip flushing quota in current CP
+ 		0x1000 SBI_QUOTA_NEED_REPAIR quota file may be corrupted
+ 		0x2000 SBI_IS_RESIZEFS       resizefs is in process
++		0x4000 SBI_IS_FREEZING       freefs is in process
+ 		====== ===================== =================================
+ 
+ What:		/sys/fs/f2fs/<disk>/ckpt_thread_ioprio
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 391b3f9055fe0..3e513d69e9de7 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3476,8 +3476,7 @@
+ 			difficult since unequal pointers can no longer be
+ 			compared.  However, if this command-line option is
+ 			specified, then all normal pointers will have their true
+-			value printed.  Pointers printed via %pK may still be
+-			hashed.  This option should only be specified when
++			value printed. This option should only be specified when
+ 			debugging the kernel.  Please do not use on production
+ 			kernels.
+ 
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 0e486f41185ef..d6977875c1b76 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -795,6 +795,7 @@ bit 1  print system memory info
+ bit 2  print timer info
+ bit 3  print locks info if ``CONFIG_LOCKDEP`` is on
+ bit 4  print ftrace buffer
++bit 5  print all printk messages in buffer
+ =====  ============================================
+ 
+ So for example to print tasks and memory info on panic, user can::
+diff --git a/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml b/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml
+index 85a8877c2f387..1e2df8cf2937b 100644
+--- a/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml
++++ b/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml
+@@ -49,7 +49,8 @@ properties:
+     description: Definition of the regulator used for the VDDD power supply.
+ 
+   port:
+-    $ref: /schemas/graph.yaml#/properties/port
++    $ref: /schemas/graph.yaml#/$defs/port-base
++    unevaluatedProperties: false
+ 
+     properties:
+       endpoint:
+@@ -68,8 +69,11 @@ properties:
+                   - const: 1
+                   - const: 2
+ 
++          link-frequencies: true
++
+         required:
+           - data-lanes
++          - link-frequencies
+ 
+ required:
+   - compatible
+diff --git a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
+index 3a82b0b27fa0a..4fca71f343109 100644
+--- a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
++++ b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
+@@ -88,10 +88,9 @@ allOf:
+               - mediatek,mt2701-smi-common
+     then:
+       properties:
+-        clock:
+-          items:
+-            minItems: 3
+-            maxItems: 3
++        clocks:
++          minItems: 3
++          maxItems: 3
+         clock-names:
+           items:
+             - const: apb
+@@ -108,10 +107,9 @@ allOf:
+       required:
+         - mediatek,smi
+       properties:
+-        clock:
+-          items:
+-            minItems: 3
+-            maxItems: 3
++        clocks:
++          minItems: 3
++          maxItems: 3
+         clock-names:
+           items:
+             - const: apb
+@@ -133,10 +131,9 @@ allOf:
+ 
+     then:
+       properties:
+-        clock:
+-          items:
+-            minItems: 4
+-            maxItems: 4
++        clocks:
++          minItems: 4
++          maxItems: 4
+         clock-names:
+           items:
+             - const: apb
+@@ -146,10 +143,9 @@ allOf:
+ 
+     else:  # for gen2 HW that don't have gals
+       properties:
+-        clock:
+-          items:
+-            minItems: 2
+-            maxItems: 2
++        clocks:
++          minItems: 2
++          maxItems: 2
+         clock-names:
+           items:
+             - const: apb
+diff --git a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
+index eaeff1ada7f89..c5c32c9100457 100644
+--- a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
++++ b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
+@@ -79,11 +79,11 @@ allOf:
+ 
+     then:
+       properties:
+-        clock:
+-          items:
+-            minItems: 3
+-            maxItems: 3
++        clocks:
++          minItems: 2
++          maxItems: 3
+         clock-names:
++          minItems: 2
+           items:
+             - const: apb
+             - const: smi
+@@ -91,10 +91,9 @@ allOf:
+ 
+     else:
+       properties:
+-        clock:
+-          items:
+-            minItems: 2
+-            maxItems: 2
++        clocks:
++          minItems: 2
++          maxItems: 2
+         clock-names:
+           items:
+             - const: apb
+@@ -108,7 +107,6 @@ allOf:
+               - mediatek,mt2701-smi-larb
+               - mediatek,mt2712-smi-larb
+               - mediatek,mt6779-smi-larb
+-              - mediatek,mt8167-smi-larb
+               - mediatek,mt8192-smi-larb
+               - mediatek,mt8195-smi-larb
+ 
+diff --git a/Documentation/devicetree/bindings/mtd/nand-controller.yaml b/Documentation/devicetree/bindings/mtd/nand-controller.yaml
+index bd217e6f5018a..5cd144a9ec992 100644
+--- a/Documentation/devicetree/bindings/mtd/nand-controller.yaml
++++ b/Documentation/devicetree/bindings/mtd/nand-controller.yaml
+@@ -55,7 +55,7 @@ patternProperties:
+     properties:
+       reg:
+         description:
+-          Contains the native Ready/Busy IDs.
++          Contains the chip-select IDs.
+ 
+       nand-ecc-engine:
+         allOf:
+@@ -184,7 +184,7 @@ examples:
+         nand-use-soft-ecc-engine;
+         nand-ecc-algo = "bch";
+ 
+-        /* controller specific properties */
++        /* NAND chip specific properties */
+       };
+ 
+       nand@1 {
+diff --git a/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml b/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml
+index cb554084bdf11..0df4e114fdd69 100644
+--- a/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml
+@@ -145,7 +145,7 @@ examples:
+       clocks = <&sys_clk>;
+       pinctrl-0 = <&sgpio2_pins>;
+       pinctrl-names = "default";
+-      reg = <0x1101059c 0x100>;
++      reg = <0x1101059c 0x118>;
+       microchip,sgpio-port-ranges = <0 0>, <16 18>, <28 31>;
+       bus-frequency = <25000000>;
+       sgpio_in2: gpio@0 {
+diff --git a/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml b/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml
+index 35a8045b2c70d..53627c6e2ae32 100644
+--- a/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml
++++ b/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml
+@@ -106,7 +106,7 @@ examples:
+             dma-names = "rx", "tx";
+ 
+             flash@0 {
+-                    compatible = "spi-nor";
++                    compatible = "jedec,spi-nor";
+                     reg = <0>;
+                     spi-max-frequency = <104000000>;
+                     spi-tx-bus-width = <2>;
+diff --git a/Documentation/devicetree/bindings/spi/spi-mxic.txt b/Documentation/devicetree/bindings/spi/spi-mxic.txt
+index 529f2dab2648a..7bcbb229b78bb 100644
+--- a/Documentation/devicetree/bindings/spi/spi-mxic.txt
++++ b/Documentation/devicetree/bindings/spi/spi-mxic.txt
+@@ -8,11 +8,13 @@ Required properties:
+ - reg: should contain 2 entries, one for the registers and one for the direct
+        mapping area
+ - reg-names: should contain "regs" and "dirmap"
+-- interrupts: interrupt line connected to the SPI controller
+ - clock-names: should contain "ps_clk", "send_clk" and "send_dly_clk"
+ - clocks: should contain 3 entries for the "ps_clk", "send_clk" and
+ 	  "send_dly_clk" clocks
+ 
++Optional properties:
++- interrupts: interrupt line connected to the SPI controller
++
+ Example:
+ 
+ 	spi@43c30000 {
+diff --git a/Documentation/devicetree/bindings/usb/usb-hcd.yaml b/Documentation/devicetree/bindings/usb/usb-hcd.yaml
+index 56853c17af667..1dc3d5d7b44fe 100644
+--- a/Documentation/devicetree/bindings/usb/usb-hcd.yaml
++++ b/Documentation/devicetree/bindings/usb/usb-hcd.yaml
+@@ -33,7 +33,7 @@ patternProperties:
+   "^.*@[0-9a-f]{1,2}$":
+     description: The hard wired USB devices
+     type: object
+-    $ref: /usb/usb-device.yaml
++    $ref: /schemas/usb/usb-device.yaml
+ 
+ additionalProperties: true
+ 
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 003c865e9c212..fbcb48bc2a903 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -168,7 +168,16 @@ Trees
+  - The finalized and tagged releases of all stable kernels can be found
+    in separate branches per version at:
+ 
+-	https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++	https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
++
++ - The release candidate of all stable kernel versions can be found at:
++
++        https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/
++
++   .. warning::
++      The -stable-rc tree is a snapshot in time of the stable-queue tree and
++      will change frequently, hence will be rebased often. It should only be
++      used for testing purposes (e.g. to be consumed by CI systems).
+ 
+ 
+ Review committee
+diff --git a/Documentation/security/SCTP.rst b/Documentation/security/SCTP.rst
+index d5fd6ccc3dcbd..b73eb764a0017 100644
+--- a/Documentation/security/SCTP.rst
++++ b/Documentation/security/SCTP.rst
+@@ -15,10 +15,7 @@ For security module support, three SCTP specific hooks have been implemented::
+     security_sctp_assoc_request()
+     security_sctp_bind_connect()
+     security_sctp_sk_clone()
+-
+-Also the following security hook has been utilised::
+-
+-    security_inet_conn_established()
++    security_sctp_assoc_established()
+ 
+ The usage of these hooks are described below with the SELinux implementation
+ described in the `SCTP SELinux Support`_ chapter.
+@@ -122,11 +119,12 @@ calls **sctp_peeloff**\(3).
+     @newsk - pointer to new sock structure.
+ 
+ 
+-security_inet_conn_established()
+-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-Called when a COOKIE ACK is received::
++security_sctp_assoc_established()
++~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++Called when a COOKIE ACK is received, and the peer secid will be
++saved into ``@asoc->peer_secid`` for client::
+ 
+-    @sk  - pointer to sock structure.
++    @asoc - pointer to sctp association structure.
+     @skb - pointer to skbuff of the COOKIE ACK packet.
+ 
+ 
+@@ -134,7 +132,7 @@ Security Hooks used for Association Establishment
+ -------------------------------------------------
+ 
+ The following diagram shows the use of ``security_sctp_bind_connect()``,
+-``security_sctp_assoc_request()``, ``security_inet_conn_established()`` when
++``security_sctp_assoc_request()``, ``security_sctp_assoc_established()`` when
+ establishing an association.
+ ::
+ 
+@@ -172,7 +170,7 @@ establishing an association.
+           <------------------------------------------- COOKIE ACK
+           |                                               |
+     sctp_sf_do_5_1E_ca                                    |
+- Call security_inet_conn_established()                    |
++ Call security_sctp_assoc_established()                   |
+  to set the peer label.                                   |
+           |                                               |
+           |                               If SCTP_SOCKET_TCP or peeled off
+@@ -198,7 +196,7 @@ hooks with the SELinux specifics expanded below::
+     security_sctp_assoc_request()
+     security_sctp_bind_connect()
+     security_sctp_sk_clone()
+-    security_inet_conn_established()
++    security_sctp_assoc_established()
+ 
+ 
+ security_sctp_assoc_request()
+@@ -271,12 +269,12 @@ sockets sid and peer sid to that contained in the ``@asoc sid`` and
+     @newsk - pointer to new sock structure.
+ 
+ 
+-security_inet_conn_established()
+-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++security_sctp_assoc_established()
++~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Called when a COOKIE ACK is received where it sets the connection's peer sid
+ to that in ``@skb``::
+ 
+-    @sk  - pointer to sock structure.
++    @asoc - pointer to sctp association structure.
+     @skb - pointer to skbuff of the COOKIE ACK packet.
+ 
+ 
+diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst
+index d25335993e553..9b52f50a68542 100644
+--- a/Documentation/sound/hd-audio/models.rst
++++ b/Documentation/sound/hd-audio/models.rst
+@@ -261,6 +261,10 @@ alc-sense-combo
+ huawei-mbx-stereo
+     Enable initialization verbs for Huawei MBX stereo speakers;
+     might be risky, try this at your own risk
++alc298-samsung-headphone
++    Samsung laptops with ALC298
++alc256-samsung-headphone
++    Samsung laptops with ALC256
+ 
+ ALC66x/67x/892
+ ==============
+diff --git a/Documentation/sphinx/requirements.txt b/Documentation/sphinx/requirements.txt
+index 9a35f50798a65..2c573541ab712 100644
+--- a/Documentation/sphinx/requirements.txt
++++ b/Documentation/sphinx/requirements.txt
+@@ -1,2 +1,4 @@
++# jinja2>=3.1 is not compatible with Sphinx<4.0
++jinja2<3.1
+ sphinx_rtd_theme
+ Sphinx==2.4.4
+diff --git a/MAINTAINERS b/MAINTAINERS
+index dd36acc87ce62..af9530d98717c 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -16147,8 +16147,7 @@ REALTEK RTL83xx SMI DSA ROUTER CHIPS
+ M:	Linus Walleij <linus.walleij@linaro.org>
+ S:	Maintained
+ F:	Documentation/devicetree/bindings/net/dsa/realtek-smi.txt
+-F:	drivers/net/dsa/realtek-smi*
+-F:	drivers/net/dsa/rtl83*
++F:	drivers/net/dsa/realtek/*
+ 
+ REALTEK WIRELESS DRIVER (rtlwifi family)
+ M:	Ping-Ke Shih <pkshih@realtek.com>
+diff --git a/Makefile b/Makefile
+index f47cecba0618b..c16a8a2ffd73e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index d3c4ab249e9c2..a825f8251f3da 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -1159,6 +1159,7 @@ config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
+ config RANDOMIZE_KSTACK_OFFSET_DEFAULT
+ 	bool "Randomize kernel stack offset on syscall entry"
+ 	depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
++	depends on INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION >= 140000
+ 	help
+ 	  The kernel stack offset can be randomized (after pt_regs) by
+ 	  roughly 5 bits of entropy, frustrating memory corruption
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index 8e90052f6f056..5f7f5aab361f1 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -43,7 +43,7 @@ SYSCALL_DEFINE0(arc_gettls)
+ 	return task_thread_info(current)->thr_ptr;
+ }
+ 
+-SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
++SYSCALL_DEFINE3(arc_usr_cmpxchg, int __user *, uaddr, int, expected, int, new)
+ {
+ 	struct pt_regs *regs = current_pt_regs();
+ 	u32 uval;
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 21294f775a20f..89af57482bc8f 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -459,12 +459,26 @@
+ 		#size-cells = <0>;
+ 		enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
+ 
++		/* Source for d/i-cache-line-size and d/i-cache-sets
++		 * https://developer.arm.com/documentation/100095/0003
++		 * /Level-1-Memory-System/About-the-L1-memory-system?lang=en
++		 * Source for d/i-cache-size
++		 * https://www.raspberrypi.com/documentation/computers
++		 * /processors.html#bcm2711
++		 */
+ 		cpu0: cpu@0 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a72";
+ 			reg = <0>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000d8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu1: cpu@1 {
+@@ -473,6 +487,13 @@
+ 			reg = <1>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu2: cpu@2 {
+@@ -481,6 +502,13 @@
+ 			reg = <2>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu3: cpu@3 {
+@@ -489,6 +517,28 @@
+ 			reg = <3>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000f0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			i-cache-size = <0xc000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++			next-level-cache = <&l2>;
++		};
++
++		/* Source for d/i-cache-line-size and d/i-cache-sets
++		 *  https://developer.arm.com/documentation/100095/0003
++		 *  /Level-2-Memory-System/About-the-L2-memory-system?lang=en
++		 *  Source for d/i-cache-size
++		 *  https://www.raspberrypi.com/documentation/computers
++		 *  /processors.html#bcm2711
++		 */
++		l2: l2-cache0 {
++			compatible = "cache";
++			cache-size = <0x100000>;
++			cache-line-size = <64>;
++			cache-sets = <1024>; // 1MiB(size)/64(line-size)=16384ways/16-way set
++			cache-level = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/bcm2837.dtsi b/arch/arm/boot/dts/bcm2837.dtsi
+index 0199ec98cd616..5dbdebc462594 100644
+--- a/arch/arm/boot/dts/bcm2837.dtsi
++++ b/arch/arm/boot/dts/bcm2837.dtsi
+@@ -40,12 +40,26 @@
+ 		#size-cells = <0>;
+ 		enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
+ 
++		/* Source for d/i-cache-line-size and d/i-cache-sets
++		 * https://developer.arm.com/documentation/ddi0500/e/level-1-memory-system
++		 * /about-the-l1-memory-system?lang=en
++		 *
++		 * Source for d/i-cache-size
++		 * https://magpi.raspberrypi.com/articles/raspberry-pi-3-specs-benchmarks
++		 */
+ 		cpu0: cpu@0 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a53";
+ 			reg = <0>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000d8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu1: cpu@1 {
+@@ -54,6 +68,13 @@
+ 			reg = <1>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu2: cpu@2 {
+@@ -62,6 +83,13 @@
+ 			reg = <2>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000e8>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
+ 		};
+ 
+ 		cpu3: cpu@3 {
+@@ -70,6 +98,27 @@
+ 			reg = <3>;
+ 			enable-method = "spin-table";
+ 			cpu-release-addr = <0x0 0x000000f0>;
++			d-cache-size = <0x8000>;
++			d-cache-line-size = <64>;
++			d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++			i-cache-size = <0x8000>;
++			i-cache-line-size = <64>;
++			i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++			next-level-cache = <&l2>;
++		};
++
++		/* Source for cache-line-size + cache-sets
++		 * https://developer.arm.com/documentation/ddi0500
++		 * /e/level-2-memory-system/about-the-l2-memory-system?lang=en
++		 * Source for cache-size
++		 * https://datasheets.raspberrypi.com/cm/cm1-and-cm3-datasheet.pdf
++		 */
++		l2: l2-cache0 {
++			compatible = "cache";
++			cache-size = <0x80000>;
++			cache-line-size = <64>;
++			cache-sets = <512>; // 512KiB(size)/64(line-size)=8192ways/16-way set
++			cache-level = <2>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 956a26d52a4c3..0a11bacffc1f1 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -3482,8 +3482,7 @@
+ 				ti,timer-pwm;
+ 			};
+ 		};
+-
+-		target-module@2c000 {			/* 0x4882c000, ap 17 02.0 */
++		timer15_target: target-module@2c000 {	/* 0x4882c000, ap 17 02.0 */
+ 			compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ 			reg = <0x2c000 0x4>,
+ 			      <0x2c010 0x4>;
+@@ -3511,7 +3510,7 @@
+ 			};
+ 		};
+ 
+-		target-module@2e000 {			/* 0x4882e000, ap 19 14.0 */
++		timer16_target: target-module@2e000 {	/* 0x4882e000, ap 19 14.0 */
+ 			compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ 			reg = <0x2e000 0x4>,
+ 			      <0x2e010 0x4>;
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index 6b485cbed8d50..8f7ffe2f66e90 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1339,20 +1339,20 @@
+ };
+ 
+ /* Local timers, see ARM architected timer wrap erratum i940 */
+-&timer3_target {
++&timer15_target {
+ 	ti,no-reset-on-init;
+ 	ti,no-idle;
+ 	timer@0 {
+-		assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
++		assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER15_CLKCTRL 24>;
+ 		assigned-clock-parents = <&timer_sys_clk_div>;
+ 	};
+ };
+ 
+-&timer4_target {
++&timer16_target {
+ 	ti,no-reset-on-init;
+ 	ti,no-idle;
+ 	timer@0 {
+-		assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
++		assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER16_CLKCTRL 24>;
+ 		assigned-clock-parents = <&timer_sys_clk_div>;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/exynos5250-pinctrl.dtsi b/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
+index d31a68672bfac..d7d756614edd1 100644
+--- a/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
++++ b/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
+@@ -260,7 +260,7 @@
+ 	};
+ 
+ 	uart3_data: uart3-data {
+-		samsung,pins = "gpa1-4", "gpa1-4";
++		samsung,pins = "gpa1-4", "gpa1-5";
+ 		samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ 		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+ 		samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index 39bbe18145cf2..f042954bdfa5d 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -118,6 +118,9 @@
+ 	status = "okay";
+ 	ddc = <&i2c_2>;
+ 	hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
++	vdd-supply = <&ldo8_reg>;
++	vdd_osc-supply = <&ldo10_reg>;
++	vdd_pll-supply = <&ldo8_reg>;
+ };
+ 
+ &i2c_0 {
+diff --git a/arch/arm/boot/dts/exynos5420-smdk5420.dts b/arch/arm/boot/dts/exynos5420-smdk5420.dts
+index a4f0e3ffedbd3..07f65213aae65 100644
+--- a/arch/arm/boot/dts/exynos5420-smdk5420.dts
++++ b/arch/arm/boot/dts/exynos5420-smdk5420.dts
+@@ -124,6 +124,9 @@
+ 	hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&hdmi_hpd_irq>;
++	vdd-supply = <&ldo6_reg>;
++	vdd_osc-supply = <&ldo7_reg>;
++	vdd_pll-supply = <&ldo6_reg>;
+ };
+ 
+ &hsi2c_4 {
+diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts
+index 4f88e96d81ddb..d5c68d1ea707c 100644
+--- a/arch/arm/boot/dts/imx53-m53menlo.dts
++++ b/arch/arm/boot/dts/imx53-m53menlo.dts
+@@ -53,6 +53,31 @@
+ 		};
+ 	};
+ 
++	lvds-decoder {
++		compatible = "ti,ds90cf364a", "lvds-decoder";
++
++		ports {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			port@0 {
++				reg = <0>;
++
++				lvds_decoder_in: endpoint {
++					remote-endpoint = <&lvds0_out>;
++				};
++			};
++
++			port@1 {
++				reg = <1>;
++
++				lvds_decoder_out: endpoint {
++					remote-endpoint = <&panel_in>;
++				};
++			};
++		};
++	};
++
+ 	panel {
+ 		compatible = "edt,etm0700g0dh6";
+ 		pinctrl-0 = <&pinctrl_display_gpio>;
+@@ -61,7 +86,7 @@
+ 
+ 		port {
+ 			panel_in: endpoint {
+-				remote-endpoint = <&lvds0_out>;
++				remote-endpoint = <&lvds_decoder_out>;
+ 			};
+ 		};
+ 	};
+@@ -450,7 +475,7 @@
+ 			reg = <2>;
+ 
+ 			lvds0_out: endpoint {
+-				remote-endpoint = <&panel_in>;
++				remote-endpoint = <&lvds_decoder_in>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/imx7-colibri.dtsi b/arch/arm/boot/dts/imx7-colibri.dtsi
+index 62b771c1d5a9a..f1c60b0cb143e 100644
+--- a/arch/arm/boot/dts/imx7-colibri.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri.dtsi
+@@ -40,7 +40,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&codec>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -293,7 +293,7 @@
+ 		compatible = "fsl,sgtl5000";
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_sai1_mclk>;
+ 		VDDA-supply = <&reg_module_3v3_avdd>;
+diff --git a/arch/arm/boot/dts/imx7-mba7.dtsi b/arch/arm/boot/dts/imx7-mba7.dtsi
+index 49086c6b6a0a2..3df6dff7734ae 100644
+--- a/arch/arm/boot/dts/imx7-mba7.dtsi
++++ b/arch/arm/boot/dts/imx7-mba7.dtsi
+@@ -302,7 +302,7 @@
+ 	tlv320aic32x4: audio-codec@18 {
+ 		compatible = "ti,tlv320aic32x4";
+ 		reg = <0x18>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		clock-names = "mclk";
+ 		ldoin-supply = <&reg_audio_3v3>;
+ 		iov-supply = <&reg_audio_3v3>;
+diff --git a/arch/arm/boot/dts/imx7d-nitrogen7.dts b/arch/arm/boot/dts/imx7d-nitrogen7.dts
+index e0751e6ba3c0f..a31de900139d6 100644
+--- a/arch/arm/boot/dts/imx7d-nitrogen7.dts
++++ b/arch/arm/boot/dts/imx7d-nitrogen7.dts
+@@ -288,7 +288,7 @@
+ 	codec: wm8960@1a {
+ 		compatible = "wlf,wm8960";
+ 		reg = <0x1a>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		clock-names = "mclk";
+ 		wlf,shared-lrclk;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7d-pico-hobbit.dts b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+index 7b2198a9372c6..d917dc4f2f227 100644
+--- a/arch/arm/boot/dts/imx7d-pico-hobbit.dts
++++ b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+@@ -31,7 +31,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&sgtl5000>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -41,7 +41,7 @@
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+ 		compatible = "fsl,sgtl5000";
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		VDDA-supply = <&reg_2p5v>;
+ 		VDDIO-supply = <&reg_vref_1v8>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7d-pico-pi.dts b/arch/arm/boot/dts/imx7d-pico-pi.dts
+index 70bea95c06d83..f263e391e24cb 100644
+--- a/arch/arm/boot/dts/imx7d-pico-pi.dts
++++ b/arch/arm/boot/dts/imx7d-pico-pi.dts
+@@ -31,7 +31,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&sgtl5000>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -41,7 +41,7 @@
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+ 		compatible = "fsl,sgtl5000";
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		VDDA-supply = <&reg_2p5v>;
+ 		VDDIO-supply = <&reg_vref_1v8>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7d-sdb.dts b/arch/arm/boot/dts/imx7d-sdb.dts
+index 7813ef960f6ee..f053f51227417 100644
+--- a/arch/arm/boot/dts/imx7d-sdb.dts
++++ b/arch/arm/boot/dts/imx7d-sdb.dts
+@@ -385,14 +385,14 @@
+ 	codec: wm8960@1a {
+ 		compatible = "wlf,wm8960";
+ 		reg = <0x1a>;
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		clock-names = "mclk";
+ 		wlf,shared-lrclk;
+ 		wlf,hp-cfg = <2 2 3>;
+ 		wlf,gpio-cfg = <1 3>;
+ 		assigned-clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_SRC>,
+ 				  <&clks IMX7D_PLL_AUDIO_POST_DIV>,
+-				  <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++				  <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		assigned-clock-parents = <&clks IMX7D_PLL_AUDIO_POST_DIV>;
+ 		assigned-clock-rates = <0>, <884736000>, <12288000>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx7s-warp.dts b/arch/arm/boot/dts/imx7s-warp.dts
+index 569bbd84e371a..558b064da743c 100644
+--- a/arch/arm/boot/dts/imx7s-warp.dts
++++ b/arch/arm/boot/dts/imx7s-warp.dts
+@@ -75,7 +75,7 @@
+ 
+ 		dailink_master: simple-audio-card,codec {
+ 			sound-dai = <&codec>;
+-			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++			clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		};
+ 	};
+ };
+@@ -232,7 +232,7 @@
+ 		#sound-dai-cells = <0>;
+ 		reg = <0x0a>;
+ 		compatible = "fsl,sgtl5000";
+-		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++		clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_sai1_mclk>;
+ 		VDDA-supply = <&vgen4_reg>;
+diff --git a/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi b/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi
+index 31f59de5190b8..7af41361c4800 100644
+--- a/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi
++++ b/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi
+@@ -28,7 +28,7 @@ partitions {
+ 		label = "rofs";
+ 	};
+ 
+-	rwfs@6000000 {
++	rwfs@2a00000 {
+ 		reg = <0x2a00000 0x1600000>; // 22MB
+ 		label = "rwfs";
+ 	};
+diff --git a/arch/arm/boot/dts/openbmc-flash-layout.dtsi b/arch/arm/boot/dts/openbmc-flash-layout.dtsi
+index 6c26524e93e11..b47e14063c380 100644
+--- a/arch/arm/boot/dts/openbmc-flash-layout.dtsi
++++ b/arch/arm/boot/dts/openbmc-flash-layout.dtsi
+@@ -20,7 +20,7 @@ partitions {
+ 		label = "kernel";
+ 	};
+ 
+-	rofs@c0000 {
++	rofs@4c0000 {
+ 		reg = <0x4c0000 0x1740000>;
+ 		label = "rofs";
+ 	};
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index ff1bdb10ad198..08bc5f46649dd 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -142,7 +142,8 @@
+ 	clocks {
+ 		sleep_clk: sleep_clk {
+ 			compatible = "fixed-clock";
+-			clock-frequency = <32768>;
++			clock-frequency = <32000>;
++			clock-output-names = "gcc_sleep_clk_src";
+ 			#clock-cells = <0>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/qcom-msm8960.dtsi b/arch/arm/boot/dts/qcom-msm8960.dtsi
+index 2a0ec97a264f2..a0f9ab7f08f34 100644
+--- a/arch/arm/boot/dts/qcom-msm8960.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8960.dtsi
+@@ -146,7 +146,9 @@
+ 			reg		= <0x108000 0x1000>;
+ 			qcom,ipc	= <&l2cc 0x8 2>;
+ 
+-			interrupts	= <0 19 0>, <0 21 0>, <0 22 0>;
++			interrupts	= <GIC_SPI 19 IRQ_TYPE_EDGE_RISING>,
++					  <GIC_SPI 21 IRQ_TYPE_EDGE_RISING>,
++					  <GIC_SPI 22 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names	= "ack", "err", "wakeup";
+ 
+ 			regulators {
+@@ -192,7 +194,7 @@
+ 				compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm";
+ 				reg = <0x16440000 0x1000>,
+ 				      <0x16400000 0x1000>;
+-				interrupts = <0 154 0x0>;
++				interrupts = <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&gcc GSBI5_UART_CLK>, <&gcc GSBI5_H_CLK>;
+ 				clock-names = "core", "iface";
+ 				status = "disabled";
+@@ -318,7 +320,7 @@
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+ 				reg = <0x16080000 0x1000>;
+-				interrupts = <0 147 0>;
++				interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
+ 				spi-max-frequency = <24000000>;
+ 				cs-gpios = <&msmgpio 8 0>;
+ 
+diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi
+index 801969c113d64..de88eb4847185 100644
+--- a/arch/arm/boot/dts/sama5d2.dtsi
++++ b/arch/arm/boot/dts/sama5d2.dtsi
+@@ -413,7 +413,7 @@
+ 				pmecc: ecc-engine@f8014070 {
+ 					compatible = "atmel,sama5d2-pmecc";
+ 					reg = <0xf8014070 0x490>,
+-					      <0xf8014500 0x100>;
++					      <0xf8014500 0x200>;
+ 				};
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/sama7g5.dtsi b/arch/arm/boot/dts/sama7g5.dtsi
+index 7039311bf6788..91797323f04cb 100644
+--- a/arch/arm/boot/dts/sama7g5.dtsi
++++ b/arch/arm/boot/dts/sama7g5.dtsi
+@@ -352,8 +352,6 @@
+ 				dmas = <&dma0 AT91_XDMAC_DT_PERID(7)>,
+ 					<&dma0 AT91_XDMAC_DT_PERID(8)>;
+ 				dma-names = "rx", "tx";
+-				atmel,use-dma-rx;
+-				atmel,use-dma-tx;
+ 				status = "disabled";
+ 			};
+ 		};
+@@ -528,8 +526,6 @@
+ 				dmas = <&dma0 AT91_XDMAC_DT_PERID(21)>,
+ 					<&dma0 AT91_XDMAC_DT_PERID(22)>;
+ 				dma-names = "rx", "tx";
+-				atmel,use-dma-rx;
+-				atmel,use-dma-tx;
+ 				status = "disabled";
+ 			};
+ 		};
+@@ -554,8 +550,6 @@
+ 				dmas = <&dma0 AT91_XDMAC_DT_PERID(23)>,
+ 					<&dma0 AT91_XDMAC_DT_PERID(24)>;
+ 				dma-names = "rx", "tx";
+-				atmel,use-dma-rx;
+-				atmel,use-dma-tx;
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/spear1340.dtsi b/arch/arm/boot/dts/spear1340.dtsi
+index 827e887afbda4..13e1bdb3ddbf1 100644
+--- a/arch/arm/boot/dts/spear1340.dtsi
++++ b/arch/arm/boot/dts/spear1340.dtsi
+@@ -134,9 +134,9 @@
+ 				reg = <0xb4100000 0x1000>;
+ 				interrupts = <0 105 0x4>;
+ 				status = "disabled";
+-				dmas = <&dwdma0 12 0 1>,
+-					<&dwdma0 13 1 0>;
+-				dma-names = "tx", "rx";
++				dmas = <&dwdma0 13 0 1>,
++					<&dwdma0 12 1 0>;
++				dma-names = "rx", "tx";
+ 			};
+ 
+ 			thermal@e07008c4 {
+diff --git a/arch/arm/boot/dts/spear13xx.dtsi b/arch/arm/boot/dts/spear13xx.dtsi
+index c87b881b2c8bb..9135533676879 100644
+--- a/arch/arm/boot/dts/spear13xx.dtsi
++++ b/arch/arm/boot/dts/spear13xx.dtsi
+@@ -284,9 +284,9 @@
+ 				#size-cells = <0>;
+ 				interrupts = <0 31 0x4>;
+ 				status = "disabled";
+-				dmas = <&dwdma0 4 0 0>,
+-					<&dwdma0 5 0 0>;
+-				dma-names = "tx", "rx";
++				dmas = <&dwdma0 5 0 0>,
++					<&dwdma0 4 0 0>;
++				dma-names = "rx", "tx";
+ 			};
+ 
+ 			rtc@e0580000 {
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index 2ebafe27a865b..d3553e0f0187e 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1190,7 +1190,7 @@
+ 		};
+ 	};
+ 
+-	sai2a_sleep_pins_c: sai2a-2 {
++	sai2a_sleep_pins_c: sai2a-sleep-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('D', 13, ANALOG)>, /* SAI2_SCK_A */
+ 				 <STM32_PINMUX('D', 11, ANALOG)>, /* SAI2_SD_A */
+diff --git a/arch/arm/boot/dts/sun8i-v3s.dtsi b/arch/arm/boot/dts/sun8i-v3s.dtsi
+index b30bc1a25ebb9..084323d5c61cb 100644
+--- a/arch/arm/boot/dts/sun8i-v3s.dtsi
++++ b/arch/arm/boot/dts/sun8i-v3s.dtsi
+@@ -593,6 +593,17 @@
+ 			#size-cells = <0>;
+ 		};
+ 
++		gic: interrupt-controller@1c81000 {
++			compatible = "arm,gic-400";
++			reg = <0x01c81000 0x1000>,
++			      <0x01c82000 0x2000>,
++			      <0x01c84000 0x2000>,
++			      <0x01c86000 0x2000>;
++			interrupt-controller;
++			#interrupt-cells = <3>;
++			interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
++		};
++
+ 		csi1: camera@1cb4000 {
+ 			compatible = "allwinner,sun8i-v3s-csi";
+ 			reg = <0x01cb4000 0x3000>;
+@@ -604,16 +615,5 @@
+ 			resets = <&ccu RST_BUS_CSI>;
+ 			status = "disabled";
+ 		};
+-
+-		gic: interrupt-controller@1c81000 {
+-			compatible = "arm,gic-400";
+-			reg = <0x01c81000 0x1000>,
+-			      <0x01c82000 0x2000>,
+-			      <0x01c84000 0x2000>,
+-			      <0x01c86000 0x2000>;
+-			interrupt-controller;
+-			#interrupt-cells = <3>;
+-			interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
+-		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+index dd4d506683de7..7f14f0d005c3e 100644
+--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi
++++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+@@ -183,8 +183,8 @@
+ 			};
+ 			conf_ata {
+ 				nvidia,pins = "ata", "atb", "atc", "atd", "ate",
+-					"cdev1", "cdev2", "dap1", "dtb", "gma",
+-					"gmb", "gmc", "gmd", "gme", "gpu7",
++					"cdev1", "cdev2", "dap1", "dtb", "dtf",
++					"gma", "gmb", "gmc", "gmd", "gme", "gpu7",
+ 					"gpv", "i2cp", "irrx", "irtx", "pta",
+ 					"rm", "slxa", "slxk", "spia", "spib",
+ 					"uac";
+@@ -203,7 +203,7 @@
+ 			};
+ 			conf_crtp {
+ 				nvidia,pins = "crtp", "dap2", "dap3", "dap4",
+-					"dtc", "dte", "dtf", "gpu", "sdio1",
++					"dtc", "dte", "gpu", "sdio1",
+ 					"slxc", "slxd", "spdi", "spdo", "spig",
+ 					"uda";
+ 				nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+diff --git a/arch/arm/configs/multi_v5_defconfig b/arch/arm/configs/multi_v5_defconfig
+index fe8d760256a4c..3e3beb0cc33de 100644
+--- a/arch/arm/configs/multi_v5_defconfig
++++ b/arch/arm/configs/multi_v5_defconfig
+@@ -188,6 +188,7 @@ CONFIG_REGULATOR=y
+ CONFIG_REGULATOR_FIXED_VOLTAGE=y
+ CONFIG_MEDIA_SUPPORT=y
+ CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_PLATFORM_SUPPORT=y
+ CONFIG_V4L_PLATFORM_DRIVERS=y
+ CONFIG_VIDEO_ASPEED=m
+ CONFIG_VIDEO_ATMEL_ISI=m
+@@ -196,6 +197,7 @@ CONFIG_DRM_ATMEL_HLCDC=m
+ CONFIG_DRM_PANEL_SIMPLE=y
+ CONFIG_DRM_PANEL_EDP=y
+ CONFIG_DRM_ASPEED_GFX=m
++CONFIG_FB=y
+ CONFIG_FB_IMX=y
+ CONFIG_FB_ATMEL=y
+ CONFIG_BACKLIGHT_ATMEL_LCDC=y
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index 2b575792363e5..e4dba5461cb3e 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -102,6 +102,8 @@ config CRYPTO_AES_ARM_BS
+ 	depends on KERNEL_MODE_NEON
+ 	select CRYPTO_SKCIPHER
+ 	select CRYPTO_LIB_AES
++	select CRYPTO_AES
++	select CRYPTO_CBC
+ 	select CRYPTO_SIMD
+ 	help
+ 	  Use a faster and more secure NEON based implementation of AES in CBC,
+diff --git a/arch/arm/kernel/entry-ftrace.S b/arch/arm/kernel/entry-ftrace.S
+index a74289ebc8036..5f1b1ce10473a 100644
+--- a/arch/arm/kernel/entry-ftrace.S
++++ b/arch/arm/kernel/entry-ftrace.S
+@@ -22,10 +22,7 @@
+  * mcount can be thought of as a function called in the middle of a subroutine
+  * call.  As such, it needs to be transparent for both the caller and the
+  * callee: the original lr needs to be restored when leaving mcount, and no
+- * registers should be clobbered.  (In the __gnu_mcount_nc implementation, we
+- * clobber the ip register.  This is OK because the ARM calling convention
+- * allows it to be clobbered in subroutines and doesn't use it to hold
+- * parameters.)
++ * registers should be clobbered.
+  *
+  * When using dynamic ftrace, we patch out the mcount call by a "pop {lr}"
+  * instead of the __gnu_mcount_nc call (see arch/arm/kernel/ftrace.c).
+@@ -70,26 +67,25 @@
+ 
+ .macro __ftrace_regs_caller
+ 
+-	sub	sp, sp, #8	@ space for PC and CPSR OLD_R0,
++	str	lr, [sp, #-8]!	@ store LR as PC and make space for CPSR/OLD_R0,
+ 				@ OLD_R0 will overwrite previous LR
+ 
+-	add 	ip, sp, #12	@ move in IP the value of SP as it was
+-				@ before the push {lr} of the mcount mechanism
++	ldr	lr, [sp, #8]    @ get previous LR
+ 
+-	str     lr, [sp, #0]    @ store LR instead of PC
++	str	r0, [sp, #8]	@ write r0 as OLD_R0 over previous LR
+ 
+-	ldr     lr, [sp, #8]    @ get previous LR
++	str	lr, [sp, #-4]!	@ store previous LR as LR
+ 
+-	str	r0, [sp, #8]	@ write r0 as OLD_R0 over previous LR
++	add 	lr, sp, #16	@ move in LR the value of SP as it was
++				@ before the push {lr} of the mcount mechanism
+ 
+-	stmdb   sp!, {ip, lr}
+-	stmdb   sp!, {r0-r11, lr}
++	push	{r0-r11, ip, lr}
+ 
+ 	@ stack content at this point:
+ 	@ 0  4          48   52       56            60   64    68       72
+-	@ R0 | R1 | ... | LR | SP + 4 | previous LR | LR | PSR | OLD_R0 |
++	@ R0 | R1 | ... | IP | SP + 4 | previous LR | LR | PSR | OLD_R0 |
+ 
+-	mov r3, sp				@ struct pt_regs*
++	mov	r3, sp				@ struct pt_regs*
+ 
+ 	ldr r2, =function_trace_op
+ 	ldr r2, [r2]				@ pointer to the current
+@@ -112,11 +108,9 @@ ftrace_graph_regs_call:
+ #endif
+ 
+ 	@ pop saved regs
+-	ldmia   sp!, {r0-r12}			@ restore r0 through r12
+-	ldr	ip, [sp, #8]			@ restore PC
+-	ldr	lr, [sp, #4]			@ restore LR
+-	ldr	sp, [sp, #0]			@ restore SP
+-	mov	pc, ip				@ return
++	pop	{r0-r11, ip, lr}		@ restore r0 through r12
++	ldr	lr, [sp], #4			@ restore LR
++	ldr	pc, [sp], #12
+ .endm
+ 
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+@@ -132,11 +126,9 @@ ftrace_graph_regs_call:
+ 	bl	prepare_ftrace_return
+ 
+ 	@ pop registers saved in ftrace_regs_caller
+-	ldmia   sp!, {r0-r12}			@ restore r0 through r12
+-	ldr	ip, [sp, #8]			@ restore PC
+-	ldr	lr, [sp, #4]			@ restore LR
+-	ldr	sp, [sp, #0]			@ restore SP
+-	mov	pc, ip				@ return
++	pop	{r0-r11, ip, lr}		@ restore r0 through r12
++	ldr	lr, [sp], #4			@ restore LR
++	ldr	pc, [sp], #12
+ 
+ .endm
+ #endif
+@@ -202,16 +194,17 @@ ftrace_graph_call\suffix:
+ .endm
+ 
+ .macro mcount_exit
+-	ldmia	sp!, {r0-r3, ip, lr}
+-	ret	ip
++	ldmia	sp!, {r0-r3}
++	ldr	lr, [sp, #4]
++	ldr	pc, [sp], #8
+ .endm
+ 
+ ENTRY(__gnu_mcount_nc)
+ UNWIND(.fnstart)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+-	mov	ip, lr
+-	ldmia	sp!, {lr}
+-	ret	ip
++	push	{lr}
++	ldr	lr, [sp, #4]
++	ldr	pc, [sp], #8
+ #else
+ 	__mcount
+ #endif
+diff --git a/arch/arm/kernel/swp_emulate.c b/arch/arm/kernel/swp_emulate.c
+index 6166ba38bf994..b74bfcf94fb1a 100644
+--- a/arch/arm/kernel/swp_emulate.c
++++ b/arch/arm/kernel/swp_emulate.c
+@@ -195,7 +195,7 @@ static int swp_handler(struct pt_regs *regs, unsigned int instr)
+ 		 destreg, EXTRACT_REG_NUM(instr, RT2_OFFSET), data);
+ 
+ 	/* Check access in reasonable access range for both SWP and SWPB */
+-	if (!access_ok((address & ~3), 4)) {
++	if (!access_ok((void __user *)(address & ~3), 4)) {
+ 		pr_debug("SWP{B} emulation: access to %p not allowed!\n",
+ 			 (void *)address);
+ 		res = -EFAULT;
+diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
+index 90c887aa67a44..f74460d3bef55 100644
+--- a/arch/arm/kernel/traps.c
++++ b/arch/arm/kernel/traps.c
+@@ -575,7 +575,7 @@ do_cache_op(unsigned long start, unsigned long end, int flags)
+ 	if (end < start || flags)
+ 		return -EINVAL;
+ 
+-	if (!access_ok(start, end - start))
++	if (!access_ok((void __user *)start, end - start))
+ 		return -EFAULT;
+ 
+ 	return __do_cache_op(start, end);
+diff --git a/arch/arm/mach-iop32x/include/mach/entry-macro.S b/arch/arm/mach-iop32x/include/mach/entry-macro.S
+index 8e6766d4621eb..341e5d9a6616d 100644
+--- a/arch/arm/mach-iop32x/include/mach/entry-macro.S
++++ b/arch/arm/mach-iop32x/include/mach/entry-macro.S
+@@ -20,7 +20,7 @@
+ 	mrc     p6, 0, \irqstat, c8, c0, 0	@ Read IINTSRC
+ 	cmp     \irqstat, #0
+ 	clzne   \irqnr, \irqstat
+-	rsbne   \irqnr, \irqnr, #31
++	rsbne   \irqnr, \irqnr, #32
+ 	.endm
+ 
+ 	.macro arch_ret_to_user, tmp1, tmp2
+diff --git a/arch/arm/mach-iop32x/include/mach/irqs.h b/arch/arm/mach-iop32x/include/mach/irqs.h
+index c4e78df428e86..e09ae5f48aec5 100644
+--- a/arch/arm/mach-iop32x/include/mach/irqs.h
++++ b/arch/arm/mach-iop32x/include/mach/irqs.h
+@@ -9,6 +9,6 @@
+ #ifndef __IRQS_H
+ #define __IRQS_H
+ 
+-#define NR_IRQS			32
++#define NR_IRQS			33
+ 
+ #endif
+diff --git a/arch/arm/mach-iop32x/irq.c b/arch/arm/mach-iop32x/irq.c
+index 2d48bf1398c10..d1e8824cbd824 100644
+--- a/arch/arm/mach-iop32x/irq.c
++++ b/arch/arm/mach-iop32x/irq.c
+@@ -32,14 +32,14 @@ static void intstr_write(u32 val)
+ static void
+ iop32x_irq_mask(struct irq_data *d)
+ {
+-	iop32x_mask &= ~(1 << d->irq);
++	iop32x_mask &= ~(1 << (d->irq - 1));
+ 	intctl_write(iop32x_mask);
+ }
+ 
+ static void
+ iop32x_irq_unmask(struct irq_data *d)
+ {
+-	iop32x_mask |= 1 << d->irq;
++	iop32x_mask |= 1 << (d->irq - 1);
+ 	intctl_write(iop32x_mask);
+ }
+ 
+@@ -65,7 +65,7 @@ void __init iop32x_init_irq(void)
+ 	    machine_is_em7210())
+ 		*IOP3XX_PCIIRSR = 0x0f;
+ 
+-	for (i = 0; i < NR_IRQS; i++) {
++	for (i = 1; i < NR_IRQS; i++) {
+ 		irq_set_chip_and_handler(i, &ext_chip, handle_level_irq);
+ 		irq_clear_status_flags(i, IRQ_NOREQUEST | IRQ_NOPROBE);
+ 	}
+diff --git a/arch/arm/mach-iop32x/irqs.h b/arch/arm/mach-iop32x/irqs.h
+index 69858e4e905d1..e1dfc8b4e7d7e 100644
+--- a/arch/arm/mach-iop32x/irqs.h
++++ b/arch/arm/mach-iop32x/irqs.h
+@@ -7,36 +7,40 @@
+ #ifndef __IOP32X_IRQS_H
+ #define __IOP32X_IRQS_H
+ 
++/* Interrupts in Linux start at 1, hardware starts at 0 */
++
++#define IOP_IRQ(x) ((x) + 1)
++
+ /*
+  * IOP80321 chipset interrupts
+  */
+-#define IRQ_IOP32X_DMA0_EOT	0
+-#define IRQ_IOP32X_DMA0_EOC	1
+-#define IRQ_IOP32X_DMA1_EOT	2
+-#define IRQ_IOP32X_DMA1_EOC	3
+-#define IRQ_IOP32X_AA_EOT	6
+-#define IRQ_IOP32X_AA_EOC	7
+-#define IRQ_IOP32X_CORE_PMON	8
+-#define IRQ_IOP32X_TIMER0	9
+-#define IRQ_IOP32X_TIMER1	10
+-#define IRQ_IOP32X_I2C_0	11
+-#define IRQ_IOP32X_I2C_1	12
+-#define IRQ_IOP32X_MESSAGING	13
+-#define IRQ_IOP32X_ATU_BIST	14
+-#define IRQ_IOP32X_PERFMON	15
+-#define IRQ_IOP32X_CORE_PMU	16
+-#define IRQ_IOP32X_BIU_ERR	17
+-#define IRQ_IOP32X_ATU_ERR	18
+-#define IRQ_IOP32X_MCU_ERR	19
+-#define IRQ_IOP32X_DMA0_ERR	20
+-#define IRQ_IOP32X_DMA1_ERR	21
+-#define IRQ_IOP32X_AA_ERR	23
+-#define IRQ_IOP32X_MSG_ERR	24
+-#define IRQ_IOP32X_SSP		25
+-#define IRQ_IOP32X_XINT0	27
+-#define IRQ_IOP32X_XINT1	28
+-#define IRQ_IOP32X_XINT2	29
+-#define IRQ_IOP32X_XINT3	30
+-#define IRQ_IOP32X_HPI		31
++#define IRQ_IOP32X_DMA0_EOT	IOP_IRQ(0)
++#define IRQ_IOP32X_DMA0_EOC	IOP_IRQ(1)
++#define IRQ_IOP32X_DMA1_EOT	IOP_IRQ(2)
++#define IRQ_IOP32X_DMA1_EOC	IOP_IRQ(3)
++#define IRQ_IOP32X_AA_EOT	IOP_IRQ(6)
++#define IRQ_IOP32X_AA_EOC	IOP_IRQ(7)
++#define IRQ_IOP32X_CORE_PMON	IOP_IRQ(8)
++#define IRQ_IOP32X_TIMER0	IOP_IRQ(9)
++#define IRQ_IOP32X_TIMER1	IOP_IRQ(10)
++#define IRQ_IOP32X_I2C_0	IOP_IRQ(11)
++#define IRQ_IOP32X_I2C_1	IOP_IRQ(12)
++#define IRQ_IOP32X_MESSAGING	IOP_IRQ(13)
++#define IRQ_IOP32X_ATU_BIST	IOP_IRQ(14)
++#define IRQ_IOP32X_PERFMON	IOP_IRQ(15)
++#define IRQ_IOP32X_CORE_PMU	IOP_IRQ(16)
++#define IRQ_IOP32X_BIU_ERR	IOP_IRQ(17)
++#define IRQ_IOP32X_ATU_ERR	IOP_IRQ(18)
++#define IRQ_IOP32X_MCU_ERR	IOP_IRQ(19)
++#define IRQ_IOP32X_DMA0_ERR	IOP_IRQ(20)
++#define IRQ_IOP32X_DMA1_ERR	IOP_IRQ(21)
++#define IRQ_IOP32X_AA_ERR	IOP_IRQ(23)
++#define IRQ_IOP32X_MSG_ERR	IOP_IRQ(24)
++#define IRQ_IOP32X_SSP		IOP_IRQ(25)
++#define IRQ_IOP32X_XINT0	IOP_IRQ(27)
++#define IRQ_IOP32X_XINT1	IOP_IRQ(28)
++#define IRQ_IOP32X_XINT2	IOP_IRQ(29)
++#define IRQ_IOP32X_XINT3	IOP_IRQ(30)
++#define IRQ_IOP32X_HPI		IOP_IRQ(31)
+ 
+ #endif
+diff --git a/arch/arm/mach-mmp/sram.c b/arch/arm/mach-mmp/sram.c
+index 6794e2db1ad5f..ecc46c31004f6 100644
+--- a/arch/arm/mach-mmp/sram.c
++++ b/arch/arm/mach-mmp/sram.c
+@@ -72,6 +72,8 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (!info)
+ 		return -ENOMEM;
+ 
++	platform_set_drvdata(pdev, info);
++
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (res == NULL) {
+ 		dev_err(&pdev->dev, "no memory resource defined\n");
+@@ -107,8 +109,6 @@ static int sram_probe(struct platform_device *pdev)
+ 	list_add(&info->node, &sram_bank_list);
+ 	mutex_unlock(&sram_lock);
+ 
+-	platform_set_drvdata(pdev, info);
+-
+ 	dev_info(&pdev->dev, "initialized\n");
+ 	return 0;
+ 
+@@ -127,17 +127,19 @@ static int sram_remove(struct platform_device *pdev)
+ 	struct sram_bank_info *info;
+ 
+ 	info = platform_get_drvdata(pdev);
+-	if (info == NULL)
+-		return -ENODEV;
+ 
+-	mutex_lock(&sram_lock);
+-	list_del(&info->node);
+-	mutex_unlock(&sram_lock);
++	if (info->sram_size) {
++		mutex_lock(&sram_lock);
++		list_del(&info->node);
++		mutex_unlock(&sram_lock);
++
++		gen_pool_destroy(info->gpool);
++		iounmap(info->sram_virt);
++		kfree(info->pool_name);
++	}
+ 
+-	gen_pool_destroy(info->gpool);
+-	iounmap(info->sram_virt);
+-	kfree(info->pool_name);
+ 	kfree(info);
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm/mach-mstar/Kconfig b/arch/arm/mach-mstar/Kconfig
+index cd300eeedc206..0bf4d312bcfd9 100644
+--- a/arch/arm/mach-mstar/Kconfig
++++ b/arch/arm/mach-mstar/Kconfig
+@@ -3,6 +3,7 @@ menuconfig ARCH_MSTARV7
+ 	depends on ARCH_MULTI_V7
+ 	select ARM_GIC
+ 	select ARM_HEAVY_MB
++	select HAVE_ARM_ARCH_TIMER
+ 	select MST_IRQ
+ 	select MSTAR_MSC313_MPLL
+ 	help
+diff --git a/arch/arm/mach-s3c/mach-jive.c b/arch/arm/mach-s3c/mach-jive.c
+index 0785638a9069b..7d15b84ae217e 100644
+--- a/arch/arm/mach-s3c/mach-jive.c
++++ b/arch/arm/mach-s3c/mach-jive.c
+@@ -236,11 +236,11 @@ static int __init jive_mtdset(char *options)
+ 	unsigned long set;
+ 
+ 	if (options == NULL || options[0] == '\0')
+-		return 0;
++		return 1;
+ 
+ 	if (kstrtoul(options, 10, &set)) {
+ 		printk(KERN_ERR "failed to parse mtdset=%s\n", options);
+-		return 0;
++		return 1;
+ 	}
+ 
+ 	switch (set) {
+@@ -255,7 +255,7 @@ static int __init jive_mtdset(char *options)
+ 		       "using default.", set);
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ /* parse the mtdset= option given to the kernel command line */
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index d05d94d2b28b9..0a83e1728f7e3 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -683,6 +683,7 @@ config ARM64_ERRATUM_2051678
+ 
+ config ARM64_ERRATUM_2077057
+ 	bool "Cortex-A510: 2077057: workaround software-step corrupting SPSR_EL2"
++	default y
+ 	help
+ 	  This option adds the workaround for ARM Cortex-A510 erratum 2077057.
+ 	  Affected Cortex-A510 may corrupt SPSR_EL2 when the a step exception is
+diff --git a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+index 984c737fa627a..6e738f2a37013 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+@@ -273,9 +273,9 @@
+ 		#size-cells = <1>;
+ 		ranges = <0x00 0x00 0xff800000 0x3000>;
+ 
+-		timer: timer@400 {
+-			compatible = "brcm,bcm6328-timer", "syscon";
+-			reg = <0x400 0x3c>;
++		twd: timer-mfd@400 {
++			compatible = "brcm,bcm4908-twd", "simple-mfd", "syscon";
++			reg = <0x400 0x4c>;
+ 		};
+ 
+ 		gpio0: gpio-controller@500 {
+@@ -330,7 +330,7 @@
+ 
+ 	reboot {
+ 		compatible = "syscon-reboot";
+-		regmap = <&timer>;
++		regmap = <&twd>;
+ 		offset = <0x34>;
+ 		mask = <1>;
+ 	};
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts b/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
+index ec19fbf928a14..12a4b1c03390c 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
+@@ -111,8 +111,8 @@
+ 		compatible = "silabs,si3226x";
+ 		reg = <0>;
+ 		spi-max-frequency = <5000000>;
+-		spi-cpha = <1>;
+-		spi-cpol = <1>;
++		spi-cpha;
++		spi-cpol;
+ 		pl022,hierarchy = <0>;
+ 		pl022,interface = <0>;
+ 		pl022,slave-tx-disable = <0>;
+@@ -135,8 +135,8 @@
+ 		at25,byte-len = <0x8000>;
+ 		at25,addr-mode = <2>;
+ 		at25,page-size = <64>;
+-		spi-cpha = <1>;
+-		spi-cpol = <1>;
++		spi-cpha;
++		spi-cpol;
+ 		pl022,hierarchy = <0>;
+ 		pl022,interface = <0>;
+ 		pl022,slave-tx-disable = <0>;
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+index 2cfeaf3b0a876..8c218689fef70 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+@@ -687,7 +687,7 @@
+ 			};
+ 		};
+ 
+-		sata: ahci@663f2000 {
++		sata: sata@663f2000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+ 			reg = <0x663f2000 0x1000>;
+ 			dma-coherent;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
+index 01b01e3204118..35d1939e690b0 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
+@@ -536,9 +536,9 @@
+ 			clock-names = "i2c";
+ 			clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ 					    QORIQ_CLK_PLL_DIV(1)>;
+-			dmas = <&edma0 1 39>,
+-			       <&edma0 1 38>;
+-			dma-names = "tx", "rx";
++			dmas = <&edma0 1 38>,
++			       <&edma0 1 39>;
++			dma-names = "rx", "tx";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+index 687fea6d8afa4..4e7bd04d97984 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+@@ -499,9 +499,9 @@
+ 			interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ 					    QORIQ_CLK_PLL_DIV(2)>;
+-			dmas = <&edma0 1 39>,
+-			       <&edma0 1 38>;
+-			dma-names = "tx", "rx";
++			dmas = <&edma0 1 38>,
++			       <&edma0 1 39>;
++			dma-names = "rx", "tx";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 66ec5615651d4..5dea37651adfc 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -748,7 +748,7 @@
+ 				snps,hird-threshold = /bits/ 8 <0x0>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_u3_susphy_quirk;
+-				snps,ref-clock-period-ns = <0x32>;
++				snps,ref-clock-period-ns = <0x29>;
+ 				dr_mode = "host";
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 5a9a5ed0565f6..215f56daa26c2 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -713,6 +713,9 @@
+ 			#reset-cells = <1>;
+ 			#power-domain-cells = <1>;
+ 			reg = <0xfc400000 0x2000>;
++
++			clock-names = "xo", "sleep_clk";
++			clocks = <&xo_board>, <&sleep_clk>;
+ 		};
+ 
+ 		rpm_msg_ram: sram@fc428000 {
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 6e27a1beaa33a..68a5740bc3604 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -1785,7 +1785,7 @@
+ 			};
+ 		};
+ 
+-		gmu: gmu@3d69000 {
++		gmu: gmu@3d6a000 {
+ 			compatible="qcom,adreno-gmu-635.0", "qcom,adreno-gmu";
+ 			reg = <0 0x03d6a000 0 0x34000>,
+ 				<0 0x3de0000 0 0x10000>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 526087586ba42..0dda3a378158d 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -3613,10 +3613,10 @@
+ 					#clock-cells = <0>;
+ 					clock-frequency = <9600000>;
+ 					clock-output-names = "mclk";
+-					qcom,micbias1-millivolt = <1800>;
+-					qcom,micbias2-millivolt = <1800>;
+-					qcom,micbias3-millivolt = <1800>;
+-					qcom,micbias4-millivolt = <1800>;
++					qcom,micbias1-microvolt = <1800000>;
++					qcom,micbias2-microvolt = <1800000>;
++					qcom,micbias3-microvolt = <1800000>;
++					qcom,micbias4-microvolt = <1800000>;
+ 
+ 					#address-cells = <1>;
+ 					#size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 81b4ff2cc4cdd..37f758cc4cc7b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -3557,9 +3557,9 @@
+ 			qcom,tcs-offset = <0xd00>;
+ 			qcom,drv-id = <2>;
+ 			qcom,tcs-config = <ACTIVE_TCS  2>,
+-					  <SLEEP_TCS   1>,
+-					  <WAKE_TCS    1>,
+-					  <CONTROL_TCS 0>;
++					  <SLEEP_TCS   3>,
++					  <WAKE_TCS    3>,
++					  <CONTROL_TCS 1>;
+ 
+ 			rpmhcc: clock-controller {
+ 				compatible = "qcom,sm8150-rpmh-clk";
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 6f6129b39c9cc..7d1ec0432020c 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -1426,8 +1426,8 @@
+ 			phys = <&pcie0_lane>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpio = <&tlmm 79 GPIO_ACTIVE_LOW>;
+-			enable-gpio = <&tlmm 81 GPIO_ACTIVE_HIGH>;
++			perst-gpios = <&tlmm 79 GPIO_ACTIVE_LOW>;
++			wake-gpios = <&tlmm 81 GPIO_ACTIVE_HIGH>;
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie0_default_state>;
+@@ -1487,7 +1487,7 @@
+ 			ranges = <0x01000000 0x0 0x40200000 0x0 0x40200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+-			interrupts = <GIC_SPI 306 IRQ_TYPE_EDGE_RISING>;
++			interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+@@ -1530,8 +1530,8 @@
+ 			phys = <&pcie1_lane>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpio = <&tlmm 82 GPIO_ACTIVE_LOW>;
+-			enable-gpio = <&tlmm 84 GPIO_ACTIVE_HIGH>;
++			perst-gpios = <&tlmm 82 GPIO_ACTIVE_LOW>;
++			wake-gpios = <&tlmm 84 GPIO_ACTIVE_HIGH>;
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie1_default_state>;
+@@ -1593,7 +1593,7 @@
+ 			ranges = <0x01000000 0x0 0x64200000 0x0 0x64200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x64300000 0x0 0x64300000 0x0 0x3d00000>;
+ 
+-			interrupts = <GIC_SPI 236 IRQ_TYPE_EDGE_RISING>;
++			interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+@@ -1636,8 +1636,8 @@
+ 			phys = <&pcie2_lane>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpio = <&tlmm 85 GPIO_ACTIVE_LOW>;
+-			enable-gpio = <&tlmm 87 GPIO_ACTIVE_HIGH>;
++			perst-gpios = <&tlmm 85 GPIO_ACTIVE_LOW>;
++			wake-gpios = <&tlmm 87 GPIO_ACTIVE_HIGH>;
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie2_default_state>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 1a70a70ed056e..7c798c9667559 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1124,7 +1124,7 @@
+ 			qcom,tcs-offset = <0xd00>;
+ 			qcom,drv-id = <2>;
+ 			qcom,tcs-config = <ACTIVE_TCS  2>, <SLEEP_TCS   3>,
+-					  <WAKE_TCS    3>, <CONTROL_TCS 1>;
++					  <WAKE_TCS    3>, <CONTROL_TCS 0>;
+ 
+ 			rpmhcc: clock-controller {
+ 				compatible = "qcom,sm8350-rpmh-clk";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts b/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
+index c4dd2a6b48368..f81ce3240342c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
+@@ -770,8 +770,8 @@
+ 	sd-uhs-sdr104;
+ 
+ 	/* Power supply */
+-	vqmmc-supply = &vcc1v8_s3;	/* IO line */
+-	vmmc-supply = &vcc_sdio;	/* card's power */
++	vqmmc-supply = <&vcc1v8_s3>;	/* IO line */
++	vmmc-supply = <&vcc_sdio>;	/* card's power */
+ 
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+index 5ad638b95ffc7..656b279b41da6 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+@@ -59,7 +59,10 @@
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+ 		reg = <0x00 0x01800000 0x00 0x10000>,	/* GICD */
+-		      <0x00 0x01840000 0x00 0xC0000>;	/* GICR */
++		      <0x00 0x01840000 0x00 0xC0000>,	/* GICR */
++		      <0x01 0x00000000 0x00 0x2000>,	/* GICC */
++		      <0x01 0x00010000 0x00 0x1000>,	/* GICH */
++		      <0x01 0x00020000 0x00 0x2000>;	/* GICV */
+ 		/*
+ 		 * vcpumntirq:
+ 		 * virtual CPU interface maintenance interrupt
+diff --git a/arch/arm64/boot/dts/ti/k3-am64.dtsi b/arch/arm64/boot/dts/ti/k3-am64.dtsi
+index 120974726be81..19684865d0d68 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64.dtsi
+@@ -87,6 +87,7 @@
+ 			 <0x00 0x68000000 0x00 0x68000000 0x00 0x08000000>, /* PCIe DAT0 */
+ 			 <0x00 0x70000000 0x00 0x70000000 0x00 0x00200000>, /* OC SRAM */
+ 			 <0x00 0x78000000 0x00 0x78000000 0x00 0x00800000>, /* Main R5FSS */
++			 <0x01 0x00000000 0x01 0x00000000 0x00 0x00310000>, /* A53 PERIPHBASE */
+ 			 <0x06 0x00000000 0x06 0x00000000 0x01 0x00000000>, /* PCIe DAT1 */
+ 			 <0x05 0x00000000 0x05 0x00000000 0x01 0x00000000>, /* FSS0 DAT3 */
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index ce8bb4a61011e..e749343accedd 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -35,7 +35,10 @@
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+ 		reg = <0x00 0x01800000 0x00 0x10000>,	/* GICD */
+-		      <0x00 0x01880000 0x00 0x90000>;	/* GICR */
++		      <0x00 0x01880000 0x00 0x90000>,	/* GICR */
++		      <0x00 0x6f000000 0x00 0x2000>,	/* GICC */
++		      <0x00 0x6f010000 0x00 0x1000>,	/* GICH */
++		      <0x00 0x6f020000 0x00 0x2000>;	/* GICV */
+ 		/*
+ 		 * vcpumntirq:
+ 		 * virtual CPU interface maintenance interrupt
+diff --git a/arch/arm64/boot/dts/ti/k3-am65.dtsi b/arch/arm64/boot/dts/ti/k3-am65.dtsi
+index a58a39fa42dbc..c538a0bf3cdda 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65.dtsi
+@@ -86,6 +86,7 @@
+ 			 <0x00 0x46000000 0x00 0x46000000 0x00 0x00200000>,
+ 			 <0x00 0x47000000 0x00 0x47000000 0x00 0x00068400>,
+ 			 <0x00 0x50000000 0x00 0x50000000 0x00 0x8000000>,
++			 <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A53 PERIPHBASE */
+ 			 <0x00 0x70000000 0x00 0x70000000 0x00 0x200000>,
+ 			 <0x05 0x00000000 0x05 0x00000000 0x01 0x0000000>,
+ 			 <0x07 0x00000000 0x07 0x00000000 0x01 0x0000000>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 05a627ad6cdc4..16684a2f054d9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -54,7 +54,10 @@
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+ 		reg = <0x00 0x01800000 0x00 0x10000>,	/* GICD */
+-		      <0x00 0x01900000 0x00 0x100000>;	/* GICR */
++		      <0x00 0x01900000 0x00 0x100000>,	/* GICR */
++		      <0x00 0x6f000000 0x00 0x2000>,	/* GICC */
++		      <0x00 0x6f010000 0x00 0x1000>,	/* GICH */
++		      <0x00 0x6f020000 0x00 0x2000>;	/* GICV */
+ 
+ 		/* vcpumntirq: virtual CPU interface maintenance interrupt */
+ 		interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200.dtsi b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+index 64fef4e67d76a..b6da0454cc5bd 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+@@ -129,6 +129,7 @@
+ 			 <0x00 0x00a40000 0x00 0x00a40000 0x00 0x00000800>, /* timesync router */
+ 			 <0x00 0x01000000 0x00 0x01000000 0x00 0x0d000000>, /* Most peripherals */
+ 			 <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>, /* MAIN NAVSS */
++			 <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
+ 			 <0x00 0x70000000 0x00 0x70000000 0x00 0x00800000>, /* MSMC RAM */
+ 			 <0x00 0x18000000 0x00 0x18000000 0x00 0x08000000>, /* PCIe1 DAT0 */
+ 			 <0x41 0x00000000 0x41 0x00000000 0x01 0x00000000>, /* PCIe1 DAT1 */
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index e85c89eebfa31..6c81997ee28ad 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -76,7 +76,10 @@
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+ 		reg = <0x00 0x01800000 0x00 0x10000>,	/* GICD */
+-		      <0x00 0x01900000 0x00 0x100000>;	/* GICR */
++		      <0x00 0x01900000 0x00 0x100000>,	/* GICR */
++		      <0x00 0x6f000000 0x00 0x2000>,	/* GICC */
++		      <0x00 0x6f010000 0x00 0x1000>,	/* GICH */
++		      <0x00 0x6f020000 0x00 0x2000>;	/* GICV */
+ 
+ 		/* vcpumntirq: virtual CPU interface maintenance interrupt */
+ 		interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e.dtsi b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+index 4a3872fce5339..0e23886c9fd1d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+@@ -139,6 +139,7 @@
+ 			 <0x00 0x0e000000 0x00 0x0e000000 0x00 0x01800000>, /* PCIe Core*/
+ 			 <0x00 0x10000000 0x00 0x10000000 0x00 0x10000000>, /* PCIe DAT */
+ 			 <0x00 0x64800000 0x00 0x64800000 0x00 0x00800000>, /* C71 */
++			 <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
+ 			 <0x44 0x00000000 0x44 0x00000000 0x00 0x08000000>, /* PCIe2 DAT */
+ 			 <0x44 0x10000000 0x44 0x10000000 0x00 0x08000000>, /* PCIe3 DAT */
+ 			 <0x4d 0x80800000 0x4d 0x80800000 0x00 0x00800000>, /* C66_0 */
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index f2e2b9bdd7024..d1fdf68be26ef 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -931,7 +931,7 @@ CONFIG_DMADEVICES=y
+ CONFIG_DMA_BCM2835=y
+ CONFIG_DMA_SUN6I=m
+ CONFIG_FSL_EDMA=y
+-CONFIG_IMX_SDMA=y
++CONFIG_IMX_SDMA=m
+ CONFIG_K3_DMA=y
+ CONFIG_MV_XOR=y
+ CONFIG_MV_XOR_V2=y
+diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h
+index a11ccadd47d29..094701ec5500b 100644
+--- a/arch/arm64/include/asm/module.lds.h
++++ b/arch/arm64/include/asm/module.lds.h
+@@ -1,8 +1,8 @@
+ SECTIONS {
+ #ifdef CONFIG_ARM64_MODULE_PLTS
+-	.plt 0 (NOLOAD) : { BYTE(0) }
+-	.init.plt 0 (NOLOAD) : { BYTE(0) }
+-	.text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
++	.plt 0 : { BYTE(0) }
++	.init.plt 0 : { BYTE(0) }
++	.text.ftrace_trampoline 0 : { BYTE(0) }
+ #endif
+ 
+ #ifdef CONFIG_KASAN_SW_TAGS
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index 86e0cc9b9c685..aa3d3607d5c8d 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -67,7 +67,8 @@ struct bp_hardening_data {
+ 
+ DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+ 
+-static inline void arm64_apply_bp_hardening(void)
++/* Called during entry so must be __always_inline */
++static __always_inline void arm64_apply_bp_hardening(void)
+ {
+ 	struct bp_hardening_data *d;
+ 
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 6d45c63c64548..5777929d35bf4 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -233,17 +233,20 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn)
+ 	__this_cpu_write(bp_hardening_data.slot, HYP_VECTOR_SPECTRE_DIRECT);
+ }
+ 
+-static void call_smc_arch_workaround_1(void)
++/* Called during entry so must be noinstr */
++static noinstr void call_smc_arch_workaround_1(void)
+ {
+ 	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+ }
+ 
+-static void call_hvc_arch_workaround_1(void)
++/* Called during entry so must be noinstr */
++static noinstr void call_hvc_arch_workaround_1(void)
+ {
+ 	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+ }
+ 
+-static void qcom_link_stack_sanitisation(void)
++/* Called during entry so must be noinstr */
++static noinstr void qcom_link_stack_sanitisation(void)
+ {
+ 	u64 tmp;
+ 
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 8f6372b44b658..da5de3d27f11b 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -577,10 +577,12 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
+ {
+ 	int err;
+ 
+-	err = sigframe_alloc(user, &user->fpsimd_offset,
+-			     sizeof(struct fpsimd_context));
+-	if (err)
+-		return err;
++	if (system_supports_fpsimd()) {
++		err = sigframe_alloc(user, &user->fpsimd_offset,
++				     sizeof(struct fpsimd_context));
++		if (err)
++			return err;
++	}
+ 
+ 	/* fault information, if valid */
+ 	if (add_all || current->thread.fault_code) {
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index a8834434af99a..2d9cac6df29e0 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -61,8 +61,34 @@ EXPORT_SYMBOL(memstart_addr);
+  * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4).
+  * In such case, ZONE_DMA32 covers the rest of the 32-bit addressable memory,
+  * otherwise it is empty.
++ *
++ * Memory reservation for crash kernel either done early or deferred
++ * depending on DMA memory zones configs (ZONE_DMA) --
++ *
++ * In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
++ * here instead of max_zone_phys().  This lets early reservation of
++ * crash kernel memory which has a dependency on arm64_dma_phys_limit.
++ * Reserving memory early for crash kernel allows linear creation of block
++ * mappings (greater than page-granularity) for all the memory bank rangs.
++ * In this scheme a comparatively quicker boot is observed.
++ *
++ * If ZONE_DMA configs are defined, crash kernel memory reservation
++ * is delayed until DMA zone memory range size initilazation performed in
++ * zone_sizes_init().  The defer is necessary to steer clear of DMA zone
++ * memory range to avoid overlap allocation.  So crash kernel memory boundaries
++ * are not known when mapping all bank memory ranges, which otherwise means
++ * not possible to exclude crash kernel range from creating block mappings
++ * so page-granularity mappings are created for the entire memory range.
++ * Hence a slightly slower boot is observed.
++ *
++ * Note: Page-granularity mapppings are necessary for crash kernel memory
++ * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
+  */
+-phys_addr_t arm64_dma_phys_limit __ro_after_init;
++#if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
++phys_addr_t __ro_after_init arm64_dma_phys_limit;
++#else
++phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
++#endif
+ 
+ #ifdef CONFIG_KEXEC_CORE
+ /*
+@@ -153,8 +179,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ 	if (!arm64_dma_phys_limit)
+ 		arm64_dma_phys_limit = dma32_phys_limit;
+ #endif
+-	if (!arm64_dma_phys_limit)
+-		arm64_dma_phys_limit = PHYS_MASK + 1;
+ 	max_zone_pfns[ZONE_NORMAL] = max;
+ 
+ 	free_area_init(max_zone_pfns);
+@@ -315,6 +339,9 @@ void __init arm64_memblock_init(void)
+ 
+ 	early_init_fdt_scan_reserved_mem();
+ 
++	if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
++		reserve_crashkernel();
++
+ 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
+ }
+ 
+@@ -361,7 +388,8 @@ void __init bootmem_init(void)
+ 	 * request_standard_resources() depends on crashkernel's memory being
+ 	 * reserved, so do it here.
+ 	 */
+-	reserve_crashkernel();
++	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
++		reserve_crashkernel();
+ 
+ 	memblock_dump_all();
+ }
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 49abbf43bf355..37b8230cda6a8 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -63,6 +63,7 @@ static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
+ static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
+ 
+ static DEFINE_SPINLOCK(swapper_pgdir_lock);
++static DEFINE_MUTEX(fixmap_lock);
+ 
+ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
+ {
+@@ -329,6 +330,12 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ 	}
+ 	BUG_ON(p4d_bad(p4d));
+ 
++	/*
++	 * No need for locking during early boot. And it doesn't work as
++	 * expected with KASLR enabled.
++	 */
++	if (system_state != SYSTEM_BOOTING)
++		mutex_lock(&fixmap_lock);
+ 	pudp = pud_set_fixmap_offset(p4dp, addr);
+ 	do {
+ 		pud_t old_pud = READ_ONCE(*pudp);
+@@ -359,6 +366,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ 	} while (pudp++, addr = next, addr != end);
+ 
+ 	pud_clear_fixmap();
++	if (system_state != SYSTEM_BOOTING)
++		mutex_unlock(&fixmap_lock);
+ }
+ 
+ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+@@ -517,7 +526,7 @@ static void __init map_mem(pgd_t *pgdp)
+ 	 */
+ 	BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
+ 
+-	if (can_set_direct_map() || crash_mem_map || IS_ENABLED(CONFIG_KFENCE))
++	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
+ 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+ 
+ 	/*
+@@ -528,6 +537,17 @@ static void __init map_mem(pgd_t *pgdp)
+ 	 */
+ 	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+ 
++#ifdef CONFIG_KEXEC_CORE
++	if (crash_mem_map) {
++		if (IS_ENABLED(CONFIG_ZONE_DMA) ||
++		    IS_ENABLED(CONFIG_ZONE_DMA32))
++			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
++		else if (crashk_res.end)
++			memblock_mark_nomap(crashk_res.start,
++			    resource_size(&crashk_res));
++	}
++#endif
++
+ 	/* map all the memory banks */
+ 	for_each_mem_range(i, &start, &end) {
+ 		if (start >= end)
+@@ -554,6 +574,25 @@ static void __init map_mem(pgd_t *pgdp)
+ 	__map_memblock(pgdp, kernel_start, kernel_end,
+ 		       PAGE_KERNEL, NO_CONT_MAPPINGS);
+ 	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
++
++	/*
++	 * Use page-level mappings here so that we can shrink the region
++	 * in page granularity and put back unused memory to buddy system
++	 * through /sys/kernel/kexec_crash_size interface.
++	 */
++#ifdef CONFIG_KEXEC_CORE
++	if (crash_mem_map &&
++	    !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
++		if (crashk_res.end) {
++			__map_memblock(pgdp, crashk_res.start,
++				       crashk_res.end + 1,
++				       PAGE_KERNEL,
++				       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
++			memblock_clear_nomap(crashk_res.start,
++					     resource_size(&crashk_res));
++		}
++	}
++#endif
+ }
+ 
+ void mark_rodata_ro(void)
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 71ef9dcd9b578..4d4e6ae39e56a 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1049,15 +1049,18 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 		goto out_off;
+ 	}
+ 
+-	/* 1. Initial fake pass to compute ctx->idx. */
+-
+-	/* Fake pass to fill in ctx->offset. */
+-	if (build_body(&ctx, extra_pass)) {
++	/*
++	 * 1. Initial fake pass to compute ctx->idx and ctx->offset.
++	 *
++	 * BPF line info needs ctx->offset[i] to be the offset of
++	 * instruction[i] in jited image, so build prologue first.
++	 */
++	if (build_prologue(&ctx, was_classic)) {
+ 		prog = orig_prog;
+ 		goto out_off;
+ 	}
+ 
+-	if (build_prologue(&ctx, was_classic)) {
++	if (build_body(&ctx, extra_pass)) {
+ 		prog = orig_prog;
+ 		goto out_off;
+ 	}
+@@ -1130,6 +1133,11 @@ skip_init_ctx:
+ 	prog->jited_len = prog_size;
+ 
+ 	if (!prog->is_func || extra_pass) {
++		int i;
++
++		/* offset[prog->len] is the size of program */
++		for (i = 0; i <= prog->len; i++)
++			ctx.offset[i] *= AARCH64_INSN_SIZE;
+ 		bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
+ out_off:
+ 		kfree(ctx.offset);
+diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
+index 35318a635a5fa..75e1f9df5f604 100644
+--- a/arch/csky/kernel/perf_callchain.c
++++ b/arch/csky/kernel/perf_callchain.c
+@@ -49,7 +49,7 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ {
+ 	struct stackframe buftail;
+ 	unsigned long lr = 0;
+-	unsigned long *user_frame_tail = (unsigned long *)fp;
++	unsigned long __user *user_frame_tail = (unsigned long __user *)fp;
+ 
+ 	/* Check accessibility of one struct frame_tail beyond */
+ 	if (!access_ok(user_frame_tail, sizeof(buftail)))
+diff --git a/arch/csky/kernel/signal.c b/arch/csky/kernel/signal.c
+index c7b763d2f526e..8867ddf3e6c77 100644
+--- a/arch/csky/kernel/signal.c
++++ b/arch/csky/kernel/signal.c
+@@ -136,7 +136,7 @@ static inline void __user *get_sigframe(struct ksignal *ksig,
+ static int
+ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)
+ {
+-	struct rt_sigframe *frame;
++	struct rt_sigframe __user *frame;
+ 	int err = 0;
+ 
+ 	frame = get_sigframe(ksig, regs, sizeof(*frame));
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 0386252e9d043..4218750414bbf 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -480,7 +480,7 @@ static struct platform_device mcf_i2c5 = {
+ #endif /* MCFI2C_BASE5 */
+ #endif /* IS_ENABLED(CONFIG_I2C_IMX) */
+ 
+-#if IS_ENABLED(CONFIG_MCF_EDMA)
++#ifdef MCFEDMA_BASE
+ 
+ static const struct dma_slave_map mcf_edma_map[] = {
+ 	{ "dreq0", "rx-tx", MCF_EDMA_FILTER_PARAM(0) },
+@@ -552,7 +552,7 @@ static struct platform_device mcf_edma = {
+ 		.platform_data = &mcf_edma_data,
+ 	}
+ };
+-#endif /* IS_ENABLED(CONFIG_MCF_EDMA) */
++#endif /* MCFEDMA_BASE */
+ 
+ #ifdef MCFSDHC_BASE
+ static struct mcf_esdhc_platform_data mcf_esdhc_data = {
+@@ -651,7 +651,7 @@ static struct platform_device *mcf_devices[] __initdata = {
+ 	&mcf_i2c5,
+ #endif
+ #endif
+-#if IS_ENABLED(CONFIG_MCF_EDMA)
++#ifdef MCFEDMA_BASE
+ 	&mcf_edma,
+ #endif
+ #ifdef MCFSDHC_BASE
+diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
+index 5b6e0e7788f44..3fe96979d2c62 100644
+--- a/arch/microblaze/include/asm/uaccess.h
++++ b/arch/microblaze/include/asm/uaccess.h
+@@ -130,27 +130,27 @@ extern long __user_bad(void);
+ 
+ #define __get_user(x, ptr)						\
+ ({									\
+-	unsigned long __gu_val = 0;					\
+ 	long __gu_err;							\
+ 	switch (sizeof(*(ptr))) {					\
+ 	case 1:								\
+-		__get_user_asm("lbu", (ptr), __gu_val, __gu_err);	\
++		__get_user_asm("lbu", (ptr), x, __gu_err);		\
+ 		break;							\
+ 	case 2:								\
+-		__get_user_asm("lhu", (ptr), __gu_val, __gu_err);	\
++		__get_user_asm("lhu", (ptr), x, __gu_err);		\
+ 		break;							\
+ 	case 4:								\
+-		__get_user_asm("lw", (ptr), __gu_val, __gu_err);	\
++		__get_user_asm("lw", (ptr), x, __gu_err);		\
+ 		break;							\
+-	case 8:								\
+-		__gu_err = __copy_from_user(&__gu_val, ptr, 8);		\
+-		if (__gu_err)						\
+-			__gu_err = -EFAULT;				\
++	case 8: {							\
++		__u64 __x = 0;						\
++		__gu_err = raw_copy_from_user(&__x, ptr, 8) ?		\
++							-EFAULT : 0;	\
++		(x) = (typeof(x))(typeof((x) - (x)))__x;		\
+ 		break;							\
++	}								\
+ 	default:							\
+ 		/* __gu_val = 0; __gu_err = -EINVAL;*/ __gu_err = __user_bad();\
+ 	}								\
+-	x = (__force __typeof__(*(ptr))) __gu_val;			\
+ 	__gu_err;							\
+ })
+ 
+diff --git a/arch/mips/crypto/crc32-mips.c b/arch/mips/crypto/crc32-mips.c
+index 0a03529cf3178..3e4f5ba104f89 100644
+--- a/arch/mips/crypto/crc32-mips.c
++++ b/arch/mips/crypto/crc32-mips.c
+@@ -28,7 +28,7 @@ enum crc_type {
+ };
+ 
+ #ifndef TOOLCHAIN_SUPPORTS_CRC
+-#define _ASM_MACRO_CRC32(OP, SZ, TYPE)					  \
++#define _ASM_SET_CRC(OP, SZ, TYPE)					  \
+ _ASM_MACRO_3R(OP, rt, rs, rt2,						  \
+ 	".ifnc	\\rt, \\rt2\n\t"					  \
+ 	".error	\"invalid operands \\\"" #OP " \\rt,\\rs,\\rt2\\\"\"\n\t" \
+@@ -37,30 +37,36 @@ _ASM_MACRO_3R(OP, rt, rs, rt2,						  \
+ 			  ((SZ) <<  6) | ((TYPE) << 8))			  \
+ 	_ASM_INSN32_IF_MM(0x00000030 | (__rs << 16) | (__rt << 21) |	  \
+ 			  ((SZ) << 14) | ((TYPE) << 3)))
+-_ASM_MACRO_CRC32(crc32b,  0, 0);
+-_ASM_MACRO_CRC32(crc32h,  1, 0);
+-_ASM_MACRO_CRC32(crc32w,  2, 0);
+-_ASM_MACRO_CRC32(crc32d,  3, 0);
+-_ASM_MACRO_CRC32(crc32cb, 0, 1);
+-_ASM_MACRO_CRC32(crc32ch, 1, 1);
+-_ASM_MACRO_CRC32(crc32cw, 2, 1);
+-_ASM_MACRO_CRC32(crc32cd, 3, 1);
+-#define _ASM_SET_CRC ""
++#define _ASM_UNSET_CRC(op, SZ, TYPE) ".purgem " #op "\n\t"
+ #else /* !TOOLCHAIN_SUPPORTS_CRC */
+-#define _ASM_SET_CRC ".set\tcrc\n\t"
++#define _ASM_SET_CRC(op, SZ, TYPE) ".set\tcrc\n\t"
++#define _ASM_UNSET_CRC(op, SZ, TYPE)
+ #endif
+ 
+-#define _CRC32(crc, value, size, type)		\
+-do {						\
+-	__asm__ __volatile__(			\
+-		".set	push\n\t"		\
+-		_ASM_SET_CRC			\
+-		#type #size "	%0, %1, %0\n\t"	\
+-		".set	pop"			\
+-		: "+r" (crc)			\
+-		: "r" (value));			\
++#define __CRC32(crc, value, op, SZ, TYPE)		\
++do {							\
++	__asm__ __volatile__(				\
++		".set	push\n\t"			\
++		_ASM_SET_CRC(op, SZ, TYPE)		\
++		#op "	%0, %1, %0\n\t"			\
++		_ASM_UNSET_CRC(op, SZ, TYPE)		\
++		".set	pop"				\
++		: "+r" (crc)				\
++		: "r" (value));				\
+ } while (0)
+ 
++#define _CRC32_crc32b(crc, value)	__CRC32(crc, value, crc32b, 0, 0)
++#define _CRC32_crc32h(crc, value)	__CRC32(crc, value, crc32h, 1, 0)
++#define _CRC32_crc32w(crc, value)	__CRC32(crc, value, crc32w, 2, 0)
++#define _CRC32_crc32d(crc, value)	__CRC32(crc, value, crc32d, 3, 0)
++#define _CRC32_crc32cb(crc, value)	__CRC32(crc, value, crc32cb, 0, 1)
++#define _CRC32_crc32ch(crc, value)	__CRC32(crc, value, crc32ch, 1, 1)
++#define _CRC32_crc32cw(crc, value)	__CRC32(crc, value, crc32cw, 2, 1)
++#define _CRC32_crc32cd(crc, value)	__CRC32(crc, value, crc32cd, 3, 1)
++
++#define _CRC32(crc, value, size, op) \
++	_CRC32_##op##size(crc, value)
++
+ #define CRC32(crc, value, size) \
+ 	_CRC32(crc, value, size, crc32)
+ 
+diff --git a/arch/mips/dec/int-handler.S b/arch/mips/dec/int-handler.S
+index ea5b5a83f1e11..011d1d678840a 100644
+--- a/arch/mips/dec/int-handler.S
++++ b/arch/mips/dec/int-handler.S
+@@ -131,7 +131,7 @@
+ 		 */
+ 		mfc0	t0,CP0_CAUSE		# get pending interrupts
+ 		mfc0	t1,CP0_STATUS
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ 		lw	t2,cpu_fpu_mask
+ #endif
+ 		andi	t0,ST0_IM		# CAUSE.CE may be non-zero!
+@@ -139,7 +139,7 @@
+ 
+ 		beqz	t0,spurious
+ 
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ 		 and	t2,t0
+ 		bnez	t2,fpu			# handle FPU immediately
+ #endif
+@@ -280,7 +280,7 @@ handle_it:
+ 		j	dec_irq_dispatch
+ 		 nop
+ 
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ fpu:
+ 		lw	t0,fpu_kstat_irq
+ 		nop
+diff --git a/arch/mips/dec/prom/Makefile b/arch/mips/dec/prom/Makefile
+index d95016016b42b..2bad87551203b 100644
+--- a/arch/mips/dec/prom/Makefile
++++ b/arch/mips/dec/prom/Makefile
+@@ -6,4 +6,4 @@
+ 
+ lib-y			+= init.o memory.o cmdline.o identify.o console.o
+ 
+-lib-$(CONFIG_32BIT)	+= locore.o
++lib-$(CONFIG_CPU_R3000)	+= locore.o
+diff --git a/arch/mips/dec/setup.c b/arch/mips/dec/setup.c
+index a8a30bb1dee8c..82b00e45ce50a 100644
+--- a/arch/mips/dec/setup.c
++++ b/arch/mips/dec/setup.c
+@@ -746,7 +746,8 @@ void __init arch_init_irq(void)
+ 		dec_interrupt[DEC_IRQ_HALT] = -1;
+ 
+ 	/* Register board interrupts: FPU and cascade. */
+-	if (dec_interrupt[DEC_IRQ_FPU] >= 0 && cpu_has_fpu) {
++	if (IS_ENABLED(CONFIG_MIPS_FP_SUPPORT) &&
++	    dec_interrupt[DEC_IRQ_FPU] >= 0 && cpu_has_fpu) {
+ 		struct irq_desc *desc_fpu;
+ 		int irq_fpu;
+ 
+diff --git a/arch/mips/include/asm/dec/prom.h b/arch/mips/include/asm/dec/prom.h
+index 62c7dfb90e06c..1e1247add1cf8 100644
+--- a/arch/mips/include/asm/dec/prom.h
++++ b/arch/mips/include/asm/dec/prom.h
+@@ -43,16 +43,11 @@
+  */
+ #define REX_PROM_MAGIC		0x30464354
+ 
+-#ifdef CONFIG_64BIT
+-
+-#define prom_is_rex(magic)	1	/* KN04 and KN05 are REX PROMs.  */
+-
+-#else /* !CONFIG_64BIT */
+-
+-#define prom_is_rex(magic)	((magic) == REX_PROM_MAGIC)
+-
+-#endif /* !CONFIG_64BIT */
+-
++/* KN04 and KN05 are REX PROMs, so only do the check for R3k systems.  */
++static inline bool prom_is_rex(u32 magic)
++{
++	return !IS_ENABLED(CONFIG_CPU_R3000) || magic == REX_PROM_MAGIC;
++}
+ 
+ /*
+  * 3MIN/MAXINE PROM entry points for DS5000/1xx's, DS5000/xx's and
+diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
+index c7925d0e98746..867e9c3db76e9 100644
+--- a/arch/mips/include/asm/pgalloc.h
++++ b/arch/mips/include/asm/pgalloc.h
+@@ -15,6 +15,7 @@
+ 
+ #define __HAVE_ARCH_PMD_ALLOC_ONE
+ #define __HAVE_ARCH_PUD_ALLOC_ONE
++#define __HAVE_ARCH_PGD_FREE
+ #include <asm-generic/pgalloc.h>
+ 
+ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
+@@ -48,6 +49,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+ extern void pgd_init(unsigned long page);
+ extern pgd_t *pgd_alloc(struct mm_struct *mm);
+ 
++static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
++{
++	free_pages((unsigned long)pgd, PGD_ORDER);
++}
++
+ #define __pte_free_tlb(tlb,pte,address)			\
+ do {							\
+ 	pgtable_pte_page_dtor(pte);			\
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index b131e6a773832..5cda07688f67a 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -2160,16 +2160,14 @@ static void build_r4000_tlb_load_handler(void)
+ 		uasm_i_tlbr(&p);
+ 
+ 		switch (current_cpu_type()) {
+-		default:
+-			if (cpu_has_mips_r2_exec_hazard) {
+-				uasm_i_ehb(&p);
+-			fallthrough;
+-
+ 		case CPU_CAVIUM_OCTEON:
+ 		case CPU_CAVIUM_OCTEON_PLUS:
+ 		case CPU_CAVIUM_OCTEON2:
+-				break;
+-			}
++			break;
++		default:
++			if (cpu_has_mips_r2_exec_hazard)
++				uasm_i_ehb(&p);
++			break;
+ 		}
+ 
+ 		/* Examine  entrylo 0 or 1 based on ptr. */
+@@ -2236,15 +2234,14 @@ static void build_r4000_tlb_load_handler(void)
+ 		uasm_i_tlbr(&p);
+ 
+ 		switch (current_cpu_type()) {
+-		default:
+-			if (cpu_has_mips_r2_exec_hazard) {
+-				uasm_i_ehb(&p);
+-
+ 		case CPU_CAVIUM_OCTEON:
+ 		case CPU_CAVIUM_OCTEON_PLUS:
+ 		case CPU_CAVIUM_OCTEON2:
+-				break;
+-			}
++			break;
++		default:
++			if (cpu_has_mips_r2_exec_hazard)
++				uasm_i_ehb(&p);
++			break;
+ 		}
+ 
+ 		/* Examine  entrylo 0 or 1 based on ptr. */
+diff --git a/arch/mips/rb532/devices.c b/arch/mips/rb532/devices.c
+index 04684990e28ef..b7f6f782d9a13 100644
+--- a/arch/mips/rb532/devices.c
++++ b/arch/mips/rb532/devices.c
+@@ -301,11 +301,9 @@ static int __init plat_setup_devices(void)
+ static int __init setup_kmac(char *s)
+ {
+ 	printk(KERN_INFO "korina mac = %s\n", s);
+-	if (!mac_pton(s, korina_dev0_data.mac)) {
++	if (!mac_pton(s, korina_dev0_data.mac))
+ 		printk(KERN_ERR "Invalid mac\n");
+-		return -EINVAL;
+-	}
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("kmac=", setup_kmac);
+diff --git a/arch/nios2/include/asm/uaccess.h b/arch/nios2/include/asm/uaccess.h
+index ba9340e96fd4c..ca9285a915efa 100644
+--- a/arch/nios2/include/asm/uaccess.h
++++ b/arch/nios2/include/asm/uaccess.h
+@@ -88,6 +88,7 @@ extern __must_check long strnlen_user(const char __user *s, long n);
+ /* Optimized macros */
+ #define __get_user_asm(val, insn, addr, err)				\
+ {									\
++	unsigned long __gu_val;						\
+ 	__asm__ __volatile__(						\
+ 	"       movi    %0, %3\n"					\
+ 	"1:   " insn " %1, 0(%2)\n"					\
+@@ -96,14 +97,20 @@ extern __must_check long strnlen_user(const char __user *s, long n);
+ 	"       .section __ex_table,\"a\"\n"				\
+ 	"       .word 1b, 2b\n"						\
+ 	"       .previous"						\
+-	: "=&r" (err), "=r" (val)					\
++	: "=&r" (err), "=r" (__gu_val)					\
+ 	: "r" (addr), "i" (-EFAULT));					\
++	val = (__force __typeof__(*(addr)))__gu_val;			\
+ }
+ 
+-#define __get_user_unknown(val, size, ptr, err) do {			\
++extern void __get_user_unknown(void);
++
++#define __get_user_8(val, ptr, err) do {				\
++	u64 __val = 0;							\
+ 	err = 0;							\
+-	if (__copy_from_user(&(val), ptr, size)) {			\
++	if (raw_copy_from_user(&(__val), ptr, sizeof(val))) {		\
+ 		err = -EFAULT;						\
++	} else {							\
++		val = (typeof(val))(typeof((val) - (val)))__val;	\
+ 	}								\
+ 	} while (0)
+ 
+@@ -119,8 +126,11 @@ do {									\
+ 	case 4:								\
+ 		__get_user_asm(val, "ldw", ptr, err);			\
+ 		break;							\
++	case 8:								\
++		__get_user_8(val, ptr, err);				\
++		break;							\
+ 	default:							\
+-		__get_user_unknown(val, size, ptr, err);		\
++		__get_user_unknown();					\
+ 		break;							\
+ 	}								\
+ } while (0)
+@@ -129,9 +139,7 @@ do {									\
+ 	({								\
+ 	long __gu_err = -EFAULT;					\
+ 	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+-	unsigned long __gu_val = 0;					\
+-	__get_user_common(__gu_val, sizeof(*(ptr)), __gu_ptr, __gu_err);\
+-	(x) = (__force __typeof__(x))__gu_val;				\
++	__get_user_common(x, sizeof(*(ptr)), __gu_ptr, __gu_err);	\
+ 	__gu_err;							\
+ 	})
+ 
+@@ -139,11 +147,9 @@ do {									\
+ ({									\
+ 	long __gu_err = -EFAULT;					\
+ 	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+-	unsigned long __gu_val = 0;					\
+ 	if (access_ok( __gu_ptr, sizeof(*__gu_ptr)))	\
+-		__get_user_common(__gu_val, sizeof(*__gu_ptr),		\
++		__get_user_common(x, sizeof(*__gu_ptr),			\
+ 			__gu_ptr, __gu_err);				\
+-	(x) = (__force __typeof__(x))__gu_val;				\
+ 	__gu_err;							\
+ })
+ 
+diff --git a/arch/nios2/kernel/signal.c b/arch/nios2/kernel/signal.c
+index 2009ae2d3c3bb..386e46443b605 100644
+--- a/arch/nios2/kernel/signal.c
++++ b/arch/nios2/kernel/signal.c
+@@ -36,10 +36,10 @@ struct rt_sigframe {
+ 
+ static inline int rt_restore_ucontext(struct pt_regs *regs,
+ 					struct switch_stack *sw,
+-					struct ucontext *uc, int *pr2)
++					struct ucontext __user *uc, int *pr2)
+ {
+ 	int temp;
+-	unsigned long *gregs = uc->uc_mcontext.gregs;
++	unsigned long __user *gregs = uc->uc_mcontext.gregs;
+ 	int err;
+ 
+ 	/* Always make any pending restarted system calls return -EINTR */
+@@ -102,10 +102,11 @@ asmlinkage int do_rt_sigreturn(struct switch_stack *sw)
+ {
+ 	struct pt_regs *regs = (struct pt_regs *)(sw + 1);
+ 	/* Verify, can we follow the stack back */
+-	struct rt_sigframe *frame = (struct rt_sigframe *) regs->sp;
++	struct rt_sigframe __user *frame;
+ 	sigset_t set;
+ 	int rval;
+ 
++	frame = (struct rt_sigframe __user *) regs->sp;
+ 	if (!access_ok(frame, sizeof(*frame)))
+ 		goto badframe;
+ 
+@@ -124,10 +125,10 @@ badframe:
+ 	return 0;
+ }
+ 
+-static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
++static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *regs)
+ {
+ 	struct switch_stack *sw = (struct switch_stack *)regs - 1;
+-	unsigned long *gregs = uc->uc_mcontext.gregs;
++	unsigned long __user *gregs = uc->uc_mcontext.gregs;
+ 	int err = 0;
+ 
+ 	err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
+@@ -162,8 +163,9 @@ static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
+ 	return err;
+ }
+ 
+-static inline void *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
+-				 size_t frame_size)
++static inline void __user *get_sigframe(struct ksignal *ksig,
++					struct pt_regs *regs,
++					size_t frame_size)
+ {
+ 	unsigned long usp;
+ 
+@@ -174,13 +176,13 @@ static inline void *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
+ 	usp = sigsp(usp, ksig);
+ 
+ 	/* Verify, is it 32 or 64 bit aligned */
+-	return (void *)((usp - frame_size) & -8UL);
++	return (void __user *)((usp - frame_size) & -8UL);
+ }
+ 
+ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 			  struct pt_regs *regs)
+ {
+-	struct rt_sigframe *frame;
++	struct rt_sigframe __user *frame;
+ 	int err = 0;
+ 
+ 	frame = get_sigframe(ksig, regs, sizeof(*frame));
+diff --git a/arch/parisc/include/asm/traps.h b/arch/parisc/include/asm/traps.h
+index 34619f010c631..0ccdb738a9a36 100644
+--- a/arch/parisc/include/asm/traps.h
++++ b/arch/parisc/include/asm/traps.h
+@@ -18,6 +18,7 @@ unsigned long parisc_acctyp(unsigned long code, unsigned int inst);
+ const char *trap_name(unsigned long code);
+ void do_page_fault(struct pt_regs *regs, unsigned long code,
+ 		unsigned long address);
++int handle_nadtlb_fault(struct pt_regs *regs);
+ #endif
+ 
+ #endif
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 94150b91c96fb..bce71cefe5724 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -558,15 +558,6 @@ static void flush_cache_pages(struct vm_area_struct *vma, struct mm_struct *mm,
+ 	}
+ }
+ 
+-static void flush_user_cache_tlb(struct vm_area_struct *vma,
+-				 unsigned long start, unsigned long end)
+-{
+-	flush_user_dcache_range_asm(start, end);
+-	if (vma->vm_flags & VM_EXEC)
+-		flush_user_icache_range_asm(start, end);
+-	flush_tlb_range(vma, start, end);
+-}
+-
+ void flush_cache_mm(struct mm_struct *mm)
+ {
+ 	struct vm_area_struct *vma;
+@@ -581,17 +572,8 @@ void flush_cache_mm(struct mm_struct *mm)
+ 		return;
+ 	}
+ 
+-	preempt_disable();
+-	if (mm->context == mfsp(3)) {
+-		for (vma = mm->mmap; vma; vma = vma->vm_next)
+-			flush_user_cache_tlb(vma, vma->vm_start, vma->vm_end);
+-		preempt_enable();
+-		return;
+-	}
+-
+ 	for (vma = mm->mmap; vma; vma = vma->vm_next)
+ 		flush_cache_pages(vma, mm, vma->vm_start, vma->vm_end);
+-	preempt_enable();
+ }
+ 
+ void flush_cache_range(struct vm_area_struct *vma,
+@@ -605,15 +587,7 @@ void flush_cache_range(struct vm_area_struct *vma,
+ 		return;
+ 	}
+ 
+-	preempt_disable();
+-	if (vma->vm_mm->context == mfsp(3)) {
+-		flush_user_cache_tlb(vma, start, end);
+-		preempt_enable();
+-		return;
+-	}
+-
+-	flush_cache_pages(vma, vma->vm_mm, vma->vm_start, vma->vm_end);
+-	preempt_enable();
++	flush_cache_pages(vma, vma->vm_mm, start, end);
+ }
+ 
+ void
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index eb41fece19104..b56aab7141edb 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -662,6 +662,8 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
+ 			 by hand. Technically we need to emulate:
+ 			 fdc,fdce,pdc,"fic,4f",prober,probeir,probew, probeiw
+ 		*/
++		if (code == 17 && handle_nadtlb_fault(regs))
++			return;
+ 		fault_address = regs->ior;
+ 		fault_space = regs->isr;
+ 		break;
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index 4a6221b869fd2..22ffe11cec72b 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -424,3 +424,92 @@ no_context:
+ 		goto no_context;
+ 	pagefault_out_of_memory();
+ }
++
++/* Handle non-access data TLB miss faults.
++ *
++ * For probe instructions, accesses to userspace are considered allowed
++ * if they lie in a valid VMA and the access type matches. We are not
++ * allowed to handle MM faults here so there may be situations where an
++ * actual access would fail even though a probe was successful.
++ */
++int
++handle_nadtlb_fault(struct pt_regs *regs)
++{
++	unsigned long insn = regs->iir;
++	int breg, treg, xreg, val = 0;
++	struct vm_area_struct *vma, *prev_vma;
++	struct task_struct *tsk;
++	struct mm_struct *mm;
++	unsigned long address;
++	unsigned long acc_type;
++
++	switch (insn & 0x380) {
++	case 0x280:
++		/* FDC instruction */
++		fallthrough;
++	case 0x380:
++		/* PDC and FIC instructions */
++		if (printk_ratelimit()) {
++			pr_warn("BUG: nullifying cache flush/purge instruction\n");
++			show_regs(regs);
++		}
++		if (insn & 0x20) {
++			/* Base modification */
++			breg = (insn >> 21) & 0x1f;
++			xreg = (insn >> 16) & 0x1f;
++			if (breg && xreg)
++				regs->gr[breg] += regs->gr[xreg];
++		}
++		regs->gr[0] |= PSW_N;
++		return 1;
++
++	case 0x180:
++		/* PROBE instruction */
++		treg = insn & 0x1f;
++		if (regs->isr) {
++			tsk = current;
++			mm = tsk->mm;
++			if (mm) {
++				/* Search for VMA */
++				address = regs->ior;
++				mmap_read_lock(mm);
++				vma = find_vma_prev(mm, address, &prev_vma);
++				mmap_read_unlock(mm);
++
++				/*
++				 * Check if access to the VMA is okay.
++				 * We don't allow for stack expansion.
++				 */
++				acc_type = (insn & 0x40) ? VM_WRITE : VM_READ;
++				if (vma
++				    && address >= vma->vm_start
++				    && (vma->vm_flags & acc_type) == acc_type)
++					val = 1;
++			}
++		}
++		if (treg)
++			regs->gr[treg] = val;
++		regs->gr[0] |= PSW_N;
++		return 1;
++
++	case 0x300:
++		/* LPA instruction */
++		if (insn & 0x20) {
++			/* Base modification */
++			breg = (insn >> 21) & 0x1f;
++			xreg = (insn >> 16) & 0x1f;
++			if (breg && xreg)
++				regs->gr[breg] += regs->gr[xreg];
++		}
++		treg = insn & 0x1f;
++		if (treg)
++			regs->gr[treg] = 0;
++		regs->gr[0] |= PSW_N;
++		return 1;
++
++	default:
++		break;
++	}
++
++	return 0;
++}
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index e02568f173341..3d2e74ebc0142 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -171,7 +171,7 @@ else
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,$(call cc-option,-mtune=power5))
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mcpu=power5,-mcpu=power4)
+ endif
+-else
++else ifdef CONFIG_PPC_BOOK3E_64
+ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
+ endif
+ 
+diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+new file mode 100644
+index 0000000000000..73f8c998c64df
+--- /dev/null
++++ b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+@@ -0,0 +1,30 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * T1040RDB-REV-A Device Tree Source
++ *
++ * Copyright 2014 - 2015 Freescale Semiconductor Inc.
++ *
++ */
++
++#include "t1040rdb.dts"
++
++/ {
++	model = "fsl,T1040RDB-REV-A";
++	compatible = "fsl,T1040RDB-REV-A";
++};
++
++&seville_port0 {
++	label = "ETH5";
++};
++
++&seville_port2 {
++	label = "ETH7";
++};
++
++&seville_port4 {
++	label = "ETH9";
++};
++
++&seville_port6 {
++	label = "ETH11";
++};
+diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb.dts b/arch/powerpc/boot/dts/fsl/t1040rdb.dts
+index af0c8a6f56138..b6733e7e65805 100644
+--- a/arch/powerpc/boot/dts/fsl/t1040rdb.dts
++++ b/arch/powerpc/boot/dts/fsl/t1040rdb.dts
+@@ -119,7 +119,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_0>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH5";
++	label = "ETH3";
+ 	status = "okay";
+ };
+ 
+@@ -135,7 +135,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_2>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH7";
++	label = "ETH5";
+ 	status = "okay";
+ };
+ 
+@@ -151,7 +151,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_4>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH9";
++	label = "ETH7";
+ 	status = "okay";
+ };
+ 
+@@ -167,7 +167,7 @@
+ 	managed = "in-band-status";
+ 	phy-handle = <&phy_qsgmii_6>;
+ 	phy-mode = "qsgmii";
+-	label = "ETH11";
++	label = "ETH9";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
+index beba4979bff93..fee979d3a1aa4 100644
+--- a/arch/powerpc/include/asm/io.h
++++ b/arch/powerpc/include/asm/io.h
+@@ -359,25 +359,37 @@ static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
+  */
+ static inline void __raw_rm_writeb(u8 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("stbcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      stbcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+ static inline void __raw_rm_writew(u16 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("sthcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      sthcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+ static inline void __raw_rm_writel(u32 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("stwcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      stwcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+ static inline void __raw_rm_writeq(u64 val, volatile void __iomem *paddr)
+ {
+-	__asm__ __volatile__("stdcix %0,0,%1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      stdcix %0,0,%1;  \
++			      .machine pop;"
+ 		: : "r" (val), "r" (paddr) : "memory");
+ }
+ 
+@@ -389,7 +401,10 @@ static inline void __raw_rm_writeq_be(u64 val, volatile void __iomem *paddr)
+ static inline u8 __raw_rm_readb(volatile void __iomem *paddr)
+ {
+ 	u8 ret;
+-	__asm__ __volatile__("lbzcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      lbzcix %0,0, %1; \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+@@ -397,7 +412,10 @@ static inline u8 __raw_rm_readb(volatile void __iomem *paddr)
+ static inline u16 __raw_rm_readw(volatile void __iomem *paddr)
+ {
+ 	u16 ret;
+-	__asm__ __volatile__("lhzcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      lhzcix %0,0, %1; \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+@@ -405,7 +423,10 @@ static inline u16 __raw_rm_readw(volatile void __iomem *paddr)
+ static inline u32 __raw_rm_readl(volatile void __iomem *paddr)
+ {
+ 	u32 ret;
+-	__asm__ __volatile__("lwzcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      lwzcix %0,0, %1; \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+@@ -413,7 +434,10 @@ static inline u32 __raw_rm_readl(volatile void __iomem *paddr)
+ static inline u64 __raw_rm_readq(volatile void __iomem *paddr)
+ {
+ 	u64 ret;
+-	__asm__ __volatile__("ldcix %0,0, %1"
++	__asm__ __volatile__(".machine push;   \
++			      .machine power6; \
++			      ldcix %0,0, %1;  \
++			      .machine pop;"
+ 			     : "=r" (ret) : "r" (paddr) : "memory");
+ 	return ret;
+ }
+diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
+index b040094f79202..7ebc807aa8cc8 100644
+--- a/arch/powerpc/include/asm/set_memory.h
++++ b/arch/powerpc/include/asm/set_memory.h
+@@ -6,6 +6,8 @@
+ #define SET_MEMORY_RW	1
+ #define SET_MEMORY_NX	2
+ #define SET_MEMORY_X	3
++#define SET_MEMORY_NP	4	/* Set memory non present */
++#define SET_MEMORY_P	5	/* Set memory present */
+ 
+ int change_memory_attr(unsigned long addr, int numpages, long action);
+ 
+@@ -29,6 +31,14 @@ static inline int set_memory_x(unsigned long addr, int numpages)
+ 	return change_memory_attr(addr, numpages, SET_MEMORY_X);
+ }
+ 
+-int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot);
++static inline int set_memory_np(unsigned long addr, int numpages)
++{
++	return change_memory_attr(addr, numpages, SET_MEMORY_NP);
++}
++
++static inline int set_memory_p(unsigned long addr, int numpages)
++{
++	return change_memory_attr(addr, numpages, SET_MEMORY_P);
++}
+ 
+ #endif
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 63316100080c1..4a35423f766db 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -125,8 +125,11 @@ do {								\
+  */
+ #define __get_user_atomic_128_aligned(kaddr, uaddr, err)		\
+ 	__asm__ __volatile__(				\
++		".machine push\n"			\
++		".machine altivec\n"			\
+ 		"1:	lvx  0,0,%1	# get user\n"	\
+ 		" 	stvx 0,0,%2	# put kernel\n"	\
++		".machine pop\n"			\
+ 		"2:\n"					\
+ 		".section .fixup,\"ax\"\n"		\
+ 		"3:	li %0,%3\n"			\
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index a2fd1db29f7e8..7fa6857116690 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -6101,8 +6101,11 @@ static int kvmppc_book3s_init_hv(void)
+ 	if (r)
+ 		return r;
+ 
+-	if (kvmppc_radix_possible())
++	if (kvmppc_radix_possible()) {
+ 		r = kvmppc_radix_init();
++		if (r)
++			return r;
++	}
+ 
+ 	r = kvmppc_uvmem_init();
+ 	if (r < 0)
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index a72920f4f221f..8d91a50a84a88 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -1507,7 +1507,7 @@ int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
+ {
+ 	enum emulation_result emulated = EMULATE_DONE;
+ 
+-	if (vcpu->arch.mmio_vsx_copy_nums > 2)
++	if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ 		return EMULATE_FAIL;
+ 
+ 	while (vcpu->arch.mmio_vmx_copy_nums) {
+@@ -1604,7 +1604,7 @@ int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu,
+ 	unsigned int index = rs & KVM_MMIO_REG_MASK;
+ 	enum emulation_result emulated = EMULATE_DONE;
+ 
+-	if (vcpu->arch.mmio_vsx_copy_nums > 2)
++	if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ 		return EMULATE_FAIL;
+ 
+ 	vcpu->arch.io_gpr = rs;
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index b042fcae39137..6417893bae5df 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -112,9 +112,9 @@ static nokprobe_inline long address_ok(struct pt_regs *regs,
+ {
+ 	if (!user_mode(regs))
+ 		return 1;
+-	if (__access_ok(ea, nb))
++	if (access_ok((void __user *)ea, nb))
+ 		return 1;
+-	if (__access_ok(ea, 1))
++	if (access_ok((void __user *)ea, 1))
+ 		/* Access overlaps the end of the user region */
+ 		regs->dar = TASK_SIZE_MAX - 1;
+ 	else
+@@ -1097,7 +1097,10 @@ NOKPROBE_SYMBOL(emulate_dcbz);
+ 
+ #define __put_user_asmx(x, addr, err, op, cr)		\
+ 	__asm__ __volatile__(				\
++		".machine push\n"			\
++		".machine power8\n"			\
+ 		"1:	" op " %2,0,%3\n"		\
++		".machine pop\n"			\
+ 		"	mfcr	%1\n"			\
+ 		"2:\n"					\
+ 		".section .fixup,\"ax\"\n"		\
+@@ -1110,7 +1113,10 @@ NOKPROBE_SYMBOL(emulate_dcbz);
+ 
+ #define __get_user_asmx(x, addr, err, op)		\
+ 	__asm__ __volatile__(				\
++		".machine push\n"			\
++		".machine power8\n"			\
+ 		"1:	"op" %1,0,%2\n"			\
++		".machine pop\n"			\
+ 		"2:\n"					\
+ 		".section .fixup,\"ax\"\n"		\
+ 		"3:	li	%0,%3\n"		\
+@@ -3389,7 +3395,7 @@ int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op)
+ 			__put_user_asmx(op->val, ea, err, "stbcx.", cr);
+ 			break;
+ 		case 2:
+-			__put_user_asmx(op->val, ea, err, "stbcx.", cr);
++			__put_user_asmx(op->val, ea, err, "sthcx.", cr);
+ 			break;
+ #endif
+ 		case 4:
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index a8d0ce85d39ad..4a15172dfef29 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -568,18 +568,24 @@ NOKPROBE_SYMBOL(hash__do_page_fault);
+ static void __bad_page_fault(struct pt_regs *regs, int sig)
+ {
+ 	int is_write = page_fault_is_write(regs->dsisr);
++	const char *msg;
+ 
+ 	/* kernel has accessed a bad area */
+ 
++	if (regs->dar < PAGE_SIZE)
++		msg = "Kernel NULL pointer dereference";
++	else
++		msg = "Unable to handle kernel data access";
++
+ 	switch (TRAP(regs)) {
+ 	case INTERRUPT_DATA_STORAGE:
+-	case INTERRUPT_DATA_SEGMENT:
+ 	case INTERRUPT_H_DATA_STORAGE:
+-		pr_alert("BUG: %s on %s at 0x%08lx\n",
+-			 regs->dar < PAGE_SIZE ? "Kernel NULL pointer dereference" :
+-			 "Unable to handle kernel data access",
++		pr_alert("BUG: %s on %s at 0x%08lx\n", msg,
+ 			 is_write ? "write" : "read", regs->dar);
+ 		break;
++	case INTERRUPT_DATA_SEGMENT:
++		pr_alert("BUG: %s at 0x%08lx\n", msg, regs->dar);
++		break;
+ 	case INTERRUPT_INST_STORAGE:
+ 	case INTERRUPT_INST_SEGMENT:
+ 		pr_alert("BUG: Unable to handle kernel instruction fetch%s",
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index cf8770b1a692e..f3e4d069e0ba7 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -83,13 +83,12 @@ void __init
+ kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t pte)
+ {
+ 	unsigned long k_cur;
+-	phys_addr_t pa = __pa(kasan_early_shadow_page);
+ 
+ 	for (k_cur = k_start; k_cur != k_end; k_cur += PAGE_SIZE) {
+ 		pmd_t *pmd = pmd_off_k(k_cur);
+ 		pte_t *ptep = pte_offset_kernel(pmd, k_cur);
+ 
+-		if ((pte_val(*ptep) & PTE_RPN_MASK) != pa)
++		if (pte_page(*ptep) != virt_to_page(lm_alias(kasan_early_shadow_page)))
+ 			continue;
+ 
+ 		__set_pte_at(&init_mm, k_cur, ptep, pte, 0);
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 59d3cfcd78879..5fb829256b59d 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -956,7 +956,9 @@ static int __init parse_numa_properties(void)
+ 			of_node_put(cpu);
+ 		}
+ 
+-		node_set_online(nid);
++		/* node_set_online() is an UB if 'nid' is negative */
++		if (likely(nid >= 0))
++			node_set_online(nid);
+ 	}
+ 
+ 	get_n_mem_cells(&n_mem_addr_cells, &n_mem_size_cells);
+diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
+index edea388e9d3fb..3bb9d168e3b31 100644
+--- a/arch/powerpc/mm/pageattr.c
++++ b/arch/powerpc/mm/pageattr.c
+@@ -48,6 +48,12 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+ 	case SET_MEMORY_X:
+ 		pte = pte_mkexec(pte);
+ 		break;
++	case SET_MEMORY_NP:
++		pte_update(&init_mm, addr, ptep, _PAGE_PRESENT, 0, 0);
++		break;
++	case SET_MEMORY_P:
++		pte_update(&init_mm, addr, ptep, 0, _PAGE_PRESENT, 0);
++		break;
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 		break;
+@@ -96,36 +102,3 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
+ 	return apply_to_existing_page_range(&init_mm, start, size,
+ 					    change_page_attr, (void *)action);
+ }
+-
+-/*
+- * Set the attributes of a page:
+- *
+- * This function is used by PPC32 at the end of init to set final kernel memory
+- * protection. It includes changing the maping of the page it is executing from
+- * and data pages it is using.
+- */
+-static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
+-{
+-	pgprot_t prot = __pgprot((unsigned long)data);
+-
+-	spin_lock(&init_mm.page_table_lock);
+-
+-	set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
+-	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+-
+-	spin_unlock(&init_mm.page_table_lock);
+-
+-	return 0;
+-}
+-
+-int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
+-{
+-	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+-	unsigned long sz = numpages * PAGE_SIZE;
+-
+-	if (numpages <= 0)
+-		return 0;
+-
+-	return apply_to_existing_page_range(&init_mm, start, sz, set_page_attr,
+-					    (void *)pgprot_val(prot));
+-}
+diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
+index 906e4e4328b2e..f71ededdc02a5 100644
+--- a/arch/powerpc/mm/pgtable_32.c
++++ b/arch/powerpc/mm/pgtable_32.c
+@@ -135,10 +135,12 @@ void mark_initmem_nx(void)
+ 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+ 				 PFN_DOWN((unsigned long)_sinittext);
+ 
+-	if (v_block_mapped((unsigned long)_sinittext))
++	if (v_block_mapped((unsigned long)_sinittext)) {
+ 		mmu_mark_initmem_nx();
+-	else
+-		set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
++	} else {
++		set_memory_nx((unsigned long)_sinittext, numpages);
++		set_memory_rw((unsigned long)_sinittext, numpages);
++	}
+ }
+ 
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+@@ -152,18 +154,14 @@ void mark_rodata_ro(void)
+ 		return;
+ 	}
+ 
+-	numpages = PFN_UP((unsigned long)_etext) -
+-		   PFN_DOWN((unsigned long)_stext);
+-
+-	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
+ 	/*
+-	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
+-	 * to cover NOTES and EXCEPTION_TABLE.
++	 * mark .text and .rodata as read only. Use __init_begin rather than
++	 * __end_rodata to cover NOTES and EXCEPTION_TABLE.
+ 	 */
+ 	numpages = PFN_UP((unsigned long)__init_begin) -
+-		   PFN_DOWN((unsigned long)__start_rodata);
++		   PFN_DOWN((unsigned long)_stext);
+ 
+-	set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
++	set_memory_ro((unsigned long)_stext, numpages);
+ 
+ 	// mark_initmem_nx() should have already run by now
+ 	ptdump_check_wx();
+@@ -179,8 +177,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
+ 		return;
+ 
+ 	if (enable)
+-		set_memory_attr(addr, numpages, PAGE_KERNEL);
++		set_memory_p(addr, numpages);
+ 	else
+-		set_memory_attr(addr, numpages, __pgprot(0));
++		set_memory_np(addr, numpages);
+ }
+ #endif /* CONFIG_DEBUG_PAGEALLOC */
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index e106909ff9c37..e7583fbcc8fa1 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -1457,7 +1457,11 @@ static int trace_imc_event_init(struct perf_event *event)
+ 
+ 	event->hw.idx = -1;
+ 
+-	event->pmu->task_ctx_nr = perf_hw_context;
++	/*
++	 * There can only be a single PMU for perf_hw_context events which is assigned to
++	 * core PMU. Hence use "perf_sw_context" for trace_imc.
++	 */
++	event->pmu->task_ctx_nr = perf_sw_context;
+ 	event->destroy = reset_global_refc;
+ 	return 0;
+ }
+diff --git a/arch/powerpc/platforms/8xx/pic.c b/arch/powerpc/platforms/8xx/pic.c
+index f2ba837249d69..04a6abf14c295 100644
+--- a/arch/powerpc/platforms/8xx/pic.c
++++ b/arch/powerpc/platforms/8xx/pic.c
+@@ -153,6 +153,7 @@ int __init mpc8xx_pic_init(void)
+ 	if (mpc8xx_pic_host == NULL) {
+ 		printk(KERN_ERR "MPC8xx PIC: failed to allocate irq host!\n");
+ 		ret = -ENOMEM;
++		goto out;
+ 	}
+ 
+ 	ret = 0;
+diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c
+index 72c25295c1c2b..69c344c8884f3 100644
+--- a/arch/powerpc/platforms/powernv/rng.c
++++ b/arch/powerpc/platforms/powernv/rng.c
+@@ -43,7 +43,11 @@ static unsigned long rng_whiten(struct powernv_rng *rng, unsigned long val)
+ 	unsigned long parity;
+ 
+ 	/* Calculate the parity of the value */
+-	asm ("popcntd %0,%1" : "=r" (parity) : "r" (val));
++	asm (".machine push;   \
++	      .machine power7; \
++	      popcntd %0,%1;   \
++	      .machine pop;"
++	     : "=r" (parity) : "r" (val));
+ 
+ 	/* xor our value with the previous mask */
+ 	val ^= rng->mask;
+diff --git a/arch/powerpc/platforms/pseries/pci_dlpar.c b/arch/powerpc/platforms/pseries/pci_dlpar.c
+index 90c9d3531694b..4ba8245681192 100644
+--- a/arch/powerpc/platforms/pseries/pci_dlpar.c
++++ b/arch/powerpc/platforms/pseries/pci_dlpar.c
+@@ -78,6 +78,9 @@ int remove_phb_dynamic(struct pci_controller *phb)
+ 
+ 	pseries_msi_free_domains(phb);
+ 
++	/* Keep a reference so phb isn't freed yet */
++	get_device(&host_bridge->dev);
++
+ 	/* Remove the PCI bus and unregister the bridge device from sysfs */
+ 	phb->bus = NULL;
+ 	pci_remove_bus(b);
+@@ -101,6 +104,7 @@ int remove_phb_dynamic(struct pci_controller *phb)
+ 	 * the pcibios_free_controller_deferred() callback;
+ 	 * see pseries_root_bridge_prepare().
+ 	 */
++	put_device(&host_bridge->dev);
+ 
+ 	return 0;
+ }
+diff --git a/arch/powerpc/sysdev/fsl_gtm.c b/arch/powerpc/sysdev/fsl_gtm.c
+index 8963eaffb1b7b..39186ad6b3c3a 100644
+--- a/arch/powerpc/sysdev/fsl_gtm.c
++++ b/arch/powerpc/sysdev/fsl_gtm.c
+@@ -86,7 +86,7 @@ static LIST_HEAD(gtms);
+  */
+ struct gtm_timer *gtm_get_timer16(void)
+ {
+-	struct gtm *gtm = NULL;
++	struct gtm *gtm;
+ 	int i;
+ 
+ 	list_for_each_entry(gtm, &gtms, list_node) {
+@@ -103,7 +103,7 @@ struct gtm_timer *gtm_get_timer16(void)
+ 		spin_unlock_irq(&gtm->lock);
+ 	}
+ 
+-	if (gtm)
++	if (!list_empty(&gtms))
+ 		return ERR_PTR(-EBUSY);
+ 	return ERR_PTR(-ENODEV);
+ }
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts b/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts
+index 0bcaf35045e79..82e7f8069ae77 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts
+@@ -203,6 +203,8 @@
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <50000000>;
++		spi-tx-bus-width = <4>;
++		spi-rx-bus-width = <4>;
+ 		m25p,fast-read;
+ 		broken-flash-reset;
+ 	};
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts b/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts
+index ac8a03f5867ad..8d335233853a7 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts
+@@ -205,6 +205,8 @@
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <50000000>;
++		spi-tx-bus-width = <4>;
++		spi-rx-bus-width = <4>;
+ 		m25p,fast-read;
+ 		broken-flash-reset;
+ 	};
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts b/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts
+index 623998194bc18..6703cfc055887 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts
+@@ -213,6 +213,8 @@
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <50000000>;
++		spi-tx-bus-width = <4>;
++		spi-rx-bus-width = <4>;
+ 		m25p,fast-read;
+ 		broken-flash-reset;
+ 	};
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts b/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts
+index cf605ba0d67e4..ac0b56f7d2c9f 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts
+@@ -178,6 +178,8 @@
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <50000000>;
++		spi-tx-bus-width = <4>;
++		spi-rx-bus-width = <4>;
+ 		m25p,fast-read;
+ 		broken-flash-reset;
+ 	};
+diff --git a/arch/riscv/include/asm/module.lds.h b/arch/riscv/include/asm/module.lds.h
+index 4254ff2ff0494..1075beae1ac64 100644
+--- a/arch/riscv/include/asm/module.lds.h
++++ b/arch/riscv/include/asm/module.lds.h
+@@ -2,8 +2,8 @@
+ /* Copyright (C) 2017 Andes Technology Corporation */
+ #ifdef CONFIG_MODULE_SECTIONS
+ SECTIONS {
+-	.plt (NOLOAD) : { BYTE(0) }
+-	.got (NOLOAD) : { BYTE(0) }
+-	.got.plt (NOLOAD) : { BYTE(0) }
++	.plt : { BYTE(0) }
++	.got : { BYTE(0) }
++	.got.plt : { BYTE(0) }
+ }
+ #endif
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index 60da0dcacf145..74d888c8d631a 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -11,11 +11,17 @@
+ #include <asm/page.h>
+ #include <linux/const.h>
+ 
++#ifdef CONFIG_KASAN
++#define KASAN_STACK_ORDER 1
++#else
++#define KASAN_STACK_ORDER 0
++#endif
++
+ /* thread information allocation */
+ #ifdef CONFIG_64BIT
+-#define THREAD_SIZE_ORDER	(2)
++#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
+ #else
+-#define THREAD_SIZE_ORDER	(1)
++#define THREAD_SIZE_ORDER	(1 + KASAN_STACK_ORDER)
+ #endif
+ #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+ 
+diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
+index 8ecfc4c128bc5..357f985041cb9 100644
+--- a/arch/riscv/kernel/perf_callchain.c
++++ b/arch/riscv/kernel/perf_callchain.c
+@@ -15,8 +15,8 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ {
+ 	struct stackframe buftail;
+ 	unsigned long ra = 0;
+-	unsigned long *user_frame_tail =
+-			(unsigned long *)(fp - sizeof(struct stackframe));
++	unsigned long __user *user_frame_tail =
++		(unsigned long __user *)(fp - sizeof(struct stackframe));
+ 
+ 	/* Check accessibility of one struct frame_tail beyond */
+ 	if (!access_ok(user_frame_tail, sizeof(buftail)))
+@@ -73,7 +73,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 
+ static bool fill_callchain(void *entry, unsigned long pc)
+ {
+-	return perf_callchain_store(entry, pc);
++	return perf_callchain_store(entry, pc) == 0;
+ }
+ 
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+diff --git a/arch/sparc/kernel/signal_32.c b/arch/sparc/kernel/signal_32.c
+index ffab16369beac..74f80443b195f 100644
+--- a/arch/sparc/kernel/signal_32.c
++++ b/arch/sparc/kernel/signal_32.c
+@@ -65,7 +65,7 @@ struct rt_signal_frame {
+  */
+ static inline bool invalid_frame_pointer(void __user *fp, int fplen)
+ {
+-	if ((((unsigned long) fp) & 15) || !__access_ok((unsigned long)fp, fplen))
++	if ((((unsigned long) fp) & 15) || !access_ok(fp, fplen))
+ 		return true;
+ 
+ 	return false;
+diff --git a/arch/um/drivers/mconsole_kern.c b/arch/um/drivers/mconsole_kern.c
+index 6ead1e2404576..8ca67a6926830 100644
+--- a/arch/um/drivers/mconsole_kern.c
++++ b/arch/um/drivers/mconsole_kern.c
+@@ -224,7 +224,7 @@ void mconsole_go(struct mc_request *req)
+ 
+ void mconsole_stop(struct mc_request *req)
+ {
+-	deactivate_fd(req->originating_fd, MCONSOLE_IRQ);
++	block_signals();
+ 	os_set_fd_block(req->originating_fd, 1);
+ 	mconsole_reply(req, "stopped", 0, 0);
+ 	for (;;) {
+@@ -247,6 +247,7 @@ void mconsole_stop(struct mc_request *req)
+ 	}
+ 	os_set_fd_block(req->originating_fd, 0);
+ 	mconsole_reply(req, "", 0, 0);
++	unblock_signals();
+ }
+ 
+ static DEFINE_SPINLOCK(mc_devices_lock);
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 2d33bba9a1440..215aed65e9782 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -472,7 +472,7 @@ static u64 pt_config_filters(struct perf_event *event)
+ 			pt->filters.filter[range].msr_b = filter->msr_b;
+ 		}
+ 
+-		rtit_ctl |= filter->config << pt_address_ranges[range].reg_off;
++		rtit_ctl |= (u64)filter->config << pt_address_ranges[range].reg_off;
+ 	}
+ 
+ 	return rtit_ctl;
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index ff3db164e52cb..a9bc113c512a0 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -515,7 +515,7 @@ static void __send_ipi_mask(const struct cpumask *mask, int vector)
+ 		} else if (apic_id < min && max - apic_id < KVM_IPI_CLUSTER_SIZE) {
+ 			ipi_bitmap <<= min - apic_id;
+ 			min = apic_id;
+-		} else if (apic_id < min + KVM_IPI_CLUSTER_SIZE) {
++		} else if (apic_id > min && apic_id < min + KVM_IPI_CLUSTER_SIZE) {
+ 			max = apic_id < max ? max : apic_id;
+ 		} else {
+ 			ret = kvm_hypercall4(KVM_HC_SEND_IPI, (unsigned long)ipi_bitmap,
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 28b1a4e57827e..5705446c12134 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -1614,11 +1614,6 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 		goto exception;
+ 	}
+ 
+-	if (!seg_desc.p) {
+-		err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
+-		goto exception;
+-	}
+-
+ 	dpl = seg_desc.dpl;
+ 
+ 	switch (seg) {
+@@ -1658,6 +1653,10 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 	case VCPU_SREG_TR:
+ 		if (seg_desc.s || (seg_desc.type != 1 && seg_desc.type != 9))
+ 			goto exception;
++		if (!seg_desc.p) {
++			err_vec = NP_VECTOR;
++			goto exception;
++		}
+ 		old_desc = seg_desc;
+ 		seg_desc.type |= 2; /* busy */
+ 		ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
+@@ -1682,6 +1681,11 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ 		break;
+ 	}
+ 
++	if (!seg_desc.p) {
++		err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
++		goto exception;
++	}
++
+ 	if (seg_desc.s) {
+ 		/* mark segment as accessed */
+ 		if (!(seg_desc.type & 1)) {
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 8d8c1cc7cb539..31116e277b428 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -236,7 +236,7 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	struct kvm_vcpu *vcpu = hv_synic_to_vcpu(synic);
+ 	int ret;
+ 
+-	if (!synic->active && !host)
++	if (!synic->active && (!host || data))
+ 		return 1;
+ 
+ 	trace_kvm_hv_synic_set_msr(vcpu->vcpu_id, msr, data, host);
+@@ -282,6 +282,9 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	case HV_X64_MSR_EOM: {
+ 		int i;
+ 
++		if (!synic->active)
++			break;
++
+ 		for (i = 0; i < ARRAY_SIZE(synic->sint); i++)
+ 			kvm_hv_notify_acked_sint(vcpu, i);
+ 		break;
+@@ -446,6 +449,9 @@ static int synic_set_irq(struct kvm_vcpu_hv_synic *synic, u32 sint)
+ 	struct kvm_lapic_irq irq;
+ 	int ret, vector;
+ 
++	if (KVM_BUG_ON(!lapic_in_kernel(vcpu), vcpu->kvm))
++		return -EINVAL;
++
+ 	if (sint >= ARRAY_SIZE(synic->sint))
+ 		return -EINVAL;
+ 
+@@ -658,7 +664,7 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
+ 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ 	struct kvm_vcpu_hv_synic *synic = to_hv_synic(vcpu);
+ 
+-	if (!synic->active && !host)
++	if (!synic->active && (!host || config))
+ 		return 1;
+ 
+ 	if (unlikely(!host && hv_vcpu->enforce_cpuid && new_config.direct_mode &&
+@@ -687,7 +693,7 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
+ 	struct kvm_vcpu *vcpu = hv_stimer_to_vcpu(stimer);
+ 	struct kvm_vcpu_hv_synic *synic = to_hv_synic(vcpu);
+ 
+-	if (!synic->active && !host)
++	if (!synic->active && (!host || count))
+ 		return 1;
+ 
+ 	trace_kvm_hv_stimer_set_count(hv_stimer_to_vcpu(stimer)->vcpu_id,
+@@ -1749,7 +1755,7 @@ struct kvm_hv_hcall {
+ 	sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS];
+ };
+ 
+-static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
++static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
+ {
+ 	int i;
+ 	gpa_t gpa;
+@@ -1764,7 +1770,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ 	int sparse_banks_len;
+ 	bool all_cpus;
+ 
+-	if (!ex) {
++	if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST ||
++	    hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE) {
+ 		if (hc->fast) {
+ 			flush.address_space = hc->ingpa;
+ 			flush.flags = hc->outgpa;
+@@ -1818,7 +1825,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ 
+ 		if (!all_cpus) {
+ 			if (hc->fast) {
+-				if (sparse_banks_len > HV_HYPERCALL_MAX_XMM_REGISTERS - 1)
++				/* XMM0 is already consumed, each XMM holds two sparse banks. */
++				if (sparse_banks_len > 2 * (HV_HYPERCALL_MAX_XMM_REGISTERS - 1))
+ 					return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ 				for (i = 0; i < sparse_banks_len; i += 2) {
+ 					sparse_banks[i] = sse128_lo(hc->xmm[i / 2 + 1]);
+@@ -1874,7 +1882,7 @@ static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
+ 	}
+ }
+ 
+-static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
++static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
+ {
+ 	struct kvm *kvm = vcpu->kvm;
+ 	struct hv_send_ipi_ex send_ipi_ex;
+@@ -1887,8 +1895,9 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ 	int sparse_banks_len;
+ 	u32 vector;
+ 	bool all_cpus;
++	int i;
+ 
+-	if (!ex) {
++	if (hc->code == HVCALL_SEND_IPI) {
+ 		if (!hc->fast) {
+ 			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,
+ 						    sizeof(send_ipi))))
+@@ -1907,9 +1916,15 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ 
+ 		trace_kvm_hv_send_ipi(vector, sparse_banks[0]);
+ 	} else {
+-		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi_ex,
+-					    sizeof(send_ipi_ex))))
+-			return HV_STATUS_INVALID_HYPERCALL_INPUT;
++		if (!hc->fast) {
++			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi_ex,
++						    sizeof(send_ipi_ex))))
++				return HV_STATUS_INVALID_HYPERCALL_INPUT;
++		} else {
++			send_ipi_ex.vector = (u32)hc->ingpa;
++			send_ipi_ex.vp_set.format = hc->outgpa;
++			send_ipi_ex.vp_set.valid_bank_mask = sse128_lo(hc->xmm[0]);
++		}
+ 
+ 		trace_kvm_hv_send_ipi_ex(send_ipi_ex.vector,
+ 					 send_ipi_ex.vp_set.format,
+@@ -1917,8 +1932,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ 
+ 		vector = send_ipi_ex.vector;
+ 		valid_bank_mask = send_ipi_ex.vp_set.valid_bank_mask;
+-		sparse_banks_len = bitmap_weight(&valid_bank_mask, 64) *
+-			sizeof(sparse_banks[0]);
++		sparse_banks_len = bitmap_weight(&valid_bank_mask, 64);
+ 
+ 		all_cpus = send_ipi_ex.vp_set.format == HV_GENERIC_SET_ALL;
+ 
+@@ -1928,12 +1942,27 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ 		if (!sparse_banks_len)
+ 			goto ret_success;
+ 
+-		if (kvm_read_guest(kvm,
+-				   hc->ingpa + offsetof(struct hv_send_ipi_ex,
+-							vp_set.bank_contents),
+-				   sparse_banks,
+-				   sparse_banks_len))
+-			return HV_STATUS_INVALID_HYPERCALL_INPUT;
++		if (!hc->fast) {
++			if (kvm_read_guest(kvm,
++					   hc->ingpa + offsetof(struct hv_send_ipi_ex,
++								vp_set.bank_contents),
++					   sparse_banks,
++					   sparse_banks_len * sizeof(sparse_banks[0])))
++				return HV_STATUS_INVALID_HYPERCALL_INPUT;
++		} else {
++			/*
++			 * The lower half of XMM0 is already consumed, each XMM holds
++			 * two sparse banks.
++			 */
++			if (sparse_banks_len > (2 * HV_HYPERCALL_MAX_XMM_REGISTERS - 1))
++				return HV_STATUS_INVALID_HYPERCALL_INPUT;
++			for (i = 0; i < sparse_banks_len; i++) {
++				if (i % 2)
++					sparse_banks[i] = sse128_lo(hc->xmm[(i + 1) / 2]);
++				else
++					sparse_banks[i] = sse128_hi(hc->xmm[i / 2]);
++			}
++		}
+ 	}
+ 
+ check_and_send_ipi:
+@@ -2095,6 +2124,7 @@ static bool is_xmm_fast_hypercall(struct kvm_hv_hcall *hc)
+ 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
+ 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+ 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
++	case HVCALL_SEND_IPI_EX:
+ 		return true;
+ 	}
+ 
+@@ -2246,46 +2276,28 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
+ 				kvm_hv_hypercall_complete_userspace;
+ 		return 0;
+ 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+-		if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
+-			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+-			break;
+-		}
+-		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
+-		break;
+-	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
+-		if (unlikely(hc.rep)) {
+-			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+-			break;
+-		}
+-		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
+-		break;
+ 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+ 		if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
+ 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+ 			break;
+ 		}
+-		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
++		ret = kvm_hv_flush_tlb(vcpu, &hc);
+ 		break;
++	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
+ 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+ 		if (unlikely(hc.rep)) {
+ 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+ 			break;
+ 		}
+-		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
++		ret = kvm_hv_flush_tlb(vcpu, &hc);
+ 		break;
+ 	case HVCALL_SEND_IPI:
+-		if (unlikely(hc.rep)) {
+-			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+-			break;
+-		}
+-		ret = kvm_hv_send_ipi(vcpu, &hc, false);
+-		break;
+ 	case HVCALL_SEND_IPI_EX:
+-		if (unlikely(hc.fast || hc.rep)) {
++		if (unlikely(hc.rep)) {
+ 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+ 			break;
+ 		}
+-		ret = kvm_hv_send_ipi(vcpu, &hc, true);
++		ret = kvm_hv_send_ipi(vcpu, &hc);
+ 		break;
+ 	case HVCALL_POST_DEBUG_DATA:
+ 	case HVCALL_RETRIEVE_DEBUG_DATA:
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index e8e383fbe8868..5c7a033d39d13 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -987,6 +987,10 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
+ 	*r = -1;
+ 
+ 	if (irq->shorthand == APIC_DEST_SELF) {
++		if (KVM_BUG_ON(!src, kvm)) {
++			*r = 0;
++			return true;
++		}
+ 		*r = kvm_apic_set_irq(src->vcpu, irq, dest_map);
+ 		return true;
+ 	}
+@@ -2242,10 +2246,7 @@ void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
+ 
+ void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8)
+ {
+-	struct kvm_lapic *apic = vcpu->arch.apic;
+-
+-	apic_set_tpr(apic, ((cr8 & 0x0f) << 4)
+-		     | (kvm_lapic_get_reg(apic, APIC_TASKPRI) & 4));
++	apic_set_tpr(vcpu->arch.apic, (cr8 & 0x0f) << 4);
+ }
+ 
+ u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu)
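The kvm_lapic_set_tpr() hunk above stops preserving bit 2 of the old TASKPRI and writes only the CR8-derived priority class. As a rough illustration (not the kernel code itself), CR8 carries the class in its low 4 bits while TASKPRI keeps it in bits 7:4:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: CR8[3:0] maps to TASKPRI[7:4].  After the
 * patch, kvm_lapic_set_tpr() writes exactly this value and no longer
 * folds in bit 2 of the previous TASKPRI contents. */
static uint8_t cr8_to_tpr(uint64_t cr8)
{
	return (uint8_t)((cr8 & 0x0f) << 4);
}

/* Inverse mapping, as used by kvm_lapic_get_cr8(). */
static uint64_t tpr_to_cr8(uint8_t tpr)
{
	return (tpr >> 4) & 0x0f;
}
```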
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index 9ae6168d381e2..f502fa3cb3b47 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -48,6 +48,7 @@
+ 			       X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE)
+ 
+ #define KVM_MMU_CR0_ROLE_BITS (X86_CR0_PG | X86_CR0_WP)
++#define KVM_MMU_EFER_ROLE_BITS (EFER_LME | EFER_NX)
+ 
+ static __always_inline u64 rsvd_bits(int s, int e)
+ {
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 708a5d297fe1e..c005905f28526 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -34,9 +34,8 @@
+ 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
+ 	#ifdef CONFIG_X86_64
+ 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
+-	#define CMPXCHG cmpxchg
++	#define CMPXCHG "cmpxchgq"
+ 	#else
+-	#define CMPXCHG cmpxchg64
+ 	#define PT_MAX_FULL_LEVELS 2
+ 	#endif
+ #elif PTTYPE == 32
+@@ -52,7 +51,7 @@
+ 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
+ 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
+ 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
+-	#define CMPXCHG cmpxchg
++	#define CMPXCHG "cmpxchgl"
+ #elif PTTYPE == PTTYPE_EPT
+ 	#define pt_element_t u64
+ 	#define guest_walker guest_walkerEPT
+@@ -65,7 +64,9 @@
+ 	#define PT_GUEST_DIRTY_SHIFT 9
+ 	#define PT_GUEST_ACCESSED_SHIFT 8
+ 	#define PT_HAVE_ACCESSED_DIRTY(mmu) ((mmu)->ept_ad)
+-	#define CMPXCHG cmpxchg64
++	#ifdef CONFIG_X86_64
++	#define CMPXCHG "cmpxchgq"
++	#endif
+ 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
+ #else
+ 	#error Invalid PTTYPE value
+@@ -147,43 +148,39 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ 			       pt_element_t __user *ptep_user, unsigned index,
+ 			       pt_element_t orig_pte, pt_element_t new_pte)
+ {
+-	int npages;
+-	pt_element_t ret;
+-	pt_element_t *table;
+-	struct page *page;
+-
+-	npages = get_user_pages_fast((unsigned long)ptep_user, 1, FOLL_WRITE, &page);
+-	if (likely(npages == 1)) {
+-		table = kmap_atomic(page);
+-		ret = CMPXCHG(&table[index], orig_pte, new_pte);
+-		kunmap_atomic(table);
+-
+-		kvm_release_page_dirty(page);
+-	} else {
+-		struct vm_area_struct *vma;
+-		unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK;
+-		unsigned long pfn;
+-		unsigned long paddr;
+-
+-		mmap_read_lock(current->mm);
+-		vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE);
+-		if (!vma || !(vma->vm_flags & VM_PFNMAP)) {
+-			mmap_read_unlock(current->mm);
+-			return -EFAULT;
+-		}
+-		pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+-		paddr = pfn << PAGE_SHIFT;
+-		table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB);
+-		if (!table) {
+-			mmap_read_unlock(current->mm);
+-			return -EFAULT;
+-		}
+-		ret = CMPXCHG(&table[index], orig_pte, new_pte);
+-		memunmap(table);
+-		mmap_read_unlock(current->mm);
+-	}
++	int r = -EFAULT;
+ 
+-	return (ret != orig_pte);
++	if (!user_access_begin(ptep_user, sizeof(pt_element_t)))
++		return -EFAULT;
++
++#ifdef CMPXCHG
++	asm volatile("1:" LOCK_PREFIX CMPXCHG " %[new], %[ptr]\n"
++		     "mov $0, %[r]\n"
++		     "setnz %b[r]\n"
++		     "2:"
++		     _ASM_EXTABLE_UA(1b, 2b)
++		     : [ptr] "+m" (*ptep_user),
++		       [old] "+a" (orig_pte),
++		       [r] "+q" (r)
++		     : [new] "r" (new_pte)
++		     : "memory");
++#else
++	asm volatile("1:" LOCK_PREFIX "cmpxchg8b %[ptr]\n"
++		     "movl $0, %[r]\n"
++		     "jz 2f\n"
++		     "incl %[r]\n"
++		     "2:"
++		     _ASM_EXTABLE_UA(1b, 2b)
++		     : [ptr] "+m" (*ptep_user),
++		       [old] "+A" (orig_pte),
++		       [r] "+rm" (r)
++		     : [new_lo] "b" ((u32)new_pte),
++		       [new_hi] "c" ((u32)(new_pte >> 32))
++		     : "memory");
++#endif
++
++	user_access_end();
++	return r;
+ }
+ 
+ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
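The cmpxchg_gpte() rewrite above drops the get_user_pages_fast()/kmap_atomic() machinery in favor of a direct compare-and-swap on the user mapping under user_access_begin(). The essential contract — atomically replace a guest PTE only if it still holds the expected value, and report a mismatch — can be sketched with portable C11 atomics (hypothetical helper, not the kernel's inline-asm version):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

typedef uint64_t pt_element_t;

/* Minimal sketch of the cmpxchg_gpte() contract: swap in new_pte only
 * if *ptep still equals orig_pte.  Returns 0 on success and 1 when a
 * concurrent writer won the race (the kernel version additionally
 * returns -EFAULT if the user access faults). */
static int cmpxchg_gpte_sketch(_Atomic pt_element_t *ptep,
			       pt_element_t orig_pte, pt_element_t new_pte)
{
	return atomic_compare_exchange_strong(ptep, &orig_pte, new_pte) ? 0 : 1;
}
```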
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 095b5cb4e3c9b..0f0ec2c33b726 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -99,15 +99,18 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ }
+ 
+ /*
+- * Finds the next valid root after root (or the first valid root if root
+- * is NULL), takes a reference on it, and returns that next root. If root
+- * is not NULL, this thread should have already taken a reference on it, and
+- * that reference will be dropped. If no valid root is found, this
+- * function will return NULL.
++ * Returns the next root after @prev_root (or the first root if @prev_root is
++ * NULL).  A reference to the returned root is acquired, and the reference to
++ * @prev_root is released (the caller obviously must hold a reference to
++ * @prev_root if it's non-NULL).
++ *
++ * If @only_valid is true, invalid roots are skipped.
++ *
++ * Returns NULL if the end of tdp_mmu_roots was reached.
+  */
+ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
+ 					      struct kvm_mmu_page *prev_root,
+-					      bool shared)
++					      bool shared, bool only_valid)
+ {
+ 	struct kvm_mmu_page *next_root;
+ 
+@@ -121,9 +124,14 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
+ 		next_root = list_first_or_null_rcu(&kvm->arch.tdp_mmu_roots,
+ 						   typeof(*next_root), link);
+ 
+-	while (next_root && !kvm_tdp_mmu_get_root(kvm, next_root))
++	while (next_root) {
++		if ((!only_valid || !next_root->role.invalid) &&
++		    kvm_tdp_mmu_get_root(kvm, next_root))
++			break;
++
+ 		next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots,
+ 				&next_root->link, typeof(*next_root), link);
++	}
+ 
+ 	rcu_read_unlock();
+ 
+@@ -143,13 +151,19 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
+  * mode. In the unlikely event that this thread must free a root, the lock
+  * will be temporarily dropped and reacquired in write mode.
+  */
+-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
+-	for (_root = tdp_mmu_next_root(_kvm, NULL, _shared);		\
+-	     _root;							\
+-	     _root = tdp_mmu_next_root(_kvm, _root, _shared))		\
+-		if (kvm_mmu_page_as_id(_root) != _as_id) {		\
++#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, _only_valid)\
++	for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, _only_valid);	\
++	     _root;								\
++	     _root = tdp_mmu_next_root(_kvm, _root, _shared, _only_valid))	\
++		if (kvm_mmu_page_as_id(_root) != _as_id) {			\
+ 		} else
+ 
++#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
++	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
++
++#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)		\
++	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, false)
++
+ #define for_each_tdp_mmu_root(_kvm, _root, _as_id)				\
+ 	list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link,		\
+ 				lockdep_is_held_type(&kvm->mmu_lock, 0) ||	\
+@@ -200,7 +214,10 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+ 
+ 	role = page_role_for_level(vcpu, vcpu->arch.mmu->shadow_root_level);
+ 
+-	/* Check for an existing root before allocating a new one. */
++	/*
++	 * Check for an existing root before allocating a new one.  Note, the
++	 * role check prevents consuming an invalid root.
++	 */
+ 	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
+ 		if (root->role.word == role.word &&
+ 		    kvm_tdp_mmu_get_root(kvm, root))
+@@ -1032,13 +1049,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
+ 				 bool flush)
+ {
+-	struct kvm_mmu_page *root;
+-
+-	for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
+-		flush = zap_gfn_range(kvm, root, range->start, range->end,
+-				      range->may_block, flush, false);
+-
+-	return flush;
++	return __kvm_tdp_mmu_zap_gfn_range(kvm, range->slot->as_id, range->start,
++					   range->end, range->may_block, flush);
+ }
+ 
+ typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
+@@ -1221,7 +1233,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
+ 
+ 	lockdep_assert_held_read(&kvm->mmu_lock);
+ 
+-	for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
++	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ 		spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
+ 			     slot->base_gfn + slot->npages, min_level);
+ 
+@@ -1249,6 +1261,9 @@ retry:
+ 		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
+ 			continue;
+ 
++		if (!is_shadow_present_pte(iter.old_spte))
++			continue;
++
+ 		if (spte_ad_need_write_protect(iter.old_spte)) {
+ 			if (is_writable_pte(iter.old_spte))
+ 				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
+@@ -1291,7 +1306,7 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
+ 
+ 	lockdep_assert_held_read(&kvm->mmu_lock);
+ 
+-	for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
++	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ 		spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
+ 				slot->base_gfn + slot->npages);
+ 
+@@ -1416,7 +1431,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+ 
+ 	lockdep_assert_held_read(&kvm->mmu_lock);
+ 
+-	for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
++	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ 		zap_collapsible_spte_range(kvm, root, slot);
+ }
+ 
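The tdp_mmu_next_root() change above moves the validity check out of kvm_tdp_mmu_get_root() and into the iterator via a new `only_valid` flag. A hedged userspace sketch of the same "walk a list, optionally skip entries failing a predicate, and take a reference on the survivor" pattern (names and types hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct root {
	struct root *next;
	bool invalid;
	int refcount;
};

/* Sketch of tdp_mmu_next_root(): return the first element at or after
 * @start that passes the validity filter and can be referenced. */
static struct root *next_root(struct root *start, bool only_valid)
{
	for (struct root *r = start; r; r = r->next) {
		if (only_valid && r->invalid)
			continue;
		if (r->refcount > 0) {	/* stands in for refcount_inc_not_zero() */
			r->refcount++;
			return r;
		}
	}
	return NULL;
}
```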
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
+index 3899004a5d91e..08c917511fedd 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.h
++++ b/arch/x86/kvm/mmu/tdp_mmu.h
+@@ -10,9 +10,6 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm *kvm,
+ 						     struct kvm_mmu_page *root)
+ {
+-	if (root->role.invalid)
+-		return false;
+-
+ 	return refcount_inc_not_zero(&root->tdp_mmu_root_count);
+ }
+ 
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index 212af871ca746..72713324f3bf9 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -799,7 +799,7 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ {
+ 	struct kvm_kernel_irq_routing_entry *e;
+ 	struct kvm_irq_routing_table *irq_rt;
+-	int idx, ret = -EINVAL;
++	int idx, ret = 0;
+ 
+ 	if (!kvm_arch_has_assigned_device(kvm) ||
+ 	    !irq_remapping_cap(IRQ_POSTING_CAP))
+@@ -810,7 +810,13 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ 
+ 	idx = srcu_read_lock(&kvm->irq_srcu);
+ 	irq_rt = srcu_dereference(kvm->irq_routing, &kvm->irq_srcu);
+-	WARN_ON(guest_irq >= irq_rt->nr_rt_entries);
++
++	if (guest_irq >= irq_rt->nr_rt_entries ||
++		hlist_empty(&irq_rt->map[guest_irq])) {
++		pr_warn_once("no route for guest_irq %u/%u (broken user space?)\n",
++			     guest_irq, irq_rt->nr_rt_entries);
++		goto out;
++	}
+ 
+ 	hlist_for_each_entry(e, &irq_rt->map[guest_irq], link) {
+ 		struct vcpu_data vcpu_info;
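The svm_update_pi_irte() hunk replaces a bare WARN_ON() with a real bounds-and-emptiness check on the routing table, turning a missing route into a warn-once no-op (note `ret` also changes from -EINVAL to 0). A rough sketch of that hardened-lookup shape, with a string array standing in for the hlist-based table:

```c
#include <assert.h>
#include <stddef.h>

#define NR_RT_ENTRIES 4

/* Sketch of the hardened lookup: reject out-of-range or empty routing
 * slots up front instead of warning and walking the entry anyway. */
static int update_pi_irte_sketch(const char * const *map, unsigned int nr,
				 unsigned int guest_irq, const char **route)
{
	if (guest_irq >= nr || map[guest_irq] == NULL) {
		*route = NULL;
		return 0;	/* patched code treats this as success */
	}
	*route = map[guest_irq];
	return 0;
}
```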
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index be28831412209..20a5d3abaeda3 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2352,7 +2352,7 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
+ 	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
+ }
+ 
+-static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
++static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
+ {
+ 	struct kvm_vcpu *vcpu;
+ 	struct ghcb *ghcb;
+@@ -2457,7 +2457,7 @@ static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
+ 		goto vmgexit_err;
+ 	}
+ 
+-	return true;
++	return 0;
+ 
+ vmgexit_err:
+ 	vcpu = &svm->vcpu;
+@@ -2480,7 +2480,8 @@ vmgexit_err:
+ 	ghcb_set_sw_exit_info_1(ghcb, 2);
+ 	ghcb_set_sw_exit_info_2(ghcb, reason);
+ 
+-	return false;
++	/* Resume the guest to "return" the error code. */
++	return 1;
+ }
+ 
+ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
+@@ -2539,7 +2540,7 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
+ }
+ 
+ #define GHCB_SCRATCH_AREA_LIMIT		(16ULL * PAGE_SIZE)
+-static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
++static int setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
+ {
+ 	struct vmcb_control_area *control = &svm->vmcb->control;
+ 	struct ghcb *ghcb = svm->sev_es.ghcb;
+@@ -2592,14 +2593,14 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
+ 		}
+ 		scratch_va = kvzalloc(len, GFP_KERNEL_ACCOUNT);
+ 		if (!scratch_va)
+-			goto e_scratch;
++			return -ENOMEM;
+ 
+ 		if (kvm_read_guest(svm->vcpu.kvm, scratch_gpa_beg, scratch_va, len)) {
+ 			/* Unable to copy scratch area from guest */
+ 			pr_err("vmgexit: kvm_read_guest for scratch area failed\n");
+ 
+ 			kvfree(scratch_va);
+-			goto e_scratch;
++			return -EFAULT;
+ 		}
+ 
+ 		/*
+@@ -2615,13 +2616,13 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
+ 	svm->sev_es.ghcb_sa = scratch_va;
+ 	svm->sev_es.ghcb_sa_len = len;
+ 
+-	return true;
++	return 0;
+ 
+ e_scratch:
+ 	ghcb_set_sw_exit_info_1(ghcb, 2);
+ 	ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_SCRATCH_AREA);
+ 
+-	return false;
++	return 1;
+ }
+ 
+ static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
+@@ -2759,17 +2760,18 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
+ 
+ 	exit_code = ghcb_get_sw_exit_code(ghcb);
+ 
+-	if (!sev_es_validate_vmgexit(svm))
+-		return 1;
++	ret = sev_es_validate_vmgexit(svm);
++	if (ret)
++		return ret;
+ 
+ 	sev_es_sync_from_ghcb(svm);
+ 	ghcb_set_sw_exit_info_1(ghcb, 0);
+ 	ghcb_set_sw_exit_info_2(ghcb, 0);
+ 
+-	ret = 1;
+ 	switch (exit_code) {
+ 	case SVM_VMGEXIT_MMIO_READ:
+-		if (!setup_vmgexit_scratch(svm, true, control->exit_info_2))
++		ret = setup_vmgexit_scratch(svm, true, control->exit_info_2);
++		if (ret)
+ 			break;
+ 
+ 		ret = kvm_sev_es_mmio_read(vcpu,
+@@ -2778,7 +2780,8 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
+ 					   svm->sev_es.ghcb_sa);
+ 		break;
+ 	case SVM_VMGEXIT_MMIO_WRITE:
+-		if (!setup_vmgexit_scratch(svm, false, control->exit_info_2))
++		ret = setup_vmgexit_scratch(svm, false, control->exit_info_2);
++		if (ret)
+ 			break;
+ 
+ 		ret = kvm_sev_es_mmio_write(vcpu,
+@@ -2811,6 +2814,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
+ 			ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_INPUT);
+ 		}
+ 
++		ret = 1;
+ 		break;
+ 	}
+ 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
+@@ -2830,6 +2834,7 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+ {
+ 	int count;
+ 	int bytes;
++	int r;
+ 
+ 	if (svm->vmcb->control.exit_info_2 > INT_MAX)
+ 		return -EINVAL;
+@@ -2838,8 +2843,9 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+ 	if (unlikely(check_mul_overflow(count, size, &bytes)))
+ 		return -EINVAL;
+ 
+-	if (!setup_vmgexit_scratch(svm, in, bytes))
+-		return 1;
++	r = setup_vmgexit_scratch(svm, in, bytes);
++	if (r)
++		return r;
+ 
+ 	return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa,
+ 				    count, in);
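The SEV hunks above convert setup_vmgexit_scratch() and sev_es_validate_vmgexit() from bool to int so callers can distinguish "resume the guest with an error code" (positive) from fatal host-side errors (negative errno). A small sketch of that three-way convention, with hypothetical names:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Sketch of the bool -> int conversion: 0 means success, a negative
 * errno means a host error that fails the exit handler, and a positive
 * value means "resume the guest so it can observe the GHCB error". */
static int setup_scratch_sketch(size_t len, size_t limit, char **out)
{
	if (len > limit)
		return 1;		/* guest-visible error: resume guest */
	*out = calloc(1, len);
	if (!*out)
		return -ENOMEM;		/* host error: propagate to caller */
	return 0;
}
```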
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e8f495b9ae103..ff181a6c1005f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1614,8 +1614,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		return r;
+ 	}
+ 
+-	/* Update reserved bits */
+-	if ((efer ^ old_efer) & EFER_NX)
++	if ((efer ^ old_efer) & KVM_MMU_EFER_ROLE_BITS)
+ 		kvm_mmu_reset_context(vcpu);
+ 
+ 	return 0;
+diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
+index e13b0b49fcdfc..d7249f4c90f1b 100644
+--- a/arch/x86/xen/pmu.c
++++ b/arch/x86/xen/pmu.c
+@@ -512,10 +512,7 @@ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id)
+ 	return ret;
+ }
+ 
+-bool is_xen_pmu(int cpu)
+-{
+-	return (get_xenpmu_data() != NULL);
+-}
++bool is_xen_pmu;
+ 
+ void xen_pmu_init(int cpu)
+ {
+@@ -526,7 +523,7 @@ void xen_pmu_init(int cpu)
+ 
+ 	BUILD_BUG_ON(sizeof(struct xen_pmu_data) > PAGE_SIZE);
+ 
+-	if (xen_hvm_domain())
++	if (xen_hvm_domain() || (cpu != 0 && !is_xen_pmu))
+ 		return;
+ 
+ 	xenpmu_data = (struct xen_pmu_data *)get_zeroed_page(GFP_KERNEL);
+@@ -547,7 +544,8 @@ void xen_pmu_init(int cpu)
+ 	per_cpu(xenpmu_shared, cpu).xenpmu_data = xenpmu_data;
+ 	per_cpu(xenpmu_shared, cpu).flags = 0;
+ 
+-	if (cpu == 0) {
++	if (!is_xen_pmu) {
++		is_xen_pmu = true;
+ 		perf_register_guest_info_callbacks(&xen_guest_cbs);
+ 		xen_pmu_arch_init();
+ 	}
+diff --git a/arch/x86/xen/pmu.h b/arch/x86/xen/pmu.h
+index 0e83a160589bc..65c58894fc79f 100644
+--- a/arch/x86/xen/pmu.h
++++ b/arch/x86/xen/pmu.h
+@@ -4,6 +4,8 @@
+ 
+ #include <xen/interface/xenpmu.h>
+ 
++extern bool is_xen_pmu;
++
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id);
+ #ifdef CONFIG_XEN_HAVE_VPMU
+ void xen_pmu_init(int cpu);
+@@ -12,7 +14,6 @@ void xen_pmu_finish(int cpu);
+ static inline void xen_pmu_init(int cpu) {}
+ static inline void xen_pmu_finish(int cpu) {}
+ #endif
+-bool is_xen_pmu(int cpu);
+ bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err);
+ bool pmu_msr_write(unsigned int msr, uint32_t low, uint32_t high, int *err);
+ int pmu_apic_update(uint32_t reg);
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 4a6019238ee7d..688aa8b6ae29a 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -129,7 +129,7 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	per_cpu(xen_irq_work, cpu).irq = rc;
+ 	per_cpu(xen_irq_work, cpu).name = callfunc_name;
+ 
+-	if (is_xen_pmu(cpu)) {
++	if (is_xen_pmu) {
+ 		pmu_name = kasprintf(GFP_KERNEL, "pmu%d", cpu);
+ 		rc = bind_virq_to_irqhandler(VIRQ_XENPMU, cpu,
+ 					     xen_pmu_irq_handler,
+diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
+index bd5aeb7955675..a63eca1266577 100644
+--- a/arch/xtensa/include/asm/pgtable.h
++++ b/arch/xtensa/include/asm/pgtable.h
+@@ -411,6 +411,10 @@ extern  void update_mmu_cache(struct vm_area_struct * vma,
+ 
+ typedef pte_t *pte_addr_t;
+ 
++void update_mmu_tlb(struct vm_area_struct *vma,
++		    unsigned long address, pte_t *ptep);
++#define __HAVE_ARCH_UPDATE_MMU_TLB
++
+ #endif /* !defined (__ASSEMBLY__) */
+ 
+ #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index 37d3e9887fe7b..d68987d703e75 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -246,8 +246,8 @@ extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define xtensa_set_sr(x, sr) \
+ 	({ \
+-	 unsigned int v = (unsigned int)(x); \
+-	 __asm__ __volatile__ ("wsr %0, "__stringify(sr) :: "a"(v)); \
++	 __asm__ __volatile__ ("wsr %0, "__stringify(sr) :: \
++			       "a"((unsigned int)(x))); \
+ 	 })
+ 
+ #define xtensa_get_sr(sr) \
+diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
+index 61cf6497a646b..0dde21e0d3de4 100644
+--- a/arch/xtensa/kernel/jump_label.c
++++ b/arch/xtensa/kernel/jump_label.c
+@@ -61,7 +61,7 @@ static void patch_text(unsigned long addr, const void *data, size_t sz)
+ 			.data = data,
+ 		};
+ 		stop_machine_cpuslocked(patch_text_stop_machine,
+-					&patch, NULL);
++					&patch, cpu_online_mask);
+ 	} else {
+ 		unsigned long flags;
+ 
+diff --git a/arch/xtensa/kernel/mxhead.S b/arch/xtensa/kernel/mxhead.S
+index 9f38437427264..b702c0908b1f6 100644
+--- a/arch/xtensa/kernel/mxhead.S
++++ b/arch/xtensa/kernel/mxhead.S
+@@ -37,11 +37,13 @@ _SetupOCD:
+ 	 * xt-gdb to single step via DEBUG exceptions received directly
+ 	 * by ocd.
+ 	 */
++#if XCHAL_HAVE_WINDOWED
+ 	movi	a1, 1
+ 	movi	a0, 0
+ 	wsr	a1, windowstart
+ 	wsr	a0, windowbase
+ 	rsync
++#endif
+ 
+ 	movi	a1, LOCKLEVEL
+ 	wsr	a1, ps
+diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
+index f436cf2efd8b7..27a477dae2322 100644
+--- a/arch/xtensa/mm/tlb.c
++++ b/arch/xtensa/mm/tlb.c
+@@ -162,6 +162,12 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
+ 	}
+ }
+ 
++void update_mmu_tlb(struct vm_area_struct *vma,
++		    unsigned long address, pte_t *ptep)
++{
++	local_flush_tlb_page(vma, address);
++}
++
+ #ifdef CONFIG_DEBUG_TLB_SANITY
+ 
+ static unsigned get_pte_for_vaddr(unsigned vaddr)
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 24a5c5329bcd0..809bc612d96b3 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -646,6 +646,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
+ 
++	/*
++	 * oom_bfqq is not allowed to move, oom_bfqq will hold ref to root_group
++	 * until elevator exit.
++	 */
++	if (bfqq == &bfqd->oom_bfqq)
++		return;
+ 	/*
+ 	 * Get extra reference to prevent bfqq from being freed in
+ 	 * next possible expire or deactivate.
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 8c0950c9a1a2f..342e927f213b3 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2662,6 +2662,15 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	 * are likely to increase the throughput.
+ 	 */
+ 	bfqq->new_bfqq = new_bfqq;
++	/*
++	 * The above assignment schedules the following redirections:
++	 * each time some I/O for bfqq arrives, the process that
++	 * generated that I/O is disassociated from bfqq and
++	 * associated with new_bfqq. Here we increases new_bfqq->ref
++	 * in advance, adding the number of processes that are
++	 * expected to be associated with new_bfqq as they happen to
++	 * issue I/O.
++	 */
+ 	new_bfqq->ref += process_refs;
+ 	return new_bfqq;
+ }
+@@ -2724,6 +2733,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_queue *in_service_bfqq, *new_bfqq;
+ 
++	/* if a merge has already been setup, then proceed with that first */
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
+ 	/*
+ 	 * Check delayed stable merge for rotational or non-queueing
+ 	 * devs. For this branch to be executed, bfqq must not be
+@@ -2825,9 +2838,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfq_too_late_for_merging(bfqq))
+ 		return NULL;
+ 
+-	if (bfqq->new_bfqq)
+-		return bfqq->new_bfqq;
+-
+ 	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
+ 		return NULL;
+ 
+@@ -5061,7 +5071,7 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ 	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
+ 	struct request *rq;
+ 	struct bfq_queue *in_serv_queue;
+-	bool waiting_rq, idle_timer_disabled;
++	bool waiting_rq, idle_timer_disabled = false;
+ 
+ 	spin_lock_irq(&bfqd->lock);
+ 
+@@ -5069,14 +5079,15 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ 	waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
+ 
+ 	rq = __bfq_dispatch_request(hctx);
+-
+-	idle_timer_disabled =
+-		waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
++	if (in_serv_queue == bfqd->in_service_queue) {
++		idle_timer_disabled =
++			waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
++	}
+ 
+ 	spin_unlock_irq(&bfqd->lock);
+-
+-	bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
+-				  idle_timer_disabled);
++	bfq_update_dispatch_stats(hctx->queue, rq,
++			idle_timer_disabled ? in_serv_queue : NULL,
++				idle_timer_disabled);
+ 
+ 	return rq;
+ }
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index b74cc0da118ec..709b901de3ca9 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -519,7 +519,7 @@ unsigned short bfq_ioprio_to_weight(int ioprio)
+ static unsigned short bfq_weight_to_ioprio(int weight)
+ {
+ 	return max_t(int, 0,
+-		     IOPRIO_NR_LEVELS * BFQ_WEIGHT_CONVERSION_COEFF - weight);
++		     IOPRIO_NR_LEVELS - weight / BFQ_WEIGHT_CONVERSION_COEFF);
+ }
+ 
+ static void bfq_get_entity(struct bfq_entity *entity)
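The bfq_weight_to_ioprio() fix above makes the function an actual inverse of bfq_ioprio_to_weight(), which computes `(IOPRIO_NR_LEVELS - ioprio) * BFQ_WEIGHT_CONVERSION_COEFF`; the old expression subtracted the weight from the scaled level count and could never round-trip. A sketch using the upstream constants (8 levels, coefficient 10):

```c
#include <assert.h>

#define IOPRIO_NR_LEVELS		8
#define BFQ_WEIGHT_CONVERSION_COEFF	10

static int max_int(int a, int b) { return a > b ? a : b; }

/* Forward map, as in bfq_ioprio_to_weight(). */
static unsigned short ioprio_to_weight(int ioprio)
{
	return (IOPRIO_NR_LEVELS - ioprio) * BFQ_WEIGHT_CONVERSION_COEFF;
}

/* Fixed inverse from the patch: divide the weight by the coefficient
 * instead of subtracting it from IOPRIO_NR_LEVELS * COEFF. */
static unsigned short weight_to_ioprio(int weight)
{
	return max_int(0, IOPRIO_NR_LEVELS - weight / BFQ_WEIGHT_CONVERSION_COEFF);
}
```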
+diff --git a/block/bio.c b/block/bio.c
+index 99cad261ec531..b49c6f87e638a 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1462,8 +1462,7 @@ again:
+ 	if (!bio_integrity_endio(bio))
+ 		return;
+ 
+-	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACKED))
+-		rq_qos_done_bio(bdev_get_queue(bio->bi_bdev), bio);
++	rq_qos_done_bio(bio);
+ 
+ 	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
+ 		trace_block_bio_complete(bdev_get_queue(bio->bi_bdev), bio);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 663aabfeba183..f896f49826655 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -856,11 +856,11 @@ static void blkcg_fill_root_iostats(void)
+ 			blk_queue_root_blkg(bdev_get_queue(bdev));
+ 		struct blkg_iostat tmp;
+ 		int cpu;
++		unsigned long flags;
+ 
+ 		memset(&tmp, 0, sizeof(tmp));
+ 		for_each_possible_cpu(cpu) {
+ 			struct disk_stats *cpu_dkstats;
+-			unsigned long flags;
+ 
+ 			cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
+ 			tmp.ios[BLKG_IOSTAT_READ] +=
+@@ -876,11 +876,11 @@ static void blkcg_fill_root_iostats(void)
+ 				cpu_dkstats->sectors[STAT_WRITE] << 9;
+ 			tmp.bytes[BLKG_IOSTAT_DISCARD] +=
+ 				cpu_dkstats->sectors[STAT_DISCARD] << 9;
+-
+-			flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
+-			blkg_iostat_set(&blkg->iostat.cur, &tmp);
+-			u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ 		}
++
++		flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
++		blkg_iostat_set(&blkg->iostat.cur, &tmp);
++		u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ 	}
+ }
+ 
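The blkcg_fill_root_iostats() hunk hoists the u64_stats write section out of the per-CPU loop: the counters are summed into a local temporary first, then published once. A userspace sketch of that "aggregate locally, publish once" pattern, with a plain counter standing in for the u64_stats seqcount:

```c
#include <assert.h>

#define NR_CPUS 4

static long per_cpu_ios[NR_CPUS];
static long published_ios;
static unsigned int seq;	/* stands in for the u64_stats seqcount */

/* Fixed pattern from blkcg_fill_root_iostats(): sum the per-CPU
 * counters into a temporary, then enter the write section exactly once
 * to publish, instead of begin/end on every loop iteration. */
static void fill_root_iostats(void)
{
	long tmp = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		tmp += per_cpu_ios[cpu];

	seq++;			/* u64_stats_update_begin_irqsave() */
	published_ios = tmp;
	seq++;			/* u64_stats_update_end_irqrestore() */
}
```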
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 6593c7123b97e..24d70e0555ddb 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -598,7 +598,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 	int inflight = 0;
+ 
+ 	blkg = bio->bi_blkg;
+-	if (!blkg || !bio_flagged(bio, BIO_TRACKED))
++	if (!blkg || !bio_flagged(bio, BIO_QOS_THROTTLED))
+ 		return;
+ 
+ 	iolat = blkg_to_lat(bio->bi_blkg);
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 893c1a60b701f..09bf679dc132e 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -8,6 +8,7 @@
+ #include <linux/blkdev.h>
+ #include <linux/blk-integrity.h>
+ #include <linux/scatterlist.h>
++#include <linux/blk-cgroup.h>
+ 
+ #include <trace/events/block.h>
+ 
+@@ -366,8 +367,6 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
+ 		trace_block_split(split, (*bio)->bi_iter.bi_sector);
+ 		submit_bio_noacct(*bio);
+ 		*bio = split;
+-
+-		blk_throtl_charge_bio_split(*bio);
+ 	}
+ }
+ 
+@@ -598,6 +597,9 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
+ static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
+ 		unsigned int nr_phys_segs)
+ {
++	if (!blk_cgroup_mergeable(req, bio))
++		goto no_merge;
++
+ 	if (blk_integrity_merge_bio(req->q, req, bio) == false)
+ 		goto no_merge;
+ 
+@@ -694,6 +696,9 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
+ 	if (total_phys_segments > blk_rq_get_max_segments(req))
+ 		return 0;
+ 
++	if (!blk_cgroup_mergeable(req, next->bio))
++		return 0;
++
+ 	if (blk_integrity_merge_rq(q, req, next) == false)
+ 		return 0;
+ 
+@@ -907,6 +912,10 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
+ 	if (rq->rq_disk != bio->bi_bdev->bd_disk)
+ 		return false;
+ 
++	/* don't merge across cgroup boundaries */
++	if (!blk_cgroup_mergeable(rq, bio))
++		return false;
++
+ 	/* only merge integrity protected bio into ditto rq */
+ 	if (blk_integrity_merge_bio(rq->q, rq, bio) == false)
+ 		return false;
+@@ -1093,18 +1102,21 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
+ 	if (!plug || rq_list_empty(plug->mq_list))
+ 		return false;
+ 
+-	/* check the previously added entry for a quick merge attempt */
+-	rq = rq_list_peek(&plug->mq_list);
+-	if (rq->q == q) {
++	rq_list_for_each(&plug->mq_list, rq) {
++		if (rq->q == q) {
++			*same_queue_rq = true;
++			if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
++			    BIO_MERGE_OK)
++				return true;
++			break;
++		}
++
+ 		/*
+-		 * Only blk-mq multiple hardware queues case checks the rq in
+-		 * the same queue, there should be only one such rq in a queue
++		 * Only keep iterating plug list for merges if we have multiple
++		 * queues
+ 		 */
+-		*same_queue_rq = true;
+-
+-		if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+-				BIO_MERGE_OK)
+-			return true;
++		if (!plug->multiple_queues)
++			break;
+ 	}
+ 	return false;
+ }
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index ba21449439cc4..9fb68aafe6f4a 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -206,11 +206,18 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+ 
+ static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+ {
++	unsigned long end = jiffies + HZ;
+ 	int ret;
+ 
+ 	do {
+ 		ret = __blk_mq_do_dispatch_sched(hctx);
+-	} while (ret == 1);
++		if (ret != 1)
++			break;
++		if (need_resched() || time_is_before_jiffies(end)) {
++			blk_mq_delay_run_hw_queue(hctx, 0);
++			break;
++		}
++	} while (1);
+ 
+ 	return ret;
+ }
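The blk_mq_do_dispatch_sched() change bounds the dispatch loop with a one-second jiffies budget and defers the remainder via blk_mq_delay_run_hw_queue() so a busy queue cannot monopolize the CPU. A loose userspace sketch of the budgeted-retry shape (clock() instead of jiffies; names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <time.h>

static int counter;
static int count_down(void *arg) { (void)arg; return --counter > 0 ? 1 : 0; }

/* Sketch of the budgeted loop: keep calling @work while it reports
 * progress (returns 1) or until the time budget expires; report whether
 * the remainder was deferred. */
static bool run_with_budget(int (*work)(void *), void *arg, double budget_sec)
{
	clock_t end = clock() + (clock_t)(budget_sec * CLOCKS_PER_SEC);

	while (work(arg) == 1) {
		if (clock() >= end)
			return true;	/* deferred: reschedule the rest */
	}
	return false;			/* ran to completion in budget */
}
```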
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 8874a63ae952b..8ef88aa1efbef 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2244,13 +2244,35 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
+ 		blk_mq_commit_rqs(hctx, &queued, from_schedule);
+ }
+ 
+-void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
++static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+ {
+-	struct blk_mq_hw_ctx *this_hctx;
+-	struct blk_mq_ctx *this_ctx;
+-	unsigned int depth;
++	struct blk_mq_hw_ctx *this_hctx = NULL;
++	struct blk_mq_ctx *this_ctx = NULL;
++	struct request *requeue_list = NULL;
++	unsigned int depth = 0;
+ 	LIST_HEAD(list);
+ 
++	do {
++		struct request *rq = rq_list_pop(&plug->mq_list);
++
++		if (!this_hctx) {
++			this_hctx = rq->mq_hctx;
++			this_ctx = rq->mq_ctx;
++		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
++			rq_list_add(&requeue_list, rq);
++			continue;
++		}
++		list_add_tail(&rq->queuelist, &list);
++		depth++;
++	} while (!rq_list_empty(plug->mq_list));
++
++	plug->mq_list = requeue_list;
++	trace_block_unplug(this_hctx->queue, depth, !from_sched);
++	blk_mq_sched_insert_requests(this_hctx, this_ctx, &list, from_sched);
++}
++
++void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
++{
+ 	if (rq_list_empty(plug->mq_list))
+ 		return;
+ 	plug->rq_count = 0;
+@@ -2261,37 +2283,9 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 			return;
+ 	}
+ 
+-	this_hctx = NULL;
+-	this_ctx = NULL;
+-	depth = 0;
+ 	do {
+-		struct request *rq;
+-
+-		rq = rq_list_pop(&plug->mq_list);
+-
+-		if (!this_hctx) {
+-			this_hctx = rq->mq_hctx;
+-			this_ctx = rq->mq_ctx;
+-		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
+-			trace_block_unplug(this_hctx->queue, depth,
+-						!from_schedule);
+-			blk_mq_sched_insert_requests(this_hctx, this_ctx,
+-						&list, from_schedule);
+-			depth = 0;
+-			this_hctx = rq->mq_hctx;
+-			this_ctx = rq->mq_ctx;
+-
+-		}
+-
+-		list_add(&rq->queuelist, &list);
+-		depth++;
++		blk_mq_dispatch_plug_list(plug, from_schedule);
+ 	} while (!rq_list_empty(plug->mq_list));
+-
+-	if (!list_empty(&list)) {
+-		trace_block_unplug(this_hctx->queue, depth, !from_schedule);
+-		blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
+-						from_schedule);
+-	}
+ }
+ 
+ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
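The blk_mq_flush_plug_list() refactor above pops requests off the plug list, batches consecutive entries belonging to the same (hctx, ctx) pair, and pushes mismatches onto a requeue list that becomes the new plug list for the next pass. A hedged sketch of one such pass over an int-keyed singly linked list:

```c
#include <assert.h>
#include <stddef.h>

struct req {
	struct req *next;
	int key;		/* stands in for the (hctx, ctx) pair */
};

/* One pass of the blk_mq_dispatch_plug_list() pattern: drain @list,
 * collect every entry matching the first key seen into @batch, and
 * leave the rest on @list for the next pass.  Returns the batch depth. */
static int take_batch(struct req **list, struct req **batch)
{
	struct req *requeue = NULL, *rq;
	int key = 0, depth = 0, have_key = 0;

	*batch = NULL;
	while ((rq = *list)) {
		*list = rq->next;
		if (!have_key) {
			key = rq->key;
			have_key = 1;
		} else if (rq->key != key) {
			rq->next = requeue;	/* requeue for next pass */
			requeue = rq;
			continue;
		}
		rq->next = *batch;
		*batch = rq;
		depth++;
	}
	*list = requeue;
	return depth;
}
```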
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 3cfbc8668cba9..68267007da1c6 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -177,20 +177,20 @@ static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
+ 		__rq_qos_requeue(q->rq_qos, rq);
+ }
+ 
+-static inline void rq_qos_done_bio(struct request_queue *q, struct bio *bio)
++static inline void rq_qos_done_bio(struct bio *bio)
+ {
+-	if (q->rq_qos)
+-		__rq_qos_done_bio(q->rq_qos, bio);
++	if (bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) ||
++			     bio_flagged(bio, BIO_QOS_MERGED))) {
++		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
++		if (q->rq_qos)
++			__rq_qos_done_bio(q->rq_qos, bio);
++	}
+ }
+ 
+ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
+ {
+-	/*
+-	 * BIO_TRACKED lets controllers know that a bio went through the
+-	 * normal rq_qos path.
+-	 */
+ 	if (q->rq_qos) {
+-		bio_set_flag(bio, BIO_TRACKED);
++		bio_set_flag(bio, BIO_QOS_THROTTLED);
+ 		__rq_qos_throttle(q->rq_qos, bio);
+ 	}
+ }
+@@ -205,8 +205,10 @@ static inline void rq_qos_track(struct request_queue *q, struct request *rq,
+ static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
+ 				struct bio *bio)
+ {
+-	if (q->rq_qos)
++	if (q->rq_qos) {
++		bio_set_flag(bio, BIO_QOS_MERGED);
+ 		__rq_qos_merge(q->rq_qos, rq, bio);
++	}
+ }
+ 
+ static inline void rq_qos_queue_depth_changed(struct request_queue *q)
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index cd75b0f73dc6f..2fe8da2a7216a 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -949,9 +949,6 @@ void blk_unregister_queue(struct gendisk *disk)
+ 	 */
+ 	if (queue_is_mq(q))
+ 		blk_mq_unregister_dev(disk_to_dev(disk), q);
+-
+-	kobject_uevent(&q->kobj, KOBJ_REMOVE);
+-	kobject_del(&q->kobj);
+ 	blk_trace_remove_sysfs(disk_to_dev(disk));
+ 
+ 	mutex_lock(&q->sysfs_lock);
+@@ -959,6 +956,11 @@ void blk_unregister_queue(struct gendisk *disk)
+ 		elv_unregister_queue(q);
+ 	disk_unregister_independent_access_ranges(disk);
+ 	mutex_unlock(&q->sysfs_lock);
++
++	/* Now that we've deleted all child objects, we can delete the queue. */
++	kobject_uevent(&q->kobj, KOBJ_REMOVE);
++	kobject_del(&q->kobj);
++
+ 	mutex_unlock(&q->sysfs_dir_lock);
+ 
+ 	kobject_put(&disk_to_dev(disk)->kobj);
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 39bb6e68a9a29..96573fa4edf2d 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -807,7 +807,8 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
+ 	unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
+ 	unsigned int bio_size = throtl_bio_data_size(bio);
+ 
+-	if (bps_limit == U64_MAX) {
++	/* no need to throttle if this bio's bytes have been accounted */
++	if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED)) {
+ 		if (wait)
+ 			*wait = 0;
+ 		return true;
+@@ -919,9 +920,12 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
+ 	unsigned int bio_size = throtl_bio_data_size(bio);
+ 
+ 	/* Charge the bio to the group */
+-	tg->bytes_disp[rw] += bio_size;
++	if (!bio_flagged(bio, BIO_THROTTLED)) {
++		tg->bytes_disp[rw] += bio_size;
++		tg->last_bytes_disp[rw] += bio_size;
++	}
++
+ 	tg->io_disp[rw]++;
+-	tg->last_bytes_disp[rw] += bio_size;
+ 	tg->last_io_disp[rw]++;
+ 
+ 	/*
+diff --git a/block/blk-throttle.h b/block/blk-throttle.h
+index 175f03abd9e41..cb43f4417d6ea 100644
+--- a/block/blk-throttle.h
++++ b/block/blk-throttle.h
+@@ -170,8 +170,6 @@ static inline bool blk_throtl_bio(struct bio *bio)
+ {
+ 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
+ 
+-	if (bio_flagged(bio, BIO_THROTTLED))
+-		return false;
+ 	if (!tg->has_rules[bio_data_dir(bio)])
+ 		return false;
+ 
+diff --git a/block/genhd.c b/block/genhd.c
+index f6a698f3252b5..2a469f774670f 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -328,7 +328,7 @@ int blk_alloc_ext_minor(void)
+ {
+ 	int idx;
+ 
+-	idx = ida_alloc_range(&ext_devt_ida, 0, NR_EXT_DEVT, GFP_KERNEL);
++	idx = ida_alloc_range(&ext_devt_ida, 0, NR_EXT_DEVT - 1, GFP_KERNEL);
+ 	if (idx == -ENOSPC)
+ 		return -EBUSY;
+ 	return idx;
+diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
+index 0b4d07aa88111..f94a1d1ad3a6c 100644
+--- a/crypto/asymmetric_keys/pkcs7_verify.c
++++ b/crypto/asymmetric_keys/pkcs7_verify.c
+@@ -174,12 +174,6 @@ static int pkcs7_find_key(struct pkcs7_message *pkcs7,
+ 		pr_devel("Sig %u: Found cert serial match X.509[%u]\n",
+ 			 sinfo->index, certix);
+ 
+-		if (strcmp(x509->pub->pkey_algo, sinfo->sig->pkey_algo) != 0) {
+-			pr_warn("Sig %u: X.509 algo and PKCS#7 sig algo don't match\n",
+-				sinfo->index);
+-			continue;
+-		}
+-
+ 		sinfo->signer = x509;
+ 		return 0;
+ 	}
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index 4fefb219bfdc8..7c9e6be35c30c 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -60,39 +60,83 @@ static void public_key_destroy(void *payload0, void *payload3)
+ }
+ 
+ /*
+- * Determine the crypto algorithm name.
++ * Given a public_key, and an encoding and hash_algo to be used for signing
++ * and/or verification with that key, determine the name of the corresponding
++ * akcipher algorithm.  Also check that encoding and hash_algo are allowed.
+  */
+-static
+-int software_key_determine_akcipher(const char *encoding,
+-				    const char *hash_algo,
+-				    const struct public_key *pkey,
+-				    char alg_name[CRYPTO_MAX_ALG_NAME])
++static int
++software_key_determine_akcipher(const struct public_key *pkey,
++				const char *encoding, const char *hash_algo,
++				char alg_name[CRYPTO_MAX_ALG_NAME])
+ {
+ 	int n;
+ 
+-	if (strcmp(encoding, "pkcs1") == 0) {
+-		/* The data wangled by the RSA algorithm is typically padded
+-		 * and encoded in some manner, such as EMSA-PKCS1-1_5 [RFC3447
+-		 * sec 8.2].
++	if (!encoding)
++		return -EINVAL;
++
++	if (strcmp(pkey->pkey_algo, "rsa") == 0) {
++		/*
++		 * RSA signatures usually use EMSA-PKCS1-1_5 [RFC3447 sec 8.2].
++		 */
++		if (strcmp(encoding, "pkcs1") == 0) {
++			if (!hash_algo)
++				n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
++					     "pkcs1pad(%s)",
++					     pkey->pkey_algo);
++			else
++				n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
++					     "pkcs1pad(%s,%s)",
++					     pkey->pkey_algo, hash_algo);
++			return n >= CRYPTO_MAX_ALG_NAME ? -EINVAL : 0;
++		}
++		if (strcmp(encoding, "raw") != 0)
++			return -EINVAL;
++		/*
++		 * Raw RSA cannot differentiate between different hash
++		 * algorithms.
++		 */
++		if (hash_algo)
++			return -EINVAL;
++	} else if (strncmp(pkey->pkey_algo, "ecdsa", 5) == 0) {
++		if (strcmp(encoding, "x962") != 0)
++			return -EINVAL;
++		/*
++		 * ECDSA signatures are taken over a raw hash, so they don't
++		 * differentiate between different hash algorithms.  That means
++		 * that the verifier should hard-code a specific hash algorithm.
++		 * Unfortunately, in practice ECDSA is used with multiple SHAs,
++		 * so we have to allow all of them and not just one.
+ 		 */
+ 		if (!hash_algo)
+-			n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
+-				     "pkcs1pad(%s)",
+-				     pkey->pkey_algo);
+-		else
+-			n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
+-				     "pkcs1pad(%s,%s)",
+-				     pkey->pkey_algo, hash_algo);
+-		return n >= CRYPTO_MAX_ALG_NAME ? -EINVAL : 0;
+-	}
+-
+-	if (strcmp(encoding, "raw") == 0 ||
+-	    strcmp(encoding, "x962") == 0) {
+-		strcpy(alg_name, pkey->pkey_algo);
+-		return 0;
++			return -EINVAL;
++		if (strcmp(hash_algo, "sha1") != 0 &&
++		    strcmp(hash_algo, "sha224") != 0 &&
++		    strcmp(hash_algo, "sha256") != 0 &&
++		    strcmp(hash_algo, "sha384") != 0 &&
++		    strcmp(hash_algo, "sha512") != 0)
++			return -EINVAL;
++	} else if (strcmp(pkey->pkey_algo, "sm2") == 0) {
++		if (strcmp(encoding, "raw") != 0)
++			return -EINVAL;
++		if (!hash_algo)
++			return -EINVAL;
++		if (strcmp(hash_algo, "sm3") != 0)
++			return -EINVAL;
++	} else if (strcmp(pkey->pkey_algo, "ecrdsa") == 0) {
++		if (strcmp(encoding, "raw") != 0)
++			return -EINVAL;
++		if (!hash_algo)
++			return -EINVAL;
++		if (strcmp(hash_algo, "streebog256") != 0 &&
++		    strcmp(hash_algo, "streebog512") != 0)
++			return -EINVAL;
++	} else {
++		/* Unknown public key algorithm */
++		return -ENOPKG;
+ 	}
+-
+-	return -ENOPKG;
++	if (strscpy(alg_name, pkey->pkey_algo, CRYPTO_MAX_ALG_NAME) < 0)
++		return -EINVAL;
++	return 0;
+ }
+ 
+ static u8 *pkey_pack_u32(u8 *dst, u32 val)
+@@ -113,9 +157,8 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 	u8 *key, *ptr;
+ 	int ret, len;
+ 
+-	ret = software_key_determine_akcipher(params->encoding,
+-					      params->hash_algo,
+-					      pkey, alg_name);
++	ret = software_key_determine_akcipher(pkey, params->encoding,
++					      params->hash_algo, alg_name);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -179,9 +222,8 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
+ 
+ 	pr_devel("==>%s()\n", __func__);
+ 
+-	ret = software_key_determine_akcipher(params->encoding,
+-					      params->hash_algo,
+-					      pkey, alg_name);
++	ret = software_key_determine_akcipher(pkey, params->encoding,
++					      params->hash_algo, alg_name);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -325,9 +367,23 @@ int public_key_verify_signature(const struct public_key *pkey,
+ 	BUG_ON(!sig);
+ 	BUG_ON(!sig->s);
+ 
+-	ret = software_key_determine_akcipher(sig->encoding,
+-					      sig->hash_algo,
+-					      pkey, alg_name);
++	/*
++	 * If the signature specifies a public key algorithm, it *must* match
++	 * the key's actual public key algorithm.
++	 *
++	 * Small exception: ECDSA signatures don't specify the curve, but ECDSA
++	 * keys do.  So the strings can mismatch slightly in that case:
++	 * "ecdsa-nist-*" for the key, but "ecdsa" for the signature.
++	 */
++	if (sig->pkey_algo) {
++		if (strcmp(pkey->pkey_algo, sig->pkey_algo) != 0 &&
++		    (strncmp(pkey->pkey_algo, "ecdsa-", 6) != 0 ||
++		     strcmp(sig->pkey_algo, "ecdsa") != 0))
++			return -EKEYREJECTED;
++	}
++
++	ret = software_key_determine_akcipher(pkey, sig->encoding,
++					      sig->hash_algo, alg_name);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
+index 3d45161b271a4..7fd56df8b9194 100644
+--- a/crypto/asymmetric_keys/x509_public_key.c
++++ b/crypto/asymmetric_keys/x509_public_key.c
+@@ -128,12 +128,6 @@ int x509_check_for_self_signed(struct x509_certificate *cert)
+ 			goto out;
+ 	}
+ 
+-	ret = -EKEYREJECTED;
+-	if (strcmp(cert->pub->pkey_algo, cert->sig->pkey_algo) != 0 &&
+-	    (strncmp(cert->pub->pkey_algo, "ecdsa-", 6) != 0 ||
+-	     strcmp(cert->sig->pkey_algo, "ecdsa") != 0))
+-		goto out;
+-
+ 	ret = public_key_verify_signature(cert->pub, cert->sig);
+ 	if (ret < 0) {
+ 		if (ret == -ENOPKG) {
+diff --git a/crypto/authenc.c b/crypto/authenc.c
+index 670bf1a01d00e..17f674a7cdff5 100644
+--- a/crypto/authenc.c
++++ b/crypto/authenc.c
+@@ -253,7 +253,7 @@ static int crypto_authenc_decrypt_tail(struct aead_request *req,
+ 		dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
+ 
+ 	skcipher_request_set_tfm(skreq, ctx->enc);
+-	skcipher_request_set_callback(skreq, aead_request_flags(req),
++	skcipher_request_set_callback(skreq, flags,
+ 				      req->base.complete, req->base.data);
+ 	skcipher_request_set_crypt(skreq, src, dst,
+ 				   req->cryptlen - authsize, req->iv);
+diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
+index 8ac3e73e8ea65..9d804831c8b3f 100644
+--- a/crypto/rsa-pkcs1pad.c
++++ b/crypto/rsa-pkcs1pad.c
+@@ -476,6 +476,8 @@ static int pkcs1pad_verify_complete(struct akcipher_request *req, int err)
+ 	pos++;
+ 
+ 	if (digest_info) {
++		if (digest_info->size > dst_len - pos)
++			goto done;
+ 		if (crypto_memneq(out_buf + pos, digest_info->data,
+ 				  digest_info->size))
+ 			goto done;
+@@ -495,7 +497,7 @@ static int pkcs1pad_verify_complete(struct akcipher_request *req, int err)
+ 			   sg_nents_for_len(req->src,
+ 					    req->src_len + req->dst_len),
+ 			   req_ctx->out_buf + ctx->key_size,
+-			   req->dst_len, ctx->key_size);
++			   req->dst_len, req->src_len);
+ 	/* Do the actual verification step. */
+ 	if (memcmp(req_ctx->out_buf + ctx->key_size, out_buf + pos,
+ 		   req->dst_len) != 0)
+@@ -538,7 +540,7 @@ static int pkcs1pad_verify(struct akcipher_request *req)
+ 
+ 	if (WARN_ON(req->dst) ||
+ 	    WARN_ON(!req->dst_len) ||
+-	    !ctx->key_size || req->src_len < ctx->key_size)
++	    !ctx->key_size || req->src_len != ctx->key_size)
+ 		return -EINVAL;
+ 
+ 	req_ctx->out_buf = kmalloc(ctx->key_size + req->dst_len, GFP_KERNEL);
+@@ -621,6 +623,11 @@ static int pkcs1pad_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 
+ 	rsa_alg = crypto_spawn_akcipher_alg(&ctx->spawn);
+ 
++	if (strcmp(rsa_alg->base.cra_name, "rsa") != 0) {
++		err = -EINVAL;
++		goto err_free_inst;
++	}
++
+ 	err = -ENAMETOOLONG;
+ 	hash_name = crypto_attr_alg_name(tb[2]);
+ 	if (IS_ERR(hash_name)) {
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 6c12f30dbdd6d..63c85b9e64e08 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -466,3 +466,4 @@ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("XTS block cipher mode");
+ MODULE_ALIAS_CRYPTO("xts");
+ MODULE_IMPORT_NS(CRYPTO_INTERNAL);
++MODULE_SOFTDEP("pre: ecb");
+diff --git a/drivers/acpi/acpica/nswalk.c b/drivers/acpi/acpica/nswalk.c
+index 915c2433463d7..e7c30ce06e189 100644
+--- a/drivers/acpi/acpica/nswalk.c
++++ b/drivers/acpi/acpica/nswalk.c
+@@ -169,6 +169,9 @@ acpi_ns_walk_namespace(acpi_object_type type,
+ 
+ 	if (start_node == ACPI_ROOT_OBJECT) {
+ 		start_node = acpi_gbl_root_node;
++		if (!start_node) {
++			return_ACPI_STATUS(AE_NO_NAMESPACE);
++		}
+ 	}
+ 
+ 	/* Null child means "get first node" */
+diff --git a/drivers/acpi/apei/bert.c b/drivers/acpi/apei/bert.c
+index 19e50fcbf4d6f..598fd19b65fa4 100644
+--- a/drivers/acpi/apei/bert.c
++++ b/drivers/acpi/apei/bert.c
+@@ -29,6 +29,7 @@
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt) "BERT: " fmt
++#define ACPI_BERT_PRINT_MAX_LEN 1024
+ 
+ static int bert_disable;
+ 
+@@ -58,8 +59,11 @@ static void __init bert_print_all(struct acpi_bert_region *region,
+ 		}
+ 
+ 		pr_info_once("Error records from previous boot:\n");
+-
+-		cper_estatus_print(KERN_INFO HW_ERR, estatus);
++		if (region_len < ACPI_BERT_PRINT_MAX_LEN)
++			cper_estatus_print(KERN_INFO HW_ERR, estatus);
++		else
++			pr_info_once("Max print length exceeded, table data is available at:\n"
++				     "/sys/firmware/acpi/tables/data/BERT");
+ 
+ 		/*
+ 		 * Because the boot error source is "one-time polled" type,
+@@ -77,7 +81,7 @@ static int __init setup_bert_disable(char *str)
+ {
+ 	bert_disable = 1;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("bert_disable", setup_bert_disable);
+ 
+diff --git a/drivers/acpi/apei/erst.c b/drivers/acpi/apei/erst.c
+index 242f3c2d55330..698d67cee0527 100644
+--- a/drivers/acpi/apei/erst.c
++++ b/drivers/acpi/apei/erst.c
+@@ -891,7 +891,7 @@ EXPORT_SYMBOL_GPL(erst_clear);
+ static int __init setup_erst_disable(char *str)
+ {
+ 	erst_disable = 1;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("erst_disable", setup_erst_disable);
+diff --git a/drivers/acpi/apei/hest.c b/drivers/acpi/apei/hest.c
+index 0edc1ed476737..6aef1ee5e1bdb 100644
+--- a/drivers/acpi/apei/hest.c
++++ b/drivers/acpi/apei/hest.c
+@@ -224,7 +224,7 @@ err:
+ static int __init setup_hest_disable(char *str)
+ {
+ 	hest_disable = HEST_DISABLED;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("hest_disable", setup_hest_disable);
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index dd535b4b9a160..3500744e6862e 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -332,21 +332,32 @@ static void acpi_bus_osc_negotiate_platform_control(void)
+ 	if (ACPI_FAILURE(acpi_run_osc(handle, &context)))
+ 		return;
+ 
+-	kfree(context.ret.pointer);
++	capbuf_ret = context.ret.pointer;
++	if (context.ret.length <= OSC_SUPPORT_DWORD) {
++		kfree(context.ret.pointer);
++		return;
++	}
+ 
+-	/* Now run _OSC again with query flag clear */
++	/*
++	 * Now run _OSC again with query flag clear and with the caps
++	 * supported by both the OS and the platform.
++	 */
+ 	capbuf[OSC_QUERY_DWORD] = 0;
++	capbuf[OSC_SUPPORT_DWORD] = capbuf_ret[OSC_SUPPORT_DWORD];
++	kfree(context.ret.pointer);
+ 
+ 	if (ACPI_FAILURE(acpi_run_osc(handle, &context)))
+ 		return;
+ 
+ 	capbuf_ret = context.ret.pointer;
+-	osc_sb_apei_support_acked =
+-		capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
+-	osc_pc_lpi_support_confirmed =
+-		capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT;
+-	osc_sb_native_usb4_support_confirmed =
+-		capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT;
++	if (context.ret.length > OSC_SUPPORT_DWORD) {
++		osc_sb_apei_support_acked =
++			capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
++		osc_pc_lpi_support_confirmed =
++			capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT;
++		osc_sb_native_usb4_support_confirmed =
++			capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT;
++	}
+ 
+ 	kfree(context.ret.pointer);
+ }
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 12a156d8283e6..c496604fc9459 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -690,6 +690,11 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ 	cpc_obj = &out_obj->package.elements[0];
+ 	if (cpc_obj->type == ACPI_TYPE_INTEGER)	{
+ 		num_ent = cpc_obj->integer.value;
++		if (num_ent <= 1) {
++			pr_debug("Unexpected _CPC NumEntries value (%d) for CPU:%d\n",
++				 num_ent, pr->id);
++			goto out_free;
++		}
+ 	} else {
+ 		pr_debug("Unexpected entry type(%d) for NumEntries\n",
+ 				cpc_obj->type);
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 2366f54d8e9cf..e8f56448ca012 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -685,7 +685,7 @@ int __acpi_node_get_property_reference(const struct fwnode_handle *fwnode,
+ 	 */
+ 	if (obj->type == ACPI_TYPE_LOCAL_REFERENCE) {
+ 		if (index)
+-			return -EINVAL;
++			return -ENOENT;
+ 
+ 		ret = acpi_bus_get_device(obj->reference.handle, &device);
+ 		if (ret)
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 6b66306932016..64ce42b6c6b64 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -809,7 +809,7 @@ static int __init save_async_options(char *buf)
+ 		pr_warn("Too long list of driver names for 'driver_async_probe'!\n");
+ 
+ 	strlcpy(async_probe_drv_names, buf, ASYNC_DRV_NAMES_MAX_LEN);
+-	return 0;
++	return 1;
+ }
+ __setup("driver_async_probe=", save_async_options);
+ 
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 365cd4a7f2397..60c38f9cf1a75 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -663,14 +663,16 @@ static int init_memory_block(unsigned long block_id, unsigned long state,
+ 	mem->nr_vmemmap_pages = nr_vmemmap_pages;
+ 	INIT_LIST_HEAD(&mem->group_next);
+ 
++	ret = register_memory(mem);
++	if (ret)
++		return ret;
++
+ 	if (group) {
+ 		mem->group = group;
+ 		list_add(&mem->group_next, &group->memory_blocks);
+ 	}
+ 
+-	ret = register_memory(mem);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static int add_memory_block(unsigned long base_section_nr)
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 5db704f02e712..7e8039d1884cc 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -2058,9 +2058,9 @@ static int genpd_remove(struct generic_pm_domain *genpd)
+ 		kfree(link);
+ 	}
+ 
+-	genpd_debug_remove(genpd);
+ 	list_del(&genpd->gpd_list_node);
+ 	genpd_unlock(genpd);
++	genpd_debug_remove(genpd);
+ 	cancel_work_sync(&genpd->power_off_work);
+ 	if (genpd_is_cpu_domain(genpd))
+ 		free_cpumask_var(genpd->cpus);
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 04ea92cbd9cfd..08c8a69d7b810 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -2018,7 +2018,9 @@ static bool pm_ops_is_empty(const struct dev_pm_ops *ops)
+ 
+ void device_pm_check_callbacks(struct device *dev)
+ {
+-	spin_lock_irq(&dev->power.lock);
++	unsigned long flags;
++
++	spin_lock_irqsave(&dev->power.lock, flags);
+ 	dev->power.no_pm_callbacks =
+ 		(!dev->bus || (pm_ops_is_empty(dev->bus->pm) &&
+ 		 !dev->bus->suspend && !dev->bus->resume)) &&
+@@ -2027,7 +2029,7 @@ void device_pm_check_callbacks(struct device *dev)
+ 		(!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) &&
+ 		(!dev->driver || (pm_ops_is_empty(dev->driver->pm) &&
+ 		 !dev->driver->suspend && !dev->driver->resume));
+-	spin_unlock_irq(&dev->power.lock);
++	spin_unlock_irqrestore(&dev->power.lock, flags);
+ }
+ 
+ bool dev_pm_skip_suspend(struct device *dev)
+diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
+index 3235532ae0778..8b26f631ebc15 100644
+--- a/drivers/block/drbd/drbd_req.c
++++ b/drivers/block/drbd/drbd_req.c
+@@ -180,7 +180,8 @@ void start_new_tl_epoch(struct drbd_connection *connection)
+ void complete_master_bio(struct drbd_device *device,
+ 		struct bio_and_error *m)
+ {
+-	m->bio->bi_status = errno_to_blk_status(m->error);
++	if (unlikely(m->error))
++		m->bio->bi_status = errno_to_blk_status(m->error);
+ 	bio_endio(m->bio);
+ 	dec_ap_bio(device);
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index fdb4798cb0065..5c97dcac27103 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -681,33 +681,33 @@ static ssize_t loop_attr_backing_file_show(struct loop_device *lo, char *buf)
+ 
+ static ssize_t loop_attr_offset_show(struct loop_device *lo, char *buf)
+ {
+-	return sprintf(buf, "%llu\n", (unsigned long long)lo->lo_offset);
++	return sysfs_emit(buf, "%llu\n", (unsigned long long)lo->lo_offset);
+ }
+ 
+ static ssize_t loop_attr_sizelimit_show(struct loop_device *lo, char *buf)
+ {
+-	return sprintf(buf, "%llu\n", (unsigned long long)lo->lo_sizelimit);
++	return sysfs_emit(buf, "%llu\n", (unsigned long long)lo->lo_sizelimit);
+ }
+ 
+ static ssize_t loop_attr_autoclear_show(struct loop_device *lo, char *buf)
+ {
+ 	int autoclear = (lo->lo_flags & LO_FLAGS_AUTOCLEAR);
+ 
+-	return sprintf(buf, "%s\n", autoclear ? "1" : "0");
++	return sysfs_emit(buf, "%s\n", autoclear ? "1" : "0");
+ }
+ 
+ static ssize_t loop_attr_partscan_show(struct loop_device *lo, char *buf)
+ {
+ 	int partscan = (lo->lo_flags & LO_FLAGS_PARTSCAN);
+ 
+-	return sprintf(buf, "%s\n", partscan ? "1" : "0");
++	return sysfs_emit(buf, "%s\n", partscan ? "1" : "0");
+ }
+ 
+ static ssize_t loop_attr_dio_show(struct loop_device *lo, char *buf)
+ {
+ 	int dio = (lo->lo_flags & LO_FLAGS_DIRECT_IO);
+ 
+-	return sprintf(buf, "%s\n", dio ? "1" : "0");
++	return sysfs_emit(buf, "%s\n", dio ? "1" : "0");
+ }
+ 
+ LOOP_ATTR_RO(backing_file);
+@@ -1603,6 +1603,7 @@ struct compat_loop_info {
+ 	compat_ulong_t	lo_inode;       /* ioctl r/o */
+ 	compat_dev_t	lo_rdevice;     /* ioctl r/o */
+ 	compat_int_t	lo_offset;
++	compat_int_t	lo_encrypt_type;        /* obsolete, ignored */
+ 	compat_int_t	lo_encrypt_key_size;    /* ioctl w/o */
+ 	compat_int_t	lo_flags;       /* ioctl r/o */
+ 	char		lo_name[LO_NAME_SIZE];
+diff --git a/drivers/block/n64cart.c b/drivers/block/n64cart.c
+index 78282f01f5813..ef1c9e2e7098f 100644
+--- a/drivers/block/n64cart.c
++++ b/drivers/block/n64cart.c
+@@ -88,7 +88,7 @@ static void n64cart_submit_bio(struct bio *bio)
+ {
+ 	struct bio_vec bvec;
+ 	struct bvec_iter iter;
+-	struct device *dev = bio->bi_disk->private_data;
++	struct device *dev = bio->bi_bdev->bd_disk->private_data;
+ 	u32 pos = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+ 
+ 	bio_for_each_segment(bvec, bio, iter) {
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 851a0c9b8fae5..a224e5337ef83 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2426,10 +2426,15 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 
+ 			/* Apply the device specific HCI quirks
+ 			 *
+-			 * WBS for SdP - SdP and Stp have a same hw_varaint but
+-			 * different fw_variant
++			 * WBS for SdP - For the Legacy ROM products, only SdP
++			 * supports the WBS. But the version information is not
++			 * enough to use here because the StP2 and SdP have same
++			 * hw_variant and fw_variant. So, this flag is set by
++			 * the transport driver (btusb) based on the HW info
++			 * (idProduct)
+ 			 */
+-			if (ver.hw_variant == 0x08 && ver.fw_variant == 0x22)
++			if (!btintel_test_flag(hdev,
++					       INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
+ 				set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ 					&hdev->quirks);
+ 
+diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h
+index c9b24e9299e2a..e0060e58573c3 100644
+--- a/drivers/bluetooth/btintel.h
++++ b/drivers/bluetooth/btintel.h
+@@ -152,6 +152,7 @@ enum {
+ 	INTEL_BROKEN_INITIAL_NCMD,
+ 	INTEL_BROKEN_SHUTDOWN_LED,
+ 	INTEL_ROM_LEGACY,
++	INTEL_ROM_LEGACY_NO_WBS_SUPPORT,
+ 
+ 	__INTEL_NUM_FLAGS,
+ };
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 1cbdeca1fdc4a..ff1f5dfbb6dbf 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -981,6 +981,8 @@ static int btmtksdio_probe(struct sdio_func *func,
+ 	hdev->manufacturer = 70;
+ 	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+ 
++	sdio_set_drvdata(func, bdev);
++
+ 	err = hci_register_dev(hdev);
+ 	if (err < 0) {
+ 		dev_err(&func->dev, "Can't register HCI device\n");
+@@ -988,8 +990,6 @@ static int btmtksdio_probe(struct sdio_func *func,
+ 		return err;
+ 	}
+ 
+-	sdio_set_drvdata(func, bdev);
+-
+ 	/* pm_runtime_enable would be done after the firmware is being
+ 	 * downloaded because the core layer probably already enables
+ 	 * runtime PM for this func such as the case host->caps &
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index c1f1237ddc76f..8bb67c56eb7d1 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -61,6 +61,7 @@ static struct usb_driver btusb_driver;
+ #define BTUSB_QCA_WCN6855	0x1000000
+ #define BTUSB_INTEL_BROKEN_SHUTDOWN_LED	0x2000000
+ #define BTUSB_INTEL_BROKEN_INITIAL_NCMD 0x4000000
++#define BTUSB_INTEL_NO_WBS_SUPPORT	0x8000000
+ 
+ static const struct usb_device_id btusb_table[] = {
+ 	/* Generic Bluetooth USB device */
+@@ -384,9 +385,11 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x8087, 0x0033), .driver_info = BTUSB_INTEL_COMBINED },
+ 	{ USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR },
+ 	{ USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL_COMBINED |
++						     BTUSB_INTEL_NO_WBS_SUPPORT |
+ 						     BTUSB_INTEL_BROKEN_INITIAL_NCMD |
+ 						     BTUSB_INTEL_BROKEN_SHUTDOWN_LED },
+ 	{ USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL_COMBINED |
++						     BTUSB_INTEL_NO_WBS_SUPPORT |
+ 						     BTUSB_INTEL_BROKEN_SHUTDOWN_LED },
+ 	{ USB_DEVICE(0x8087, 0x0a2b), .driver_info = BTUSB_INTEL_COMBINED },
+ 	{ USB_DEVICE(0x8087, 0x0aa7), .driver_info = BTUSB_INTEL_COMBINED |
+@@ -3902,6 +3905,9 @@ static int btusb_probe(struct usb_interface *intf,
+ 		hdev->send = btusb_send_frame_intel;
+ 		hdev->cmd_timeout = btusb_intel_cmd_timeout;
+ 
++		if (id->driver_info & BTUSB_INTEL_NO_WBS_SUPPORT)
++			btintel_set_flag(hdev, INTEL_ROM_LEGACY_NO_WBS_SUPPORT);
++
+ 		if (id->driver_info & BTUSB_INTEL_BROKEN_INITIAL_NCMD)
+ 			btintel_set_flag(hdev, INTEL_BROKEN_INITIAL_NCMD);
+ 
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 34286ffe0568f..7ac6908a4dfb4 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -629,9 +629,11 @@ static int h5_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ 		break;
+ 	}
+ 
+-	pm_runtime_get_sync(&hu->serdev->dev);
+-	pm_runtime_mark_last_busy(&hu->serdev->dev);
+-	pm_runtime_put_autosuspend(&hu->serdev->dev);
++	if (hu->serdev) {
++		pm_runtime_get_sync(&hu->serdev->dev);
++		pm_runtime_mark_last_busy(&hu->serdev->dev);
++		pm_runtime_put_autosuspend(&hu->serdev->dev);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 3b00d82d36cf7..4cda890ce6470 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -305,6 +305,8 @@ int hci_uart_register_device(struct hci_uart *hu,
+ 	if (err)
+ 		return err;
+ 
++	percpu_init_rwsem(&hu->proto_lock);
++
+ 	err = p->open(hu);
+ 	if (err)
+ 		goto err_open;
+@@ -327,7 +329,6 @@ int hci_uart_register_device(struct hci_uart *hu,
+ 
+ 	INIT_WORK(&hu->init_ready, hci_uart_init_work);
+ 	INIT_WORK(&hu->write_work, hci_uart_write_work);
+-	percpu_init_rwsem(&hu->proto_lock);
+ 
+ 	/* Only when vendor specific setup callback is provided, consider
+ 	 * the manufacturer information valid. This avoids filling in the
+diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
+index 858d7516410bb..d818586c229d2 100644
+--- a/drivers/bus/mhi/core/debugfs.c
++++ b/drivers/bus/mhi/core/debugfs.c
+@@ -60,16 +60,16 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
+ 		}
+ 
+ 		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
+-			   i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
++			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
+ 			   EV_CTX_INTMODC_SHIFT,
+-			   (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
++			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
+ 			   EV_CTX_INTMODT_SHIFT);
+ 
+-		seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
+-			   er_ctxt->rlen);
++		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
++			   le64_to_cpu(er_ctxt->rlen));
+ 
+-		seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
+-			   er_ctxt->wp);
++		seq_printf(m, " rp: 0x%llx wp: 0x%llx", le64_to_cpu(er_ctxt->rp),
++			   le64_to_cpu(er_ctxt->wp));
+ 
+ 		seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
+ 			   &mhi_event->db_cfg.db_val);
+@@ -106,18 +106,18 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
+ 
+ 		seq_printf(m,
+ 			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
+-			   mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
++			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
+ 			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
+-			   (chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
+-			   CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
++			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
++			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
+ 			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
+ 
+-		seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
+-			   chan_ctxt->erindex);
++		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
++			   le32_to_cpu(chan_ctxt->erindex));
+ 
+ 		seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
+-			   chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
+-			   chan_ctxt->wp);
++			   le64_to_cpu(chan_ctxt->rbase), le64_to_cpu(chan_ctxt->rlen),
++			   le64_to_cpu(chan_ctxt->rp), le64_to_cpu(chan_ctxt->wp));
+ 
+ 		seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
+ 			   ring->rp, ring->wp,
+diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
+index f1ec344175928..4183945fc2c41 100644
+--- a/drivers/bus/mhi/core/init.c
++++ b/drivers/bus/mhi/core/init.c
+@@ -290,17 +290,17 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ 		if (mhi_chan->offload_ch)
+ 			continue;
+ 
+-		tmp = chan_ctxt->chcfg;
++		tmp = le32_to_cpu(chan_ctxt->chcfg);
+ 		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ 		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+ 		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
+ 		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
+ 		tmp &= ~CHAN_CTX_POLLCFG_MASK;
+ 		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
+-		chan_ctxt->chcfg = tmp;
++		chan_ctxt->chcfg = cpu_to_le32(tmp);
+ 
+-		chan_ctxt->chtype = mhi_chan->type;
+-		chan_ctxt->erindex = mhi_chan->er_index;
++		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
++		chan_ctxt->erindex = cpu_to_le32(mhi_chan->er_index);
+ 
+ 		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+ 		mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
+@@ -325,14 +325,14 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ 		if (mhi_event->offload_ev)
+ 			continue;
+ 
+-		tmp = er_ctxt->intmod;
++		tmp = le32_to_cpu(er_ctxt->intmod);
+ 		tmp &= ~EV_CTX_INTMODC_MASK;
+ 		tmp &= ~EV_CTX_INTMODT_MASK;
+ 		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
+-		er_ctxt->intmod = tmp;
++		er_ctxt->intmod = cpu_to_le32(tmp);
+ 
+-		er_ctxt->ertype = MHI_ER_TYPE_VALID;
+-		er_ctxt->msivec = mhi_event->irq;
++		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
++		er_ctxt->msivec = cpu_to_le32(mhi_event->irq);
+ 		mhi_event->db_cfg.db_mode = true;
+ 
+ 		ring->el_size = sizeof(struct mhi_tre);
+@@ -346,9 +346,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ 		 * ring is empty
+ 		 */
+ 		ring->rp = ring->wp = ring->base;
+-		er_ctxt->rbase = ring->iommu_base;
++		er_ctxt->rbase = cpu_to_le64(ring->iommu_base);
+ 		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
+-		er_ctxt->rlen = ring->len;
++		er_ctxt->rlen = cpu_to_le64(ring->len);
+ 		ring->ctxt_wp = &er_ctxt->wp;
+ 	}
+ 
+@@ -375,9 +375,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ 			goto error_alloc_cmd;
+ 
+ 		ring->rp = ring->wp = ring->base;
+-		cmd_ctxt->rbase = ring->iommu_base;
++		cmd_ctxt->rbase = cpu_to_le64(ring->iommu_base);
+ 		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
+-		cmd_ctxt->rlen = ring->len;
++		cmd_ctxt->rlen = cpu_to_le64(ring->len);
+ 		ring->ctxt_wp = &cmd_ctxt->wp;
+ 	}
+ 
+@@ -578,10 +578,10 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ 	chan_ctxt->rp = 0;
+ 	chan_ctxt->wp = 0;
+ 
+-	tmp = chan_ctxt->chcfg;
++	tmp = le32_to_cpu(chan_ctxt->chcfg);
+ 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ 	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+-	chan_ctxt->chcfg = tmp;
++	chan_ctxt->chcfg = cpu_to_le32(tmp);
+ 
+ 	/* Update to all cores */
+ 	smp_wmb();
+@@ -615,14 +615,14 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ 		return -ENOMEM;
+ 	}
+ 
+-	tmp = chan_ctxt->chcfg;
++	tmp = le32_to_cpu(chan_ctxt->chcfg);
+ 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ 	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
+-	chan_ctxt->chcfg = tmp;
++	chan_ctxt->chcfg = cpu_to_le32(tmp);
+ 
+-	chan_ctxt->rbase = tre_ring->iommu_base;
++	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
+ 	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
+-	chan_ctxt->rlen = tre_ring->len;
++	chan_ctxt->rlen = cpu_to_le64(tre_ring->len);
+ 	tre_ring->ctxt_wp = &chan_ctxt->wp;
+ 
+ 	tre_ring->rp = tre_ring->wp = tre_ring->base;
+diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
+index 3a732afaf73ed..c02c4d48b744c 100644
+--- a/drivers/bus/mhi/core/internal.h
++++ b/drivers/bus/mhi/core/internal.h
+@@ -209,14 +209,14 @@ extern struct bus_type mhi_bus_type;
+ #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
+ #define EV_CTX_INTMODT_SHIFT 16
+ struct mhi_event_ctxt {
+-	__u32 intmod;
+-	__u32 ertype;
+-	__u32 msivec;
+-
+-	__u64 rbase __packed __aligned(4);
+-	__u64 rlen __packed __aligned(4);
+-	__u64 rp __packed __aligned(4);
+-	__u64 wp __packed __aligned(4);
++	__le32 intmod;
++	__le32 ertype;
++	__le32 msivec;
++
++	__le64 rbase __packed __aligned(4);
++	__le64 rlen __packed __aligned(4);
++	__le64 rp __packed __aligned(4);
++	__le64 wp __packed __aligned(4);
+ };
+ 
+ #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
+@@ -227,25 +227,25 @@ struct mhi_event_ctxt {
+ #define CHAN_CTX_POLLCFG_SHIFT 10
+ #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
+ struct mhi_chan_ctxt {
+-	__u32 chcfg;
+-	__u32 chtype;
+-	__u32 erindex;
+-
+-	__u64 rbase __packed __aligned(4);
+-	__u64 rlen __packed __aligned(4);
+-	__u64 rp __packed __aligned(4);
+-	__u64 wp __packed __aligned(4);
++	__le32 chcfg;
++	__le32 chtype;
++	__le32 erindex;
++
++	__le64 rbase __packed __aligned(4);
++	__le64 rlen __packed __aligned(4);
++	__le64 rp __packed __aligned(4);
++	__le64 wp __packed __aligned(4);
+ };
+ 
+ struct mhi_cmd_ctxt {
+-	__u32 reserved0;
+-	__u32 reserved1;
+-	__u32 reserved2;
+-
+-	__u64 rbase __packed __aligned(4);
+-	__u64 rlen __packed __aligned(4);
+-	__u64 rp __packed __aligned(4);
+-	__u64 wp __packed __aligned(4);
++	__le32 reserved0;
++	__le32 reserved1;
++	__le32 reserved2;
++
++	__le64 rbase __packed __aligned(4);
++	__le64 rlen __packed __aligned(4);
++	__le64 rp __packed __aligned(4);
++	__le64 wp __packed __aligned(4);
+ };
+ 
+ struct mhi_ctxt {
+@@ -258,8 +258,8 @@ struct mhi_ctxt {
+ };
+ 
+ struct mhi_tre {
+-	u64 ptr;
+-	u32 dword[2];
++	__le64 ptr;
++	__le32 dword[2];
+ };
+ 
+ struct bhi_vec_entry {
+@@ -277,57 +277,58 @@ enum mhi_cmd_type {
+ /* No operation command */
+ #define MHI_TRE_CMD_NOOP_PTR (0)
+ #define MHI_TRE_CMD_NOOP_DWORD0 (0)
+-#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
++#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
+ 
+ /* Channel reset command */
+ #define MHI_TRE_CMD_RESET_PTR (0)
+ #define MHI_TRE_CMD_RESET_DWORD0 (0)
+-#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
+-					(MHI_CMD_RESET_CHAN << 16))
++#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
++					(MHI_CMD_RESET_CHAN << 16)))
+ 
+ /* Channel stop command */
+ #define MHI_TRE_CMD_STOP_PTR (0)
+ #define MHI_TRE_CMD_STOP_DWORD0 (0)
+-#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
+-				       (MHI_CMD_STOP_CHAN << 16))
++#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
++				       (MHI_CMD_STOP_CHAN << 16)))
+ 
+ /* Channel start command */
+ #define MHI_TRE_CMD_START_PTR (0)
+ #define MHI_TRE_CMD_START_DWORD0 (0)
+-#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
+-					(MHI_CMD_START_CHAN << 16))
++#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
++					(MHI_CMD_START_CHAN << 16)))
+ 
+-#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
++#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
++#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
++#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+ 
+ /* Event descriptor macros */
+-#define MHI_TRE_EV_PTR(ptr) (ptr)
+-#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
+-#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
+-#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
+-#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
+-#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+-#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
+-#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
+-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
+-#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
+-#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
++#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
++#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
++#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
++#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
++#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
++#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
++#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
++#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
++#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
++#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
++#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
+ 
+ /* Transfer descriptor macros */
+-#define MHI_TRE_DATA_PTR(ptr) (ptr)
+-#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
+-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+-	| (ieot << 9) | (ieob << 8) | chain)
++#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
++#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
++#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
++	| (ieot << 9) | (ieob << 8) | chain))
+ 
+ /* RSC transfer descriptor macros */
+-#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
+-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
+-#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
++#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
++#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
++#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
+ 
+ enum mhi_pkt_type {
+ 	MHI_PKT_TYPE_INVALID = 0x0,
+@@ -499,7 +500,7 @@ struct state_transition {
+ struct mhi_ring {
+ 	dma_addr_t dma_handle;
+ 	dma_addr_t iommu_base;
+-	u64 *ctxt_wp; /* point to ctxt wp */
++	__le64 *ctxt_wp; /* point to ctxt wp */
+ 	void *pre_aligned;
+ 	void *base;
+ 	void *rp;
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+index b15c5bc37dd4f..9a94b8d66f575 100644
+--- a/drivers/bus/mhi/core/main.c
++++ b/drivers/bus/mhi/core/main.c
+@@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
+ 	struct mhi_ring *ring = &mhi_event->ring;
+ 
+ 	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
+-				     ring->db_addr, *ring->ctxt_wp);
++				     ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
+ }
+ 
+ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+@@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+ 	struct mhi_ring *ring = &mhi_cmd->ring;
+ 
+ 	db = ring->iommu_base + (ring->wp - ring->base);
+-	*ring->ctxt_wp = db;
++	*ring->ctxt_wp = cpu_to_le64(db);
+ 	mhi_write_db(mhi_cntrl, ring->db_addr, db);
+ }
+ 
+@@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+ 	 * before letting h/w know there is new element to fetch.
+ 	 */
+ 	dma_wmb();
+-	*ring->ctxt_wp = db;
++	*ring->ctxt_wp = cpu_to_le64(db);
+ 
+ 	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
+ 				    ring->db_addr, db);
+@@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
+ 	struct mhi_event_ctxt *er_ctxt =
+ 		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ 	struct mhi_ring *ev_ring = &mhi_event->ring;
+-	dma_addr_t ptr = er_ctxt->rp;
++	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
+ 	void *dev_rp;
+ 
+ 	if (!is_valid_ring_ptr(ev_ring, ptr)) {
+@@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
+ 
+ 	/* Update the WP */
+ 	ring->wp += ring->el_size;
+-	ctxt_wp = *ring->ctxt_wp + ring->el_size;
++	ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
+ 
+ 	if (ring->wp >= (ring->base + ring->len)) {
+ 		ring->wp = ring->base;
+ 		ctxt_wp = ring->iommu_base;
+ 	}
+ 
+-	*ring->ctxt_wp = ctxt_wp;
++	*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
+ 
+ 	/* Update the RP */
+ 	ring->rp += ring->el_size;
+@@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ 	u32 chan;
+ 	int count = 0;
+-	dma_addr_t ptr = er_ctxt->rp;
++	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
+ 
+ 	/*
+ 	 * This is a quick check to avoid unnecessary event processing
+@@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ 		local_rp = ev_ring->rp;
+ 
+-		ptr = er_ctxt->rp;
++		ptr = le64_to_cpu(er_ctxt->rp);
+ 		if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ 			dev_err(&mhi_cntrl->mhi_dev->dev,
+ 				"Event ring rp points outside of the event ring\n");
+@@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ 	int count = 0;
+ 	u32 chan;
+ 	struct mhi_chan *mhi_chan;
+-	dma_addr_t ptr = er_ctxt->rp;
++	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
+ 
+ 	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ 		return -EIO;
+@@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ 		local_rp = ev_ring->rp;
+ 
+-		ptr = er_ctxt->rp;
++		ptr = le64_to_cpu(er_ctxt->rp);
+ 		if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ 			dev_err(&mhi_cntrl->mhi_dev->dev,
+ 				"Event ring rp points outside of the event ring\n");
+@@ -1529,7 +1529,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
+ 	/* mark all stale events related to channel as STALE event */
+ 	spin_lock_irqsave(&mhi_event->lock, flags);
+ 
+-	ptr = er_ctxt->rp;
++	ptr = le64_to_cpu(er_ctxt->rp);
+ 	if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ 		dev_err(&mhi_cntrl->mhi_dev->dev,
+ 			"Event ring rp points outside of the event ring\n");
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index bb9a2043f3a20..1020268a075a5 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
+ 			continue;
+ 
+ 		ring->wp = ring->base + ring->len - ring->el_size;
+-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
++		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
+ 		/* Update all cores */
+ 		smp_wmb();
+ 
+@@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
+ 			continue;
+ 
+ 		ring->wp = ring->base + ring->len - ring->el_size;
+-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
++		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
+ 		/* Update to all cores */
+ 		smp_wmb();
+ 
+diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
+index d340d6864e13a..d243526b23d86 100644
+--- a/drivers/bus/mhi/pci_generic.c
++++ b/drivers/bus/mhi/pci_generic.c
+@@ -327,6 +327,7 @@ static const struct mhi_pci_dev_info mhi_quectel_em1xx_info = {
+ 	.config = &modem_quectel_em1xx_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ 	.dma_data_width = 32,
++	.mru_default = 32768,
+ 	.sideband_wake = true,
+ };
+ 
+diff --git a/drivers/bus/mips_cdmm.c b/drivers/bus/mips_cdmm.c
+index 626dedd110cbc..fca0d0669aa97 100644
+--- a/drivers/bus/mips_cdmm.c
++++ b/drivers/bus/mips_cdmm.c
+@@ -351,6 +351,7 @@ phys_addr_t __weak mips_cdmm_phys_base(void)
+ 	np = of_find_compatible_node(NULL, NULL, "mti,mips-cdmm");
+ 	if (np) {
+ 		err = of_address_to_resource(np, 0, &res);
++		of_node_put(np);
+ 		if (!err)
+ 			return res.start;
+ 	}
+diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
+index 814b3d0ca7b77..372859b43f897 100644
+--- a/drivers/char/hw_random/Kconfig
++++ b/drivers/char/hw_random/Kconfig
+@@ -414,7 +414,7 @@ config HW_RANDOM_MESON
+ 
+ config HW_RANDOM_CAVIUM
+ 	tristate "Cavium ThunderX Random Number Generator support"
+-	depends on HW_RANDOM && PCI && (ARM64 || (COMPILE_TEST && 64BIT))
++	depends on HW_RANDOM && PCI && ARCH_THUNDER
+ 	default HW_RANDOM
+ 	help
+ 	  This driver provides kernel-side support for the Random Number
+diff --git a/drivers/char/hw_random/atmel-rng.c b/drivers/char/hw_random/atmel-rng.c
+index ecb71c4317a50..8cf0ef501341e 100644
+--- a/drivers/char/hw_random/atmel-rng.c
++++ b/drivers/char/hw_random/atmel-rng.c
+@@ -114,6 +114,7 @@ static int atmel_trng_probe(struct platform_device *pdev)
+ 
+ err_register:
+ 	clk_disable_unprepare(trng->clk);
++	atmel_trng_disable(trng);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/char/hw_random/cavium-rng-vf.c b/drivers/char/hw_random/cavium-rng-vf.c
+index 3de4a6a443ef9..6f66919652bf5 100644
+--- a/drivers/char/hw_random/cavium-rng-vf.c
++++ b/drivers/char/hw_random/cavium-rng-vf.c
+@@ -1,10 +1,7 @@
++// SPDX-License-Identifier: GPL-2.0
+ /*
+- * Hardware Random Number Generator support for Cavium, Inc.
+- * Thunder processor family.
+- *
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License.  See the file "COPYING" in the main directory of this archive
+- * for more details.
++ * Hardware Random Number Generator support.
++ * Cavium Thunder, Marvell OcteonTx/Tx2 processor families.
+  *
+  * Copyright (C) 2016 Cavium, Inc.
+  */
+@@ -15,16 +12,146 @@
+ #include <linux/pci.h>
+ #include <linux/pci_ids.h>
+ 
++#include <asm/arch_timer.h>
++
++/* PCI device IDs */
++#define	PCI_DEVID_CAVIUM_RNG_PF		0xA018
++#define	PCI_DEVID_CAVIUM_RNG_VF		0xA033
++
++#define HEALTH_STATUS_REG		0x38
++
++/* RST device info */
++#define PCI_DEVICE_ID_RST_OTX2		0xA085
++#define RST_BOOT_REG			0x1600ULL
++#define CLOCK_BASE_RATE			50000000ULL
++#define MSEC_TO_NSEC(x)			(x * 1000000)
++
+ struct cavium_rng {
+ 	struct hwrng ops;
+ 	void __iomem *result;
++	void __iomem *pf_regbase;
++	struct pci_dev *pdev;
++	u64  clock_rate;
++	u64  prev_error;
++	u64  prev_time;
+ };
+ 
++static inline bool is_octeontx(struct pci_dev *pdev)
++{
++	if (midr_is_cpu_model_range(read_cpuid_id(), MIDR_THUNDERX_83XX,
++				    MIDR_CPU_VAR_REV(0, 0),
++				    MIDR_CPU_VAR_REV(3, 0)) ||
++	    midr_is_cpu_model_range(read_cpuid_id(), MIDR_THUNDERX_81XX,
++				    MIDR_CPU_VAR_REV(0, 0),
++				    MIDR_CPU_VAR_REV(3, 0)) ||
++	    midr_is_cpu_model_range(read_cpuid_id(), MIDR_THUNDERX,
++				    MIDR_CPU_VAR_REV(0, 0),
++				    MIDR_CPU_VAR_REV(3, 0)))
++		return true;
++
++	return false;
++}
++
++static u64 rng_get_coprocessor_clkrate(void)
++{
++	u64 ret = CLOCK_BASE_RATE * 16; /* Assume 800Mhz as default */
++	struct pci_dev *pdev;
++	void __iomem *base;
++
++	pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM,
++			      PCI_DEVICE_ID_RST_OTX2, NULL);
++	if (!pdev)
++		goto error;
++
++	base = pci_ioremap_bar(pdev, 0);
++	if (!base)
++		goto error_put_pdev;
++
++	/* RST: PNR_MUL * 50Mhz gives clockrate */
++	ret = CLOCK_BASE_RATE * ((readq(base + RST_BOOT_REG) >> 33) & 0x3F);
++
++	iounmap(base);
++
++error_put_pdev:
++	pci_dev_put(pdev);
++
++error:
++	return ret;
++}
++
++static int check_rng_health(struct cavium_rng *rng)
++{
++	u64 cur_err, cur_time;
++	u64 status, cycles;
++	u64 time_elapsed;
++
++
++	/* Skip checking health for OcteonTx */
++	if (!rng->pf_regbase)
++		return 0;
++
++	status = readq(rng->pf_regbase + HEALTH_STATUS_REG);
++	if (status & BIT_ULL(0)) {
++		dev_err(&rng->pdev->dev, "HWRNG: Startup health test failed\n");
++		return -EIO;
++	}
++
++	cycles = status >> 1;
++	if (!cycles)
++		return 0;
++
++	cur_time = arch_timer_read_counter();
++
++	/* RNM_HEALTH_STATUS[CYCLES_SINCE_HEALTH_FAILURE]
++	 * Number of coprocessor cycles times 2 since the last failure.
++	 * This field doesn't get cleared/updated until another failure.
++	 */
++	cycles = cycles / 2;
++	cur_err = (cycles * 1000000000) / rng->clock_rate; /* In nanosec */
++
++	/* Ignore errors that happened a long time ago, these
++	 * are most likely false positive errors.
++	 */
++	if (cur_err > MSEC_TO_NSEC(10)) {
++		rng->prev_error = 0;
++		rng->prev_time = 0;
++		return 0;
++	}
++
++	if (rng->prev_error) {
++		/* Calculate time elapsed since last error
++		 * '1' tick of CNTVCT is 10ns, since it runs at 100Mhz.
++		 */
++		time_elapsed = (cur_time - rng->prev_time) * 10;
++		time_elapsed += rng->prev_error;
++
++		/* Check if current error is a new one or the old one itself.
++		 * If error is a new one then consider there is a persistent
++		 * issue with entropy, declare hardware failure.
++		 */
++		if (cur_err < time_elapsed) {
++			dev_err(&rng->pdev->dev, "HWRNG failure detected\n");
++			rng->prev_error = cur_err;
++			rng->prev_time = cur_time;
++			return -EIO;
++		}
++	}
++
++	rng->prev_error = cur_err;
++	rng->prev_time = cur_time;
++	return 0;
++}
++
+ /* Read data from the RNG unit */
+ static int cavium_rng_read(struct hwrng *rng, void *dat, size_t max, bool wait)
+ {
+ 	struct cavium_rng *p = container_of(rng, struct cavium_rng, ops);
+ 	unsigned int size = max;
++	int err = 0;
++
++	err = check_rng_health(p);
++	if (err)
++		return err;
+ 
+ 	while (size >= 8) {
+ 		*((u64 *)dat) = readq(p->result);
+@@ -39,6 +166,39 @@ static int cavium_rng_read(struct hwrng *rng, void *dat, size_t max, bool wait)
+ 	return max;
+ }
+ 
++static int cavium_map_pf_regs(struct cavium_rng *rng)
++{
++	struct pci_dev *pdev;
++
++	/* Health status is not supported on 83xx, skip mapping PF CSRs */
++	if (is_octeontx(rng->pdev)) {
++		rng->pf_regbase = NULL;
++		return 0;
++	}
++
++	pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM,
++			      PCI_DEVID_CAVIUM_RNG_PF, NULL);
++	if (!pdev) {
++		dev_err(&pdev->dev, "Cannot find RNG PF device\n");
++		return -EIO;
++	}
++
++	rng->pf_regbase = ioremap(pci_resource_start(pdev, 0),
++				  pci_resource_len(pdev, 0));
++	if (!rng->pf_regbase) {
++		dev_err(&pdev->dev, "Failed to map PF CSR region\n");
++		pci_dev_put(pdev);
++		return -ENOMEM;
++	}
++
++	pci_dev_put(pdev);
++
++	/* Get co-processor clock rate */
++	rng->clock_rate = rng_get_coprocessor_clkrate();
++
++	return 0;
++}
++
+ /* Map Cavium RNG to an HWRNG object */
+ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 			 const struct	pci_device_id	*id)
+@@ -50,6 +210,8 @@ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 	if (!rng)
+ 		return -ENOMEM;
+ 
++	rng->pdev = pdev;
++
+ 	/* Map the RNG result */
+ 	rng->result = pcim_iomap(pdev, 0, 0);
+ 	if (!rng->result) {
+@@ -67,6 +229,11 @@ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 
+ 	pci_set_drvdata(pdev, rng);
+ 
++	/* Health status is available only at PF, hence map PF registers. */
++	ret = cavium_map_pf_regs(rng);
++	if (ret)
++		return ret;
++
+ 	ret = devm_hwrng_register(&pdev->dev, &rng->ops);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Error registering device as HWRNG.\n");
+@@ -76,10 +243,18 @@ static int cavium_rng_probe_vf(struct	pci_dev		*pdev,
+ 	return 0;
+ }
+ 
++/* Remove the VF */
++static void cavium_rng_remove_vf(struct pci_dev *pdev)
++{
++	struct cavium_rng *rng;
++
++	rng = pci_get_drvdata(pdev);
++	iounmap(rng->pf_regbase);
++}
+ 
+ static const struct pci_device_id cavium_rng_vf_id_table[] = {
+-	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, 0xa033), 0, 0, 0},
+-	{0,},
++	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CAVIUM_RNG_VF) },
++	{ 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, cavium_rng_vf_id_table);
+ 
+@@ -87,8 +262,9 @@ static struct pci_driver cavium_rng_vf_driver = {
+ 	.name		= "cavium_rng_vf",
+ 	.id_table	= cavium_rng_vf_id_table,
+ 	.probe		= cavium_rng_probe_vf,
++	.remove		= cavium_rng_remove_vf,
+ };
+ module_pci_driver(cavium_rng_vf_driver);
+ 
+ MODULE_AUTHOR("Omer Khaliq <okhaliq@caviumnetworks.com>");
+-MODULE_LICENSE("GPL");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/char/hw_random/cavium-rng.c b/drivers/char/hw_random/cavium-rng.c
+index 63d6e68c24d2f..b96579222408b 100644
+--- a/drivers/char/hw_random/cavium-rng.c
++++ b/drivers/char/hw_random/cavium-rng.c
+@@ -1,10 +1,7 @@
++// SPDX-License-Identifier: GPL-2.0
+ /*
+- * Hardware Random Number Generator support for Cavium Inc.
+- * Thunder processor family.
+- *
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License.  See the file "COPYING" in the main directory of this archive
+- * for more details.
++ * Hardware Random Number Generator support.
++ * Cavium Thunder, Marvell OcteonTx/Tx2 processor families.
+  *
+  * Copyright (C) 2016 Cavium, Inc.
+  */
+@@ -91,4 +88,4 @@ static struct pci_driver cavium_rng_pf_driver = {
+ 
+ module_pci_driver(cavium_rng_pf_driver);
+ MODULE_AUTHOR("Omer Khaliq <okhaliq@caviumnetworks.com>");
+-MODULE_LICENSE("GPL");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/char/hw_random/nomadik-rng.c b/drivers/char/hw_random/nomadik-rng.c
+index 67947a19aa225..e8f9621e79541 100644
+--- a/drivers/char/hw_random/nomadik-rng.c
++++ b/drivers/char/hw_random/nomadik-rng.c
+@@ -65,14 +65,14 @@ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id)
+ out_release:
+ 	amba_release_regions(dev);
+ out_clk:
+-	clk_disable(rng_clk);
++	clk_disable_unprepare(rng_clk);
+ 	return ret;
+ }
+ 
+ static void nmk_rng_remove(struct amba_device *dev)
+ {
+ 	amba_release_regions(dev);
+-	clk_disable(rng_clk);
++	clk_disable_unprepare(rng_clk);
+ }
+ 
+ static const struct amba_id nmk_rng_ids[] = {
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index df37e7b6a10a5..65d800ecc9964 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -274,14 +274,6 @@ static void tpm_dev_release(struct device *dev)
+ 	kfree(chip);
+ }
+ 
+-static void tpm_devs_release(struct device *dev)
+-{
+-	struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
+-
+-	/* release the master device reference */
+-	put_device(&chip->dev);
+-}
+-
+ /**
+  * tpm_class_shutdown() - prepare the TPM device for loss of power.
+  * @dev: device to which the chip is associated.
+@@ -344,7 +336,6 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 	chip->dev_num = rc;
+ 
+ 	device_initialize(&chip->dev);
+-	device_initialize(&chip->devs);
+ 
+ 	chip->dev.class = tpm_class;
+ 	chip->dev.class->shutdown_pre = tpm_class_shutdown;
+@@ -352,29 +343,12 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 	chip->dev.parent = pdev;
+ 	chip->dev.groups = chip->groups;
+ 
+-	chip->devs.parent = pdev;
+-	chip->devs.class = tpmrm_class;
+-	chip->devs.release = tpm_devs_release;
+-	/* get extra reference on main device to hold on
+-	 * behalf of devs.  This holds the chip structure
+-	 * while cdevs is in use.  The corresponding put
+-	 * is in the tpm_devs_release (TPM2 only)
+-	 */
+-	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+-		get_device(&chip->dev);
+-
+ 	if (chip->dev_num == 0)
+ 		chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR);
+ 	else
+ 		chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num);
+ 
+-	chip->devs.devt =
+-		MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
+-
+ 	rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num);
+-	if (rc)
+-		goto out;
+-	rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
+ 	if (rc)
+ 		goto out;
+ 
+@@ -382,9 +356,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 		chip->flags |= TPM_CHIP_FLAG_VIRTUAL;
+ 
+ 	cdev_init(&chip->cdev, &tpm_fops);
+-	cdev_init(&chip->cdevs, &tpmrm_fops);
+ 	chip->cdev.owner = THIS_MODULE;
+-	chip->cdevs.owner = THIS_MODULE;
+ 
+ 	rc = tpm2_init_space(&chip->work_space, TPM2_SPACE_BUFFER_SIZE);
+ 	if (rc) {
+@@ -396,7 +368,6 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ 	return chip;
+ 
+ out:
+-	put_device(&chip->devs);
+ 	put_device(&chip->dev);
+ 	return ERR_PTR(rc);
+ }
+@@ -445,14 +416,9 @@ static int tpm_add_char_device(struct tpm_chip *chip)
+ 	}
+ 
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+-		rc = cdev_device_add(&chip->cdevs, &chip->devs);
+-		if (rc) {
+-			dev_err(&chip->devs,
+-				"unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
+-				dev_name(&chip->devs), MAJOR(chip->devs.devt),
+-				MINOR(chip->devs.devt), rc);
+-			return rc;
+-		}
++		rc = tpm_devs_add(chip);
++		if (rc)
++			goto err_del_cdev;
+ 	}
+ 
+ 	/* Make the chip available. */
+@@ -460,6 +426,10 @@ static int tpm_add_char_device(struct tpm_chip *chip)
+ 	idr_replace(&dev_nums_idr, chip, chip->dev_num);
+ 	mutex_unlock(&idr_lock);
+ 
++	return 0;
++
++err_del_cdev:
++	cdev_device_del(&chip->cdev, &chip->dev);
+ 	return rc;
+ }
+ 
+@@ -649,7 +619,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
+ 		hwrng_unregister(&chip->hwrng);
+ 	tpm_bios_log_teardown(chip);
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+-		cdev_device_del(&chip->cdevs, &chip->devs);
++		tpm_devs_remove(chip);
+ 	tpm_del_char_device(chip);
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_unregister);
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 283f78211c3a7..2163c6ee0d364 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -234,6 +234,8 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+ 		       size_t cmdsiz);
+ int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf,
+ 		      size_t *bufsiz);
++int tpm_devs_add(struct tpm_chip *chip);
++void tpm_devs_remove(struct tpm_chip *chip);
+ 
+ void tpm_bios_log_setup(struct tpm_chip *chip);
+ void tpm_bios_log_teardown(struct tpm_chip *chip);
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index d2225020e4d2c..ffb35f0154c16 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -574,3 +574,68 @@ out:
+ 	dev_err(&chip->dev, "%s: error %d\n", __func__, rc);
+ 	return rc;
+ }
++
++/*
++ * Put the reference to the main device.
++ */
++static void tpm_devs_release(struct device *dev)
++{
++	struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
++
++	/* release the master device reference */
++	put_device(&chip->dev);
++}
++
++/*
++ * Remove the device file for exposed TPM spaces and release the device
++ * reference. This may also release the reference to the master device.
++ */
++void tpm_devs_remove(struct tpm_chip *chip)
++{
++	cdev_device_del(&chip->cdevs, &chip->devs);
++	put_device(&chip->devs);
++}
++
++/*
++ * Add a device file to expose TPM spaces. Also take a reference to the
++ * main device.
++ */
++int tpm_devs_add(struct tpm_chip *chip)
++{
++	int rc;
++
++	device_initialize(&chip->devs);
++	chip->devs.parent = chip->dev.parent;
++	chip->devs.class = tpmrm_class;
++
++	/*
++	 * Get extra reference on main device to hold on behalf of devs.
++	 * This holds the chip structure while cdevs is in use. The
++	 * corresponding put is in the tpm_devs_release.
++	 */
++	get_device(&chip->dev);
++	chip->devs.release = tpm_devs_release;
++	chip->devs.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
++	cdev_init(&chip->cdevs, &tpmrm_fops);
++	chip->cdevs.owner = THIS_MODULE;
++
++	rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
++	if (rc)
++		goto err_put_devs;
++
++	rc = cdev_device_add(&chip->cdevs, &chip->devs);
++	if (rc) {
++		dev_err(&chip->devs,
++			"unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
++			dev_name(&chip->devs), MAJOR(chip->devs.devt),
++			MINOR(chip->devs.devt), rc);
++		goto err_put_devs;
++	}
++
++	return 0;
++
++err_put_devs:
++	put_device(&chip->devs);
++
++	return rc;
++}
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 660c5c388c291..f864b17be7e36 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -1957,6 +1957,13 @@ static void virtcons_remove(struct virtio_device *vdev)
+ 	list_del(&portdev->list);
+ 	spin_unlock_irq(&pdrvdata_lock);
+ 
++	/* Device is going away, exit any polling for buffers */
++	virtio_break_device(vdev);
++	if (use_multiport(portdev))
++		flush_work(&portdev->control_work);
++	else
++		flush_work(&portdev->config_work);
++
+ 	/* Disable interrupts for vqs */
+ 	vdev->config->reset(vdev);
+ 	/* Finish up work that's lined up */
+diff --git a/drivers/clk/actions/owl-s700.c b/drivers/clk/actions/owl-s700.c
+index a2f34d13fb543..6ea7da1d6d755 100644
+--- a/drivers/clk/actions/owl-s700.c
++++ b/drivers/clk/actions/owl-s700.c
+@@ -162,6 +162,7 @@ static struct clk_div_table hdmia_div_table[] = {
+ 
+ static struct clk_div_table rmii_div_table[] = {
+ 	{0, 4},   {1, 10},
++	{0, 0}
+ };
+ 
+ /* divider clocks */
+diff --git a/drivers/clk/actions/owl-s900.c b/drivers/clk/actions/owl-s900.c
+index 790890978424a..5144ada2c7e1a 100644
+--- a/drivers/clk/actions/owl-s900.c
++++ b/drivers/clk/actions/owl-s900.c
+@@ -140,7 +140,7 @@ static struct clk_div_table rmii_ref_div_table[] = {
+ 
+ static struct clk_div_table usb3_mac_div_table[] = {
+ 	{ 1, 2 }, { 2, 3 }, { 3, 4 },
+-	{ 0, 8 },
++	{ 0, 0 }
+ };
+ 
+ static struct clk_div_table i2s_div_table[] = {
+diff --git a/drivers/clk/at91/sama7g5.c b/drivers/clk/at91/sama7g5.c
+index 369dfafabbca2..060e908086a13 100644
+--- a/drivers/clk/at91/sama7g5.c
++++ b/drivers/clk/at91/sama7g5.c
+@@ -696,16 +696,16 @@ static const struct {
+ 	{ .n  = "pdmc0_gclk",
+ 	  .id = 68,
+ 	  .r = { .max = 50000000  },
+-	  .pp = { "syspll_divpmcck", "baudpll_divpmcck", },
+-	  .pp_mux_table = { 5, 8, },
++	  .pp = { "syspll_divpmcck", "audiopll_divpmcck", },
++	  .pp_mux_table = { 5, 9, },
+ 	  .pp_count = 2,
+ 	  .pp_chg_id = INT_MIN, },
+ 
+ 	{ .n  = "pdmc1_gclk",
+ 	  .id = 69,
+ 	  .r = { .max = 50000000, },
+-	  .pp = { "syspll_divpmcck", "baudpll_divpmcck", },
+-	  .pp_mux_table = { 5, 8, },
++	  .pp = { "syspll_divpmcck", "audiopll_divpmcck", },
++	  .pp_mux_table = { 5, 9, },
+ 	  .pp_count = 2,
+ 	  .pp_chg_id = INT_MIN, },
+ 
+diff --git a/drivers/clk/clk-clps711x.c b/drivers/clk/clk-clps711x.c
+index a2c6486ef1708..f8417ee2961aa 100644
+--- a/drivers/clk/clk-clps711x.c
++++ b/drivers/clk/clk-clps711x.c
+@@ -28,11 +28,13 @@ static const struct clk_div_table spi_div_table[] = {
+ 	{ .val = 1, .div = 8, },
+ 	{ .val = 2, .div = 2, },
+ 	{ .val = 3, .div = 1, },
++	{ /* sentinel */ }
+ };
+ 
+ static const struct clk_div_table timer_div_table[] = {
+ 	{ .val = 0, .div = 256, },
+ 	{ .val = 1, .div = 1, },
++	{ /* sentinel */ }
+ };
+ 
+ struct clps711x_clk {
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 21b17d6ced6d5..2c5db2df9d423 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3413,6 +3413,19 @@ static void clk_core_reparent_orphans_nolock(void)
+ 			__clk_set_parent_after(orphan, parent, NULL);
+ 			__clk_recalc_accuracies(orphan);
+ 			__clk_recalc_rates(orphan, 0);
++
++			/*
++			 * __clk_init_parent() will set the initial req_rate to
++			 * 0 if the clock doesn't have clk_ops::recalc_rate and
++			 * is an orphan when it's registered.
++			 *
++			 * 'req_rate' is used by clk_set_rate_range() and
++			 * clk_put() to trigger a clk_set_rate() call whenever
++			 * the boundaries are modified. Let's make sure
++			 * 'req_rate' is set to something non-zero so that
++			 * clk_set_rate_range() doesn't drop the frequency.
++			 */
++			orphan->req_rate = orphan->rate;
+ 		}
+ 	}
+ }
+@@ -3733,8 +3746,9 @@ struct clk *clk_hw_create_clk(struct device *dev, struct clk_hw *hw,
+ struct clk *clk_hw_get_clk(struct clk_hw *hw, const char *con_id)
+ {
+ 	struct device *dev = hw->core->dev;
++	const char *name = dev ? dev_name(dev) : NULL;
+ 
+-	return clk_hw_create_clk(dev, hw, dev_name(dev), con_id);
++	return clk_hw_create_clk(dev, hw, name, con_id);
+ }
+ EXPORT_SYMBOL(clk_hw_get_clk);
+ 
+diff --git a/drivers/clk/hisilicon/clk-hi3559a.c b/drivers/clk/hisilicon/clk-hi3559a.c
+index 56012a3d02192..9ea1a80acbe8b 100644
+--- a/drivers/clk/hisilicon/clk-hi3559a.c
++++ b/drivers/clk/hisilicon/clk-hi3559a.c
+@@ -611,8 +611,8 @@ static struct hisi_mux_clock hi3559av100_shub_mux_clks[] = {
+ 
+ 
+ /* shub div clk */
+-static struct clk_div_table shub_spi_clk_table[] = {{0, 8}, {1, 4}, {2, 2}};
+-static struct clk_div_table shub_uart_div_clk_table[] = {{1, 8}, {2, 4}};
++static struct clk_div_table shub_spi_clk_table[] = {{0, 8}, {1, 4}, {2, 2}, {/*sentinel*/}};
++static struct clk_div_table shub_uart_div_clk_table[] = {{1, 8}, {2, 4}, {/*sentinel*/}};
+ 
+ static struct hisi_divider_clock hi3559av100_shub_div_clks[] = {
+ 	{ HI3559AV100_SHUB_SPI_SOURCE_CLK, "clk_spi_clk", "shub_clk", 0, 0x20, 24, 2,
+diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
+index c4e0f1c07192f..3f6fd7ef2a68f 100644
+--- a/drivers/clk/imx/clk-imx7d.c
++++ b/drivers/clk/imx/clk-imx7d.c
+@@ -849,7 +849,6 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ 	hws[IMX7D_WDOG4_ROOT_CLK] = imx_clk_hw_gate4("wdog4_root_clk", "wdog_post_div", base + 0x49f0, 0);
+ 	hws[IMX7D_KPP_ROOT_CLK] = imx_clk_hw_gate4("kpp_root_clk", "ipg_root_clk", base + 0x4aa0, 0);
+ 	hws[IMX7D_CSI_MCLK_ROOT_CLK] = imx_clk_hw_gate4("csi_mclk_root_clk", "csi_mclk_post_div", base + 0x4490, 0);
+-	hws[IMX7D_AUDIO_MCLK_ROOT_CLK] = imx_clk_hw_gate4("audio_mclk_root_clk", "audio_mclk_post_div", base + 0x4790, 0);
+ 	hws[IMX7D_WRCLK_ROOT_CLK] = imx_clk_hw_gate4("wrclk_root_clk", "wrclk_post_div", base + 0x47a0, 0);
+ 	hws[IMX7D_USB_CTRL_CLK] = imx_clk_hw_gate4("usb_ctrl_clk", "ahb_root_clk", base + 0x4680, 0);
+ 	hws[IMX7D_USB_PHY1_CLK] = imx_clk_hw_gate4("usb_phy1_clk", "pll_usb1_main_clk", base + 0x46a0, 0);
+diff --git a/drivers/clk/imx/clk-imx8qxp-lpcg.c b/drivers/clk/imx/clk-imx8qxp-lpcg.c
+index b23758083ce52..5e31a6a24b3a3 100644
+--- a/drivers/clk/imx/clk-imx8qxp-lpcg.c
++++ b/drivers/clk/imx/clk-imx8qxp-lpcg.c
+@@ -248,7 +248,7 @@ static int imx_lpcg_parse_clks_from_dt(struct platform_device *pdev,
+ 
+ 	for (i = 0; i < count; i++) {
+ 		idx = bit_offset[i] / 4;
+-		if (idx > IMX_LPCG_MAX_CLKS) {
++		if (idx >= IMX_LPCG_MAX_CLKS) {
+ 			dev_warn(&pdev->dev, "invalid bit offset of clock %d\n",
+ 				 i);
+ 			ret = -EINVAL;
+diff --git a/drivers/clk/loongson1/clk-loongson1c.c b/drivers/clk/loongson1/clk-loongson1c.c
+index 703f87622cf5f..1ebf740380efb 100644
+--- a/drivers/clk/loongson1/clk-loongson1c.c
++++ b/drivers/clk/loongson1/clk-loongson1c.c
+@@ -37,6 +37,7 @@ static const struct clk_div_table ahb_div_table[] = {
+ 	[1] = { .val = 1, .div = 4 },
+ 	[2] = { .val = 2, .div = 3 },
+ 	[3] = { .val = 3, .div = 3 },
++	[4] = { /* sentinel */ }
+ };
+ 
+ void __init ls1x_clk_init(void)
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index e1b1b426fae4b..f675fd969c4de 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -264,7 +264,7 @@ static int clk_rcg2_determine_floor_rate(struct clk_hw *hw,
+ 
+ static int __clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ {
+-	u32 cfg, mask;
++	u32 cfg, mask, d_val, not2d_val, n_minus_m;
+ 	struct clk_hw *hw = &rcg->clkr.hw;
+ 	int ret, index = qcom_find_src_index(hw, rcg->parent_map, f->src);
+ 
+@@ -283,8 +283,17 @@ static int __clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ 		if (ret)
+ 			return ret;
+ 
++		/* Calculate 2d value */
++		d_val = f->n;
++
++		n_minus_m = f->n - f->m;
++		n_minus_m *= 2;
++
++		d_val = clamp_t(u32, d_val, f->m, n_minus_m);
++		not2d_val = ~d_val & mask;
++
+ 		ret = regmap_update_bits(rcg->clkr.regmap,
+-				RCG_D_OFFSET(rcg), mask, ~f->n);
++				RCG_D_OFFSET(rcg), mask, not2d_val);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -720,6 +729,7 @@ static const struct frac_entry frac_table_pixel[] = {
+ 	{ 2, 9 },
+ 	{ 4, 9 },
+ 	{ 1, 1 },
++	{ 2, 3 },
+ 	{ }
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 108fe27bee10f..541016db3c4bb 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -60,11 +60,6 @@ static const struct parent_map gcc_xo_gpll0_gpll0_out_main_div2_map[] = {
+ 	{ P_GPLL0_DIV2, 4 },
+ };
+ 
+-static const char * const gcc_xo_gpll0[] = {
+-	"xo",
+-	"gpll0",
+-};
+-
+ static const struct parent_map gcc_xo_gpll0_map[] = {
+ 	{ P_XO, 0 },
+ 	{ P_GPLL0, 1 },
+@@ -956,6 +951,11 @@ static struct clk_rcg2 blsp1_uart6_apps_clk_src = {
+ 	},
+ };
+ 
++static const struct clk_parent_data gcc_xo_gpll0[] = {
++	{ .fw_name = "xo" },
++	{ .hw = &gpll0.clkr.hw },
++};
++
+ static const struct freq_tbl ftbl_pcie_axi_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
+ 	F(200000000, P_GPLL0, 4, 0, 0),
+@@ -969,7 +969,7 @@ static struct clk_rcg2 pcie0_axi_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "pcie0_axi_clk_src",
+-		.parent_names = gcc_xo_gpll0,
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1016,7 +1016,7 @@ static struct clk_rcg2 pcie1_axi_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "pcie1_axi_clk_src",
+-		.parent_names = gcc_xo_gpll0,
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -1074,7 +1074,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
+ 		.name = "sdcc1_apps_clk_src",
+ 		.parent_names = gcc_xo_gpll0_gpll2_gpll0_out_main_div2,
+ 		.num_parents = 4,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -1330,7 +1330,7 @@ static struct clk_rcg2 nss_ce_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "nss_ce_clk_src",
+-		.parent_names = gcc_xo_gpll0,
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+@@ -4329,8 +4329,7 @@ static struct clk_rcg2 pcie0_rchng_clk_src = {
+ 	.parent_map = gcc_xo_gpll0_map,
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "pcie0_rchng_clk_src",
+-		.parent_hws = (const struct clk_hw *[]) {
+-				&gpll0.clkr.hw },
++		.parent_data = gcc_xo_gpll0,
+ 		.num_parents = 2,
+ 		.ops = &clk_rcg2_ops,
+ 	},
+diff --git a/drivers/clk/qcom/gcc-msm8994.c b/drivers/clk/qcom/gcc-msm8994.c
+index 5df9f1ead48e0..4a9eb845b0507 100644
+--- a/drivers/clk/qcom/gcc-msm8994.c
++++ b/drivers/clk/qcom/gcc-msm8994.c
+@@ -76,6 +76,7 @@ static struct clk_alpha_pll gpll4_early = {
+ 
+ static struct clk_alpha_pll_postdiv gpll4 = {
+ 	.offset = 0x1dc0,
++	.width = 4,
+ 	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll4",
+diff --git a/drivers/clk/renesas/r9a07g044-cpg.c b/drivers/clk/renesas/r9a07g044-cpg.c
+index 47c16265fca9e..3e72dd060ffac 100644
+--- a/drivers/clk/renesas/r9a07g044-cpg.c
++++ b/drivers/clk/renesas/r9a07g044-cpg.c
+@@ -78,8 +78,8 @@ static const struct cpg_core_clk r9a07g044_core_clks[] __initconst = {
+ 	DEF_FIXED(".osc", R9A07G044_OSCCLK, CLK_EXTAL, 1, 1),
+ 	DEF_FIXED(".osc_div1000", CLK_OSC_DIV1000, CLK_EXTAL, 1, 1000),
+ 	DEF_SAMPLL(".pll1", CLK_PLL1, CLK_EXTAL, PLL146_CONF(0)),
+-	DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 133, 2),
+-	DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 133, 2),
++	DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 200, 3),
++	DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 200, 3),
+ 	DEF_FIXED(".pll3_400", CLK_PLL3_400, CLK_PLL3, 1, 4),
+ 	DEF_FIXED(".pll3_533", CLK_PLL3_533, CLK_PLL3, 1, 3),
+ 
+diff --git a/drivers/clk/rockchip/clk.c b/drivers/clk/rockchip/clk.c
+index b7be7e11b0dfe..bb8a844309bf5 100644
+--- a/drivers/clk/rockchip/clk.c
++++ b/drivers/clk/rockchip/clk.c
+@@ -180,6 +180,7 @@ static void rockchip_fractional_approximation(struct clk_hw *hw,
+ 		unsigned long rate, unsigned long *parent_rate,
+ 		unsigned long *m, unsigned long *n)
+ {
++	struct clk_fractional_divider *fd = to_clk_fd(hw);
+ 	unsigned long p_rate, p_parent_rate;
+ 	struct clk_hw *p_parent;
+ 
+@@ -190,6 +191,8 @@ static void rockchip_fractional_approximation(struct clk_hw *hw,
+ 		*parent_rate = p_parent_rate;
+ 	}
+ 
++	fd->flags |= CLK_FRAC_DIVIDER_POWER_OF_TWO_PS;
++
+ 	clk_fractional_divider_general_approximation(hw, rate, parent_rate, m, n);
+ }
+ 
+diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
+index 74c1d894cca86..219c80653dbdb 100644
+--- a/drivers/clk/tegra/clk-tegra124-emc.c
++++ b/drivers/clk/tegra/clk-tegra124-emc.c
+@@ -198,6 +198,7 @@ static struct tegra_emc *emc_ensure_emc_driver(struct tegra_clk_emc *tegra)
+ 
+ 	tegra->emc = platform_get_drvdata(pdev);
+ 	if (!tegra->emc) {
++		put_device(&pdev->dev);
+ 		pr_err("%s: cannot find EMC driver\n", __func__);
+ 		return NULL;
+ 	}
+diff --git a/drivers/clk/uniphier/clk-uniphier-fixed-rate.c b/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
+index 5319cd3804801..3bc55ab75314b 100644
+--- a/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
++++ b/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
+@@ -24,6 +24,7 @@ struct clk_hw *uniphier_clk_register_fixed_rate(struct device *dev,
+ 
+ 	init.name = name;
+ 	init.ops = &clk_fixed_rate_ops;
++	init.flags = 0;
+ 	init.parent_names = NULL;
+ 	init.num_parents = 0;
+ 
+diff --git a/drivers/clocksource/acpi_pm.c b/drivers/clocksource/acpi_pm.c
+index eb596ff9e7bb3..279ddff81ab49 100644
+--- a/drivers/clocksource/acpi_pm.c
++++ b/drivers/clocksource/acpi_pm.c
+@@ -229,8 +229,10 @@ static int __init parse_pmtmr(char *arg)
+ 	int ret;
+ 
+ 	ret = kstrtouint(arg, 16, &base);
+-	if (ret)
+-		return ret;
++	if (ret) {
++		pr_warn("PMTMR: invalid 'pmtmr=' value: '%s'\n", arg);
++		return 1;
++	}
+ 
+ 	pr_info("PMTMR IOPort override: 0x%04x -> 0x%04x\n", pmtmr_ioport,
+ 		base);
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index 5e3e96d3d1b98..cc2a961ddd3be 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -504,11 +504,14 @@ static int exynos4_mct_dying_cpu(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static int __init exynos4_timer_resources(struct device_node *np, void __iomem *base)
++static int __init exynos4_timer_resources(struct device_node *np)
+ {
+-	int err, cpu;
+ 	struct clk *mct_clk, *tick_clk;
+ 
++	reg_base = of_iomap(np, 0);
++	if (!reg_base)
++		panic("%s: unable to ioremap mct address space\n", __func__);
++
+ 	tick_clk = of_clk_get_by_name(np, "fin_pll");
+ 	if (IS_ERR(tick_clk))
+ 		panic("%s: unable to determine tick clock rate\n", __func__);
+@@ -519,9 +522,32 @@ static int __init exynos4_timer_resources(struct device_node *np, void __iomem *
+ 		panic("%s: unable to retrieve mct clock instance\n", __func__);
+ 	clk_prepare_enable(mct_clk);
+ 
+-	reg_base = base;
+-	if (!reg_base)
+-		panic("%s: unable to ioremap mct address space\n", __func__);
++	return 0;
++}
++
++static int __init exynos4_timer_interrupts(struct device_node *np,
++					   unsigned int int_type)
++{
++	int nr_irqs, i, err, cpu;
++
++	mct_int_type = int_type;
++
++	/* This driver uses only one global timer interrupt */
++	mct_irqs[MCT_G0_IRQ] = irq_of_parse_and_map(np, MCT_G0_IRQ);
++
++	/*
++	 * Find out the number of local irqs specified. The local
++	 * timer irqs are specified after the four global timer
++	 * irqs are specified.
++	 */
++	nr_irqs = of_irq_count(np);
++	if (nr_irqs > ARRAY_SIZE(mct_irqs)) {
++		pr_err("exynos-mct: too many (%d) interrupts configured in DT\n",
++			nr_irqs);
++		nr_irqs = ARRAY_SIZE(mct_irqs);
++	}
++	for (i = MCT_L0_IRQ; i < nr_irqs; i++)
++		mct_irqs[i] = irq_of_parse_and_map(np, i);
+ 
+ 	if (mct_int_type == MCT_INT_PPI) {
+ 
+@@ -532,11 +558,14 @@ static int __init exynos4_timer_resources(struct device_node *np, void __iomem *
+ 		     mct_irqs[MCT_L0_IRQ], err);
+ 	} else {
+ 		for_each_possible_cpu(cpu) {
+-			int mct_irq = mct_irqs[MCT_L0_IRQ + cpu];
++			int mct_irq;
+ 			struct mct_clock_event_device *pcpu_mevt =
+ 				per_cpu_ptr(&percpu_mct_tick, cpu);
+ 
+ 			pcpu_mevt->evt.irq = -1;
++			if (MCT_L0_IRQ + cpu >= ARRAY_SIZE(mct_irqs))
++				break;
++			mct_irq = mct_irqs[MCT_L0_IRQ + cpu];
+ 
+ 			irq_set_status_flags(mct_irq, IRQ_NOAUTOEN);
+ 			if (request_irq(mct_irq,
+@@ -581,24 +610,13 @@ out_irq:
+ 
+ static int __init mct_init_dt(struct device_node *np, unsigned int int_type)
+ {
+-	u32 nr_irqs, i;
+ 	int ret;
+ 
+-	mct_int_type = int_type;
+-
+-	/* This driver uses only one global timer interrupt */
+-	mct_irqs[MCT_G0_IRQ] = irq_of_parse_and_map(np, MCT_G0_IRQ);
+-
+-	/*
+-	 * Find out the number of local irqs specified. The local
+-	 * timer irqs are specified after the four global timer
+-	 * irqs are specified.
+-	 */
+-	nr_irqs = of_irq_count(np);
+-	for (i = MCT_L0_IRQ; i < nr_irqs; i++)
+-		mct_irqs[i] = irq_of_parse_and_map(np, i);
++	ret = exynos4_timer_resources(np);
++	if (ret)
++		return ret;
+ 
+-	ret = exynos4_timer_resources(np, of_iomap(np, 0));
++	ret = exynos4_timer_interrupts(np, int_type);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/clocksource/timer-microchip-pit64b.c b/drivers/clocksource/timer-microchip-pit64b.c
+index cfa4ec7ef3968..790d2c9b42a70 100644
+--- a/drivers/clocksource/timer-microchip-pit64b.c
++++ b/drivers/clocksource/timer-microchip-pit64b.c
+@@ -165,7 +165,7 @@ static u64 mchp_pit64b_clksrc_read(struct clocksource *cs)
+ 	return mchp_pit64b_cnt_read(mchp_pit64b_cs_base);
+ }
+ 
+-static u64 mchp_pit64b_sched_read_clk(void)
++static u64 notrace mchp_pit64b_sched_read_clk(void)
+ {
+ 	return mchp_pit64b_cnt_read(mchp_pit64b_cs_base);
+ }
+diff --git a/drivers/clocksource/timer-of.c b/drivers/clocksource/timer-of.c
+index 529cc6a51cdb3..c3f54d9912be7 100644
+--- a/drivers/clocksource/timer-of.c
++++ b/drivers/clocksource/timer-of.c
+@@ -157,9 +157,9 @@ static __init int timer_of_base_init(struct device_node *np,
+ 	of_base->base = of_base->name ?
+ 		of_io_request_and_map(np, of_base->index, of_base->name) :
+ 		of_iomap(np, of_base->index);
+-	if (IS_ERR(of_base->base)) {
+-		pr_err("Failed to iomap (%s)\n", of_base->name);
+-		return PTR_ERR(of_base->base);
++	if (IS_ERR_OR_NULL(of_base->base)) {
++		pr_err("Failed to iomap (%s:%s)\n", np->name, of_base->name);
++		return of_base->base ? PTR_ERR(of_base->base) : -ENOMEM;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 1fccb457fcc54..2737407ff0698 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -694,9 +694,9 @@ static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
+ 		return 0;
+ 	}
+ 
+-	if (pa == 0x48034000)		/* dra7 dmtimer3 */
++	if (pa == 0x4882c000)           /* dra7 dmtimer15 */
+ 		return dmtimer_percpu_timer_init(np, 0);
+-	else if (pa == 0x48036000)	/* dra7 dmtimer4 */
++	else if (pa == 0x4882e000)      /* dra7 dmtimer16 */
+ 		return dmtimer_percpu_timer_init(np, 1);
+ 
+ 	return 0;
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index d1744b5d96190..6dfa86971a757 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -130,7 +130,7 @@ static void get_krait_bin_format_b(struct device *cpu_dev,
+ 	}
+ 
+ 	/* Check PVS_BLOW_STATUS */
+-	pte_efuse = *(((u32 *)buf) + 4);
++	pte_efuse = *(((u32 *)buf) + 1);
+ 	pte_efuse &= BIT(21);
+ 	if (pte_efuse) {
+ 		dev_dbg(cpu_dev, "PVS bin: %d\n", *pvs);
+diff --git a/drivers/cpuidle/cpuidle-qcom-spm.c b/drivers/cpuidle/cpuidle-qcom-spm.c
+index 01e77913a4144..5f27dcc6c110f 100644
+--- a/drivers/cpuidle/cpuidle-qcom-spm.c
++++ b/drivers/cpuidle/cpuidle-qcom-spm.c
+@@ -155,6 +155,22 @@ static struct platform_driver spm_cpuidle_driver = {
+ 	},
+ };
+ 
++static bool __init qcom_spm_find_any_cpu(void)
++{
++	struct device_node *cpu_node, *saw_node;
++
++	for_each_of_cpu_node(cpu_node) {
++		saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0);
++		if (of_device_is_available(saw_node)) {
++			of_node_put(saw_node);
++			of_node_put(cpu_node);
++			return true;
++		}
++		of_node_put(saw_node);
++	}
++	return false;
++}
++
+ static int __init qcom_spm_cpuidle_init(void)
+ {
+ 	struct platform_device *pdev;
+@@ -164,6 +180,10 @@ static int __init qcom_spm_cpuidle_init(void)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Make sure there is actually any CPU managed by the SPM */
++	if (!qcom_spm_find_any_cpu())
++		return 0;
++
+ 	pdev = platform_device_register_simple("qcom-spm-cpuidle",
+ 					       -1, NULL, 0);
+ 	if (IS_ERR(pdev)) {
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 54ae8d16e4931..35e3cadccac2b 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -11,6 +11,7 @@
+  * You could find a link for the datasheet in Documentation/arm/sunxi.rst
+  */
+ 
++#include <linux/bottom_half.h>
+ #include <linux/crypto.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+@@ -283,7 +284,9 @@ static int sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
+ 
+ 	flow = rctx->flow;
+ 	err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
++	local_bh_disable();
+ 	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
+ 	return 0;
+ }
+ 
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+index 88194718a806c..859b7522faaac 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+@@ -9,6 +9,7 @@
+  *
+  * You could find the datasheet in Documentation/arm/sunxi.rst
+  */
++#include <linux/bottom_half.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
+@@ -414,6 +415,8 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ theend:
+ 	kfree(buf);
+ 	kfree(result);
++	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
++	local_bh_enable();
+ 	return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 9ef1c85c4aaa5..554e400d41cad 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -11,6 +11,7 @@
+  * You could find a link for the datasheet in Documentation/arm/sunxi.rst
+  */
+ 
++#include <linux/bottom_half.h>
+ #include <linux/crypto.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+@@ -274,7 +275,9 @@ static int sun8i_ss_handle_cipher_request(struct crypto_engine *engine, void *ar
+ 	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+ 
+ 	err = sun8i_ss_cipher(breq);
++	local_bh_disable();
+ 	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 80e89066dbd1a..319fe3279a716 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -30,6 +30,8 @@
+ static const struct ss_variant ss_a80_variant = {
+ 	.alg_cipher = { SS_ALG_AES, SS_ALG_DES, SS_ALG_3DES,
+ 	},
++	.alg_hash = { SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP,
++	},
+ 	.op_mode = { SS_OP_ECB, SS_OP_CBC,
+ 	},
+ 	.ss_clks = {
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index 3c073eb3db038..1a71ed49d2333 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -9,6 +9,7 @@
+  *
+  * You could find the datasheet in Documentation/arm/sunxi.rst
+  */
++#include <linux/bottom_half.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
+@@ -442,6 +443,8 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ theend:
+ 	kfree(pad);
+ 	kfree(result);
++	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
++	local_bh_enable();
+ 	return 0;
+ }
+diff --git a/drivers/crypto/amlogic/amlogic-gxl-cipher.c b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+index c6865cbd334b2..e79514fce731f 100644
+--- a/drivers/crypto/amlogic/amlogic-gxl-cipher.c
++++ b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+@@ -265,7 +265,9 @@ static int meson_handle_cipher_request(struct crypto_engine *engine,
+ 	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+ 
+ 	err = meson_cipher(breq);
++	local_bh_disable();
+ 	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index d718db224be42..7d4b4ad1db1f3 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -632,6 +632,20 @@ static int ccp_terminate_all(struct dma_chan *dma_chan)
+ 	return 0;
+ }
+ 
++static void ccp_dma_release(struct ccp_device *ccp)
++{
++	struct ccp_dma_chan *chan;
++	struct dma_chan *dma_chan;
++	unsigned int i;
++
++	for (i = 0; i < ccp->cmd_q_count; i++) {
++		chan = ccp->ccp_dma_chan + i;
++		dma_chan = &chan->dma_chan;
++		tasklet_kill(&chan->cleanup_tasklet);
++		list_del_rcu(&dma_chan->device_node);
++	}
++}
++
+ int ccp_dmaengine_register(struct ccp_device *ccp)
+ {
+ 	struct ccp_dma_chan *chan;
+@@ -736,6 +750,7 @@ int ccp_dmaengine_register(struct ccp_device *ccp)
+ 	return 0;
+ 
+ err_reg:
++	ccp_dma_release(ccp);
+ 	kmem_cache_destroy(ccp->dma_desc_cache);
+ 
+ err_cache:
+@@ -752,6 +767,7 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
+ 		return;
+ 
+ 	dma_async_device_unregister(dma_dev);
++	ccp_dma_release(ccp);
+ 
+ 	kmem_cache_destroy(ccp->dma_desc_cache);
+ 	kmem_cache_destroy(ccp->dma_cmd_cache);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 581a1b13d5c3d..de015995189fa 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -241,7 +241,7 @@ static int __sev_platform_init_locked(int *error)
+ 	struct psp_device *psp = psp_master;
+ 	struct sev_data_init data;
+ 	struct sev_device *sev;
+-	int psp_ret, rc = 0;
++	int psp_ret = -1, rc = 0;
+ 
+ 	if (!psp || !psp->sev_data)
+ 		return -ENODEV;
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index a5e041d9d2cf1..11e0278c8631d 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -258,6 +258,13 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ {
+ 	int ret = 0;
+ 
++	if (!nbytes) {
++		*mapped_nents = 0;
++		*lbytes = 0;
++		*nents = 0;
++		return 0;
++	}
++
+ 	*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
+ 	if (*nents > max_sg_nents) {
+ 		*nents = 0;
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index 78833491f534d..309da6334a0a0 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -257,8 +257,8 @@ static void cc_cipher_exit(struct crypto_tfm *tfm)
+ 		&ctx_p->user.key_dma_addr);
+ 
+ 	/* Free key buffer in context */
+-	kfree_sensitive(ctx_p->user.key);
+ 	dev_dbg(dev, "Free key buffer in context. key=@%p\n", ctx_p->user.key);
++	kfree_sensitive(ctx_p->user.key);
+ }
+ 
+ struct tdes_keys {
+diff --git a/drivers/crypto/gemini/sl3516-ce-cipher.c b/drivers/crypto/gemini/sl3516-ce-cipher.c
+index c1c2b1d866639..f2be0a7d7f7ac 100644
+--- a/drivers/crypto/gemini/sl3516-ce-cipher.c
++++ b/drivers/crypto/gemini/sl3516-ce-cipher.c
+@@ -264,7 +264,9 @@ static int sl3516_ce_handle_cipher_request(struct crypto_engine *engine, void *a
+ 	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+ 
+ 	err = sl3516_ce_cipher(breq);
++	local_bh_disable();
+ 	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index 1dc6a27ba0e0d..82d2b117e1623 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -4145,7 +4145,7 @@ static void qm_vf_get_qos(struct hisi_qm *qm, u32 fun_num)
+ static int qm_vf_read_qos(struct hisi_qm *qm)
+ {
+ 	int cnt = 0;
+-	int ret;
++	int ret = -EINVAL;
+ 
+ 	/* reset mailbox qos val */
+ 	qm->mb_qos = 0;
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 6a45bd23b3635..090920ed50c8f 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -2284,9 +2284,10 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ 				struct aead_request *aead_req,
+ 				bool encrypt)
+ {
+-	struct aead_request *subreq = aead_request_ctx(aead_req);
+ 	struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+ 	struct device *dev = ctx->dev;
++	struct aead_request *subreq;
++	int ret;
+ 
+ 	/* Kunpeng920 aead mode not support input 0 size */
+ 	if (!a_ctx->fallback_aead_tfm) {
+@@ -2294,6 +2295,10 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ 		return -EINVAL;
+ 	}
+ 
++	subreq = aead_request_alloc(a_ctx->fallback_aead_tfm, GFP_KERNEL);
++	if (!subreq)
++		return -ENOMEM;
++
+ 	aead_request_set_tfm(subreq, a_ctx->fallback_aead_tfm);
+ 	aead_request_set_callback(subreq, aead_req->base.flags,
+ 				  aead_req->base.complete, aead_req->base.data);
+@@ -2301,8 +2306,13 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ 			       aead_req->cryptlen, aead_req->iv);
+ 	aead_request_set_ad(subreq, aead_req->assoclen);
+ 
+-	return encrypt ? crypto_aead_encrypt(subreq) :
+-		   crypto_aead_decrypt(subreq);
++	if (encrypt)
++		ret = crypto_aead_encrypt(subreq);
++	else
++		ret = crypto_aead_decrypt(subreq);
++	aead_request_free(subreq);
++
++	return ret;
+ }
+ 
+ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index 90551bf38b523..03d239cfdf8c6 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -443,9 +443,11 @@ static int sec_engine_init(struct hisi_qm *qm)
+ 
+ 	writel(SEC_SAA_ENABLE, qm->io_base + SEC_SAA_EN_REG);
+ 
+-	/* Enable sm4 extra mode, as ctr/ecb */
+-	writel_relaxed(SEC_BD_ERR_CHK_EN0,
+-		       qm->io_base + SEC_BD_ERR_CHK_EN_REG0);
++	/* HW V2 enable sm4 extra mode, as ctr/ecb */
++	if (qm->ver < QM_HW_V3)
++		writel_relaxed(SEC_BD_ERR_CHK_EN0,
++			       qm->io_base + SEC_BD_ERR_CHK_EN_REG0);
++
+ 	/* Enable sm4 xts mode multiple iv */
+ 	writel_relaxed(SEC_BD_ERR_CHK_EN1,
+ 		       qm->io_base + SEC_BD_ERR_CHK_EN_REG1);
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+index 877a948469bd1..570074e23b60e 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+@@ -1634,16 +1634,13 @@ static inline int cpt_register_algs(void)
+ {
+ 	int i, err = 0;
+ 
+-	if (!IS_ENABLED(CONFIG_DM_CRYPT)) {
+-		for (i = 0; i < ARRAY_SIZE(otx2_cpt_skciphers); i++)
+-			otx2_cpt_skciphers[i].base.cra_flags &=
+-							~CRYPTO_ALG_DEAD;
+-
+-		err = crypto_register_skciphers(otx2_cpt_skciphers,
+-						ARRAY_SIZE(otx2_cpt_skciphers));
+-		if (err)
+-			return err;
+-	}
++	for (i = 0; i < ARRAY_SIZE(otx2_cpt_skciphers); i++)
++		otx2_cpt_skciphers[i].base.cra_flags &= ~CRYPTO_ALG_DEAD;
++
++	err = crypto_register_skciphers(otx2_cpt_skciphers,
++					ARRAY_SIZE(otx2_cpt_skciphers));
++	if (err)
++		return err;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(otx2_cpt_aeads); i++)
+ 		otx2_cpt_aeads[i].base.cra_flags &= ~CRYPTO_ALG_DEAD;
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index d19e5ffb5104b..d6f9e2fe863d7 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -331,7 +331,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 		memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128);
+ 	}
+ 
+-	for_each_sg(req->src, src, sg_nents(src), i) {
++	for_each_sg(req->src, src, sg_nents(req->src), i) {
+ 		src_buf = sg_virt(src);
+ 		len = sg_dma_len(src);
+ 		tlen += len;
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+index 1cece1a7d3f00..5bbf0d2722e11 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+@@ -506,7 +506,6 @@ struct rk_crypto_tmp rk_ecb_des3_ede_alg = {
+ 		.exit			= rk_ablk_exit_tfm,
+ 		.min_keysize		= DES3_EDE_KEY_SIZE,
+ 		.max_keysize		= DES3_EDE_KEY_SIZE,
+-		.ivsize			= DES_BLOCK_SIZE,
+ 		.setkey			= rk_tdes_setkey,
+ 		.encrypt		= rk_des3_ede_ecb_encrypt,
+ 		.decrypt		= rk_des3_ede_ecb_decrypt,
+diff --git a/drivers/crypto/vmx/Kconfig b/drivers/crypto/vmx/Kconfig
+index c85fab7ef0bdd..b2c28b87f14b3 100644
+--- a/drivers/crypto/vmx/Kconfig
++++ b/drivers/crypto/vmx/Kconfig
+@@ -2,7 +2,11 @@
+ config CRYPTO_DEV_VMX_ENCRYPT
+ 	tristate "Encryption acceleration support on P8 CPU"
+ 	depends on CRYPTO_DEV_VMX
++	select CRYPTO_AES
++	select CRYPTO_CBC
++	select CRYPTO_CTR
+ 	select CRYPTO_GHASH
++	select CRYPTO_XTS
+ 	default m
+ 	help
+ 	  Support for VMX cryptographic acceleration instructions on Power8 CPU.
+diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
+index 46ce58376580b..55e120350a296 100644
+--- a/drivers/cxl/core/bus.c
++++ b/drivers/cxl/core/bus.c
+@@ -182,6 +182,7 @@ static void cxl_decoder_release(struct device *dev)
+ 
+ 	ida_free(&port->decoder_ida, cxld->id);
+ 	kfree(cxld);
++	put_device(&port->dev);
+ }
+ 
+ static const struct device_type cxl_decoder_switch_type = {
+@@ -500,7 +501,10 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
+ 	if (rc < 0)
+ 		goto err;
+ 
++	/* need parent to stick around to release the id */
++	get_device(&port->dev);
+ 	cxld->id = rc;
++
+ 	cxld->nr_targets = nr_targets;
+ 	dev = &cxld->dev;
+ 	device_initialize(dev);
+diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
+index 41de4a136ecd7..2e7027a3fef3b 100644
+--- a/drivers/cxl/core/regs.c
++++ b/drivers/cxl/core/regs.c
+@@ -35,7 +35,7 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
+ 			      struct cxl_component_reg_map *map)
+ {
+ 	int cap, cap_count;
+-	u64 cap_array;
++	u32 cap_array;
+ 
+ 	*map = (struct cxl_component_reg_map) { 0 };
+ 
+@@ -45,11 +45,11 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
+ 	 */
+ 	base += CXL_CM_OFFSET;
+ 
+-	cap_array = readq(base + CXL_CM_CAP_HDR_OFFSET);
++	cap_array = readl(base + CXL_CM_CAP_HDR_OFFSET);
+ 
+ 	if (FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, cap_array) != CM_CAP_HDR_CAP_ID) {
+ 		dev_err(dev,
+-			"Couldn't locate the CXL.cache and CXL.mem capability array header./n");
++			"Couldn't locate the CXL.cache and CXL.mem capability array header.\n");
+ 		return;
+ 	}
+ 
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index e20d0cef10a18..0e55dde937d24 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -612,6 +612,7 @@ static int dax_fs_init(void)
+ static void dax_fs_exit(void)
+ {
+ 	kern_unmount(dax_mnt);
++	rcu_barrier();
+ 	kmem_cache_destroy(dax_cache);
+ }
+ 
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index c57a609db75be..e7330684d3b82 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -190,6 +190,10 @@ static long udmabuf_create(struct miscdevice *device,
+ 		if (ubuf->pagecount > pglimit)
+ 			goto err;
+ 	}
++
++	if (!ubuf->pagecount)
++		goto err;
++
+ 	ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
+ 				    GFP_KERNEL);
+ 	if (!ubuf->pages) {
+diff --git a/drivers/dma/hisi_dma.c b/drivers/dma/hisi_dma.c
+index 97c87a7cba879..43817ced3a3e1 100644
+--- a/drivers/dma/hisi_dma.c
++++ b/drivers/dma/hisi_dma.c
+@@ -30,7 +30,7 @@
+ #define HISI_DMA_MODE			0x217c
+ #define HISI_DMA_OFFSET			0x100
+ 
+-#define HISI_DMA_MSI_NUM		30
++#define HISI_DMA_MSI_NUM		32
+ #define HISI_DMA_CHAN_NUM		30
+ #define HISI_DMA_Q_DEPTH_VAL		1024
+ 
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index cd855097bfdba..4bafc88f425fa 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -688,11 +688,16 @@ static void idxd_groups_clear_state(struct idxd_device *idxd)
+ 		memset(&group->grpcfg, 0, sizeof(group->grpcfg));
+ 		group->num_engines = 0;
+ 		group->num_wqs = 0;
+-		group->use_token_limit = false;
+-		group->tokens_allowed = 0;
+-		group->tokens_reserved = 0;
+-		group->tc_a = -1;
+-		group->tc_b = -1;
++		group->use_rdbuf_limit = false;
++		group->rdbufs_allowed = 0;
++		group->rdbufs_reserved = 0;
++		if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override) {
++			group->tc_a = 1;
++			group->tc_b = 1;
++		} else {
++			group->tc_a = -1;
++			group->tc_b = -1;
++		}
+ 	}
+ }
+ 
+@@ -788,10 +793,10 @@ static int idxd_groups_config_write(struct idxd_device *idxd)
+ 	int i;
+ 	struct device *dev = &idxd->pdev->dev;
+ 
+-	/* Setup bandwidth token limit */
+-	if (idxd->hw.gen_cap.config_en && idxd->token_limit) {
++	/* Setup bandwidth rdbuf limit */
++	if (idxd->hw.gen_cap.config_en && idxd->rdbuf_limit) {
+ 		reg.bits = ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET);
+-		reg.token_limit = idxd->token_limit;
++		reg.rdbuf_limit = idxd->rdbuf_limit;
+ 		iowrite32(reg.bits, idxd->reg_base + IDXD_GENCFG_OFFSET);
+ 	}
+ 
+@@ -932,13 +937,12 @@ static void idxd_group_flags_setup(struct idxd_device *idxd)
+ 			group->tc_b = group->grpcfg.flags.tc_b = 1;
+ 		else
+ 			group->grpcfg.flags.tc_b = group->tc_b;
+-		group->grpcfg.flags.use_token_limit = group->use_token_limit;
+-		group->grpcfg.flags.tokens_reserved = group->tokens_reserved;
+-		if (group->tokens_allowed)
+-			group->grpcfg.flags.tokens_allowed =
+-				group->tokens_allowed;
++		group->grpcfg.flags.use_rdbuf_limit = group->use_rdbuf_limit;
++		group->grpcfg.flags.rdbufs_reserved = group->rdbufs_reserved;
++		if (group->rdbufs_allowed)
++			group->grpcfg.flags.rdbufs_allowed = group->rdbufs_allowed;
+ 		else
+-			group->grpcfg.flags.tokens_allowed = idxd->max_tokens;
++			group->grpcfg.flags.rdbufs_allowed = idxd->max_rdbufs;
+ 	}
+ }
+ 
+@@ -1131,7 +1135,7 @@ int idxd_device_load_config(struct idxd_device *idxd)
+ 	int i, rc;
+ 
+ 	reg.bits = ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET);
+-	idxd->token_limit = reg.token_limit;
++	idxd->rdbuf_limit = reg.rdbuf_limit;
+ 
+ 	for (i = 0; i < idxd->max_groups; i++) {
+ 		struct idxd_group *group = idxd->groups[i];
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index 0cf8d3145870a..ca1e7bd05839c 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -84,9 +84,9 @@ struct idxd_group {
+ 	int id;
+ 	int num_engines;
+ 	int num_wqs;
+-	bool use_token_limit;
+-	u8 tokens_allowed;
+-	u8 tokens_reserved;
++	bool use_rdbuf_limit;
++	u8 rdbufs_allowed;
++	u8 rdbufs_reserved;
+ 	int tc_a;
+ 	int tc_b;
+ };
+@@ -276,11 +276,11 @@ struct idxd_device {
+ 	u32 max_batch_size;
+ 	int max_groups;
+ 	int max_engines;
+-	int max_tokens;
++	int max_rdbufs;
+ 	int max_wqs;
+ 	int max_wq_size;
+-	int token_limit;
+-	int nr_tokens;		/* non-reserved tokens */
++	int rdbuf_limit;
++	int nr_rdbufs;		/* non-reserved read buffers */
+ 	unsigned int wqcfg_size;
+ 
+ 	union sw_err_reg sw_err;
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 7bf03f371ce19..6263d9825250b 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -464,9 +464,9 @@ static void idxd_read_caps(struct idxd_device *idxd)
+ 	dev_dbg(dev, "group_cap: %#llx\n", idxd->hw.group_cap.bits);
+ 	idxd->max_groups = idxd->hw.group_cap.num_groups;
+ 	dev_dbg(dev, "max groups: %u\n", idxd->max_groups);
+-	idxd->max_tokens = idxd->hw.group_cap.total_tokens;
+-	dev_dbg(dev, "max tokens: %u\n", idxd->max_tokens);
+-	idxd->nr_tokens = idxd->max_tokens;
++	idxd->max_rdbufs = idxd->hw.group_cap.total_rdbufs;
++	dev_dbg(dev, "max read buffers: %u\n", idxd->max_rdbufs);
++	idxd->nr_rdbufs = idxd->max_rdbufs;
+ 
+ 	/* read engine capabilities */
+ 	idxd->hw.engine_cap.bits =
+diff --git a/drivers/dma/idxd/registers.h b/drivers/dma/idxd/registers.h
+index 262c8220adbda..ac4fe0b1dd8aa 100644
+--- a/drivers/dma/idxd/registers.h
++++ b/drivers/dma/idxd/registers.h
+@@ -64,9 +64,9 @@ union wq_cap_reg {
+ union group_cap_reg {
+ 	struct {
+ 		u64 num_groups:8;
+-		u64 total_tokens:8;
+-		u64 token_en:1;
+-		u64 token_limit:1;
++		u64 total_rdbufs:8;	/* formerly total_tokens */
++		u64 rdbuf_ctrl:1;	/* formerly token_en */
++		u64 rdbuf_limit:1;	/* formerly token_limit */
+ 		u64 rsvd:46;
+ 	};
+ 	u64 bits;
+@@ -110,7 +110,7 @@ union offsets_reg {
+ #define IDXD_GENCFG_OFFSET		0x80
+ union gencfg_reg {
+ 	struct {
+-		u32 token_limit:8;
++		u32 rdbuf_limit:8;
+ 		u32 rsvd:4;
+ 		u32 user_int_en:1;
+ 		u32 rsvd2:19;
+@@ -287,10 +287,10 @@ union group_flags {
+ 		u32 tc_a:3;
+ 		u32 tc_b:3;
+ 		u32 rsvd:1;
+-		u32 use_token_limit:1;
+-		u32 tokens_reserved:8;
++		u32 use_rdbuf_limit:1;
++		u32 rdbufs_reserved:8;
+ 		u32 rsvd2:4;
+-		u32 tokens_allowed:8;
++		u32 rdbufs_allowed:8;
+ 		u32 rsvd3:4;
+ 	};
+ 	u32 bits;
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index a9025be940db2..999ce13a93adc 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -99,17 +99,17 @@ struct device_type idxd_engine_device_type = {
+ 
+ /* Group attributes */
+ 
+-static void idxd_set_free_tokens(struct idxd_device *idxd)
++static void idxd_set_free_rdbufs(struct idxd_device *idxd)
+ {
+-	int i, tokens;
++	int i, rdbufs;
+ 
+-	for (i = 0, tokens = 0; i < idxd->max_groups; i++) {
++	for (i = 0, rdbufs = 0; i < idxd->max_groups; i++) {
+ 		struct idxd_group *g = idxd->groups[i];
+ 
+-		tokens += g->tokens_reserved;
++		rdbufs += g->rdbufs_reserved;
+ 	}
+ 
+-	idxd->nr_tokens = idxd->max_tokens - tokens;
++	idxd->nr_rdbufs = idxd->max_rdbufs - rdbufs;
+ }
+ 
+ static ssize_t group_tokens_reserved_show(struct device *dev,
+@@ -118,7 +118,7 @@ static ssize_t group_tokens_reserved_show(struct device *dev,
+ {
+ 	struct idxd_group *group = confdev_to_group(dev);
+ 
+-	return sysfs_emit(buf, "%u\n", group->tokens_reserved);
++	return sysfs_emit(buf, "%u\n", group->rdbufs_reserved);
+ }
+ 
+ static ssize_t group_tokens_reserved_store(struct device *dev,
+@@ -143,14 +143,14 @@ static ssize_t group_tokens_reserved_store(struct device *dev,
+ 	if (idxd->state == IDXD_DEV_ENABLED)
+ 		return -EPERM;
+ 
+-	if (val > idxd->max_tokens)
++	if (val > idxd->max_rdbufs)
+ 		return -EINVAL;
+ 
+-	if (val > idxd->nr_tokens + group->tokens_reserved)
++	if (val > idxd->nr_rdbufs + group->rdbufs_reserved)
+ 		return -EINVAL;
+ 
+-	group->tokens_reserved = val;
+-	idxd_set_free_tokens(idxd);
++	group->rdbufs_reserved = val;
++	idxd_set_free_rdbufs(idxd);
+ 	return count;
+ }
+ 
+@@ -164,7 +164,7 @@ static ssize_t group_tokens_allowed_show(struct device *dev,
+ {
+ 	struct idxd_group *group = confdev_to_group(dev);
+ 
+-	return sysfs_emit(buf, "%u\n", group->tokens_allowed);
++	return sysfs_emit(buf, "%u\n", group->rdbufs_allowed);
+ }
+ 
+ static ssize_t group_tokens_allowed_store(struct device *dev,
+@@ -190,10 +190,10 @@ static ssize_t group_tokens_allowed_store(struct device *dev,
+ 		return -EPERM;
+ 
+ 	if (val < 4 * group->num_engines ||
+-	    val > group->tokens_reserved + idxd->nr_tokens)
++	    val > group->rdbufs_reserved + idxd->nr_rdbufs)
+ 		return -EINVAL;
+ 
+-	group->tokens_allowed = val;
++	group->rdbufs_allowed = val;
+ 	return count;
+ }
+ 
+@@ -207,7 +207,7 @@ static ssize_t group_use_token_limit_show(struct device *dev,
+ {
+ 	struct idxd_group *group = confdev_to_group(dev);
+ 
+-	return sysfs_emit(buf, "%u\n", group->use_token_limit);
++	return sysfs_emit(buf, "%u\n", group->use_rdbuf_limit);
+ }
+ 
+ static ssize_t group_use_token_limit_store(struct device *dev,
+@@ -232,10 +232,10 @@ static ssize_t group_use_token_limit_store(struct device *dev,
+ 	if (idxd->state == IDXD_DEV_ENABLED)
+ 		return -EPERM;
+ 
+-	if (idxd->token_limit == 0)
++	if (idxd->rdbuf_limit == 0)
+ 		return -EPERM;
+ 
+-	group->use_token_limit = !!val;
++	group->use_rdbuf_limit = !!val;
+ 	return count;
+ }
+ 
+@@ -1161,7 +1161,7 @@ static ssize_t max_tokens_show(struct device *dev,
+ {
+ 	struct idxd_device *idxd = confdev_to_idxd(dev);
+ 
+-	return sysfs_emit(buf, "%u\n", idxd->max_tokens);
++	return sysfs_emit(buf, "%u\n", idxd->max_rdbufs);
+ }
+ static DEVICE_ATTR_RO(max_tokens);
+ 
+@@ -1170,7 +1170,7 @@ static ssize_t token_limit_show(struct device *dev,
+ {
+ 	struct idxd_device *idxd = confdev_to_idxd(dev);
+ 
+-	return sysfs_emit(buf, "%u\n", idxd->token_limit);
++	return sysfs_emit(buf, "%u\n", idxd->rdbuf_limit);
+ }
+ 
+ static ssize_t token_limit_store(struct device *dev,
+@@ -1191,13 +1191,13 @@ static ssize_t token_limit_store(struct device *dev,
+ 	if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
+ 		return -EPERM;
+ 
+-	if (!idxd->hw.group_cap.token_limit)
++	if (!idxd->hw.group_cap.rdbuf_limit)
+ 		return -EPERM;
+ 
+-	if (val > idxd->hw.group_cap.total_tokens)
++	if (val > idxd->hw.group_cap.total_rdbufs)
+ 		return -EINVAL;
+ 
+-	idxd->token_limit = val;
++	idxd->rdbuf_limit = val;
+ 	return count;
+ }
+ static DEVICE_ATTR_RW(token_limit);
+diff --git a/drivers/firmware/efi/efi-pstore.c b/drivers/firmware/efi/efi-pstore.c
+index 0ef086e43090b..7e771c56c13c6 100644
+--- a/drivers/firmware/efi/efi-pstore.c
++++ b/drivers/firmware/efi/efi-pstore.c
+@@ -266,7 +266,7 @@ static int efi_pstore_write(struct pstore_record *record)
+ 		efi_name[i] = name[i];
+ 
+ 	ret = efivar_entry_set_safe(efi_name, vendor, PSTORE_EFI_ATTRIBUTES,
+-			      preemptible(), record->size, record->psi->buf);
++			      false, record->size, record->psi->buf);
+ 
+ 	if (record->reason == KMSG_DUMP_OOPS && try_module_get(THIS_MODULE))
+ 		if (!schedule_work(&efivar_work))
+diff --git a/drivers/firmware/google/Kconfig b/drivers/firmware/google/Kconfig
+index 931544c9f63d4..983e07dc022ed 100644
+--- a/drivers/firmware/google/Kconfig
++++ b/drivers/firmware/google/Kconfig
+@@ -21,7 +21,7 @@ config GOOGLE_SMI
+ 
+ config GOOGLE_COREBOOT_TABLE
+ 	tristate "Coreboot Table Access"
+-	depends on ACPI || OF
++	depends on HAS_IOMEM && (ACPI || OF)
+ 	help
+ 	  This option enables the coreboot_table module, which provides other
+ 	  firmware modules access to the coreboot table. The coreboot table
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 7db8066b19fd5..3f67bf774821d 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -749,12 +749,6 @@ int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare)
+ 	};
+ 	int ret;
+ 
+-	desc.args[0] = addr;
+-	desc.args[1] = size;
+-	desc.args[2] = spare;
+-	desc.arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL,
+-				     QCOM_SCM_VAL);
+-
+ 	ret = qcom_scm_call(__scm->dev, &desc, NULL);
+ 
+ 	/* the pg table has been initialized already, ignore the error */
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 29c0a616b3177..c4bf934e3553e 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -477,7 +477,7 @@ static int svc_normal_to_secure_thread(void *data)
+ 		case INTEL_SIP_SMC_RSU_ERROR:
+ 			pr_err("%s: STATUS_ERROR\n", __func__);
+ 			cbdata->status = BIT(SVC_STATUS_ERROR);
+-			cbdata->kaddr1 = NULL;
++			cbdata->kaddr1 = &res.a1;
+ 			cbdata->kaddr2 = NULL;
+ 			cbdata->kaddr3 = NULL;
+ 			pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata);
+diff --git a/drivers/firmware/sysfb_simplefb.c b/drivers/firmware/sysfb_simplefb.c
+index 303a491e520d1..757cc8b9f3de9 100644
+--- a/drivers/firmware/sysfb_simplefb.c
++++ b/drivers/firmware/sysfb_simplefb.c
+@@ -113,16 +113,21 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
+ 	sysfb_apply_efi_quirks(pd);
+ 
+ 	ret = platform_device_add_resources(pd, &res, 1);
+-	if (ret) {
+-		platform_device_put(pd);
+-		return ret;
+-	}
++	if (ret)
++		goto err_put_device;
+ 
+ 	ret = platform_device_add_data(pd, mode, sizeof(*mode));
+-	if (ret) {
+-		platform_device_put(pd);
+-		return ret;
+-	}
++	if (ret)
++		goto err_put_device;
++
++	ret = platform_device_add(pd);
++	if (ret)
++		goto err_put_device;
++
++	return 0;
++
++err_put_device:
++	platform_device_put(pd);
+ 
+-	return platform_device_add(pd);
++	return ret;
+ }
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index 8606e55c1721c..0bed2fab80558 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -542,25 +542,28 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ 		return rc;
+ 	}
+ 
+-	aspeed = devm_kzalloc(&pdev->dev, sizeof(*aspeed), GFP_KERNEL);
++	aspeed = kzalloc(sizeof(*aspeed), GFP_KERNEL);
+ 	if (!aspeed)
+ 		return -ENOMEM;
+ 
+ 	aspeed->dev = &pdev->dev;
+ 
+ 	aspeed->base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(aspeed->base))
+-		return PTR_ERR(aspeed->base);
++	if (IS_ERR(aspeed->base)) {
++		rc = PTR_ERR(aspeed->base);
++		goto err_free_aspeed;
++	}
+ 
+ 	aspeed->clk = devm_clk_get(aspeed->dev, NULL);
+ 	if (IS_ERR(aspeed->clk)) {
+ 		dev_err(aspeed->dev, "couldn't get clock\n");
+-		return PTR_ERR(aspeed->clk);
++		rc = PTR_ERR(aspeed->clk);
++		goto err_free_aspeed;
+ 	}
+ 	rc = clk_prepare_enable(aspeed->clk);
+ 	if (rc) {
+ 		dev_err(aspeed->dev, "couldn't enable clock\n");
+-		return rc;
++		goto err_free_aspeed;
+ 	}
+ 
+ 	rc = setup_cfam_reset(aspeed);
+@@ -595,7 +598,7 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ 	rc = opb_readl(aspeed, ctrl_base + FSI_MVER, &raw);
+ 	if (rc) {
+ 		dev_err(&pdev->dev, "failed to read hub version\n");
+-		return rc;
++		goto err_release;
+ 	}
+ 
+ 	reg = be32_to_cpu(raw);
+@@ -634,6 +637,8 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ 
+ err_release:
+ 	clk_disable_unprepare(aspeed->clk);
++err_free_aspeed:
++	kfree(aspeed);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/fsi/fsi-scom.c b/drivers/fsi/fsi-scom.c
+index da1486bb6a144..bcb756dc98663 100644
+--- a/drivers/fsi/fsi-scom.c
++++ b/drivers/fsi/fsi-scom.c
+@@ -145,7 +145,7 @@ static int put_indirect_scom_form0(struct scom_device *scom, uint64_t value,
+ 				   uint64_t addr, uint32_t *status)
+ {
+ 	uint64_t ind_data, ind_addr;
+-	int rc, retries, err = 0;
++	int rc, err;
+ 
+ 	if (value & ~XSCOM_DATA_IND_DATA)
+ 		return -EINVAL;
+@@ -156,19 +156,14 @@ static int put_indirect_scom_form0(struct scom_device *scom, uint64_t value,
+ 	if (rc || (*status & SCOM_STATUS_ANY_ERR))
+ 		return rc;
+ 
+-	for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) {
+-		rc = __get_scom(scom, &ind_data, addr, status);
+-		if (rc || (*status & SCOM_STATUS_ANY_ERR))
+-			return rc;
++	rc = __get_scom(scom, &ind_data, addr, status);
++	if (rc || (*status & SCOM_STATUS_ANY_ERR))
++		return rc;
+ 
+-		err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
+-		*status = err << SCOM_STATUS_PIB_RESP_SHIFT;
+-		if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED))
+-			return 0;
++	err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
++	*status = err << SCOM_STATUS_PIB_RESP_SHIFT;
+ 
+-		msleep(1);
+-	}
+-	return rc;
++	return 0;
+ }
+ 
+ static int put_indirect_scom_form1(struct scom_device *scom, uint64_t value,
+@@ -188,7 +183,7 @@ static int get_indirect_scom_form0(struct scom_device *scom, uint64_t *value,
+ 				   uint64_t addr, uint32_t *status)
+ {
+ 	uint64_t ind_data, ind_addr;
+-	int rc, retries, err = 0;
++	int rc, err;
+ 
+ 	ind_addr = addr & XSCOM_ADDR_DIRECT_PART;
+ 	ind_data = (addr & XSCOM_ADDR_INDIRECT_PART) | XSCOM_DATA_IND_READ;
+@@ -196,21 +191,15 @@ static int get_indirect_scom_form0(struct scom_device *scom, uint64_t *value,
+ 	if (rc || (*status & SCOM_STATUS_ANY_ERR))
+ 		return rc;
+ 
+-	for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) {
+-		rc = __get_scom(scom, &ind_data, addr, status);
+-		if (rc || (*status & SCOM_STATUS_ANY_ERR))
+-			return rc;
+-
+-		err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
+-		*status = err << SCOM_STATUS_PIB_RESP_SHIFT;
+-		*value = ind_data & XSCOM_DATA_IND_DATA;
++	rc = __get_scom(scom, &ind_data, addr, status);
++	if (rc || (*status & SCOM_STATUS_ANY_ERR))
++		return rc;
+ 
+-		if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED))
+-			return 0;
++	err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
++	*status = err << SCOM_STATUS_PIB_RESP_SHIFT;
++	*value = ind_data & XSCOM_DATA_IND_DATA;
+ 
+-		msleep(1);
+-	}
+-	return rc;
++	return 0;
+ }
+ 
+ static int raw_put_scom(struct scom_device *scom, uint64_t value,
+@@ -289,7 +278,7 @@ static int put_scom(struct scom_device *scom, uint64_t value,
+ 	int rc;
+ 
+ 	rc = raw_put_scom(scom, value, addr, &status);
+-	if (rc == -ENODEV)
++	if (rc)
+ 		return rc;
+ 
+ 	rc = handle_fsi2pib_status(scom, status);
+@@ -308,7 +297,7 @@ static int get_scom(struct scom_device *scom, uint64_t *value,
+ 	int rc;
+ 
+ 	rc = raw_get_scom(scom, value, addr, &status);
+-	if (rc == -ENODEV)
++	if (rc)
+ 		return rc;
+ 
+ 	rc = handle_fsi2pib_status(scom, status);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index df1f9b88a53f9..a09876bb7ec8b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -175,7 +175,7 @@ int amdgpu_connector_get_monitor_bpc(struct drm_connector *connector)
+ 
+ 			/* Check if bpc is within clock limit. Try to degrade gracefully otherwise */
+ 			if ((bpc == 12) && (mode_clock * 3/2 > max_tmds_clock)) {
+-				if ((connector->display_info.edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30) &&
++				if ((connector->display_info.edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30) &&
+ 				    (mode_clock * 5/4 <= max_tmds_clock))
+ 					bpc = 10;
+ 				else
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 694c3726e0f4d..6be480939d3f3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/console.h>
+ #include <linux/slab.h>
++#include <linux/pci.h>
+ 
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_probe_helper.h>
+@@ -2070,6 +2071,8 @@ out:
+  */
+ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
+ {
++	struct drm_device *dev = adev_to_drm(adev);
++	struct pci_dev *parent;
+ 	int i, r;
+ 
+ 	amdgpu_device_enable_virtual_display(adev);
+@@ -2134,6 +2137,18 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
+ 		break;
+ 	}
+ 
++	if (amdgpu_has_atpx() &&
++	    (amdgpu_is_atpx_hybrid() ||
++	     amdgpu_has_atpx_dgpu_power_cntl()) &&
++	    ((adev->flags & AMD_IS_APU) == 0) &&
++	    !pci_is_thunderbolt_attached(to_pci_dev(dev->dev)))
++		adev->flags |= AMD_IS_PX;
++
++	if (!(adev->flags & AMD_IS_APU)) {
++		parent = pci_upstream_bridge(adev->pdev);
++		adev->has_pr3 = parent ? pci_pr3_present(parent) : false;
++	}
++
+ 	amdgpu_amdkfd_device_probe(adev);
+ 
+ 	adev->pm.pp_feature = amdgpu_pp_feature_mask;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 09ad17944eb2e..704702fef2574 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -152,21 +152,10 @@ static void amdgpu_get_audio_func(struct amdgpu_device *adev)
+ int amdgpu_driver_load_kms(struct amdgpu_device *adev, unsigned long flags)
+ {
+ 	struct drm_device *dev;
+-	struct pci_dev *parent;
+ 	int r, acpi_status;
+ 
+ 	dev = adev_to_drm(adev);
+ 
+-	if (amdgpu_has_atpx() &&
+-	    (amdgpu_is_atpx_hybrid() ||
+-	     amdgpu_has_atpx_dgpu_power_cntl()) &&
+-	    ((flags & AMD_IS_APU) == 0) &&
+-	    !pci_is_thunderbolt_attached(to_pci_dev(dev->dev)))
+-		flags |= AMD_IS_PX;
+-
+-	parent = pci_upstream_bridge(adev->pdev);
+-	adev->has_pr3 = parent ? pci_pr3_present(parent) : false;
+-
+ 	/* amdgpu_device_init should report only fatal error
+ 	 * like memory allocation failure or iomapping failure,
+ 	 * or memory manager initialization failure, it must
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7a5bb5a3456a6..7cadb9e81d9d5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8030,6 +8030,9 @@ static void amdgpu_dm_connector_add_common_modes(struct drm_encoder *encoder,
+ 		mode = amdgpu_dm_create_common_mode(encoder,
+ 				common_modes[i].name, common_modes[i].w,
+ 				common_modes[i].h);
++		if (!mode)
++			continue;
++
+ 		drm_mode_probed_add(connector, mode);
+ 		amdgpu_dm_connector->num_modes++;
+ 	}
+@@ -10749,10 +10752,13 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm_crtc *crtc)
+ {
+ 	struct drm_connector *connector;
+-	struct drm_connector_state *conn_state;
++	struct drm_connector_state *conn_state, *old_conn_state;
+ 	struct amdgpu_dm_connector *aconnector = NULL;
+ 	int i;
+-	for_each_new_connector_in_state(state, connector, conn_state, i) {
++	for_each_oldnew_connector_in_state(state, connector, old_conn_state, conn_state, i) {
++		if (!conn_state->crtc)
++			conn_state = old_conn_state;
++
+ 		if (conn_state->crtc != crtc)
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+index ed54e1c819bed..a728087b3f3d6 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+@@ -266,14 +266,6 @@ static const struct irq_source_info_funcs vline0_irq_info_funcs = {
+ 		.funcs = &pflip_irq_info_funcs\
+ 	}
+ 
+-#define vupdate_int_entry(reg_num)\
+-	[DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\
+-		IRQ_REG_ENTRY(OTG, reg_num,\
+-			OTG_GLOBAL_SYNC_STATUS, VUPDATE_INT_EN,\
+-			OTG_GLOBAL_SYNC_STATUS, VUPDATE_EVENT_CLEAR),\
+-		.funcs = &vblank_irq_info_funcs\
+-	}
+-
+ /* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match semantic
+  * of DCE's DC_IRQ_SOURCE_VUPDATEx.
+  */
+@@ -402,12 +394,6 @@ irq_source_info_dcn21[DAL_IRQ_SOURCES_NUMBER] = {
+ 	dc_underflow_int_entry(6),
+ 	[DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+ 	[DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+-	vupdate_int_entry(0),
+-	vupdate_int_entry(1),
+-	vupdate_int_entry(2),
+-	vupdate_int_entry(3),
+-	vupdate_int_entry(4),
+-	vupdate_int_entry(5),
+ 	vupdate_no_lock_int_entry(0),
+ 	vupdate_no_lock_int_entry(1),
+ 	vupdate_no_lock_int_entry(2),
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index d31719b3418fa..8744bda594741 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2123,8 +2123,8 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
+ 		}
+ 	}
+ 
+-	/* setting should not be allowed from VF */
+-	if (amdgpu_sriov_vf(adev)) {
++	/* setting should not be allowed from VF if not in one VF mode */
++	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev)) {
+ 		dev_attr->attr.mode &= ~S_IWUGO;
+ 		dev_attr->store = NULL;
+ 	}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 9d7d64fdf410e..9b54c1b89ea40 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -138,7 +138,7 @@ int smu_get_dpm_freq_range(struct smu_context *smu,
+ 			   uint32_t *min,
+ 			   uint32_t *max)
+ {
+-	int ret = 0;
++	int ret = -ENOTSUPP;
+ 
+ 	if (!min && !max)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index 05e3abb5a0c9a..1b00dfda6e0d9 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -169,6 +169,7 @@
+ #define ADV7511_PACKET_ENABLE_SPARE2		BIT(1)
+ #define ADV7511_PACKET_ENABLE_SPARE1		BIT(0)
+ 
++#define ADV7535_REG_POWER2_HPD_OVERRIDE		BIT(6)
+ #define ADV7511_REG_POWER2_HPD_SRC_MASK		0xc0
+ #define ADV7511_REG_POWER2_HPD_SRC_BOTH		0x00
+ #define ADV7511_REG_POWER2_HPD_SRC_HPD		0x40
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 76555ae64e9ce..c02f3ec60b04c 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -351,11 +351,17 @@ static void __adv7511_power_on(struct adv7511 *adv7511)
+ 	 * from standby or are enabled. When the HPD goes low the adv7511 is
+ 	 * reset and the outputs are disabled which might cause the monitor to
+ 	 * go to standby again. To avoid this we ignore the HPD pin for the
+-	 * first few seconds after enabling the output.
++	 * first few seconds after enabling the output. On the other hand
++	 * adv7535 require to enable HPD Override bit for proper HPD.
+ 	 */
+-	regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+-			   ADV7511_REG_POWER2_HPD_SRC_MASK,
+-			   ADV7511_REG_POWER2_HPD_SRC_NONE);
++	if (adv7511->type == ADV7535)
++		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++				   ADV7535_REG_POWER2_HPD_OVERRIDE,
++				   ADV7535_REG_POWER2_HPD_OVERRIDE);
++	else
++		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++				   ADV7511_REG_POWER2_HPD_SRC_MASK,
++				   ADV7511_REG_POWER2_HPD_SRC_NONE);
+ }
+ 
+ static void adv7511_power_on(struct adv7511 *adv7511)
+@@ -375,6 +381,10 @@ static void adv7511_power_on(struct adv7511 *adv7511)
+ static void __adv7511_power_off(struct adv7511 *adv7511)
+ {
+ 	/* TODO: setup additional power down modes */
++	if (adv7511->type == ADV7535)
++		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++				   ADV7535_REG_POWER2_HPD_OVERRIDE, 0);
++
+ 	regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 			   ADV7511_POWER_POWER_DOWN,
+ 			   ADV7511_POWER_POWER_DOWN);
+@@ -672,9 +682,14 @@ adv7511_detect(struct adv7511 *adv7511, struct drm_connector *connector)
+ 			status = connector_status_disconnected;
+ 	} else {
+ 		/* Renable HPD sensing */
+-		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+-				   ADV7511_REG_POWER2_HPD_SRC_MASK,
+-				   ADV7511_REG_POWER2_HPD_SRC_BOTH);
++		if (adv7511->type == ADV7535)
++			regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++					   ADV7535_REG_POWER2_HPD_OVERRIDE,
++					   ADV7535_REG_POWER2_HPD_OVERRIDE);
++		else
++			regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++					   ADV7511_REG_POWER2_HPD_SRC_MASK,
++					   ADV7511_REG_POWER2_HPD_SRC_BOTH);
+ 	}
+ 
+ 	adv7511->status = status;
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 1a871f6b6822e..9b24210409267 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -791,7 +791,8 @@ static int segments_edid_read(struct anx7625_data *ctx,
+ static int sp_tx_edid_read(struct anx7625_data *ctx,
+ 			   u8 *pedid_blocks_buf)
+ {
+-	u8 offset, edid_pos;
++	u8 offset;
++	int edid_pos;
+ 	int count, blocks_num;
+ 	u8 pblock_buf[MAX_DPCD_BUFFER_SIZE];
+ 	u8 i, j;
+diff --git a/drivers/gpu/drm/bridge/cdns-dsi.c b/drivers/gpu/drm/bridge/cdns-dsi.c
+index d8a15c459b42c..829e1a1446567 100644
+--- a/drivers/gpu/drm/bridge/cdns-dsi.c
++++ b/drivers/gpu/drm/bridge/cdns-dsi.c
+@@ -1284,6 +1284,7 @@ static const struct of_device_id cdns_dsi_of_match[] = {
+ 	{ .compatible = "cdns,dsi" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, cdns_dsi_of_match);
+ 
+ static struct platform_driver cdns_dsi_platform_driver = {
+ 	.probe  = cdns_dsi_drm_probe,
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index af07eeb47ca02..6e484d836cfe2 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -1204,6 +1204,7 @@ static int nwl_dsi_probe(struct platform_device *pdev)
+ 
+ 	ret = nwl_dsi_select_input(dsi);
+ 	if (ret < 0) {
++		pm_runtime_disable(dev);
+ 		mipi_dsi_host_unregister(&dsi->dsi_host);
+ 		return ret;
+ 	}
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index 843265d7f1b12..ec7745c31da07 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -2120,7 +2120,7 @@ static void sii8620_init_rcp_input_dev(struct sii8620 *ctx)
+ 	if (ret) {
+ 		dev_err(ctx->dev, "Failed to register RC device\n");
+ 		ctx->error = ret;
+-		rc_free_device(ctx->rc_dev);
++		rc_free_device(rc_dev);
+ 		return;
+ 	}
+ 	ctx->rc_dev = rc_dev;
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index e1211a5b334ba..25d58dcfc87e1 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -2551,8 +2551,9 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
+ 	if (!output_fmts)
+ 		return NULL;
+ 
+-	/* If dw-hdmi is the only bridge, avoid negociating with ourselves */
+-	if (list_is_singular(&bridge->encoder->bridge_chain)) {
++	/* If dw-hdmi is the first or only bridge, avoid negociating with ourselves */
++	if (list_is_singular(&bridge->encoder->bridge_chain) ||
++	    list_is_first(&bridge->chain_node, &bridge->encoder->bridge_chain)) {
+ 		*num_output_fmts = 1;
+ 		output_fmts[0] = MEDIA_BUS_FMT_FIXED;
+ 
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+index e44e18a0112af..56c3fd08c6a0b 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+@@ -1199,6 +1199,7 @@ __dw_mipi_dsi_probe(struct platform_device *pdev,
+ 	ret = mipi_dsi_host_register(&dsi->dsi_host);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register MIPI host: %d\n", ret);
++		pm_runtime_disable(dev);
+ 		dw_mipi_dsi_debugfs_remove(dsi);
+ 		return ERR_PTR(ret);
+ 	}
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index f5f5de362ff2c..b8f5419e514ae 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -4848,7 +4848,8 @@ bool drm_detect_monitor_audio(struct edid *edid)
+ 	if (!edid_ext)
+ 		goto end;
+ 
+-	has_audio = ((edid_ext[3] & EDID_BASIC_AUDIO) != 0);
++	has_audio = (edid_ext[0] == CEA_EXT &&
++		    (edid_ext[3] & EDID_BASIC_AUDIO) != 0);
+ 
+ 	if (has_audio) {
+ 		DRM_DEBUG_KMS("Monitor has basic audio support\n");
+@@ -5075,21 +5076,21 @@ static void drm_parse_hdmi_deep_color_info(struct drm_connector *connector,
+ 
+ 	if (hdmi[6] & DRM_EDID_HDMI_DC_30) {
+ 		dc_bpc = 10;
+-		info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_30;
++		info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_30;
+ 		DRM_DEBUG("%s: HDMI sink does deep color 30.\n",
+ 			  connector->name);
+ 	}
+ 
+ 	if (hdmi[6] & DRM_EDID_HDMI_DC_36) {
+ 		dc_bpc = 12;
+-		info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_36;
++		info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_36;
+ 		DRM_DEBUG("%s: HDMI sink does deep color 36.\n",
+ 			  connector->name);
+ 	}
+ 
+ 	if (hdmi[6] & DRM_EDID_HDMI_DC_48) {
+ 		dc_bpc = 16;
+-		info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_48;
++		info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_48;
+ 		DRM_DEBUG("%s: HDMI sink does deep color 48.\n",
+ 			  connector->name);
+ 	}
+@@ -5104,16 +5105,9 @@ static void drm_parse_hdmi_deep_color_info(struct drm_connector *connector,
+ 		  connector->name, dc_bpc);
+ 	info->bpc = dc_bpc;
+ 
+-	/*
+-	 * Deep color support mandates RGB444 support for all video
+-	 * modes and forbids YCRCB422 support for all video modes per
+-	 * HDMI 1.3 spec.
+-	 */
+-	info->color_formats = DRM_COLOR_FORMAT_RGB444;
+-
+ 	/* YCRCB444 is optional according to spec. */
+ 	if (hdmi[6] & DRM_EDID_HDMI_DC_Y444) {
+-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
++		info->edid_hdmi_ycbcr444_dc_modes = info->edid_hdmi_rgb444_dc_modes;
+ 		DRM_DEBUG("%s: HDMI sink does YCRCB444 in deep color.\n",
+ 			  connector->name);
+ 	}
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 22bf690910b25..ed589e7182bb4 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -2346,6 +2346,7 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
+ 	fbi->fbops = &drm_fbdev_fb_ops;
+ 	fbi->screen_size = fb->height * fb->pitches[0];
+ 	fbi->fix.smem_len = fbi->screen_size;
++	fbi->flags = FBINFO_DEFAULT;
+ 
+ 	drm_fb_helper_fill_info(fbi, fb_helper, sizes);
+ 
+@@ -2353,19 +2354,21 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
+ 		fbi->screen_buffer = vzalloc(fbi->screen_size);
+ 		if (!fbi->screen_buffer)
+ 			return -ENOMEM;
++		fbi->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
+ 
+ 		fbi->fbdefio = &drm_fbdev_defio;
+-
+ 		fb_deferred_io_init(fbi);
+ 	} else {
+ 		/* buffer is mapped for HW framebuffer */
+ 		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+ 		if (ret)
+ 			return ret;
+-		if (map.is_iomem)
++		if (map.is_iomem) {
+ 			fbi->screen_base = map.vaddr_iomem;
+-		else
++		} else {
+ 			fbi->screen_buffer = map.vaddr;
++			fbi->flags |= FBINFO_VIRTFB;
++		}
+ 
+ 		/*
+ 		 * Shamelessly leak the physical address to user-space. As
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index c313a5b4549c4..7e48dcd1bee4d 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -853,12 +853,57 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
+ 					&args->handle);
+ }
+ 
++
++/*
++ * Try to flatten a dma_fence_chain into a dma_fence_array so that it can be
++ * added as timeline fence to a chain again.
++ */
++static int drm_syncobj_flatten_chain(struct dma_fence **f)
++{
++	struct dma_fence_chain *chain = to_dma_fence_chain(*f);
++	struct dma_fence *tmp, **fences;
++	struct dma_fence_array *array;
++	unsigned int count;
++
++	if (!chain)
++		return 0;
++
++	count = 0;
++	dma_fence_chain_for_each(tmp, &chain->base)
++		++count;
++
++	fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
++	if (!fences)
++		return -ENOMEM;
++
++	count = 0;
++	dma_fence_chain_for_each(tmp, &chain->base)
++		fences[count++] = dma_fence_get(tmp);
++
++	array = dma_fence_array_create(count, fences,
++				       dma_fence_context_alloc(1),
++				       1, false);
++	if (!array)
++		goto free_fences;
++
++	dma_fence_put(*f);
++	*f = &array->base;
++	return 0;
++
++free_fences:
++	while (count--)
++		dma_fence_put(fences[count]);
++
++	kfree(fences);
++	return -ENOMEM;
++}
++
+ static int drm_syncobj_transfer_to_timeline(struct drm_file *file_private,
+ 					    struct drm_syncobj_transfer *args)
+ {
+ 	struct drm_syncobj *timeline_syncobj = NULL;
+-	struct dma_fence *fence;
+ 	struct dma_fence_chain *chain;
++	struct dma_fence *fence;
+ 	int ret;
+ 
+ 	timeline_syncobj = drm_syncobj_find(file_private, args->dst_handle);
+@@ -869,16 +914,22 @@ static int drm_syncobj_transfer_to_timeline(struct drm_file *file_private,
+ 				     args->src_point, args->flags,
+ 				     &fence);
+ 	if (ret)
+-		goto err;
++		goto err_put_timeline;
++
++	ret = drm_syncobj_flatten_chain(&fence);
++	if (ret)
++		goto err_free_fence;
++
+ 	chain = dma_fence_chain_alloc();
+ 	if (!chain) {
+ 		ret = -ENOMEM;
+-		goto err1;
++		goto err_free_fence;
+ 	}
++
+ 	drm_syncobj_add_point(timeline_syncobj, chain, fence, args->dst_point);
+-err1:
++err_free_fence:
+ 	dma_fence_put(fence);
+-err:
++err_put_timeline:
+ 	drm_syncobj_put(timeline_syncobj);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
+index 5a2f96d39ac78..8f0f1c63a353e 100644
+--- a/drivers/gpu/drm/i915/display/intel_bw.c
++++ b/drivers/gpu/drm/i915/display/intel_bw.c
+@@ -819,7 +819,8 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
+ 	 * cause.
+ 	 */
+ 	if (!intel_can_enable_sagv(dev_priv, new_bw_state)) {
+-		allowed_points = BIT(max_bw_point);
++		allowed_points &= ADLS_PSF_PT_MASK;
++		allowed_points |= BIT(max_bw_point);
+ 		drm_dbg_kms(&dev_priv->drm, "No SAGV, using single QGV point %d\n",
+ 			    max_bw_point);
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index a552f05a67e58..3ee0f2fc9c210 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -4742,7 +4742,7 @@ intel_dp_hpd_pulse(struct intel_digital_port *dig_port, bool long_hpd)
+ 	struct intel_dp *intel_dp = &dig_port->dp;
+ 
+ 	if (dig_port->base.type == INTEL_OUTPUT_EDP &&
+-	    (long_hpd || !intel_pps_have_power(intel_dp))) {
++	    (long_hpd || !intel_pps_have_panel_power_or_vdd(intel_dp))) {
+ 		/*
+ 		 * vdd off can generate a long/short pulse on eDP which
+ 		 * would require vdd on to handle it, and thus we
+diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c b/drivers/gpu/drm/i915/display/intel_hdmi.c
+index 371736bdc01fa..2ca9b90557a2c 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
+@@ -1831,6 +1831,7 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
+ 		      bool has_hdmi_sink)
+ {
+ 	struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
++	enum phy phy = intel_port_to_phy(dev_priv, hdmi_to_dig_port(hdmi)->base.port);
+ 
+ 	if (clock < 25000)
+ 		return MODE_CLOCK_LOW;
+@@ -1851,6 +1852,14 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
+ 	if (IS_CHERRYVIEW(dev_priv) && clock > 216000 && clock < 240000)
+ 		return MODE_CLOCK_RANGE;
+ 
++	/* ICL+ combo PHY PLL can't generate 500-533.2 MHz */
++	if (intel_phy_is_combo(dev_priv, phy) && clock > 500000 && clock < 533200)
++		return MODE_CLOCK_RANGE;
++
++	/* ICL+ TC PHY PLL can't generate 500-532.8 MHz */
++	if (intel_phy_is_tc(dev_priv, phy) && clock > 500000 && clock < 532800)
++		return MODE_CLOCK_RANGE;
++
+ 	/*
+ 	 * SNPS PHYs' MPLLB table-based programming can only handle a fixed
+ 	 * set of link rates.
+@@ -1892,7 +1901,7 @@ static bool intel_hdmi_bpc_possible(struct drm_connector *connector,
+ 		if (ycbcr420_output)
+ 			return hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_36;
+ 		else
+-			return info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_36;
++			return info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_36;
+ 	case 10:
+ 		if (DISPLAY_VER(i915) < 11)
+ 			return false;
+@@ -1903,7 +1912,7 @@ static bool intel_hdmi_bpc_possible(struct drm_connector *connector,
+ 		if (ycbcr420_output)
+ 			return hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_30;
+ 		else
+-			return info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30;
++			return info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30;
+ 	case 8:
+ 		return true;
+ 	default:
+diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c
+index 4a2662838cd8d..df10b6898987a 100644
+--- a/drivers/gpu/drm/i915/display/intel_opregion.c
++++ b/drivers/gpu/drm/i915/display/intel_opregion.c
+@@ -375,6 +375,21 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * The port numbering and mapping here is bizarre. The now-obsolete
++	 * swsci spec supports ports numbered [0..4]. Port E is handled as a
++	 * special case, but port F and beyond are not. The functionality is
++	 * supposed to be obsolete for new platforms. Just bail out if the port
++	 * number is out of bounds after mapping.
++	 */
++	if (port > 4) {
++		drm_dbg_kms(&dev_priv->drm,
++			    "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
++			    intel_encoder->base.base.id, intel_encoder->base.name,
++			    port_name(intel_encoder->port), port);
++		return -EINVAL;
++	}
++
+ 	if (!enable)
+ 		parm |= 4 << 8;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.c b/drivers/gpu/drm/i915/display/intel_pps.c
+index e9c679bb1b2eb..5edd188d97479 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.c
++++ b/drivers/gpu/drm/i915/display/intel_pps.c
+@@ -1075,14 +1075,14 @@ static void intel_pps_vdd_sanitize(struct intel_dp *intel_dp)
+ 	edp_panel_vdd_schedule_off(intel_dp);
+ }
+ 
+-bool intel_pps_have_power(struct intel_dp *intel_dp)
++bool intel_pps_have_panel_power_or_vdd(struct intel_dp *intel_dp)
+ {
+ 	intel_wakeref_t wakeref;
+ 	bool have_power = false;
+ 
+ 	with_intel_pps_lock(intel_dp, wakeref) {
+-		have_power = edp_have_panel_power(intel_dp) &&
+-						  edp_have_panel_vdd(intel_dp);
++		have_power = edp_have_panel_power(intel_dp) ||
++			     edp_have_panel_vdd(intel_dp);
+ 	}
+ 
+ 	return have_power;
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.h b/drivers/gpu/drm/i915/display/intel_pps.h
+index fbb47f6f453e4..e64144659d31f 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.h
++++ b/drivers/gpu/drm/i915/display/intel_pps.h
+@@ -37,7 +37,7 @@ void intel_pps_vdd_on(struct intel_dp *intel_dp);
+ void intel_pps_on(struct intel_dp *intel_dp);
+ void intel_pps_off(struct intel_dp *intel_dp);
+ void intel_pps_vdd_off_sync(struct intel_dp *intel_dp);
+-bool intel_pps_have_power(struct intel_dp *intel_dp);
++bool intel_pps_have_panel_power_or_vdd(struct intel_dp *intel_dp);
+ void intel_pps_wait_power_cycle(struct intel_dp *intel_dp);
+ 
+ void intel_pps_init(struct intel_dp *intel_dp);
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 3ba8b717e176f..f7ade9c06386d 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -1793,6 +1793,9 @@ static void _intel_psr_post_plane_update(const struct intel_atomic_state *state,
+ 
+ 		mutex_lock(&psr->lock);
+ 
++		if (psr->sink_not_reliable)
++			goto exit;
++
+ 		drm_WARN_ON(&dev_priv->drm, psr->enabled && !crtc_state->active_planes);
+ 
+ 		/* Only enable if there is active planes */
+@@ -1803,6 +1806,7 @@ static void _intel_psr_post_plane_update(const struct intel_atomic_state *state,
+ 		if (crtc_state->crc_enabled && psr->enabled)
+ 			psr_force_hw_tracking_exit(intel_dp);
+ 
++exit:
+ 		mutex_unlock(&psr->lock);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 65fc6ff5f59da..9be5678becd0c 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -438,7 +438,7 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
+ 		return -EACCES;
+ 
+ 	addr -= area->vm_start;
+-	if (addr >= obj->base.size)
++	if (range_overflows_t(u64, addr, len, obj->base.size))
+ 		return -EINVAL;
+ 
+ 	i915_gem_ww_ctx_init(&ww, true);
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 7cbffd9a7be88..db95ed16749ef 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -3712,8 +3712,7 @@ skl_setup_sagv_block_time(struct drm_i915_private *dev_priv)
+ 		MISSING_CASE(DISPLAY_VER(dev_priv));
+ 	}
+ 
+-	/* Default to an unusable block time */
+-	dev_priv->sagv_block_time_us = -1;
++	dev_priv->sagv_block_time_us = 0;
+ }
+ 
+ /*
+@@ -5634,7 +5633,7 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
+ 	result->min_ddb_alloc = max(min_ddb_alloc, blocks) + 1;
+ 	result->enable = true;
+ 
+-	if (DISPLAY_VER(dev_priv) < 12)
++	if (DISPLAY_VER(dev_priv) < 12 && dev_priv->sagv_block_time_us)
+ 		result->can_sagv = latency >= dev_priv->sagv_block_time_us;
+ }
+ 
+@@ -5665,7 +5664,10 @@ static void tgl_compute_sagv_wm(const struct intel_crtc_state *crtc_state,
+ 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
+ 	struct skl_wm_level *sagv_wm = &plane_wm->sagv.wm0;
+ 	struct skl_wm_level *levels = plane_wm->wm;
+-	unsigned int latency = dev_priv->wm.skl_latency[0] + dev_priv->sagv_block_time_us;
++	unsigned int latency = 0;
++
++	if (dev_priv->sagv_block_time_us)
++		latency = dev_priv->sagv_block_time_us + dev_priv->wm.skl_latency[0];
+ 
+ 	skl_compute_plane_wm(crtc_state, 0, latency,
+ 			     wm_params, &levels[0],
+diff --git a/drivers/gpu/drm/meson/Makefile b/drivers/gpu/drm/meson/Makefile
+index 28a519cdf66b8..523fce45f16ba 100644
+--- a/drivers/gpu/drm/meson/Makefile
++++ b/drivers/gpu/drm/meson/Makefile
+@@ -2,6 +2,7 @@
+ meson-drm-y := meson_drv.o meson_plane.o meson_crtc.o meson_venc_cvbs.o
+ meson-drm-y += meson_viu.o meson_vpp.o meson_venc.o meson_vclk.o meson_overlay.o
+ meson-drm-y += meson_rdma.o meson_osd_afbcd.o
++meson-drm-y += meson_encoder_hdmi.o
+ 
+ obj-$(CONFIG_DRM_MESON) += meson-drm.o
+ obj-$(CONFIG_DRM_MESON_DW_HDMI) += meson_dw_hdmi.o
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 7f41a33592c8a..c98525d60df5b 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -32,6 +32,7 @@
+ #include "meson_osd_afbcd.h"
+ #include "meson_registers.h"
+ #include "meson_venc_cvbs.h"
++#include "meson_encoder_hdmi.h"
+ #include "meson_viu.h"
+ #include "meson_vpp.h"
+ #include "meson_rdma.h"
+@@ -301,38 +302,42 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 	if (priv->afbcd.ops) {
+ 		ret = priv->afbcd.ops->init(priv);
+ 		if (ret)
+-			return ret;
++			goto free_drm;
+ 	}
+ 
+ 	/* Encoder Initialization */
+ 
+ 	ret = meson_venc_cvbs_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto exit_afbcd;
+ 
+ 	if (has_components) {
+ 		ret = component_bind_all(drm->dev, drm);
+ 		if (ret) {
+ 			dev_err(drm->dev, "Couldn't bind all components\n");
+-			goto free_drm;
++			goto exit_afbcd;
+ 		}
+ 	}
+ 
++	ret = meson_encoder_hdmi_init(priv);
++	if (ret)
++		goto exit_afbcd;
++
+ 	ret = meson_plane_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto exit_afbcd;
+ 
+ 	ret = meson_overlay_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto exit_afbcd;
+ 
+ 	ret = meson_crtc_create(priv);
+ 	if (ret)
+-		goto free_drm;
++		goto exit_afbcd;
+ 
+ 	ret = request_irq(priv->vsync_irq, meson_irq, 0, drm->driver->name, drm);
+ 	if (ret)
+-		goto free_drm;
++		goto exit_afbcd;
+ 
+ 	drm_mode_config_reset(drm);
+ 
+@@ -350,6 +355,9 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 
+ uninstall_irq:
+ 	free_irq(priv->vsync_irq, drm);
++exit_afbcd:
++	if (priv->afbcd.ops)
++		priv->afbcd.ops->exit(priv);
+ free_drm:
+ 	drm_dev_put(drm);
+ 
+@@ -380,10 +388,8 @@ static void meson_drv_unbind(struct device *dev)
+ 	free_irq(priv->vsync_irq, drm);
+ 	drm_dev_put(drm);
+ 
+-	if (priv->afbcd.ops) {
+-		priv->afbcd.ops->reset(priv);
+-		meson_rdma_free(priv);
+-	}
++	if (priv->afbcd.ops)
++		priv->afbcd.ops->exit(priv);
+ }
+ 
+ static const struct component_master_ops meson_drv_master_ops = {
+diff --git a/drivers/gpu/drm/meson/meson_dw_hdmi.c b/drivers/gpu/drm/meson/meson_dw_hdmi.c
+index 0afbd1e70bfc3..fb540a503efed 100644
+--- a/drivers/gpu/drm/meson/meson_dw_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_dw_hdmi.c
+@@ -22,14 +22,11 @@
+ #include <drm/drm_probe_helper.h>
+ #include <drm/drm_print.h>
+ 
+-#include <linux/media-bus-format.h>
+ #include <linux/videodev2.h>
+ 
+ #include "meson_drv.h"
+ #include "meson_dw_hdmi.h"
+ #include "meson_registers.h"
+-#include "meson_vclk.h"
+-#include "meson_venc.h"
+ 
+ #define DRIVER_NAME "meson-dw-hdmi"
+ #define DRIVER_DESC "Amlogic Meson HDMI-TX DRM driver"
+@@ -135,8 +132,6 @@ struct meson_dw_hdmi_data {
+ };
+ 
+ struct meson_dw_hdmi {
+-	struct drm_encoder encoder;
+-	struct drm_bridge bridge;
+ 	struct dw_hdmi_plat_data dw_plat_data;
+ 	struct meson_drm *priv;
+ 	struct device *dev;
+@@ -148,12 +143,8 @@ struct meson_dw_hdmi {
+ 	struct regulator *hdmi_supply;
+ 	u32 irq_stat;
+ 	struct dw_hdmi *hdmi;
+-	unsigned long output_bus_fmt;
++	struct drm_bridge *bridge;
+ };
+-#define encoder_to_meson_dw_hdmi(x) \
+-	container_of(x, struct meson_dw_hdmi, encoder)
+-#define bridge_to_meson_dw_hdmi(x) \
+-	container_of(x, struct meson_dw_hdmi, bridge)
+ 
+ static inline int dw_hdmi_is_compatible(struct meson_dw_hdmi *dw_hdmi,
+ 					const char *compat)
+@@ -295,14 +286,14 @@ static inline void dw_hdmi_dwc_write_bits(struct meson_dw_hdmi *dw_hdmi,
+ 
+ /* Setup PHY bandwidth modes */
+ static void meson_hdmi_phy_setup_mode(struct meson_dw_hdmi *dw_hdmi,
+-				      const struct drm_display_mode *mode)
++				      const struct drm_display_mode *mode,
++				      bool mode_is_420)
+ {
+ 	struct meson_drm *priv = dw_hdmi->priv;
+ 	unsigned int pixel_clock = mode->clock;
+ 
+ 	/* For 420, pixel clock is half unlike venc clock */
+-	if (dw_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
+-		pixel_clock /= 2;
++	if (mode_is_420) pixel_clock /= 2;
+ 
+ 	if (dw_hdmi_is_compatible(dw_hdmi, "amlogic,meson-gxl-dw-hdmi") ||
+ 	    dw_hdmi_is_compatible(dw_hdmi, "amlogic,meson-gxm-dw-hdmi")) {
+@@ -374,68 +365,25 @@ static inline void meson_dw_hdmi_phy_reset(struct meson_dw_hdmi *dw_hdmi)
+ 	mdelay(2);
+ }
+ 
+-static void dw_hdmi_set_vclk(struct meson_dw_hdmi *dw_hdmi,
+-			     const struct drm_display_mode *mode)
+-{
+-	struct meson_drm *priv = dw_hdmi->priv;
+-	int vic = drm_match_cea_mode(mode);
+-	unsigned int phy_freq;
+-	unsigned int vclk_freq;
+-	unsigned int venc_freq;
+-	unsigned int hdmi_freq;
+-
+-	vclk_freq = mode->clock;
+-
+-	/* For 420, pixel clock is half unlike venc clock */
+-	if (dw_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
+-		vclk_freq /= 2;
+-
+-	/* TMDS clock is pixel_clock * 10 */
+-	phy_freq = vclk_freq * 10;
+-
+-	if (!vic) {
+-		meson_vclk_setup(priv, MESON_VCLK_TARGET_DMT, phy_freq,
+-				 vclk_freq, vclk_freq, vclk_freq, false);
+-		return;
+-	}
+-
+-	/* 480i/576i needs global pixel doubling */
+-	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+-		vclk_freq *= 2;
+-
+-	venc_freq = vclk_freq;
+-	hdmi_freq = vclk_freq;
+-
+-	/* VENC double pixels for 1080i, 720p and YUV420 modes */
+-	if (meson_venc_hdmi_venc_repeat(vic) ||
+-	    dw_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
+-		venc_freq *= 2;
+-
+-	vclk_freq = max(venc_freq, hdmi_freq);
+-
+-	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+-		venc_freq /= 2;
+-
+-	DRM_DEBUG_DRIVER("vclk:%d phy=%d venc=%d hdmi=%d enci=%d\n",
+-		phy_freq, vclk_freq, venc_freq, hdmi_freq,
+-		priv->venc.hdmi_use_enci);
+-
+-	meson_vclk_setup(priv, MESON_VCLK_TARGET_HDMI, phy_freq, vclk_freq,
+-			 venc_freq, hdmi_freq, priv->venc.hdmi_use_enci);
+-}
+-
+ static int dw_hdmi_phy_init(struct dw_hdmi *hdmi, void *data,
+ 			    const struct drm_display_info *display,
+ 			    const struct drm_display_mode *mode)
+ {
+ 	struct meson_dw_hdmi *dw_hdmi = (struct meson_dw_hdmi *)data;
++	bool is_hdmi2_sink = display->hdmi.scdc.supported;
+ 	struct meson_drm *priv = dw_hdmi->priv;
+ 	unsigned int wr_clk =
+ 		readl_relaxed(priv->io_base + _REG(VPU_HDMI_SETTING));
++	bool mode_is_420 = false;
+ 
+ 	DRM_DEBUG_DRIVER("\"%s\" div%d\n", mode->name,
+ 			 mode->clock > 340000 ? 40 : 10);
+ 
++	if (drm_mode_is_420_only(display, mode) ||
++	    (!is_hdmi2_sink &&
++	     drm_mode_is_420_also(display, mode)))
++		mode_is_420 = true;
++
+ 	/* Enable clocks */
+ 	regmap_update_bits(priv->hhi, HHI_HDMI_CLK_CNTL, 0xffff, 0x100);
+ 
+@@ -457,8 +405,7 @@ static int dw_hdmi_phy_init(struct dw_hdmi *hdmi, void *data,
+ 	dw_hdmi->data->top_write(dw_hdmi, HDMITX_TOP_BIST_CNTL, BIT(12));
+ 
+ 	/* TMDS pattern setup */
+-	if (mode->clock > 340000 &&
+-	    dw_hdmi->output_bus_fmt == MEDIA_BUS_FMT_YUV8_1X24) {
++	if (mode->clock > 340000 && !mode_is_420) {
+ 		dw_hdmi->data->top_write(dw_hdmi, HDMITX_TOP_TMDS_CLK_PTTN_01,
+ 				  0);
+ 		dw_hdmi->data->top_write(dw_hdmi, HDMITX_TOP_TMDS_CLK_PTTN_23,
+@@ -476,7 +423,7 @@ static int dw_hdmi_phy_init(struct dw_hdmi *hdmi, void *data,
+ 	dw_hdmi->data->top_write(dw_hdmi, HDMITX_TOP_TMDS_CLK_PTTN_CNTL, 0x2);
+ 
+ 	/* Setup PHY parameters */
+-	meson_hdmi_phy_setup_mode(dw_hdmi, mode);
++	meson_hdmi_phy_setup_mode(dw_hdmi, mode, mode_is_420);
+ 
+ 	/* Setup PHY */
+ 	regmap_update_bits(priv->hhi, HHI_HDMI_PHY_CNTL1,
+@@ -622,214 +569,15 @@ static irqreturn_t dw_hdmi_top_thread_irq(int irq, void *dev_id)
+ 		dw_hdmi_setup_rx_sense(dw_hdmi->hdmi, hpd_connected,
+ 				       hpd_connected);
+ 
+-		drm_helper_hpd_irq_event(dw_hdmi->encoder.dev);
++		drm_helper_hpd_irq_event(dw_hdmi->bridge->dev);
++		drm_bridge_hpd_notify(dw_hdmi->bridge,
++				      hpd_connected ? connector_status_connected
++						    : connector_status_disconnected);
+ 	}
+ 
+ 	return IRQ_HANDLED;
+ }
+ 
+-static enum drm_mode_status
+-dw_hdmi_mode_valid(struct dw_hdmi *hdmi, void *data,
+-		   const struct drm_display_info *display_info,
+-		   const struct drm_display_mode *mode)
+-{
+-	struct meson_dw_hdmi *dw_hdmi = data;
+-	struct meson_drm *priv = dw_hdmi->priv;
+-	bool is_hdmi2_sink = display_info->hdmi.scdc.supported;
+-	unsigned int phy_freq;
+-	unsigned int vclk_freq;
+-	unsigned int venc_freq;
+-	unsigned int hdmi_freq;
+-	int vic = drm_match_cea_mode(mode);
+-	enum drm_mode_status status;
+-
+-	DRM_DEBUG_DRIVER("Modeline " DRM_MODE_FMT "\n", DRM_MODE_ARG(mode));
+-
+-	/* If sink does not support 540MHz, reject the non-420 HDMI2 modes */
+-	if (display_info->max_tmds_clock &&
+-	    mode->clock > display_info->max_tmds_clock &&
+-	    !drm_mode_is_420_only(display_info, mode) &&
+-	    !drm_mode_is_420_also(display_info, mode))
+-		return MODE_BAD;
+-
+-	/* Check against non-VIC supported modes */
+-	if (!vic) {
+-		status = meson_venc_hdmi_supported_mode(mode);
+-		if (status != MODE_OK)
+-			return status;
+-
+-		return meson_vclk_dmt_supported_freq(priv, mode->clock);
+-	/* Check against supported VIC modes */
+-	} else if (!meson_venc_hdmi_supported_vic(vic))
+-		return MODE_BAD;
+-
+-	vclk_freq = mode->clock;
+-
+-	/* For 420, pixel clock is half unlike venc clock */
+-	if (drm_mode_is_420_only(display_info, mode) ||
+-	    (!is_hdmi2_sink &&
+-	     drm_mode_is_420_also(display_info, mode)))
+-		vclk_freq /= 2;
+-
+-	/* TMDS clock is pixel_clock * 10 */
+-	phy_freq = vclk_freq * 10;
+-
+-	/* 480i/576i needs global pixel doubling */
+-	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+-		vclk_freq *= 2;
+-
+-	venc_freq = vclk_freq;
+-	hdmi_freq = vclk_freq;
+-
+-	/* VENC double pixels for 1080i, 720p and YUV420 modes */
+-	if (meson_venc_hdmi_venc_repeat(vic) ||
+-	    drm_mode_is_420_only(display_info, mode) ||
+-	    (!is_hdmi2_sink &&
+-	     drm_mode_is_420_also(display_info, mode)))
+-		venc_freq *= 2;
+-
+-	vclk_freq = max(venc_freq, hdmi_freq);
+-
+-	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+-		venc_freq /= 2;
+-
+-	dev_dbg(dw_hdmi->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n",
+-		__func__, phy_freq, vclk_freq, venc_freq, hdmi_freq);
+-
+-	return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
+-}
+-
+-/* Encoder */
+-
+-static const u32 meson_dw_hdmi_out_bus_fmts[] = {
+-	MEDIA_BUS_FMT_YUV8_1X24,
+-	MEDIA_BUS_FMT_UYYVYY8_0_5X24,
+-};
+-
+-static void meson_venc_hdmi_encoder_destroy(struct drm_encoder *encoder)
+-{
+-	drm_encoder_cleanup(encoder);
+-}
+-
+-static const struct drm_encoder_funcs meson_venc_hdmi_encoder_funcs = {
+-	.destroy        = meson_venc_hdmi_encoder_destroy,
+-};
+-
+-static u32 *
+-meson_venc_hdmi_encoder_get_inp_bus_fmts(struct drm_bridge *bridge,
+-					struct drm_bridge_state *bridge_state,
+-					struct drm_crtc_state *crtc_state,
+-					struct drm_connector_state *conn_state,
+-					u32 output_fmt,
+-					unsigned int *num_input_fmts)
+-{
+-	u32 *input_fmts = NULL;
+-	int i;
+-
+-	*num_input_fmts = 0;
+-
+-	for (i = 0 ; i < ARRAY_SIZE(meson_dw_hdmi_out_bus_fmts) ; ++i) {
+-		if (output_fmt == meson_dw_hdmi_out_bus_fmts[i]) {
+-			*num_input_fmts = 1;
+-			input_fmts = kcalloc(*num_input_fmts,
+-					     sizeof(*input_fmts),
+-					     GFP_KERNEL);
+-			if (!input_fmts)
+-				return NULL;
+-
+-			input_fmts[0] = output_fmt;
+-
+-			break;
+-		}
+-	}
+-
+-	return input_fmts;
+-}
+-
+-static int meson_venc_hdmi_encoder_atomic_check(struct drm_bridge *bridge,
+-					struct drm_bridge_state *bridge_state,
+-					struct drm_crtc_state *crtc_state,
+-					struct drm_connector_state *conn_state)
+-{
+-	struct meson_dw_hdmi *dw_hdmi = bridge_to_meson_dw_hdmi(bridge);
+-
+-	dw_hdmi->output_bus_fmt = bridge_state->output_bus_cfg.format;
+-
+-	DRM_DEBUG_DRIVER("output_bus_fmt %lx\n", dw_hdmi->output_bus_fmt);
+-
+-	return 0;
+-}
+-
+-static void meson_venc_hdmi_encoder_disable(struct drm_bridge *bridge)
+-{
+-	struct meson_dw_hdmi *dw_hdmi = bridge_to_meson_dw_hdmi(bridge);
+-	struct meson_drm *priv = dw_hdmi->priv;
+-
+-	DRM_DEBUG_DRIVER("\n");
+-
+-	writel_bits_relaxed(0x3, 0,
+-			    priv->io_base + _REG(VPU_HDMI_SETTING));
+-
+-	writel_relaxed(0, priv->io_base + _REG(ENCI_VIDEO_EN));
+-	writel_relaxed(0, priv->io_base + _REG(ENCP_VIDEO_EN));
+-}
+-
+-static void meson_venc_hdmi_encoder_enable(struct drm_bridge *bridge)
+-{
+-	struct meson_dw_hdmi *dw_hdmi = bridge_to_meson_dw_hdmi(bridge);
+-	struct meson_drm *priv = dw_hdmi->priv;
+-
+-	DRM_DEBUG_DRIVER("%s\n", priv->venc.hdmi_use_enci ? "VENCI" : "VENCP");
+-
+-	if (priv->venc.hdmi_use_enci)
+-		writel_relaxed(1, priv->io_base + _REG(ENCI_VIDEO_EN));
+-	else
+-		writel_relaxed(1, priv->io_base + _REG(ENCP_VIDEO_EN));
+-}
+-
+-static void meson_venc_hdmi_encoder_mode_set(struct drm_bridge *bridge,
+-				   const struct drm_display_mode *mode,
+-				   const struct drm_display_mode *adjusted_mode)
+-{
+-	struct meson_dw_hdmi *dw_hdmi = bridge_to_meson_dw_hdmi(bridge);
+-	struct meson_drm *priv = dw_hdmi->priv;
+-	int vic = drm_match_cea_mode(mode);
+-	unsigned int ycrcb_map = VPU_HDMI_OUTPUT_CBYCR;
+-	bool yuv420_mode = false;
+-
+-	DRM_DEBUG_DRIVER("\"%s\" vic %d\n", mode->name, vic);
+-
+-	if (dw_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24) {
+-		ycrcb_map = VPU_HDMI_OUTPUT_CRYCB;
+-		yuv420_mode = true;
+-	}
+-
+-	/* VENC + VENC-DVI Mode setup */
+-	meson_venc_hdmi_mode_set(priv, vic, ycrcb_map, yuv420_mode, mode);
+-
+-	/* VCLK Set clock */
+-	dw_hdmi_set_vclk(dw_hdmi, mode);
+-
+-	if (dw_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
+-		/* Setup YUV420 to HDMI-TX, no 10bit diphering */
+-		writel_relaxed(2 | (2 << 2),
+-			       priv->io_base + _REG(VPU_HDMI_FMT_CTRL));
+-	else
+-		/* Setup YUV444 to HDMI-TX, no 10bit diphering */
+-		writel_relaxed(0, priv->io_base + _REG(VPU_HDMI_FMT_CTRL));
+-}
+-
+-static const struct drm_bridge_funcs meson_venc_hdmi_encoder_bridge_funcs = {
+-	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+-	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
+-	.atomic_get_input_bus_fmts = meson_venc_hdmi_encoder_get_inp_bus_fmts,
+-	.atomic_reset = drm_atomic_helper_bridge_reset,
+-	.atomic_check = meson_venc_hdmi_encoder_atomic_check,
+-	.enable	= meson_venc_hdmi_encoder_enable,
+-	.disable = meson_venc_hdmi_encoder_disable,
+-	.mode_set = meson_venc_hdmi_encoder_mode_set,
+-};
+-
+ /* DW HDMI Regmap */
+ 
+ static int meson_dw_hdmi_reg_read(void *context, unsigned int reg,
+@@ -876,28 +624,6 @@ static const struct meson_dw_hdmi_data meson_dw_hdmi_g12a_data = {
+ 	.dwc_write = dw_hdmi_g12a_dwc_write,
+ };
+ 
+-static bool meson_hdmi_connector_is_available(struct device *dev)
+-{
+-	struct device_node *ep, *remote;
+-
+-	/* HDMI Connector is on the second port, first endpoint */
+-	ep = of_graph_get_endpoint_by_regs(dev->of_node, 1, 0);
+-	if (!ep)
+-		return false;
+-
+-	/* If the endpoint node exists, consider it enabled */
+-	remote = of_graph_get_remote_port(ep);
+-	if (remote) {
+-		of_node_put(ep);
+-		return true;
+-	}
+-
+-	of_node_put(ep);
+-	of_node_put(remote);
+-
+-	return false;
+-}
+-
+ static void meson_dw_hdmi_init(struct meson_dw_hdmi *meson_dw_hdmi)
+ {
+ 	struct meson_drm *priv = meson_dw_hdmi->priv;
+@@ -976,18 +702,11 @@ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 	struct drm_device *drm = data;
+ 	struct meson_drm *priv = drm->dev_private;
+ 	struct dw_hdmi_plat_data *dw_plat_data;
+-	struct drm_bridge *next_bridge;
+-	struct drm_encoder *encoder;
+ 	int irq;
+ 	int ret;
+ 
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+-	if (!meson_hdmi_connector_is_available(dev)) {
+-		dev_info(drm->dev, "HDMI Output connector not available\n");
+-		return -ENODEV;
+-	}
+-
+ 	match = of_device_get_match_data(&pdev->dev);
+ 	if (!match) {
+ 		dev_err(&pdev->dev, "failed to get match data\n");
+@@ -1003,7 +722,6 @@ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 	meson_dw_hdmi->dev = dev;
+ 	meson_dw_hdmi->data = match;
+ 	dw_plat_data = &meson_dw_hdmi->dw_plat_data;
+-	encoder = &meson_dw_hdmi->encoder;
+ 
+ 	meson_dw_hdmi->hdmi_supply = devm_regulator_get_optional(dev, "hdmi");
+ 	if (IS_ERR(meson_dw_hdmi->hdmi_supply)) {
+@@ -1074,28 +792,11 @@ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 		return ret;
+ 	}
+ 
+-	/* Encoder */
+-
+-	ret = drm_encoder_init(drm, encoder, &meson_venc_hdmi_encoder_funcs,
+-			       DRM_MODE_ENCODER_TMDS, "meson_hdmi");
+-	if (ret) {
+-		dev_err(priv->dev, "Failed to init HDMI encoder\n");
+-		return ret;
+-	}
+-
+-	meson_dw_hdmi->bridge.funcs = &meson_venc_hdmi_encoder_bridge_funcs;
+-	drm_bridge_attach(encoder, &meson_dw_hdmi->bridge, NULL, 0);
+-
+-	encoder->possible_crtcs = BIT(0);
+-
+ 	meson_dw_hdmi_init(meson_dw_hdmi);
+ 
+-	DRM_DEBUG_DRIVER("encoder initialized\n");
+-
+ 	/* Bridge / Connector */
+ 
+ 	dw_plat_data->priv_data = meson_dw_hdmi;
+-	dw_plat_data->mode_valid = dw_hdmi_mode_valid;
+ 	dw_plat_data->phy_ops = &meson_dw_hdmi_phy_ops;
+ 	dw_plat_data->phy_name = "meson_dw_hdmi_phy";
+ 	dw_plat_data->phy_data = meson_dw_hdmi;
+@@ -1110,15 +811,11 @@ static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
+ 
+ 	platform_set_drvdata(pdev, meson_dw_hdmi);
+ 
+-	meson_dw_hdmi->hdmi = dw_hdmi_probe(pdev,
+-					    &meson_dw_hdmi->dw_plat_data);
++	meson_dw_hdmi->hdmi = dw_hdmi_probe(pdev, &meson_dw_hdmi->dw_plat_data);
+ 	if (IS_ERR(meson_dw_hdmi->hdmi))
+ 		return PTR_ERR(meson_dw_hdmi->hdmi);
+ 
+-	next_bridge = of_drm_find_bridge(pdev->dev.of_node);
+-	if (next_bridge)
+-		drm_bridge_attach(encoder, next_bridge,
+-				  &meson_dw_hdmi->bridge, 0);
++	meson_dw_hdmi->bridge = of_drm_find_bridge(pdev->dev.of_node);
+ 
+ 	DRM_DEBUG_DRIVER("HDMI controller initialized\n");
+ 
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+new file mode 100644
+index 0000000000000..db332fa4cd548
+--- /dev/null
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -0,0 +1,370 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * Copyright (C) 2016 BayLibre, SAS
++ * Author: Neil Armstrong <narmstrong@baylibre.com>
++ * Copyright (C) 2015 Amlogic, Inc. All rights reserved.
++ */
++
++#include <linux/clk.h>
++#include <linux/component.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of_device.h>
++#include <linux/of_graph.h>
++#include <linux/regulator/consumer.h>
++#include <linux/reset.h>
++
++#include <drm/drm_atomic_helper.h>
++#include <drm/drm_bridge.h>
++#include <drm/drm_device.h>
++#include <drm/drm_edid.h>
++#include <drm/drm_probe_helper.h>
++#include <drm/drm_simple_kms_helper.h>
++
++#include <linux/media-bus-format.h>
++#include <linux/videodev2.h>
++
++#include "meson_drv.h"
++#include "meson_registers.h"
++#include "meson_vclk.h"
++#include "meson_venc.h"
++#include "meson_encoder_hdmi.h"
++
++struct meson_encoder_hdmi {
++	struct drm_encoder encoder;
++	struct drm_bridge bridge;
++	struct drm_bridge *next_bridge;
++	struct meson_drm *priv;
++	unsigned long output_bus_fmt;
++};
++
++#define bridge_to_meson_encoder_hdmi(x) \
++	container_of(x, struct meson_encoder_hdmi, bridge)
++
++static int meson_encoder_hdmi_attach(struct drm_bridge *bridge,
++				     enum drm_bridge_attach_flags flags)
++{
++	struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
++
++	return drm_bridge_attach(bridge->encoder, encoder_hdmi->next_bridge,
++				 &encoder_hdmi->bridge, flags);
++}
++
++static void meson_encoder_hdmi_set_vclk(struct meson_encoder_hdmi *encoder_hdmi,
++					const struct drm_display_mode *mode)
++{
++	struct meson_drm *priv = encoder_hdmi->priv;
++	int vic = drm_match_cea_mode(mode);
++	unsigned int phy_freq;
++	unsigned int vclk_freq;
++	unsigned int venc_freq;
++	unsigned int hdmi_freq;
++
++	vclk_freq = mode->clock;
++
++	/* For 420, pixel clock is half unlike venc clock */
++	if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
++		vclk_freq /= 2;
++
++	/* TMDS clock is pixel_clock * 10 */
++	phy_freq = vclk_freq * 10;
++
++	if (!vic) {
++		meson_vclk_setup(priv, MESON_VCLK_TARGET_DMT, phy_freq,
++				 vclk_freq, vclk_freq, vclk_freq, false);
++		return;
++	}
++
++	/* 480i/576i needs global pixel doubling */
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
++		vclk_freq *= 2;
++
++	venc_freq = vclk_freq;
++	hdmi_freq = vclk_freq;
++
++	/* VENC double pixels for 1080i, 720p and YUV420 modes */
++	if (meson_venc_hdmi_venc_repeat(vic) ||
++	    encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
++		venc_freq *= 2;
++
++	vclk_freq = max(venc_freq, hdmi_freq);
++
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
++		venc_freq /= 2;
++
++	dev_dbg(priv->dev, "vclk:%d phy=%d venc=%d hdmi=%d enci=%d\n",
++		phy_freq, vclk_freq, venc_freq, hdmi_freq,
++		priv->venc.hdmi_use_enci);
++
++	meson_vclk_setup(priv, MESON_VCLK_TARGET_HDMI, phy_freq, vclk_freq,
++			 venc_freq, hdmi_freq, priv->venc.hdmi_use_enci);
++}
++
++static enum drm_mode_status meson_encoder_hdmi_mode_valid(struct drm_bridge *bridge,
++					const struct drm_display_info *display_info,
++					const struct drm_display_mode *mode)
++{
++	struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
++	struct meson_drm *priv = encoder_hdmi->priv;
++	bool is_hdmi2_sink = display_info->hdmi.scdc.supported;
++	unsigned int phy_freq;
++	unsigned int vclk_freq;
++	unsigned int venc_freq;
++	unsigned int hdmi_freq;
++	int vic = drm_match_cea_mode(mode);
++	enum drm_mode_status status;
++
++	dev_dbg(priv->dev, "Modeline " DRM_MODE_FMT "\n", DRM_MODE_ARG(mode));
++
++	/* If sink does not support 540MHz, reject the non-420 HDMI2 modes */
++	if (display_info->max_tmds_clock &&
++	    mode->clock > display_info->max_tmds_clock &&
++	    !drm_mode_is_420_only(display_info, mode) &&
++	    !drm_mode_is_420_also(display_info, mode))
++		return MODE_BAD;
++
++	/* Check against non-VIC supported modes */
++	if (!vic) {
++		status = meson_venc_hdmi_supported_mode(mode);
++		if (status != MODE_OK)
++			return status;
++
++		return meson_vclk_dmt_supported_freq(priv, mode->clock);
++	/* Check against supported VIC modes */
++	} else if (!meson_venc_hdmi_supported_vic(vic))
++		return MODE_BAD;
++
++	vclk_freq = mode->clock;
++
++	/* For 420, pixel clock is half unlike venc clock */
++	if (drm_mode_is_420_only(display_info, mode) ||
++	    (!is_hdmi2_sink &&
++	     drm_mode_is_420_also(display_info, mode)))
++		vclk_freq /= 2;
++
++	/* TMDS clock is pixel_clock * 10 */
++	phy_freq = vclk_freq * 10;
++
++	/* 480i/576i needs global pixel doubling */
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
++		vclk_freq *= 2;
++
++	venc_freq = vclk_freq;
++	hdmi_freq = vclk_freq;
++
++	/* VENC double pixels for 1080i, 720p and YUV420 modes */
++	if (meson_venc_hdmi_venc_repeat(vic) ||
++	    drm_mode_is_420_only(display_info, mode) ||
++	    (!is_hdmi2_sink &&
++	     drm_mode_is_420_also(display_info, mode)))
++		venc_freq *= 2;
++
++	vclk_freq = max(venc_freq, hdmi_freq);
++
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
++		venc_freq /= 2;
++
++	dev_dbg(priv->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n",
++		__func__, phy_freq, vclk_freq, venc_freq, hdmi_freq);
++
++	return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
++}
++
++static void meson_encoder_hdmi_atomic_enable(struct drm_bridge *bridge,
++					     struct drm_bridge_state *bridge_state)
++{
++	struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
++	struct drm_atomic_state *state = bridge_state->base.state;
++	unsigned int ycrcb_map = VPU_HDMI_OUTPUT_CBYCR;
++	struct meson_drm *priv = encoder_hdmi->priv;
++	struct drm_connector_state *conn_state;
++	const struct drm_display_mode *mode;
++	struct drm_crtc_state *crtc_state;
++	struct drm_connector *connector;
++	bool yuv420_mode = false;
++	int vic;
++
++	connector = drm_atomic_get_new_connector_for_encoder(state, bridge->encoder);
++	if (WARN_ON(!connector))
++		return;
++
++	conn_state = drm_atomic_get_new_connector_state(state, connector);
++	if (WARN_ON(!conn_state))
++		return;
++
++	crtc_state = drm_atomic_get_new_crtc_state(state, conn_state->crtc);
++	if (WARN_ON(!crtc_state))
++		return;
++
++	mode = &crtc_state->adjusted_mode;
++
++	vic = drm_match_cea_mode(mode);
++
++	dev_dbg(priv->dev, "\"%s\" vic %d\n", mode->name, vic);
++
++	if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24) {
++		ycrcb_map = VPU_HDMI_OUTPUT_CRYCB;
++		yuv420_mode = true;
++	}
++
++	/* VENC + VENC-DVI Mode setup */
++	meson_venc_hdmi_mode_set(priv, vic, ycrcb_map, yuv420_mode, mode);
++
++	/* VCLK Set clock */
++	meson_encoder_hdmi_set_vclk(encoder_hdmi, mode);
++
++	if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
++		/* Setup YUV420 to HDMI-TX, no 10bit dithering */
++		writel_relaxed(2 | (2 << 2),
++			       priv->io_base + _REG(VPU_HDMI_FMT_CTRL));
++	else
++		/* Setup YUV444 to HDMI-TX, no 10bit diphering */
++		writel_relaxed(0, priv->io_base + _REG(VPU_HDMI_FMT_CTRL));
++
++	dev_dbg(priv->dev, "%s\n", priv->venc.hdmi_use_enci ? "VENCI" : "VENCP");
++
++	if (priv->venc.hdmi_use_enci)
++		writel_relaxed(1, priv->io_base + _REG(ENCI_VIDEO_EN));
++	else
++		writel_relaxed(1, priv->io_base + _REG(ENCP_VIDEO_EN));
++}
++
++static void meson_encoder_hdmi_atomic_disable(struct drm_bridge *bridge,
++					     struct drm_bridge_state *bridge_state)
++{
++	struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
++	struct meson_drm *priv = encoder_hdmi->priv;
++
++	writel_bits_relaxed(0x3, 0,
++			    priv->io_base + _REG(VPU_HDMI_SETTING));
++
++	writel_relaxed(0, priv->io_base + _REG(ENCI_VIDEO_EN));
++	writel_relaxed(0, priv->io_base + _REG(ENCP_VIDEO_EN));
++}
++
++static const u32 meson_encoder_hdmi_out_bus_fmts[] = {
++	MEDIA_BUS_FMT_YUV8_1X24,
++	MEDIA_BUS_FMT_UYYVYY8_0_5X24,
++};
++
++static u32 *
++meson_encoder_hdmi_get_inp_bus_fmts(struct drm_bridge *bridge,
++					struct drm_bridge_state *bridge_state,
++					struct drm_crtc_state *crtc_state,
++					struct drm_connector_state *conn_state,
++					u32 output_fmt,
++					unsigned int *num_input_fmts)
++{
++	u32 *input_fmts = NULL;
++	int i;
++
++	*num_input_fmts = 0;
++
++	for (i = 0 ; i < ARRAY_SIZE(meson_encoder_hdmi_out_bus_fmts) ; ++i) {
++		if (output_fmt == meson_encoder_hdmi_out_bus_fmts[i]) {
++			*num_input_fmts = 1;
++			input_fmts = kcalloc(*num_input_fmts,
++					     sizeof(*input_fmts),
++					     GFP_KERNEL);
++			if (!input_fmts)
++				return NULL;
++
++			input_fmts[0] = output_fmt;
++
++			break;
++		}
++	}
++
++	return input_fmts;
++}
++
++static int meson_encoder_hdmi_atomic_check(struct drm_bridge *bridge,
++					struct drm_bridge_state *bridge_state,
++					struct drm_crtc_state *crtc_state,
++					struct drm_connector_state *conn_state)
++{
++	struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
++	struct drm_connector_state *old_conn_state =
++		drm_atomic_get_old_connector_state(conn_state->state, conn_state->connector);
++	struct meson_drm *priv = encoder_hdmi->priv;
++
++	encoder_hdmi->output_bus_fmt = bridge_state->output_bus_cfg.format;
++
++	dev_dbg(priv->dev, "output_bus_fmt %lx\n", encoder_hdmi->output_bus_fmt);
++
++	if (!drm_connector_atomic_hdr_metadata_equal(old_conn_state, conn_state))
++		crtc_state->mode_changed = true;
++
++	return 0;
++}
++
++static const struct drm_bridge_funcs meson_encoder_hdmi_bridge_funcs = {
++	.attach = meson_encoder_hdmi_attach,
++	.mode_valid = meson_encoder_hdmi_mode_valid,
++	.atomic_enable = meson_encoder_hdmi_atomic_enable,
++	.atomic_disable = meson_encoder_hdmi_atomic_disable,
++	.atomic_get_input_bus_fmts = meson_encoder_hdmi_get_inp_bus_fmts,
++	.atomic_check = meson_encoder_hdmi_atomic_check,
++	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
++	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
++	.atomic_reset = drm_atomic_helper_bridge_reset,
++};
++
++int meson_encoder_hdmi_init(struct meson_drm *priv)
++{
++	struct meson_encoder_hdmi *meson_encoder_hdmi;
++	struct device_node *remote;
++	int ret;
++
++	meson_encoder_hdmi = devm_kzalloc(priv->dev, sizeof(*meson_encoder_hdmi), GFP_KERNEL);
++	if (!meson_encoder_hdmi)
++		return -ENOMEM;
++
++	/* HDMI Transceiver Bridge */
++	remote = of_graph_get_remote_node(priv->dev->of_node, 1, 0);
++	if (!remote) {
++		dev_err(priv->dev, "HDMI transceiver device is disabled\n");
++		return 0;
++	}
++
++	meson_encoder_hdmi->next_bridge = of_drm_find_bridge(remote);
++	if (!meson_encoder_hdmi->next_bridge) {
++		dev_err(priv->dev, "Failed to find HDMI transceiver bridge\n");
++		return -EPROBE_DEFER;
++	}
++
++	/* HDMI Encoder Bridge */
++	meson_encoder_hdmi->bridge.funcs = &meson_encoder_hdmi_bridge_funcs;
++	meson_encoder_hdmi->bridge.of_node = priv->dev->of_node;
++	meson_encoder_hdmi->bridge.type = DRM_MODE_CONNECTOR_HDMIA;
++
++	drm_bridge_add(&meson_encoder_hdmi->bridge);
++
++	meson_encoder_hdmi->priv = priv;
++
++	/* Encoder */
++	ret = drm_simple_encoder_init(priv->drm, &meson_encoder_hdmi->encoder,
++				      DRM_MODE_ENCODER_TMDS);
++	if (ret) {
++		dev_err(priv->dev, "Failed to init HDMI encoder: %d\n", ret);
++		return ret;
++	}
++
++	meson_encoder_hdmi->encoder.possible_crtcs = BIT(0);
++
++	/* Attach HDMI Encoder Bridge to Encoder */
++	ret = drm_bridge_attach(&meson_encoder_hdmi->encoder, &meson_encoder_hdmi->bridge, NULL, 0);
++	if (ret) {
++		dev_err(priv->dev, "Failed to attach bridge: %d\n", ret);
++		return ret;
++	}
++
++	/*
++	 * We should have now in place:
++	 * encoder->[hdmi encoder bridge]->[dw-hdmi bridge]->[dw-hdmi connector]
++	 */
++
++	dev_dbg(priv->dev, "HDMI encoder initialized\n");
++
++	return 0;
++}
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.h b/drivers/gpu/drm/meson/meson_encoder_hdmi.h
+new file mode 100644
+index 0000000000000..ed19494f09563
+--- /dev/null
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/*
++ * Copyright (C) 2021 BayLibre, SAS
++ * Author: Neil Armstrong <narmstrong@baylibre.com>
++ */
++
++#ifndef __MESON_ENCODER_HDMI_H
++#define __MESON_ENCODER_HDMI_H
++
++int meson_encoder_hdmi_init(struct meson_drm *priv);
++
++#endif /* __MESON_ENCODER_HDMI_H */
+diff --git a/drivers/gpu/drm/meson/meson_osd_afbcd.c b/drivers/gpu/drm/meson/meson_osd_afbcd.c
+index ffc6b584dbf85..0cdbe899402f8 100644
+--- a/drivers/gpu/drm/meson/meson_osd_afbcd.c
++++ b/drivers/gpu/drm/meson/meson_osd_afbcd.c
+@@ -79,11 +79,6 @@ static bool meson_gxm_afbcd_supported_fmt(u64 modifier, uint32_t format)
+ 	return meson_gxm_afbcd_pixel_fmt(modifier, format) >= 0;
+ }
+ 
+-static int meson_gxm_afbcd_init(struct meson_drm *priv)
+-{
+-	return 0;
+-}
+-
+ static int meson_gxm_afbcd_reset(struct meson_drm *priv)
+ {
+ 	writel_relaxed(VIU_SW_RESET_OSD1_AFBCD,
+@@ -93,6 +88,16 @@ static int meson_gxm_afbcd_reset(struct meson_drm *priv)
+ 	return 0;
+ }
+ 
++static int meson_gxm_afbcd_init(struct meson_drm *priv)
++{
++	return 0;
++}
++
++static void meson_gxm_afbcd_exit(struct meson_drm *priv)
++{
++	meson_gxm_afbcd_reset(priv);
++}
++
+ static int meson_gxm_afbcd_enable(struct meson_drm *priv)
+ {
+ 	writel_relaxed(FIELD_PREP(OSD1_AFBCD_ID_FIFO_THRD, 0x40) |
+@@ -172,6 +177,7 @@ static int meson_gxm_afbcd_setup(struct meson_drm *priv)
+ 
+ struct meson_afbcd_ops meson_afbcd_gxm_ops = {
+ 	.init = meson_gxm_afbcd_init,
++	.exit = meson_gxm_afbcd_exit,
+ 	.reset = meson_gxm_afbcd_reset,
+ 	.enable = meson_gxm_afbcd_enable,
+ 	.disable = meson_gxm_afbcd_disable,
+@@ -269,6 +275,18 @@ static bool meson_g12a_afbcd_supported_fmt(u64 modifier, uint32_t format)
+ 	return meson_g12a_afbcd_pixel_fmt(modifier, format) >= 0;
+ }
+ 
++static int meson_g12a_afbcd_reset(struct meson_drm *priv)
++{
++	meson_rdma_reset(priv);
++
++	meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB |
++			       VIU_SW_RESET_G12A_OSD1_AFBCD,
++			       VIU_SW_RESET);
++	meson_rdma_writel_sync(priv, 0, VIU_SW_RESET);
++
++	return 0;
++}
++
+ static int meson_g12a_afbcd_init(struct meson_drm *priv)
+ {
+ 	int ret;
+@@ -286,16 +304,10 @@ static int meson_g12a_afbcd_init(struct meson_drm *priv)
+ 	return 0;
+ }
+ 
+-static int meson_g12a_afbcd_reset(struct meson_drm *priv)
++static void meson_g12a_afbcd_exit(struct meson_drm *priv)
+ {
+-	meson_rdma_reset(priv);
+-
+-	meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB |
+-			       VIU_SW_RESET_G12A_OSD1_AFBCD,
+-			       VIU_SW_RESET);
+-	meson_rdma_writel_sync(priv, 0, VIU_SW_RESET);
+-
+-	return 0;
++	meson_g12a_afbcd_reset(priv);
++	meson_rdma_free(priv);
+ }
+ 
+ static int meson_g12a_afbcd_enable(struct meson_drm *priv)
+@@ -380,6 +392,7 @@ static int meson_g12a_afbcd_setup(struct meson_drm *priv)
+ 
+ struct meson_afbcd_ops meson_afbcd_g12a_ops = {
+ 	.init = meson_g12a_afbcd_init,
++	.exit = meson_g12a_afbcd_exit,
+ 	.reset = meson_g12a_afbcd_reset,
+ 	.enable = meson_g12a_afbcd_enable,
+ 	.disable = meson_g12a_afbcd_disable,
+diff --git a/drivers/gpu/drm/meson/meson_osd_afbcd.h b/drivers/gpu/drm/meson/meson_osd_afbcd.h
+index 5e5523304f42f..e77ddeb6416f3 100644
+--- a/drivers/gpu/drm/meson/meson_osd_afbcd.h
++++ b/drivers/gpu/drm/meson/meson_osd_afbcd.h
+@@ -14,6 +14,7 @@
+ 
+ struct meson_afbcd_ops {
+ 	int (*init)(struct meson_drm *priv);
++	void (*exit)(struct meson_drm *priv);
+ 	int (*reset)(struct meson_drm *priv);
+ 	int (*enable)(struct meson_drm *priv);
+ 	int (*disable)(struct meson_drm *priv);
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index fd98e8bbc5500..2c7271f545dcc 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -529,7 +529,10 @@ static void mgag200_set_format_regs(struct mga_device *mdev,
+ 	WREG_GFX(3, 0x00);
+ 	WREG_GFX(4, 0x00);
+ 	WREG_GFX(5, 0x40);
+-	WREG_GFX(6, 0x05);
++	/* GCTL6 should be 0x05, but we configure memmapsl to 0xb8000 (text mode),
++	 * so that it doesn't hang when running kexec/kdump on G200_SE rev42.
++	 */
++	WREG_GFX(6, 0x0d);
+ 	WREG_GFX(7, 0x0f);
+ 	WREG_GFX(8, 0x0f);
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index a305ff7e8c6fb..f880a59a40fcf 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -658,19 +658,23 @@ static void a6xx_set_cp_protect(struct msm_gpu *gpu)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ 	const u32 *regs = a6xx_protect;
+-	unsigned i, count = ARRAY_SIZE(a6xx_protect), count_max = 32;
+-
+-	BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
+-	BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
++	unsigned i, count, count_max;
+ 
+ 	if (adreno_is_a650(adreno_gpu)) {
+ 		regs = a650_protect;
+ 		count = ARRAY_SIZE(a650_protect);
+ 		count_max = 48;
++		BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
+ 	} else if (adreno_is_a660_family(adreno_gpu)) {
+ 		regs = a660_protect;
+ 		count = ARRAY_SIZE(a660_protect);
+ 		count_max = 48;
++		BUILD_BUG_ON(ARRAY_SIZE(a660_protect) > 48);
++	} else {
++		regs = a6xx_protect;
++		count = ARRAY_SIZE(a6xx_protect);
++		count_max = 32;
++		BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index e7ee4cfb8461e..ad27a01c22af1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -1102,7 +1102,7 @@ static void _dpu_encoder_virt_enable_helper(struct drm_encoder *drm_enc)
+ 	}
+ 
+ 
+-	if (dpu_enc->disp_info.intf_type == DRM_MODE_CONNECTOR_DisplayPort &&
++	if (dpu_enc->disp_info.intf_type == DRM_MODE_ENCODER_TMDS &&
+ 		dpu_enc->cur_master->hw_mdptop &&
+ 		dpu_enc->cur_master->hw_mdptop->ops.intf_audio_select)
+ 		dpu_enc->cur_master->hw_mdptop->ops.intf_audio_select(
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+index f9c83d6e427ad..24fbaf562d418 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+@@ -35,6 +35,14 @@ int dpu_rm_destroy(struct dpu_rm *rm)
+ {
+ 	int i;
+ 
++	for (i = 0; i < ARRAY_SIZE(rm->dspp_blks); i++) {
++		struct dpu_hw_dspp *hw;
++
++		if (rm->dspp_blks[i]) {
++			hw = to_dpu_hw_dspp(rm->dspp_blks[i]);
++			dpu_hw_dspp_destroy(hw);
++		}
++	}
+ 	for (i = 0; i < ARRAY_SIZE(rm->pingpong_blks); i++) {
+ 		struct dpu_hw_pingpong *hw;
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 62e75dc8afc63..4af281d97493f 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1744,6 +1744,9 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 				/* end with failure */
+ 				break; /* lane == 1 already */
+ 			}
++
++			/* stop link training before starting retraining */
++			dp_ctrl_clear_training_pattern(ctrl);
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index aba8aa47ed76e..98a546a650914 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -1455,6 +1455,7 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ 			struct drm_encoder *encoder)
+ {
+ 	struct msm_drm_private *priv;
++	struct dp_display_private *dp_priv;
+ 	int ret;
+ 
+ 	if (WARN_ON(!encoder) || WARN_ON(!dp_display) || WARN_ON(!dev))
+@@ -1463,6 +1464,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ 	priv = dev->dev_private;
+ 	dp_display->drm_dev = dev;
+ 
++	dp_priv = container_of(dp_display, struct dp_display_private, dp_display);
++
+ 	ret = dp_display_request_irq(dp_display);
+ 	if (ret) {
+ 		DRM_ERROR("request_irq failed, ret=%d\n", ret);
+@@ -1480,6 +1483,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ 		return ret;
+ 	}
+ 
++	dp_priv->panel->connector = dp_display->connector;
++
+ 	priv->connectors[priv->num_connectors++] = dp_display->connector;
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 71db10c0f262d..f1418722c5492 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -212,6 +212,11 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ 		if (drm_add_modes_noedid(connector, 640, 480))
+ 			drm_set_preferred_mode(connector, 640, 480);
+ 		mutex_unlock(&connector->dev->mode_config.mutex);
++	} else {
++		/* always add fail-safe mode as backup mode */
++		mutex_lock(&connector->dev->mode_config.mutex);
++		drm_add_modes_noedid(connector, 640, 480);
++		mutex_unlock(&connector->dev->mode_config.mutex);
+ 	}
+ 
+ 	if (panel->aux_cfg_update_done) {
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+index d8128f50b0dd5..0b782cc18b3f4 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+@@ -562,7 +562,9 @@ static int pll_10nm_register(struct dsi_pll_10nm *pll_10nm, struct clk_hw **prov
+ 	char clk_name[32], parent[32], vco_name[32];
+ 	char parent2[32], parent3[32], parent4[32];
+ 	struct clk_init_data vco_init = {
+-		.parent_names = (const char *[]){ "xo" },
++		.parent_data = &(const struct clk_parent_data) {
++			.fw_name = "ref",
++		},
+ 		.num_parents = 1,
+ 		.name = vco_name,
+ 		.flags = CLK_IGNORE_UNUSED,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 7414966f198e3..75557ac99adf1 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -802,7 +802,9 @@ static int pll_14nm_register(struct dsi_pll_14nm *pll_14nm, struct clk_hw **prov
+ {
+ 	char clk_name[32], parent[32], vco_name[32];
+ 	struct clk_init_data vco_init = {
+-		.parent_names = (const char *[]){ "xo" },
++		.parent_data = &(const struct clk_parent_data) {
++			.fw_name = "ref",
++		},
+ 		.num_parents = 1,
+ 		.name = vco_name,
+ 		.flags = CLK_IGNORE_UNUSED,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
+index 2da673a2add69..48eab80b548e1 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
+@@ -521,7 +521,9 @@ static int pll_28nm_register(struct dsi_pll_28nm *pll_28nm, struct clk_hw **prov
+ {
+ 	char clk_name[32], parent1[32], parent2[32], vco_name[32];
+ 	struct clk_init_data vco_init = {
+-		.parent_names = (const char *[]){ "xo" },
++		.parent_data = &(const struct clk_parent_data) {
++			.fw_name = "ref", .name = "xo",
++		},
+ 		.num_parents = 1,
+ 		.name = vco_name,
+ 		.flags = CLK_IGNORE_UNUSED,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
+index 71ed4aa0dc67e..fc56cdcc9ad64 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
+@@ -385,7 +385,9 @@ static int pll_28nm_register(struct dsi_pll_28nm *pll_28nm, struct clk_hw **prov
+ {
+ 	char *clk_name, *parent_name, *vco_name;
+ 	struct clk_init_data vco_init = {
+-		.parent_names = (const char *[]){ "pxo" },
++		.parent_data = &(const struct clk_parent_data) {
++			.fw_name = "ref",
++		},
+ 		.num_parents = 1,
+ 		.flags = CLK_IGNORE_UNUSED,
+ 		.ops = &clk_ops_dsi_pll_28nm_vco,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 079613d2aaa98..6e506feb111fd 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -588,7 +588,9 @@ static int pll_7nm_register(struct dsi_pll_7nm *pll_7nm, struct clk_hw **provide
+ 	char clk_name[32], parent[32], vco_name[32];
+ 	char parent2[32], parent3[32], parent4[32];
+ 	struct clk_init_data vco_init = {
+-		.parent_names = (const char *[]){ "bi_tcxo" },
++		.parent_data = &(const struct clk_parent_data) {
++			.fw_name = "ref",
++		},
+ 		.num_parents = 1,
+ 		.name = vco_name,
+ 		.flags = CLK_IGNORE_UNUSED,
+@@ -862,20 +864,26 @@ static int dsi_7nm_phy_enable(struct msm_dsi_phy *phy,
+ 	/* Alter PHY configurations if data rate less than 1.5GHZ*/
+ 	less_than_1500_mhz = (clk_req->bitclk_rate <= 1500000000);
+ 
+-	/* For C-PHY, no low power settings for lower clk rate */
+-	if (phy->cphy_mode)
+-		less_than_1500_mhz = false;
+-
+ 	if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_1) {
+ 		vreg_ctrl_0 = less_than_1500_mhz ? 0x53 : 0x52;
+-		glbl_rescode_top_ctrl = less_than_1500_mhz ? 0x3d :  0x00;
+-		glbl_rescode_bot_ctrl = less_than_1500_mhz ? 0x39 :  0x3c;
++		if (phy->cphy_mode) {
++			glbl_rescode_top_ctrl = 0x00;
++			glbl_rescode_bot_ctrl = 0x3c;
++		} else {
++			glbl_rescode_top_ctrl = less_than_1500_mhz ? 0x3d :  0x00;
++			glbl_rescode_bot_ctrl = less_than_1500_mhz ? 0x39 :  0x3c;
++		}
+ 		glbl_str_swi_cal_sel_ctrl = 0x00;
+ 		glbl_hstx_str_ctrl_0 = 0x88;
+ 	} else {
+ 		vreg_ctrl_0 = less_than_1500_mhz ? 0x5B : 0x59;
+-		glbl_str_swi_cal_sel_ctrl = less_than_1500_mhz ? 0x03 : 0x00;
+-		glbl_hstx_str_ctrl_0 = less_than_1500_mhz ? 0x66 : 0x88;
++		if (phy->cphy_mode) {
++			glbl_str_swi_cal_sel_ctrl = 0x03;
++			glbl_hstx_str_ctrl_0 = 0x66;
++		} else {
++			glbl_str_swi_cal_sel_ctrl = less_than_1500_mhz ? 0x03 : 0x00;
++			glbl_hstx_str_ctrl_0 = less_than_1500_mhz ? 0x66 : 0x88;
++		}
+ 		glbl_rescode_top_ctrl = 0x03;
+ 		glbl_rescode_bot_ctrl = 0x3c;
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index 1cbd71abc80aa..12965a832f94a 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -101,7 +101,6 @@ nv40_backlight_init(struct nouveau_encoder *encoder,
+ 	if (!(nvif_rd32(device, NV40_PMC_BACKLIGHT) & NV40_PMC_BACKLIGHT_MASK))
+ 		return -ENODEV;
+ 
+-	props->type = BACKLIGHT_RAW;
+ 	props->max_brightness = 31;
+ 	*ops = &nv40_bl_ops;
+ 	return 0;
+@@ -294,7 +293,8 @@ nv50_backlight_init(struct nouveau_backlight *bl,
+ 	struct nouveau_drm *drm = nouveau_drm(nv_encoder->base.base.dev);
+ 	struct nvif_object *device = &drm->client.device.object;
+ 
+-	if (!nvif_rd32(device, NV50_PDISP_SOR_PWM_CTL(ffs(nv_encoder->dcb->or) - 1)))
++	if (!nvif_rd32(device, NV50_PDISP_SOR_PWM_CTL(ffs(nv_encoder->dcb->or) - 1)) ||
++	    nv_conn->base.status != connector_status_connected)
+ 		return -ENODEV;
+ 
+ 	if (nv_conn->type == DCB_CONNECTOR_eDP) {
+@@ -339,7 +339,6 @@ nv50_backlight_init(struct nouveau_backlight *bl,
+ 	else
+ 		*ops = &nva3_bl_ops;
+ 
+-	props->type = BACKLIGHT_RAW;
+ 	props->max_brightness = 100;
+ 
+ 	return 0;
+@@ -407,6 +406,7 @@ nouveau_backlight_init(struct drm_connector *connector)
+ 		goto fail_alloc;
+ 	}
+ 
++	props.type = BACKLIGHT_RAW;
+ 	bl->dev = backlight_device_register(backlight_name, connector->kdev,
+ 					    nv_encoder, ops, &props);
+ 	if (IS_ERR(bl->dev)) {
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
+index 667fa016496ee..a6ea89a5d51ab 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
+@@ -142,11 +142,12 @@ nvkm_acr_hsfw_load_bl(struct nvkm_acr *acr, const char *name, int ver,
+ 
+ 	hsfw->imem_size = desc->code_size;
+ 	hsfw->imem_tag = desc->start_tag;
+-	hsfw->imem = kmalloc(desc->code_size, GFP_KERNEL);
+-	memcpy(hsfw->imem, data + desc->code_off, desc->code_size);
+-
++	hsfw->imem = kmemdup(data + desc->code_off, desc->code_size, GFP_KERNEL);
+ 	nvkm_firmware_put(fw);
+-	return 0;
++	if (!hsfw->imem)
++		return -ENOMEM;
++	else
++		return 0;
+ }
+ 
+ int
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index bbe628b306ee3..f8355de6e335d 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -360,8 +360,11 @@ int panfrost_gpu_init(struct panfrost_device *pfdev)
+ 
+ 	panfrost_gpu_init_features(pfdev);
+ 
+-	dma_set_mask_and_coherent(pfdev->dev,
++	err = dma_set_mask_and_coherent(pfdev->dev,
+ 		DMA_BIT_MASK(FIELD_GET(0xff00, pfdev->features.mmu_features)));
++	if (err)
++		return err;
++
+ 	dma_set_max_seg_size(pfdev->dev, UINT_MAX);
+ 
+ 	irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "gpu");
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 607ad5620bd99..1546abcadacf4 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -204,7 +204,7 @@ int radeon_get_monitor_bpc(struct drm_connector *connector)
+ 
+ 			/* Check if bpc is within clock limit. Try to degrade gracefully otherwise */
+ 			if ((bpc == 12) && (mode_clock * 3/2 > max_tmds_clock)) {
+-				if ((connector->display_info.edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30) &&
++				if ((connector->display_info.edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30) &&
+ 					(mode_clock * 5/4 <= max_tmds_clock))
+ 					bpc = 10;
+ 				else
+diff --git a/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c b/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
+index 6b4759ed6bfd4..c491429f1a029 100644
+--- a/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
++++ b/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
+@@ -131,8 +131,10 @@ sideband_msg_req_encode_decode(struct drm_dp_sideband_msg_req_body *in)
+ 		return false;
+ 
+ 	txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
+-	if (!txmsg)
++	if (!txmsg) {
++		kfree(out);
+ 		return false;
++	}
+ 
+ 	drm_dp_encode_sideband_req(in, txmsg);
+ 	ret = drm_dp_decode_sideband_req(txmsg, out);
+diff --git a/drivers/gpu/drm/tegra/dsi.c b/drivers/gpu/drm/tegra/dsi.c
+index f46d377f0c304..de1333dc0d867 100644
+--- a/drivers/gpu/drm/tegra/dsi.c
++++ b/drivers/gpu/drm/tegra/dsi.c
+@@ -1538,8 +1538,10 @@ static int tegra_dsi_ganged_probe(struct tegra_dsi *dsi)
+ 		dsi->slave = platform_get_drvdata(gangster);
+ 		of_node_put(np);
+ 
+-		if (!dsi->slave)
++		if (!dsi->slave) {
++			put_device(&gangster->dev);
+ 			return -EPROBE_DEFER;
++		}
+ 
+ 		dsi->slave->master = dsi;
+ 	}
+diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
+index 5a6e89825bc2f..3e3f9ba1e8858 100644
+--- a/drivers/gpu/drm/tiny/simpledrm.c
++++ b/drivers/gpu/drm/tiny/simpledrm.c
+@@ -779,6 +779,9 @@ static int simpledrm_device_init_modeset(struct simpledrm_device *sdev)
+ 	if (ret)
+ 		return ret;
+ 	drm_connector_helper_add(connector, &simpledrm_connector_helper_funcs);
++	drm_connector_set_panel_orientation_with_quirk(connector,
++						       DRM_MODE_PANEL_ORIENTATION_UNKNOWN,
++						       mode->hdisplay, mode->vdisplay);
+ 
+ 	formats = simpledrm_device_formats(sdev, &nformats);
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
+index bd46396a1ae07..1afcd54fbbd53 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.c
++++ b/drivers/gpu/drm/v3d/v3d_drv.c
+@@ -219,6 +219,7 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 	int ret;
+ 	u32 mmu_debug;
+ 	u32 ident1;
++	u64 mask;
+ 
+ 	v3d = devm_drm_dev_alloc(dev, &v3d_drm_driver, struct v3d_dev, drm);
+ 	if (IS_ERR(v3d))
+@@ -237,8 +238,11 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO);
+-	dma_set_mask_and_coherent(dev,
+-		DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH)));
++	mask = DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH));
++	ret = dma_set_mask_and_coherent(dev, mask);
++	if (ret)
++		return ret;
++
+ 	v3d->va_width = 30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_VA_WIDTH);
+ 
+ 	ident1 = V3D_READ(V3D_HUB_IDENT1);
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 3872e4cd26989..fc9f54282f7d6 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -526,6 +526,7 @@ static int host1x_remove(struct platform_device *pdev)
+ 	host1x_syncpt_deinit(host);
+ 	reset_control_assert(host->rst);
+ 	clk_disable_unprepare(host->clk);
++	host1x_channel_list_free(&host->channel_list);
+ 	host1x_iommu_exit(host);
+ 
+ 	return 0;
+diff --git a/drivers/greybus/svc.c b/drivers/greybus/svc.c
+index ce7740ef449ba..51d0875a34800 100644
+--- a/drivers/greybus/svc.c
++++ b/drivers/greybus/svc.c
+@@ -866,8 +866,14 @@ static int gb_svc_hello(struct gb_operation *op)
+ 
+ 	gb_svc_debugfs_init(svc);
+ 
+-	return gb_svc_queue_deferred_request(op);
++	ret = gb_svc_queue_deferred_request(op);
++	if (ret)
++		goto err_remove_debugfs;
++
++	return 0;
+ 
++err_remove_debugfs:
++	gb_svc_debugfs_exit(svc);
+ err_unregister_device:
+ 	gb_svc_watchdog_destroy(svc);
+ 	device_del(&svc->dev);
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index 7106b921b53cf..c358778e070bc 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1068,6 +1068,7 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev,
+ 		workitem.reports_supported |= STD_KEYBOARD;
+ 		break;
+ 	case 0x0f:
++	case 0x11:
+ 		device_type = "eQUAD Lightspeed 1.2";
+ 		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
+ 		workitem.reports_supported |= STD_KEYBOARD;
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index 9da4240530dd7..c3e6d69fdfbd9 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -64,7 +64,9 @@ struct tm_wheel_info {
+  */
+ static const struct tm_wheel_info tm_wheels_infos[] = {
+ 	{0x0306, 0x0006, "Thrustmaster T150RS"},
++	{0x0200, 0x0005, "Thrustmaster T300RS (Missing Attachment)"},
+ 	{0x0206, 0x0005, "Thrustmaster T300RS"},
++	{0x0209, 0x0005, "Thrustmaster T300RS (Open Wheel Attachment)"},
+ 	{0x0204, 0x0005, "Thrustmaster T300 Ferrari Alcantara Edition"},
+ 	{0x0002, 0x0002, "Thrustmaster T500RS"}
+ 	//{0x0407, 0x0001, "Thrustmaster TMX"}
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 4804d71e5293a..65c1f20ec420a 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -615,6 +615,17 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+ 	if (report_type == HID_OUTPUT_REPORT)
+ 		return -EINVAL;
+ 
++	/*
++	 * In case of unnumbered reports the response from the device will
++	 * not have the report ID that the upper layers expect, so we need
+	 * to stash it in the buffer ourselves and adjust the data size.
++	 */
++	if (!report_number) {
++		buf[0] = 0;
++		buf++;
++		count--;
++	}
++
+ 	/* +2 bytes to include the size of the reply in the query buffer */
+ 	ask_count = min(count + 2, (size_t)ihid->bufsize);
+ 
+@@ -636,6 +647,9 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+ 	count = min(count, ret_count - 2);
+ 	memcpy(buf, ihid->rawbuf + 2, count);
+ 
++	if (!report_number)
++		count++;
++
+ 	return count;
+ }
+ 
+@@ -652,17 +666,19 @@ static int i2c_hid_output_raw_report(struct hid_device *hid, __u8 *buf,
+ 
+ 	mutex_lock(&ihid->reset_lock);
+ 
+-	if (report_id) {
+-		buf++;
+-		count--;
+-	}
+-
++	/*
++	 * Note that both numbered and unnumbered reports passed here
++	 * are supposed to have report ID stored in the 1st byte of the
++	 * buffer, so we strip it off unconditionally before passing payload
++	 * to i2c_hid_set_or_send_report which takes care of encoding
++	 * everything properly.
++	 */
+ 	ret = i2c_hid_set_or_send_report(client,
+ 				report_type == HID_FEATURE_REPORT ? 0x03 : 0x02,
+-				report_id, buf, count, use_data);
++				report_id, buf + 1, count - 1, use_data);
+ 
+-	if (report_id && ret >= 0)
+-		ret++; /* add report_id to the number of transfered bytes */
++	if (ret >= 0)
++		ret++; /* add report_id to the number of transferred bytes */
+ 
+ 	mutex_unlock(&ihid->reset_lock);
+ 
+diff --git a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+index 0e1183e961471..4f417d632bf16 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
++++ b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+@@ -660,21 +660,12 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ 	 */
+ 	payload_max_size &= ~(L1_CACHE_BYTES - 1);
+ 
+-	dma_buf = kmalloc(payload_max_size, GFP_KERNEL | GFP_DMA32);
++	dma_buf = dma_alloc_coherent(devc, payload_max_size, &dma_buf_phy, GFP_KERNEL);
+ 	if (!dma_buf) {
+ 		client_data->flag_retry = true;
+ 		return -ENOMEM;
+ 	}
+ 
+-	dma_buf_phy = dma_map_single(devc, dma_buf, payload_max_size,
+-				     DMA_TO_DEVICE);
+-	if (dma_mapping_error(devc, dma_buf_phy)) {
+-		dev_err(cl_data_to_dev(client_data), "DMA map failed\n");
+-		client_data->flag_retry = true;
+-		rv = -ENOMEM;
+-		goto end_err_dma_buf_release;
+-	}
+-
+ 	ldr_xfer_dma_frag.fragment.hdr.command = LOADER_CMD_XFER_FRAGMENT;
+ 	ldr_xfer_dma_frag.fragment.xfer_mode = LOADER_XFER_MODE_DIRECT_DMA;
+ 	ldr_xfer_dma_frag.ddr_phys_addr = (u64)dma_buf_phy;
+@@ -694,14 +685,7 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ 		ldr_xfer_dma_frag.fragment.size = fragment_size;
+ 		memcpy(dma_buf, &fw->data[fragment_offset], fragment_size);
+ 
+-		dma_sync_single_for_device(devc, dma_buf_phy,
+-					   payload_max_size,
+-					   DMA_TO_DEVICE);
+-
+-		/*
+-		 * Flush cache here because the dma_sync_single_for_device()
+-		 * does not do for x86.
+-		 */
++		/* Flush cache to be sure the data is in main memory. */
+ 		clflush_cache_range(dma_buf, payload_max_size);
+ 
+ 		dev_dbg(cl_data_to_dev(client_data),
+@@ -724,15 +708,8 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ 		fragment_offset += fragment_size;
+ 	}
+ 
+-	dma_unmap_single(devc, dma_buf_phy, payload_max_size, DMA_TO_DEVICE);
+-	kfree(dma_buf);
+-	return 0;
+-
+ end_err_resp_buf_release:
+-	/* Free ISH buffer if not done already, in error case */
+-	dma_unmap_single(devc, dma_buf_phy, payload_max_size, DMA_TO_DEVICE);
+-end_err_dma_buf_release:
+-	kfree(dma_buf);
++	dma_free_coherent(devc, payload_max_size, dma_buf, dma_buf_phy);
+ 	return rv;
+ }
+ 
+diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
+index f2d05bff42453..439f99b8b5de2 100644
+--- a/drivers/hv/hv_balloon.c
++++ b/drivers/hv/hv_balloon.c
+@@ -1563,7 +1563,7 @@ static void balloon_onchannelcallback(void *context)
+ 			break;
+ 
+ 		default:
+-			pr_warn("Unhandled message: type: %d\n", dm_hdr->type);
++			pr_warn_ratelimited("Unhandled message: type: %d\n", dm_hdr->type);
+ 
+ 		}
+ 	}
+diff --git a/drivers/hwmon/pmbus/pmbus.h b/drivers/hwmon/pmbus/pmbus.h
+index e0aa8aa46d8c4..ef3a8ecde4dfc 100644
+--- a/drivers/hwmon/pmbus/pmbus.h
++++ b/drivers/hwmon/pmbus/pmbus.h
+@@ -319,6 +319,7 @@ enum pmbus_fan_mode { percent = 0, rpm };
+ /*
+  * STATUS_VOUT, STATUS_INPUT
+  */
++#define PB_VOLTAGE_VIN_OFF		BIT(3)
+ #define PB_VOLTAGE_UV_FAULT		BIT(4)
+ #define PB_VOLTAGE_UV_WARNING		BIT(5)
+ #define PB_VOLTAGE_OV_WARNING		BIT(6)
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index ac2fbee1ba9c0..ca0bfaf2f6911 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -1373,7 +1373,7 @@ static const struct pmbus_limit_attr vin_limit_attrs[] = {
+ 		.reg = PMBUS_VIN_UV_FAULT_LIMIT,
+ 		.attr = "lcrit",
+ 		.alarm = "lcrit_alarm",
+-		.sbit = PB_VOLTAGE_UV_FAULT,
++		.sbit = PB_VOLTAGE_UV_FAULT | PB_VOLTAGE_VIN_OFF,
+ 	}, {
+ 		.reg = PMBUS_VIN_OV_WARN_LIMIT,
+ 		.attr = "max",
+@@ -2391,10 +2391,14 @@ static int pmbus_regulator_is_enabled(struct regulator_dev *rdev)
+ {
+ 	struct device *dev = rdev_get_dev(rdev);
+ 	struct i2c_client *client = to_i2c_client(dev->parent);
++	struct pmbus_data *data = i2c_get_clientdata(client);
+ 	u8 page = rdev_get_id(rdev);
+ 	int ret;
+ 
++	mutex_lock(&data->update_lock);
+ 	ret = pmbus_read_byte_data(client, page, PMBUS_OPERATION);
++	mutex_unlock(&data->update_lock);
++
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -2405,11 +2409,17 @@ static int _pmbus_regulator_on_off(struct regulator_dev *rdev, bool enable)
+ {
+ 	struct device *dev = rdev_get_dev(rdev);
+ 	struct i2c_client *client = to_i2c_client(dev->parent);
++	struct pmbus_data *data = i2c_get_clientdata(client);
+ 	u8 page = rdev_get_id(rdev);
++	int ret;
+ 
+-	return pmbus_update_byte_data(client, page, PMBUS_OPERATION,
+-				      PB_OPERATION_CONTROL_ON,
+-				      enable ? PB_OPERATION_CONTROL_ON : 0);
++	mutex_lock(&data->update_lock);
++	ret = pmbus_update_byte_data(client, page, PMBUS_OPERATION,
++				     PB_OPERATION_CONTROL_ON,
++				     enable ? PB_OPERATION_CONTROL_ON : 0);
++	mutex_unlock(&data->update_lock);
++
++	return ret;
+ }
+ 
+ static int pmbus_regulator_enable(struct regulator_dev *rdev)
+diff --git a/drivers/hwmon/sch56xx-common.c b/drivers/hwmon/sch56xx-common.c
+index 40cdadad35e52..f85eede6d7663 100644
+--- a/drivers/hwmon/sch56xx-common.c
++++ b/drivers/hwmon/sch56xx-common.c
+@@ -422,7 +422,7 @@ void sch56xx_watchdog_register(struct device *parent, u16 addr, u32 revision,
+ 	data->wddev.max_timeout = 255 * 60;
+ 	watchdog_set_nowayout(&data->wddev, nowayout);
+ 	if (output_enable & SCH56XX_WDOG_OUTPUT_ENABLE)
+-		set_bit(WDOG_ACTIVE, &data->wddev.status);
++		set_bit(WDOG_HW_RUNNING, &data->wddev.status);
+ 
+ 	/* Since the watchdog uses a downcounter there is no register to read
+ 	   the BIOS set timeout from (if any was set at all) ->
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index a0640fa5c55bd..57e94424a8d65 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -367,8 +367,12 @@ static ssize_t mode_store(struct device *dev,
+ 	mode = ETM_MODE_QELEM(config->mode);
+ 	/* start by clearing QE bits */
+ 	config->cfg &= ~(BIT(13) | BIT(14));
+-	/* if supported, Q elements with instruction counts are enabled */
+-	if ((mode & BIT(0)) && (drvdata->q_support & BIT(0)))
++	/*
++	 * if supported, Q elements with instruction counts are enabled.
++	 * Always set the low bit for any requested mode. Valid combos are
++	 * 0b00, 0b01 and 0b11.
++	 */
++	if (mode && drvdata->q_support)
+ 		config->cfg |= BIT(13);
+ 	/*
+ 	 * if supported, Q elements with and without instruction
+diff --git a/drivers/hwtracing/coresight/coresight-syscfg.c b/drivers/hwtracing/coresight/coresight-syscfg.c
+index 43054568430f2..c30989e0675f5 100644
+--- a/drivers/hwtracing/coresight/coresight-syscfg.c
++++ b/drivers/hwtracing/coresight/coresight-syscfg.c
+@@ -791,7 +791,7 @@ static int cscfg_create_device(void)
+ 
+ 	err = device_register(dev);
+ 	if (err)
+-		cscfg_dev_release(dev);
++		put_device(dev);
+ 
+ create_dev_exit_unlock:
+ 	mutex_unlock(&cscfg_mutex);
+diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c
+index ad3b124a2e376..f72c6576d8a36 100644
+--- a/drivers/i2c/busses/i2c-bcm2835.c
++++ b/drivers/i2c/busses/i2c-bcm2835.c
+@@ -407,7 +407,7 @@ static const struct i2c_adapter_quirks bcm2835_i2c_quirks = {
+ static int bcm2835_i2c_probe(struct platform_device *pdev)
+ {
+ 	struct bcm2835_i2c_dev *i2c_dev;
+-	struct resource *mem, *irq;
++	struct resource *mem;
+ 	int ret;
+ 	struct i2c_adapter *adap;
+ 	struct clk *mclk;
+@@ -454,21 +454,20 @@ static int bcm2835_i2c_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(i2c_dev->bus_clk);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Couldn't prepare clock");
+-		return ret;
++		goto err_put_exclusive_rate;
+ 	}
+ 
+-	irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (!irq) {
+-		dev_err(&pdev->dev, "No IRQ resource\n");
+-		return -ENODEV;
++	i2c_dev->irq = platform_get_irq(pdev, 0);
++	if (i2c_dev->irq < 0) {
++		ret = i2c_dev->irq;
++		goto err_disable_unprepare_clk;
+ 	}
+-	i2c_dev->irq = irq->start;
+ 
+ 	ret = request_irq(i2c_dev->irq, bcm2835_i2c_isr, IRQF_SHARED,
+ 			  dev_name(&pdev->dev), i2c_dev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Could not request IRQ\n");
+-		return -ENODEV;
++		goto err_disable_unprepare_clk;
+ 	}
+ 
+ 	adap = &i2c_dev->adapter;
+@@ -492,7 +491,16 @@ static int bcm2835_i2c_probe(struct platform_device *pdev)
+ 
+ 	ret = i2c_add_adapter(adap);
+ 	if (ret)
+-		free_irq(i2c_dev->irq, i2c_dev);
++		goto err_free_irq;
++
++	return 0;
++
++err_free_irq:
++	free_irq(i2c_dev->irq, i2c_dev);
++err_disable_unprepare_clk:
++	clk_disable_unprepare(i2c_dev->bus_clk);
++err_put_exclusive_rate:
++	clk_rate_exclusive_put(i2c_dev->bus_clk);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/i2c/busses/i2c-meson.c b/drivers/i2c/busses/i2c-meson.c
+index ef73a42577cc7..07eb819072c4f 100644
+--- a/drivers/i2c/busses/i2c-meson.c
++++ b/drivers/i2c/busses/i2c-meson.c
+@@ -465,18 +465,18 @@ static int meson_i2c_probe(struct platform_device *pdev)
+ 	 */
+ 	meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_START, 0);
+ 
+-	ret = i2c_add_adapter(&i2c->adap);
+-	if (ret < 0) {
+-		clk_disable_unprepare(i2c->clk);
+-		return ret;
+-	}
+-
+ 	/* Disable filtering */
+ 	meson_i2c_set_mask(i2c, REG_SLAVE_ADDR,
+ 			   REG_SLV_SDA_FILTER | REG_SLV_SCL_FILTER, 0);
+ 
+ 	meson_i2c_set_clk_div(i2c, timings.bus_freq_hz);
+ 
++	ret = i2c_add_adapter(&i2c->adap);
++	if (ret < 0) {
++		clk_disable_unprepare(i2c->clk);
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-pasemi-core.c b/drivers/i2c/busses/i2c-pasemi-core.c
+index 4e161a4089d85..7728c8460dc0f 100644
+--- a/drivers/i2c/busses/i2c-pasemi-core.c
++++ b/drivers/i2c/busses/i2c-pasemi-core.c
+@@ -333,7 +333,6 @@ int pasemi_i2c_common_probe(struct pasemi_smbus *smbus)
+ 	smbus->adapter.owner = THIS_MODULE;
+ 	snprintf(smbus->adapter.name, sizeof(smbus->adapter.name),
+ 		 "PA Semi SMBus adapter (%s)", dev_name(smbus->dev));
+-	smbus->adapter.class = I2C_CLASS_HWMON | I2C_CLASS_SPD;
+ 	smbus->adapter.algo = &smbus_algorithm;
+ 	smbus->adapter.algo_data = smbus;
+ 
+diff --git a/drivers/i2c/busses/i2c-pasemi-pci.c b/drivers/i2c/busses/i2c-pasemi-pci.c
+index 1ab1f28744fb2..cfc89e04eb94c 100644
+--- a/drivers/i2c/busses/i2c-pasemi-pci.c
++++ b/drivers/i2c/busses/i2c-pasemi-pci.c
+@@ -56,6 +56,7 @@ static int pasemi_smb_pci_probe(struct pci_dev *dev,
+ 	if (!smbus->ioaddr)
+ 		return -EBUSY;
+ 
++	smbus->adapter.class = I2C_CLASS_HWMON | I2C_CLASS_SPD;
+ 	error = pasemi_i2c_common_probe(smbus);
+ 	if (error)
+ 		return error;
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index eb789cfb99739..ffefe3c482e9c 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -734,7 +734,6 @@ static const struct i2c_adapter_quirks xiic_quirks = {
+ 
+ static const struct i2c_adapter xiic_adapter = {
+ 	.owner = THIS_MODULE,
+-	.name = DRIVER_NAME,
+ 	.class = I2C_CLASS_DEPRECATED,
+ 	.algo = &xiic_algorithm,
+ 	.quirks = &xiic_quirks,
+@@ -771,6 +770,8 @@ static int xiic_i2c_probe(struct platform_device *pdev)
+ 	i2c_set_adapdata(&i2c->adap, i2c);
+ 	i2c->adap.dev.parent = &pdev->dev;
+ 	i2c->adap.dev.of_node = pdev->dev.of_node;
++	snprintf(i2c->adap.name, sizeof(i2c->adap.name),
++		 DRIVER_NAME " %s", pdev->name);
+ 
+ 	mutex_init(&i2c->lock);
+ 
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index 5365199a31f41..f7a7405d4350a 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -261,7 +261,7 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ 
+ 	err = device_create_file(&pdev->dev, &dev_attr_available_masters);
+ 	if (err)
+-		goto err_rollback;
++		goto err_rollback_activation;
+ 
+ 	err = device_create_file(&pdev->dev, &dev_attr_current_master);
+ 	if (err)
+@@ -271,8 +271,9 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ 
+ err_rollback_available:
+ 	device_remove_file(&pdev->dev, &dev_attr_available_masters);
+-err_rollback:
++err_rollback_activation:
+ 	i2c_demux_deactivate_master(priv);
++err_rollback:
+ 	for (j = 0; j < i; j++) {
+ 		of_node_put(priv->chan[j].parent_np);
+ 		of_changeset_destroy(&priv->chan[j].chgset);
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index 09c7f10fefb6e..21a99467f3646 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -176,6 +176,7 @@ static const struct mma8452_event_regs trans_ev_regs = {
+  * @enabled_events:		event flags enabled and handled by this driver
+  */
+ struct mma_chip_info {
++	const char *name;
+ 	u8 chip_id;
+ 	const struct iio_chan_spec *channels;
+ 	int num_channels;
+@@ -1301,6 +1302,7 @@ enum {
+ 
+ static const struct mma_chip_info mma_chip_info_table[] = {
+ 	[mma8451] = {
++		.name = "mma8451",
+ 		.chip_id = MMA8451_DEVICE_ID,
+ 		.channels = mma8451_channels,
+ 		.num_channels = ARRAY_SIZE(mma8451_channels),
+@@ -1325,6 +1327,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 					MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8452] = {
++		.name = "mma8452",
+ 		.chip_id = MMA8452_DEVICE_ID,
+ 		.channels = mma8452_channels,
+ 		.num_channels = ARRAY_SIZE(mma8452_channels),
+@@ -1341,6 +1344,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 					MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8453] = {
++		.name = "mma8453",
+ 		.chip_id = MMA8453_DEVICE_ID,
+ 		.channels = mma8453_channels,
+ 		.num_channels = ARRAY_SIZE(mma8453_channels),
+@@ -1357,6 +1361,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 					MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8652] = {
++		.name = "mma8652",
+ 		.chip_id = MMA8652_DEVICE_ID,
+ 		.channels = mma8652_channels,
+ 		.num_channels = ARRAY_SIZE(mma8652_channels),
+@@ -1366,6 +1371,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 		.enabled_events = MMA8452_INT_FF_MT,
+ 	},
+ 	[mma8653] = {
++		.name = "mma8653",
+ 		.chip_id = MMA8653_DEVICE_ID,
+ 		.channels = mma8653_channels,
+ 		.num_channels = ARRAY_SIZE(mma8653_channels),
+@@ -1380,6 +1386,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ 		.enabled_events = MMA8452_INT_FF_MT,
+ 	},
+ 	[fxls8471] = {
++		.name = "fxls8471",
+ 		.chip_id = FXLS8471_DEVICE_ID,
+ 		.channels = mma8451_channels,
+ 		.num_channels = ARRAY_SIZE(mma8451_channels),
+@@ -1522,13 +1529,6 @@ static int mma8452_probe(struct i2c_client *client,
+ 	struct mma8452_data *data;
+ 	struct iio_dev *indio_dev;
+ 	int ret;
+-	const struct of_device_id *match;
+-
+-	match = of_match_device(mma8452_dt_ids, &client->dev);
+-	if (!match) {
+-		dev_err(&client->dev, "unknown device model\n");
+-		return -ENODEV;
+-	}
+ 
+ 	indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ 	if (!indio_dev)
+@@ -1537,7 +1537,14 @@ static int mma8452_probe(struct i2c_client *client,
+ 	data = iio_priv(indio_dev);
+ 	data->client = client;
+ 	mutex_init(&data->lock);
+-	data->chip_info = match->data;
++
++	data->chip_info = device_get_match_data(&client->dev);
++	if (!data->chip_info && id)
++		data->chip_info = &mma_chip_info_table[id->driver_data];
++	if (!data->chip_info) {
++		dev_err(&client->dev, "unknown device model\n");
++		return -ENODEV;
++	}
+ 
+ 	data->vdd_reg = devm_regulator_get(&client->dev, "vdd");
+ 	if (IS_ERR(data->vdd_reg))
+@@ -1581,11 +1588,11 @@ static int mma8452_probe(struct i2c_client *client,
+ 	}
+ 
+ 	dev_info(&client->dev, "registering %s accelerometer; ID 0x%x\n",
+-		 match->compatible, data->chip_info->chip_id);
++		 data->chip_info->name, data->chip_info->chip_id);
+ 
+ 	i2c_set_clientdata(client, indio_dev);
+ 	indio_dev->info = &mma8452_info;
+-	indio_dev->name = id->name;
++	indio_dev->name = data->chip_info->name;
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->channels = data->chip_info->channels;
+ 	indio_dev->num_channels = data->chip_info->num_channels;
+@@ -1810,7 +1817,7 @@ MODULE_DEVICE_TABLE(i2c, mma8452_id);
+ static struct i2c_driver mma8452_driver = {
+ 	.driver = {
+ 		.name	= "mma8452",
+-		.of_match_table = of_match_ptr(mma8452_dt_ids),
++		.of_match_table = mma8452_dt_ids,
+ 		.pm	= &mma8452_pm_ops,
+ 	},
+ 	.probe = mma8452_probe,
+diff --git a/drivers/iio/adc/aspeed_adc.c b/drivers/iio/adc/aspeed_adc.c
+index e939b84cbb561..0793d2474cdcf 100644
+--- a/drivers/iio/adc/aspeed_adc.c
++++ b/drivers/iio/adc/aspeed_adc.c
+@@ -539,7 +539,9 @@ static int aspeed_adc_probe(struct platform_device *pdev)
+ 	data->clk_scaler = devm_clk_hw_register_divider(
+ 		&pdev->dev, clk_name, clk_parent_name, scaler_flags,
+ 		data->base + ASPEED_REG_CLOCK_CONTROL, 0,
+-		data->model_data->scaler_bit_width, 0, &data->clk_lock);
++		data->model_data->scaler_bit_width,
++		data->model_data->need_prescaler ? CLK_DIVIDER_ONE_BASED : 0,
++		&data->clk_lock);
+ 	if (IS_ERR(data->clk_scaler))
+ 		return PTR_ERR(data->clk_scaler);
+ 
+diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c
+index afdb59e0b5267..d0223e39d59af 100644
+--- a/drivers/iio/adc/twl6030-gpadc.c
++++ b/drivers/iio/adc/twl6030-gpadc.c
+@@ -911,6 +911,8 @@ static int twl6030_gpadc_probe(struct platform_device *pdev)
+ 	ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				twl6030_gpadc_irq_handler,
+ 				IRQF_ONESHOT, "twl6030_gpadc", indio_dev);
++	if (ret)
++		return ret;
+ 
+ 	ret = twl6030_gpadc_enable_irq(TWL6030_GPADC_RT_SW1_EOC_MASK);
+ 	if (ret < 0) {
+diff --git a/drivers/iio/afe/iio-rescale.c b/drivers/iio/afe/iio-rescale.c
+index 774eb3044edd8..271d73e420c42 100644
+--- a/drivers/iio/afe/iio-rescale.c
++++ b/drivers/iio/afe/iio-rescale.c
+@@ -39,7 +39,7 @@ static int rescale_read_raw(struct iio_dev *indio_dev,
+ 			    int *val, int *val2, long mask)
+ {
+ 	struct rescale *rescale = iio_priv(indio_dev);
+-	unsigned long long tmp;
++	s64 tmp;
+ 	int ret;
+ 
+ 	switch (mask) {
+@@ -77,10 +77,10 @@ static int rescale_read_raw(struct iio_dev *indio_dev,
+ 			*val2 = rescale->denominator;
+ 			return IIO_VAL_FRACTIONAL;
+ 		case IIO_VAL_FRACTIONAL_LOG2:
+-			tmp = *val * 1000000000LL;
+-			do_div(tmp, rescale->denominator);
++			tmp = (s64)*val * 1000000000LL;
++			tmp = div_s64(tmp, rescale->denominator);
+ 			tmp *= rescale->numerator;
+-			do_div(tmp, 1000000000LL);
++			tmp = div_s64(tmp, 1000000000LL);
+ 			*val = tmp;
+ 			return ret;
+ 		default:
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 0222885b334c1..df74765d33dcb 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -595,28 +595,50 @@ EXPORT_SYMBOL_GPL(iio_read_channel_average_raw);
+ static int iio_convert_raw_to_processed_unlocked(struct iio_channel *chan,
+ 	int raw, int *processed, unsigned int scale)
+ {
+-	int scale_type, scale_val, scale_val2, offset;
++	int scale_type, scale_val, scale_val2;
++	int offset_type, offset_val, offset_val2;
+ 	s64 raw64 = raw;
+-	int ret;
+ 
+-	ret = iio_channel_read(chan, &offset, NULL, IIO_CHAN_INFO_OFFSET);
+-	if (ret >= 0)
+-		raw64 += offset;
++	offset_type = iio_channel_read(chan, &offset_val, &offset_val2,
++				       IIO_CHAN_INFO_OFFSET);
++	if (offset_type >= 0) {
++		switch (offset_type) {
++		case IIO_VAL_INT:
++			break;
++		case IIO_VAL_INT_PLUS_MICRO:
++		case IIO_VAL_INT_PLUS_NANO:
++			/*
++			 * Both IIO_VAL_INT_PLUS_MICRO and IIO_VAL_INT_PLUS_NANO
++			 * implicitly truncate the offset to its integer form.
++			 */
++			break;
++		case IIO_VAL_FRACTIONAL:
++			offset_val /= offset_val2;
++			break;
++		case IIO_VAL_FRACTIONAL_LOG2:
++			offset_val >>= offset_val2;
++			break;
++		default:
++			return -EINVAL;
++		}
++
++		raw64 += offset_val;
++	}
+ 
+ 	scale_type = iio_channel_read(chan, &scale_val, &scale_val2,
+ 					IIO_CHAN_INFO_SCALE);
+ 	if (scale_type < 0) {
+ 		/*
+-		 * Just pass raw values as processed if no scaling is
+-		 * available.
++		 * If no channel scaling is available apply consumer scale to
++		 * raw value and return.
+ 		 */
+-		*processed = raw;
++		*processed = raw * scale;
+ 		return 0;
+ 	}
+ 
+ 	switch (scale_type) {
+ 	case IIO_VAL_INT:
+-		*processed = raw64 * scale_val;
++		*processed = raw64 * scale_val * scale;
+ 		break;
+ 	case IIO_VAL_INT_PLUS_MICRO:
+ 		if (scale_val2 < 0)
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 41ec05c4b0d0e..80a8e31e5b384 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2642,7 +2642,7 @@ int rdma_set_ack_timeout(struct rdma_cm_id *id, u8 timeout)
+ {
+ 	struct rdma_id_private *id_priv;
+ 
+-	if (id->qp_type != IB_QPT_RC)
++	if (id->qp_type != IB_QPT_RC && id->qp_type != IB_QPT_XRC_INI)
+ 		return -EINVAL;
+ 
+ 	id_priv = container_of(id, struct rdma_id_private, id);
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index f5aacaf7fb8ef..ca24ce34da766 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -1951,9 +1951,10 @@ static int nldev_stat_set_counter_dynamic_doit(struct nlattr *tb[],
+ 					       u32 port)
+ {
+ 	struct rdma_hw_stats *stats;
+-	int rem, i, index, ret = 0;
+ 	struct nlattr *entry_attr;
+ 	unsigned long *target;
++	int rem, i, ret = 0;
++	u32 index;
+ 
+ 	stats = ib_get_hw_stats_port(device, port);
+ 	if (!stats)
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index c18634bec2126..e821dc94a43ed 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -2153,6 +2153,7 @@ struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 		return mr;
+ 
+ 	mr->device = pd->device;
++	mr->type = IB_MR_TYPE_USER;
+ 	mr->pd = pd;
+ 	mr->dm = NULL;
+ 	atomic_inc(&pd->usecnt);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index dc9211f3a0098..99d0743133cac 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1397,8 +1397,7 @@ static int query_port(struct rvt_dev_info *rdi, u32 port_num,
+ 				      4096 : hfi1_max_mtu), IB_MTU_4096);
+ 	props->active_mtu = !valid_ib_mtu(ppd->ibmtu) ? props->max_mtu :
+ 		mtu_to_enum(ppd->ibmtu, IB_MTU_4096);
+-	props->phys_mtu = HFI1_CAP_IS_KSET(AIP) ? hfi1_max_mtu :
+-				ib_mtu_enum_to_int(props->max_mtu);
++	props->phys_mtu = hfi1_max_mtu;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c
+index 7264f8c2f7d57..1d6b578dbd82b 100644
+--- a/drivers/infiniband/hw/irdma/ctrl.c
++++ b/drivers/infiniband/hw/irdma/ctrl.c
+@@ -431,7 +431,7 @@ enum irdma_status_code irdma_sc_qp_create(struct irdma_sc_qp *qp, struct irdma_c
+ 
+ 	cqp = qp->dev->cqp;
+ 	if (qp->qp_uk.qp_id < cqp->dev->hw_attrs.min_hw_qp_id ||
+-	    qp->qp_uk.qp_id > (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt - 1))
++	    qp->qp_uk.qp_id >= (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt))
+ 		return IRDMA_ERR_INVALID_QP_ID;
+ 
+ 	wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch);
+@@ -2510,10 +2510,10 @@ static enum irdma_status_code irdma_sc_cq_create(struct irdma_sc_cq *cq,
+ 	enum irdma_status_code ret_code = 0;
+ 
+ 	cqp = cq->dev->cqp;
+-	if (cq->cq_uk.cq_id > (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].max_cnt - 1))
++	if (cq->cq_uk.cq_id >= (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].max_cnt))
+ 		return IRDMA_ERR_INVALID_CQ_ID;
+ 
+-	if (cq->ceq_id > (cq->dev->hmc_fpm_misc.max_ceqs - 1))
++	if (cq->ceq_id >= (cq->dev->hmc_fpm_misc.max_ceqs))
+ 		return IRDMA_ERR_INVALID_CEQ_ID;
+ 
+ 	ceq = cq->dev->ceq[cq->ceq_id];
+@@ -3615,7 +3615,7 @@ enum irdma_status_code irdma_sc_ceq_init(struct irdma_sc_ceq *ceq,
+ 	    info->elem_cnt > info->dev->hw_attrs.max_hw_ceq_size)
+ 		return IRDMA_ERR_INVALID_SIZE;
+ 
+-	if (info->ceq_id > (info->dev->hmc_fpm_misc.max_ceqs - 1))
++	if (info->ceq_id >= (info->dev->hmc_fpm_misc.max_ceqs))
+ 		return IRDMA_ERR_INVALID_CEQ_ID;
+ 	pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt;
+ 
+@@ -4164,7 +4164,7 @@ enum irdma_status_code irdma_sc_ccq_init(struct irdma_sc_cq *cq,
+ 	    info->num_elem > info->dev->hw_attrs.uk_attrs.max_hw_cq_size)
+ 		return IRDMA_ERR_INVALID_SIZE;
+ 
+-	if (info->ceq_id > (info->dev->hmc_fpm_misc.max_ceqs - 1))
++	if (info->ceq_id >= (info->dev->hmc_fpm_misc.max_ceqs))
+ 		return IRDMA_ERR_INVALID_CEQ_ID;
+ 
+ 	pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt;
+diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
+index b4c657f5f2f95..f99c40893ffac 100644
+--- a/drivers/infiniband/hw/irdma/hw.c
++++ b/drivers/infiniband/hw/irdma/hw.c
+@@ -1608,7 +1608,7 @@ static enum irdma_status_code irdma_initialize_dev(struct irdma_pci_f *rf)
+ 	info.fpm_commit_buf = mem.va;
+ 
+ 	info.bar0 = rf->hw.hw_addr;
+-	info.hmc_fn_id = PCI_FUNC(rf->pcidev->devfn);
++	info.hmc_fn_id = rf->pf_id;
+ 	info.hw = &rf->hw;
+ 	status = irdma_sc_dev_init(rf->rdma_ver, &rf->sc_dev, &info);
+ 	if (status)
+diff --git a/drivers/infiniband/hw/irdma/i40iw_if.c b/drivers/infiniband/hw/irdma/i40iw_if.c
+index d219f64b2c3d5..a6f758b61b0c4 100644
+--- a/drivers/infiniband/hw/irdma/i40iw_if.c
++++ b/drivers/infiniband/hw/irdma/i40iw_if.c
+@@ -77,6 +77,7 @@ static void i40iw_fill_device_info(struct irdma_device *iwdev, struct i40e_info
+ 	rf->rdma_ver = IRDMA_GEN_1;
+ 	rf->gen_ops.request_reset = i40iw_request_reset;
+ 	rf->pcidev = cdev_info->pcidev;
++	rf->pf_id = cdev_info->fid;
+ 	rf->hw.hw_addr = cdev_info->hw_addr;
+ 	rf->cdev = cdev_info;
+ 	rf->msix_count = cdev_info->msix_count;
+diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c
+index 51a41359e0b41..c556a36e76703 100644
+--- a/drivers/infiniband/hw/irdma/main.c
++++ b/drivers/infiniband/hw/irdma/main.c
+@@ -226,6 +226,7 @@ static void irdma_fill_device_info(struct irdma_device *iwdev, struct ice_pf *pf
+ 	rf->hw.hw_addr = pf->hw.hw_addr;
+ 	rf->pcidev = pf->pdev;
+ 	rf->msix_count =  pf->num_rdma_msix;
++	rf->pf_id = pf->hw.pf_id;
+ 	rf->msix_entries = &pf->msix_entries[pf->rdma_base_vector];
+ 	rf->default_vsi.vsi_idx = vsi->vsi_num;
+ 	rf->protocol_used = IRDMA_ROCE_PROTOCOL_ONLY;
+diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h
+index cb218cab79ac1..fb7faa85e4c9d 100644
+--- a/drivers/infiniband/hw/irdma/main.h
++++ b/drivers/infiniband/hw/irdma/main.h
+@@ -257,6 +257,7 @@ struct irdma_pci_f {
+ 	u8 *mem_rsrc;
+ 	u8 rdma_ver;
+ 	u8 rst_to;
++	u8 pf_id;
+ 	enum irdma_protocol_used protocol_used;
+ 	u32 sd_type;
+ 	u32 msix_count;
+diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
+index 398736d8c78a4..e81b74a518dd0 100644
+--- a/drivers/infiniband/hw/irdma/utils.c
++++ b/drivers/infiniband/hw/irdma/utils.c
+@@ -150,31 +150,35 @@ int irdma_inetaddr_event(struct notifier_block *notifier, unsigned long event,
+ 			 void *ptr)
+ {
+ 	struct in_ifaddr *ifa = ptr;
+-	struct net_device *netdev = ifa->ifa_dev->dev;
++	struct net_device *real_dev, *netdev = ifa->ifa_dev->dev;
+ 	struct irdma_device *iwdev;
+ 	struct ib_device *ibdev;
+ 	u32 local_ipaddr;
+ 
+-	ibdev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_IRDMA);
++	real_dev = rdma_vlan_dev_real_dev(netdev);
++	if (!real_dev)
++		real_dev = netdev;
++
++	ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+ 	if (!ibdev)
+ 		return NOTIFY_DONE;
+ 
+ 	iwdev = to_iwdev(ibdev);
+ 	local_ipaddr = ntohl(ifa->ifa_address);
+ 	ibdev_dbg(&iwdev->ibdev,
+-		  "DEV: netdev %p event %lu local_ip=%pI4 MAC=%pM\n", netdev,
+-		  event, &local_ipaddr, netdev->dev_addr);
++		  "DEV: netdev %p event %lu local_ip=%pI4 MAC=%pM\n", real_dev,
++		  event, &local_ipaddr, real_dev->dev_addr);
+ 	switch (event) {
+ 	case NETDEV_DOWN:
+-		irdma_manage_arp_cache(iwdev->rf, netdev->dev_addr,
++		irdma_manage_arp_cache(iwdev->rf, real_dev->dev_addr,
+ 				       &local_ipaddr, true, IRDMA_ARP_DELETE);
+-		irdma_if_notify(iwdev, netdev, &local_ipaddr, true, false);
++		irdma_if_notify(iwdev, real_dev, &local_ipaddr, true, false);
+ 		irdma_gid_change_event(&iwdev->ibdev);
+ 		break;
+ 	case NETDEV_UP:
+ 	case NETDEV_CHANGEADDR:
+-		irdma_add_arp(iwdev->rf, &local_ipaddr, true, netdev->dev_addr);
+-		irdma_if_notify(iwdev, netdev, &local_ipaddr, true, true);
++		irdma_add_arp(iwdev->rf, &local_ipaddr, true, real_dev->dev_addr);
++		irdma_if_notify(iwdev, real_dev, &local_ipaddr, true, true);
+ 		irdma_gid_change_event(&iwdev->ibdev);
+ 		break;
+ 	default:
+@@ -196,32 +200,36 @@ int irdma_inet6addr_event(struct notifier_block *notifier, unsigned long event,
+ 			  void *ptr)
+ {
+ 	struct inet6_ifaddr *ifa = ptr;
+-	struct net_device *netdev = ifa->idev->dev;
++	struct net_device *real_dev, *netdev = ifa->idev->dev;
+ 	struct irdma_device *iwdev;
+ 	struct ib_device *ibdev;
+ 	u32 local_ipaddr6[4];
+ 
+-	ibdev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_IRDMA);
++	real_dev = rdma_vlan_dev_real_dev(netdev);
++	if (!real_dev)
++		real_dev = netdev;
++
++	ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+ 	if (!ibdev)
+ 		return NOTIFY_DONE;
+ 
+ 	iwdev = to_iwdev(ibdev);
+ 	irdma_copy_ip_ntohl(local_ipaddr6, ifa->addr.in6_u.u6_addr32);
+ 	ibdev_dbg(&iwdev->ibdev,
+-		  "DEV: netdev %p event %lu local_ip=%pI6 MAC=%pM\n", netdev,
+-		  event, local_ipaddr6, netdev->dev_addr);
++		  "DEV: netdev %p event %lu local_ip=%pI6 MAC=%pM\n", real_dev,
++		  event, local_ipaddr6, real_dev->dev_addr);
+ 	switch (event) {
+ 	case NETDEV_DOWN:
+-		irdma_manage_arp_cache(iwdev->rf, netdev->dev_addr,
++		irdma_manage_arp_cache(iwdev->rf, real_dev->dev_addr,
+ 				       local_ipaddr6, false, IRDMA_ARP_DELETE);
+-		irdma_if_notify(iwdev, netdev, local_ipaddr6, false, false);
++		irdma_if_notify(iwdev, real_dev, local_ipaddr6, false, false);
+ 		irdma_gid_change_event(&iwdev->ibdev);
+ 		break;
+ 	case NETDEV_UP:
+ 	case NETDEV_CHANGEADDR:
+ 		irdma_add_arp(iwdev->rf, local_ipaddr6, false,
+-			      netdev->dev_addr);
+-		irdma_if_notify(iwdev, netdev, local_ipaddr6, false, true);
++			      real_dev->dev_addr);
++		irdma_if_notify(iwdev, real_dev, local_ipaddr6, false, true);
+ 		irdma_gid_change_event(&iwdev->ibdev);
+ 		break;
+ 	default:
+@@ -243,14 +251,18 @@ int irdma_net_event(struct notifier_block *notifier, unsigned long event,
+ 		    void *ptr)
+ {
+ 	struct neighbour *neigh = ptr;
++	struct net_device *real_dev, *netdev = (struct net_device *)neigh->dev;
+ 	struct irdma_device *iwdev;
+ 	struct ib_device *ibdev;
+ 	__be32 *p;
+ 	u32 local_ipaddr[4] = {};
+ 	bool ipv4 = true;
+ 
+-	ibdev = ib_device_get_by_netdev((struct net_device *)neigh->dev,
+-					RDMA_DRIVER_IRDMA);
++	real_dev = rdma_vlan_dev_real_dev(netdev);
++	if (!real_dev)
++		real_dev = netdev;
++
++	ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+ 	if (!ibdev)
+ 		return NOTIFY_DONE;
+ 
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 8cd5f9261692d..03d4da16604d4 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -2504,7 +2504,7 @@ static int irdma_dealloc_mw(struct ib_mw *ibmw)
+ 	cqp_info = &cqp_request->info;
+ 	info = &cqp_info->in.u.dealloc_stag.info;
+ 	memset(info, 0, sizeof(*info));
+-	info->pd_id = iwpd->sc_pd.pd_id & 0x00007fff;
++	info->pd_id = iwpd->sc_pd.pd_id;
+ 	info->stag_idx = ibmw->rkey >> IRDMA_CQPSQ_STAG_IDX_S;
+ 	info->mr = false;
+ 	cqp_info->cqp_cmd = IRDMA_OP_DEALLOC_STAG;
+@@ -3016,7 +3016,7 @@ static int irdma_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata)
+ 	cqp_info = &cqp_request->info;
+ 	info = &cqp_info->in.u.dealloc_stag.info;
+ 	memset(info, 0, sizeof(*info));
+-	info->pd_id = iwpd->sc_pd.pd_id & 0x00007fff;
++	info->pd_id = iwpd->sc_pd.pd_id;
+ 	info->stag_idx = ib_mr->rkey >> IRDMA_CQPSQ_STAG_IDX_S;
+ 	info->mr = true;
+ 	if (iwpbl->pbl_allocated)
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 08b7f6bc56c37..15c0884d1f498 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1886,8 +1886,10 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ 				key_level2,
+ 				obj_event,
+ 				GFP_KERNEL);
+-		if (err)
++		if (err) {
++			kfree(obj_event);
+ 			return err;
++		}
+ 		INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 157d862fb8642..2910d78333130 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -585,6 +585,8 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
+ 	ent = &cache->ent[entry];
+ 	spin_lock_irq(&ent->lock);
+ 	if (list_empty(&ent->head)) {
++		queue_adjust_cache_locked(ent);
++		ent->miss++;
+ 		spin_unlock_irq(&ent->lock);
+ 		mr = create_cache_mr(ent);
+ 		if (IS_ERR(mr))
+diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c
+index 38c7b6fb39d70..360a567159fe5 100644
+--- a/drivers/infiniband/sw/rxe/rxe_av.c
++++ b/drivers/infiniband/sw/rxe/rxe_av.c
+@@ -99,11 +99,14 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr)
+ 	av->network_type = type;
+ }
+ 
+-struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt)
++struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp)
+ {
+ 	struct rxe_ah *ah;
+ 	u32 ah_num;
+ 
++	if (ahp)
++		*ahp = NULL;
++
+ 	if (!pkt || !pkt->qp)
+ 		return NULL;
+ 
+@@ -117,10 +120,22 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt)
+ 	if (ah_num) {
+ 		/* only new user provider or kernel client */
+ 		ah = rxe_pool_get_index(&pkt->rxe->ah_pool, ah_num);
+-		if (!ah || ah->ah_num != ah_num || rxe_ah_pd(ah) != pkt->qp->pd) {
++		if (!ah) {
+ 			pr_warn("Unable to find AH matching ah_num\n");
+ 			return NULL;
+ 		}
++
++		if (rxe_ah_pd(ah) != pkt->qp->pd) {
++			pr_warn("PDs don't match for AH and QP\n");
++			rxe_drop_ref(ah);
++			return NULL;
++		}
++
++		if (ahp)
++			*ahp = ah;
++		else
++			rxe_drop_ref(ah);
++
+ 		return &ah->av;
+ 	}
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 1ca43b859d806..42234744dadd4 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -19,7 +19,7 @@ void rxe_av_to_attr(struct rxe_av *av, struct rdma_ah_attr *attr);
+ 
+ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr);
+ 
+-struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt);
++struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp);
+ 
+ /* rxe_cq.c */
+ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
+@@ -102,7 +102,8 @@ void rxe_mw_cleanup(struct rxe_pool_entry *arg);
+ /* rxe_net.c */
+ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
+ 				int paylen, struct rxe_pkt_info *pkt);
+-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb);
++int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
++		struct sk_buff *skb);
+ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+ 		    struct sk_buff *skb);
+ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 2cb810cb890a5..456e960cacd70 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -293,13 +293,13 @@ static void prepare_ipv6_hdr(struct dst_entry *dst, struct sk_buff *skb,
+ 	ip6h->payload_len = htons(skb->len - sizeof(*ip6h));
+ }
+ 
+-static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb)
++static int prepare4(struct rxe_av *av, struct rxe_pkt_info *pkt,
++		    struct sk_buff *skb)
+ {
+ 	struct rxe_qp *qp = pkt->qp;
+ 	struct dst_entry *dst;
+ 	bool xnet = false;
+ 	__be16 df = htons(IP_DF);
+-	struct rxe_av *av = rxe_get_av(pkt);
+ 	struct in_addr *saddr = &av->sgid_addr._sockaddr_in.sin_addr;
+ 	struct in_addr *daddr = &av->dgid_addr._sockaddr_in.sin_addr;
+ 
+@@ -319,11 +319,11 @@ static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+ 	return 0;
+ }
+ 
+-static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb)
++static int prepare6(struct rxe_av *av, struct rxe_pkt_info *pkt,
++		    struct sk_buff *skb)
+ {
+ 	struct rxe_qp *qp = pkt->qp;
+ 	struct dst_entry *dst;
+-	struct rxe_av *av = rxe_get_av(pkt);
+ 	struct in6_addr *saddr = &av->sgid_addr._sockaddr_in6.sin6_addr;
+ 	struct in6_addr *daddr = &av->dgid_addr._sockaddr_in6.sin6_addr;
+ 
+@@ -344,16 +344,17 @@ static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+ 	return 0;
+ }
+ 
+-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb)
++int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
++		struct sk_buff *skb)
+ {
+ 	int err = 0;
+ 
+ 	if (skb->protocol == htons(ETH_P_IP))
+-		err = prepare4(pkt, skb);
++		err = prepare4(av, pkt, skb);
+ 	else if (skb->protocol == htons(ETH_P_IPV6))
+-		err = prepare6(pkt, skb);
++		err = prepare6(av, pkt, skb);
+ 
+-	if (ether_addr_equal(skb->dev->dev_addr, rxe_get_av(pkt)->dmac))
++	if (ether_addr_equal(skb->dev->dev_addr, av->dmac))
+ 		pkt->mask |= RXE_LOOPBACK_MASK;
+ 
+ 	return err;
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 0c9d2af15f3d0..ae2a02f3d3351 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -361,14 +361,14 @@ static inline int get_mtu(struct rxe_qp *qp)
+ }
+ 
+ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
++				       struct rxe_av *av,
+ 				       struct rxe_send_wqe *wqe,
+-				       int opcode, int payload,
++				       int opcode, u32 payload,
+ 				       struct rxe_pkt_info *pkt)
+ {
+ 	struct rxe_dev		*rxe = to_rdev(qp->ibqp.device);
+ 	struct sk_buff		*skb;
+ 	struct rxe_send_wr	*ibwr = &wqe->wr;
+-	struct rxe_av		*av;
+ 	int			pad = (-payload) & 0x3;
+ 	int			paylen;
+ 	int			solicited;
+@@ -378,21 +378,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ 
+ 	/* length from start of bth to end of icrc */
+ 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+-
+-	/* pkt->hdr, port_num and mask are initialized in ifc layer */
+-	pkt->rxe	= rxe;
+-	pkt->opcode	= opcode;
+-	pkt->qp		= qp;
+-	pkt->psn	= qp->req.psn;
+-	pkt->mask	= rxe_opcode[opcode].mask;
+-	pkt->paylen	= paylen;
+-	pkt->wqe	= wqe;
++	pkt->paylen = paylen;
+ 
+ 	/* init skb */
+-	av = rxe_get_av(pkt);
+-	if (!av)
+-		return NULL;
+-
+ 	skb = rxe_init_packet(rxe, av, paylen, pkt);
+ 	if (unlikely(!skb))
+ 		return NULL;
+@@ -453,13 +441,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ 	return skb;
+ }
+ 
+-static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+-		       struct rxe_pkt_info *pkt, struct sk_buff *skb,
+-		       int paylen)
++static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
++			 struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt,
++			 struct sk_buff *skb, u32 paylen)
+ {
+ 	int err;
+ 
+-	err = rxe_prepare(pkt, skb);
++	err = rxe_prepare(av, pkt, skb);
+ 	if (err)
+ 		return err;
+ 
+@@ -503,7 +491,7 @@ static void update_wqe_state(struct rxe_qp *qp,
+ static void update_wqe_psn(struct rxe_qp *qp,
+ 			   struct rxe_send_wqe *wqe,
+ 			   struct rxe_pkt_info *pkt,
+-			   int payload)
++			   u32 payload)
+ {
+ 	/* number of packets left to send including current one */
+ 	int num_pkt = (wqe->dma.resid + payload + qp->mtu - 1) / qp->mtu;
+@@ -546,7 +534,7 @@ static void rollback_state(struct rxe_send_wqe *wqe,
+ }
+ 
+ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+-			 struct rxe_pkt_info *pkt, int payload)
++			 struct rxe_pkt_info *pkt, u32 payload)
+ {
+ 	qp->req.opcode = pkt->opcode;
+ 
+@@ -614,17 +602,20 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ int rxe_requester(void *arg)
+ {
+ 	struct rxe_qp *qp = (struct rxe_qp *)arg;
++	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ 	struct rxe_pkt_info pkt;
+ 	struct sk_buff *skb;
+ 	struct rxe_send_wqe *wqe;
+ 	enum rxe_hdr_mask mask;
+-	int payload;
++	u32 payload;
+ 	int mtu;
+ 	int opcode;
+ 	int ret;
+ 	struct rxe_send_wqe rollback_wqe;
+ 	u32 rollback_psn;
+ 	struct rxe_queue *q = qp->sq.queue;
++	struct rxe_ah *ah;
++	struct rxe_av *av;
+ 
+ 	rxe_add_ref(qp);
+ 
+@@ -711,14 +702,28 @@ next_wqe:
+ 		payload = mtu;
+ 	}
+ 
+-	skb = init_req_packet(qp, wqe, opcode, payload, &pkt);
++	pkt.rxe = rxe;
++	pkt.opcode = opcode;
++	pkt.qp = qp;
++	pkt.psn = qp->req.psn;
++	pkt.mask = rxe_opcode[opcode].mask;
++	pkt.wqe = wqe;
++
++	av = rxe_get_av(&pkt, &ah);
++	if (unlikely(!av)) {
++		pr_err("qp#%d Failed no address vector\n", qp_num(qp));
++		wqe->status = IB_WC_LOC_QP_OP_ERR;
++		goto err_drop_ah;
++	}
++
++	skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt);
+ 	if (unlikely(!skb)) {
+ 		pr_err("qp#%d Failed allocating skb\n", qp_num(qp));
+ 		wqe->status = IB_WC_LOC_QP_OP_ERR;
+-		goto err;
++		goto err_drop_ah;
+ 	}
+ 
+-	ret = finish_packet(qp, wqe, &pkt, skb, payload);
++	ret = finish_packet(qp, av, wqe, &pkt, skb, payload);
+ 	if (unlikely(ret)) {
+ 		pr_debug("qp#%d Error during finish packet\n", qp_num(qp));
+ 		if (ret == -EFAULT)
+@@ -726,9 +731,12 @@ next_wqe:
+ 		else
+ 			wqe->status = IB_WC_LOC_QP_OP_ERR;
+ 		kfree_skb(skb);
+-		goto err;
++		goto err_drop_ah;
+ 	}
+ 
++	if (ah)
++		rxe_drop_ref(ah);
++
+ 	/*
+ 	 * To prevent a race on wqe access between requester and completer,
+ 	 * wqe members state and psn need to be set before calling
+@@ -757,6 +765,9 @@ next_wqe:
+ 
+ 	goto next_wqe;
+ 
++err_drop_ah:
++	if (ah)
++		rxe_drop_ref(ah);
+ err:
+ 	wqe->state = wqe_state_error;
+ 	__rxe_do_task(&qp->comp.task);
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index e8f435fa6e4d7..192cb9a096a14 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -632,7 +632,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
+ 	if (ack->mask & RXE_ATMACK_MASK)
+ 		atmack_set_orig(ack, qp->resp.atomic_orig);
+ 
+-	err = rxe_prepare(ack, skb);
++	err = rxe_prepare(&qp->pri_av, ack, skb);
+ 	if (err) {
+ 		kfree_skb(skb);
+ 		return NULL;
+@@ -814,6 +814,10 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ 			return RESPST_ERR_INVALIDATE_RKEY;
+ 	}
+ 
++	if (pkt->mask & RXE_END_MASK)
++		/* We successfully processed this new request. */
++		qp->resp.msn++;
++
+ 	/* next expected psn, read handles this separately */
+ 	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
+ 	qp->resp.ack_psn = qp->resp.psn;
+@@ -821,11 +825,9 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ 	qp->resp.opcode = pkt->opcode;
+ 	qp->resp.status = IB_WC_SUCCESS;
+ 
+-	if (pkt->mask & RXE_COMP_MASK) {
+-		/* We successfully processed this new request. */
+-		qp->resp.msn++;
++	if (pkt->mask & RXE_COMP_MASK)
+ 		return RESPST_COMPLETE;
+-	} else if (qp_type(qp) == IB_QPT_RC)
++	else if (qp_type(qp) == IB_QPT_RC)
+ 		return RESPST_ACKNOWLEDGE;
+ 	else
+ 		return RESPST_CLEANUP;
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index c3139bc2aa0db..ccaeb24263854 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -2285,12 +2285,6 @@ int input_register_device(struct input_dev *dev)
+ 	/* KEY_RESERVED is not supposed to be transmitted to userspace. */
+ 	__clear_bit(KEY_RESERVED, dev->keybit);
+ 
+-	/* Buttonpads should not map BTN_RIGHT and/or BTN_MIDDLE. */
+-	if (test_bit(INPUT_PROP_BUTTONPAD, dev->propbit)) {
+-		__clear_bit(BTN_RIGHT, dev->keybit);
+-		__clear_bit(BTN_MIDDLE, dev->keybit);
+-	}
+-
+ 	/* Make sure that bitmasks not mentioned in dev->evbit are clean. */
+ 	input_cleanse_bitmasks(dev);
+ 
+diff --git a/drivers/input/touchscreen/zinitix.c b/drivers/input/touchscreen/zinitix.c
+index 1e70b8d2a8d79..400957f4c8c9c 100644
+--- a/drivers/input/touchscreen/zinitix.c
++++ b/drivers/input/touchscreen/zinitix.c
+@@ -135,7 +135,7 @@ struct point_coord {
+ 
+ struct touch_event {
+ 	__le16	status;
+-	u8	finger_cnt;
++	u8	finger_mask;
+ 	u8	time_stamp;
+ 	struct point_coord point_coord[MAX_SUPPORTED_FINGER_NUM];
+ };
+@@ -311,11 +311,32 @@ static int zinitix_send_power_on_sequence(struct bt541_ts_data *bt541)
+ static void zinitix_report_finger(struct bt541_ts_data *bt541, int slot,
+ 				  const struct point_coord *p)
+ {
++	u16 x, y;
++
++	if (unlikely(!(p->sub_status &
++		       (SUB_BIT_UP | SUB_BIT_DOWN | SUB_BIT_MOVE)))) {
++		dev_dbg(&bt541->client->dev, "unknown finger event %#02x\n",
++			p->sub_status);
++		return;
++	}
++
++	x = le16_to_cpu(p->x);
++	y = le16_to_cpu(p->y);
++
+ 	input_mt_slot(bt541->input_dev, slot);
+-	input_mt_report_slot_state(bt541->input_dev, MT_TOOL_FINGER, true);
+-	touchscreen_report_pos(bt541->input_dev, &bt541->prop,
+-			       le16_to_cpu(p->x), le16_to_cpu(p->y), true);
+-	input_report_abs(bt541->input_dev, ABS_MT_TOUCH_MAJOR, p->width);
++	if (input_mt_report_slot_state(bt541->input_dev, MT_TOOL_FINGER,
++				       !(p->sub_status & SUB_BIT_UP))) {
++		touchscreen_report_pos(bt541->input_dev,
++				       &bt541->prop, x, y, true);
++		input_report_abs(bt541->input_dev,
++				 ABS_MT_TOUCH_MAJOR, p->width);
++		dev_dbg(&bt541->client->dev, "finger %d %s (%u, %u)\n",
++			slot, p->sub_status & SUB_BIT_DOWN ? "down" : "move",
++			x, y);
++	} else {
++		dev_dbg(&bt541->client->dev, "finger %d up (%u, %u)\n",
++			slot, x, y);
++	}
+ }
+ 
+ static irqreturn_t zinitix_ts_irq_handler(int irq, void *bt541_handler)
+@@ -323,6 +344,7 @@ static irqreturn_t zinitix_ts_irq_handler(int irq, void *bt541_handler)
+ 	struct bt541_ts_data *bt541 = bt541_handler;
+ 	struct i2c_client *client = bt541->client;
+ 	struct touch_event touch_event;
++	unsigned long finger_mask;
+ 	int error;
+ 	int i;
+ 
+@@ -335,10 +357,14 @@ static irqreturn_t zinitix_ts_irq_handler(int irq, void *bt541_handler)
+ 		goto out;
+ 	}
+ 
+-	for (i = 0; i < MAX_SUPPORTED_FINGER_NUM; i++)
+-		if (touch_event.point_coord[i].sub_status & SUB_BIT_EXIST)
+-			zinitix_report_finger(bt541, i,
+-					      &touch_event.point_coord[i]);
++	finger_mask = touch_event.finger_mask;
++	for_each_set_bit(i, &finger_mask, MAX_SUPPORTED_FINGER_NUM) {
++		const struct point_coord *p = &touch_event.point_coord[i];
++
++		/* Only process contacts that are actually reported */
++		if (p->sub_status & SUB_BIT_EXIST)
++			zinitix_report_finger(bt541, i, p);
++	}
+ 
+ 	input_mt_sync_frame(bt541->input_dev);
+ 	input_sync(bt541->input_dev);
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index 920fcc27c9a1e..cae5a73ff518c 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -154,10 +154,11 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
+ 	cached_iova = to_iova(iovad->cached32_node);
+ 	if (free == cached_iova ||
+ 	    (free->pfn_hi < iovad->dma_32bit_pfn &&
+-	     free->pfn_lo >= cached_iova->pfn_lo)) {
++	     free->pfn_lo >= cached_iova->pfn_lo))
+ 		iovad->cached32_node = rb_next(&free->node);
++
++	if (free->pfn_lo < iovad->dma_32bit_pfn)
+ 		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
+-	}
+ 
+ 	cached_iova = to_iova(iovad->cached_node);
+ 	if (free->pfn_lo >= cached_iova->pfn_lo)
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index ca752bdc710f6..61bd9a3004ede 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1006,7 +1006,9 @@ static int ipmmu_probe(struct platform_device *pdev)
+ 	bitmap_zero(mmu->ctx, IPMMU_CTX_MAX);
+ 	mmu->features = of_device_get_match_data(&pdev->dev);
+ 	memset(mmu->utlb_ctx, IPMMU_CTX_INVALID, mmu->features->num_utlbs);
+-	dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
++	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
++	if (ret)
++		return ret;
+ 
+ 	/* Map I/O memory and request IRQ. */
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 25b834104790c..5971a11686662 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -562,22 +562,52 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ {
+ 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+ 	struct mtk_iommu_data *data;
++	struct device_link *link;
++	struct device *larbdev;
++	unsigned int larbid, larbidx, i;
+ 
+ 	if (!fwspec || fwspec->ops != &mtk_iommu_ops)
+ 		return ERR_PTR(-ENODEV); /* Not a iommu client device */
+ 
+ 	data = dev_iommu_priv_get(dev);
+ 
++	/*
++	 * Link the consumer device with the smi-larb device(supplier).
++	 * The device that connects with each a larb is a independent HW.
++	 * All the ports in each a device should be in the same larbs.
++	 */
++	larbid = MTK_M4U_TO_LARB(fwspec->ids[0]);
++	for (i = 1; i < fwspec->num_ids; i++) {
++		larbidx = MTK_M4U_TO_LARB(fwspec->ids[i]);
++		if (larbid != larbidx) {
++			dev_err(dev, "Can only use one larb. Fail@larb%d-%d.\n",
++				larbid, larbidx);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++	larbdev = data->larb_imu[larbid].dev;
++	link = device_link_add(dev, larbdev,
++			       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
++	if (!link)
++		dev_err(dev, "Unable to link %s\n", dev_name(larbdev));
+ 	return &data->iommu;
+ }
+ 
+ static void mtk_iommu_release_device(struct device *dev)
+ {
+ 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
++	struct mtk_iommu_data *data;
++	struct device *larbdev;
++	unsigned int larbid;
+ 
+ 	if (!fwspec || fwspec->ops != &mtk_iommu_ops)
+ 		return;
+ 
++	data = dev_iommu_priv_get(dev);
++	larbid = MTK_M4U_TO_LARB(fwspec->ids[0]);
++	larbdev = data->larb_imu[larbid].dev;
++	device_link_remove(dev, larbdev);
++
+ 	iommu_fwspec_free(dev);
+ }
+ 
+@@ -848,7 +878,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ 		plarbdev = of_find_device_by_node(larbnode);
+ 		if (!plarbdev) {
+ 			of_node_put(larbnode);
+-			return -EPROBE_DEFER;
++			return -ENODEV;
+ 		}
+ 		data->larb_imu[id].dev = &plarbdev->dev;
+ 
+diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
+index be22fcf988cee..bc7ee90b9373d 100644
+--- a/drivers/iommu/mtk_iommu_v1.c
++++ b/drivers/iommu/mtk_iommu_v1.c
+@@ -423,7 +423,18 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+ 	struct of_phandle_args iommu_spec;
+ 	struct mtk_iommu_data *data;
+-	int err, idx = 0;
++	int err, idx = 0, larbid, larbidx;
++	struct device_link *link;
++	struct device *larbdev;
++
++	/*
++	 * In the deferred case, free the existed fwspec.
++	 * Always initialize the fwspec internally.
++	 */
++	if (fwspec) {
++		iommu_fwspec_free(dev);
++		fwspec = dev_iommu_fwspec_get(dev);
++	}
+ 
+ 	while (!of_parse_phandle_with_args(dev->of_node, "iommus",
+ 					   "#iommu-cells",
+@@ -444,6 +455,23 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ 
+ 	data = dev_iommu_priv_get(dev);
+ 
++	/* Link the consumer device with the smi-larb device(supplier) */
++	larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
++	for (idx = 1; idx < fwspec->num_ids; idx++) {
++		larbidx = mt2701_m4u_to_larb(fwspec->ids[idx]);
++		if (larbid != larbidx) {
++			dev_err(dev, "Can only use one larb. Fail@larb%d-%d.\n",
++				larbid, larbidx);
++			return ERR_PTR(-EINVAL);
++		}
++	}
++
++	larbdev = data->larb_imu[larbid].dev;
++	link = device_link_add(dev, larbdev,
++			       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
++	if (!link)
++		dev_err(dev, "Unable to link %s\n", dev_name(larbdev));
++
+ 	return &data->iommu;
+ }
+ 
+@@ -464,10 +492,18 @@ static void mtk_iommu_probe_finalize(struct device *dev)
+ static void mtk_iommu_release_device(struct device *dev)
+ {
+ 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
++	struct mtk_iommu_data *data;
++	struct device *larbdev;
++	unsigned int larbid;
+ 
+ 	if (!fwspec || fwspec->ops != &mtk_iommu_ops)
+ 		return;
+ 
++	data = dev_iommu_priv_get(dev);
++	larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
++	larbdev = data->larb_imu[larbid].dev;
++	device_link_remove(dev, larbdev);
++
+ 	iommu_fwspec_free(dev);
+ }
+ 
+@@ -595,7 +631,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ 		plarbdev = of_find_device_by_node(larbnode);
+ 		if (!plarbdev) {
+ 			of_node_put(larbnode);
+-			return -EPROBE_DEFER;
++			return -ENODEV;
+ 		}
+ 		data->larb_imu[i].dev = &plarbdev->dev;
+ 
+diff --git a/drivers/irqchip/irq-nvic.c b/drivers/irqchip/irq-nvic.c
+index ba4759b3e2693..94230306e0eee 100644
+--- a/drivers/irqchip/irq-nvic.c
++++ b/drivers/irqchip/irq-nvic.c
+@@ -107,6 +107,7 @@ static int __init nvic_of_init(struct device_node *node,
+ 
+ 	if (!nvic_irq_domain) {
+ 		pr_warn("Failed to allocate irq domain\n");
++		iounmap(nvic_base);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -116,6 +117,7 @@ static int __init nvic_of_init(struct device_node *node,
+ 	if (ret) {
+ 		pr_warn("Failed to allocate irq chips\n");
+ 		irq_domain_remove(nvic_irq_domain);
++		iounmap(nvic_base);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
+index 173e6520e06ec..c0b457f26ec41 100644
+--- a/drivers/irqchip/qcom-pdc.c
++++ b/drivers/irqchip/qcom-pdc.c
+@@ -56,17 +56,18 @@ static u32 pdc_reg_read(int reg, u32 i)
+ static void pdc_enable_intr(struct irq_data *d, bool on)
+ {
+ 	int pin_out = d->hwirq;
++	unsigned long flags;
+ 	u32 index, mask;
+ 	u32 enable;
+ 
+ 	index = pin_out / 32;
+ 	mask = pin_out % 32;
+ 
+-	raw_spin_lock(&pdc_lock);
++	raw_spin_lock_irqsave(&pdc_lock, flags);
+ 	enable = pdc_reg_read(IRQ_ENABLE_BANK, index);
+ 	enable = on ? ENABLE_INTR(enable, mask) : CLEAR_INTR(enable, mask);
+ 	pdc_reg_write(IRQ_ENABLE_BANK, index, enable);
+-	raw_spin_unlock(&pdc_lock);
++	raw_spin_unlock_irqrestore(&pdc_lock, flags);
+ }
+ 
+ static void qcom_pdc_gic_disable(struct irq_data *d)
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 544de2db64531..a0c252415c868 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -14,6 +14,7 @@
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/pm_runtime.h>
++#include <linux/suspend.h>
+ #include <linux/slab.h>
+ 
+ #define IMX_MU_CHANS		16
+@@ -76,6 +77,7 @@ struct imx_mu_priv {
+ 	const struct imx_mu_dcfg	*dcfg;
+ 	struct clk		*clk;
+ 	int			irq;
++	bool			suspend;
+ 
+ 	u32 xcr[4];
+ 
+@@ -334,6 +336,9 @@ static irqreturn_t imx_mu_isr(int irq, void *p)
+ 		return IRQ_NONE;
+ 	}
+ 
++	if (priv->suspend)
++		pm_system_wakeup();
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -702,6 +707,8 @@ static int __maybe_unused imx_mu_suspend_noirq(struct device *dev)
+ 			priv->xcr[i] = imx_mu_read(priv, priv->dcfg->xCR[i]);
+ 	}
+ 
++	priv->suspend = true;
++
+ 	return 0;
+ }
+ 
+@@ -718,11 +725,13 @@ static int __maybe_unused imx_mu_resume_noirq(struct device *dev)
+ 	 * send failed, may lead to system freeze. This issue
+ 	 * is observed by testing freeze mode suspend.
+ 	 */
+-	if (!imx_mu_read(priv, priv->dcfg->xCR[0]) && !priv->clk) {
++	if (!priv->clk && !imx_mu_read(priv, priv->dcfg->xCR[0])) {
+ 		for (i = 0; i < IMX_MU_xCR_MAX; i++)
+ 			imx_mu_write(priv, priv->xcr[i], priv->dcfg->xCR[i]);
+ 	}
+ 
++	priv->suspend = false;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mailbox/tegra-hsp.c b/drivers/mailbox/tegra-hsp.c
+index acd0675da681e..78f7265039c66 100644
+--- a/drivers/mailbox/tegra-hsp.c
++++ b/drivers/mailbox/tegra-hsp.c
+@@ -412,6 +412,11 @@ static int tegra_hsp_mailbox_flush(struct mbox_chan *chan,
+ 		value = tegra_hsp_channel_readl(ch, HSP_SM_SHRD_MBOX);
+ 		if ((value & HSP_SM_SHRD_MBOX_FULL) == 0) {
+ 			mbox_chan_txdone(chan, 0);
++
++			/* Wait until channel is empty */
++			if (chan->active_req != NULL)
++				continue;
++
+ 			return 0;
+ 		}
+ 
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 88c573eeb5982..ad9f16689419d 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2060,9 +2060,11 @@ int bch_btree_check(struct cache_set *c)
+ 		}
+ 	}
+ 
++	/*
++	 * Must wait for all threads to stop.
++	 */
+ 	wait_event_interruptible(check_state->wait,
+-				 atomic_read(&check_state->started) == 0 ||
+-				  test_bit(CACHE_SET_IO_DISABLE, &c->flags));
++				 atomic_read(&check_state->started) == 0);
+ 
+ 	for (i = 0; i < check_state->total_threads; i++) {
+ 		if (check_state->infos[i].result) {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index c7560f66dca88..68d3dd6b4f119 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -998,9 +998,11 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 		}
+ 	}
+ 
++	/*
++	 * Must wait for all threads to stop.
++	 */
+ 	wait_event_interruptible(state->wait,
+-		 atomic_read(&state->started) == 0 ||
+-		 test_bit(CACHE_SET_IO_DISABLE, &c->flags));
++		 atomic_read(&state->started) == 0);
+ 
+ out:
+ 	kfree(state);
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index b855fef4f38a6..adb9604e85ac4 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -65,6 +65,8 @@ struct mapped_device {
+ 	struct gendisk *disk;
+ 	struct dax_device *dax_dev;
+ 
++	unsigned long __percpu *pending_io;
++
+ 	/*
+ 	 * A list of ios that arrived while we were suspended.
+ 	 */
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index d4ae31558826a..f51aea71cb036 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2590,7 +2590,7 @@ static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string
+ 
+ static int get_key_size(char **key_string)
+ {
+-	return (*key_string[0] == ':') ? -EINVAL : strlen(*key_string) >> 1;
++	return (*key_string[0] == ':') ? -EINVAL : (int)(strlen(*key_string) >> 1);
+ }
+ 
+ #endif /* CONFIG_KEYS */
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 7af242de3202e..5025c7f048ed5 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2471,9 +2471,11 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
+ 					dm_integrity_io_error(ic, "invalid sector in journal", -EIO);
+ 					sec &= ~(sector_t)(ic->sectors_per_block - 1);
+ 				}
++				if (unlikely(sec >= ic->provided_data_sectors)) {
++					journal_entry_set_unused(je);
++					continue;
++				}
+ 			}
+-			if (unlikely(sec >= ic->provided_data_sectors))
+-				continue;
+ 			get_area_and_offset(ic, sec, &area, &offset);
+ 			restore_last_bytes(ic, access_journal_data(ic, i, j), je);
+ 			for (k = j + 1; k < ic->journal_section_entries; k++) {
+diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
+index 35d368c418d03..0e039a8c0bf2e 100644
+--- a/drivers/md/dm-stats.c
++++ b/drivers/md/dm-stats.c
+@@ -195,6 +195,7 @@ void dm_stats_init(struct dm_stats *stats)
+ 
+ 	mutex_init(&stats->mutex);
+ 	INIT_LIST_HEAD(&stats->list);
++	stats->precise_timestamps = false;
+ 	stats->last = alloc_percpu(struct dm_stats_last_position);
+ 	for_each_possible_cpu(cpu) {
+ 		last = per_cpu_ptr(stats->last, cpu);
+@@ -231,6 +232,22 @@ void dm_stats_cleanup(struct dm_stats *stats)
+ 	mutex_destroy(&stats->mutex);
+ }
+ 
++static void dm_stats_recalc_precise_timestamps(struct dm_stats *stats)
++{
++	struct list_head *l;
++	struct dm_stat *tmp_s;
++	bool precise_timestamps = false;
++
++	list_for_each(l, &stats->list) {
++		tmp_s = container_of(l, struct dm_stat, list_entry);
++		if (tmp_s->stat_flags & STAT_PRECISE_TIMESTAMPS) {
++			precise_timestamps = true;
++			break;
++		}
++	}
++	stats->precise_timestamps = precise_timestamps;
++}
++
+ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ 			   sector_t step, unsigned stat_flags,
+ 			   unsigned n_histogram_entries,
+@@ -376,6 +393,9 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ 	}
+ 	ret_id = s->id;
+ 	list_add_tail_rcu(&s->list_entry, l);
++
++	dm_stats_recalc_precise_timestamps(stats);
++
+ 	mutex_unlock(&stats->mutex);
+ 
+ 	resume_callback(md);
+@@ -418,6 +438,9 @@ static int dm_stats_delete(struct dm_stats *stats, int id)
+ 	}
+ 
+ 	list_del_rcu(&s->list_entry);
++
++	dm_stats_recalc_precise_timestamps(stats);
++
+ 	mutex_unlock(&stats->mutex);
+ 
+ 	/*
+@@ -621,13 +644,14 @@ static void __dm_stat_bio(struct dm_stat *s, int bi_rw,
+ 
+ void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw,
+ 			 sector_t bi_sector, unsigned bi_sectors, bool end,
+-			 unsigned long duration_jiffies,
++			 unsigned long start_time,
+ 			 struct dm_stats_aux *stats_aux)
+ {
+ 	struct dm_stat *s;
+ 	sector_t end_sector;
+ 	struct dm_stats_last_position *last;
+ 	bool got_precise_time;
++	unsigned long duration_jiffies = 0;
+ 
+ 	if (unlikely(!bi_sectors))
+ 		return;
+@@ -647,16 +671,16 @@ void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw,
+ 				       ));
+ 		WRITE_ONCE(last->last_sector, end_sector);
+ 		WRITE_ONCE(last->last_rw, bi_rw);
+-	}
++	} else
++		duration_jiffies = jiffies - start_time;
+ 
+ 	rcu_read_lock();
+ 
+ 	got_precise_time = false;
+ 	list_for_each_entry_rcu(s, &stats->list, list_entry) {
+ 		if (s->stat_flags & STAT_PRECISE_TIMESTAMPS && !got_precise_time) {
+-			if (!end)
+-				stats_aux->duration_ns = ktime_to_ns(ktime_get());
+-			else
++			/* start (!end) duration_ns is set by DM core's alloc_io() */
++			if (end)
+ 				stats_aux->duration_ns = ktime_to_ns(ktime_get()) - stats_aux->duration_ns;
+ 			got_precise_time = true;
+ 		}
+diff --git a/drivers/md/dm-stats.h b/drivers/md/dm-stats.h
+index 2ddfae678f320..09c81a1ec057d 100644
+--- a/drivers/md/dm-stats.h
++++ b/drivers/md/dm-stats.h
+@@ -13,8 +13,7 @@ struct dm_stats {
+ 	struct mutex mutex;
+ 	struct list_head list;	/* list of struct dm_stat */
+ 	struct dm_stats_last_position __percpu *last;
+-	sector_t last_sector;
+-	unsigned last_rw;
++	bool precise_timestamps;
+ };
+ 
+ struct dm_stats_aux {
+@@ -32,7 +31,7 @@ int dm_stats_message(struct mapped_device *md, unsigned argc, char **argv,
+ 
+ void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw,
+ 			 sector_t bi_sector, unsigned bi_sectors, bool end,
+-			 unsigned long duration_jiffies,
++			 unsigned long start_time,
+ 			 struct dm_stats_aux *aux);
+ 
+ static inline bool dm_stats_used(struct dm_stats *st)
+@@ -40,4 +39,10 @@ static inline bool dm_stats_used(struct dm_stats *st)
+ 	return !list_empty(&st->list);
+ }
+ 
++static inline void dm_stats_record_start(struct dm_stats *stats, struct dm_stats_aux *aux)
++{
++	if (unlikely(stats->precise_timestamps))
++		aux->duration_ns = ktime_to_ns(ktime_get());
++}
++
+ #endif
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 356a0183e1ad1..5fd3660e07b5c 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -484,33 +484,48 @@ u64 dm_start_time_ns_from_clone(struct bio *bio)
+ }
+ EXPORT_SYMBOL_GPL(dm_start_time_ns_from_clone);
+ 
+-static void start_io_acct(struct dm_io *io)
++static bool bio_is_flush_with_data(struct bio *bio)
+ {
+-	struct mapped_device *md = io->md;
+-	struct bio *bio = io->orig_bio;
+-
+-	bio_start_io_acct_time(bio, io->start_time);
+-	if (unlikely(dm_stats_used(&md->stats)))
+-		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+-				    bio->bi_iter.bi_sector, bio_sectors(bio),
+-				    false, 0, &io->stats_aux);
++	return ((bio->bi_opf & REQ_PREFLUSH) && bio->bi_iter.bi_size);
+ }
+ 
+-static void end_io_acct(struct mapped_device *md, struct bio *bio,
+-			unsigned long start_time, struct dm_stats_aux *stats_aux)
++static void dm_io_acct(bool end, struct mapped_device *md, struct bio *bio,
++		       unsigned long start_time, struct dm_stats_aux *stats_aux)
+ {
+-	unsigned long duration = jiffies - start_time;
++	bool is_flush_with_data;
++	unsigned int bi_size;
+ 
+-	bio_end_io_acct(bio, start_time);
++	/* If REQ_PREFLUSH set save any payload but do not account it */
++	is_flush_with_data = bio_is_flush_with_data(bio);
++	if (is_flush_with_data) {
++		bi_size = bio->bi_iter.bi_size;
++		bio->bi_iter.bi_size = 0;
++	}
++
++	if (!end)
++		bio_start_io_acct_time(bio, start_time);
++	else
++		bio_end_io_acct(bio, start_time);
+ 
+ 	if (unlikely(dm_stats_used(&md->stats)))
+ 		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+ 				    bio->bi_iter.bi_sector, bio_sectors(bio),
+-				    true, duration, stats_aux);
++				    end, start_time, stats_aux);
++
++	/* Restore bio's payload so it does get accounted upon requeue */
++	if (is_flush_with_data)
++		bio->bi_iter.bi_size = bi_size;
++}
++
++static void start_io_acct(struct dm_io *io)
++{
++	dm_io_acct(false, io->md, io->orig_bio, io->start_time, &io->stats_aux);
++}
+ 
+-	/* nudge anyone waiting on suspend queue */
+-	if (unlikely(wq_has_sleeper(&md->wait)))
+-		wake_up(&md->wait);
++static void end_io_acct(struct mapped_device *md, struct bio *bio,
++			unsigned long start_time, struct dm_stats_aux *stats_aux)
++{
++	dm_io_acct(true, md, bio, start_time, stats_aux);
+ }
+ 
+ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
+@@ -531,12 +546,15 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
+ 	io->magic = DM_IO_MAGIC;
+ 	io->status = 0;
+ 	atomic_set(&io->io_count, 1);
++	this_cpu_inc(*md->pending_io);
+ 	io->orig_bio = bio;
+ 	io->md = md;
+ 	spin_lock_init(&io->endio_lock);
+ 
+ 	io->start_time = jiffies;
+ 
++	dm_stats_record_start(&md->stats, &io->stats_aux);
++
+ 	return io;
+ }
+ 
+@@ -826,11 +844,17 @@ void dm_io_dec_pending(struct dm_io *io, blk_status_t error)
+ 		stats_aux = io->stats_aux;
+ 		free_io(md, io);
+ 		end_io_acct(md, bio, start_time, &stats_aux);
++		smp_wmb();
++		this_cpu_dec(*md->pending_io);
++
++		/* nudge anyone waiting on suspend queue */
++		if (unlikely(wq_has_sleeper(&md->wait)))
++			wake_up(&md->wait);
+ 
+ 		if (io_error == BLK_STS_DM_REQUEUE)
+ 			return;
+ 
+-		if ((bio->bi_opf & REQ_PREFLUSH) && bio->bi_iter.bi_size) {
++		if (bio_is_flush_with_data(bio)) {
+ 			/*
+ 			 * Preflush done for flush with data, reissue
+ 			 * without REQ_PREFLUSH.
+@@ -1674,6 +1698,7 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ 		md->dax_dev = NULL;
+ 	}
+ 
++	dm_cleanup_zoned_dev(md);
+ 	if (md->disk) {
+ 		spin_lock(&_minor_lock);
+ 		md->disk->private_data = NULL;
+@@ -1686,6 +1711,11 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ 		blk_cleanup_disk(md->disk);
+ 	}
+ 
++	if (md->pending_io) {
++		free_percpu(md->pending_io);
++		md->pending_io = NULL;
++	}
++
+ 	cleanup_srcu_struct(&md->io_barrier);
+ 
+ 	mutex_destroy(&md->suspend_lock);
+@@ -1694,7 +1724,6 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ 	mutex_destroy(&md->swap_bios_lock);
+ 
+ 	dm_mq_cleanup_mapped_device(md);
+-	dm_cleanup_zoned_dev(md);
+ }
+ 
+ /*
+@@ -1784,6 +1813,10 @@ static struct mapped_device *alloc_dev(int minor)
+ 	if (!md->wq)
+ 		goto bad;
+ 
++	md->pending_io = alloc_percpu(unsigned long);
++	if (!md->pending_io)
++		goto bad;
++
+ 	dm_stats_init(&md->stats);
+ 
+ 	/* Populate the mapping, nobody knows we exist yet */
+@@ -2191,16 +2224,13 @@ void dm_put(struct mapped_device *md)
+ }
+ EXPORT_SYMBOL_GPL(dm_put);
+ 
+-static bool md_in_flight_bios(struct mapped_device *md)
++static bool dm_in_flight_bios(struct mapped_device *md)
+ {
+ 	int cpu;
+-	struct block_device *part = dm_disk(md)->part0;
+-	long sum = 0;
++	unsigned long sum = 0;
+ 
+-	for_each_possible_cpu(cpu) {
+-		sum += part_stat_local_read_cpu(part, in_flight[0], cpu);
+-		sum += part_stat_local_read_cpu(part, in_flight[1], cpu);
+-	}
++	for_each_possible_cpu(cpu)
++		sum += *per_cpu_ptr(md->pending_io, cpu);
+ 
+ 	return sum != 0;
+ }
+@@ -2213,7 +2243,7 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, unsigned int ta
+ 	while (true) {
+ 		prepare_to_wait(&md->wait, &wait, task_state);
+ 
+-		if (!md_in_flight_bios(md))
++		if (!dm_in_flight_bios(md))
+ 			break;
+ 
+ 		if (signal_pending_state(task_state, current)) {
+@@ -2225,6 +2255,8 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, unsigned int ta
+ 	}
+ 	finish_wait(&md->wait, &wait);
+ 
++	smp_rmb();
++
+ 	return r;
+ }
+ 
+diff --git a/drivers/media/i2c/adv7511-v4l2.c b/drivers/media/i2c/adv7511-v4l2.c
+index 41f4e749a859c..2217004264e4b 100644
+--- a/drivers/media/i2c/adv7511-v4l2.c
++++ b/drivers/media/i2c/adv7511-v4l2.c
+@@ -544,7 +544,7 @@ static void log_infoframe(struct v4l2_subdev *sd, const struct adv7511_cfg_read_
+ 	buffer[3] = 0;
+ 	buffer[3] = hdmi_infoframe_checksum(buffer, len + 4);
+ 
+-	if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
++	if (hdmi_infoframe_unpack(&frame, buffer, len + 4) < 0) {
+ 		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
+ 		return;
+ 	}
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index 44768b59a6ff0..0ce323836dad1 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2484,7 +2484,7 @@ static int adv76xx_read_infoframe(struct v4l2_subdev *sd, int index,
+ 		buffer[i + 3] = infoframe_read(sd,
+ 				       adv76xx_cri[index].payload_addr + i);
+ 
+-	if (hdmi_infoframe_unpack(frame, buffer, sizeof(buffer)) < 0) {
++	if (hdmi_infoframe_unpack(frame, buffer, len + 3) < 0) {
+ 		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__,
+ 			 adv76xx_cri[index].desc);
+ 		return -ENOENT;
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index 7f8acbdf0db4a..8ab4c63839b49 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -2593,7 +2593,7 @@ static void log_infoframe(struct v4l2_subdev *sd, const struct adv7842_cfg_read_
+ 	for (i = 0; i < len; i++)
+ 		buffer[i + 3] = infoframe_read(sd, cri->payload_addr + i);
+ 
+-	if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
++	if (hdmi_infoframe_unpack(&frame, buffer, len + 3) < 0) {
+ 		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
+ 		return;
+ 	}
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index ddbd71394db33..db5a19babe67d 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -2293,7 +2293,6 @@ static int ov5640_set_fmt(struct v4l2_subdev *sd,
+ 	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+ 	const struct ov5640_mode_info *new_mode;
+ 	struct v4l2_mbus_framefmt *mbus_fmt = &format->format;
+-	struct v4l2_mbus_framefmt *fmt;
+ 	int ret;
+ 
+ 	if (format->pad != 0)
+@@ -2311,12 +2310,10 @@ static int ov5640_set_fmt(struct v4l2_subdev *sd,
+ 	if (ret)
+ 		goto out;
+ 
+-	if (format->which == V4L2_SUBDEV_FORMAT_TRY)
+-		fmt = v4l2_subdev_get_try_format(sd, sd_state, 0);
+-	else
+-		fmt = &sensor->fmt;
+-
+-	*fmt = *mbus_fmt;
++	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
++		*v4l2_subdev_get_try_format(sd, sd_state, 0) = *mbus_fmt;
++		goto out;
++	}
+ 
+ 	if (new_mode != sensor->current_mode) {
+ 		sensor->current_mode = new_mode;
+@@ -2325,6 +2322,9 @@ static int ov5640_set_fmt(struct v4l2_subdev *sd,
+ 	if (mbus_fmt->code != sensor->fmt.code)
+ 		sensor->pending_fmt_change = true;
+ 
++	/* update format even if code is unchanged, resolution might change */
++	sensor->fmt = *mbus_fmt;
++
+ 	__v4l2_ctrl_s_ctrl_int64(sensor->ctrls.pixel_rate,
+ 				 ov5640_calc_pixel_rate(sensor));
+ out:
+diff --git a/drivers/media/i2c/ov5648.c b/drivers/media/i2c/ov5648.c
+index 947d437ed0efe..ef8b52dc9401d 100644
+--- a/drivers/media/i2c/ov5648.c
++++ b/drivers/media/i2c/ov5648.c
+@@ -639,7 +639,7 @@ struct ov5648_ctrls {
+ 	struct v4l2_ctrl *pixel_rate;
+ 
+ 	struct v4l2_ctrl_handler handler;
+-} __packed;
++};
+ 
+ struct ov5648_sensor {
+ 	struct device *dev;
+@@ -1778,8 +1778,14 @@ static int ov5648_state_configure(struct ov5648_sensor *sensor,
+ 
+ static int ov5648_state_init(struct ov5648_sensor *sensor)
+ {
+-	return ov5648_state_configure(sensor, &ov5648_modes[0],
+-				      ov5648_mbus_codes[0]);
++	int ret;
++
++	mutex_lock(&sensor->mutex);
++	ret = ov5648_state_configure(sensor, &ov5648_modes[0],
++				     ov5648_mbus_codes[0]);
++	mutex_unlock(&sensor->mutex);
++
++	return ret;
+ }
+ 
+ /* Sensor Base */
+diff --git a/drivers/media/i2c/ov6650.c b/drivers/media/i2c/ov6650.c
+index f67412150b16b..eb59dc8bb5929 100644
+--- a/drivers/media/i2c/ov6650.c
++++ b/drivers/media/i2c/ov6650.c
+@@ -472,9 +472,16 @@ static int ov6650_get_selection(struct v4l2_subdev *sd,
+ {
+ 	struct i2c_client *client = v4l2_get_subdevdata(sd);
+ 	struct ov6650 *priv = to_ov6650(client);
++	struct v4l2_rect *rect;
+ 
+-	if (sel->which != V4L2_SUBDEV_FORMAT_ACTIVE)
+-		return -EINVAL;
++	if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
++		/* pre-select try crop rectangle */
++		rect = &sd_state->pads->try_crop;
++
++	} else {
++		/* pre-select active crop rectangle */
++		rect = &priv->rect;
++	}
+ 
+ 	switch (sel->target) {
+ 	case V4L2_SEL_TGT_CROP_BOUNDS:
+@@ -483,14 +490,33 @@ static int ov6650_get_selection(struct v4l2_subdev *sd,
+ 		sel->r.width = W_CIF;
+ 		sel->r.height = H_CIF;
+ 		return 0;
++
+ 	case V4L2_SEL_TGT_CROP:
+-		sel->r = priv->rect;
++		/* use selected crop rectangle */
++		sel->r = *rect;
+ 		return 0;
++
+ 	default:
+ 		return -EINVAL;
+ 	}
+ }
+ 
++static bool is_unscaled_ok(int width, int height, struct v4l2_rect *rect)
++{
++	return width > rect->width >> 1 || height > rect->height >> 1;
++}
++
++static void ov6650_bind_align_crop_rectangle(struct v4l2_rect *rect)
++{
++	v4l_bound_align_image(&rect->width, 2, W_CIF, 1,
++			      &rect->height, 2, H_CIF, 1, 0);
++	v4l_bound_align_image(&rect->left, DEF_HSTRT << 1,
++			      (DEF_HSTRT << 1) + W_CIF - (__s32)rect->width, 1,
++			      &rect->top, DEF_VSTRT << 1,
++			      (DEF_VSTRT << 1) + H_CIF - (__s32)rect->height,
++			      1, 0);
++}
++
+ static int ov6650_set_selection(struct v4l2_subdev *sd,
+ 		struct v4l2_subdev_state *sd_state,
+ 		struct v4l2_subdev_selection *sel)
+@@ -499,18 +525,30 @@ static int ov6650_set_selection(struct v4l2_subdev *sd,
+ 	struct ov6650 *priv = to_ov6650(client);
+ 	int ret;
+ 
+-	if (sel->which != V4L2_SUBDEV_FORMAT_ACTIVE ||
+-	    sel->target != V4L2_SEL_TGT_CROP)
++	if (sel->target != V4L2_SEL_TGT_CROP)
+ 		return -EINVAL;
+ 
+-	v4l_bound_align_image(&sel->r.width, 2, W_CIF, 1,
+-			      &sel->r.height, 2, H_CIF, 1, 0);
+-	v4l_bound_align_image(&sel->r.left, DEF_HSTRT << 1,
+-			      (DEF_HSTRT << 1) + W_CIF - (__s32)sel->r.width, 1,
+-			      &sel->r.top, DEF_VSTRT << 1,
+-			      (DEF_VSTRT << 1) + H_CIF - (__s32)sel->r.height,
+-			      1, 0);
++	ov6650_bind_align_crop_rectangle(&sel->r);
++
++	if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
++		struct v4l2_rect *crop = &sd_state->pads->try_crop;
++		struct v4l2_mbus_framefmt *mf = &sd_state->pads->try_fmt;
++		/* detect current pad config scaling factor */
++		bool half_scale = !is_unscaled_ok(mf->width, mf->height, crop);
++
++		/* store new crop rectangle */
++		*crop = sel->r;
+ 
++		/* adjust frame size */
++		mf->width = crop->width >> half_scale;
++		mf->height = crop->height >> half_scale;
++
++		return 0;
++	}
++
++	/* V4L2_SUBDEV_FORMAT_ACTIVE */
++
++	/* apply new crop rectangle */
+ 	ret = ov6650_reg_write(client, REG_HSTRT, sel->r.left >> 1);
+ 	if (!ret) {
+ 		priv->rect.width += priv->rect.left - sel->r.left;
+@@ -562,30 +600,13 @@ static int ov6650_get_fmt(struct v4l2_subdev *sd,
+ 	return 0;
+ }
+ 
+-static bool is_unscaled_ok(int width, int height, struct v4l2_rect *rect)
+-{
+-	return width > rect->width >> 1 || height > rect->height >> 1;
+-}
+-
+ #define to_clkrc(div)	((div) - 1)
+ 
+ /* set the format we will capture in */
+-static int ov6650_s_fmt(struct v4l2_subdev *sd, struct v4l2_mbus_framefmt *mf)
++static int ov6650_s_fmt(struct v4l2_subdev *sd, u32 code, bool half_scale)
+ {
+ 	struct i2c_client *client = v4l2_get_subdevdata(sd);
+ 	struct ov6650 *priv = to_ov6650(client);
+-	bool half_scale = !is_unscaled_ok(mf->width, mf->height, &priv->rect);
+-	struct v4l2_subdev_selection sel = {
+-		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
+-		.target = V4L2_SEL_TGT_CROP,
+-		.r.left = priv->rect.left + (priv->rect.width >> 1) -
+-			(mf->width >> (1 - half_scale)),
+-		.r.top = priv->rect.top + (priv->rect.height >> 1) -
+-			(mf->height >> (1 - half_scale)),
+-		.r.width = mf->width << half_scale,
+-		.r.height = mf->height << half_scale,
+-	};
+-	u32 code = mf->code;
+ 	u8 coma_set = 0, coma_mask = 0, coml_set, coml_mask;
+ 	int ret;
+ 
+@@ -653,9 +674,7 @@ static int ov6650_s_fmt(struct v4l2_subdev *sd, struct v4l2_mbus_framefmt *mf)
+ 		coma_mask |= COMA_QCIF;
+ 	}
+ 
+-	ret = ov6650_set_selection(sd, NULL, &sel);
+-	if (!ret)
+-		ret = ov6650_reg_rmw(client, REG_COMA, coma_set, coma_mask);
++	ret = ov6650_reg_rmw(client, REG_COMA, coma_set, coma_mask);
+ 	if (!ret) {
+ 		priv->half_scale = half_scale;
+ 
+@@ -674,14 +693,12 @@ static int ov6650_set_fmt(struct v4l2_subdev *sd,
+ 	struct v4l2_mbus_framefmt *mf = &format->format;
+ 	struct i2c_client *client = v4l2_get_subdevdata(sd);
+ 	struct ov6650 *priv = to_ov6650(client);
++	struct v4l2_rect *crop;
++	bool half_scale;
+ 
+ 	if (format->pad)
+ 		return -EINVAL;
+ 
+-	if (is_unscaled_ok(mf->width, mf->height, &priv->rect))
+-		v4l_bound_align_image(&mf->width, 2, W_CIF, 1,
+-				&mf->height, 2, H_CIF, 1, 0);
+-
+ 	switch (mf->code) {
+ 	case MEDIA_BUS_FMT_Y10_1X10:
+ 		mf->code = MEDIA_BUS_FMT_Y8_1X8;
+@@ -699,10 +716,17 @@ static int ov6650_set_fmt(struct v4l2_subdev *sd,
+ 		break;
+ 	}
+ 
++	if (format->which == V4L2_SUBDEV_FORMAT_TRY)
++		crop = &sd_state->pads->try_crop;
++	else
++		crop = &priv->rect;
++
++	half_scale = !is_unscaled_ok(mf->width, mf->height, crop);
++
+ 	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
+-		/* store media bus format code and frame size in pad config */
+-		sd_state->pads->try_fmt.width = mf->width;
+-		sd_state->pads->try_fmt.height = mf->height;
++		/* store new mbus frame format code and size in pad config */
++		sd_state->pads->try_fmt.width = crop->width >> half_scale;
++		sd_state->pads->try_fmt.height = crop->height >> half_scale;
+ 		sd_state->pads->try_fmt.code = mf->code;
+ 
+ 		/* return default mbus frame format updated with pad config */
+@@ -712,9 +736,11 @@ static int ov6650_set_fmt(struct v4l2_subdev *sd,
+ 		mf->code = sd_state->pads->try_fmt.code;
+ 
+ 	} else {
+-		/* apply new media bus format code and frame size */
+-		int ret = ov6650_s_fmt(sd, mf);
++		int ret = 0;
+ 
++		/* apply new media bus frame format and scaling if changed */
++		if (mf->code != priv->code || half_scale != priv->half_scale)
++			ret = ov6650_s_fmt(sd, mf->code, half_scale);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -890,9 +916,8 @@ static int ov6650_video_probe(struct v4l2_subdev *sd)
+ 	if (!ret)
+ 		ret = ov6650_prog_dflt(client, xclk->clkrc);
+ 	if (!ret) {
+-		struct v4l2_mbus_framefmt mf = ov6650_def_fmt;
+-
+-		ret = ov6650_s_fmt(sd, &mf);
++		/* driver default frame format, no scaling */
++		ret = ov6650_s_fmt(sd, ov6650_def_fmt.code, false);
+ 	}
+ 	if (!ret)
+ 		ret = v4l2_ctrl_handler_setup(&priv->hdl);
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 0e9df8b35ac66..661ebfa7bf3f5 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -3890,7 +3890,7 @@ static int bttv_register_video(struct bttv *btv)
+ 
+ 	/* video */
+ 	vdev_init(btv, &btv->video_dev, &bttv_video_template, "video");
+-	btv->video_dev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_TUNER |
++	btv->video_dev.device_caps = V4L2_CAP_VIDEO_CAPTURE |
+ 				     V4L2_CAP_READWRITE | V4L2_CAP_STREAMING;
+ 	if (btv->tuner_type != TUNER_ABSENT)
+ 		btv->video_dev.device_caps |= V4L2_CAP_TUNER;
+@@ -3911,7 +3911,7 @@ static int bttv_register_video(struct bttv *btv)
+ 	/* vbi */
+ 	vdev_init(btv, &btv->vbi_dev, &bttv_video_template, "vbi");
+ 	btv->vbi_dev.device_caps = V4L2_CAP_VBI_CAPTURE | V4L2_CAP_READWRITE |
+-				   V4L2_CAP_STREAMING | V4L2_CAP_TUNER;
++				   V4L2_CAP_STREAMING;
+ 	if (btv->tuner_type != TUNER_ABSENT)
+ 		btv->vbi_dev.device_caps |= V4L2_CAP_TUNER;
+ 
+diff --git a/drivers/media/pci/cx88/cx88-mpeg.c b/drivers/media/pci/cx88/cx88-mpeg.c
+index 680e1e3fe89b7..2c1d5137ac470 100644
+--- a/drivers/media/pci/cx88/cx88-mpeg.c
++++ b/drivers/media/pci/cx88/cx88-mpeg.c
+@@ -162,6 +162,9 @@ int cx8802_start_dma(struct cx8802_dev    *dev,
+ 	cx_write(MO_TS_GPCNTRL, GP_COUNT_CONTROL_RESET);
+ 	q->count = 0;
+ 
++	/* clear interrupt status register */
++	cx_write(MO_TS_INTSTAT,  0x1f1111);
++
+ 	/* enable irqs */
+ 	dprintk(1, "setting the interrupt mask\n");
+ 	cx_set(MO_PCI_INTMSK, core->pci_irqmask | PCI_INT_TSINT);
+diff --git a/drivers/media/pci/ivtv/ivtv-driver.h b/drivers/media/pci/ivtv/ivtv-driver.h
+index 4cf92dee65271..ce3a7ca51736e 100644
+--- a/drivers/media/pci/ivtv/ivtv-driver.h
++++ b/drivers/media/pci/ivtv/ivtv-driver.h
+@@ -330,7 +330,6 @@ struct ivtv_stream {
+ 	struct ivtv *itv;		/* for ease of use */
+ 	const char *name;		/* name of the stream */
+ 	int type;			/* stream type */
+-	u32 caps;			/* V4L2 capabilities */
+ 
+ 	struct v4l2_fh *fh;		/* pointer to the streaming filehandle */
+ 	spinlock_t qlock;		/* locks access to the queues */
+diff --git a/drivers/media/pci/ivtv/ivtv-ioctl.c b/drivers/media/pci/ivtv/ivtv-ioctl.c
+index 0cdf6b3210c2f..fee460e2ca863 100644
+--- a/drivers/media/pci/ivtv/ivtv-ioctl.c
++++ b/drivers/media/pci/ivtv/ivtv-ioctl.c
+@@ -438,7 +438,7 @@ static int ivtv_g_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2_f
+ 	struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ 	struct v4l2_window *winfmt = &fmt->fmt.win;
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -EINVAL;
+ 	if (!itv->osd_video_pbase)
+ 		return -EINVAL;
+@@ -549,7 +549,7 @@ static int ivtv_try_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2
+ 	u32 chromakey = fmt->fmt.win.chromakey;
+ 	u8 global_alpha = fmt->fmt.win.global_alpha;
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -EINVAL;
+ 	if (!itv->osd_video_pbase)
+ 		return -EINVAL;
+@@ -1383,7 +1383,7 @@ static int ivtv_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *fb)
+ 		0,
+ 	};
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -ENOTTY;
+ 	if (!itv->osd_video_pbase)
+ 		return -ENOTTY;
+@@ -1450,7 +1450,7 @@ static int ivtv_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffe
+ 	struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ 	struct yuv_playback_info *yi = &itv->yuv_info;
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -ENOTTY;
+ 	if (!itv->osd_video_pbase)
+ 		return -ENOTTY;
+@@ -1470,7 +1470,7 @@ static int ivtv_overlay(struct file *file, void *fh, unsigned int on)
+ 	struct ivtv *itv = id->itv;
+ 	struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ 
+-	if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++	if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ 		return -ENOTTY;
+ 	if (!itv->osd_video_pbase)
+ 		return -ENOTTY;
+diff --git a/drivers/media/pci/ivtv/ivtv-streams.c b/drivers/media/pci/ivtv/ivtv-streams.c
+index 6e455948cc77a..13d7d55e65949 100644
+--- a/drivers/media/pci/ivtv/ivtv-streams.c
++++ b/drivers/media/pci/ivtv/ivtv-streams.c
+@@ -176,7 +176,7 @@ static void ivtv_stream_init(struct ivtv *itv, int type)
+ 	s->itv = itv;
+ 	s->type = type;
+ 	s->name = ivtv_stream_info[type].name;
+-	s->caps = ivtv_stream_info[type].v4l2_caps;
++	s->vdev.device_caps = ivtv_stream_info[type].v4l2_caps;
+ 
+ 	if (ivtv_stream_info[type].pio)
+ 		s->dma = DMA_NONE;
+@@ -299,12 +299,9 @@ static int ivtv_reg_dev(struct ivtv *itv, int type)
+ 		if (s_mpg->vdev.v4l2_dev)
+ 			num = s_mpg->vdev.num + ivtv_stream_info[type].num_offset;
+ 	}
+-	s->vdev.device_caps = s->caps;
+-	if (itv->osd_video_pbase) {
+-		itv->streams[IVTV_DEC_STREAM_TYPE_YUV].vdev.device_caps |=
+-			V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+-		itv->streams[IVTV_DEC_STREAM_TYPE_MPG].vdev.device_caps |=
+-			V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
++	if (itv->osd_video_pbase && (type == IVTV_DEC_STREAM_TYPE_YUV ||
++				     type == IVTV_DEC_STREAM_TYPE_MPG)) {
++		s->vdev.device_caps |= V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ 		itv->v4l2_cap |= V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ 	}
+ 	video_set_drvdata(&s->vdev, s);
+diff --git a/drivers/media/pci/saa7134/saa7134-alsa.c b/drivers/media/pci/saa7134/saa7134-alsa.c
+index fb24d2ed3621b..d3cde05a6ebab 100644
+--- a/drivers/media/pci/saa7134/saa7134-alsa.c
++++ b/drivers/media/pci/saa7134/saa7134-alsa.c
+@@ -1214,7 +1214,7 @@ static int alsa_device_exit(struct saa7134_dev *dev)
+ 
+ static int saa7134_alsa_init(void)
+ {
+-	struct saa7134_dev *dev = NULL;
++	struct saa7134_dev *dev;
+ 
+ 	saa7134_dmasound_init = alsa_device_init;
+ 	saa7134_dmasound_exit = alsa_device_exit;
+@@ -1229,7 +1229,7 @@ static int saa7134_alsa_init(void)
+ 			alsa_device_init(dev);
+ 	}
+ 
+-	if (dev == NULL)
++	if (list_empty(&saa7134_devlist))
+ 		pr_info("saa7134 ALSA: no saa7134 cards found\n");
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index 7a24daf7165a4..bdeecde0d9978 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -153,7 +153,7 @@
+ #define  VE_SRC_TB_EDGE_DET_BOT		GENMASK(28, VE_SRC_TB_EDGE_DET_BOT_SHF)
+ 
+ #define VE_MODE_DETECT_STATUS		0x098
+-#define  VE_MODE_DETECT_H_PIXELS	GENMASK(11, 0)
++#define  VE_MODE_DETECT_H_PERIOD	GENMASK(11, 0)
+ #define  VE_MODE_DETECT_V_LINES_SHF	16
+ #define  VE_MODE_DETECT_V_LINES		GENMASK(27, VE_MODE_DETECT_V_LINES_SHF)
+ #define  VE_MODE_DETECT_STATUS_VSYNC	BIT(28)
+@@ -164,6 +164,8 @@
+ #define  VE_SYNC_STATUS_VSYNC_SHF	16
+ #define  VE_SYNC_STATUS_VSYNC		GENMASK(27, VE_SYNC_STATUS_VSYNC_SHF)
+ 
++#define VE_H_TOTAL_PIXELS		0x0A0
++
+ #define VE_INTERRUPT_CTRL		0x304
+ #define VE_INTERRUPT_STATUS		0x308
+ #define  VE_INTERRUPT_MODE_DETECT_WD	BIT(0)
+@@ -802,6 +804,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 	u32 src_lr_edge;
+ 	u32 src_tb_edge;
+ 	u32 sync;
++	u32 htotal;
+ 	struct v4l2_bt_timings *det = &video->detected_timings;
+ 
+ 	det->width = MIN_WIDTH;
+@@ -847,6 +850,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 		src_tb_edge = aspeed_video_read(video, VE_SRC_TB_EDGE_DET);
+ 		mds = aspeed_video_read(video, VE_MODE_DETECT_STATUS);
+ 		sync = aspeed_video_read(video, VE_SYNC_STATUS);
++		htotal = aspeed_video_read(video, VE_H_TOTAL_PIXELS);
+ 
+ 		video->frame_bottom = (src_tb_edge & VE_SRC_TB_EDGE_DET_BOT) >>
+ 			VE_SRC_TB_EDGE_DET_BOT_SHF;
+@@ -863,8 +867,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ 			VE_SRC_LR_EDGE_DET_RT_SHF;
+ 		video->frame_left = src_lr_edge & VE_SRC_LR_EDGE_DET_LEFT;
+ 		det->hfrontporch = video->frame_left;
+-		det->hbackporch = (mds & VE_MODE_DETECT_H_PIXELS) -
+-			video->frame_right;
++		det->hbackporch = htotal - video->frame_right;
+ 		det->hsync = sync & VE_SYNC_STATUS_HSYNC;
+ 		if (video->frame_left > video->frame_right)
+ 			continue;
+diff --git a/drivers/media/platform/atmel/atmel-isc-base.c b/drivers/media/platform/atmel/atmel-isc-base.c
+index 660cd0ab6749a..24807782c9e50 100644
+--- a/drivers/media/platform/atmel/atmel-isc-base.c
++++ b/drivers/media/platform/atmel/atmel-isc-base.c
+@@ -1369,14 +1369,12 @@ static int isc_enum_framesizes(struct file *file, void *fh,
+ 			       struct v4l2_frmsizeenum *fsize)
+ {
+ 	struct isc_device *isc = video_drvdata(file);
+-	struct v4l2_subdev_frame_size_enum fse = {
+-		.code = isc->config.sd_format->mbus_code,
+-		.index = fsize->index,
+-		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
+-	};
+ 	int ret = -EINVAL;
+ 	int i;
+ 
++	if (fsize->index)
++		return -EINVAL;
++
+ 	for (i = 0; i < isc->num_user_formats; i++)
+ 		if (isc->user_formats[i]->fourcc == fsize->pixel_format)
+ 			ret = 0;
+@@ -1388,14 +1386,14 @@ static int isc_enum_framesizes(struct file *file, void *fh,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = v4l2_subdev_call(isc->current_subdev->sd, pad, enum_frame_size,
+-			       NULL, &fse);
+-	if (ret)
+-		return ret;
++	fsize->type = V4L2_FRMSIZE_TYPE_CONTINUOUS;
+ 
+-	fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
+-	fsize->discrete.width = fse.max_width;
+-	fsize->discrete.height = fse.max_height;
++	fsize->stepwise.min_width = 16;
++	fsize->stepwise.max_width = isc->max_width;
++	fsize->stepwise.min_height = 16;
++	fsize->stepwise.max_height = isc->max_height;
++	fsize->stepwise.step_width = 1;
++	fsize->stepwise.step_height = 1;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/atmel/atmel-sama7g5-isc.c b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+index 5d1c76f680f37..2b1082295c130 100644
+--- a/drivers/media/platform/atmel/atmel-sama7g5-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+@@ -556,7 +556,6 @@ static int microchip_xisc_remove(struct platform_device *pdev)
+ 
+ 	v4l2_device_unregister(&isc->v4l2_dev);
+ 
+-	clk_disable_unprepare(isc->ispck);
+ 	clk_disable_unprepare(isc->hclock);
+ 
+ 	isc_clk_cleanup(isc);
+@@ -568,7 +567,6 @@ static int __maybe_unused xisc_runtime_suspend(struct device *dev)
+ {
+ 	struct isc_device *isc = dev_get_drvdata(dev);
+ 
+-	clk_disable_unprepare(isc->ispck);
+ 	clk_disable_unprepare(isc->hclock);
+ 
+ 	return 0;
+@@ -583,10 +581,6 @@ static int __maybe_unused xisc_runtime_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = clk_prepare_enable(isc->ispck);
+-	if (ret)
+-		clk_disable_unprepare(isc->hclock);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index 9a2640a9c75c6..4a553f42ff0a3 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -408,6 +408,7 @@ static struct vdoa_data *coda_get_vdoa_data(void)
+ 	if (!vdoa_data)
+ 		vdoa_data = ERR_PTR(-EPROBE_DEFER);
+ 
++	put_device(&vdoa_pdev->dev);
+ out:
+ 	of_node_put(vdoa_node);
+ 
+diff --git a/drivers/media/platform/davinci/vpif.c b/drivers/media/platform/davinci/vpif.c
+index 5a89d885d0e3b..4a260f4ed236b 100644
+--- a/drivers/media/platform/davinci/vpif.c
++++ b/drivers/media/platform/davinci/vpif.c
+@@ -41,6 +41,11 @@ MODULE_ALIAS("platform:" VPIF_DRIVER_NAME);
+ #define VPIF_CH2_MAX_MODES	15
+ #define VPIF_CH3_MAX_MODES	2
+ 
++struct vpif_data {
++	struct platform_device *capture;
++	struct platform_device *display;
++};
++
+ DEFINE_SPINLOCK(vpif_lock);
+ EXPORT_SYMBOL_GPL(vpif_lock);
+ 
+@@ -423,16 +428,31 @@ int vpif_channel_getfid(u8 channel_id)
+ }
+ EXPORT_SYMBOL(vpif_channel_getfid);
+ 
++static void vpif_pdev_release(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++
++	kfree(pdev);
++}
++
+ static int vpif_probe(struct platform_device *pdev)
+ {
+ 	static struct resource *res_irq;
+ 	struct platform_device *pdev_capture, *pdev_display;
+ 	struct device_node *endpoint = NULL;
++	struct vpif_data *data;
++	int ret;
+ 
+ 	vpif_base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(vpif_base))
+ 		return PTR_ERR(vpif_base);
+ 
++	data = kzalloc(sizeof(*data), GFP_KERNEL);
++	if (!data)
++		return -ENOMEM;
++
++	platform_set_drvdata(pdev, data);
++
+ 	pm_runtime_enable(&pdev->dev);
+ 	pm_runtime_get(&pdev->dev);
+ 
+@@ -456,46 +476,79 @@ static int vpif_probe(struct platform_device *pdev)
+ 	res_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ 	if (!res_irq) {
+ 		dev_warn(&pdev->dev, "Missing IRQ resource.\n");
+-		pm_runtime_put(&pdev->dev);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_put_rpm;
+ 	}
+ 
+-	pdev_capture = devm_kzalloc(&pdev->dev, sizeof(*pdev_capture),
+-				    GFP_KERNEL);
+-	if (pdev_capture) {
+-		pdev_capture->name = "vpif_capture";
+-		pdev_capture->id = -1;
+-		pdev_capture->resource = res_irq;
+-		pdev_capture->num_resources = 1;
+-		pdev_capture->dev.dma_mask = pdev->dev.dma_mask;
+-		pdev_capture->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
+-		pdev_capture->dev.parent = &pdev->dev;
+-		platform_device_register(pdev_capture);
+-	} else {
+-		dev_warn(&pdev->dev, "Unable to allocate memory for pdev_capture.\n");
++	pdev_capture = kzalloc(sizeof(*pdev_capture), GFP_KERNEL);
++	if (!pdev_capture) {
++		ret = -ENOMEM;
++		goto err_put_rpm;
+ 	}
+ 
+-	pdev_display = devm_kzalloc(&pdev->dev, sizeof(*pdev_display),
+-				    GFP_KERNEL);
+-	if (pdev_display) {
+-		pdev_display->name = "vpif_display";
+-		pdev_display->id = -1;
+-		pdev_display->resource = res_irq;
+-		pdev_display->num_resources = 1;
+-		pdev_display->dev.dma_mask = pdev->dev.dma_mask;
+-		pdev_display->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
+-		pdev_display->dev.parent = &pdev->dev;
+-		platform_device_register(pdev_display);
+-	} else {
+-		dev_warn(&pdev->dev, "Unable to allocate memory for pdev_display.\n");
++	pdev_capture->name = "vpif_capture";
++	pdev_capture->id = -1;
++	pdev_capture->resource = res_irq;
++	pdev_capture->num_resources = 1;
++	pdev_capture->dev.dma_mask = pdev->dev.dma_mask;
++	pdev_capture->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
++	pdev_capture->dev.parent = &pdev->dev;
++	pdev_capture->dev.release = vpif_pdev_release;
++
++	ret = platform_device_register(pdev_capture);
++	if (ret)
++		goto err_put_pdev_capture;
++
++	pdev_display = kzalloc(sizeof(*pdev_display), GFP_KERNEL);
++	if (!pdev_display) {
++		ret = -ENOMEM;
++		goto err_put_pdev_capture;
+ 	}
+ 
++	pdev_display->name = "vpif_display";
++	pdev_display->id = -1;
++	pdev_display->resource = res_irq;
++	pdev_display->num_resources = 1;
++	pdev_display->dev.dma_mask = pdev->dev.dma_mask;
++	pdev_display->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
++	pdev_display->dev.parent = &pdev->dev;
++	pdev_display->dev.release = vpif_pdev_release;
++
++	ret = platform_device_register(pdev_display);
++	if (ret)
++		goto err_put_pdev_display;
++
++	data->capture = pdev_capture;
++	data->display = pdev_display;
++
+ 	return 0;
++
++err_put_pdev_display:
++	platform_device_put(pdev_display);
++err_put_pdev_capture:
++	platform_device_put(pdev_capture);
++err_put_rpm:
++	pm_runtime_put(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
++	kfree(data);
++
++	return ret;
+ }
+ 
+ static int vpif_remove(struct platform_device *pdev)
+ {
++	struct vpif_data *data = platform_get_drvdata(pdev);
++
++	if (data->capture)
++		platform_device_unregister(data->capture);
++	if (data->display)
++		platform_device_unregister(data->display);
++
++	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
++
++	kfree(data);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
+index 4ca96cf9def76..83a2b4d13bad3 100644
+--- a/drivers/media/platform/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
+@@ -947,8 +947,13 @@ static void mxc_jpeg_device_run(void *priv)
+ 	v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true);
+ 
+ 	jpeg_src_buf = vb2_to_mxc_buf(&src_buf->vb2_buf);
++	if (q_data_cap->fmt->colplanes != dst_buf->vb2_buf.num_planes) {
++		dev_err(dev, "Capture format %s has %d planes, but capture buffer has %d planes\n",
++			q_data_cap->fmt->name, q_data_cap->fmt->colplanes,
++			dst_buf->vb2_buf.num_planes);
++		jpeg_src_buf->jpeg_parse_error = true;
++	}
+ 	if (jpeg_src_buf->jpeg_parse_error) {
+-		jpeg->slot_data[ctx->slot].used = false;
+ 		v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ 		v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ 		v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_ERROR);
+diff --git a/drivers/media/platform/meson/ge2d/ge2d.c b/drivers/media/platform/meson/ge2d/ge2d.c
+index ccda18e5a3774..5e7b319f300df 100644
+--- a/drivers/media/platform/meson/ge2d/ge2d.c
++++ b/drivers/media/platform/meson/ge2d/ge2d.c
+@@ -215,35 +215,35 @@ static void ge2d_hw_start(struct meson_ge2d *ge2d)
+ 
+ 	regmap_write(ge2d->map, GE2D_SRC1_CLIPY_START_END,
+ 		     FIELD_PREP(GE2D_START, ctx->in.crop.top) |
+-		     FIELD_PREP(GE2D_END, ctx->in.crop.top + ctx->in.crop.height));
++		     FIELD_PREP(GE2D_END, ctx->in.crop.top + ctx->in.crop.height - 1));
+ 	regmap_write(ge2d->map, GE2D_SRC1_CLIPX_START_END,
+ 		     FIELD_PREP(GE2D_START, ctx->in.crop.left) |
+-		     FIELD_PREP(GE2D_END, ctx->in.crop.left + ctx->in.crop.width));
++		     FIELD_PREP(GE2D_END, ctx->in.crop.left + ctx->in.crop.width - 1));
+ 	regmap_write(ge2d->map, GE2D_SRC2_CLIPY_START_END,
+ 		     FIELD_PREP(GE2D_START, ctx->out.crop.top) |
+-		     FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height));
++		     FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height - 1));
+ 	regmap_write(ge2d->map, GE2D_SRC2_CLIPX_START_END,
+ 		     FIELD_PREP(GE2D_START, ctx->out.crop.left) |
+-		     FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width));
++		     FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width - 1));
+ 	regmap_write(ge2d->map, GE2D_DST_CLIPY_START_END,
+ 		     FIELD_PREP(GE2D_START, ctx->out.crop.top) |
+-		     FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height));
++		     FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height - 1));
+ 	regmap_write(ge2d->map, GE2D_DST_CLIPX_START_END,
+ 		     FIELD_PREP(GE2D_START, ctx->out.crop.left) |
+-		     FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width));
++		     FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width - 1));
+ 
+ 	regmap_write(ge2d->map, GE2D_SRC1_Y_START_END,
+-		     FIELD_PREP(GE2D_END, ctx->in.pix_fmt.height));
++		     FIELD_PREP(GE2D_END, ctx->in.pix_fmt.height - 1));
+ 	regmap_write(ge2d->map, GE2D_SRC1_X_START_END,
+-		     FIELD_PREP(GE2D_END, ctx->in.pix_fmt.width));
++		     FIELD_PREP(GE2D_END, ctx->in.pix_fmt.width - 1));
+ 	regmap_write(ge2d->map, GE2D_SRC2_Y_START_END,
+-		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height));
++		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height - 1));
+ 	regmap_write(ge2d->map, GE2D_SRC2_X_START_END,
+-		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width));
++		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width - 1));
+ 	regmap_write(ge2d->map, GE2D_DST_Y_START_END,
+-		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height));
++		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height - 1));
+ 	regmap_write(ge2d->map, GE2D_DST_X_START_END,
+-		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width));
++		     FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width - 1));
+ 
+ 	/* Color, no blend, use source color */
+ 	reg = GE2D_ALU_DO_COLOR_OPERATION_LOGIC(LOGIC_OPERATION_COPY,
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+index cd27f637dbe7c..cfc7ebed8fb7a 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+@@ -102,6 +102,8 @@ struct mtk_vcodec_fw *mtk_vcodec_fw_vpu_init(struct mtk_vcodec_dev *dev,
+ 	vpu_wdt_reg_handler(fw_pdev, mtk_vcodec_vpu_reset_handler, dev, rst_id);
+ 
+ 	fw = devm_kzalloc(&dev->plat_dev->dev, sizeof(*fw), GFP_KERNEL);
++	if (!fw)
++		return ERR_PTR(-ENOMEM);
+ 	fw->type = VPU;
+ 	fw->ops = &mtk_vcodec_vpu_msg;
+ 	fw->pdev = fw_pdev;
+diff --git a/drivers/media/platform/omap3isp/ispstat.c b/drivers/media/platform/omap3isp/ispstat.c
+index 5b9b57f4d9bf8..68cf68dbcace2 100644
+--- a/drivers/media/platform/omap3isp/ispstat.c
++++ b/drivers/media/platform/omap3isp/ispstat.c
+@@ -512,7 +512,7 @@ int omap3isp_stat_request_statistics(struct ispstat *stat,
+ int omap3isp_stat_request_statistics_time32(struct ispstat *stat,
+ 					struct omap3isp_stat_data_time32 *data)
+ {
+-	struct omap3isp_stat_data data64;
++	struct omap3isp_stat_data data64 = { };
+ 	int ret;
+ 
+ 	ret = omap3isp_stat_request_statistics(stat, &data64);
+@@ -521,7 +521,8 @@ int omap3isp_stat_request_statistics_time32(struct ispstat *stat,
+ 
+ 	data->ts.tv_sec = data64.ts.tv_sec;
+ 	data->ts.tv_usec = data64.ts.tv_usec;
+-	memcpy(&data->buf, &data64.buf, sizeof(*data) - sizeof(data->ts));
++	data->buf = (uintptr_t)data64.buf;
++	memcpy(&data->frame, &data64.frame, sizeof(data->frame));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss/camss-csid-170.c b/drivers/media/platform/qcom/camss/camss-csid-170.c
+index ac22ff29d2a9f..82f59933ad7b3 100644
+--- a/drivers/media/platform/qcom/camss/camss-csid-170.c
++++ b/drivers/media/platform/qcom/camss/camss-csid-170.c
+@@ -105,7 +105,8 @@
+ #define CSID_RDI_CTRL(rdi)			((IS_LITE ? 0x208 : 0x308)\
+ 						+ 0x100 * (rdi))
+ #define		RDI_CTRL_HALT_CMD		0
+-#define			ALT_CMD_RESUME_AT_FRAME_BOUNDARY	1
++#define			HALT_CMD_HALT_AT_FRAME_BOUNDARY		0
++#define			HALT_CMD_RESUME_AT_FRAME_BOUNDARY	1
+ #define		RDI_CTRL_HALT_MODE		2
+ 
+ #define CSID_RDI_FRM_DROP_PATTERN(rdi)			((IS_LITE ? 0x20C : 0x30C)\
+@@ -366,7 +367,7 @@ static void csid_configure_stream(struct csid_device *csid, u8 enable)
+ 			val |= input_format->width & 0x1fff << TPG_DT_n_CFG_0_FRAME_WIDTH;
+ 			writel_relaxed(val, csid->base + CSID_TPG_DT_n_CFG_0(0));
+ 
+-			val = DATA_TYPE_RAW_10BIT << TPG_DT_n_CFG_1_DATA_TYPE;
++			val = format->data_type << TPG_DT_n_CFG_1_DATA_TYPE;
+ 			writel_relaxed(val, csid->base + CSID_TPG_DT_n_CFG_1(0));
+ 
+ 			val = tg->mode << TPG_DT_n_CFG_2_PAYLOAD_MODE;
+@@ -382,8 +383,9 @@ static void csid_configure_stream(struct csid_device *csid, u8 enable)
+ 		val = 1 << RDI_CFG0_BYTE_CNTR_EN;
+ 		val |= 1 << RDI_CFG0_FORMAT_MEASURE_EN;
+ 		val |= 1 << RDI_CFG0_TIMESTAMP_EN;
++		/* note: for non-RDI path, this should be format->decode_format */
+ 		val |= DECODE_FORMAT_PAYLOAD_ONLY << RDI_CFG0_DECODE_FORMAT;
+-		val |= DATA_TYPE_RAW_10BIT << RDI_CFG0_DATA_TYPE;
++		val |= format->data_type << RDI_CFG0_DATA_TYPE;
+ 		val |= vc << RDI_CFG0_VIRTUAL_CHANNEL;
+ 		val |= dt_id << RDI_CFG0_DT_ID;
+ 		writel_relaxed(val, csid->base + CSID_RDI_CFG0(0));
+@@ -443,13 +445,10 @@ static void csid_configure_stream(struct csid_device *csid, u8 enable)
+ 	val |= 1 << CSI2_RX_CFG1_MISR_EN;
+ 	writel_relaxed(val, csid->base + CSID_CSI2_RX_CFG1); // csi2_vc_mode_shift_val ?
+ 
+-	/* error irqs start at BIT(11) */
+-	writel_relaxed(~0u, csid->base + CSID_CSI2_RX_IRQ_MASK);
+-
+-	/* RDI irq */
+-	writel_relaxed(~0u, csid->base + CSID_TOP_IRQ_MASK);
+-
+-	val = 1 << RDI_CTRL_HALT_CMD;
++	if (enable)
++		val = HALT_CMD_RESUME_AT_FRAME_BOUNDARY << RDI_CTRL_HALT_CMD;
++	else
++		val = HALT_CMD_HALT_AT_FRAME_BOUNDARY << RDI_CTRL_HALT_CMD;
+ 	writel_relaxed(val, csid->base + CSID_RDI_CTRL(0));
+ }
+ 
+diff --git a/drivers/media/platform/qcom/camss/camss-vfe-170.c b/drivers/media/platform/qcom/camss/camss-vfe-170.c
+index 5c083d70d4950..af71dc659bb98 100644
+--- a/drivers/media/platform/qcom/camss/camss-vfe-170.c
++++ b/drivers/media/platform/qcom/camss/camss-vfe-170.c
+@@ -402,17 +402,7 @@ static irqreturn_t vfe_isr(int irq, void *dev)
+  */
+ static int vfe_halt(struct vfe_device *vfe)
+ {
+-	unsigned long time;
+-
+-	reinit_completion(&vfe->halt_complete);
+-
+-	time = wait_for_completion_timeout(&vfe->halt_complete,
+-					   msecs_to_jiffies(VFE_HALT_TIMEOUT_MS));
+-	if (!time) {
+-		dev_err(vfe->camss->dev, "VFE halt timeout\n");
+-		return -EIO;
+-	}
+-
++	/* rely on vfe_disable_output() to stop the VFE */
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index 84c3a511ec31e..0bca95d016507 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -189,7 +189,6 @@ int venus_helper_alloc_dpb_bufs(struct venus_inst *inst)
+ 		buf->va = dma_alloc_attrs(dev, buf->size, &buf->da, GFP_KERNEL,
+ 					  buf->attrs);
+ 		if (!buf->va) {
+-			kfree(buf);
+ 			ret = -ENOMEM;
+ 			goto fail;
+ 		}
+@@ -209,6 +208,7 @@ int venus_helper_alloc_dpb_bufs(struct venus_inst *inst)
+ 	return 0;
+ 
+ fail:
++	kfree(buf);
+ 	venus_helper_free_dpb_bufs(inst);
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/qcom/venus/hfi_cmds.c b/drivers/media/platform/qcom/venus/hfi_cmds.c
+index 5aea07307e02e..4ecd444050bb6 100644
+--- a/drivers/media/platform/qcom/venus/hfi_cmds.c
++++ b/drivers/media/platform/qcom/venus/hfi_cmds.c
+@@ -1054,6 +1054,8 @@ static int pkt_session_set_property_1x(struct hfi_session_set_property_pkt *pkt,
+ 		pkt->shdr.hdr.size += sizeof(u32) + sizeof(*info);
+ 		break;
+ 	}
++	case HFI_PROPERTY_PARAM_VENC_HDR10_PQ_SEI:
++		return -ENOTSUPP;
+ 
+ 	/* FOLLOWING PROPERTIES ARE NOT IMPLEMENTED IN CORE YET */
+ 	case HFI_PROPERTY_CONFIG_BUFFER_REQUIREMENTS:
+diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c
+index 84bafc3118cc6..adea4c3b8c204 100644
+--- a/drivers/media/platform/qcom/venus/venc.c
++++ b/drivers/media/platform/qcom/venus/venc.c
+@@ -662,8 +662,8 @@ static int venc_set_properties(struct venus_inst *inst)
+ 
+ 		ptype = HFI_PROPERTY_PARAM_VENC_H264_TRANSFORM_8X8;
+ 		h264_transform.enable_type = 0;
+-		if (ctr->profile.h264 == HFI_H264_PROFILE_HIGH ||
+-		    ctr->profile.h264 == HFI_H264_PROFILE_CONSTRAINED_HIGH)
++		if (ctr->profile.h264 == V4L2_MPEG_VIDEO_H264_PROFILE_HIGH ||
++		    ctr->profile.h264 == V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_HIGH)
+ 			h264_transform.enable_type = ctr->h264_8x8_transform;
+ 
+ 		ret = hfi_session_set_property(inst, ptype, &h264_transform);
+diff --git a/drivers/media/platform/qcom/venus/venc_ctrls.c b/drivers/media/platform/qcom/venus/venc_ctrls.c
+index 1ada42df314dc..ea5805e71c143 100644
+--- a/drivers/media/platform/qcom/venus/venc_ctrls.c
++++ b/drivers/media/platform/qcom/venus/venc_ctrls.c
+@@ -320,8 +320,8 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		ctr->intra_refresh_period = ctrl->val;
+ 		break;
+ 	case V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM:
+-		if (ctr->profile.h264 != HFI_H264_PROFILE_HIGH &&
+-		    ctr->profile.h264 != HFI_H264_PROFILE_CONSTRAINED_HIGH)
++		if (ctr->profile.h264 != V4L2_MPEG_VIDEO_H264_PROFILE_HIGH &&
++		    ctr->profile.h264 != V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_HIGH)
+ 			return -EINVAL;
+ 
+ 		/*
+@@ -457,7 +457,7 @@ int venc_ctrl_init(struct venus_inst *inst)
+ 			  V4L2_CID_MPEG_VIDEO_H264_I_FRAME_MIN_QP, 1, 51, 1, 1);
+ 
+ 	v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+-			  V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM, 0, 1, 1, 0);
++			  V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM, 0, 1, 1, 1);
+ 
+ 	v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ 			  V4L2_CID_MPEG_VIDEO_H264_P_FRAME_MIN_QP, 1, 51, 1, 1);
+diff --git a/drivers/media/platform/ti-vpe/cal-video.c b/drivers/media/platform/ti-vpe/cal-video.c
+index 7799da1cc261b..3e936a2ca36c6 100644
+--- a/drivers/media/platform/ti-vpe/cal-video.c
++++ b/drivers/media/platform/ti-vpe/cal-video.c
+@@ -823,6 +823,9 @@ static int cal_ctx_v4l2_init_formats(struct cal_ctx *ctx)
+ 	/* Enumerate sub device formats and enable all matching local formats */
+ 	ctx->active_fmt = devm_kcalloc(ctx->cal->dev, cal_num_formats,
+ 				       sizeof(*ctx->active_fmt), GFP_KERNEL);
++	if (!ctx->active_fmt)
++		return -ENOMEM;
++
+ 	ctx->num_active_fmt = 0;
+ 
+ 	for (j = 0, i = 0; ; ++j) {
+diff --git a/drivers/media/rc/gpio-ir-tx.c b/drivers/media/rc/gpio-ir-tx.c
+index c6cd2e6d8e654..a50701cfbbd7b 100644
+--- a/drivers/media/rc/gpio-ir-tx.c
++++ b/drivers/media/rc/gpio-ir-tx.c
+@@ -48,11 +48,29 @@ static int gpio_ir_tx_set_carrier(struct rc_dev *dev, u32 carrier)
+ 	return 0;
+ }
+ 
++static void delay_until(ktime_t until)
++{
++	/*
++	 * delta should never exceed 0.5 seconds (IR_MAX_DURATION) and on
++	 * m68k ndelay(s64) does not compile; so use s32 rather than s64.
++	 */
++	s32 delta;
++
++	while (true) {
++		delta = ktime_us_delta(until, ktime_get());
++		if (delta <= 0)
++			return;
++
++		/* udelay more than 1ms may not work */
++		delta = min(delta, 1000);
++		udelay(delta);
++	}
++}
++
+ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ 				   uint count)
+ {
+ 	ktime_t edge;
+-	s32 delta;
+ 	int i;
+ 
+ 	local_irq_disable();
+@@ -63,9 +81,7 @@ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ 		gpiod_set_value(gpio_ir->gpio, !(i % 2));
+ 
+ 		edge = ktime_add_us(edge, txbuf[i]);
+-		delta = ktime_us_delta(edge, ktime_get());
+-		if (delta > 0)
+-			udelay(delta);
++		delay_until(edge);
+ 	}
+ 
+ 	gpiod_set_value(gpio_ir->gpio, 0);
+@@ -97,9 +113,7 @@ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ 		if (i % 2) {
+ 			// space
+ 			edge = ktime_add_us(edge, txbuf[i]);
+-			delta = ktime_us_delta(edge, ktime_get());
+-			if (delta > 0)
+-				udelay(delta);
++			delay_until(edge);
+ 		} else {
+ 			// pulse
+ 			ktime_t last = ktime_add_us(edge, txbuf[i]);
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 7e98e7e3aacec..1968067092594 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -458,7 +458,7 @@ static int irtoy_probe(struct usb_interface *intf,
+ 	err = usb_submit_urb(irtoy->urb_in, GFP_KERNEL);
+ 	if (err != 0) {
+ 		dev_err(irtoy->dev, "fail to submit in urb: %d\n", err);
+-		return err;
++		goto free_rcdev;
+ 	}
+ 
+ 	err = irtoy_setup(irtoy);
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_s302m.c b/drivers/media/test-drivers/vidtv/vidtv_s302m.c
+index d79b65854627c..4676083cee3b8 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_s302m.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_s302m.c
+@@ -455,6 +455,9 @@ struct vidtv_encoder
+ 		e->name = kstrdup(args.name, GFP_KERNEL);
+ 
+ 	e->encoder_buf = vzalloc(VIDTV_S302M_BUF_SZ);
++	if (!e->encoder_buf)
++		goto out_kfree_e;
++
+ 	e->encoder_buf_sz = VIDTV_S302M_BUF_SZ;
+ 	e->encoder_buf_offset = 0;
+ 
+@@ -467,10 +470,8 @@ struct vidtv_encoder
+ 	e->is_video_encoder = false;
+ 
+ 	ctx = kzalloc(priv_sz, GFP_KERNEL);
+-	if (!ctx) {
+-		kfree(e);
+-		return NULL;
+-	}
++	if (!ctx)
++		goto out_kfree_buf;
+ 
+ 	e->ctx = ctx;
+ 	ctx->last_duration = 0;
+@@ -498,6 +499,14 @@ struct vidtv_encoder
+ 	e->next = NULL;
+ 
+ 	return e;
++
++out_kfree_buf:
++	kfree(e->encoder_buf);
++
++out_kfree_e:
++	kfree(e->name);
++	kfree(e);
++	return NULL;
+ }
+ 
+ void vidtv_s302m_encoder_destroy(struct vidtv_encoder *e)
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index b451ce3cb169a..ae25d2cbfdfee 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3936,6 +3936,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 		goto err_free;
+ 	}
+ 
++	kref_init(&dev->ref);
++
+ 	dev->devno = nr;
+ 	dev->model = id->driver_info;
+ 	dev->alt   = -1;
+@@ -4036,6 +4038,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	if (dev->board.has_dual_ts && em28xx_duplicate_dev(dev) == 0) {
++		kref_init(&dev->dev_next->ref);
++
+ 		dev->dev_next->ts = SECONDARY_TS;
+ 		dev->dev_next->alt   = -1;
+ 		dev->dev_next->is_audio_only = has_vendor_audio &&
+@@ -4090,12 +4094,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 			em28xx_write_reg(dev, 0x0b, 0x82);
+ 			mdelay(100);
+ 		}
+-
+-		kref_init(&dev->dev_next->ref);
+ 	}
+ 
+-	kref_init(&dev->ref);
+-
+ 	request_modules(dev);
+ 
+ 	/*
+@@ -4150,11 +4150,8 @@ static void em28xx_usb_disconnect(struct usb_interface *intf)
+ 
+ 	em28xx_close_extension(dev);
+ 
+-	if (dev->dev_next) {
+-		em28xx_close_extension(dev->dev_next);
++	if (dev->dev_next)
+ 		em28xx_release_resources(dev->dev_next);
+-	}
+-
+ 	em28xx_release_resources(dev);
+ 
+ 	if (dev->dev_next) {
+diff --git a/drivers/media/usb/go7007/s2250-board.c b/drivers/media/usb/go7007/s2250-board.c
+index c742cc88fac5c..1fa6f10ee157b 100644
+--- a/drivers/media/usb/go7007/s2250-board.c
++++ b/drivers/media/usb/go7007/s2250-board.c
+@@ -504,6 +504,7 @@ static int s2250_probe(struct i2c_client *client,
+ 	u8 *data;
+ 	struct go7007 *go = i2c_get_adapdata(adapter);
+ 	struct go7007_usb *usb = go->hpi_context;
++	int err = -EIO;
+ 
+ 	audio = i2c_new_dummy_device(adapter, TLV320_ADDRESS >> 1);
+ 	if (IS_ERR(audio))
+@@ -532,11 +533,8 @@ static int s2250_probe(struct i2c_client *client,
+ 		V4L2_CID_HUE, -512, 511, 1, 0);
+ 	sd->ctrl_handler = &state->hdl;
+ 	if (state->hdl.error) {
+-		int err = state->hdl.error;
+-
+-		v4l2_ctrl_handler_free(&state->hdl);
+-		kfree(state);
+-		return err;
++		err = state->hdl.error;
++		goto fail;
+ 	}
+ 
+ 	state->std = V4L2_STD_NTSC;
+@@ -600,7 +598,7 @@ fail:
+ 	i2c_unregister_device(audio);
+ 	v4l2_ctrl_handler_free(&state->hdl);
+ 	kfree(state);
+-	return -EIO;
++	return err;
+ }
+ 
+ static int s2250_remove(struct i2c_client *client)
+diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c
+index 563128d117317..60e57e0f19272 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-video.c
++++ b/drivers/media/usb/hdpvr/hdpvr-video.c
+@@ -308,7 +308,6 @@ static int hdpvr_start_streaming(struct hdpvr_device *dev)
+ 
+ 	dev->status = STATUS_STREAMING;
+ 
+-	INIT_WORK(&dev->worker, hdpvr_transmit_buffers);
+ 	schedule_work(&dev->worker);
+ 
+ 	v4l2_dbg(MSG_BUFFER, hdpvr_debug, &dev->v4l2_dev,
+@@ -1165,6 +1164,9 @@ int hdpvr_register_videodev(struct hdpvr_device *dev, struct device *parent,
+ 	bool ac3 = dev->flags & HDPVR_FLAG_AC3_CAP;
+ 	int res;
+ 
++	// initialize dev->worker
++	INIT_WORK(&dev->worker, hdpvr_transmit_buffers);
++
+ 	dev->cur_std = V4L2_STD_525_60;
+ 	dev->width = 720;
+ 	dev->height = 480;
+diff --git a/drivers/media/usb/stk1160/stk1160-core.c b/drivers/media/usb/stk1160/stk1160-core.c
+index 4e1698f788187..ce717502ea4c3 100644
+--- a/drivers/media/usb/stk1160/stk1160-core.c
++++ b/drivers/media/usb/stk1160/stk1160-core.c
+@@ -403,7 +403,7 @@ static void stk1160_disconnect(struct usb_interface *interface)
+ 	/* Here is the only place where isoc get released */
+ 	stk1160_uninit_isoc(dev);
+ 
+-	stk1160_clear_queue(dev);
++	stk1160_clear_queue(dev, VB2_BUF_STATE_ERROR);
+ 
+ 	video_unregister_device(&dev->vdev);
+ 	v4l2_device_disconnect(&dev->v4l2_dev);
+diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
+index 6a4eb616d5160..1aa953469402f 100644
+--- a/drivers/media/usb/stk1160/stk1160-v4l.c
++++ b/drivers/media/usb/stk1160/stk1160-v4l.c
+@@ -258,7 +258,7 @@ out_uninit:
+ 	stk1160_uninit_isoc(dev);
+ out_stop_hw:
+ 	usb_set_interface(dev->udev, 0, 0);
+-	stk1160_clear_queue(dev);
++	stk1160_clear_queue(dev, VB2_BUF_STATE_QUEUED);
+ 
+ 	mutex_unlock(&dev->v4l_lock);
+ 
+@@ -306,7 +306,7 @@ static int stk1160_stop_streaming(struct stk1160 *dev)
+ 
+ 	stk1160_stop_hw(dev);
+ 
+-	stk1160_clear_queue(dev);
++	stk1160_clear_queue(dev, VB2_BUF_STATE_ERROR);
+ 
+ 	stk1160_dbg("streaming stopped\n");
+ 
+@@ -745,7 +745,7 @@ static const struct video_device v4l_template = {
+ /********************************************************************/
+ 
+ /* Must be called with both v4l_lock and vb_queue_lock hold */
+-void stk1160_clear_queue(struct stk1160 *dev)
++void stk1160_clear_queue(struct stk1160 *dev, enum vb2_buffer_state vb2_state)
+ {
+ 	struct stk1160_buffer *buf;
+ 	unsigned long flags;
+@@ -756,7 +756,7 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ 		buf = list_first_entry(&dev->avail_bufs,
+ 			struct stk1160_buffer, list);
+ 		list_del(&buf->list);
+-		vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++		vb2_buffer_done(&buf->vb.vb2_buf, vb2_state);
+ 		stk1160_dbg("buffer [%p/%d] aborted\n",
+ 			    buf, buf->vb.vb2_buf.index);
+ 	}
+@@ -766,7 +766,7 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ 		buf = dev->isoc_ctl.buf;
+ 		dev->isoc_ctl.buf = NULL;
+ 
+-		vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++		vb2_buffer_done(&buf->vb.vb2_buf, vb2_state);
+ 		stk1160_dbg("buffer [%p/%d] aborted\n",
+ 			    buf, buf->vb.vb2_buf.index);
+ 	}
+diff --git a/drivers/media/usb/stk1160/stk1160.h b/drivers/media/usb/stk1160/stk1160.h
+index a31ea1c80f255..a70963ce87533 100644
+--- a/drivers/media/usb/stk1160/stk1160.h
++++ b/drivers/media/usb/stk1160/stk1160.h
+@@ -166,7 +166,7 @@ struct regval {
+ int stk1160_vb2_setup(struct stk1160 *dev);
+ int stk1160_video_register(struct stk1160 *dev);
+ void stk1160_video_unregister(struct stk1160 *dev);
+-void stk1160_clear_queue(struct stk1160 *dev);
++void stk1160_clear_queue(struct stk1160 *dev, enum vb2_buffer_state vb2_state);
+ 
+ /* Provided by stk1160-video.c */
+ int stk1160_alloc_isoc(struct stk1160 *dev);
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+index 70adfc1b9c817..fb9b99e9e12b3 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+@@ -113,6 +113,7 @@ static void std_init_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ 	struct v4l2_ctrl_mpeg2_quantisation *p_mpeg2_quant;
+ 	struct v4l2_ctrl_vp8_frame *p_vp8_frame;
+ 	struct v4l2_ctrl_fwht_params *p_fwht_params;
++	struct v4l2_ctrl_h264_scaling_matrix *p_h264_scaling_matrix;
+ 	void *p = ptr.p + idx * ctrl->elem_size;
+ 
+ 	if (ctrl->p_def.p_const)
+@@ -160,6 +161,15 @@ static void std_init_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ 		p_fwht_params->flags = V4L2_FWHT_FL_PIXENC_YUV |
+ 			(2 << V4L2_FWHT_FL_COMPONENTS_NUM_OFFSET);
+ 		break;
++	case V4L2_CTRL_TYPE_H264_SCALING_MATRIX:
++		p_h264_scaling_matrix = p;
++		/*
++		 * The default (flat) H.264 scaling matrix when none are
++		 * specified in the bitstream, this is according to formulas
++		 *  (7-8) and (7-9) of the specification.
++		 */
++		memset(p_h264_scaling_matrix, 16, sizeof(*p_h264_scaling_matrix));
++		break;
+ 	}
+ }
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 69b74d0e8a90a..059d6debb25d9 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -279,8 +279,8 @@ static void v4l_print_format(const void *arg, bool write_only)
+ 	const struct v4l2_vbi_format *vbi;
+ 	const struct v4l2_sliced_vbi_format *sliced;
+ 	const struct v4l2_window *win;
+-	const struct v4l2_sdr_format *sdr;
+ 	const struct v4l2_meta_format *meta;
++	u32 pixelformat;
+ 	u32 planes;
+ 	unsigned i;
+ 
+@@ -299,8 +299,9 @@ static void v4l_print_format(const void *arg, bool write_only)
+ 	case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+ 	case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+ 		mp = &p->fmt.pix_mp;
++		pixelformat = mp->pixelformat;
+ 		pr_cont(", width=%u, height=%u, format=%p4cc, field=%s, colorspace=%d, num_planes=%u, flags=0x%x, ycbcr_enc=%u, quantization=%u, xfer_func=%u\n",
+-			mp->width, mp->height, &mp->pixelformat,
++			mp->width, mp->height, &pixelformat,
+ 			prt_names(mp->field, v4l2_field_names),
+ 			mp->colorspace, mp->num_planes, mp->flags,
+ 			mp->ycbcr_enc, mp->quantization, mp->xfer_func);
+@@ -343,14 +344,15 @@ static void v4l_print_format(const void *arg, bool write_only)
+ 		break;
+ 	case V4L2_BUF_TYPE_SDR_CAPTURE:
+ 	case V4L2_BUF_TYPE_SDR_OUTPUT:
+-		sdr = &p->fmt.sdr;
+-		pr_cont(", pixelformat=%p4cc\n", &sdr->pixelformat);
++		pixelformat = p->fmt.sdr.pixelformat;
++		pr_cont(", pixelformat=%p4cc\n", &pixelformat);
+ 		break;
+ 	case V4L2_BUF_TYPE_META_CAPTURE:
+ 	case V4L2_BUF_TYPE_META_OUTPUT:
+ 		meta = &p->fmt.meta;
++		pixelformat = meta->dataformat;
+ 		pr_cont(", dataformat=%p4cc, buffersize=%u\n",
+-			&meta->dataformat, meta->buffersize);
++			&pixelformat, meta->buffersize);
+ 		break;
+ 	}
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
+index e7f4bf5bc8dd7..3de683b5e06d0 100644
+--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
++++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
+@@ -585,19 +585,14 @@ int v4l2_m2m_reqbufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_reqbufs);
+ 
+-int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+-		      struct v4l2_buffer *buf)
++static void v4l2_m2m_adjust_mem_offset(struct vb2_queue *vq,
++				       struct v4l2_buffer *buf)
+ {
+-	struct vb2_queue *vq;
+-	int ret = 0;
+-	unsigned int i;
+-
+-	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+-	ret = vb2_querybuf(vq, buf);
+-
+ 	/* Adjust MMAP memory offsets for the CAPTURE queue */
+ 	if (buf->memory == V4L2_MEMORY_MMAP && V4L2_TYPE_IS_CAPTURE(vq->type)) {
+ 		if (V4L2_TYPE_IS_MULTIPLANAR(vq->type)) {
++			unsigned int i;
++
+ 			for (i = 0; i < buf->length; ++i)
+ 				buf->m.planes[i].m.mem_offset
+ 					+= DST_QUEUE_OFF_BASE;
+@@ -605,8 +600,23 @@ int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ 			buf->m.offset += DST_QUEUE_OFF_BASE;
+ 		}
+ 	}
++}
+ 
+-	return ret;
++int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
++		      struct v4l2_buffer *buf)
++{
++	struct vb2_queue *vq;
++	int ret;
++
++	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
++	ret = vb2_querybuf(vq, buf);
++	if (ret)
++		return ret;
++
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_querybuf);
+ 
+@@ -763,6 +773,9 @@ int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ 	if (ret)
+ 		return ret;
+ 
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
+ 	/*
+ 	 * If the capture queue is streaming, but streaming hasn't started
+ 	 * on the device, but was asked to stop, mark the previously queued
+@@ -784,9 +797,17 @@ int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ 		   struct v4l2_buffer *buf)
+ {
+ 	struct vb2_queue *vq;
++	int ret;
+ 
+ 	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+-	return vb2_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
++	ret = vb2_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
++	if (ret)
++		return ret;
++
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf);
+ 
+@@ -795,9 +816,17 @@ int v4l2_m2m_prepare_buf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ {
+ 	struct video_device *vdev = video_devdata(file);
+ 	struct vb2_queue *vq;
++	int ret;
+ 
+ 	vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+-	return vb2_prepare_buf(vq, vdev->v4l2_dev->mdev, buf);
++	ret = vb2_prepare_buf(vq, vdev->v4l2_dev->mdev, buf);
++	if (ret)
++		return ret;
++
++	/* Adjust MMAP memory offsets for the CAPTURE queue */
++	v4l2_m2m_adjust_mem_offset(vq, buf);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_prepare_buf);
+ 
+diff --git a/drivers/memory/emif.c b/drivers/memory/emif.c
+index 762d0c0f0716f..ecc78d6f89ed2 100644
+--- a/drivers/memory/emif.c
++++ b/drivers/memory/emif.c
+@@ -1025,7 +1025,7 @@ static struct emif_data *__init_or_module get_device_details(
+ 	temp	= devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
+ 	dev_info = devm_kzalloc(dev, sizeof(*dev_info), GFP_KERNEL);
+ 
+-	if (!emif || !pd || !dev_info) {
++	if (!emif || !temp || !dev_info) {
+ 		dev_err(dev, "%s:%d: allocation error\n", __func__, __LINE__);
+ 		goto error;
+ 	}
+@@ -1117,7 +1117,7 @@ static int __init_or_module emif_probe(struct platform_device *pdev)
+ {
+ 	struct emif_data	*emif;
+ 	struct resource		*res;
+-	int			irq;
++	int			irq, ret;
+ 
+ 	if (pdev->dev.of_node)
+ 		emif = of_get_memory_device_details(pdev->dev.of_node, &pdev->dev);
+@@ -1147,7 +1147,9 @@ static int __init_or_module emif_probe(struct platform_device *pdev)
+ 	emif_onetime_settings(emif);
+ 	emif_debugfs_init(emif);
+ 	disable_and_clear_all_interrupts(emif);
+-	setup_interrupts(emif, irq);
++	ret = setup_interrupts(emif, irq);
++	if (ret)
++		goto error;
+ 
+ 	/* One-time actions taken on probing the first device */
+ 	if (!emif1) {
+diff --git a/drivers/memory/tegra/tegra20-emc.c b/drivers/memory/tegra/tegra20-emc.c
+index 497b6edbf3ca1..25ba3c5e4ad6a 100644
+--- a/drivers/memory/tegra/tegra20-emc.c
++++ b/drivers/memory/tegra/tegra20-emc.c
+@@ -540,7 +540,7 @@ static int emc_read_lpddr_mode_register(struct tegra_emc *emc,
+ 					unsigned int register_addr,
+ 					unsigned int *register_data)
+ {
+-	u32 memory_dev = emem_dev + 1;
++	u32 memory_dev = emem_dev ? 1 : 2;
+ 	u32 val, mr_mask = 0xff;
+ 	int err;
+ 
+diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
+index c0450397b6735..7ea312f0840e0 100644
+--- a/drivers/memstick/core/mspro_block.c
++++ b/drivers/memstick/core/mspro_block.c
+@@ -186,13 +186,8 @@ static int mspro_block_bd_open(struct block_device *bdev, fmode_t mode)
+ 
+ 	mutex_lock(&mspro_block_disk_lock);
+ 
+-	if (msb && msb->card) {
++	if (msb && msb->card)
+ 		msb->usage_count++;
+-		if ((mode & FMODE_WRITE) && msb->read_only)
+-			rc = -EROFS;
+-		else
+-			rc = 0;
+-	}
+ 
+ 	mutex_unlock(&mspro_block_disk_lock);
+ 
+@@ -1239,6 +1234,9 @@ static int mspro_block_init_disk(struct memstick_dev *card)
+ 	set_capacity(msb->disk, capacity);
+ 	dev_dbg(&card->dev, "capacity set %ld\n", capacity);
+ 
++	if (msb->read_only)
++		set_disk_ro(msb->disk, true);
++
+ 	rc = device_add_disk(&card->dev, msb->disk, NULL);
+ 	if (rc)
+ 		goto out_cleanup_disk;
+diff --git a/drivers/mfd/asic3.c b/drivers/mfd/asic3.c
+index 8d58c8df46cfb..56338f9dbd0ba 100644
+--- a/drivers/mfd/asic3.c
++++ b/drivers/mfd/asic3.c
+@@ -906,14 +906,14 @@ static int __init asic3_mfd_probe(struct platform_device *pdev,
+ 		ret = mfd_add_devices(&pdev->dev, pdev->id,
+ 			&asic3_cell_ds1wm, 1, mem, asic->irq_base, NULL);
+ 		if (ret < 0)
+-			goto out;
++			goto out_unmap;
+ 	}
+ 
+ 	if (mem_sdio && (irq >= 0)) {
+ 		ret = mfd_add_devices(&pdev->dev, pdev->id,
+ 			&asic3_cell_mmc, 1, mem_sdio, irq, NULL);
+ 		if (ret < 0)
+-			goto out;
++			goto out_unmap;
+ 	}
+ 
+ 	ret = 0;
+@@ -927,8 +927,12 @@ static int __init asic3_mfd_probe(struct platform_device *pdev,
+ 		ret = mfd_add_devices(&pdev->dev, 0,
+ 			asic3_cell_leds, ASIC3_NUM_LEDS, NULL, 0, NULL);
+ 	}
++	return ret;
+ 
+- out:
++out_unmap:
++	if (asic->tmio_cnf)
++		iounmap(asic->tmio_cnf);
++out:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mfd/mc13xxx-core.c b/drivers/mfd/mc13xxx-core.c
+index 8a4f1d90dcfd1..1000572761a84 100644
+--- a/drivers/mfd/mc13xxx-core.c
++++ b/drivers/mfd/mc13xxx-core.c
+@@ -323,8 +323,10 @@ int mc13xxx_adc_do_conversion(struct mc13xxx *mc13xxx, unsigned int mode,
+ 		adc1 |= MC13783_ADC1_ATOX;
+ 
+ 	dev_dbg(mc13xxx->dev, "%s: request irq\n", __func__);
+-	mc13xxx_irq_request(mc13xxx, MC13XXX_IRQ_ADCDONE,
++	ret = mc13xxx_irq_request(mc13xxx, MC13XXX_IRQ_ADCDONE,
+ 			mc13xxx_handler_adcdone, __func__, &adcdone_data);
++	if (ret)
++		goto out;
+ 
+ 	mc13xxx_reg_write(mc13xxx, MC13XXX_ADC0, adc0);
+ 	mc13xxx_reg_write(mc13xxx, MC13XXX_ADC1, adc1);
+diff --git a/drivers/misc/cardreader/alcor_pci.c b/drivers/misc/cardreader/alcor_pci.c
+index de6d44a158bba..3f514d77a843f 100644
+--- a/drivers/misc/cardreader/alcor_pci.c
++++ b/drivers/misc/cardreader/alcor_pci.c
+@@ -266,7 +266,7 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+-	ret = ida_simple_get(&alcor_pci_idr, 0, 0, GFP_KERNEL);
++	ret = ida_alloc(&alcor_pci_idr, GFP_KERNEL);
+ 	if (ret < 0)
+ 		return ret;
+ 	priv->id = ret;
+@@ -280,7 +280,8 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ 	ret = pci_request_regions(pdev, DRV_NAME_ALCOR_PCI);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Cannot request region\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto error_free_ida;
+ 	}
+ 
+ 	if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) {
+@@ -324,6 +325,8 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ 
+ error_release_regions:
+ 	pci_release_regions(pdev);
++error_free_ida:
++	ida_free(&alcor_pci_idr, priv->id);
+ 	return ret;
+ }
+ 
+@@ -337,7 +340,7 @@ static void alcor_pci_remove(struct pci_dev *pdev)
+ 
+ 	mfd_remove_devices(&pdev->dev);
+ 
+-	ida_simple_remove(&alcor_pci_idr, priv->id);
++	ida_free(&alcor_pci_idr, priv->id);
+ 
+ 	pci_release_regions(pdev);
+ 	pci_set_drvdata(pdev, NULL);
+diff --git a/drivers/misc/habanalabs/common/debugfs.c b/drivers/misc/habanalabs/common/debugfs.c
+index 1f2a3dc6c4e2f..78a6789ef7cc5 100644
+--- a/drivers/misc/habanalabs/common/debugfs.c
++++ b/drivers/misc/habanalabs/common/debugfs.c
+@@ -856,6 +856,8 @@ static ssize_t hl_set_power_state(struct file *f, const char __user *buf,
+ 		pci_set_power_state(hdev->pdev, PCI_D0);
+ 		pci_restore_state(hdev->pdev);
+ 		rc = pci_enable_device(hdev->pdev);
++		if (rc < 0)
++			return rc;
+ 	} else if (value == 2) {
+ 		pci_save_state(hdev->pdev);
+ 		pci_disable_device(hdev->pdev);
+diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
+index 67c5b452dd356..88b91ad8e5413 100644
+--- a/drivers/misc/kgdbts.c
++++ b/drivers/misc/kgdbts.c
+@@ -1070,10 +1070,10 @@ static int kgdbts_option_setup(char *opt)
+ {
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		printk(KERN_ERR "kgdbts: config string too long\n");
+-		return -ENOSPC;
++		return 1;
+ 	}
+ 	strcpy(config, opt);
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("kgdbts=", kgdbts_option_setup);
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 67bb6a25fd0a0..64ce3f830262b 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -107,6 +107,7 @@
+ #define MEI_DEV_ID_ADP_S      0x7AE8  /* Alder Lake Point S */
+ #define MEI_DEV_ID_ADP_LP     0x7A60  /* Alder Lake Point LP */
+ #define MEI_DEV_ID_ADP_P      0x51E0  /* Alder Lake Point P */
++#define MEI_DEV_ID_ADP_N      0x54E0  /* Alder Lake Point N */
+ 
+ /*
+  * MEI HW Section
+@@ -120,6 +121,7 @@
+ #define PCI_CFG_HFS_2         0x48
+ #define PCI_CFG_HFS_3         0x60
+ #  define PCI_CFG_HFS_3_FW_SKU_MSK   0x00000070
++#  define PCI_CFG_HFS_3_FW_SKU_IGN   0x00000000
+ #  define PCI_CFG_HFS_3_FW_SKU_SPS   0x00000060
+ #define PCI_CFG_HFS_4         0x64
+ #define PCI_CFG_HFS_5         0x68
+diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
+index d3a6c07286451..fbc4c95818645 100644
+--- a/drivers/misc/mei/hw-me.c
++++ b/drivers/misc/mei/hw-me.c
+@@ -1405,16 +1405,16 @@ static bool mei_me_fw_type_sps_4(const struct pci_dev *pdev)
+ 	.quirk_probe = mei_me_fw_type_sps_4
+ 
+ /**
+- * mei_me_fw_type_sps() - check for sps sku
++ * mei_me_fw_type_sps_ign() - check for sps or ign sku
+  *
+- * Read ME FW Status register to check for SPS Firmware.
+- * The SPS FW is only signaled in pci function 0
++ * Read ME FW Status register to check for SPS or IGN Firmware.
++ * The SPS/IGN FW is only signaled in pci function 0
+  *
+  * @pdev: pci device
+  *
+- * Return: true in case of SPS firmware
++ * Return: true in case of SPS/IGN firmware
+  */
+-static bool mei_me_fw_type_sps(const struct pci_dev *pdev)
++static bool mei_me_fw_type_sps_ign(const struct pci_dev *pdev)
+ {
+ 	u32 reg;
+ 	u32 fw_type;
+@@ -1427,14 +1427,15 @@ static bool mei_me_fw_type_sps(const struct pci_dev *pdev)
+ 
+ 	dev_dbg(&pdev->dev, "fw type is %d\n", fw_type);
+ 
+-	return fw_type == PCI_CFG_HFS_3_FW_SKU_SPS;
++	return fw_type == PCI_CFG_HFS_3_FW_SKU_IGN ||
++	       fw_type == PCI_CFG_HFS_3_FW_SKU_SPS;
+ }
+ 
+ #define MEI_CFG_KIND_ITOUCH                     \
+ 	.kind = "itouch"
+ 
+-#define MEI_CFG_FW_SPS                          \
+-	.quirk_probe = mei_me_fw_type_sps
++#define MEI_CFG_FW_SPS_IGN                      \
++	.quirk_probe = mei_me_fw_type_sps_ign
+ 
+ #define MEI_CFG_FW_VER_SUPP                     \
+ 	.fw_ver_supported = 1
+@@ -1535,7 +1536,7 @@ static const struct mei_cfg mei_me_pch12_sps_cfg = {
+ 	MEI_CFG_PCH8_HFS,
+ 	MEI_CFG_FW_VER_SUPP,
+ 	MEI_CFG_DMA_128,
+-	MEI_CFG_FW_SPS,
++	MEI_CFG_FW_SPS_IGN,
+ };
+ 
+ /* Cannon Lake itouch with quirk for SPS 5.0 and newer Firmware exclusion
+@@ -1545,7 +1546,7 @@ static const struct mei_cfg mei_me_pch12_itouch_sps_cfg = {
+ 	MEI_CFG_KIND_ITOUCH,
+ 	MEI_CFG_PCH8_HFS,
+ 	MEI_CFG_FW_VER_SUPP,
+-	MEI_CFG_FW_SPS,
++	MEI_CFG_FW_SPS_IGN,
+ };
+ 
+ /* Tiger Lake and newer devices */
+@@ -1562,7 +1563,7 @@ static const struct mei_cfg mei_me_pch15_sps_cfg = {
+ 	MEI_CFG_FW_VER_SUPP,
+ 	MEI_CFG_DMA_128,
+ 	MEI_CFG_TRC,
+-	MEI_CFG_FW_SPS,
++	MEI_CFG_FW_SPS_IGN,
+ };
+ 
+ /*
+diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
+index a67f4f2d33a93..0706322154cbe 100644
+--- a/drivers/misc/mei/interrupt.c
++++ b/drivers/misc/mei/interrupt.c
+@@ -424,31 +424,26 @@ int mei_irq_read_handler(struct mei_device *dev,
+ 	list_for_each_entry(cl, &dev->file_list, link) {
+ 		if (mei_cl_hbm_equal(cl, mei_hdr)) {
+ 			cl_dbg(dev, cl, "got a message\n");
+-			break;
++			ret = mei_cl_irq_read_msg(cl, mei_hdr, meta_hdr, cmpl_list);
++			goto reset_slots;
+ 		}
+ 	}
+ 
+ 	/* if no recipient cl was found we assume corrupted header */
+-	if (&cl->link == &dev->file_list) {
+-		/* A message for not connected fixed address clients
+-		 * should be silently discarded
+-		 * On power down client may be force cleaned,
+-		 * silently discard such messages
+-		 */
+-		if (hdr_is_fixed(mei_hdr) ||
+-		    dev->dev_state == MEI_DEV_POWER_DOWN) {
+-			mei_irq_discard_msg(dev, mei_hdr, mei_hdr->length);
+-			ret = 0;
+-			goto reset_slots;
+-		}
+-		dev_err(dev->dev, "no destination client found 0x%08X\n",
+-				dev->rd_msg_hdr[0]);
+-		ret = -EBADMSG;
+-		goto end;
++	/* A message for not connected fixed address clients
++	 * should be silently discarded
++	 * On power down client may be force cleaned,
++	 * silently discard such messages
++	 */
++	if (hdr_is_fixed(mei_hdr) ||
++	    dev->dev_state == MEI_DEV_POWER_DOWN) {
++		mei_irq_discard_msg(dev, mei_hdr, mei_hdr->length);
++		ret = 0;
++		goto reset_slots;
+ 	}
+-
+-	ret = mei_cl_irq_read_msg(cl, mei_hdr, meta_hdr, cmpl_list);
+-
++	dev_err(dev->dev, "no destination client found 0x%08X\n", dev->rd_msg_hdr[0]);
++	ret = -EBADMSG;
++	goto end;
+ 
+ reset_slots:
+ 	/* reset the number of slots and header */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 3a45aaf002ac8..a738253dbd056 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -113,6 +113,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
+ 
+ 	/* required last entry */
+ 	{0, }
+diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
+index f6b7a9c5bbffd..5aefe05a94f6e 100644
+--- a/drivers/mmc/core/bus.c
++++ b/drivers/mmc/core/bus.c
+@@ -15,6 +15,7 @@
+ #include <linux/stat.h>
+ #include <linux/of.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sysfs.h>
+ 
+ #include <linux/mmc/card.h>
+ #include <linux/mmc/host.h>
+@@ -34,13 +35,13 @@ static ssize_t type_show(struct device *dev,
+ 
+ 	switch (card->type) {
+ 	case MMC_TYPE_MMC:
+-		return sprintf(buf, "MMC\n");
++		return sysfs_emit(buf, "MMC\n");
+ 	case MMC_TYPE_SD:
+-		return sprintf(buf, "SD\n");
++		return sysfs_emit(buf, "SD\n");
+ 	case MMC_TYPE_SDIO:
+-		return sprintf(buf, "SDIO\n");
++		return sysfs_emit(buf, "SDIO\n");
+ 	case MMC_TYPE_SD_COMBO:
+-		return sprintf(buf, "SDcombo\n");
++		return sysfs_emit(buf, "SDcombo\n");
+ 	default:
+ 		return -EFAULT;
+ 	}
+diff --git a/drivers/mmc/core/bus.h b/drivers/mmc/core/bus.h
+index 8105852c4b62f..3996b191b68d1 100644
+--- a/drivers/mmc/core/bus.h
++++ b/drivers/mmc/core/bus.h
+@@ -9,6 +9,7 @@
+ #define _MMC_CORE_BUS_H
+ 
+ #include <linux/device.h>
++#include <linux/sysfs.h>
+ 
+ struct mmc_host;
+ struct mmc_card;
+@@ -17,7 +18,7 @@ struct mmc_card;
+ static ssize_t mmc_##name##_show (struct device *dev, struct device_attribute *attr, char *buf)	\
+ {										\
+ 	struct mmc_card *card = mmc_dev_to_card(dev);				\
+-	return sprintf(buf, fmt, args);						\
++	return sysfs_emit(buf, fmt, args);					\
+ }										\
+ static DEVICE_ATTR(name, S_IRUGO, mmc_##name##_show, NULL)
+ 
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index cf140f4ec8643..d739e2b631fe8 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -588,6 +588,16 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+ 
+ EXPORT_SYMBOL(mmc_alloc_host);
+ 
++static int mmc_validate_host_caps(struct mmc_host *host)
++{
++	if (host->caps & MMC_CAP_SDIO_IRQ && !host->ops->enable_sdio_irq) {
++		dev_warn(host->parent, "missing ->enable_sdio_irq() ops\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ /**
+  *	mmc_add_host - initialise host hardware
+  *	@host: mmc host
+@@ -600,8 +610,9 @@ int mmc_add_host(struct mmc_host *host)
+ {
+ 	int err;
+ 
+-	WARN_ON((host->caps & MMC_CAP_SDIO_IRQ) &&
+-		!host->ops->enable_sdio_irq);
++	err = mmc_validate_host_caps(host);
++	if (err)
++		return err;
+ 
+ 	err = device_add(&host->class_dev);
+ 	if (err)
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index b1c1716dacf0e..4a590e5d268e6 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -12,6 +12,7 @@
+ #include <linux/slab.h>
+ #include <linux/stat.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sysfs.h>
+ 
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/card.h>
+@@ -812,12 +813,11 @@ static ssize_t mmc_fwrev_show(struct device *dev,
+ {
+ 	struct mmc_card *card = mmc_dev_to_card(dev);
+ 
+-	if (card->ext_csd.rev < 7) {
+-		return sprintf(buf, "0x%x\n", card->cid.fwrev);
+-	} else {
+-		return sprintf(buf, "0x%*phN\n", MMC_FIRMWARE_LEN,
+-			       card->ext_csd.fwrev);
+-	}
++	if (card->ext_csd.rev < 7)
++		return sysfs_emit(buf, "0x%x\n", card->cid.fwrev);
++	else
++		return sysfs_emit(buf, "0x%*phN\n", MMC_FIRMWARE_LEN,
++				  card->ext_csd.fwrev);
+ }
+ 
+ static DEVICE_ATTR(fwrev, S_IRUGO, mmc_fwrev_show, NULL);
+@@ -830,10 +830,10 @@ static ssize_t mmc_dsr_show(struct device *dev,
+ 	struct mmc_host *host = card->host;
+ 
+ 	if (card->csd.dsr_imp && host->dsr_req)
+-		return sprintf(buf, "0x%x\n", host->dsr);
++		return sysfs_emit(buf, "0x%x\n", host->dsr);
+ 	else
+ 		/* return default DSR value */
+-		return sprintf(buf, "0x%x\n", 0x404);
++		return sysfs_emit(buf, "0x%x\n", 0x404);
+ }
+ 
+ static DEVICE_ATTR(dsr, S_IRUGO, mmc_dsr_show, NULL);
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 44746f2edadaf..17e25c9dc9357 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -13,6 +13,7 @@
+ #include <linux/stat.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
++#include <linux/sysfs.h>
+ 
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/card.h>
+@@ -708,18 +709,16 @@ MMC_DEV_ATTR(ocr, "0x%08x\n", card->ocr);
+ MMC_DEV_ATTR(rca, "0x%04x\n", card->rca);
+ 
+ 
+-static ssize_t mmc_dsr_show(struct device *dev,
+-                           struct device_attribute *attr,
+-                           char *buf)
++static ssize_t mmc_dsr_show(struct device *dev, struct device_attribute *attr,
++			    char *buf)
+ {
+-       struct mmc_card *card = mmc_dev_to_card(dev);
+-       struct mmc_host *host = card->host;
+-
+-       if (card->csd.dsr_imp && host->dsr_req)
+-               return sprintf(buf, "0x%x\n", host->dsr);
+-       else
+-               /* return default DSR value */
+-               return sprintf(buf, "0x%x\n", 0x404);
++	struct mmc_card *card = mmc_dev_to_card(dev);
++	struct mmc_host *host = card->host;
++
++	if (card->csd.dsr_imp && host->dsr_req)
++		return sysfs_emit(buf, "0x%x\n", host->dsr);
++	/* return default DSR value */
++	return sysfs_emit(buf, "0x%x\n", 0x404);
+ }
+ 
+ static DEVICE_ATTR(dsr, S_IRUGO, mmc_dsr_show, NULL);
+@@ -735,9 +734,9 @@ static ssize_t info##num##_show(struct device *dev, struct device_attribute *att
+ 												\
+ 	if (num > card->num_info)								\
+ 		return -ENODATA;								\
+-	if (!card->info[num-1][0])								\
++	if (!card->info[num - 1][0])								\
+ 		return 0;									\
+-	return sprintf(buf, "%s\n", card->info[num-1]);						\
++	return sysfs_emit(buf, "%s\n", card->info[num - 1]);					\
+ }												\
+ static DEVICE_ATTR_RO(info##num)
+ 
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 5447c47157aa5..2bd9004d76d85 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -7,6 +7,7 @@
+ 
+ #include <linux/err.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sysfs.h>
+ 
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/card.h>
+@@ -40,9 +41,9 @@ static ssize_t info##num##_show(struct device *dev, struct device_attribute *att
+ 												\
+ 	if (num > card->num_info)								\
+ 		return -ENODATA;								\
+-	if (!card->info[num-1][0])								\
++	if (!card->info[num - 1][0])								\
+ 		return 0;									\
+-	return sprintf(buf, "%s\n", card->info[num-1]);						\
++	return sysfs_emit(buf, "%s\n", card->info[num - 1]);					\
+ }												\
+ static DEVICE_ATTR_RO(info##num)
+ 
+diff --git a/drivers/mmc/core/sdio_bus.c b/drivers/mmc/core/sdio_bus.c
+index fda03b35c14a5..c6268c38c69e5 100644
+--- a/drivers/mmc/core/sdio_bus.c
++++ b/drivers/mmc/core/sdio_bus.c
+@@ -14,6 +14,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/pm_domain.h>
+ #include <linux/acpi.h>
++#include <linux/sysfs.h>
+ 
+ #include <linux/mmc/card.h>
+ #include <linux/mmc/host.h>
+@@ -35,7 +36,7 @@ field##_show(struct device *dev, struct device_attribute *attr, char *buf)				\
+ 	struct sdio_func *func;						\
+ 									\
+ 	func = dev_to_sdio_func (dev);					\
+-	return sprintf(buf, format_string, args);			\
++	return sysfs_emit(buf, format_string, args);			\
+ }									\
+ static DEVICE_ATTR_RO(field)
+ 
+@@ -52,9 +53,9 @@ static ssize_t info##num##_show(struct device *dev, struct device_attribute *att
+ 												\
+ 	if (num > func->num_info)								\
+ 		return -ENODATA;								\
+-	if (!func->info[num-1][0])								\
++	if (!func->info[num - 1][0])								\
+ 		return 0;									\
+-	return sprintf(buf, "%s\n", func->info[num-1]);						\
++	return sysfs_emit(buf, "%s\n", func->info[num - 1]);					\
+ }												\
+ static DEVICE_ATTR_RO(info##num)
+ 
+diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c
+index 2a757c88f9d21..80de660027d89 100644
+--- a/drivers/mmc/host/davinci_mmc.c
++++ b/drivers/mmc/host/davinci_mmc.c
+@@ -1375,8 +1375,12 @@ static int davinci_mmcsd_suspend(struct device *dev)
+ static int davinci_mmcsd_resume(struct device *dev)
+ {
+ 	struct mmc_davinci_host *host = dev_get_drvdata(dev);
++	int ret;
++
++	ret = clk_enable(host->clk);
++	if (ret)
++		return ret;
+ 
+-	clk_enable(host->clk);
+ 	mmc_davinci_reset_ctrl(host, 0);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
+index 58cfaffa3c2d8..f7c384db89bf3 100644
+--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
+@@ -1495,12 +1495,12 @@ static int rtsx_pci_sdmmc_drv_probe(struct platform_device *pdev)
+ 
+ 	realtek_init_host(host);
+ 
+-	if (pcr->rtd3_en) {
+-		pm_runtime_set_autosuspend_delay(&pdev->dev, 5000);
+-		pm_runtime_use_autosuspend(&pdev->dev);
+-		pm_runtime_enable(&pdev->dev);
+-	}
+-
++	pm_runtime_no_callbacks(&pdev->dev);
++	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_enable(&pdev->dev);
++	pm_runtime_set_autosuspend_delay(&pdev->dev, 200);
++	pm_runtime_mark_last_busy(&pdev->dev);
++	pm_runtime_use_autosuspend(&pdev->dev);
+ 
+ 	mmc_add_host(mmc);
+ 
+@@ -1521,11 +1521,6 @@ static int rtsx_pci_sdmmc_drv_remove(struct platform_device *pdev)
+ 	pcr->slots[RTSX_SD_CARD].card_event = NULL;
+ 	mmc = host->mmc;
+ 
+-	if (pcr->rtd3_en) {
+-		pm_runtime_dont_use_autosuspend(&pdev->dev);
+-		pm_runtime_disable(&pdev->dev);
+-	}
+-
+ 	cancel_work_sync(&host->work);
+ 
+ 	mutex_lock(&host->host_mutex);
+@@ -1548,6 +1543,9 @@ static int rtsx_pci_sdmmc_drv_remove(struct platform_device *pdev)
+ 
+ 	flush_work(&host->work);
+ 
++	pm_runtime_dont_use_autosuspend(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
++
+ 	mmc_free_host(mmc);
+ 
+ 	dev_dbg(&(pdev->dev),
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index f654afbe8e83c..b4891bb266485 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -514,26 +514,6 @@ static const struct sdhci_am654_driver_data sdhci_j721e_4bit_drvdata = {
+ 	.flags = IOMUX_PRESENT,
+ };
+ 
+-static const struct sdhci_pltfm_data sdhci_am64_8bit_pdata = {
+-	.ops = &sdhci_j721e_8bit_ops,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+-};
+-
+-static const struct sdhci_am654_driver_data sdhci_am64_8bit_drvdata = {
+-	.pdata = &sdhci_am64_8bit_pdata,
+-	.flags = DLL_PRESENT | DLL_CALIB,
+-};
+-
+-static const struct sdhci_pltfm_data sdhci_am64_4bit_pdata = {
+-	.ops = &sdhci_j721e_4bit_ops,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+-};
+-
+-static const struct sdhci_am654_driver_data sdhci_am64_4bit_drvdata = {
+-	.pdata = &sdhci_am64_4bit_pdata,
+-	.flags = IOMUX_PRESENT,
+-};
+-
+ static const struct soc_device_attribute sdhci_am654_devices[] = {
+ 	{ .family = "AM65X",
+ 	  .revision = "SR1.0",
+@@ -759,11 +739,11 @@ static const struct of_device_id sdhci_am654_of_match[] = {
+ 	},
+ 	{
+ 		.compatible = "ti,am64-sdhci-8bit",
+-		.data = &sdhci_am64_8bit_drvdata,
++		.data = &sdhci_j721e_8bit_drvdata,
+ 	},
+ 	{
+ 		.compatible = "ti,am64-sdhci-4bit",
+-		.data = &sdhci_am64_4bit_drvdata,
++		.data = &sdhci_j721e_4bit_drvdata,
+ 	},
+ 	{ /* sentinel */ }
+ };
+diff --git a/drivers/mtd/devices/mchp23k256.c b/drivers/mtd/devices/mchp23k256.c
+index 77c872fd3d839..7d188cdff6a26 100644
+--- a/drivers/mtd/devices/mchp23k256.c
++++ b/drivers/mtd/devices/mchp23k256.c
+@@ -229,6 +229,19 @@ static const struct of_device_id mchp23k256_of_table[] = {
+ };
+ MODULE_DEVICE_TABLE(of, mchp23k256_of_table);
+ 
++static const struct spi_device_id mchp23k256_spi_ids[] = {
++	{
++		.name = "mchp23k256",
++		.driver_data = (kernel_ulong_t)&mchp23k256_caps,
++	},
++	{
++		.name = "mchp23lcv1024",
++		.driver_data = (kernel_ulong_t)&mchp23lcv1024_caps,
++	},
++	{}
++};
++MODULE_DEVICE_TABLE(spi, mchp23k256_spi_ids);
++
+ static struct spi_driver mchp23k256_driver = {
+ 	.driver = {
+ 		.name	= "mchp23k256",
+@@ -236,6 +249,7 @@ static struct spi_driver mchp23k256_driver = {
+ 	},
+ 	.probe		= mchp23k256_probe,
+ 	.remove		= mchp23k256_remove,
++	.id_table	= mchp23k256_spi_ids,
+ };
+ 
+ module_spi_driver(mchp23k256_driver);
+diff --git a/drivers/mtd/devices/mchp48l640.c b/drivers/mtd/devices/mchp48l640.c
+index 99400d0fb8c1e..fbd6b6bf908e5 100644
+--- a/drivers/mtd/devices/mchp48l640.c
++++ b/drivers/mtd/devices/mchp48l640.c
+@@ -357,6 +357,15 @@ static const struct of_device_id mchp48l640_of_table[] = {
+ };
+ MODULE_DEVICE_TABLE(of, mchp48l640_of_table);
+ 
++static const struct spi_device_id mchp48l640_spi_ids[] = {
++	{
++		.name = "48l640",
++		.driver_data = (kernel_ulong_t)&mchp48l640_caps,
++	},
++	{}
++};
++MODULE_DEVICE_TABLE(spi, mchp48l640_spi_ids);
++
+ static struct spi_driver mchp48l640_driver = {
+ 	.driver = {
+ 		.name	= "mchp48l640",
+@@ -364,6 +373,7 @@ static struct spi_driver mchp48l640_driver = {
+ 	},
+ 	.probe		= mchp48l640_probe,
+ 	.remove		= mchp48l640_remove,
++	.id_table	= mchp48l640_spi_ids,
+ };
+ 
+ module_spi_driver(mchp48l640_driver);
+diff --git a/drivers/mtd/nand/onenand/generic.c b/drivers/mtd/nand/onenand/generic.c
+index 8b6f4da5d7201..a4b8b65fe15f5 100644
+--- a/drivers/mtd/nand/onenand/generic.c
++++ b/drivers/mtd/nand/onenand/generic.c
+@@ -53,7 +53,12 @@ static int generic_onenand_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	info->onenand.mmcontrol = pdata ? pdata->mmcontrol : NULL;
+-	info->onenand.irq = platform_get_irq(pdev, 0);
++
++	err = platform_get_irq(pdev, 0);
++	if (err < 0)
++		goto out_iounmap;
++
++	info->onenand.irq = err;
+ 
+ 	info->mtd.dev.parent = &pdev->dev;
+ 	info->mtd.priv = &info->onenand;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index f3276ee9e4fe7..ddd93bc38ea6c 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -2060,13 +2060,15 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ 	nc->mck = of_clk_get(dev->parent->of_node, 0);
+ 	if (IS_ERR(nc->mck)) {
+ 		dev_err(dev, "Failed to retrieve MCK clk\n");
+-		return PTR_ERR(nc->mck);
++		ret = PTR_ERR(nc->mck);
++		goto out_release_dma;
+ 	}
+ 
+ 	np = of_parse_phandle(dev->parent->of_node, "atmel,smc", 0);
+ 	if (!np) {
+ 		dev_err(dev, "Missing or invalid atmel,smc property\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out_release_dma;
+ 	}
+ 
+ 	nc->smc = syscon_node_to_regmap(np);
+@@ -2074,10 +2076,16 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ 	if (IS_ERR(nc->smc)) {
+ 		ret = PTR_ERR(nc->smc);
+ 		dev_err(dev, "Could not get SMC regmap (err = %d)\n", ret);
+-		return ret;
++		goto out_release_dma;
+ 	}
+ 
+ 	return 0;
++
++out_release_dma:
++	if (nc->dmac)
++		dma_release_channel(nc->dmac);
++
++	return ret;
+ }
+ 
+ static int
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 5eb20dfe4186e..42e0aab1a00c9 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -648,6 +648,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 				     const struct nand_sdr_timings *sdr)
+ {
+ 	struct gpmi_nfc_hardware_timing *hw = &this->hw;
++	struct resources *r = &this->resources;
+ 	unsigned int dll_threshold_ps = this->devdata->max_chain_delay;
+ 	unsigned int period_ps, reference_period_ps;
+ 	unsigned int data_setup_cycles, data_hold_cycles, addr_setup_cycles;
+@@ -671,6 +672,8 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ 		wrn_dly_sel = BV_GPMI_CTRL1_WRN_DLY_SEL_NO_DELAY;
+ 	}
+ 
++	hw->clk_rate = clk_round_rate(r->clock[0], hw->clk_rate);
++
+ 	/* SDR core timings are given in picoseconds */
+ 	period_ps = div_u64((u64)NSEC_PER_SEC * 1000, hw->clk_rate);
+ 
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index d5a2110eb38ed..881e768f636f8 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -335,16 +335,19 @@ static int nand_isbad_bbm(struct nand_chip *chip, loff_t ofs)
+  *
+  * Return: -EBUSY if the chip has been suspended, 0 otherwise
+  */
+-static int nand_get_device(struct nand_chip *chip)
++static void nand_get_device(struct nand_chip *chip)
+ {
+-	mutex_lock(&chip->lock);
+-	if (chip->suspended) {
++	/* Wait until the device is resumed. */
++	while (1) {
++		mutex_lock(&chip->lock);
++		if (!chip->suspended) {
++			mutex_lock(&chip->controller->lock);
++			return;
++		}
+ 		mutex_unlock(&chip->lock);
+-		return -EBUSY;
+-	}
+-	mutex_lock(&chip->controller->lock);
+ 
+-	return 0;
++		wait_event(chip->resume_wq, !chip->suspended);
++	}
+ }
+ 
+ /**
+@@ -573,9 +576,7 @@ static int nand_block_markbad_lowlevel(struct nand_chip *chip, loff_t ofs)
+ 		nand_erase_nand(chip, &einfo, 0);
+ 
+ 		/* Write bad block marker to OOB */
+-		ret = nand_get_device(chip);
+-		if (ret)
+-			return ret;
++		nand_get_device(chip);
+ 
+ 		ret = nand_markbad_bbm(chip, ofs);
+ 		nand_release_device(chip);
+@@ -3823,9 +3824,7 @@ static int nand_read_oob(struct mtd_info *mtd, loff_t from,
+ 	    ops->mode != MTD_OPS_RAW)
+ 		return -ENOTSUPP;
+ 
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	if (!ops->datbuf)
+ 		ret = nand_do_read_oob(chip, from, ops);
+@@ -4412,13 +4411,11 @@ static int nand_write_oob(struct mtd_info *mtd, loff_t to,
+ 			  struct mtd_oob_ops *ops)
+ {
+ 	struct nand_chip *chip = mtd_to_nand(mtd);
+-	int ret;
++	int ret = 0;
+ 
+ 	ops->retlen = 0;
+ 
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	switch (ops->mode) {
+ 	case MTD_OPS_PLACE_OOB:
+@@ -4478,9 +4475,7 @@ int nand_erase_nand(struct nand_chip *chip, struct erase_info *instr,
+ 		return -EIO;
+ 
+ 	/* Grab the lock and see if the device is available */
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	/* Shift to get first page */
+ 	page = (int)(instr->addr >> chip->page_shift);
+@@ -4567,7 +4562,7 @@ static void nand_sync(struct mtd_info *mtd)
+ 	pr_debug("%s: called\n", __func__);
+ 
+ 	/* Grab the lock and see if the device is available */
+-	WARN_ON(nand_get_device(chip));
++	nand_get_device(chip);
+ 	/* Release it and go back */
+ 	nand_release_device(chip);
+ }
+@@ -4584,9 +4579,7 @@ static int nand_block_isbad(struct mtd_info *mtd, loff_t offs)
+ 	int ret;
+ 
+ 	/* Select the NAND device */
+-	ret = nand_get_device(chip);
+-	if (ret)
+-		return ret;
++	nand_get_device(chip);
+ 
+ 	nand_select_target(chip, chipnr);
+ 
+@@ -4657,6 +4650,8 @@ static void nand_resume(struct mtd_info *mtd)
+ 			__func__);
+ 	}
+ 	mutex_unlock(&chip->lock);
++
++	wake_up_all(&chip->resume_wq);
+ }
+ 
+ /**
+@@ -5434,6 +5429,7 @@ static int nand_scan_ident(struct nand_chip *chip, unsigned int maxchips,
+ 	chip->cur_cs = -1;
+ 
+ 	mutex_init(&chip->lock);
++	init_waitqueue_head(&chip->resume_wq);
+ 
+ 	/* Enforce the right timings for reset/detection */
+ 	chip->current_interface_config = nand_get_reset_interface_config();
+diff --git a/drivers/mtd/nand/raw/pl35x-nand-controller.c b/drivers/mtd/nand/raw/pl35x-nand-controller.c
+index 8a91e069ee2e9..3c6f6aff649f8 100644
+--- a/drivers/mtd/nand/raw/pl35x-nand-controller.c
++++ b/drivers/mtd/nand/raw/pl35x-nand-controller.c
+@@ -1062,7 +1062,7 @@ static int pl35x_nand_chip_init(struct pl35x_nandc *nfc,
+ 	chip->controller = &nfc->controller;
+ 	mtd = nand_to_mtd(chip);
+ 	mtd->dev.parent = nfc->dev;
+-	nand_set_flash_node(chip, nfc->dev->of_node);
++	nand_set_flash_node(chip, np);
+ 	if (!mtd->name) {
+ 		mtd->name = devm_kasprintf(nfc->dev, GFP_KERNEL,
+ 					   "%s", PL35X_NANDC_DRIVER_NAME);
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index a7e3eb9befb62..a32050fecabf3 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -351,9 +351,6 @@ static ssize_t dev_attribute_show(struct device *dev,
+ 	 * we still can use 'ubi->ubi_num'.
+ 	 */
+ 	ubi = container_of(dev, struct ubi_device, dev);
+-	ubi = ubi_get_device(ubi->ubi_num);
+-	if (!ubi)
+-		return -ENODEV;
+ 
+ 	if (attr == &dev_eraseblock_size)
+ 		ret = sprintf(buf, "%d\n", ubi->leb_size);
+@@ -382,7 +379,6 @@ static ssize_t dev_attribute_show(struct device *dev,
+ 	else
+ 		ret = -EINVAL;
+ 
+-	ubi_put_device(ubi);
+ 	return ret;
+ }
+ 
+@@ -979,9 +975,6 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
+ 			goto out_detach;
+ 	}
+ 
+-	/* Make device "available" before it becomes accessible via sysfs */
+-	ubi_devices[ubi_num] = ubi;
+-
+ 	err = uif_init(ubi);
+ 	if (err)
+ 		goto out_detach;
+@@ -1026,6 +1019,7 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
+ 	wake_up_process(ubi->bgt_thread);
+ 	spin_unlock(&ubi->wl_lock);
+ 
++	ubi_devices[ubi_num] = ubi;
+ 	ubi_notify_all(ubi, UBI_VOLUME_ADDED, NULL);
+ 	return ubi_num;
+ 
+@@ -1034,7 +1028,6 @@ out_debugfs:
+ out_uif:
+ 	uif_close(ubi);
+ out_detach:
+-	ubi_devices[ubi_num] = NULL;
+ 	ubi_wl_close(ubi);
+ 	ubi_free_all_volumes(ubi);
+ 	vfree(ubi->vtbl);
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 022af59906aa9..6b5f1ffd961b9 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -468,7 +468,9 @@ static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai,
+ 			if (err == UBI_IO_FF_BITFLIPS)
+ 				scrub = 1;
+ 
+-			add_aeb(ai, free, pnum, ec, scrub);
++			ret = add_aeb(ai, free, pnum, ec, scrub);
++			if (ret)
++				goto out;
+ 			continue;
+ 		} else if (err == 0 || err == UBI_IO_BITFLIPS) {
+ 			dbg_bld("Found non empty PEB:%i in pool", pnum);
+@@ -638,8 +640,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &ai->free, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 0);
++		ret = add_aeb(ai, &ai->free, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 0);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	/* read EC values from used list */
+@@ -649,8 +653,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 0);
++		ret = add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 0);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	/* read EC values from scrub list */
+@@ -660,8 +666,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 1);
++		ret = add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 1);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	/* read EC values from erase list */
+@@ -671,8 +679,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 		if (fm_pos >= fm_size)
+ 			goto fail_bad;
+ 
+-		add_aeb(ai, &ai->erase, be32_to_cpu(fmec->pnum),
+-			be32_to_cpu(fmec->ec), 1);
++		ret = add_aeb(ai, &ai->erase, be32_to_cpu(fmec->pnum),
++			      be32_to_cpu(fmec->ec), 1);
++		if (ret)
++			goto fail;
+ 	}
+ 
+ 	ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count);
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 139ee132bfbcf..1bc7b3a056046 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -56,16 +56,11 @@ static ssize_t vol_attribute_show(struct device *dev,
+ {
+ 	int ret;
+ 	struct ubi_volume *vol = container_of(dev, struct ubi_volume, dev);
+-	struct ubi_device *ubi;
+-
+-	ubi = ubi_get_device(vol->ubi->ubi_num);
+-	if (!ubi)
+-		return -ENODEV;
++	struct ubi_device *ubi = vol->ubi;
+ 
+ 	spin_lock(&ubi->volumes_lock);
+ 	if (!ubi->volumes[vol->vol_id]) {
+ 		spin_unlock(&ubi->volumes_lock);
+-		ubi_put_device(ubi);
+ 		return -ENODEV;
+ 	}
+ 	/* Take a reference to prevent volume removal */
+@@ -103,7 +98,6 @@ static ssize_t vol_attribute_show(struct device *dev,
+ 	vol->ref_count -= 1;
+ 	ubi_assert(vol->ref_count >= 0);
+ 	spin_unlock(&ubi->volumes_lock);
+-	ubi_put_device(ubi);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index edffc3489a120..a7d79100b6859 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -141,14 +141,14 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 	skb_reset_network_header(skb);
+ 	skb_reset_mac_header(skb);
+ 
+-	if (!IS_ENABLED(CONFIG_IPV6) || family == AF_INET)
++	if (!ipv6_mod_enabled() || family == AF_INET)
+ 		err = IP_ECN_decapsulate(oiph, skb);
+ 	else
+ 		err = IP6_ECN_decapsulate(oiph, skb);
+ 
+ 	if (unlikely(err)) {
+ 		if (log_ecn_error) {
+-			if  (!IS_ENABLED(CONFIG_IPV6) || family == AF_INET)
++			if  (!ipv6_mod_enabled() || family == AF_INET)
+ 				net_info_ratelimited("non-ECT from %pI4 "
+ 						     "with TOS=%#x\n",
+ 						     &((struct iphdr *)oiph)->saddr,
+@@ -214,11 +214,12 @@ static struct socket *bareudp_create_sock(struct net *net, __be16 port)
+ 	int err;
+ 
+ 	memset(&udp_conf, 0, sizeof(udp_conf));
+-#if IS_ENABLED(CONFIG_IPV6)
+-	udp_conf.family = AF_INET6;
+-#else
+-	udp_conf.family = AF_INET;
+-#endif
++
++	if (ipv6_mod_enabled())
++		udp_conf.family = AF_INET6;
++	else
++		udp_conf.family = AF_INET;
++
+ 	udp_conf.local_udp_port = port;
+ 	/* Open UDP socket */
+ 	err = udp_sock_create(net, &udp_conf, &sock);
+@@ -441,7 +442,7 @@ static netdev_tx_t bareudp_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 
+ 	rcu_read_lock();
+-	if (IS_ENABLED(CONFIG_IPV6) && info->mode & IP_TUNNEL_INFO_IPV6)
++	if (ipv6_mod_enabled() && info->mode & IP_TUNNEL_INFO_IPV6)
+ 		err = bareudp6_xmit_skb(skb, dev, bareudp, info);
+ 	else
+ 		err = bareudp_xmit_skb(skb, dev, bareudp, info);
+@@ -471,7 +472,7 @@ static int bareudp_fill_metadata_dst(struct net_device *dev,
+ 
+ 	use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ 
+-	if (!IS_ENABLED(CONFIG_IPV6) || ip_tunnel_info_af(info) == AF_INET) {
++	if (!ipv6_mod_enabled() || ip_tunnel_info_af(info) == AF_INET) {
+ 		struct rtable *rt;
+ 		__be32 saddr;
+ 
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 755f4a59f784e..915cfd71f7bf4 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1633,8 +1633,6 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ 		if (err)
+ 			goto out_fail;
+ 
+-		can_put_echo_skb(skb, dev, 0, 0);
+-
+ 		if (cdev->can.ctrlmode & CAN_CTRLMODE_FD) {
+ 			cccr = m_can_read(cdev, M_CAN_CCCR);
+ 			cccr &= ~CCCR_CMR_MASK;
+@@ -1651,6 +1649,9 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ 			m_can_write(cdev, M_CAN_CCCR, cccr);
+ 		}
+ 		m_can_write(cdev, M_CAN_TXBTIE, 0x1);
++
++		can_put_echo_skb(skb, dev, 0, 0);
++
+ 		m_can_write(cdev, M_CAN_TXBAR, 0x1);
+ 		/* End of xmit function for version 3.0.x */
+ 	} else {
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 9a4791d88683c..3a0f022b15625 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -2706,7 +2706,7 @@ mcp251xfd_register_get_dev_id(const struct mcp251xfd_priv *priv,
+  out_kfree_buf_rx:
+ 	kfree(buf_rx);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ #define MCP251XFD_QUIRK_ACTIVE(quirk) \
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 7cf65936d02e5..ccbe606dc4003 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -821,7 +821,6 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
+ 
+ 		usb_unanchor_urb(urb);
+ 		usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
+-		dev_kfree_skb(skb);
+ 
+ 		atomic_dec(&dev->active_tx_urbs);
+ 
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index a1a154c08b7f7..023bd34d48e3c 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -33,10 +33,6 @@
+ #define MCBA_USB_RX_BUFF_SIZE 64
+ #define MCBA_USB_TX_BUFF_SIZE (sizeof(struct mcba_usb_msg))
+ 
+-/* MCBA endpoint numbers */
+-#define MCBA_USB_EP_IN 1
+-#define MCBA_USB_EP_OUT 1
+-
+ /* Microchip command id */
+ #define MBCA_CMD_RECEIVE_MESSAGE 0xE3
+ #define MBCA_CMD_I_AM_ALIVE_FROM_CAN 0xF5
+@@ -84,6 +80,8 @@ struct mcba_priv {
+ 	atomic_t free_ctx_cnt;
+ 	void *rxbuf[MCBA_MAX_RX_URBS];
+ 	dma_addr_t rxbuf_dma[MCBA_MAX_RX_URBS];
++	int rx_pipe;
++	int tx_pipe;
+ };
+ 
+ /* CAN frame */
+@@ -272,10 +270,8 @@ static netdev_tx_t mcba_usb_xmit(struct mcba_priv *priv,
+ 
+ 	memcpy(buf, usb_msg, MCBA_USB_TX_BUFF_SIZE);
+ 
+-	usb_fill_bulk_urb(urb, priv->udev,
+-			  usb_sndbulkpipe(priv->udev, MCBA_USB_EP_OUT), buf,
+-			  MCBA_USB_TX_BUFF_SIZE, mcba_usb_write_bulk_callback,
+-			  ctx);
++	usb_fill_bulk_urb(urb, priv->udev, priv->tx_pipe, buf, MCBA_USB_TX_BUFF_SIZE,
++			  mcba_usb_write_bulk_callback, ctx);
+ 
+ 	urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+ 	usb_anchor_urb(urb, &priv->tx_submitted);
+@@ -368,7 +364,6 @@ static netdev_tx_t mcba_usb_start_xmit(struct sk_buff *skb,
+ xmit_failed:
+ 	can_free_echo_skb(priv->netdev, ctx->ndx, NULL);
+ 	mcba_usb_free_ctx(ctx);
+-	dev_kfree_skb(skb);
+ 	stats->tx_dropped++;
+ 
+ 	return NETDEV_TX_OK;
+@@ -611,7 +606,7 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
+ resubmit_urb:
+ 
+ 	usb_fill_bulk_urb(urb, priv->udev,
+-			  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_OUT),
++			  priv->rx_pipe,
+ 			  urb->transfer_buffer, MCBA_USB_RX_BUFF_SIZE,
+ 			  mcba_usb_read_bulk_callback, priv);
+ 
+@@ -656,7 +651,7 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ 		urb->transfer_dma = buf_dma;
+ 
+ 		usb_fill_bulk_urb(urb, priv->udev,
+-				  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_IN),
++				  priv->rx_pipe,
+ 				  buf, MCBA_USB_RX_BUFF_SIZE,
+ 				  mcba_usb_read_bulk_callback, priv);
+ 		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+@@ -810,6 +805,13 @@ static int mcba_usb_probe(struct usb_interface *intf,
+ 	struct mcba_priv *priv;
+ 	int err;
+ 	struct usb_device *usbdev = interface_to_usbdev(intf);
++	struct usb_endpoint_descriptor *in, *out;
++
++	err = usb_find_common_endpoints(intf->cur_altsetting, &in, &out, NULL, NULL);
++	if (err) {
++		dev_err(&intf->dev, "Can't find endpoints\n");
++		return err;
++	}
+ 
+ 	netdev = alloc_candev(sizeof(struct mcba_priv), MCBA_MAX_TX_URBS);
+ 	if (!netdev) {
+@@ -855,6 +857,9 @@ static int mcba_usb_probe(struct usb_interface *intf,
+ 		goto cleanup_free_candev;
+ 	}
+ 
++	priv->rx_pipe = usb_rcvbulkpipe(priv->udev, in->bEndpointAddress);
++	priv->tx_pipe = usb_sndbulkpipe(priv->udev, out->bEndpointAddress);
++
+ 	devm_can_led_init(netdev);
+ 
+ 	/* Start USB dev only if we have successfully registered CAN device */
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index 040324362b260..614a342114e2f 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -668,9 +668,20 @@ static netdev_tx_t usb_8dev_start_xmit(struct sk_buff *skb,
+ 	atomic_inc(&priv->active_tx_urbs);
+ 
+ 	err = usb_submit_urb(urb, GFP_ATOMIC);
+-	if (unlikely(err))
+-		goto failed;
+-	else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
++	if (unlikely(err)) {
++		can_free_echo_skb(netdev, context->echo_index, NULL);
++
++		usb_unanchor_urb(urb);
++		usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
++
++		atomic_dec(&priv->active_tx_urbs);
++
++		if (err == -ENODEV)
++			netif_device_detach(netdev);
++		else
++			netdev_warn(netdev, "failed tx_urb %d\n", err);
++		stats->tx_dropped++;
++	} else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
+ 		/* Slow down tx path */
+ 		netif_stop_queue(netdev);
+ 
+@@ -689,19 +700,6 @@ nofreecontext:
+ 
+ 	return NETDEV_TX_BUSY;
+ 
+-failed:
+-	can_free_echo_skb(netdev, context->echo_index, NULL);
+-
+-	usb_unanchor_urb(urb);
+-	usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
+-
+-	atomic_dec(&priv->active_tx_urbs);
+-
+-	if (err == -ENODEV)
+-		netif_device_detach(netdev);
+-	else
+-		netdev_warn(netdev, "failed tx_urb %d\n", err);
+-
+ nomembuf:
+ 	usb_free_urb(urb);
+ 
+diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
+index 8861a7d875e7e..be5566168d0f3 100644
+--- a/drivers/net/can/vxcan.c
++++ b/drivers/net/can/vxcan.c
+@@ -148,7 +148,7 @@ static void vxcan_setup(struct net_device *dev)
+ 	dev->hard_header_len	= 0;
+ 	dev->addr_len		= 0;
+ 	dev->tx_queue_len	= 0;
+-	dev->flags		= (IFF_NOARP|IFF_ECHO);
++	dev->flags		= IFF_NOARP;
+ 	dev->netdev_ops		= &vxcan_netdev_ops;
+ 	dev->needs_free_netdev	= true;
+ 
+diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
+index 0029d279616fd..37a3dabdce313 100644
+--- a/drivers/net/dsa/Kconfig
++++ b/drivers/net/dsa/Kconfig
+@@ -68,17 +68,7 @@ config NET_DSA_QCA8K
+ 	  This enables support for the Qualcomm Atheros QCA8K Ethernet
+ 	  switch chips.
+ 
+-config NET_DSA_REALTEK_SMI
+-	tristate "Realtek SMI Ethernet switch family support"
+-	select NET_DSA_TAG_RTL4_A
+-	select NET_DSA_TAG_RTL8_4
+-	select FIXED_PHY
+-	select IRQ_DOMAIN
+-	select REALTEK_PHY
+-	select REGMAP
+-	help
+-	  This enables support for the Realtek SMI-based switch
+-	  chips, currently only RTL8366RB.
++source "drivers/net/dsa/realtek/Kconfig"
+ 
+ config NET_DSA_SMSC_LAN9303
+ 	tristate
+diff --git a/drivers/net/dsa/Makefile b/drivers/net/dsa/Makefile
+index 8da1569a34e6e..e73838c122560 100644
+--- a/drivers/net/dsa/Makefile
++++ b/drivers/net/dsa/Makefile
+@@ -9,8 +9,6 @@ obj-$(CONFIG_NET_DSA_LANTIQ_GSWIP) += lantiq_gswip.o
+ obj-$(CONFIG_NET_DSA_MT7530)	+= mt7530.o
+ obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o
+ obj-$(CONFIG_NET_DSA_QCA8K)	+= qca8k.o
+-obj-$(CONFIG_NET_DSA_REALTEK_SMI) += realtek-smi.o
+-realtek-smi-objs		:= realtek-smi-core.o rtl8366.o rtl8366rb.o rtl8365mb.o
+ obj-$(CONFIG_NET_DSA_SMSC_LAN9303) += lan9303-core.o
+ obj-$(CONFIG_NET_DSA_SMSC_LAN9303_I2C) += lan9303_i2c.o
+ obj-$(CONFIG_NET_DSA_SMSC_LAN9303_MDIO) += lan9303_mdio.o
+@@ -23,5 +21,6 @@ obj-y				+= microchip/
+ obj-y				+= mv88e6xxx/
+ obj-y				+= ocelot/
+ obj-y				+= qca/
++obj-y				+= realtek/
+ obj-y				+= sja1105/
+ obj-y				+= xrs700x/
+diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c
+index a7e2fcf2df2c9..edbe5e7f1cb6b 100644
+--- a/drivers/net/dsa/bcm_sf2_cfp.c
++++ b/drivers/net/dsa/bcm_sf2_cfp.c
+@@ -567,14 +567,14 @@ static void bcm_sf2_cfp_slice_ipv6(struct bcm_sf2_priv *priv,
+ static struct cfp_rule *bcm_sf2_cfp_rule_find(struct bcm_sf2_priv *priv,
+ 					      int port, u32 location)
+ {
+-	struct cfp_rule *rule = NULL;
++	struct cfp_rule *rule;
+ 
+ 	list_for_each_entry(rule, &priv->cfp.rules_list, next) {
+ 		if (rule->port == port && rule->fs.location == location)
+-			break;
++			return rule;
+ 	}
+ 
+-	return rule;
++	return NULL;
+ }
+ 
+ static int bcm_sf2_cfp_rule_cmp(struct bcm_sf2_priv *priv, int port,
+diff --git a/drivers/net/dsa/microchip/ksz8795_spi.c b/drivers/net/dsa/microchip/ksz8795_spi.c
+index 866767b70d65b..b0a7dee27ffc9 100644
+--- a/drivers/net/dsa/microchip/ksz8795_spi.c
++++ b/drivers/net/dsa/microchip/ksz8795_spi.c
+@@ -124,12 +124,23 @@ static const struct of_device_id ksz8795_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ksz8795_dt_ids);
+ 
++static const struct spi_device_id ksz8795_spi_ids[] = {
++	{ "ksz8765" },
++	{ "ksz8794" },
++	{ "ksz8795" },
++	{ "ksz8863" },
++	{ "ksz8873" },
++	{ },
++};
++MODULE_DEVICE_TABLE(spi, ksz8795_spi_ids);
++
+ static struct spi_driver ksz8795_spi_driver = {
+ 	.driver = {
+ 		.name	= "ksz8795-switch",
+ 		.owner	= THIS_MODULE,
+ 		.of_match_table = of_match_ptr(ksz8795_dt_ids),
+ 	},
++	.id_table = ksz8795_spi_ids,
+ 	.probe	= ksz8795_spi_probe,
+ 	.remove	= ksz8795_spi_remove,
+ 	.shutdown = ksz8795_spi_shutdown,
+diff --git a/drivers/net/dsa/microchip/ksz9477_spi.c b/drivers/net/dsa/microchip/ksz9477_spi.c
+index e3cb0e6c9f6f2..43addeabfc259 100644
+--- a/drivers/net/dsa/microchip/ksz9477_spi.c
++++ b/drivers/net/dsa/microchip/ksz9477_spi.c
+@@ -98,12 +98,24 @@ static const struct of_device_id ksz9477_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ksz9477_dt_ids);
+ 
++static const struct spi_device_id ksz9477_spi_ids[] = {
++	{ "ksz9477" },
++	{ "ksz9897" },
++	{ "ksz9893" },
++	{ "ksz9563" },
++	{ "ksz8563" },
++	{ "ksz9567" },
++	{ },
++};
++MODULE_DEVICE_TABLE(spi, ksz9477_spi_ids);
++
+ static struct spi_driver ksz9477_spi_driver = {
+ 	.driver = {
+ 		.name	= "ksz9477-switch",
+ 		.owner	= THIS_MODULE,
+ 		.of_match_table = of_match_ptr(ksz9477_dt_ids),
+ 	},
++	.id_table = ksz9477_spi_ids,
+ 	.probe	= ksz9477_spi_probe,
+ 	.remove	= ksz9477_spi_remove,
+ 	.shutdown = ksz9477_spi_shutdown,
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index ec8b02f5459d2..b420e0ef46e34 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3655,6 +3655,7 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
+ 	.port_sync_link = mv88e6185_port_sync_link,
+ 	.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
++	.port_set_policy = mv88e6352_port_set_policy,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
+ 	.port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
+diff --git a/drivers/net/dsa/realtek-smi-core.c b/drivers/net/dsa/realtek-smi-core.c
+deleted file mode 100644
+index c66ebd0ee2171..0000000000000
+--- a/drivers/net/dsa/realtek-smi-core.c
++++ /dev/null
+@@ -1,523 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/* Realtek Simple Management Interface (SMI) driver
+- * It can be discussed how "simple" this interface is.
+- *
+- * The SMI protocol piggy-backs the MDIO MDC and MDIO signals levels
+- * but the protocol is not MDIO at all. Instead it is a Realtek
+- * pecularity that need to bit-bang the lines in a special way to
+- * communicate with the switch.
+- *
+- * ASICs we intend to support with this driver:
+- *
+- * RTL8366   - The original version, apparently
+- * RTL8369   - Similar enough to have the same datsheet as RTL8366
+- * RTL8366RB - Probably reads out "RTL8366 revision B", has a quite
+- *             different register layout from the other two
+- * RTL8366S  - Is this "RTL8366 super"?
+- * RTL8367   - Has an OpenWRT driver as well
+- * RTL8368S  - Seems to be an alternative name for RTL8366RB
+- * RTL8370   - Also uses SMI
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
+- * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
+- * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- */
+-
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/device.h>
+-#include <linux/spinlock.h>
+-#include <linux/skbuff.h>
+-#include <linux/of.h>
+-#include <linux/of_device.h>
+-#include <linux/of_mdio.h>
+-#include <linux/delay.h>
+-#include <linux/gpio/consumer.h>
+-#include <linux/platform_device.h>
+-#include <linux/regmap.h>
+-#include <linux/bitops.h>
+-#include <linux/if_bridge.h>
+-
+-#include "realtek-smi-core.h"
+-
+-#define REALTEK_SMI_ACK_RETRY_COUNT		5
+-#define REALTEK_SMI_HW_STOP_DELAY		25	/* msecs */
+-#define REALTEK_SMI_HW_START_DELAY		100	/* msecs */
+-
+-static inline void realtek_smi_clk_delay(struct realtek_smi *smi)
+-{
+-	ndelay(smi->clk_delay);
+-}
+-
+-static void realtek_smi_start(struct realtek_smi *smi)
+-{
+-	/* Set GPIO pins to output mode, with initial state:
+-	 * SCK = 0, SDA = 1
+-	 */
+-	gpiod_direction_output(smi->mdc, 0);
+-	gpiod_direction_output(smi->mdio, 1);
+-	realtek_smi_clk_delay(smi);
+-
+-	/* CLK 1: 0 -> 1, 1 -> 0 */
+-	gpiod_set_value(smi->mdc, 1);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdc, 0);
+-	realtek_smi_clk_delay(smi);
+-
+-	/* CLK 2: */
+-	gpiod_set_value(smi->mdc, 1);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdio, 0);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdc, 0);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdio, 1);
+-}
+-
+-static void realtek_smi_stop(struct realtek_smi *smi)
+-{
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdio, 0);
+-	gpiod_set_value(smi->mdc, 1);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdio, 1);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdc, 1);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdc, 0);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdc, 1);
+-
+-	/* Add a click */
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdc, 0);
+-	realtek_smi_clk_delay(smi);
+-	gpiod_set_value(smi->mdc, 1);
+-
+-	/* Set GPIO pins to input mode */
+-	gpiod_direction_input(smi->mdio);
+-	gpiod_direction_input(smi->mdc);
+-}
+-
+-static void realtek_smi_write_bits(struct realtek_smi *smi, u32 data, u32 len)
+-{
+-	for (; len > 0; len--) {
+-		realtek_smi_clk_delay(smi);
+-
+-		/* Prepare data */
+-		gpiod_set_value(smi->mdio, !!(data & (1 << (len - 1))));
+-		realtek_smi_clk_delay(smi);
+-
+-		/* Clocking */
+-		gpiod_set_value(smi->mdc, 1);
+-		realtek_smi_clk_delay(smi);
+-		gpiod_set_value(smi->mdc, 0);
+-	}
+-}
+-
+-static void realtek_smi_read_bits(struct realtek_smi *smi, u32 len, u32 *data)
+-{
+-	gpiod_direction_input(smi->mdio);
+-
+-	for (*data = 0; len > 0; len--) {
+-		u32 u;
+-
+-		realtek_smi_clk_delay(smi);
+-
+-		/* Clocking */
+-		gpiod_set_value(smi->mdc, 1);
+-		realtek_smi_clk_delay(smi);
+-		u = !!gpiod_get_value(smi->mdio);
+-		gpiod_set_value(smi->mdc, 0);
+-
+-		*data |= (u << (len - 1));
+-	}
+-
+-	gpiod_direction_output(smi->mdio, 0);
+-}
+-
+-static int realtek_smi_wait_for_ack(struct realtek_smi *smi)
+-{
+-	int retry_cnt;
+-
+-	retry_cnt = 0;
+-	do {
+-		u32 ack;
+-
+-		realtek_smi_read_bits(smi, 1, &ack);
+-		if (ack == 0)
+-			break;
+-
+-		if (++retry_cnt > REALTEK_SMI_ACK_RETRY_COUNT) {
+-			dev_err(smi->dev, "ACK timeout\n");
+-			return -ETIMEDOUT;
+-		}
+-	} while (1);
+-
+-	return 0;
+-}
+-
+-static int realtek_smi_write_byte(struct realtek_smi *smi, u8 data)
+-{
+-	realtek_smi_write_bits(smi, data, 8);
+-	return realtek_smi_wait_for_ack(smi);
+-}
+-
+-static int realtek_smi_write_byte_noack(struct realtek_smi *smi, u8 data)
+-{
+-	realtek_smi_write_bits(smi, data, 8);
+-	return 0;
+-}
+-
+-static int realtek_smi_read_byte0(struct realtek_smi *smi, u8 *data)
+-{
+-	u32 t;
+-
+-	/* Read data */
+-	realtek_smi_read_bits(smi, 8, &t);
+-	*data = (t & 0xff);
+-
+-	/* Send an ACK */
+-	realtek_smi_write_bits(smi, 0x00, 1);
+-
+-	return 0;
+-}
+-
+-static int realtek_smi_read_byte1(struct realtek_smi *smi, u8 *data)
+-{
+-	u32 t;
+-
+-	/* Read data */
+-	realtek_smi_read_bits(smi, 8, &t);
+-	*data = (t & 0xff);
+-
+-	/* Send an ACK */
+-	realtek_smi_write_bits(smi, 0x01, 1);
+-
+-	return 0;
+-}
+-
+-static int realtek_smi_read_reg(struct realtek_smi *smi, u32 addr, u32 *data)
+-{
+-	unsigned long flags;
+-	u8 lo = 0;
+-	u8 hi = 0;
+-	int ret;
+-
+-	spin_lock_irqsave(&smi->lock, flags);
+-
+-	realtek_smi_start(smi);
+-
+-	/* Send READ command */
+-	ret = realtek_smi_write_byte(smi, smi->cmd_read);
+-	if (ret)
+-		goto out;
+-
+-	/* Set ADDR[7:0] */
+-	ret = realtek_smi_write_byte(smi, addr & 0xff);
+-	if (ret)
+-		goto out;
+-
+-	/* Set ADDR[15:8] */
+-	ret = realtek_smi_write_byte(smi, addr >> 8);
+-	if (ret)
+-		goto out;
+-
+-	/* Read DATA[7:0] */
+-	realtek_smi_read_byte0(smi, &lo);
+-	/* Read DATA[15:8] */
+-	realtek_smi_read_byte1(smi, &hi);
+-
+-	*data = ((u32)lo) | (((u32)hi) << 8);
+-
+-	ret = 0;
+-
+- out:
+-	realtek_smi_stop(smi);
+-	spin_unlock_irqrestore(&smi->lock, flags);
+-
+-	return ret;
+-}
+-
+-static int realtek_smi_write_reg(struct realtek_smi *smi,
+-				 u32 addr, u32 data, bool ack)
+-{
+-	unsigned long flags;
+-	int ret;
+-
+-	spin_lock_irqsave(&smi->lock, flags);
+-
+-	realtek_smi_start(smi);
+-
+-	/* Send WRITE command */
+-	ret = realtek_smi_write_byte(smi, smi->cmd_write);
+-	if (ret)
+-		goto out;
+-
+-	/* Set ADDR[7:0] */
+-	ret = realtek_smi_write_byte(smi, addr & 0xff);
+-	if (ret)
+-		goto out;
+-
+-	/* Set ADDR[15:8] */
+-	ret = realtek_smi_write_byte(smi, addr >> 8);
+-	if (ret)
+-		goto out;
+-
+-	/* Write DATA[7:0] */
+-	ret = realtek_smi_write_byte(smi, data & 0xff);
+-	if (ret)
+-		goto out;
+-
+-	/* Write DATA[15:8] */
+-	if (ack)
+-		ret = realtek_smi_write_byte(smi, data >> 8);
+-	else
+-		ret = realtek_smi_write_byte_noack(smi, data >> 8);
+-	if (ret)
+-		goto out;
+-
+-	ret = 0;
+-
+- out:
+-	realtek_smi_stop(smi);
+-	spin_unlock_irqrestore(&smi->lock, flags);
+-
+-	return ret;
+-}
+-
+-/* There is one single case when we need to use this accessor and that
+- * is when issueing soft reset. Since the device reset as soon as we write
+- * that bit, no ACK will come back for natural reasons.
+- */
+-int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
+-				u32 data)
+-{
+-	return realtek_smi_write_reg(smi, addr, data, false);
+-}
+-EXPORT_SYMBOL_GPL(realtek_smi_write_reg_noack);
+-
+-/* Regmap accessors */
+-
+-static int realtek_smi_write(void *ctx, u32 reg, u32 val)
+-{
+-	struct realtek_smi *smi = ctx;
+-
+-	return realtek_smi_write_reg(smi, reg, val, true);
+-}
+-
+-static int realtek_smi_read(void *ctx, u32 reg, u32 *val)
+-{
+-	struct realtek_smi *smi = ctx;
+-
+-	return realtek_smi_read_reg(smi, reg, val);
+-}
+-
+-static const struct regmap_config realtek_smi_mdio_regmap_config = {
+-	.reg_bits = 10, /* A4..A0 R4..R0 */
+-	.val_bits = 16,
+-	.reg_stride = 1,
+-	/* PHY regs are at 0x8000 */
+-	.max_register = 0xffff,
+-	.reg_format_endian = REGMAP_ENDIAN_BIG,
+-	.reg_read = realtek_smi_read,
+-	.reg_write = realtek_smi_write,
+-	.cache_type = REGCACHE_NONE,
+-};
+-
+-static int realtek_smi_mdio_read(struct mii_bus *bus, int addr, int regnum)
+-{
+-	struct realtek_smi *smi = bus->priv;
+-
+-	return smi->ops->phy_read(smi, addr, regnum);
+-}
+-
+-static int realtek_smi_mdio_write(struct mii_bus *bus, int addr, int regnum,
+-				  u16 val)
+-{
+-	struct realtek_smi *smi = bus->priv;
+-
+-	return smi->ops->phy_write(smi, addr, regnum, val);
+-}
+-
+-int realtek_smi_setup_mdio(struct realtek_smi *smi)
+-{
+-	struct device_node *mdio_np;
+-	int ret;
+-
+-	mdio_np = of_get_compatible_child(smi->dev->of_node, "realtek,smi-mdio");
+-	if (!mdio_np) {
+-		dev_err(smi->dev, "no MDIO bus node\n");
+-		return -ENODEV;
+-	}
+-
+-	smi->slave_mii_bus = devm_mdiobus_alloc(smi->dev);
+-	if (!smi->slave_mii_bus) {
+-		ret = -ENOMEM;
+-		goto err_put_node;
+-	}
+-	smi->slave_mii_bus->priv = smi;
+-	smi->slave_mii_bus->name = "SMI slave MII";
+-	smi->slave_mii_bus->read = realtek_smi_mdio_read;
+-	smi->slave_mii_bus->write = realtek_smi_mdio_write;
+-	snprintf(smi->slave_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d",
+-		 smi->ds->index);
+-	smi->slave_mii_bus->dev.of_node = mdio_np;
+-	smi->slave_mii_bus->parent = smi->dev;
+-	smi->ds->slave_mii_bus = smi->slave_mii_bus;
+-
+-	ret = devm_of_mdiobus_register(smi->dev, smi->slave_mii_bus, mdio_np);
+-	if (ret) {
+-		dev_err(smi->dev, "unable to register MDIO bus %s\n",
+-			smi->slave_mii_bus->id);
+-		goto err_put_node;
+-	}
+-
+-	return 0;
+-
+-err_put_node:
+-	of_node_put(mdio_np);
+-
+-	return ret;
+-}
+-
+-static int realtek_smi_probe(struct platform_device *pdev)
+-{
+-	const struct realtek_smi_variant *var;
+-	struct device *dev = &pdev->dev;
+-	struct realtek_smi *smi;
+-	struct device_node *np;
+-	int ret;
+-
+-	var = of_device_get_match_data(dev);
+-	np = dev->of_node;
+-
+-	smi = devm_kzalloc(dev, sizeof(*smi) + var->chip_data_sz, GFP_KERNEL);
+-	if (!smi)
+-		return -ENOMEM;
+-	smi->chip_data = (void *)smi + sizeof(*smi);
+-	smi->map = devm_regmap_init(dev, NULL, smi,
+-				    &realtek_smi_mdio_regmap_config);
+-	if (IS_ERR(smi->map)) {
+-		ret = PTR_ERR(smi->map);
+-		dev_err(dev, "regmap init failed: %d\n", ret);
+-		return ret;
+-	}
+-
+-	/* Link forward and backward */
+-	smi->dev = dev;
+-	smi->clk_delay = var->clk_delay;
+-	smi->cmd_read = var->cmd_read;
+-	smi->cmd_write = var->cmd_write;
+-	smi->ops = var->ops;
+-
+-	dev_set_drvdata(dev, smi);
+-	spin_lock_init(&smi->lock);
+-
+-	/* TODO: if power is software controlled, set up any regulators here */
+-
+-	/* Assert then deassert RESET */
+-	smi->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
+-	if (IS_ERR(smi->reset)) {
+-		dev_err(dev, "failed to get RESET GPIO\n");
+-		return PTR_ERR(smi->reset);
+-	}
+-	msleep(REALTEK_SMI_HW_STOP_DELAY);
+-	gpiod_set_value(smi->reset, 0);
+-	msleep(REALTEK_SMI_HW_START_DELAY);
+-	dev_info(dev, "deasserted RESET\n");
+-
+-	/* Fetch MDIO pins */
+-	smi->mdc = devm_gpiod_get_optional(dev, "mdc", GPIOD_OUT_LOW);
+-	if (IS_ERR(smi->mdc))
+-		return PTR_ERR(smi->mdc);
+-	smi->mdio = devm_gpiod_get_optional(dev, "mdio", GPIOD_OUT_LOW);
+-	if (IS_ERR(smi->mdio))
+-		return PTR_ERR(smi->mdio);
+-
+-	smi->leds_disabled = of_property_read_bool(np, "realtek,disable-leds");
+-
+-	ret = smi->ops->detect(smi);
+-	if (ret) {
+-		dev_err(dev, "unable to detect switch\n");
+-		return ret;
+-	}
+-
+-	smi->ds = devm_kzalloc(dev, sizeof(*smi->ds), GFP_KERNEL);
+-	if (!smi->ds)
+-		return -ENOMEM;
+-
+-	smi->ds->dev = dev;
+-	smi->ds->num_ports = smi->num_ports;
+-	smi->ds->priv = smi;
+-
+-	smi->ds->ops = var->ds_ops;
+-	ret = dsa_register_switch(smi->ds);
+-	if (ret) {
+-		dev_err(dev, "unable to register switch ret = %d\n", ret);
+-		return ret;
+-	}
+-	return 0;
+-}
+-
+-static int realtek_smi_remove(struct platform_device *pdev)
+-{
+-	struct realtek_smi *smi = platform_get_drvdata(pdev);
+-
+-	if (!smi)
+-		return 0;
+-
+-	dsa_unregister_switch(smi->ds);
+-	if (smi->slave_mii_bus)
+-		of_node_put(smi->slave_mii_bus->dev.of_node);
+-	gpiod_set_value(smi->reset, 1);
+-
+-	platform_set_drvdata(pdev, NULL);
+-
+-	return 0;
+-}
+-
+-static void realtek_smi_shutdown(struct platform_device *pdev)
+-{
+-	struct realtek_smi *smi = platform_get_drvdata(pdev);
+-
+-	if (!smi)
+-		return;
+-
+-	dsa_switch_shutdown(smi->ds);
+-
+-	platform_set_drvdata(pdev, NULL);
+-}
+-
+-static const struct of_device_id realtek_smi_of_match[] = {
+-	{
+-		.compatible = "realtek,rtl8366rb",
+-		.data = &rtl8366rb_variant,
+-	},
+-	{
+-		/* FIXME: add support for RTL8366S and more */
+-		.compatible = "realtek,rtl8366s",
+-		.data = NULL,
+-	},
+-	{
+-		.compatible = "realtek,rtl8365mb",
+-		.data = &rtl8365mb_variant,
+-	},
+-	{ /* sentinel */ },
+-};
+-MODULE_DEVICE_TABLE(of, realtek_smi_of_match);
+-
+-static struct platform_driver realtek_smi_driver = {
+-	.driver = {
+-		.name = "realtek-smi",
+-		.of_match_table = of_match_ptr(realtek_smi_of_match),
+-	},
+-	.probe  = realtek_smi_probe,
+-	.remove = realtek_smi_remove,
+-	.shutdown = realtek_smi_shutdown,
+-};
+-module_platform_driver(realtek_smi_driver);
+-
+-MODULE_LICENSE("GPL");
+diff --git a/drivers/net/dsa/realtek-smi-core.h b/drivers/net/dsa/realtek-smi-core.h
+deleted file mode 100644
+index 5bfa53e2480ae..0000000000000
+--- a/drivers/net/dsa/realtek-smi-core.h
++++ /dev/null
+@@ -1,145 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0+ */
+-/* Realtek SMI interface driver defines
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- */
+-
+-#ifndef _REALTEK_SMI_H
+-#define _REALTEK_SMI_H
+-
+-#include <linux/phy.h>
+-#include <linux/platform_device.h>
+-#include <linux/gpio/consumer.h>
+-#include <net/dsa.h>
+-
+-struct realtek_smi_ops;
+-struct dentry;
+-struct inode;
+-struct file;
+-
+-struct rtl8366_mib_counter {
+-	unsigned int	base;
+-	unsigned int	offset;
+-	unsigned int	length;
+-	const char	*name;
+-};
+-
+-/**
+- * struct rtl8366_vlan_mc - Virtual LAN member configuration
+- */
+-struct rtl8366_vlan_mc {
+-	u16	vid;
+-	u16	untag;
+-	u16	member;
+-	u8	fid;
+-	u8	priority;
+-};
+-
+-struct rtl8366_vlan_4k {
+-	u16	vid;
+-	u16	untag;
+-	u16	member;
+-	u8	fid;
+-};
+-
+-struct realtek_smi {
+-	struct device		*dev;
+-	struct gpio_desc	*reset;
+-	struct gpio_desc	*mdc;
+-	struct gpio_desc	*mdio;
+-	struct regmap		*map;
+-	struct mii_bus		*slave_mii_bus;
+-
+-	unsigned int		clk_delay;
+-	u8			cmd_read;
+-	u8			cmd_write;
+-	spinlock_t		lock; /* Locks around command writes */
+-	struct dsa_switch	*ds;
+-	struct irq_domain	*irqdomain;
+-	bool			leds_disabled;
+-
+-	unsigned int		cpu_port;
+-	unsigned int		num_ports;
+-	unsigned int		num_vlan_mc;
+-	unsigned int		num_mib_counters;
+-	struct rtl8366_mib_counter *mib_counters;
+-
+-	const struct realtek_smi_ops *ops;
+-
+-	int			vlan_enabled;
+-	int			vlan4k_enabled;
+-
+-	char			buf[4096];
+-	void			*chip_data; /* Per-chip extra variant data */
+-};
+-
+-/**
+- * struct realtek_smi_ops - vtable for the per-SMI-chiptype operations
+- * @detect: detects the chiptype
+- */
+-struct realtek_smi_ops {
+-	int	(*detect)(struct realtek_smi *smi);
+-	int	(*reset_chip)(struct realtek_smi *smi);
+-	int	(*setup)(struct realtek_smi *smi);
+-	void	(*cleanup)(struct realtek_smi *smi);
+-	int	(*get_mib_counter)(struct realtek_smi *smi,
+-				   int port,
+-				   struct rtl8366_mib_counter *mib,
+-				   u64 *mibvalue);
+-	int	(*get_vlan_mc)(struct realtek_smi *smi, u32 index,
+-			       struct rtl8366_vlan_mc *vlanmc);
+-	int	(*set_vlan_mc)(struct realtek_smi *smi, u32 index,
+-			       const struct rtl8366_vlan_mc *vlanmc);
+-	int	(*get_vlan_4k)(struct realtek_smi *smi, u32 vid,
+-			       struct rtl8366_vlan_4k *vlan4k);
+-	int	(*set_vlan_4k)(struct realtek_smi *smi,
+-			       const struct rtl8366_vlan_4k *vlan4k);
+-	int	(*get_mc_index)(struct realtek_smi *smi, int port, int *val);
+-	int	(*set_mc_index)(struct realtek_smi *smi, int port, int index);
+-	bool	(*is_vlan_valid)(struct realtek_smi *smi, unsigned int vlan);
+-	int	(*enable_vlan)(struct realtek_smi *smi, bool enable);
+-	int	(*enable_vlan4k)(struct realtek_smi *smi, bool enable);
+-	int	(*enable_port)(struct realtek_smi *smi, int port, bool enable);
+-	int	(*phy_read)(struct realtek_smi *smi, int phy, int regnum);
+-	int	(*phy_write)(struct realtek_smi *smi, int phy, int regnum,
+-			     u16 val);
+-};
+-
+-struct realtek_smi_variant {
+-	const struct dsa_switch_ops *ds_ops;
+-	const struct realtek_smi_ops *ops;
+-	unsigned int clk_delay;
+-	u8 cmd_read;
+-	u8 cmd_write;
+-	size_t chip_data_sz;
+-};
+-
+-/* SMI core calls */
+-int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
+-				u32 data);
+-int realtek_smi_setup_mdio(struct realtek_smi *smi);
+-
+-/* RTL8366 library helpers */
+-int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used);
+-int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+-		     u32 untag, u32 fid);
+-int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
+-		     unsigned int vid);
+-int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable);
+-int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable);
+-int rtl8366_reset_vlan(struct realtek_smi *smi);
+-int rtl8366_vlan_add(struct dsa_switch *ds, int port,
+-		     const struct switchdev_obj_port_vlan *vlan,
+-		     struct netlink_ext_ack *extack);
+-int rtl8366_vlan_del(struct dsa_switch *ds, int port,
+-		     const struct switchdev_obj_port_vlan *vlan);
+-void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+-			 uint8_t *data);
+-int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset);
+-void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data);
+-
+-extern const struct realtek_smi_variant rtl8366rb_variant;
+-extern const struct realtek_smi_variant rtl8365mb_variant;
+-
+-#endif /*  _REALTEK_SMI_H */
+diff --git a/drivers/net/dsa/realtek/Kconfig b/drivers/net/dsa/realtek/Kconfig
+new file mode 100644
+index 0000000000000..1c62212fb0ecb
+--- /dev/null
++++ b/drivers/net/dsa/realtek/Kconfig
+@@ -0,0 +1,20 @@
++# SPDX-License-Identifier: GPL-2.0-only
++menuconfig NET_DSA_REALTEK
++	tristate "Realtek Ethernet switch family support"
++	depends on NET_DSA
++	select NET_DSA_TAG_RTL4_A
++	select NET_DSA_TAG_RTL8_4
++	select FIXED_PHY
++	select IRQ_DOMAIN
++	select REALTEK_PHY
++	select REGMAP
++	help
++	  Select to enable support for Realtek Ethernet switch chips.
++
++config NET_DSA_REALTEK_SMI
++	tristate "Realtek SMI connected switch driver"
++	depends on NET_DSA_REALTEK
++	default y
++	help
++	  Select to enable support for registering switches connected
++	  through SMI.
+diff --git a/drivers/net/dsa/realtek/Makefile b/drivers/net/dsa/realtek/Makefile
+new file mode 100644
+index 0000000000000..323b921bfce0f
+--- /dev/null
++++ b/drivers/net/dsa/realtek/Makefile
+@@ -0,0 +1,3 @@
++# SPDX-License-Identifier: GPL-2.0
++obj-$(CONFIG_NET_DSA_REALTEK_SMI) 	+= realtek-smi.o
++realtek-smi-objs			:= realtek-smi-core.o rtl8366.o rtl8366rb.o rtl8365mb.o
+diff --git a/drivers/net/dsa/realtek/realtek-smi-core.c b/drivers/net/dsa/realtek/realtek-smi-core.c
+new file mode 100644
+index 0000000000000..c66ebd0ee2171
+--- /dev/null
++++ b/drivers/net/dsa/realtek/realtek-smi-core.c
+@@ -0,0 +1,523 @@
++// SPDX-License-Identifier: GPL-2.0+
++/* Realtek Simple Management Interface (SMI) driver
++ * It can be discussed how "simple" this interface is.
++ *
++ * The SMI protocol piggy-backs on the MDIO MDC and MDIO signal levels,
++ * but the protocol is not MDIO at all. Instead it is a Realtek
++ * peculiarity that needs to bit-bang the lines in a special way to
++ * communicate with the switch.
++ *
++ * ASICs we intend to support with this driver:
++ *
++ * RTL8366   - The original version, apparently
++ * RTL8369   - Similar enough to have the same datasheet as RTL8366
++ * RTL8366RB - Probably reads out "RTL8366 revision B", has a quite
++ *             different register layout from the other two
++ * RTL8366S  - Is this "RTL8366 super"?
++ * RTL8367   - Has an OpenWRT driver as well
++ * RTL8368S  - Seems to be an alternative name for RTL8366RB
++ * RTL8370   - Also uses SMI
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
++ * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
++ * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/device.h>
++#include <linux/spinlock.h>
++#include <linux/skbuff.h>
++#include <linux/of.h>
++#include <linux/of_device.h>
++#include <linux/of_mdio.h>
++#include <linux/delay.h>
++#include <linux/gpio/consumer.h>
++#include <linux/platform_device.h>
++#include <linux/regmap.h>
++#include <linux/bitops.h>
++#include <linux/if_bridge.h>
++
++#include "realtek-smi-core.h"
++
++#define REALTEK_SMI_ACK_RETRY_COUNT		5
++#define REALTEK_SMI_HW_STOP_DELAY		25	/* msecs */
++#define REALTEK_SMI_HW_START_DELAY		100	/* msecs */
++
++static inline void realtek_smi_clk_delay(struct realtek_smi *smi)
++{
++	ndelay(smi->clk_delay);
++}
++
++static void realtek_smi_start(struct realtek_smi *smi)
++{
++	/* Set GPIO pins to output mode, with initial state:
++	 * SCK = 0, SDA = 1
++	 */
++	gpiod_direction_output(smi->mdc, 0);
++	gpiod_direction_output(smi->mdio, 1);
++	realtek_smi_clk_delay(smi);
++
++	/* CLK 1: 0 -> 1, 1 -> 0 */
++	gpiod_set_value(smi->mdc, 1);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdc, 0);
++	realtek_smi_clk_delay(smi);
++
++	/* CLK 2: */
++	gpiod_set_value(smi->mdc, 1);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdio, 0);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdc, 0);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdio, 1);
++}
++
++static void realtek_smi_stop(struct realtek_smi *smi)
++{
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdio, 0);
++	gpiod_set_value(smi->mdc, 1);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdio, 1);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdc, 1);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdc, 0);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdc, 1);
++
++	/* Add a click */
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdc, 0);
++	realtek_smi_clk_delay(smi);
++	gpiod_set_value(smi->mdc, 1);
++
++	/* Set GPIO pins to input mode */
++	gpiod_direction_input(smi->mdio);
++	gpiod_direction_input(smi->mdc);
++}
++
++static void realtek_smi_write_bits(struct realtek_smi *smi, u32 data, u32 len)
++{
++	for (; len > 0; len--) {
++		realtek_smi_clk_delay(smi);
++
++		/* Prepare data */
++		gpiod_set_value(smi->mdio, !!(data & (1 << (len - 1))));
++		realtek_smi_clk_delay(smi);
++
++		/* Clocking */
++		gpiod_set_value(smi->mdc, 1);
++		realtek_smi_clk_delay(smi);
++		gpiod_set_value(smi->mdc, 0);
++	}
++}
++
++static void realtek_smi_read_bits(struct realtek_smi *smi, u32 len, u32 *data)
++{
++	gpiod_direction_input(smi->mdio);
++
++	for (*data = 0; len > 0; len--) {
++		u32 u;
++
++		realtek_smi_clk_delay(smi);
++
++		/* Clocking */
++		gpiod_set_value(smi->mdc, 1);
++		realtek_smi_clk_delay(smi);
++		u = !!gpiod_get_value(smi->mdio);
++		gpiod_set_value(smi->mdc, 0);
++
++		*data |= (u << (len - 1));
++	}
++
++	gpiod_direction_output(smi->mdio, 0);
++}
++
++static int realtek_smi_wait_for_ack(struct realtek_smi *smi)
++{
++	int retry_cnt;
++
++	retry_cnt = 0;
++	do {
++		u32 ack;
++
++		realtek_smi_read_bits(smi, 1, &ack);
++		if (ack == 0)
++			break;
++
++		if (++retry_cnt > REALTEK_SMI_ACK_RETRY_COUNT) {
++			dev_err(smi->dev, "ACK timeout\n");
++			return -ETIMEDOUT;
++		}
++	} while (1);
++
++	return 0;
++}
++
++static int realtek_smi_write_byte(struct realtek_smi *smi, u8 data)
++{
++	realtek_smi_write_bits(smi, data, 8);
++	return realtek_smi_wait_for_ack(smi);
++}
++
++static int realtek_smi_write_byte_noack(struct realtek_smi *smi, u8 data)
++{
++	realtek_smi_write_bits(smi, data, 8);
++	return 0;
++}
++
++static int realtek_smi_read_byte0(struct realtek_smi *smi, u8 *data)
++{
++	u32 t;
++
++	/* Read data */
++	realtek_smi_read_bits(smi, 8, &t);
++	*data = (t & 0xff);
++
++	/* Send an ACK */
++	realtek_smi_write_bits(smi, 0x00, 1);
++
++	return 0;
++}
++
++static int realtek_smi_read_byte1(struct realtek_smi *smi, u8 *data)
++{
++	u32 t;
++
++	/* Read data */
++	realtek_smi_read_bits(smi, 8, &t);
++	*data = (t & 0xff);
++
++	/* Send an ACK */
++	realtek_smi_write_bits(smi, 0x01, 1);
++
++	return 0;
++}
++
++static int realtek_smi_read_reg(struct realtek_smi *smi, u32 addr, u32 *data)
++{
++	unsigned long flags;
++	u8 lo = 0;
++	u8 hi = 0;
++	int ret;
++
++	spin_lock_irqsave(&smi->lock, flags);
++
++	realtek_smi_start(smi);
++
++	/* Send READ command */
++	ret = realtek_smi_write_byte(smi, smi->cmd_read);
++	if (ret)
++		goto out;
++
++	/* Set ADDR[7:0] */
++	ret = realtek_smi_write_byte(smi, addr & 0xff);
++	if (ret)
++		goto out;
++
++	/* Set ADDR[15:8] */
++	ret = realtek_smi_write_byte(smi, addr >> 8);
++	if (ret)
++		goto out;
++
++	/* Read DATA[7:0] */
++	realtek_smi_read_byte0(smi, &lo);
++	/* Read DATA[15:8] */
++	realtek_smi_read_byte1(smi, &hi);
++
++	*data = ((u32)lo) | (((u32)hi) << 8);
++
++	ret = 0;
++
++ out:
++	realtek_smi_stop(smi);
++	spin_unlock_irqrestore(&smi->lock, flags);
++
++	return ret;
++}
++
++static int realtek_smi_write_reg(struct realtek_smi *smi,
++				 u32 addr, u32 data, bool ack)
++{
++	unsigned long flags;
++	int ret;
++
++	spin_lock_irqsave(&smi->lock, flags);
++
++	realtek_smi_start(smi);
++
++	/* Send WRITE command */
++	ret = realtek_smi_write_byte(smi, smi->cmd_write);
++	if (ret)
++		goto out;
++
++	/* Set ADDR[7:0] */
++	ret = realtek_smi_write_byte(smi, addr & 0xff);
++	if (ret)
++		goto out;
++
++	/* Set ADDR[15:8] */
++	ret = realtek_smi_write_byte(smi, addr >> 8);
++	if (ret)
++		goto out;
++
++	/* Write DATA[7:0] */
++	ret = realtek_smi_write_byte(smi, data & 0xff);
++	if (ret)
++		goto out;
++
++	/* Write DATA[15:8] */
++	if (ack)
++		ret = realtek_smi_write_byte(smi, data >> 8);
++	else
++		ret = realtek_smi_write_byte_noack(smi, data >> 8);
++	if (ret)
++		goto out;
++
++	ret = 0;
++
++ out:
++	realtek_smi_stop(smi);
++	spin_unlock_irqrestore(&smi->lock, flags);
++
++	return ret;
++}
++
++/* There is one single case when we need to use this accessor and that
++ * is when issuing a soft reset. Since the device resets as soon as we
++ * write that bit, no ACK will come back for natural reasons.
++ */
++int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
++				u32 data)
++{
++	return realtek_smi_write_reg(smi, addr, data, false);
++}
++EXPORT_SYMBOL_GPL(realtek_smi_write_reg_noack);
++
++/* Regmap accessors */
++
++static int realtek_smi_write(void *ctx, u32 reg, u32 val)
++{
++	struct realtek_smi *smi = ctx;
++
++	return realtek_smi_write_reg(smi, reg, val, true);
++}
++
++static int realtek_smi_read(void *ctx, u32 reg, u32 *val)
++{
++	struct realtek_smi *smi = ctx;
++
++	return realtek_smi_read_reg(smi, reg, val);
++}
++
++static const struct regmap_config realtek_smi_mdio_regmap_config = {
++	.reg_bits = 10, /* A4..A0 R4..R0 */
++	.val_bits = 16,
++	.reg_stride = 1,
++	/* PHY regs are at 0x8000 */
++	.max_register = 0xffff,
++	.reg_format_endian = REGMAP_ENDIAN_BIG,
++	.reg_read = realtek_smi_read,
++	.reg_write = realtek_smi_write,
++	.cache_type = REGCACHE_NONE,
++};
++
++static int realtek_smi_mdio_read(struct mii_bus *bus, int addr, int regnum)
++{
++	struct realtek_smi *smi = bus->priv;
++
++	return smi->ops->phy_read(smi, addr, regnum);
++}
++
++static int realtek_smi_mdio_write(struct mii_bus *bus, int addr, int regnum,
++				  u16 val)
++{
++	struct realtek_smi *smi = bus->priv;
++
++	return smi->ops->phy_write(smi, addr, regnum, val);
++}
++
++int realtek_smi_setup_mdio(struct realtek_smi *smi)
++{
++	struct device_node *mdio_np;
++	int ret;
++
++	mdio_np = of_get_compatible_child(smi->dev->of_node, "realtek,smi-mdio");
++	if (!mdio_np) {
++		dev_err(smi->dev, "no MDIO bus node\n");
++		return -ENODEV;
++	}
++
++	smi->slave_mii_bus = devm_mdiobus_alloc(smi->dev);
++	if (!smi->slave_mii_bus) {
++		ret = -ENOMEM;
++		goto err_put_node;
++	}
++	smi->slave_mii_bus->priv = smi;
++	smi->slave_mii_bus->name = "SMI slave MII";
++	smi->slave_mii_bus->read = realtek_smi_mdio_read;
++	smi->slave_mii_bus->write = realtek_smi_mdio_write;
++	snprintf(smi->slave_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d",
++		 smi->ds->index);
++	smi->slave_mii_bus->dev.of_node = mdio_np;
++	smi->slave_mii_bus->parent = smi->dev;
++	smi->ds->slave_mii_bus = smi->slave_mii_bus;
++
++	ret = devm_of_mdiobus_register(smi->dev, smi->slave_mii_bus, mdio_np);
++	if (ret) {
++		dev_err(smi->dev, "unable to register MDIO bus %s\n",
++			smi->slave_mii_bus->id);
++		goto err_put_node;
++	}
++
++	return 0;
++
++err_put_node:
++	of_node_put(mdio_np);
++
++	return ret;
++}
++
++static int realtek_smi_probe(struct platform_device *pdev)
++{
++	const struct realtek_smi_variant *var;
++	struct device *dev = &pdev->dev;
++	struct realtek_smi *smi;
++	struct device_node *np;
++	int ret;
++
++	var = of_device_get_match_data(dev);
++	np = dev->of_node;
++
++	smi = devm_kzalloc(dev, sizeof(*smi) + var->chip_data_sz, GFP_KERNEL);
++	if (!smi)
++		return -ENOMEM;
++	smi->chip_data = (void *)smi + sizeof(*smi);
++	smi->map = devm_regmap_init(dev, NULL, smi,
++				    &realtek_smi_mdio_regmap_config);
++	if (IS_ERR(smi->map)) {
++		ret = PTR_ERR(smi->map);
++		dev_err(dev, "regmap init failed: %d\n", ret);
++		return ret;
++	}
++
++	/* Link forward and backward */
++	smi->dev = dev;
++	smi->clk_delay = var->clk_delay;
++	smi->cmd_read = var->cmd_read;
++	smi->cmd_write = var->cmd_write;
++	smi->ops = var->ops;
++
++	dev_set_drvdata(dev, smi);
++	spin_lock_init(&smi->lock);
++
++	/* TODO: if power is software controlled, set up any regulators here */
++
++	/* Assert then deassert RESET */
++	smi->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
++	if (IS_ERR(smi->reset)) {
++		dev_err(dev, "failed to get RESET GPIO\n");
++		return PTR_ERR(smi->reset);
++	}
++	msleep(REALTEK_SMI_HW_STOP_DELAY);
++	gpiod_set_value(smi->reset, 0);
++	msleep(REALTEK_SMI_HW_START_DELAY);
++	dev_info(dev, "deasserted RESET\n");
++
++	/* Fetch MDIO pins */
++	smi->mdc = devm_gpiod_get_optional(dev, "mdc", GPIOD_OUT_LOW);
++	if (IS_ERR(smi->mdc))
++		return PTR_ERR(smi->mdc);
++	smi->mdio = devm_gpiod_get_optional(dev, "mdio", GPIOD_OUT_LOW);
++	if (IS_ERR(smi->mdio))
++		return PTR_ERR(smi->mdio);
++
++	smi->leds_disabled = of_property_read_bool(np, "realtek,disable-leds");
++
++	ret = smi->ops->detect(smi);
++	if (ret) {
++		dev_err(dev, "unable to detect switch\n");
++		return ret;
++	}
++
++	smi->ds = devm_kzalloc(dev, sizeof(*smi->ds), GFP_KERNEL);
++	if (!smi->ds)
++		return -ENOMEM;
++
++	smi->ds->dev = dev;
++	smi->ds->num_ports = smi->num_ports;
++	smi->ds->priv = smi;
++
++	smi->ds->ops = var->ds_ops;
++	ret = dsa_register_switch(smi->ds);
++	if (ret) {
++		dev_err(dev, "unable to register switch ret = %d\n", ret);
++		return ret;
++	}
++	return 0;
++}
++
++static int realtek_smi_remove(struct platform_device *pdev)
++{
++	struct realtek_smi *smi = platform_get_drvdata(pdev);
++
++	if (!smi)
++		return 0;
++
++	dsa_unregister_switch(smi->ds);
++	if (smi->slave_mii_bus)
++		of_node_put(smi->slave_mii_bus->dev.of_node);
++	gpiod_set_value(smi->reset, 1);
++
++	platform_set_drvdata(pdev, NULL);
++
++	return 0;
++}
++
++static void realtek_smi_shutdown(struct platform_device *pdev)
++{
++	struct realtek_smi *smi = platform_get_drvdata(pdev);
++
++	if (!smi)
++		return;
++
++	dsa_switch_shutdown(smi->ds);
++
++	platform_set_drvdata(pdev, NULL);
++}
++
++static const struct of_device_id realtek_smi_of_match[] = {
++	{
++		.compatible = "realtek,rtl8366rb",
++		.data = &rtl8366rb_variant,
++	},
++	{
++		/* FIXME: add support for RTL8366S and more */
++		.compatible = "realtek,rtl8366s",
++		.data = NULL,
++	},
++	{
++		.compatible = "realtek,rtl8365mb",
++		.data = &rtl8365mb_variant,
++	},
++	{ /* sentinel */ },
++};
++MODULE_DEVICE_TABLE(of, realtek_smi_of_match);
++
++static struct platform_driver realtek_smi_driver = {
++	.driver = {
++		.name = "realtek-smi",
++		.of_match_table = of_match_ptr(realtek_smi_of_match),
++	},
++	.probe  = realtek_smi_probe,
++	.remove = realtek_smi_remove,
++	.shutdown = realtek_smi_shutdown,
++};
++module_platform_driver(realtek_smi_driver);
++
++MODULE_LICENSE("GPL");
+diff --git a/drivers/net/dsa/realtek/realtek-smi-core.h b/drivers/net/dsa/realtek/realtek-smi-core.h
+new file mode 100644
+index 0000000000000..faed387d8db38
+--- /dev/null
++++ b/drivers/net/dsa/realtek/realtek-smi-core.h
+@@ -0,0 +1,145 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++/* Realtek SMI interface driver defines
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ */
++
++#ifndef _REALTEK_SMI_H
++#define _REALTEK_SMI_H
++
++#include <linux/phy.h>
++#include <linux/platform_device.h>
++#include <linux/gpio/consumer.h>
++#include <net/dsa.h>
++
++struct realtek_smi_ops;
++struct dentry;
++struct inode;
++struct file;
++
++struct rtl8366_mib_counter {
++	unsigned int	base;
++	unsigned int	offset;
++	unsigned int	length;
++	const char	*name;
++};
++
++/*
++ * struct rtl8366_vlan_mc - Virtual LAN member configuration
++ */
++struct rtl8366_vlan_mc {
++	u16	vid;
++	u16	untag;
++	u16	member;
++	u8	fid;
++	u8	priority;
++};
++
++struct rtl8366_vlan_4k {
++	u16	vid;
++	u16	untag;
++	u16	member;
++	u8	fid;
++};
++
++struct realtek_smi {
++	struct device		*dev;
++	struct gpio_desc	*reset;
++	struct gpio_desc	*mdc;
++	struct gpio_desc	*mdio;
++	struct regmap		*map;
++	struct mii_bus		*slave_mii_bus;
++
++	unsigned int		clk_delay;
++	u8			cmd_read;
++	u8			cmd_write;
++	spinlock_t		lock; /* Locks around command writes */
++	struct dsa_switch	*ds;
++	struct irq_domain	*irqdomain;
++	bool			leds_disabled;
++
++	unsigned int		cpu_port;
++	unsigned int		num_ports;
++	unsigned int		num_vlan_mc;
++	unsigned int		num_mib_counters;
++	struct rtl8366_mib_counter *mib_counters;
++
++	const struct realtek_smi_ops *ops;
++
++	int			vlan_enabled;
++	int			vlan4k_enabled;
++
++	char			buf[4096];
++	void			*chip_data; /* Per-chip extra variant data */
++};
++
++/*
++ * struct realtek_smi_ops - vtable for the per-SMI-chiptype operations
++ * @detect: detects the chiptype
++ */
++struct realtek_smi_ops {
++	int	(*detect)(struct realtek_smi *smi);
++	int	(*reset_chip)(struct realtek_smi *smi);
++	int	(*setup)(struct realtek_smi *smi);
++	void	(*cleanup)(struct realtek_smi *smi);
++	int	(*get_mib_counter)(struct realtek_smi *smi,
++				   int port,
++				   struct rtl8366_mib_counter *mib,
++				   u64 *mibvalue);
++	int	(*get_vlan_mc)(struct realtek_smi *smi, u32 index,
++			       struct rtl8366_vlan_mc *vlanmc);
++	int	(*set_vlan_mc)(struct realtek_smi *smi, u32 index,
++			       const struct rtl8366_vlan_mc *vlanmc);
++	int	(*get_vlan_4k)(struct realtek_smi *smi, u32 vid,
++			       struct rtl8366_vlan_4k *vlan4k);
++	int	(*set_vlan_4k)(struct realtek_smi *smi,
++			       const struct rtl8366_vlan_4k *vlan4k);
++	int	(*get_mc_index)(struct realtek_smi *smi, int port, int *val);
++	int	(*set_mc_index)(struct realtek_smi *smi, int port, int index);
++	bool	(*is_vlan_valid)(struct realtek_smi *smi, unsigned int vlan);
++	int	(*enable_vlan)(struct realtek_smi *smi, bool enable);
++	int	(*enable_vlan4k)(struct realtek_smi *smi, bool enable);
++	int	(*enable_port)(struct realtek_smi *smi, int port, bool enable);
++	int	(*phy_read)(struct realtek_smi *smi, int phy, int regnum);
++	int	(*phy_write)(struct realtek_smi *smi, int phy, int regnum,
++			     u16 val);
++};
++
++struct realtek_smi_variant {
++	const struct dsa_switch_ops *ds_ops;
++	const struct realtek_smi_ops *ops;
++	unsigned int clk_delay;
++	u8 cmd_read;
++	u8 cmd_write;
++	size_t chip_data_sz;
++};
++
++/* SMI core calls */
++int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
++				u32 data);
++int realtek_smi_setup_mdio(struct realtek_smi *smi);
++
++/* RTL8366 library helpers */
++int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used);
++int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
++		     u32 untag, u32 fid);
++int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
++		     unsigned int vid);
++int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable);
++int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable);
++int rtl8366_reset_vlan(struct realtek_smi *smi);
++int rtl8366_vlan_add(struct dsa_switch *ds, int port,
++		     const struct switchdev_obj_port_vlan *vlan,
++		     struct netlink_ext_ack *extack);
++int rtl8366_vlan_del(struct dsa_switch *ds, int port,
++		     const struct switchdev_obj_port_vlan *vlan);
++void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
++			 uint8_t *data);
++int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset);
++void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data);
++
++extern const struct realtek_smi_variant rtl8366rb_variant;
++extern const struct realtek_smi_variant rtl8365mb_variant;
++
++#endif /*  _REALTEK_SMI_H */
+diff --git a/drivers/net/dsa/realtek/rtl8365mb.c b/drivers/net/dsa/realtek/rtl8365mb.c
+new file mode 100644
+index 0000000000000..48c0e3e466001
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8365mb.c
+@@ -0,0 +1,1986 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Realtek SMI subdriver for the Realtek RTL8365MB-VC ethernet switch.
++ *
++ * Copyright (C) 2021 Alvin Šipraga <alsi@bang-olufsen.dk>
++ * Copyright (C) 2021 Michael Rasmussen <mir@bang-olufsen.dk>
++ *
++ * The RTL8365MB-VC is a 4+1 port 10/100/1000M switch controller. It includes 4
++ * integrated PHYs for the user facing ports, and an extension interface which
++ * can be connected to the CPU - or another PHY - via either MII, RMII, or
++ * RGMII. The switch is configured via the Realtek Simple Management Interface
++ * (SMI), which uses the MDIO/MDC lines.
++ *
++ * Below is a simplified block diagram of the chip and its relevant interfaces.
++ *
++ *                          .-----------------------------------.
++ *                          |                                   |
++ *         UTP <---------------> Giga PHY <-> PCS <-> P0 GMAC   |
++ *         UTP <---------------> Giga PHY <-> PCS <-> P1 GMAC   |
++ *         UTP <---------------> Giga PHY <-> PCS <-> P2 GMAC   |
++ *         UTP <---------------> Giga PHY <-> PCS <-> P3 GMAC   |
++ *                          |                                   |
++ *     CPU/PHY <-MII/RMII/RGMII--->  Extension  <---> Extension |
++ *                          |       interface 1        GMAC 1   |
++ *                          |                                   |
++ *     SMI driver/ <-MDC/SCL---> Management    ~~~~~~~~~~~~~~   |
++ *        EEPROM   <-MDIO/SDA--> interface     ~REALTEK ~~~~~   |
++ *                          |                  ~RTL8365MB ~~~   |
++ *                          |                  ~GXXXC TAIWAN~   |
++ *        GPIO <--------------> Reset          ~~~~~~~~~~~~~~   |
++ *                          |                                   |
++ *      Interrupt  <----------> Link UP/DOWN events             |
++ *      controller          |                                   |
++ *                          '-----------------------------------'
++ *
++ * The driver uses DSA to integrate the 4 user and 1 extension ports into the
++ * kernel. Netdevices are created for the user ports, as are PHY devices for
++ * their integrated PHYs. The device tree firmware should also specify the link
++ * partner of the extension port - either via a fixed-link or other phy-handle.
++ * See the device tree bindings for more detailed information. Note that the
++ * driver has only been tested with a fixed-link, but in principle it should not
++ * matter.
++ *
++ * NOTE: Currently, only the RGMII interface is implemented in this driver.
++ *
++ * The interrupt line is asserted on link UP/DOWN events. The driver creates a
++ * custom irqchip to handle this interrupt and demultiplex the events by reading
++ * the status registers via SMI. Interrupts are then propagated to the relevant
++ * PHY device.
++ *
++ * The EEPROM contains initial register values which the chip will read over I2C
++ * upon hardware reset. It is also possible to omit the EEPROM. In both cases,
++ * the driver will manually reprogram some registers using jam tables to reach
++ * an initial state defined by the vendor driver.
++ *
++ * This Linux driver is written based on an OS-agnostic vendor driver from
++ * Realtek. The reference GPL-licensed sources can be found in the OpenWrt
++ * source tree under the name rtl8367c. The vendor driver claims to support a
++ * number of similar switch controllers from Realtek, but the only hardware we
++ * have is the RTL8365MB-VC. Moreover, there does not seem to be any chip under
++ * the name RTL8367C. Although one wishes that the 'C' stood for some kind of
++ * common hardware revision, there exist examples of chips with the suffix -VC
++ * which are explicitly not supported by the rtl8367c driver and which instead
++ * require the rtl8367d vendor driver. With all this uncertainty, the driver has
++ * been modestly named rtl8365mb. Future implementors may wish to rename things
++ * accordingly.
++ *
++ * In the same family of chips, some carry up to 8 user ports and up to 2
++ * extension ports. Where possible this driver tries to make things generic, but
++ * more work must be done to support these configurations. According to
++ * documentation from Realtek, the family should include the following chips:
++ *
++ *  - RTL8363NB
++ *  - RTL8363NB-VB
++ *  - RTL8363SC
++ *  - RTL8363SC-VB
++ *  - RTL8364NB
++ *  - RTL8364NB-VB
++ *  - RTL8365MB-VC
++ *  - RTL8366SC
++ *  - RTL8367RB-VB
++ *  - RTL8367SB
++ *  - RTL8367S
++ *  - RTL8370MB
++ *  - RTL8310SR
++ *
++ * Some of the register logic for these additional chips has been skipped over
++ * while implementing this driver. It is therefore not possible to assume that
++ * things will work out-of-the-box for other chips, and a careful review of the
++ * vendor driver may be needed to expand support. The RTL8365MB-VC seems to be
++ * one of the simpler chips.
++ */
++
++#include <linux/bitfield.h>
++#include <linux/bitops.h>
++#include <linux/interrupt.h>
++#include <linux/irqdomain.h>
++#include <linux/mutex.h>
++#include <linux/of_irq.h>
++#include <linux/regmap.h>
++#include <linux/if_bridge.h>
++
++#include "realtek-smi-core.h"
++
++/* Chip-specific data and limits */
++#define RTL8365MB_CHIP_ID_8365MB_VC		0x6367
++#define RTL8365MB_CPU_PORT_NUM_8365MB_VC	6
++#define RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC	2112
++
++/* Family-specific data and limits */
++#define RTL8365MB_PHYADDRMAX	7
++#define RTL8365MB_NUM_PHYREGS	32
++#define RTL8365MB_PHYREGMAX	(RTL8365MB_NUM_PHYREGS - 1)
++#define RTL8365MB_MAX_NUM_PORTS	(RTL8365MB_CPU_PORT_NUM_8365MB_VC + 1)
++
++/* Chip identification registers */
++#define RTL8365MB_CHIP_ID_REG		0x1300
++
++#define RTL8365MB_CHIP_VER_REG		0x1301
++
++#define RTL8365MB_MAGIC_REG		0x13C2
++#define   RTL8365MB_MAGIC_VALUE		0x0249
++
++/* Chip reset register */
++#define RTL8365MB_CHIP_RESET_REG	0x1322
++#define RTL8365MB_CHIP_RESET_SW_MASK	0x0002
++#define RTL8365MB_CHIP_RESET_HW_MASK	0x0001
++
++/* Interrupt polarity register */
++#define RTL8365MB_INTR_POLARITY_REG	0x1100
++#define   RTL8365MB_INTR_POLARITY_MASK	0x0001
++#define   RTL8365MB_INTR_POLARITY_HIGH	0
++#define   RTL8365MB_INTR_POLARITY_LOW	1
++
++/* Interrupt control/status register - enable/check specific interrupt types */
++#define RTL8365MB_INTR_CTRL_REG			0x1101
++#define RTL8365MB_INTR_STATUS_REG		0x1102
++#define   RTL8365MB_INTR_SLIENT_START_2_MASK	0x1000
++#define   RTL8365MB_INTR_SLIENT_START_MASK	0x0800
++#define   RTL8365MB_INTR_ACL_ACTION_MASK	0x0200
++#define   RTL8365MB_INTR_CABLE_DIAG_FIN_MASK	0x0100
++#define   RTL8365MB_INTR_INTERRUPT_8051_MASK	0x0080
++#define   RTL8365MB_INTR_LOOP_DETECTION_MASK	0x0040
++#define   RTL8365MB_INTR_GREEN_TIMER_MASK	0x0020
++#define   RTL8365MB_INTR_SPECIAL_CONGEST_MASK	0x0010
++#define   RTL8365MB_INTR_SPEED_CHANGE_MASK	0x0008
++#define   RTL8365MB_INTR_LEARN_OVER_MASK	0x0004
++#define   RTL8365MB_INTR_METER_EXCEEDED_MASK	0x0002
++#define   RTL8365MB_INTR_LINK_CHANGE_MASK	0x0001
++#define   RTL8365MB_INTR_ALL_MASK                      \
++		(RTL8365MB_INTR_SLIENT_START_2_MASK |  \
++		 RTL8365MB_INTR_SLIENT_START_MASK |    \
++		 RTL8365MB_INTR_ACL_ACTION_MASK |      \
++		 RTL8365MB_INTR_CABLE_DIAG_FIN_MASK |  \
++		 RTL8365MB_INTR_INTERRUPT_8051_MASK |  \
++		 RTL8365MB_INTR_LOOP_DETECTION_MASK |  \
++		 RTL8365MB_INTR_GREEN_TIMER_MASK |     \
++		 RTL8365MB_INTR_SPECIAL_CONGEST_MASK | \
++		 RTL8365MB_INTR_SPEED_CHANGE_MASK |    \
++		 RTL8365MB_INTR_LEARN_OVER_MASK |      \
++		 RTL8365MB_INTR_METER_EXCEEDED_MASK |  \
++		 RTL8365MB_INTR_LINK_CHANGE_MASK)
++
++/* Per-port interrupt type status registers */
++#define RTL8365MB_PORT_LINKDOWN_IND_REG		0x1106
++#define   RTL8365MB_PORT_LINKDOWN_IND_MASK	0x07FF
++
++#define RTL8365MB_PORT_LINKUP_IND_REG		0x1107
++#define   RTL8365MB_PORT_LINKUP_IND_MASK	0x07FF
++
++/* PHY indirect access registers */
++#define RTL8365MB_INDIRECT_ACCESS_CTRL_REG			0x1F00
++#define   RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK		0x0002
++#define   RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ		0
++#define   RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE		1
++#define   RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK		0x0001
++#define   RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE		1
++#define RTL8365MB_INDIRECT_ACCESS_STATUS_REG			0x1F01
++#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG			0x1F02
++#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK	GENMASK(4, 0)
++#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK		GENMASK(7, 5)
++#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK	GENMASK(11, 8)
++#define   RTL8365MB_PHY_BASE					0x2000
++#define RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG		0x1F03
++#define RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG			0x1F04
++
++/* PHY OCP address prefix register */
++#define RTL8365MB_GPHY_OCP_MSB_0_REG			0x1D15
++#define   RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK	0x0FC0
++#define RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK		0xFC00
++
++/* The PHY OCP addresses of PHY registers 0~31 start here */
++#define RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE		0xA400
++
++/* EXT port interface mode values - used in DIGITAL_INTERFACE_SELECT */
++#define RTL8365MB_EXT_PORT_MODE_DISABLE		0
++#define RTL8365MB_EXT_PORT_MODE_RGMII		1
++#define RTL8365MB_EXT_PORT_MODE_MII_MAC		2
++#define RTL8365MB_EXT_PORT_MODE_MII_PHY		3
++#define RTL8365MB_EXT_PORT_MODE_TMII_MAC	4
++#define RTL8365MB_EXT_PORT_MODE_TMII_PHY	5
++#define RTL8365MB_EXT_PORT_MODE_GMII		6
++#define RTL8365MB_EXT_PORT_MODE_RMII_MAC	7
++#define RTL8365MB_EXT_PORT_MODE_RMII_PHY	8
++#define RTL8365MB_EXT_PORT_MODE_SGMII		9
++#define RTL8365MB_EXT_PORT_MODE_HSGMII		10
++#define RTL8365MB_EXT_PORT_MODE_1000X_100FX	11
++#define RTL8365MB_EXT_PORT_MODE_1000X		12
++#define RTL8365MB_EXT_PORT_MODE_100FX		13
++
++/* EXT port interface mode configuration registers 0~1 */
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0		0x1305
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG1		0x13C3
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(_extport)   \
++		(RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0 + \
++		 ((_extport) >> 1) * (0x13C3 - 0x1305))
++#define   RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(_extport) \
++		(0xF << (((_extport) % 2)))
++#define   RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(_extport) \
++		(((_extport) % 2) * 4)
++
++/* EXT port RGMII TX/RX delay configuration registers 1~2 */
++#define RTL8365MB_EXT_RGMXF_REG1		0x1307
++#define RTL8365MB_EXT_RGMXF_REG2		0x13C5
++#define RTL8365MB_EXT_RGMXF_REG(_extport)   \
++		(RTL8365MB_EXT_RGMXF_REG1 + \
++		 (((_extport) >> 1) * (0x13C5 - 0x1307)))
++#define   RTL8365MB_EXT_RGMXF_RXDELAY_MASK	0x0007
++#define   RTL8365MB_EXT_RGMXF_TXDELAY_MASK	0x0008
++
++/* External port speed values - used in DIGITAL_INTERFACE_FORCE */
++#define RTL8365MB_PORT_SPEED_10M	0
++#define RTL8365MB_PORT_SPEED_100M	1
++#define RTL8365MB_PORT_SPEED_1000M	2
++
++/* EXT port force configuration registers 0~2 */
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0			0x1310
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG1			0x1311
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG2			0x13C4
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(_extport)   \
++		(RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0 + \
++		 ((_extport) & 0x1) +                     \
++		 ((((_extport) >> 1) & 0x1) * (0x13C4 - 0x1310)))
++#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK		0x1000
++#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_NWAY_MASK		0x0080
++#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK	0x0040
++#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK	0x0020
++#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK		0x0010
++#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK		0x0004
++#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK		0x0003
++
++/* CPU port mask register - controls which ports are treated as CPU ports */
++#define RTL8365MB_CPU_PORT_MASK_REG	0x1219
++#define   RTL8365MB_CPU_PORT_MASK_MASK	0x07FF
++
++/* CPU control register */
++#define RTL8365MB_CPU_CTRL_REG			0x121A
++#define   RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK	0x0400
++#define   RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK	0x0200
++#define   RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK	0x0080
++#define   RTL8365MB_CPU_CTRL_TAG_POSITION_MASK	0x0040
++#define   RTL8365MB_CPU_CTRL_TRAP_PORT_MASK	0x0038
++#define   RTL8365MB_CPU_CTRL_INSERTMODE_MASK	0x0006
++#define   RTL8365MB_CPU_CTRL_EN_MASK		0x0001
++
++/* Maximum packet length register */
++#define RTL8365MB_CFG0_MAX_LEN_REG	0x088C
++#define   RTL8365MB_CFG0_MAX_LEN_MASK	0x3FFF
++
++/* Port learning limit registers */
++#define RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE		0x0A20
++#define RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(_physport) \
++		(RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE + (_physport))
++
++/* Port isolation (forwarding mask) registers */
++#define RTL8365MB_PORT_ISOLATION_REG_BASE		0x08A2
++#define RTL8365MB_PORT_ISOLATION_REG(_physport) \
++		(RTL8365MB_PORT_ISOLATION_REG_BASE + (_physport))
++#define   RTL8365MB_PORT_ISOLATION_MASK			0x07FF
++
++/* MSTP port state registers - indexed by tree instance */
++#define RTL8365MB_MSTI_CTRL_BASE			0x0A00
++#define RTL8365MB_MSTI_CTRL_REG(_msti, _physport) \
++		(RTL8365MB_MSTI_CTRL_BASE + ((_msti) << 1) + ((_physport) >> 3))
++#define   RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(_physport) ((_physport) << 1)
++#define   RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(_physport) \
++		(0x3 << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET((_physport)))
++
++/* MIB counter value registers */
++#define RTL8365MB_MIB_COUNTER_BASE	0x1000
++#define RTL8365MB_MIB_COUNTER_REG(_x)	(RTL8365MB_MIB_COUNTER_BASE + (_x))
++
++/* MIB counter address register */
++#define RTL8365MB_MIB_ADDRESS_REG		0x1004
++#define   RTL8365MB_MIB_ADDRESS_PORT_OFFSET	0x007C
++#define   RTL8365MB_MIB_ADDRESS(_p, _x) \
++		(((RTL8365MB_MIB_ADDRESS_PORT_OFFSET) * (_p) + (_x)) >> 2)
++
++#define RTL8365MB_MIB_CTRL0_REG			0x1005
++#define   RTL8365MB_MIB_CTRL0_RESET_MASK	0x0002
++#define   RTL8365MB_MIB_CTRL0_BUSY_MASK		0x0001
++
++/* The DSA callback .get_stats64 runs in atomic context, so we are not allowed
++ * to block. On the other hand, accessing MIB counters absolutely requires us to
++ * block. The solution is thus to schedule work which polls the MIB counters
++ * asynchronously and updates some private data, which the callback can then
++ * fetch atomically. Three seconds should be a good enough polling interval.
++ */
++#define RTL8365MB_STATS_INTERVAL_JIFFIES	(3 * HZ)
++
++enum rtl8365mb_mib_counter_index {
++	RTL8365MB_MIB_ifInOctets,
++	RTL8365MB_MIB_dot3StatsFCSErrors,
++	RTL8365MB_MIB_dot3StatsSymbolErrors,
++	RTL8365MB_MIB_dot3InPauseFrames,
++	RTL8365MB_MIB_dot3ControlInUnknownOpcodes,
++	RTL8365MB_MIB_etherStatsFragments,
++	RTL8365MB_MIB_etherStatsJabbers,
++	RTL8365MB_MIB_ifInUcastPkts,
++	RTL8365MB_MIB_etherStatsDropEvents,
++	RTL8365MB_MIB_ifInMulticastPkts,
++	RTL8365MB_MIB_ifInBroadcastPkts,
++	RTL8365MB_MIB_inMldChecksumError,
++	RTL8365MB_MIB_inIgmpChecksumError,
++	RTL8365MB_MIB_inMldSpecificQuery,
++	RTL8365MB_MIB_inMldGeneralQuery,
++	RTL8365MB_MIB_inIgmpSpecificQuery,
++	RTL8365MB_MIB_inIgmpGeneralQuery,
++	RTL8365MB_MIB_inMldLeaves,
++	RTL8365MB_MIB_inIgmpLeaves,
++	RTL8365MB_MIB_etherStatsOctets,
++	RTL8365MB_MIB_etherStatsUnderSizePkts,
++	RTL8365MB_MIB_etherOversizeStats,
++	RTL8365MB_MIB_etherStatsPkts64Octets,
++	RTL8365MB_MIB_etherStatsPkts65to127Octets,
++	RTL8365MB_MIB_etherStatsPkts128to255Octets,
++	RTL8365MB_MIB_etherStatsPkts256to511Octets,
++	RTL8365MB_MIB_etherStatsPkts512to1023Octets,
++	RTL8365MB_MIB_etherStatsPkts1024to1518Octets,
++	RTL8365MB_MIB_ifOutOctets,
++	RTL8365MB_MIB_dot3StatsSingleCollisionFrames,
++	RTL8365MB_MIB_dot3StatsMultipleCollisionFrames,
++	RTL8365MB_MIB_dot3StatsDeferredTransmissions,
++	RTL8365MB_MIB_dot3StatsLateCollisions,
++	RTL8365MB_MIB_etherStatsCollisions,
++	RTL8365MB_MIB_dot3StatsExcessiveCollisions,
++	RTL8365MB_MIB_dot3OutPauseFrames,
++	RTL8365MB_MIB_ifOutDiscards,
++	RTL8365MB_MIB_dot1dTpPortInDiscards,
++	RTL8365MB_MIB_ifOutUcastPkts,
++	RTL8365MB_MIB_ifOutMulticastPkts,
++	RTL8365MB_MIB_ifOutBroadcastPkts,
++	RTL8365MB_MIB_outOampduPkts,
++	RTL8365MB_MIB_inOampduPkts,
++	RTL8365MB_MIB_inIgmpJoinsSuccess,
++	RTL8365MB_MIB_inIgmpJoinsFail,
++	RTL8365MB_MIB_inMldJoinsSuccess,
++	RTL8365MB_MIB_inMldJoinsFail,
++	RTL8365MB_MIB_inReportSuppressionDrop,
++	RTL8365MB_MIB_inLeaveSuppressionDrop,
++	RTL8365MB_MIB_outIgmpReports,
++	RTL8365MB_MIB_outIgmpLeaves,
++	RTL8365MB_MIB_outIgmpGeneralQuery,
++	RTL8365MB_MIB_outIgmpSpecificQuery,
++	RTL8365MB_MIB_outMldReports,
++	RTL8365MB_MIB_outMldLeaves,
++	RTL8365MB_MIB_outMldGeneralQuery,
++	RTL8365MB_MIB_outMldSpecificQuery,
++	RTL8365MB_MIB_inKnownMulticastPkts,
++	RTL8365MB_MIB_END,
++};
++
++struct rtl8365mb_mib_counter {
++	u32 offset;
++	u32 length;
++	const char *name;
++};
++
++#define RTL8365MB_MAKE_MIB_COUNTER(_offset, _length, _name) \
++		[RTL8365MB_MIB_ ## _name] = { _offset, _length, #_name }
++
++static struct rtl8365mb_mib_counter rtl8365mb_mib_counters[] = {
++	RTL8365MB_MAKE_MIB_COUNTER(0, 4, ifInOctets),
++	RTL8365MB_MAKE_MIB_COUNTER(4, 2, dot3StatsFCSErrors),
++	RTL8365MB_MAKE_MIB_COUNTER(6, 2, dot3StatsSymbolErrors),
++	RTL8365MB_MAKE_MIB_COUNTER(8, 2, dot3InPauseFrames),
++	RTL8365MB_MAKE_MIB_COUNTER(10, 2, dot3ControlInUnknownOpcodes),
++	RTL8365MB_MAKE_MIB_COUNTER(12, 2, etherStatsFragments),
++	RTL8365MB_MAKE_MIB_COUNTER(14, 2, etherStatsJabbers),
++	RTL8365MB_MAKE_MIB_COUNTER(16, 2, ifInUcastPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(18, 2, etherStatsDropEvents),
++	RTL8365MB_MAKE_MIB_COUNTER(20, 2, ifInMulticastPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(22, 2, ifInBroadcastPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(24, 2, inMldChecksumError),
++	RTL8365MB_MAKE_MIB_COUNTER(26, 2, inIgmpChecksumError),
++	RTL8365MB_MAKE_MIB_COUNTER(28, 2, inMldSpecificQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(30, 2, inMldGeneralQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(32, 2, inIgmpSpecificQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(34, 2, inIgmpGeneralQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(36, 2, inMldLeaves),
++	RTL8365MB_MAKE_MIB_COUNTER(38, 2, inIgmpLeaves),
++	RTL8365MB_MAKE_MIB_COUNTER(40, 4, etherStatsOctets),
++	RTL8365MB_MAKE_MIB_COUNTER(44, 2, etherStatsUnderSizePkts),
++	RTL8365MB_MAKE_MIB_COUNTER(46, 2, etherOversizeStats),
++	RTL8365MB_MAKE_MIB_COUNTER(48, 2, etherStatsPkts64Octets),
++	RTL8365MB_MAKE_MIB_COUNTER(50, 2, etherStatsPkts65to127Octets),
++	RTL8365MB_MAKE_MIB_COUNTER(52, 2, etherStatsPkts128to255Octets),
++	RTL8365MB_MAKE_MIB_COUNTER(54, 2, etherStatsPkts256to511Octets),
++	RTL8365MB_MAKE_MIB_COUNTER(56, 2, etherStatsPkts512to1023Octets),
++	RTL8365MB_MAKE_MIB_COUNTER(58, 2, etherStatsPkts1024to1518Octets),
++	RTL8365MB_MAKE_MIB_COUNTER(60, 4, ifOutOctets),
++	RTL8365MB_MAKE_MIB_COUNTER(64, 2, dot3StatsSingleCollisionFrames),
++	RTL8365MB_MAKE_MIB_COUNTER(66, 2, dot3StatsMultipleCollisionFrames),
++	RTL8365MB_MAKE_MIB_COUNTER(68, 2, dot3StatsDeferredTransmissions),
++	RTL8365MB_MAKE_MIB_COUNTER(70, 2, dot3StatsLateCollisions),
++	RTL8365MB_MAKE_MIB_COUNTER(72, 2, etherStatsCollisions),
++	RTL8365MB_MAKE_MIB_COUNTER(74, 2, dot3StatsExcessiveCollisions),
++	RTL8365MB_MAKE_MIB_COUNTER(76, 2, dot3OutPauseFrames),
++	RTL8365MB_MAKE_MIB_COUNTER(78, 2, ifOutDiscards),
++	RTL8365MB_MAKE_MIB_COUNTER(80, 2, dot1dTpPortInDiscards),
++	RTL8365MB_MAKE_MIB_COUNTER(82, 2, ifOutUcastPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(84, 2, ifOutMulticastPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(86, 2, ifOutBroadcastPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(88, 2, outOampduPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(90, 2, inOampduPkts),
++	RTL8365MB_MAKE_MIB_COUNTER(92, 4, inIgmpJoinsSuccess),
++	RTL8365MB_MAKE_MIB_COUNTER(96, 2, inIgmpJoinsFail),
++	RTL8365MB_MAKE_MIB_COUNTER(98, 2, inMldJoinsSuccess),
++	RTL8365MB_MAKE_MIB_COUNTER(100, 2, inMldJoinsFail),
++	RTL8365MB_MAKE_MIB_COUNTER(102, 2, inReportSuppressionDrop),
++	RTL8365MB_MAKE_MIB_COUNTER(104, 2, inLeaveSuppressionDrop),
++	RTL8365MB_MAKE_MIB_COUNTER(106, 2, outIgmpReports),
++	RTL8365MB_MAKE_MIB_COUNTER(108, 2, outIgmpLeaves),
++	RTL8365MB_MAKE_MIB_COUNTER(110, 2, outIgmpGeneralQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(112, 2, outIgmpSpecificQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(114, 2, outMldReports),
++	RTL8365MB_MAKE_MIB_COUNTER(116, 2, outMldLeaves),
++	RTL8365MB_MAKE_MIB_COUNTER(118, 2, outMldGeneralQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(120, 2, outMldSpecificQuery),
++	RTL8365MB_MAKE_MIB_COUNTER(122, 2, inKnownMulticastPkts),
++};
++
++static_assert(ARRAY_SIZE(rtl8365mb_mib_counters) == RTL8365MB_MIB_END);
++
++struct rtl8365mb_jam_tbl_entry {
++	u16 reg;
++	u16 val;
++};
++
++/* Lifted from the vendor driver sources */
++static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_8365mb_vc[] = {
++	{ 0x13EB, 0x15BB }, { 0x1303, 0x06D6 }, { 0x1304, 0x0700 },
++	{ 0x13E2, 0x003F }, { 0x13F9, 0x0090 }, { 0x121E, 0x03CA },
++	{ 0x1233, 0x0352 }, { 0x1237, 0x00A0 }, { 0x123A, 0x0030 },
++	{ 0x1239, 0x0084 }, { 0x0301, 0x1000 }, { 0x1349, 0x001F },
++	{ 0x18E0, 0x4004 }, { 0x122B, 0x241C }, { 0x1305, 0xC000 },
++	{ 0x13F0, 0x0000 },
++};
++
++static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_common[] = {
++	{ 0x1200, 0x7FCB }, { 0x0884, 0x0003 }, { 0x06EB, 0x0001 },
++	{ 0x03Fa, 0x0007 }, { 0x08C8, 0x00C0 }, { 0x0A30, 0x020E },
++	{ 0x0800, 0x0000 }, { 0x0802, 0x0000 }, { 0x09DA, 0x0013 },
++	{ 0x1D32, 0x0002 },
++};
++
++enum rtl8365mb_stp_state {
++	RTL8365MB_STP_STATE_DISABLED = 0,
++	RTL8365MB_STP_STATE_BLOCKING = 1,
++	RTL8365MB_STP_STATE_LEARNING = 2,
++	RTL8365MB_STP_STATE_FORWARDING = 3,
++};
++
++enum rtl8365mb_cpu_insert {
++	RTL8365MB_CPU_INSERT_TO_ALL = 0,
++	RTL8365MB_CPU_INSERT_TO_TRAPPING = 1,
++	RTL8365MB_CPU_INSERT_TO_NONE = 2,
++};
++
++enum rtl8365mb_cpu_position {
++	RTL8365MB_CPU_POS_AFTER_SA = 0,
++	RTL8365MB_CPU_POS_BEFORE_CRC = 1,
++};
++
++enum rtl8365mb_cpu_format {
++	RTL8365MB_CPU_FORMAT_8BYTES = 0,
++	RTL8365MB_CPU_FORMAT_4BYTES = 1,
++};
++
++enum rtl8365mb_cpu_rxlen {
++	RTL8365MB_CPU_RXLEN_72BYTES = 0,
++	RTL8365MB_CPU_RXLEN_64BYTES = 1,
++};
++
++/**
++ * struct rtl8365mb_cpu - CPU port configuration
++ * @enable: enable/disable hardware insertion of CPU tag in switch->CPU frames
++ * @mask: port mask of ports that should parse CPU tags
++ * @trap_port: forward trapped frames to this port
++ * @insert: CPU tag insertion mode in switch->CPU frames
++ * @position: position of CPU tag in frame
++ * @rx_length: minimum CPU RX length
++ * @format: CPU tag format
++ *
++ * Represents the CPU tagging and CPU port configuration of the switch. These
++ * settings are configurable at runtime.
++ */
++struct rtl8365mb_cpu {
++	bool enable;
++	u32 mask;
++	u32 trap_port;
++	enum rtl8365mb_cpu_insert insert;
++	enum rtl8365mb_cpu_position position;
++	enum rtl8365mb_cpu_rxlen rx_length;
++	enum rtl8365mb_cpu_format format;
++};
++
++/**
++ * struct rtl8365mb_port - private per-port data
++ * @smi: pointer to parent realtek_smi data
++ * @index: DSA port index, same as dsa_port::index
++ * @stats: link statistics populated by rtl8365mb_stats_poll, ready for atomic
++ *         access via rtl8365mb_get_stats64
++ * @stats_lock: protect the stats structure during read/update
++ * @mib_work: delayed work for polling MIB counters
++ */
++struct rtl8365mb_port {
++	struct realtek_smi *smi;
++	unsigned int index;
++	struct rtnl_link_stats64 stats;
++	spinlock_t stats_lock;
++	struct delayed_work mib_work;
++};
++
++/**
++ * struct rtl8365mb - private chip-specific driver data
++ * @smi: pointer to parent realtek_smi data
++ * @irq: registered IRQ or zero
++ * @chip_id: chip identifier
++ * @chip_ver: chip silicon revision
++ * @port_mask: mask of all ports
++ * @learn_limit_max: maximum number of L2 addresses the chip can learn
++ * @cpu: CPU tagging and CPU port configuration for this chip
++ * @mib_lock: prevent concurrent reads of MIB counters
++ * @ports: per-port data
++ * @jam_table: chip-specific initialization jam table
++ * @jam_size: size of the chip's jam table
++ *
++ * Private data for this driver.
++ */
++struct rtl8365mb {
++	struct realtek_smi *smi;
++	int irq;
++	u32 chip_id;
++	u32 chip_ver;
++	u32 port_mask;
++	u32 learn_limit_max;
++	struct rtl8365mb_cpu cpu;
++	struct mutex mib_lock;
++	struct rtl8365mb_port ports[RTL8365MB_MAX_NUM_PORTS];
++	const struct rtl8365mb_jam_tbl_entry *jam_table;
++	size_t jam_size;
++};
++
++static int rtl8365mb_phy_poll_busy(struct realtek_smi *smi)
++{
++	u32 val;
++
++	return regmap_read_poll_timeout(smi->map,
++					RTL8365MB_INDIRECT_ACCESS_STATUS_REG,
++					val, !val, 10, 100);
++}
++
++static int rtl8365mb_phy_ocp_prepare(struct realtek_smi *smi, int phy,
++				     u32 ocp_addr)
++{
++	u32 val;
++	int ret;
++
++	/* Set OCP prefix */
++	val = FIELD_GET(RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK, ocp_addr);
++	ret = regmap_update_bits(
++		smi->map, RTL8365MB_GPHY_OCP_MSB_0_REG,
++		RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK,
++		FIELD_PREP(RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK, val));
++	if (ret)
++		return ret;
++
++	/* Set PHY register address */
++	val = RTL8365MB_PHY_BASE;
++	val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK, phy);
++	val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK,
++			  ocp_addr >> 1);
++	val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK,
++			  ocp_addr >> 6);
++	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG,
++			   val);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static int rtl8365mb_phy_ocp_read(struct realtek_smi *smi, int phy,
++				  u32 ocp_addr, u16 *data)
++{
++	u32 val;
++	int ret;
++
++	ret = rtl8365mb_phy_poll_busy(smi);
++	if (ret)
++		return ret;
++
++	ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
++	if (ret)
++		return ret;
++
++	/* Execute read operation */
++	val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
++			 RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
++	      FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
++			 RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ);
++	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
++	if (ret)
++		return ret;
++
++	ret = rtl8365mb_phy_poll_busy(smi);
++	if (ret)
++		return ret;
++
++	/* Get PHY register data */
++	ret = regmap_read(smi->map, RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG,
++			  &val);
++	if (ret)
++		return ret;
++
++	*data = val & 0xFFFF;
++
++	return 0;
++}
++
++static int rtl8365mb_phy_ocp_write(struct realtek_smi *smi, int phy,
++				   u32 ocp_addr, u16 data)
++{
++	u32 val;
++	int ret;
++
++	ret = rtl8365mb_phy_poll_busy(smi);
++	if (ret)
++		return ret;
++
++	ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
++	if (ret)
++		return ret;
++
++	/* Set PHY register data */
++	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG,
++			   data);
++	if (ret)
++		return ret;
++
++	/* Execute write operation */
++	val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
++			 RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
++	      FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
++			 RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE);
++	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
++	if (ret)
++		return ret;
++
++	ret = rtl8365mb_phy_poll_busy(smi);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static int rtl8365mb_phy_read(struct realtek_smi *smi, int phy, int regnum)
++{
++	u32 ocp_addr;
++	u16 val;
++	int ret;
++
++	if (phy > RTL8365MB_PHYADDRMAX)
++		return -EINVAL;
++
++	if (regnum > RTL8365MB_PHYREGMAX)
++		return -EINVAL;
++
++	ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
++
++	ret = rtl8365mb_phy_ocp_read(smi, phy, ocp_addr, &val);
++	if (ret) {
++		dev_err(smi->dev,
++			"failed to read PHY%d reg %02x @ %04x, ret %d\n", phy,
++			regnum, ocp_addr, ret);
++		return ret;
++	}
++
++	dev_dbg(smi->dev, "read PHY%d register 0x%02x @ %04x, val <- %04x\n",
++		phy, regnum, ocp_addr, val);
++
++	return val;
++}
++
++static int rtl8365mb_phy_write(struct realtek_smi *smi, int phy, int regnum,
++			       u16 val)
++{
++	u32 ocp_addr;
++	int ret;
++
++	if (phy > RTL8365MB_PHYADDRMAX)
++		return -EINVAL;
++
++	if (regnum > RTL8365MB_PHYREGMAX)
++		return -EINVAL;
++
++	ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
++
++	ret = rtl8365mb_phy_ocp_write(smi, phy, ocp_addr, val);
++	if (ret) {
++		dev_err(smi->dev,
++			"failed to write PHY%d reg %02x @ %04x, ret %d\n", phy,
++			regnum, ocp_addr, ret);
++		return ret;
++	}
++
++	dev_dbg(smi->dev, "write PHY%d register 0x%02x @ %04x, val -> %04x\n",
++		phy, regnum, ocp_addr, val);
++
++	return 0;
++}
++
++static enum dsa_tag_protocol
++rtl8365mb_get_tag_protocol(struct dsa_switch *ds, int port,
++			   enum dsa_tag_protocol mp)
++{
++	return DSA_TAG_PROTO_RTL8_4;
++}
++
++static int rtl8365mb_ext_config_rgmii(struct realtek_smi *smi, int port,
++				      phy_interface_t interface)
++{
++	struct device_node *dn;
++	struct dsa_port *dp;
++	int tx_delay = 0;
++	int rx_delay = 0;
++	int ext_port;
++	u32 val;
++	int ret;
++
++	if (port == smi->cpu_port) {
++		ext_port = 1;
++	} else {
++		dev_err(smi->dev, "only one EXT port is currently supported\n");
++		return -EINVAL;
++	}
++
++	dp = dsa_to_port(smi->ds, port);
++	dn = dp->dn;
++
++	/* Set the RGMII TX/RX delay
++	 *
++	 * The Realtek vendor driver indicates the following possible
++	 * configuration settings:
++	 *
++	 *   TX delay:
++	 *     0 = no delay, 1 = 2 ns delay
++	 *   RX delay:
++	 *     0 = no delay, 7 = maximum delay
++	 *     Each step is approximately 0.3 ns, so the maximum delay is about
++	 *     2.1 ns.
++	 *
++	 * The vendor driver also states that this must be configured *before*
++	 * forcing the external interface into a particular mode, which is done
++	 * in the rtl8365mb_phylink_mac_link_{up,down} functions.
++	 *
++	 * Only configure an RGMII TX (resp. RX) delay if the
++	 * tx-internal-delay-ps (resp. rx-internal-delay-ps) OF property is
++	 * specified. We ignore the detail of the RGMII interface mode
++	 * (RGMII_{RXID, TXID, etc.}), as this is considered to be a PHY-only
++	 * property.
++	 */
++	if (!of_property_read_u32(dn, "tx-internal-delay-ps", &val)) {
++		val = val / 1000; /* convert to ns */
++
++		if (val == 0 || val == 2)
++			tx_delay = val / 2;
++		else
++			dev_warn(smi->dev,
++				 "EXT port TX delay must be 0 or 2 ns\n");
++	}
++
++	if (!of_property_read_u32(dn, "rx-internal-delay-ps", &val)) {
++		val = DIV_ROUND_CLOSEST(val, 300); /* convert to 0.3 ns step */
++
++		if (val <= 7)
++			rx_delay = val;
++		else
++			dev_warn(smi->dev,
++				 "EXT port RX delay must be 0 to 2.1 ns\n");
++	}
++
++	ret = regmap_update_bits(
++		smi->map, RTL8365MB_EXT_RGMXF_REG(ext_port),
++		RTL8365MB_EXT_RGMXF_TXDELAY_MASK |
++			RTL8365MB_EXT_RGMXF_RXDELAY_MASK,
++		FIELD_PREP(RTL8365MB_EXT_RGMXF_TXDELAY_MASK, tx_delay) |
++			FIELD_PREP(RTL8365MB_EXT_RGMXF_RXDELAY_MASK, rx_delay));
++	if (ret)
++		return ret;
++
++	ret = regmap_update_bits(
++		smi->map, RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(ext_port),
++		RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(ext_port),
++		RTL8365MB_EXT_PORT_MODE_RGMII
++			<< RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(
++				   ext_port));
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static int rtl8365mb_ext_config_forcemode(struct realtek_smi *smi, int port,
++					  bool link, int speed, int duplex,
++					  bool tx_pause, bool rx_pause)
++{
++	u32 r_tx_pause;
++	u32 r_rx_pause;
++	u32 r_duplex;
++	u32 r_speed;
++	u32 r_link;
++	int ext_port;
++	int val;
++	int ret;
++
++	if (port == smi->cpu_port) {
++		ext_port = 1;
++	} else {
++		dev_err(smi->dev, "only one EXT port is currently supported\n");
++		return -EINVAL;
++	}
++
++	if (link) {
++		/* Force the link up with the desired configuration */
++		r_link = 1;
++		r_rx_pause = rx_pause ? 1 : 0;
++		r_tx_pause = tx_pause ? 1 : 0;
++
++		if (speed == SPEED_1000) {
++			r_speed = RTL8365MB_PORT_SPEED_1000M;
++		} else if (speed == SPEED_100) {
++			r_speed = RTL8365MB_PORT_SPEED_100M;
++		} else if (speed == SPEED_10) {
++			r_speed = RTL8365MB_PORT_SPEED_10M;
++		} else {
++			dev_err(smi->dev, "unsupported port speed %s\n",
++				phy_speed_to_str(speed));
++			return -EINVAL;
++		}
++
++		if (duplex == DUPLEX_FULL) {
++			r_duplex = 1;
++		} else if (duplex == DUPLEX_HALF) {
++			r_duplex = 0;
++		} else {
++			dev_err(smi->dev, "unsupported duplex %s\n",
++				phy_duplex_to_str(duplex));
++			return -EINVAL;
++		}
++	} else {
++		/* Force the link down and reset any programmed configuration */
++		r_link = 0;
++		r_tx_pause = 0;
++		r_rx_pause = 0;
++		r_speed = 0;
++		r_duplex = 0;
++	}
++
++	val = FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK, 1) |
++	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK,
++			 r_tx_pause) |
++	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK,
++			 r_rx_pause) |
++	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK, r_link) |
++	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK,
++			 r_duplex) |
++	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK, r_speed);
++	ret = regmap_write(smi->map,
++			   RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(ext_port),
++			   val);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static bool rtl8365mb_phy_mode_supported(struct dsa_switch *ds, int port,
++					 phy_interface_t interface)
++{
++	if (dsa_is_user_port(ds, port) &&
++	    (interface == PHY_INTERFACE_MODE_NA ||
++	     interface == PHY_INTERFACE_MODE_INTERNAL))
++		/* Internal PHY */
++		return true;
++	else if (dsa_is_cpu_port(ds, port) &&
++		 phy_interface_mode_is_rgmii(interface))
++		/* Extension MAC */
++		return true;
++
++	return false;
++}
++
++static void rtl8365mb_phylink_validate(struct dsa_switch *ds, int port,
++				       unsigned long *supported,
++				       struct phylink_link_state *state)
++{
++	struct realtek_smi *smi = ds->priv;
++	__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0 };
++
++	/* include/linux/phylink.h says:
++	 *     When @state->interface is %PHY_INTERFACE_MODE_NA, phylink
++	 *     expects the MAC driver to return all supported link modes.
++	 */
++	if (state->interface != PHY_INTERFACE_MODE_NA &&
++	    !rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
++		dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
++			phy_modes(state->interface), port);
++		linkmode_zero(supported);
++		return;
++	}
++
++	phylink_set_port_modes(mask);
++
++	phylink_set(mask, Autoneg);
++	phylink_set(mask, Pause);
++	phylink_set(mask, Asym_Pause);
++
++	phylink_set(mask, 10baseT_Half);
++	phylink_set(mask, 10baseT_Full);
++	phylink_set(mask, 100baseT_Half);
++	phylink_set(mask, 100baseT_Full);
++	phylink_set(mask, 1000baseT_Full);
++
++	linkmode_and(supported, supported, mask);
++	linkmode_and(state->advertising, state->advertising, mask);
++}
++
++static void rtl8365mb_phylink_mac_config(struct dsa_switch *ds, int port,
++					 unsigned int mode,
++					 const struct phylink_link_state *state)
++{
++	struct realtek_smi *smi = ds->priv;
++	int ret;
++
++	if (!rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
++		dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
++			phy_modes(state->interface), port);
++		return;
++	}
++
++	if (mode != MLO_AN_PHY && mode != MLO_AN_FIXED) {
++		dev_err(smi->dev,
++			"port %d supports only conventional PHY or fixed-link\n",
++			port);
++		return;
++	}
++
++	if (phy_interface_mode_is_rgmii(state->interface)) {
++		ret = rtl8365mb_ext_config_rgmii(smi, port, state->interface);
++		if (ret)
++			dev_err(smi->dev,
++				"failed to configure RGMII mode on port %d: %d\n",
++				port, ret);
++		return;
++	}
++
++	/* TODO: Implement MII and RMII modes, which the RTL8365MB-VC also
++	 * supports
++	 */
++}
++
++static void rtl8365mb_phylink_mac_link_down(struct dsa_switch *ds, int port,
++					    unsigned int mode,
++					    phy_interface_t interface)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb_port *p;
++	struct rtl8365mb *mb;
++	int ret;
++
++	mb = smi->chip_data;
++	p = &mb->ports[port];
++	cancel_delayed_work_sync(&p->mib_work);
++
++	if (phy_interface_mode_is_rgmii(interface)) {
++		ret = rtl8365mb_ext_config_forcemode(smi, port, false, 0, 0,
++						     false, false);
++		if (ret)
++			dev_err(smi->dev,
++				"failed to reset forced mode on port %d: %d\n",
++				port, ret);
++
++		return;
++	}
++}
++
++static void rtl8365mb_phylink_mac_link_up(struct dsa_switch *ds, int port,
++					  unsigned int mode,
++					  phy_interface_t interface,
++					  struct phy_device *phydev, int speed,
++					  int duplex, bool tx_pause,
++					  bool rx_pause)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb_port *p;
++	struct rtl8365mb *mb;
++	int ret;
++
++	mb = smi->chip_data;
++	p = &mb->ports[port];
++	schedule_delayed_work(&p->mib_work, 0);
++
++	if (phy_interface_mode_is_rgmii(interface)) {
++		ret = rtl8365mb_ext_config_forcemode(smi, port, true, speed,
++						     duplex, tx_pause,
++						     rx_pause);
++		if (ret)
++			dev_err(smi->dev,
++				"failed to force mode on port %d: %d\n", port,
++				ret);
++
++		return;
++	}
++}
++
++static void rtl8365mb_port_stp_state_set(struct dsa_switch *ds, int port,
++					 u8 state)
++{
++	struct realtek_smi *smi = ds->priv;
++	enum rtl8365mb_stp_state val;
++	int msti = 0;
++
++	switch (state) {
++	case BR_STATE_DISABLED:
++		val = RTL8365MB_STP_STATE_DISABLED;
++		break;
++	case BR_STATE_BLOCKING:
++	case BR_STATE_LISTENING:
++		val = RTL8365MB_STP_STATE_BLOCKING;
++		break;
++	case BR_STATE_LEARNING:
++		val = RTL8365MB_STP_STATE_LEARNING;
++		break;
++	case BR_STATE_FORWARDING:
++		val = RTL8365MB_STP_STATE_FORWARDING;
++		break;
++	default:
++		dev_err(smi->dev, "invalid STP state: %u\n", state);
++		return;
++	}
++
++	regmap_update_bits(smi->map, RTL8365MB_MSTI_CTRL_REG(msti, port),
++			   RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(port),
++			   val << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(port));
++}
++
++static int rtl8365mb_port_set_learning(struct realtek_smi *smi, int port,
++				       bool enable)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++
++	/* Enable/disable learning by limiting the number of L2 addresses the
++	 * port can learn. Realtek documentation states that a limit of zero
++	 * disables learning. When enabling learning, set it to the chip's
++	 * maximum.
++	 */
++	return regmap_write(smi->map, RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(port),
++			    enable ? mb->learn_limit_max : 0);
++}
++
++static int rtl8365mb_port_set_isolation(struct realtek_smi *smi, int port,
++					u32 mask)
++{
++	return regmap_write(smi->map, RTL8365MB_PORT_ISOLATION_REG(port), mask);
++}
++
++static int rtl8365mb_mib_counter_read(struct realtek_smi *smi, int port,
++				      u32 offset, u32 length, u64 *mibvalue)
++{
++	u64 tmpvalue = 0;
++	u32 val;
++	int ret;
++	int i;
++
++	/* The MIB address is an SRAM address. We request a particular address
++	 * and then poll the control register before reading the value from some
++	 * counter registers.
++	 */
++	ret = regmap_write(smi->map, RTL8365MB_MIB_ADDRESS_REG,
++			   RTL8365MB_MIB_ADDRESS(port, offset));
++	if (ret)
++		return ret;
++
++	/* Poll for completion */
++	ret = regmap_read_poll_timeout(smi->map, RTL8365MB_MIB_CTRL0_REG, val,
++				       !(val & RTL8365MB_MIB_CTRL0_BUSY_MASK),
++				       10, 100);
++	if (ret)
++		return ret;
++
++	/* Presumably this indicates a MIB counter read failure */
++	if (val & RTL8365MB_MIB_CTRL0_RESET_MASK)
++		return -EIO;
++
++	/* There are four MIB counter registers each holding a 16 bit word of a
++	 * MIB counter. Depending on the offset, we should read from the upper
++	 * two or lower two registers. In case the MIB counter is 4 words, we
++	 * read from all four registers.
++	 */
++	if (length == 4)
++		offset = 3;
++	else
++		offset = (offset + 1) % 4;
++
++	/* Read the MIB counter 16 bits at a time */
++	for (i = 0; i < length; i++) {
++		ret = regmap_read(smi->map,
++				  RTL8365MB_MIB_COUNTER_REG(offset - i), &val);
++		if (ret)
++			return ret;
++
++		tmpvalue = ((tmpvalue) << 16) | (val & 0xFFFF);
++	}
++
++	/* Only commit the result if no error occurred */
++	*mibvalue = tmpvalue;
++
++	return 0;
++}
++
++static void rtl8365mb_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb *mb;
++	int ret;
++	int i;
++
++	mb = smi->chip_data;
++
++	mutex_lock(&mb->mib_lock);
++	for (i = 0; i < RTL8365MB_MIB_END; i++) {
++		struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
++
++		ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
++						 mib->length, &data[i]);
++		if (ret) {
++			dev_err(smi->dev,
++				"failed to read port %d counters: %d\n", port,
++				ret);
++			break;
++		}
++	}
++	mutex_unlock(&mb->mib_lock);
++}
++
++static void rtl8365mb_get_strings(struct dsa_switch *ds, int port, u32 stringset, u8 *data)
++{
++	int i;
++
++	if (stringset != ETH_SS_STATS)
++		return;
++
++	for (i = 0; i < RTL8365MB_MIB_END; i++) {
++		struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
++
++		strncpy(data + i * ETH_GSTRING_LEN, mib->name, ETH_GSTRING_LEN);
++	}
++}
++
++static int rtl8365mb_get_sset_count(struct dsa_switch *ds, int port, int sset)
++{
++	if (sset != ETH_SS_STATS)
++		return -EOPNOTSUPP;
++
++	return RTL8365MB_MIB_END;
++}
++
++static void rtl8365mb_get_phy_stats(struct dsa_switch *ds, int port,
++				    struct ethtool_eth_phy_stats *phy_stats)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb_mib_counter *mib;
++	struct rtl8365mb *mb;
++
++	mb = smi->chip_data;
++	mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3StatsSymbolErrors];
++
++	mutex_lock(&mb->mib_lock);
++	rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
++				   &phy_stats->SymbolErrorDuringCarrier);
++	mutex_unlock(&mb->mib_lock);
++}
++
++static void rtl8365mb_get_mac_stats(struct dsa_switch *ds, int port,
++				    struct ethtool_eth_mac_stats *mac_stats)
++{
++	u64 cnt[RTL8365MB_MIB_END] = {
++		[RTL8365MB_MIB_ifOutOctets] = 1,
++		[RTL8365MB_MIB_ifOutUcastPkts] = 1,
++		[RTL8365MB_MIB_ifOutMulticastPkts] = 1,
++		[RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
++		[RTL8365MB_MIB_dot3OutPauseFrames] = 1,
++		[RTL8365MB_MIB_ifOutDiscards] = 1,
++		[RTL8365MB_MIB_ifInOctets] = 1,
++		[RTL8365MB_MIB_ifInUcastPkts] = 1,
++		[RTL8365MB_MIB_ifInMulticastPkts] = 1,
++		[RTL8365MB_MIB_ifInBroadcastPkts] = 1,
++		[RTL8365MB_MIB_dot3InPauseFrames] = 1,
++		[RTL8365MB_MIB_dot3StatsSingleCollisionFrames] = 1,
++		[RTL8365MB_MIB_dot3StatsMultipleCollisionFrames] = 1,
++		[RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
++		[RTL8365MB_MIB_dot3StatsDeferredTransmissions] = 1,
++		[RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
++		[RTL8365MB_MIB_dot3StatsExcessiveCollisions] = 1,
++
++	};
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb *mb;
++	int ret;
++	int i;
++
++	mb = smi->chip_data;
++
++	mutex_lock(&mb->mib_lock);
++	for (i = 0; i < RTL8365MB_MIB_END; i++) {
++		struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
++
++		/* Only fetch required MIB counters (marked = 1 above) */
++		if (!cnt[i])
++			continue;
++
++		ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
++						 mib->length, &cnt[i]);
++		if (ret)
++			break;
++	}
++	mutex_unlock(&mb->mib_lock);
++
++	/* The RTL8365MB-VC exposes MIB objects, which we have to translate into
++	 * IEEE 802.3 Managed Objects. This is not always completely faithful,
++	 * but we try our best. See RFC 3635 for a detailed treatment of the
++	 * subject.
++	 */
++
++	mac_stats->FramesTransmittedOK = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
++					 cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
++					 cnt[RTL8365MB_MIB_ifOutBroadcastPkts] +
++					 cnt[RTL8365MB_MIB_dot3OutPauseFrames] -
++					 cnt[RTL8365MB_MIB_ifOutDiscards];
++	mac_stats->SingleCollisionFrames =
++		cnt[RTL8365MB_MIB_dot3StatsSingleCollisionFrames];
++	mac_stats->MultipleCollisionFrames =
++		cnt[RTL8365MB_MIB_dot3StatsMultipleCollisionFrames];
++	mac_stats->FramesReceivedOK = cnt[RTL8365MB_MIB_ifInUcastPkts] +
++				      cnt[RTL8365MB_MIB_ifInMulticastPkts] +
++				      cnt[RTL8365MB_MIB_ifInBroadcastPkts] +
++				      cnt[RTL8365MB_MIB_dot3InPauseFrames];
++	mac_stats->FrameCheckSequenceErrors =
++		cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
++	mac_stats->OctetsTransmittedOK = cnt[RTL8365MB_MIB_ifOutOctets] -
++					 18 * mac_stats->FramesTransmittedOK;
++	mac_stats->FramesWithDeferredXmissions =
++		cnt[RTL8365MB_MIB_dot3StatsDeferredTransmissions];
++	mac_stats->LateCollisions = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
++	mac_stats->FramesAbortedDueToXSColls =
++		cnt[RTL8365MB_MIB_dot3StatsExcessiveCollisions];
++	mac_stats->OctetsReceivedOK = cnt[RTL8365MB_MIB_ifInOctets] -
++				      18 * mac_stats->FramesReceivedOK;
++	mac_stats->MulticastFramesXmittedOK =
++		cnt[RTL8365MB_MIB_ifOutMulticastPkts];
++	mac_stats->BroadcastFramesXmittedOK =
++		cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
++	mac_stats->MulticastFramesReceivedOK =
++		cnt[RTL8365MB_MIB_ifInMulticastPkts];
++	mac_stats->BroadcastFramesReceivedOK =
++		cnt[RTL8365MB_MIB_ifInBroadcastPkts];
++}
++
++static void rtl8365mb_get_ctrl_stats(struct dsa_switch *ds, int port,
++				     struct ethtool_eth_ctrl_stats *ctrl_stats)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb_mib_counter *mib;
++	struct rtl8365mb *mb;
++
++	mb = smi->chip_data;
++	mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3ControlInUnknownOpcodes];
++
++	mutex_lock(&mb->mib_lock);
++	rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
++				   &ctrl_stats->UnsupportedOpcodesReceived);
++	mutex_unlock(&mb->mib_lock);
++}
++
++static void rtl8365mb_stats_update(struct realtek_smi *smi, int port)
++{
++	u64 cnt[RTL8365MB_MIB_END] = {
++		[RTL8365MB_MIB_ifOutOctets] = 1,
++		[RTL8365MB_MIB_ifOutUcastPkts] = 1,
++		[RTL8365MB_MIB_ifOutMulticastPkts] = 1,
++		[RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
++		[RTL8365MB_MIB_ifOutDiscards] = 1,
++		[RTL8365MB_MIB_ifInOctets] = 1,
++		[RTL8365MB_MIB_ifInUcastPkts] = 1,
++		[RTL8365MB_MIB_ifInMulticastPkts] = 1,
++		[RTL8365MB_MIB_ifInBroadcastPkts] = 1,
++		[RTL8365MB_MIB_etherStatsDropEvents] = 1,
++		[RTL8365MB_MIB_etherStatsCollisions] = 1,
++		[RTL8365MB_MIB_etherStatsFragments] = 1,
++		[RTL8365MB_MIB_etherStatsJabbers] = 1,
++		[RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
++		[RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
++	};
++	struct rtl8365mb *mb = smi->chip_data;
++	struct rtnl_link_stats64 *stats;
++	int ret;
++	int i;
++
++	stats = &mb->ports[port].stats;
++
++	mutex_lock(&mb->mib_lock);
++	for (i = 0; i < RTL8365MB_MIB_END; i++) {
++		struct rtl8365mb_mib_counter *c = &rtl8365mb_mib_counters[i];
++
++		/* Only fetch required MIB counters (marked = 1 above) */
++		if (!cnt[i])
++			continue;
++
++		ret = rtl8365mb_mib_counter_read(smi, port, c->offset,
++						 c->length, &cnt[i]);
++		if (ret)
++			break;
++	}
++	mutex_unlock(&mb->mib_lock);
++
++	/* Don't update statistics if there was an error reading the counters */
++	if (ret)
++		return;
++
++	spin_lock(&mb->ports[port].stats_lock);
++
++	stats->rx_packets = cnt[RTL8365MB_MIB_ifInUcastPkts] +
++			    cnt[RTL8365MB_MIB_ifInMulticastPkts] +
++			    cnt[RTL8365MB_MIB_ifInBroadcastPkts] -
++			    cnt[RTL8365MB_MIB_ifOutDiscards];
++
++	stats->tx_packets = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
++			    cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
++			    cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
++
++	/* if{In,Out}Octets includes FCS - remove it */
++	stats->rx_bytes = cnt[RTL8365MB_MIB_ifInOctets] - 4 * stats->rx_packets;
++	stats->tx_bytes =
++		cnt[RTL8365MB_MIB_ifOutOctets] - 4 * stats->tx_packets;
++
++	stats->rx_dropped = cnt[RTL8365MB_MIB_etherStatsDropEvents];
++	stats->tx_dropped = cnt[RTL8365MB_MIB_ifOutDiscards];
++
++	stats->multicast = cnt[RTL8365MB_MIB_ifInMulticastPkts];
++	stats->collisions = cnt[RTL8365MB_MIB_etherStatsCollisions];
++
++	stats->rx_length_errors = cnt[RTL8365MB_MIB_etherStatsFragments] +
++				  cnt[RTL8365MB_MIB_etherStatsJabbers];
++	stats->rx_crc_errors = cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
++	stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors;
++
++	stats->tx_aborted_errors = cnt[RTL8365MB_MIB_ifOutDiscards];
++	stats->tx_window_errors = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
++	stats->tx_errors = stats->tx_aborted_errors + stats->tx_window_errors;
++
++	spin_unlock(&mb->ports[port].stats_lock);
++}
++
++static void rtl8365mb_stats_poll(struct work_struct *work)
++{
++	struct rtl8365mb_port *p = container_of(to_delayed_work(work),
++						struct rtl8365mb_port,
++						mib_work);
++	struct realtek_smi *smi = p->smi;
++
++	rtl8365mb_stats_update(smi, p->index);
++
++	schedule_delayed_work(&p->mib_work, RTL8365MB_STATS_INTERVAL_JIFFIES);
++}
++
++static void rtl8365mb_get_stats64(struct dsa_switch *ds, int port,
++				  struct rtnl_link_stats64 *s)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb_port *p;
++	struct rtl8365mb *mb;
++
++	mb = smi->chip_data;
++	p = &mb->ports[port];
++
++	spin_lock(&p->stats_lock);
++	memcpy(s, &p->stats, sizeof(*s));
++	spin_unlock(&p->stats_lock);
++}
++
++static void rtl8365mb_stats_setup(struct realtek_smi *smi)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++	int i;
++
++	/* Per-chip global mutex to protect MIB counter access, since doing
++	 * so requires accessing a series of registers in a particular order.
++	 */
++	mutex_init(&mb->mib_lock);
++
++	for (i = 0; i < smi->num_ports; i++) {
++		struct rtl8365mb_port *p = &mb->ports[i];
++
++		if (dsa_is_unused_port(smi->ds, i))
++			continue;
++
++		/* Per-port spinlock to protect the stats64 data */
++		spin_lock_init(&p->stats_lock);
++
++		/* This work polls the MIB counters and keeps the stats64 data
++		 * up-to-date.
++		 */
++		INIT_DELAYED_WORK(&p->mib_work, rtl8365mb_stats_poll);
++	}
++}
++
++static void rtl8365mb_stats_teardown(struct realtek_smi *smi)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++	int i;
++
++	for (i = 0; i < smi->num_ports; i++) {
++		struct rtl8365mb_port *p = &mb->ports[i];
++
++		if (dsa_is_unused_port(smi->ds, i))
++			continue;
++
++		cancel_delayed_work_sync(&p->mib_work);
++	}
++}
++
++static int rtl8365mb_get_and_clear_status_reg(struct realtek_smi *smi, u32 reg,
++					      u32 *val)
++{
++	int ret;
++
++	ret = regmap_read(smi->map, reg, val);
++	if (ret)
++		return ret;
++
++	return regmap_write(smi->map, reg, *val);
++}
++
++static irqreturn_t rtl8365mb_irq(int irq, void *data)
++{
++	struct realtek_smi *smi = data;
++	unsigned long line_changes = 0;
++	struct rtl8365mb *mb;
++	u32 stat;
++	int line;
++	int ret;
++
++	mb = smi->chip_data;
++
++	ret = rtl8365mb_get_and_clear_status_reg(smi, RTL8365MB_INTR_STATUS_REG,
++						 &stat);
++	if (ret)
++		goto out_error;
++
++	if (stat & RTL8365MB_INTR_LINK_CHANGE_MASK) {
++		u32 linkdown_ind;
++		u32 linkup_ind;
++		u32 val;
++
++		ret = rtl8365mb_get_and_clear_status_reg(
++			smi, RTL8365MB_PORT_LINKUP_IND_REG, &val);
++		if (ret)
++			goto out_error;
++
++		linkup_ind = FIELD_GET(RTL8365MB_PORT_LINKUP_IND_MASK, val);
++
++		ret = rtl8365mb_get_and_clear_status_reg(
++			smi, RTL8365MB_PORT_LINKDOWN_IND_REG, &val);
++		if (ret)
++			goto out_error;
++
++		linkdown_ind = FIELD_GET(RTL8365MB_PORT_LINKDOWN_IND_MASK, val);
++
++		line_changes = (linkup_ind | linkdown_ind) & mb->port_mask;
++	}
++
++	if (!line_changes)
++		goto out_none;
++
++	for_each_set_bit(line, &line_changes, smi->num_ports) {
++		int child_irq = irq_find_mapping(smi->irqdomain, line);
++
++		handle_nested_irq(child_irq);
++	}
++
++	return IRQ_HANDLED;
++
++out_error:
++	dev_err(smi->dev, "failed to read interrupt status: %d\n", ret);
++
++out_none:
++	return IRQ_NONE;
++}
++
++static struct irq_chip rtl8365mb_irq_chip = {
++	.name = "rtl8365mb",
++	/* The hardware doesn't support masking IRQs on a per-port basis */
++};
++
++static int rtl8365mb_irq_map(struct irq_domain *domain, unsigned int irq,
++			     irq_hw_number_t hwirq)
++{
++	irq_set_chip_data(irq, domain->host_data);
++	irq_set_chip_and_handler(irq, &rtl8365mb_irq_chip, handle_simple_irq);
++	irq_set_nested_thread(irq, 1);
++	irq_set_noprobe(irq);
++
++	return 0;
++}
++
++static void rtl8365mb_irq_unmap(struct irq_domain *d, unsigned int irq)
++{
++	irq_set_nested_thread(irq, 0);
++	irq_set_chip_and_handler(irq, NULL, NULL);
++	irq_set_chip_data(irq, NULL);
++}
++
++static const struct irq_domain_ops rtl8365mb_irqdomain_ops = {
++	.map = rtl8365mb_irq_map,
++	.unmap = rtl8365mb_irq_unmap,
++	.xlate = irq_domain_xlate_onecell,
++};
++
++static int rtl8365mb_set_irq_enable(struct realtek_smi *smi, bool enable)
++{
++	return regmap_update_bits(smi->map, RTL8365MB_INTR_CTRL_REG,
++				  RTL8365MB_INTR_LINK_CHANGE_MASK,
++				  FIELD_PREP(RTL8365MB_INTR_LINK_CHANGE_MASK,
++					     enable ? 1 : 0));
++}
++
++static int rtl8365mb_irq_enable(struct realtek_smi *smi)
++{
++	return rtl8365mb_set_irq_enable(smi, true);
++}
++
++static int rtl8365mb_irq_disable(struct realtek_smi *smi)
++{
++	return rtl8365mb_set_irq_enable(smi, false);
++}
++
++static int rtl8365mb_irq_setup(struct realtek_smi *smi)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++	struct device_node *intc;
++	u32 irq_trig;
++	int virq;
++	int irq;
++	u32 val;
++	int ret;
++	int i;
++
++	intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
++	if (!intc) {
++		dev_err(smi->dev, "missing child interrupt-controller node\n");
++		return -EINVAL;
++	}
++
++	/* rtl8365mb IRQs cascade off this one */
++	irq = of_irq_get(intc, 0);
++	if (irq <= 0) {
++		if (irq != -EPROBE_DEFER)
++			dev_err(smi->dev, "failed to get parent irq: %d\n",
++				irq);
++		ret = irq ? irq : -EINVAL;
++		goto out_put_node;
++	}
++
++	smi->irqdomain = irq_domain_add_linear(intc, smi->num_ports,
++					       &rtl8365mb_irqdomain_ops, smi);
++	if (!smi->irqdomain) {
++		dev_err(smi->dev, "failed to add irq domain\n");
++		ret = -ENOMEM;
++		goto out_put_node;
++	}
++
++	for (i = 0; i < smi->num_ports; i++) {
++		virq = irq_create_mapping(smi->irqdomain, i);
++		if (!virq) {
++			dev_err(smi->dev,
++				"failed to create irq domain mapping\n");
++			ret = -EINVAL;
++			goto out_remove_irqdomain;
++		}
++
++		irq_set_parent(virq, irq);
++	}
++
++	/* Configure chip interrupt signal polarity */
++	irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
++	switch (irq_trig) {
++	case IRQF_TRIGGER_RISING:
++	case IRQF_TRIGGER_HIGH:
++		val = RTL8365MB_INTR_POLARITY_HIGH;
++		break;
++	case IRQF_TRIGGER_FALLING:
++	case IRQF_TRIGGER_LOW:
++		val = RTL8365MB_INTR_POLARITY_LOW;
++		break;
++	default:
++		dev_err(smi->dev, "unsupported irq trigger type %u\n",
++			irq_trig);
++		ret = -EINVAL;
++		goto out_remove_irqdomain;
++	}
++
++	ret = regmap_update_bits(smi->map, RTL8365MB_INTR_POLARITY_REG,
++				 RTL8365MB_INTR_POLARITY_MASK,
++				 FIELD_PREP(RTL8365MB_INTR_POLARITY_MASK, val));
++	if (ret)
++		goto out_remove_irqdomain;
++
++	/* Disable the interrupt in case the chip has it enabled on reset */
++	ret = rtl8365mb_irq_disable(smi);
++	if (ret)
++		goto out_remove_irqdomain;
++
++	/* Clear the interrupt status register */
++	ret = regmap_write(smi->map, RTL8365MB_INTR_STATUS_REG,
++			   RTL8365MB_INTR_ALL_MASK);
++	if (ret)
++		goto out_remove_irqdomain;
++
++	ret = request_threaded_irq(irq, NULL, rtl8365mb_irq, IRQF_ONESHOT,
++				   "rtl8365mb", smi);
++	if (ret) {
++		dev_err(smi->dev, "failed to request irq: %d\n", ret);
++		goto out_remove_irqdomain;
++	}
++
++	/* Store the irq so that we know to free it during teardown */
++	mb->irq = irq;
++
++	ret = rtl8365mb_irq_enable(smi);
++	if (ret)
++		goto out_free_irq;
++
++	of_node_put(intc);
++
++	return 0;
++
++out_free_irq:
++	free_irq(mb->irq, smi);
++	mb->irq = 0;
++
++out_remove_irqdomain:
++	for (i = 0; i < smi->num_ports; i++) {
++		virq = irq_find_mapping(smi->irqdomain, i);
++		irq_dispose_mapping(virq);
++	}
++
++	irq_domain_remove(smi->irqdomain);
++	smi->irqdomain = NULL;
++
++out_put_node:
++	of_node_put(intc);
++
++	return ret;
++}
++
++static void rtl8365mb_irq_teardown(struct realtek_smi *smi)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++	int virq;
++	int i;
++
++	if (mb->irq) {
++		free_irq(mb->irq, smi);
++		mb->irq = 0;
++	}
++
++	if (smi->irqdomain) {
++		for (i = 0; i < smi->num_ports; i++) {
++			virq = irq_find_mapping(smi->irqdomain, i);
++			irq_dispose_mapping(virq);
++		}
++
++		irq_domain_remove(smi->irqdomain);
++		smi->irqdomain = NULL;
++	}
++}
++
++static int rtl8365mb_cpu_config(struct realtek_smi *smi)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++	struct rtl8365mb_cpu *cpu = &mb->cpu;
++	u32 val;
++	int ret;
++
++	ret = regmap_update_bits(smi->map, RTL8365MB_CPU_PORT_MASK_REG,
++				 RTL8365MB_CPU_PORT_MASK_MASK,
++				 FIELD_PREP(RTL8365MB_CPU_PORT_MASK_MASK,
++					    cpu->mask));
++	if (ret)
++		return ret;
++
++	val = FIELD_PREP(RTL8365MB_CPU_CTRL_EN_MASK, cpu->enable ? 1 : 0) |
++	      FIELD_PREP(RTL8365MB_CPU_CTRL_INSERTMODE_MASK, cpu->insert) |
++	      FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_POSITION_MASK, cpu->position) |
++	      FIELD_PREP(RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK, cpu->rx_length) |
++	      FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK, cpu->format) |
++	      FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_MASK, cpu->trap_port) |
++	      FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK,
++			 cpu->trap_port >> 3);
++	ret = regmap_write(smi->map, RTL8365MB_CPU_CTRL_REG, val);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static int rtl8365mb_switch_init(struct realtek_smi *smi)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++	int ret;
++	int i;
++
++	/* Do any chip-specific init jam before getting to the common stuff */
++	if (mb->jam_table) {
++		for (i = 0; i < mb->jam_size; i++) {
++			ret = regmap_write(smi->map, mb->jam_table[i].reg,
++					   mb->jam_table[i].val);
++			if (ret)
++				return ret;
++		}
++	}
++
++	/* Common init jam */
++	for (i = 0; i < ARRAY_SIZE(rtl8365mb_init_jam_common); i++) {
++		ret = regmap_write(smi->map, rtl8365mb_init_jam_common[i].reg,
++				   rtl8365mb_init_jam_common[i].val);
++		if (ret)
++			return ret;
++	}
++
++	return 0;
++}
++
++static int rtl8365mb_reset_chip(struct realtek_smi *smi)
++{
++	u32 val;
++
++	realtek_smi_write_reg_noack(smi, RTL8365MB_CHIP_RESET_REG,
++				    FIELD_PREP(RTL8365MB_CHIP_RESET_HW_MASK,
++					       1));
++
++	/* Realtek documentation says the chip needs 1 second to reset. Sleep
++	 * for 100 ms before accessing any registers to prevent ACK timeouts.
++	 */
++	msleep(100);
++	return regmap_read_poll_timeout(smi->map, RTL8365MB_CHIP_RESET_REG, val,
++					!(val & RTL8365MB_CHIP_RESET_HW_MASK),
++					20000, 1e6);
++}
++
++static int rtl8365mb_setup(struct dsa_switch *ds)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8365mb *mb;
++	int ret;
++	int i;
++
++	mb = smi->chip_data;
++
++	ret = rtl8365mb_reset_chip(smi);
++	if (ret) {
++		dev_err(smi->dev, "failed to reset chip: %d\n", ret);
++		goto out_error;
++	}
++
++	/* Configure switch to vendor-defined initial state */
++	ret = rtl8365mb_switch_init(smi);
++	if (ret) {
++		dev_err(smi->dev, "failed to initialize switch: %d\n", ret);
++		goto out_error;
++	}
++
++	/* Set up cascading IRQs */
++	ret = rtl8365mb_irq_setup(smi);
++	if (ret == -EPROBE_DEFER)
++		return ret;
++	else if (ret)
++		dev_info(smi->dev, "no interrupt support\n");
++
++	/* Configure CPU tagging */
++	ret = rtl8365mb_cpu_config(smi);
++	if (ret)
++		goto out_teardown_irq;
++
++	/* Configure ports */
++	for (i = 0; i < smi->num_ports; i++) {
++		struct rtl8365mb_port *p = &mb->ports[i];
++
++		if (dsa_is_unused_port(smi->ds, i))
++			continue;
++
++		/* Set up per-port private data */
++		p->smi = smi;
++		p->index = i;
++
++		/* Forward only to the CPU */
++		ret = rtl8365mb_port_set_isolation(smi, i, BIT(smi->cpu_port));
++		if (ret)
++			goto out_teardown_irq;
++
++		/* Disable learning */
++		ret = rtl8365mb_port_set_learning(smi, i, false);
++		if (ret)
++			goto out_teardown_irq;
++
++		/* Set the initial STP state of all ports to DISABLED, otherwise
++		 * ports will still forward frames to the CPU despite being
++		 * administratively down by default.
++		 */
++		rtl8365mb_port_stp_state_set(smi->ds, i, BR_STATE_DISABLED);
++	}
++
++	/* Set maximum packet length to 1536 bytes */
++	ret = regmap_update_bits(smi->map, RTL8365MB_CFG0_MAX_LEN_REG,
++				 RTL8365MB_CFG0_MAX_LEN_MASK,
++				 FIELD_PREP(RTL8365MB_CFG0_MAX_LEN_MASK, 1536));
++	if (ret)
++		goto out_teardown_irq;
++
++	ret = realtek_smi_setup_mdio(smi);
++	if (ret) {
++		dev_err(smi->dev, "could not set up MDIO bus\n");
++		goto out_teardown_irq;
++	}
++
++	/* Start statistics counter polling */
++	rtl8365mb_stats_setup(smi);
++
++	return 0;
++
++out_teardown_irq:
++	rtl8365mb_irq_teardown(smi);
++
++out_error:
++	return ret;
++}
++
++static void rtl8365mb_teardown(struct dsa_switch *ds)
++{
++	struct realtek_smi *smi = ds->priv;
++
++	rtl8365mb_stats_teardown(smi);
++	rtl8365mb_irq_teardown(smi);
++}
++
++static int rtl8365mb_get_chip_id_and_ver(struct regmap *map, u32 *id, u32 *ver)
++{
++	int ret;
++
++	/* For some reason we have to write a magic value to an arbitrary
++	 * register whenever accessing the chip ID/version registers.
++	 */
++	ret = regmap_write(map, RTL8365MB_MAGIC_REG, RTL8365MB_MAGIC_VALUE);
++	if (ret)
++		return ret;
++
++	ret = regmap_read(map, RTL8365MB_CHIP_ID_REG, id);
++	if (ret)
++		return ret;
++
++	ret = regmap_read(map, RTL8365MB_CHIP_VER_REG, ver);
++	if (ret)
++		return ret;
++
++	/* Reset magic register */
++	ret = regmap_write(map, RTL8365MB_MAGIC_REG, 0);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static int rtl8365mb_detect(struct realtek_smi *smi)
++{
++	struct rtl8365mb *mb = smi->chip_data;
++	u32 chip_id;
++	u32 chip_ver;
++	int ret;
++
++	ret = rtl8365mb_get_chip_id_and_ver(smi->map, &chip_id, &chip_ver);
++	if (ret) {
++		dev_err(smi->dev, "failed to read chip id and version: %d\n",
++			ret);
++		return ret;
++	}
++
++	switch (chip_id) {
++	case RTL8365MB_CHIP_ID_8365MB_VC:
++		dev_info(smi->dev,
++			 "found an RTL8365MB-VC switch (ver=0x%04x)\n",
++			 chip_ver);
++
++		smi->cpu_port = RTL8365MB_CPU_PORT_NUM_8365MB_VC;
++		smi->num_ports = smi->cpu_port + 1;
++
++		mb->smi = smi;
++		mb->chip_id = chip_id;
++		mb->chip_ver = chip_ver;
++		mb->port_mask = BIT(smi->num_ports) - 1;
++		mb->learn_limit_max = RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC;
++		mb->jam_table = rtl8365mb_init_jam_8365mb_vc;
++		mb->jam_size = ARRAY_SIZE(rtl8365mb_init_jam_8365mb_vc);
++
++		mb->cpu.enable = 1;
++		mb->cpu.mask = BIT(smi->cpu_port);
++		mb->cpu.trap_port = smi->cpu_port;
++		mb->cpu.insert = RTL8365MB_CPU_INSERT_TO_ALL;
++		mb->cpu.position = RTL8365MB_CPU_POS_AFTER_SA;
++		mb->cpu.rx_length = RTL8365MB_CPU_RXLEN_64BYTES;
++		mb->cpu.format = RTL8365MB_CPU_FORMAT_8BYTES;
++
++		break;
++	default:
++		dev_err(smi->dev,
++			"found an unknown Realtek switch (id=0x%04x, ver=0x%04x)\n",
++			chip_id, chip_ver);
++		return -ENODEV;
++	}
++
++	return 0;
++}
++
++static const struct dsa_switch_ops rtl8365mb_switch_ops = {
++	.get_tag_protocol = rtl8365mb_get_tag_protocol,
++	.setup = rtl8365mb_setup,
++	.teardown = rtl8365mb_teardown,
++	.phylink_validate = rtl8365mb_phylink_validate,
++	.phylink_mac_config = rtl8365mb_phylink_mac_config,
++	.phylink_mac_link_down = rtl8365mb_phylink_mac_link_down,
++	.phylink_mac_link_up = rtl8365mb_phylink_mac_link_up,
++	.port_stp_state_set = rtl8365mb_port_stp_state_set,
++	.get_strings = rtl8365mb_get_strings,
++	.get_ethtool_stats = rtl8365mb_get_ethtool_stats,
++	.get_sset_count = rtl8365mb_get_sset_count,
++	.get_eth_phy_stats = rtl8365mb_get_phy_stats,
++	.get_eth_mac_stats = rtl8365mb_get_mac_stats,
++	.get_eth_ctrl_stats = rtl8365mb_get_ctrl_stats,
++	.get_stats64 = rtl8365mb_get_stats64,
++};
++
++static const struct realtek_smi_ops rtl8365mb_smi_ops = {
++	.detect = rtl8365mb_detect,
++	.phy_read = rtl8365mb_phy_read,
++	.phy_write = rtl8365mb_phy_write,
++};
++
++const struct realtek_smi_variant rtl8365mb_variant = {
++	.ds_ops = &rtl8365mb_switch_ops,
++	.ops = &rtl8365mb_smi_ops,
++	.clk_delay = 10,
++	.cmd_read = 0xb9,
++	.cmd_write = 0xb8,
++	.chip_data_sz = sizeof(struct rtl8365mb),
++};
++EXPORT_SYMBOL_GPL(rtl8365mb_variant);
+diff --git a/drivers/net/dsa/realtek/rtl8366.c b/drivers/net/dsa/realtek/rtl8366.c
+new file mode 100644
+index 0000000000000..bdb8d8d348807
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366.c
+@@ -0,0 +1,448 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Realtek SMI library helpers for the RTL8366x variants
++ * RTL8366RB and RTL8366S
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
++ * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
++ * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
++ */
++#include <linux/if_bridge.h>
++#include <net/dsa.h>
++
++#include "realtek-smi-core.h"
++
++int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used)
++{
++	int ret;
++	int i;
++
++	*used = 0;
++	for (i = 0; i < smi->num_ports; i++) {
++		int index = 0;
++
++		ret = smi->ops->get_mc_index(smi, i, &index);
++		if (ret)
++			return ret;
++
++		if (mc_index == index) {
++			*used = 1;
++			break;
++		}
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_mc_is_used);
++
++/**
++ * rtl8366_obtain_mc() - retrieve or allocate a VLAN member configuration
++ * @smi: the Realtek SMI device instance
++ * @vid: the VLAN ID to look up or allocate
++ * @vlanmc: the pointer will be assigned to a pointer to a valid member config
++ * if successful
++ * @return: index of a new member config or negative error number
++ */
++static int rtl8366_obtain_mc(struct realtek_smi *smi, int vid,
++			     struct rtl8366_vlan_mc *vlanmc)
++{
++	struct rtl8366_vlan_4k vlan4k;
++	int ret;
++	int i;
++
++	/* Try to find an existing member config entry for this VID */
++	for (i = 0; i < smi->num_vlan_mc; i++) {
++		ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
++		if (ret) {
++			dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
++				i, vid);
++			return ret;
++		}
++
++		if (vid == vlanmc->vid)
++			return i;
++	}
++
++	/* We have no MC entry for this VID, try to find an empty one */
++	for (i = 0; i < smi->num_vlan_mc; i++) {
++		ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
++		if (ret) {
++			dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
++				i, vid);
++			return ret;
++		}
++
++		if (vlanmc->vid == 0 && vlanmc->member == 0) {
++			/* Update the entry from the 4K table */
++			ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++			if (ret) {
++				dev_err(smi->dev, "error looking for 4K VLAN MC %d for VID %d\n",
++					i, vid);
++				return ret;
++			}
++
++			vlanmc->vid = vid;
++			vlanmc->member = vlan4k.member;
++			vlanmc->untag = vlan4k.untag;
++			vlanmc->fid = vlan4k.fid;
++			ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
++			if (ret) {
++				dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
++					i, vid);
++				return ret;
++			}
++
++			dev_dbg(smi->dev, "created new MC at index %d for VID %d\n",
++				i, vid);
++			return i;
++		}
++	}
++
++	/* MC table is full, try to find an unused entry and replace it */
++	for (i = 0; i < smi->num_vlan_mc; i++) {
++		int used;
++
++		ret = rtl8366_mc_is_used(smi, i, &used);
++		if (ret)
++			return ret;
++
++		if (!used) {
++			/* Update the entry from the 4K table */
++			ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++			if (ret)
++				return ret;
++
++			vlanmc->vid = vid;
++			vlanmc->member = vlan4k.member;
++			vlanmc->untag = vlan4k.untag;
++			vlanmc->fid = vlan4k.fid;
++			ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
++			if (ret) {
++				dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
++					i, vid);
++				return ret;
++			}
++			dev_dbg(smi->dev, "recycled MC at index %i for VID %d\n",
++				i, vid);
++			return i;
++		}
++	}
++
++	dev_err(smi->dev, "all VLAN member configurations are in use\n");
++	return -ENOSPC;
++}
++
++int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
++		     u32 untag, u32 fid)
++{
++	struct rtl8366_vlan_mc vlanmc;
++	struct rtl8366_vlan_4k vlan4k;
++	int mc;
++	int ret;
++
++	if (!smi->ops->is_vlan_valid(smi, vid))
++		return -EINVAL;
++
++	dev_dbg(smi->dev,
++		"setting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++		vid, member, untag);
++
++	/* Update the 4K table */
++	ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++	if (ret)
++		return ret;
++
++	vlan4k.member |= member;
++	vlan4k.untag |= untag;
++	vlan4k.fid = fid;
++	ret = smi->ops->set_vlan_4k(smi, &vlan4k);
++	if (ret)
++		return ret;
++
++	dev_dbg(smi->dev,
++		"resulting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++		vid, vlan4k.member, vlan4k.untag);
++
++	/* Find or allocate a member config for this VID */
++	ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
++	if (ret < 0)
++		return ret;
++	mc = ret;
++
++	/* Update the MC entry */
++	vlanmc.member |= member;
++	vlanmc.untag |= untag;
++	vlanmc.fid = fid;
++
++	/* Commit updates to the MC entry */
++	ret = smi->ops->set_vlan_mc(smi, mc, &vlanmc);
++	if (ret)
++		dev_err(smi->dev, "failed to commit changes to VLAN MC index %d for VID %d\n",
++			mc, vid);
++	else
++		dev_dbg(smi->dev,
++			"resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
++			vid, vlanmc.member, vlanmc.untag);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(rtl8366_set_vlan);
++
++int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
++		     unsigned int vid)
++{
++	struct rtl8366_vlan_mc vlanmc;
++	int mc;
++	int ret;
++
++	if (!smi->ops->is_vlan_valid(smi, vid))
++		return -EINVAL;
++
++	/* Find or allocate a member config for this VID */
++	ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
++	if (ret < 0)
++		return ret;
++	mc = ret;
++
++	ret = smi->ops->set_mc_index(smi, port, mc);
++	if (ret) {
++		dev_err(smi->dev, "set PVID: failed to set MC index %d for port %d\n",
++			mc, port);
++		return ret;
++	}
++
++	dev_dbg(smi->dev, "set PVID: the PVID for port %d set to %d using existing MC index %d\n",
++		port, vid, mc);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_set_pvid);
++
++int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable)
++{
++	int ret;
++
++	/* To enable 4k VLAN, ordinary VLAN must be enabled first,
++	 * but if we disable 4k VLAN it is fine to leave ordinary
++	 * VLAN enabled.
++	 */
++	if (enable) {
++		/* Make sure VLAN is ON */
++		ret = smi->ops->enable_vlan(smi, true);
++		if (ret)
++			return ret;
++
++		smi->vlan_enabled = true;
++	}
++
++	ret = smi->ops->enable_vlan4k(smi, enable);
++	if (ret)
++		return ret;
++
++	smi->vlan4k_enabled = enable;
++	return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_enable_vlan4k);
++
++int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable)
++{
++	int ret;
++
++	ret = smi->ops->enable_vlan(smi, enable);
++	if (ret)
++		return ret;
++
++	smi->vlan_enabled = enable;
++
++	/* If we turn VLAN off, make sure that we turn off
++	 * 4k VLAN as well, if that happened to be on.
++	 */
++	if (!enable) {
++		smi->vlan4k_enabled = false;
++		ret = smi->ops->enable_vlan4k(smi, false);
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(rtl8366_enable_vlan);
++
++int rtl8366_reset_vlan(struct realtek_smi *smi)
++{
++	struct rtl8366_vlan_mc vlanmc;
++	int ret;
++	int i;
++
++	rtl8366_enable_vlan(smi, false);
++	rtl8366_enable_vlan4k(smi, false);
++
++	/* Clear the 16 VLAN member configurations */
++	vlanmc.vid = 0;
++	vlanmc.priority = 0;
++	vlanmc.member = 0;
++	vlanmc.untag = 0;
++	vlanmc.fid = 0;
++	for (i = 0; i < smi->num_vlan_mc; i++) {
++		ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
++		if (ret)
++			return ret;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_reset_vlan);
++
++int rtl8366_vlan_add(struct dsa_switch *ds, int port,
++		     const struct switchdev_obj_port_vlan *vlan,
++		     struct netlink_ext_ack *extack)
++{
++	bool untagged = !!(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED);
++	bool pvid = !!(vlan->flags & BRIDGE_VLAN_INFO_PVID);
++	struct realtek_smi *smi = ds->priv;
++	u32 member = 0;
++	u32 untag = 0;
++	int ret;
++
++	if (!smi->ops->is_vlan_valid(smi, vlan->vid)) {
++		NL_SET_ERR_MSG_MOD(extack, "VLAN ID not valid");
++		return -EINVAL;
++	}
++
++	/* Enable VLAN in the hardware
++	 * FIXME: what's with this 4k business?
++	 * Just rtl8366_enable_vlan() seems inconclusive.
++	 */
++	ret = rtl8366_enable_vlan4k(smi, true);
++	if (ret) {
++		NL_SET_ERR_MSG_MOD(extack, "Failed to enable VLAN 4K");
++		return ret;
++	}
++
++	dev_dbg(smi->dev, "add VLAN %d on port %d, %s, %s\n",
++		vlan->vid, port, untagged ? "untagged" : "tagged",
++		pvid ? "PVID" : "no PVID");
++
++	member |= BIT(port);
++
++	if (untagged)
++		untag |= BIT(port);
++
++	ret = rtl8366_set_vlan(smi, vlan->vid, member, untag, 0);
++	if (ret) {
++		dev_err(smi->dev, "failed to set up VLAN %04x\n", vlan->vid);
++		return ret;
++	}
++
++	if (!pvid)
++		return 0;
++
++	ret = rtl8366_set_pvid(smi, port, vlan->vid);
++	if (ret) {
++		dev_err(smi->dev, "failed to set PVID on port %d to VLAN %04x\n",
++			port, vlan->vid);
++		return ret;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_vlan_add);
++
++int rtl8366_vlan_del(struct dsa_switch *ds, int port,
++		     const struct switchdev_obj_port_vlan *vlan)
++{
++	struct realtek_smi *smi = ds->priv;
++	int ret, i;
++
++	dev_dbg(smi->dev, "del VLAN %d on port %d\n", vlan->vid, port);
++
++	for (i = 0; i < smi->num_vlan_mc; i++) {
++		struct rtl8366_vlan_mc vlanmc;
++
++		ret = smi->ops->get_vlan_mc(smi, i, &vlanmc);
++		if (ret)
++			return ret;
++
++		if (vlan->vid == vlanmc.vid) {
++			/* Remove this port from the VLAN */
++			vlanmc.member &= ~BIT(port);
++			vlanmc.untag &= ~BIT(port);
++			/* If no ports are members of this VLAN
++			 * anymore, clear the whole member
++			 * config so it can be reused.
++			 */
++			if (!vlanmc.member) {
++				vlanmc.vid = 0;
++				vlanmc.priority = 0;
++				vlanmc.fid = 0;
++			}
++			ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
++			if (ret) {
++				dev_err(smi->dev,
++					"failed to remove VLAN %04x\n",
++					vlan->vid);
++				return ret;
++			}
++			break;
++		}
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_vlan_del);
++
++void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
++			 uint8_t *data)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8366_mib_counter *mib;
++	int i;
++
++	if (port >= smi->num_ports)
++		return;
++
++	for (i = 0; i < smi->num_mib_counters; i++) {
++		mib = &smi->mib_counters[i];
++		strncpy(data + i * ETH_GSTRING_LEN,
++			mib->name, ETH_GSTRING_LEN);
++	}
++}
++EXPORT_SYMBOL_GPL(rtl8366_get_strings);
++
++int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset)
++{
++	struct realtek_smi *smi = ds->priv;
++
++	/* We only support SS_STATS */
++	if (sset != ETH_SS_STATS)
++		return 0;
++	if (port >= smi->num_ports)
++		return -EINVAL;
++
++	return smi->num_mib_counters;
++}
++EXPORT_SYMBOL_GPL(rtl8366_get_sset_count);
++
++void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data)
++{
++	struct realtek_smi *smi = ds->priv;
++	int i;
++	int ret;
++
++	if (port >= smi->num_ports)
++		return;
++
++	for (i = 0; i < smi->num_mib_counters; i++) {
++		struct rtl8366_mib_counter *mib;
++		u64 mibvalue = 0;
++
++		mib = &smi->mib_counters[i];
++		ret = smi->ops->get_mib_counter(smi, port, mib, &mibvalue);
++		if (ret) {
++			dev_err(smi->dev, "error reading MIB counter %s\n",
++				mib->name);
++		}
++		data[i] = mibvalue;
++	}
++}
++EXPORT_SYMBOL_GPL(rtl8366_get_ethtool_stats);
+diff --git a/drivers/net/dsa/realtek/rtl8366rb.c b/drivers/net/dsa/realtek/rtl8366rb.c
+new file mode 100644
+index 0000000000000..2a523a33529b4
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366rb.c
+@@ -0,0 +1,1815 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Realtek SMI subdriver for the Realtek RTL8366RB ethernet switch
++ *
++ * This is a sparsely documented chip; the only viable documentation seems
++ * to be a patched-up code drop from the vendor that appears in various
++ * GPL source trees.
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
++ * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
++ * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
++ */
++
++#include <linux/bitops.h>
++#include <linux/etherdevice.h>
++#include <linux/if_bridge.h>
++#include <linux/interrupt.h>
++#include <linux/irqdomain.h>
++#include <linux/irqchip/chained_irq.h>
++#include <linux/of_irq.h>
++#include <linux/regmap.h>
++
++#include "realtek-smi-core.h"
++
++#define RTL8366RB_PORT_NUM_CPU		5
++#define RTL8366RB_NUM_PORTS		6
++#define RTL8366RB_PHY_NO_MAX		4
++#define RTL8366RB_PHY_ADDR_MAX		31
++
++/* Switch Global Configuration register */
++#define RTL8366RB_SGCR				0x0000
++#define RTL8366RB_SGCR_EN_BC_STORM_CTRL		BIT(0)
++#define RTL8366RB_SGCR_MAX_LENGTH(a)		((a) << 4)
++#define RTL8366RB_SGCR_MAX_LENGTH_MASK		RTL8366RB_SGCR_MAX_LENGTH(0x3)
++#define RTL8366RB_SGCR_MAX_LENGTH_1522		RTL8366RB_SGCR_MAX_LENGTH(0x0)
++#define RTL8366RB_SGCR_MAX_LENGTH_1536		RTL8366RB_SGCR_MAX_LENGTH(0x1)
++#define RTL8366RB_SGCR_MAX_LENGTH_1552		RTL8366RB_SGCR_MAX_LENGTH(0x2)
++#define RTL8366RB_SGCR_MAX_LENGTH_16000		RTL8366RB_SGCR_MAX_LENGTH(0x3)
++#define RTL8366RB_SGCR_EN_VLAN			BIT(13)
++#define RTL8366RB_SGCR_EN_VLAN_4KTB		BIT(14)
++
++/* Port Enable Control register */
++#define RTL8366RB_PECR				0x0001
++
++/* Switch per-port learning disablement register */
++#define RTL8366RB_PORT_LEARNDIS_CTRL		0x0002
++
++/* Security control, actually aging register */
++#define RTL8366RB_SECURITY_CTRL			0x0003
++
++#define RTL8366RB_SSCR2				0x0004
++#define RTL8366RB_SSCR2_DROP_UNKNOWN_DA		BIT(0)
++
++/* Port Mode Control registers */
++#define RTL8366RB_PMC0				0x0005
++#define RTL8366RB_PMC0_SPI			BIT(0)
++#define RTL8366RB_PMC0_EN_AUTOLOAD		BIT(1)
++#define RTL8366RB_PMC0_PROBE			BIT(2)
++#define RTL8366RB_PMC0_DIS_BISR			BIT(3)
++#define RTL8366RB_PMC0_ADCTEST			BIT(4)
++#define RTL8366RB_PMC0_SRAM_DIAG		BIT(5)
++#define RTL8366RB_PMC0_EN_SCAN			BIT(6)
++#define RTL8366RB_PMC0_P4_IOMODE_SHIFT		7
++#define RTL8366RB_PMC0_P4_IOMODE_MASK		GENMASK(9, 7)
++#define RTL8366RB_PMC0_P5_IOMODE_SHIFT		10
++#define RTL8366RB_PMC0_P5_IOMODE_MASK		GENMASK(12, 10)
++#define RTL8366RB_PMC0_SDSMODE_SHIFT		13
++#define RTL8366RB_PMC0_SDSMODE_MASK		GENMASK(15, 13)
++#define RTL8366RB_PMC1				0x0006
++
++/* Port Mirror Control Register */
++#define RTL8366RB_PMCR				0x0007
++#define RTL8366RB_PMCR_SOURCE_PORT(a)		(a)
++#define RTL8366RB_PMCR_SOURCE_PORT_MASK		0x000f
++#define RTL8366RB_PMCR_MONITOR_PORT(a)		((a) << 4)
++#define RTL8366RB_PMCR_MONITOR_PORT_MASK	0x00f0
++#define RTL8366RB_PMCR_MIRROR_RX		BIT(8)
++#define RTL8366RB_PMCR_MIRROR_TX		BIT(9)
++#define RTL8366RB_PMCR_MIRROR_SPC		BIT(10)
++#define RTL8366RB_PMCR_MIRROR_ISO		BIT(11)
++
++/* bits 0..7 = port 0, bits 8..15 = port 1 */
++#define RTL8366RB_PAACR0		0x0010
++/* bits 0..7 = port 2, bits 8..15 = port 3 */
++#define RTL8366RB_PAACR1		0x0011
++/* bits 0..7 = port 4, bits 8..15 = port 5 */
++#define RTL8366RB_PAACR2		0x0012
++#define RTL8366RB_PAACR_SPEED_10M	0
++#define RTL8366RB_PAACR_SPEED_100M	1
++#define RTL8366RB_PAACR_SPEED_1000M	2
++#define RTL8366RB_PAACR_FULL_DUPLEX	BIT(2)
++#define RTL8366RB_PAACR_LINK_UP		BIT(4)
++#define RTL8366RB_PAACR_TX_PAUSE	BIT(5)
++#define RTL8366RB_PAACR_RX_PAUSE	BIT(6)
++#define RTL8366RB_PAACR_AN		BIT(7)
++
++#define RTL8366RB_PAACR_CPU_PORT	(RTL8366RB_PAACR_SPEED_1000M | \
++					 RTL8366RB_PAACR_FULL_DUPLEX | \
++					 RTL8366RB_PAACR_LINK_UP | \
++					 RTL8366RB_PAACR_TX_PAUSE | \
++					 RTL8366RB_PAACR_RX_PAUSE)
++
++/* bits 0..7 = port 0, bits 8..15 = port 1 */
++#define RTL8366RB_PSTAT0		0x0014
++/* bits 0..7 = port 2, bits 8..15 = port 3 */
++#define RTL8366RB_PSTAT1		0x0015
++/* bits 0..7 = port 4, bits 8..15 = port 5 */
++#define RTL8366RB_PSTAT2		0x0016
++
++#define RTL8366RB_POWER_SAVING_REG	0x0021
++
++/* Spanning tree status (STP) control, two bits per port per FID */
++#define RTL8366RB_STP_STATE_BASE	0x0050 /* 0x0050..0x0057 */
++#define RTL8366RB_STP_STATE_DISABLED	0x0
++#define RTL8366RB_STP_STATE_BLOCKING	0x1
++#define RTL8366RB_STP_STATE_LEARNING	0x2
++#define RTL8366RB_STP_STATE_FORWARDING	0x3
++#define RTL8366RB_STP_MASK		GENMASK(1, 0)
++#define RTL8366RB_STP_STATE(port, state) \
++	((state) << ((port) * 2))
++#define RTL8366RB_STP_STATE_MASK(port) \
++	RTL8366RB_STP_STATE((port), RTL8366RB_STP_MASK)
++
++/* CPU port control reg */
++#define RTL8368RB_CPU_CTRL_REG		0x0061
++#define RTL8368RB_CPU_PORTS_MSK		0x00FF
++/* Disables inserting custom tag length/type 0x8899 */
++#define RTL8368RB_CPU_NO_TAG		BIT(15)
++
++#define RTL8366RB_SMAR0			0x0070 /* bits 0..15 */
++#define RTL8366RB_SMAR1			0x0071 /* bits 16..31 */
++#define RTL8366RB_SMAR2			0x0072 /* bits 32..47 */
++
++#define RTL8366RB_RESET_CTRL_REG		0x0100
++#define RTL8366RB_CHIP_CTRL_RESET_HW		BIT(0)
++#define RTL8366RB_CHIP_CTRL_RESET_SW		BIT(1)
++
++#define RTL8366RB_CHIP_ID_REG			0x0509
++#define RTL8366RB_CHIP_ID_8366			0x5937
++#define RTL8366RB_CHIP_VERSION_CTRL_REG		0x050A
++#define RTL8366RB_CHIP_VERSION_MASK		0xf
++
++/* PHY registers control */
++#define RTL8366RB_PHY_ACCESS_CTRL_REG		0x8000
++#define RTL8366RB_PHY_CTRL_READ			BIT(0)
++#define RTL8366RB_PHY_CTRL_WRITE		0
++#define RTL8366RB_PHY_ACCESS_BUSY_REG		0x8001
++#define RTL8366RB_PHY_INT_BUSY			BIT(0)
++#define RTL8366RB_PHY_EXT_BUSY			BIT(4)
++#define RTL8366RB_PHY_ACCESS_DATA_REG		0x8002
++#define RTL8366RB_PHY_EXT_CTRL_REG		0x8010
++#define RTL8366RB_PHY_EXT_WRDATA_REG		0x8011
++#define RTL8366RB_PHY_EXT_RDDATA_REG		0x8012
++
++#define RTL8366RB_PHY_REG_MASK			0x1f
++#define RTL8366RB_PHY_PAGE_OFFSET		5
++#define RTL8366RB_PHY_PAGE_MASK			(0xf << 5)
++#define RTL8366RB_PHY_NO_OFFSET			9
++#define RTL8366RB_PHY_NO_MASK			(0x1f << 9)
++
++/* VLAN Ingress Control Register 1, one bit per port.
++ * bit 0 .. 5 will make the switch drop ingress frames without a
++ * VID, such as untagged or priority-tagged frames, for the
++ * respective port.
++ * bit 6 .. 11 will make the switch drop ingress frames carrying
++ * a C-tag with VID != 0 for respective port.
++ */
++#define RTL8366RB_VLAN_INGRESS_CTRL1_REG	0x037E
++#define RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port)	(BIT((port)) | BIT((port) + 6))
++
++/* VLAN Ingress Control Register 2, one bit per port.
++ * bit0 .. bit5 will make the switch drop all ingress frames with
++ * a VLAN classification that does not include the port in its
++ * member set.
++ */
++#define RTL8366RB_VLAN_INGRESS_CTRL2_REG	0x037f
++
++/* LED control registers */
++#define RTL8366RB_LED_BLINKRATE_REG		0x0430
++#define RTL8366RB_LED_BLINKRATE_MASK		0x0007
++#define RTL8366RB_LED_BLINKRATE_28MS		0x0000
++#define RTL8366RB_LED_BLINKRATE_56MS		0x0001
++#define RTL8366RB_LED_BLINKRATE_84MS		0x0002
++#define RTL8366RB_LED_BLINKRATE_111MS		0x0003
++#define RTL8366RB_LED_BLINKRATE_222MS		0x0004
++#define RTL8366RB_LED_BLINKRATE_446MS		0x0005
++
++#define RTL8366RB_LED_CTRL_REG			0x0431
++#define RTL8366RB_LED_OFF			0x0
++#define RTL8366RB_LED_DUP_COL			0x1
++#define RTL8366RB_LED_LINK_ACT			0x2
++#define RTL8366RB_LED_SPD1000			0x3
++#define RTL8366RB_LED_SPD100			0x4
++#define RTL8366RB_LED_SPD10			0x5
++#define RTL8366RB_LED_SPD1000_ACT		0x6
++#define RTL8366RB_LED_SPD100_ACT		0x7
++#define RTL8366RB_LED_SPD10_ACT			0x8
++#define RTL8366RB_LED_SPD100_10_ACT		0x9
++#define RTL8366RB_LED_FIBER			0xa
++#define RTL8366RB_LED_AN_FAULT			0xb
++#define RTL8366RB_LED_LINK_RX			0xc
++#define RTL8366RB_LED_LINK_TX			0xd
++#define RTL8366RB_LED_MASTER			0xe
++#define RTL8366RB_LED_FORCE			0xf
++#define RTL8366RB_LED_0_1_CTRL_REG		0x0432
++#define RTL8366RB_LED_1_OFFSET			6
++#define RTL8366RB_LED_2_3_CTRL_REG		0x0433
++#define RTL8366RB_LED_3_OFFSET			6
++
++#define RTL8366RB_MIB_COUNT			33
++#define RTL8366RB_GLOBAL_MIB_COUNT		1
++#define RTL8366RB_MIB_COUNTER_PORT_OFFSET	0x0050
++#define RTL8366RB_MIB_COUNTER_BASE		0x1000
++#define RTL8366RB_MIB_CTRL_REG			0x13F0
++#define RTL8366RB_MIB_CTRL_USER_MASK		0x0FFC
++#define RTL8366RB_MIB_CTRL_BUSY_MASK		BIT(0)
++#define RTL8366RB_MIB_CTRL_RESET_MASK		BIT(1)
++#define RTL8366RB_MIB_CTRL_PORT_RESET(_p)	BIT(2 + (_p))
++#define RTL8366RB_MIB_CTRL_GLOBAL_RESET		BIT(11)
++
++#define RTL8366RB_PORT_VLAN_CTRL_BASE		0x0063
++#define RTL8366RB_PORT_VLAN_CTRL_REG(_p)  \
++		(RTL8366RB_PORT_VLAN_CTRL_BASE + (_p) / 4)
++#define RTL8366RB_PORT_VLAN_CTRL_MASK		0xf
++#define RTL8366RB_PORT_VLAN_CTRL_SHIFT(_p)	(4 * ((_p) % 4))
++
++#define RTL8366RB_VLAN_TABLE_READ_BASE		0x018C
++#define RTL8366RB_VLAN_TABLE_WRITE_BASE		0x0185
++
++#define RTL8366RB_TABLE_ACCESS_CTRL_REG		0x0180
++#define RTL8366RB_TABLE_VLAN_READ_CTRL		0x0E01
++#define RTL8366RB_TABLE_VLAN_WRITE_CTRL		0x0F01
++
++#define RTL8366RB_VLAN_MC_BASE(_x)		(0x0020 + (_x) * 3)
++
++#define RTL8366RB_PORT_LINK_STATUS_BASE		0x0014
++#define RTL8366RB_PORT_STATUS_SPEED_MASK	0x0003
++#define RTL8366RB_PORT_STATUS_DUPLEX_MASK	0x0004
++#define RTL8366RB_PORT_STATUS_LINK_MASK		0x0010
++#define RTL8366RB_PORT_STATUS_TXPAUSE_MASK	0x0020
++#define RTL8366RB_PORT_STATUS_RXPAUSE_MASK	0x0040
++#define RTL8366RB_PORT_STATUS_AN_MASK		0x0080
++
++#define RTL8366RB_NUM_VLANS		16
++#define RTL8366RB_NUM_LEDGROUPS		4
++#define RTL8366RB_NUM_VIDS		4096
++#define RTL8366RB_PRIORITYMAX		7
++#define RTL8366RB_NUM_FIDS		8
++#define RTL8366RB_FIDMAX		7
++
++#define RTL8366RB_PORT_1		BIT(0) /* In userspace port 0 */
++#define RTL8366RB_PORT_2		BIT(1) /* In userspace port 1 */
++#define RTL8366RB_PORT_3		BIT(2) /* In userspace port 2 */
++#define RTL8366RB_PORT_4		BIT(3) /* In userspace port 3 */
++#define RTL8366RB_PORT_5		BIT(4) /* In userspace port 4 */
++
++#define RTL8366RB_PORT_CPU		BIT(5) /* CPU port */
++
++#define RTL8366RB_PORT_ALL		(RTL8366RB_PORT_1 |	\
++					 RTL8366RB_PORT_2 |	\
++					 RTL8366RB_PORT_3 |	\
++					 RTL8366RB_PORT_4 |	\
++					 RTL8366RB_PORT_5 |	\
++					 RTL8366RB_PORT_CPU)
++
++#define RTL8366RB_PORT_ALL_BUT_CPU	(RTL8366RB_PORT_1 |	\
++					 RTL8366RB_PORT_2 |	\
++					 RTL8366RB_PORT_3 |	\
++					 RTL8366RB_PORT_4 |	\
++					 RTL8366RB_PORT_5)
++
++#define RTL8366RB_PORT_ALL_EXTERNAL	(RTL8366RB_PORT_1 |	\
++					 RTL8366RB_PORT_2 |	\
++					 RTL8366RB_PORT_3 |	\
++					 RTL8366RB_PORT_4)
++
++#define RTL8366RB_PORT_ALL_INTERNAL	 RTL8366RB_PORT_CPU
++
++/* First configuration word per member config, VID and prio */
++#define RTL8366RB_VLAN_VID_MASK		0xfff
++#define RTL8366RB_VLAN_PRIORITY_SHIFT	12
++#define RTL8366RB_VLAN_PRIORITY_MASK	0x7
++/* Second configuration word per member config, member and untagged */
++#define RTL8366RB_VLAN_UNTAG_SHIFT	8
++#define RTL8366RB_VLAN_UNTAG_MASK	0xff
++#define RTL8366RB_VLAN_MEMBER_MASK	0xff
++/* Third config word per member config, STAG currently unused */
++#define RTL8366RB_VLAN_STAG_MBR_MASK	0xff
++#define RTL8366RB_VLAN_STAG_MBR_SHIFT	8
++#define RTL8366RB_VLAN_STAG_IDX_MASK	0x7
++#define RTL8366RB_VLAN_STAG_IDX_SHIFT	5
++#define RTL8366RB_VLAN_FID_MASK		0x7
++
++/* Port ingress bandwidth control */
++#define RTL8366RB_IB_BASE		0x0200
++#define RTL8366RB_IB_REG(pnum)		(RTL8366RB_IB_BASE + (pnum))
++#define RTL8366RB_IB_BDTH_MASK		0x3fff
++#define RTL8366RB_IB_PREIFG		BIT(14)
++
++/* Port egress bandwidth control */
++#define RTL8366RB_EB_BASE		0x02d1
++#define RTL8366RB_EB_REG(pnum)		(RTL8366RB_EB_BASE + (pnum))
++#define RTL8366RB_EB_BDTH_MASK		0x3fff
++#define RTL8366RB_EB_PREIFG_REG		0x02f8
++#define RTL8366RB_EB_PREIFG		BIT(9)
++
++#define RTL8366RB_BDTH_SW_MAX		1048512 /* 1048576? */
++#define RTL8366RB_BDTH_UNIT		64
++#define RTL8366RB_BDTH_REG_DEFAULT	16383
++
++/* QOS */
++#define RTL8366RB_QOS			BIT(15)
++/* Include/Exclude Preamble and IFG (20 bytes). 0:Exclude, 1:Include. */
++#define RTL8366RB_QOS_DEFAULT_PREIFG	1
++
++/* Interrupt handling */
++#define RTL8366RB_INTERRUPT_CONTROL_REG	0x0440
++#define RTL8366RB_INTERRUPT_POLARITY	BIT(0)
++#define RTL8366RB_P4_RGMII_LED		BIT(2)
++#define RTL8366RB_INTERRUPT_MASK_REG	0x0441
++#define RTL8366RB_INTERRUPT_LINK_CHGALL	GENMASK(11, 0)
++#define RTL8366RB_INTERRUPT_ACLEXCEED	BIT(8)
++#define RTL8366RB_INTERRUPT_STORMEXCEED	BIT(9)
++#define RTL8366RB_INTERRUPT_P4_FIBER	BIT(12)
++#define RTL8366RB_INTERRUPT_P4_UTP	BIT(13)
++#define RTL8366RB_INTERRUPT_VALID	(RTL8366RB_INTERRUPT_LINK_CHGALL | \
++					 RTL8366RB_INTERRUPT_ACLEXCEED | \
++					 RTL8366RB_INTERRUPT_STORMEXCEED | \
++					 RTL8366RB_INTERRUPT_P4_FIBER | \
++					 RTL8366RB_INTERRUPT_P4_UTP)
++#define RTL8366RB_INTERRUPT_STATUS_REG	0x0442
++#define RTL8366RB_NUM_INTERRUPT		14 /* 0..13 */
++
++/* Port isolation registers */
++#define RTL8366RB_PORT_ISO_BASE		0x0F08
++#define RTL8366RB_PORT_ISO(pnum)	(RTL8366RB_PORT_ISO_BASE + (pnum))
++#define RTL8366RB_PORT_ISO_EN		BIT(0)
++#define RTL8366RB_PORT_ISO_PORTS_MASK	GENMASK(7, 1)
++#define RTL8366RB_PORT_ISO_PORTS(pmask)	((pmask) << 1)
++
++/* bits 0..5 enable force when cleared */
++#define RTL8366RB_MAC_FORCE_CTRL_REG	0x0F11
++
++#define RTL8366RB_OAM_PARSER_REG	0x0F14
++#define RTL8366RB_OAM_MULTIPLEXER_REG	0x0F15
++
++#define RTL8366RB_GREEN_FEATURE_REG	0x0F51
++#define RTL8366RB_GREEN_FEATURE_MSK	0x0007
++#define RTL8366RB_GREEN_FEATURE_TX	BIT(0)
++#define RTL8366RB_GREEN_FEATURE_RX	BIT(2)
++
++/**
++ * struct rtl8366rb - RTL8366RB-specific data
++ * @max_mtu: per-port max MTU setting
++ * @pvid_enabled: if PVID is set for respective port
++ */
++struct rtl8366rb {
++	unsigned int max_mtu[RTL8366RB_NUM_PORTS];
++	bool pvid_enabled[RTL8366RB_NUM_PORTS];
++};
++
++static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = {
++	{ 0,  0, 4, "IfInOctets"				},
++	{ 0,  4, 4, "EtherStatsOctets"				},
++	{ 0,  8, 2, "EtherStatsUnderSizePkts"			},
++	{ 0, 10, 2, "EtherFragments"				},
++	{ 0, 12, 2, "EtherStatsPkts64Octets"			},
++	{ 0, 14, 2, "EtherStatsPkts65to127Octets"		},
++	{ 0, 16, 2, "EtherStatsPkts128to255Octets"		},
++	{ 0, 18, 2, "EtherStatsPkts256to511Octets"		},
++	{ 0, 20, 2, "EtherStatsPkts512to1023Octets"		},
++	{ 0, 22, 2, "EtherStatsPkts1024to1518Octets"		},
++	{ 0, 24, 2, "EtherOversizeStats"			},
++	{ 0, 26, 2, "EtherStatsJabbers"				},
++	{ 0, 28, 2, "IfInUcastPkts"				},
++	{ 0, 30, 2, "EtherStatsMulticastPkts"			},
++	{ 0, 32, 2, "EtherStatsBroadcastPkts"			},
++	{ 0, 34, 2, "EtherStatsDropEvents"			},
++	{ 0, 36, 2, "Dot3StatsFCSErrors"			},
++	{ 0, 38, 2, "Dot3StatsSymbolErrors"			},
++	{ 0, 40, 2, "Dot3InPauseFrames"				},
++	{ 0, 42, 2, "Dot3ControlInUnknownOpcodes"		},
++	{ 0, 44, 4, "IfOutOctets"				},
++	{ 0, 48, 2, "Dot3StatsSingleCollisionFrames"		},
++	{ 0, 50, 2, "Dot3StatMultipleCollisionFrames"		},
++	{ 0, 52, 2, "Dot3sDeferredTransmissions"		},
++	{ 0, 54, 2, "Dot3StatsLateCollisions"			},
++	{ 0, 56, 2, "EtherStatsCollisions"			},
++	{ 0, 58, 2, "Dot3StatsExcessiveCollisions"		},
++	{ 0, 60, 2, "Dot3OutPauseFrames"			},
++	{ 0, 62, 2, "Dot1dBasePortDelayExceededDiscards"	},
++	{ 0, 64, 2, "Dot1dTpPortInDiscards"			},
++	{ 0, 66, 2, "IfOutUcastPkts"				},
++	{ 0, 68, 2, "IfOutMulticastPkts"			},
++	{ 0, 70, 2, "IfOutBroadcastPkts"			},
++};
++
++static int rtl8366rb_get_mib_counter(struct realtek_smi *smi,
++				     int port,
++				     struct rtl8366_mib_counter *mib,
++				     u64 *mibvalue)
++{
++	u32 addr, val;
++	int ret;
++	int i;
++
++	addr = RTL8366RB_MIB_COUNTER_BASE +
++		RTL8366RB_MIB_COUNTER_PORT_OFFSET * (port) +
++		mib->offset;
++
++	/* Write the counter address first; the ASIC will then prepare
++	 * the 64-bit counter to be retrieved.
++	 */
++	ret = regmap_write(smi->map, addr, 0); /* Write whatever */
++	if (ret)
++		return ret;
++
++	/* Read MIB control register */
++	ret = regmap_read(smi->map, RTL8366RB_MIB_CTRL_REG, &val);
++	if (ret)
++		return -EIO;
++
++	if (val & RTL8366RB_MIB_CTRL_BUSY_MASK)
++		return -EBUSY;
++
++	if (val & RTL8366RB_MIB_CTRL_RESET_MASK)
++		return -EIO;
++
++	/* Read each individual MIB 16 bits at a time */
++	*mibvalue = 0;
++	for (i = mib->length; i > 0; i--) {
++		ret = regmap_read(smi->map, addr + (i - 1), &val);
++		if (ret)
++			return ret;
++		*mibvalue = (*mibvalue << 16) | (val & 0xFFFF);
++	}
++	return 0;
++}
++
++static u32 rtl8366rb_get_irqmask(struct irq_data *d)
++{
++	int line = irqd_to_hwirq(d);
++	u32 val;
++
++	/* For line interrupts we combine link down in bits
++	 * 6..11 with link up in bits 0..5 into one interrupt.
++	 */
++	if (line < 12)
++		val = BIT(line) | BIT(line + 6);
++	else
++		val = BIT(line);
++	return val;
++}
++
++static void rtl8366rb_mask_irq(struct irq_data *d)
++{
++	struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
++	int ret;
++
++	ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
++				 rtl8366rb_get_irqmask(d), 0);
++	if (ret)
++		dev_err(smi->dev, "could not mask IRQ\n");
++}
++
++static void rtl8366rb_unmask_irq(struct irq_data *d)
++{
++	struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
++	int ret;
++
++	ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
++				 rtl8366rb_get_irqmask(d),
++				 rtl8366rb_get_irqmask(d));
++	if (ret)
++		dev_err(smi->dev, "could not unmask IRQ\n");
++}
++
++static irqreturn_t rtl8366rb_irq(int irq, void *data)
++{
++	struct realtek_smi *smi = data;
++	u32 stat;
++	int ret;
++
++	/* This clears the IRQ status register */
++	ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
++			  &stat);
++	if (ret) {
++		dev_err(smi->dev, "can't read interrupt status\n");
++		return IRQ_NONE;
++	}
++	stat &= RTL8366RB_INTERRUPT_VALID;
++	if (!stat)
++		return IRQ_NONE;
++	while (stat) {
++		int line = __ffs(stat);
++		int child_irq;
++
++		stat &= ~BIT(line);
++		/* For line interrupts we combine link down in bits
++		 * 6..11 with link up in bits 0..5 into one interrupt.
++		 */
++		if (line < 12 && line > 5)
++			line -= 5;
++		child_irq = irq_find_mapping(smi->irqdomain, line);
++		handle_nested_irq(child_irq);
++	}
++	return IRQ_HANDLED;
++}
++
++static struct irq_chip rtl8366rb_irq_chip = {
++	.name = "RTL8366RB",
++	.irq_mask = rtl8366rb_mask_irq,
++	.irq_unmask = rtl8366rb_unmask_irq,
++};
++
++static int rtl8366rb_irq_map(struct irq_domain *domain, unsigned int irq,
++			     irq_hw_number_t hwirq)
++{
++	irq_set_chip_data(irq, domain->host_data);
++	irq_set_chip_and_handler(irq, &rtl8366rb_irq_chip, handle_simple_irq);
++	irq_set_nested_thread(irq, 1);
++	irq_set_noprobe(irq);
++
++	return 0;
++}
++
++static void rtl8366rb_irq_unmap(struct irq_domain *d, unsigned int irq)
++{
++	irq_set_nested_thread(irq, 0);
++	irq_set_chip_and_handler(irq, NULL, NULL);
++	irq_set_chip_data(irq, NULL);
++}
++
++static const struct irq_domain_ops rtl8366rb_irqdomain_ops = {
++	.map = rtl8366rb_irq_map,
++	.unmap = rtl8366rb_irq_unmap,
++	.xlate  = irq_domain_xlate_onecell,
++};
++
++static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
++{
++	struct device_node *intc;
++	unsigned long irq_trig;
++	int irq;
++	int ret;
++	u32 val;
++	int i;
++
++	intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
++	if (!intc) {
++		dev_err(smi->dev, "missing child interrupt-controller node\n");
++		return -EINVAL;
++	}
++	/* RTL8366RB IRQs cascade off this one */
++	irq = of_irq_get(intc, 0);
++	if (irq <= 0) {
++		dev_err(smi->dev, "failed to get parent IRQ\n");
++		ret = irq ? irq : -EINVAL;
++		goto out_put_node;
++	}
++
++	/* This clears the IRQ status register */
++	ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
++			  &val);
++	if (ret) {
++		dev_err(smi->dev, "can't read interrupt status\n");
++		goto out_put_node;
++	}
++
++	/* Fetch IRQ edge information from the descriptor */
++	irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
++	switch (irq_trig) {
++	case IRQF_TRIGGER_RISING:
++	case IRQF_TRIGGER_HIGH:
++		dev_info(smi->dev, "active high/rising IRQ\n");
++		val = 0;
++		break;
++	case IRQF_TRIGGER_FALLING:
++	case IRQF_TRIGGER_LOW:
++		dev_info(smi->dev, "active low/falling IRQ\n");
++		val = RTL8366RB_INTERRUPT_POLARITY;
++		break;
++	}
++	ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_CONTROL_REG,
++				 RTL8366RB_INTERRUPT_POLARITY,
++				 val);
++	if (ret) {
++		dev_err(smi->dev, "could not configure IRQ polarity\n");
++		goto out_put_node;
++	}
++
++	ret = devm_request_threaded_irq(smi->dev, irq, NULL,
++					rtl8366rb_irq, IRQF_ONESHOT,
++					"RTL8366RB", smi);
++	if (ret) {
++		dev_err(smi->dev, "unable to request irq: %d\n", ret);
++		goto out_put_node;
++	}
++	smi->irqdomain = irq_domain_add_linear(intc,
++					       RTL8366RB_NUM_INTERRUPT,
++					       &rtl8366rb_irqdomain_ops,
++					       smi);
++	if (!smi->irqdomain) {
++		dev_err(smi->dev, "failed to create IRQ domain\n");
++		ret = -EINVAL;
++		goto out_put_node;
++	}
++	for (i = 0; i < smi->num_ports; i++)
++		irq_set_parent(irq_create_mapping(smi->irqdomain, i), irq);
++
++out_put_node:
++	of_node_put(intc);
++	return ret;
++}
++
++static int rtl8366rb_set_addr(struct realtek_smi *smi)
++{
++	u8 addr[ETH_ALEN];
++	u16 val;
++	int ret;
++
++	eth_random_addr(addr);
++
++	dev_info(smi->dev, "set MAC: %02X:%02X:%02X:%02X:%02X:%02X\n",
++		 addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
++	val = addr[0] << 8 | addr[1];
++	ret = regmap_write(smi->map, RTL8366RB_SMAR0, val);
++	if (ret)
++		return ret;
++	val = addr[2] << 8 | addr[3];
++	ret = regmap_write(smi->map, RTL8366RB_SMAR1, val);
++	if (ret)
++		return ret;
++	val = addr[4] << 8 | addr[5];
++	ret = regmap_write(smi->map, RTL8366RB_SMAR2, val);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++/* Found in a vendor driver */
++
++/* Struct for handling the jam tables' entries */
++struct rtl8366rb_jam_tbl_entry {
++	u16 reg;
++	u16 val;
++};
++
++/* For the "version 0" early silicon, appears in most source releases */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_0[] = {
++	{0x000B, 0x0001}, {0x03A6, 0x0100}, {0x03A7, 0x0001}, {0x02D1, 0x3FFF},
++	{0x02D2, 0x3FFF}, {0x02D3, 0x3FFF}, {0x02D4, 0x3FFF}, {0x02D5, 0x3FFF},
++	{0x02D6, 0x3FFF}, {0x02D7, 0x3FFF}, {0x02D8, 0x3FFF}, {0x022B, 0x0688},
++	{0x022C, 0x0FAC}, {0x03D0, 0x4688}, {0x03D1, 0x01F5}, {0x0000, 0x0830},
++	{0x02F9, 0x0200}, {0x02F7, 0x7FFF}, {0x02F8, 0x03FF}, {0x0080, 0x03E8},
++	{0x0081, 0x00CE}, {0x0082, 0x00DA}, {0x0083, 0x0230}, {0xBE0F, 0x2000},
++	{0x0231, 0x422A}, {0x0232, 0x422A}, {0x0233, 0x422A}, {0x0234, 0x422A},
++	{0x0235, 0x422A}, {0x0236, 0x422A}, {0x0237, 0x422A}, {0x0238, 0x422A},
++	{0x0239, 0x422A}, {0x023A, 0x422A}, {0x023B, 0x422A}, {0x023C, 0x422A},
++	{0x023D, 0x422A}, {0x023E, 0x422A}, {0x023F, 0x422A}, {0x0240, 0x422A},
++	{0x0241, 0x422A}, {0x0242, 0x422A}, {0x0243, 0x422A}, {0x0244, 0x422A},
++	{0x0245, 0x422A}, {0x0246, 0x422A}, {0x0247, 0x422A}, {0x0248, 0x422A},
++	{0x0249, 0x0146}, {0x024A, 0x0146}, {0x024B, 0x0146}, {0xBE03, 0xC961},
++	{0x024D, 0x0146}, {0x024E, 0x0146}, {0x024F, 0x0146}, {0x0250, 0x0146},
++	{0xBE64, 0x0226}, {0x0252, 0x0146}, {0x0253, 0x0146}, {0x024C, 0x0146},
++	{0x0251, 0x0146}, {0x0254, 0x0146}, {0xBE62, 0x3FD0}, {0x0084, 0x0320},
++	{0x0255, 0x0146}, {0x0256, 0x0146}, {0x0257, 0x0146}, {0x0258, 0x0146},
++	{0x0259, 0x0146}, {0x025A, 0x0146}, {0x025B, 0x0146}, {0x025C, 0x0146},
++	{0x025D, 0x0146}, {0x025E, 0x0146}, {0x025F, 0x0146}, {0x0260, 0x0146},
++	{0x0261, 0xA23F}, {0x0262, 0x0294}, {0x0263, 0xA23F}, {0x0264, 0x0294},
++	{0x0265, 0xA23F}, {0x0266, 0x0294}, {0x0267, 0xA23F}, {0x0268, 0x0294},
++	{0x0269, 0xA23F}, {0x026A, 0x0294}, {0x026B, 0xA23F}, {0x026C, 0x0294},
++	{0x026D, 0xA23F}, {0x026E, 0x0294}, {0x026F, 0xA23F}, {0x0270, 0x0294},
++	{0x02F5, 0x0048}, {0xBE09, 0x0E00}, {0xBE1E, 0x0FA0}, {0xBE14, 0x8448},
++	{0xBE15, 0x1007}, {0xBE4A, 0xA284}, {0xC454, 0x3F0B}, {0xC474, 0x3F0B},
++	{0xBE48, 0x3672}, {0xBE4B, 0x17A7}, {0xBE4C, 0x0B15}, {0xBE52, 0x0EDD},
++	{0xBE49, 0x8C00}, {0xBE5B, 0x785C}, {0xBE5C, 0x785C}, {0xBE5D, 0x785C},
++	{0xBE61, 0x368A}, {0xBE63, 0x9B84}, {0xC456, 0xCC13}, {0xC476, 0xCC13},
++	{0xBE65, 0x307D}, {0xBE6D, 0x0005}, {0xBE6E, 0xE120}, {0xBE2E, 0x7BAF},
++};
++
++/* This v1 init sequence is from Belkin F5D8235 U-Boot release */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_1[] = {
++	{0x0000, 0x0830}, {0x0001, 0x8000}, {0x0400, 0x8130}, {0xBE78, 0x3C3C},
++	{0x0431, 0x5432}, {0xBE37, 0x0CE4}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
++	{0xC44C, 0x1585}, {0xC44C, 0x1185}, {0xC44C, 0x1585}, {0xC46C, 0x1585},
++	{0xC46C, 0x1185}, {0xC46C, 0x1585}, {0xC451, 0x2135}, {0xC471, 0x2135},
++	{0xBE10, 0x8140}, {0xBE15, 0x0007}, {0xBE6E, 0xE120}, {0xBE69, 0xD20F},
++	{0xBE6B, 0x0320}, {0xBE24, 0xB000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF20},
++	{0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800}, {0xBE24, 0x0000},
++	{0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60}, {0xBE21, 0x0140},
++	{0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000}, {0xBE2E, 0x7B7A},
++	{0xBE36, 0x0CE4}, {0x02F5, 0x0048}, {0xBE77, 0x2940}, {0x000A, 0x83E0},
++	{0xBE79, 0x3C3C}, {0xBE00, 0x1340},
++};
++
++/* This v2 init sequence is from Belkin F5D8235 U-Boot release */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_2[] = {
++	{0x0450, 0x0000}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
++	{0xC44F, 0x6250}, {0xC46F, 0x6250}, {0xC456, 0x0C14}, {0xC476, 0x0C14},
++	{0xC44C, 0x1C85}, {0xC44C, 0x1885}, {0xC44C, 0x1C85}, {0xC46C, 0x1C85},
++	{0xC46C, 0x1885}, {0xC46C, 0x1C85}, {0xC44C, 0x0885}, {0xC44C, 0x0881},
++	{0xC44C, 0x0885}, {0xC46C, 0x0885}, {0xC46C, 0x0881}, {0xC46C, 0x0885},
++	{0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
++	{0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6E, 0x0320},
++	{0xBE77, 0x2940}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
++	{0x8000, 0x0001}, {0xBE15, 0x1007}, {0x8000, 0x0000}, {0xBE15, 0x1007},
++	{0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160}, {0xBE10, 0x8140},
++	{0xBE00, 0x1340}, {0x0F51, 0x0010},
++};
++
++/* Appears in a DDWRT code dump */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_3[] = {
++	{0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
++	{0x0F51, 0x0017}, {0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
++	{0xC456, 0x0C14}, {0xC476, 0x0C14}, {0xC454, 0x3F8B}, {0xC474, 0x3F8B},
++	{0xC450, 0x2071}, {0xC470, 0x2071}, {0xC451, 0x226B}, {0xC471, 0x226B},
++	{0xC452, 0xA293}, {0xC472, 0xA293}, {0xC44C, 0x1585}, {0xC44C, 0x1185},
++	{0xC44C, 0x1585}, {0xC46C, 0x1585}, {0xC46C, 0x1185}, {0xC46C, 0x1585},
++	{0xC44C, 0x0185}, {0xC44C, 0x0181}, {0xC44C, 0x0185}, {0xC46C, 0x0185},
++	{0xC46C, 0x0181}, {0xC46C, 0x0185}, {0xBE24, 0xB000}, {0xBE23, 0xFF51},
++	{0xBE22, 0xDF20}, {0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800},
++	{0xBE24, 0x0000}, {0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60},
++	{0xBE21, 0x0140}, {0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000},
++	{0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
++	{0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6B, 0x0320},
++	{0xBE77, 0x2800}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
++	{0x8000, 0x0001}, {0xBE10, 0x8140}, {0x8000, 0x0000}, {0xBE10, 0x8140},
++	{0xBE15, 0x1007}, {0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160},
++	{0xBE10, 0x8140}, {0xBE00, 0x1340}, {0x0450, 0x0000}, {0x0401, 0x0000},
++};
++
++/* Belkin F5D8235 v1, "belkin,f5d8235-v1" */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_f5d8235[] = {
++	{0x0242, 0x02BF}, {0x0245, 0x02BF}, {0x0248, 0x02BF}, {0x024B, 0x02BF},
++	{0x024E, 0x02BF}, {0x0251, 0x02BF}, {0x0254, 0x0A3F}, {0x0256, 0x0A3F},
++	{0x0258, 0x0A3F}, {0x025A, 0x0A3F}, {0x025C, 0x0A3F}, {0x025E, 0x0A3F},
++	{0x0263, 0x007C}, {0x0100, 0x0004}, {0xBE5B, 0x3500}, {0x800E, 0x200F},
++	{0xBE1D, 0x0F00}, {0x8001, 0x5011}, {0x800A, 0xA2F4}, {0x800B, 0x17A3},
++	{0xBE4B, 0x17A3}, {0xBE41, 0x5011}, {0xBE17, 0x2100}, {0x8000, 0x8304},
++	{0xBE40, 0x8304}, {0xBE4A, 0xA2F4}, {0x800C, 0xA8D5}, {0x8014, 0x5500},
++	{0x8015, 0x0004}, {0xBE4C, 0xA8D5}, {0xBE59, 0x0008}, {0xBE09, 0x0E00},
++	{0xBE36, 0x1036}, {0xBE37, 0x1036}, {0x800D, 0x00FF}, {0xBE4D, 0x00FF},
++};
++
++/* DGN3500, "netgear,dgn3500", "netgear,dgn3500b" */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_dgn3500[] = {
++	{0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0F51, 0x0017},
++	{0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0}, {0x0450, 0x0000},
++	{0x0401, 0x0000}, {0x0431, 0x0960},
++};
++
++/* This jam table activates "green ethernet", a low-power mode which is
++ * claimed to detect the cable length and use no more power than
++ * necessary, and to put the ports into power-saving mode 10 seconds
++ * after a cable is disconnected. The table seems to always be the same.
++ */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_green_jam[] = {
++	{0xBE78, 0x323C}, {0xBE77, 0x5000}, {0xBE2E, 0x7BA7},
++	{0xBE59, 0x3459}, {0xBE5A, 0x745A}, {0xBE5B, 0x785C},
++	{0xBE5C, 0x785C}, {0xBE6E, 0xE120}, {0xBE79, 0x323C},
++};
++
++/* Jam the table entries into the proper registers */
++static int rtl8366rb_jam_table(const struct rtl8366rb_jam_tbl_entry *jam_table,
++			       int jam_size, struct realtek_smi *smi,
++			       bool write_dbg)
++{
++	u32 val;
++	int ret;
++	int i;
++
++	for (i = 0; i < jam_size; i++) {
++		if ((jam_table[i].reg & 0xBE00) == 0xBE00) {
++			ret = regmap_read(smi->map,
++					  RTL8366RB_PHY_ACCESS_BUSY_REG,
++					  &val);
++			if (ret)
++				return ret;
++			if (!(val & RTL8366RB_PHY_INT_BUSY)) {
++				ret = regmap_write(smi->map,
++						RTL8366RB_PHY_ACCESS_CTRL_REG,
++						RTL8366RB_PHY_CTRL_WRITE);
++				if (ret)
++					return ret;
++			}
++		}
++		if (write_dbg)
++			dev_dbg(smi->dev, "jam %04x into register %04x\n",
++				jam_table[i].val,
++				jam_table[i].reg);
++		ret = regmap_write(smi->map,
++				   jam_table[i].reg,
++				   jam_table[i].val);
++		if (ret)
++			return ret;
++	}
++	return 0;
++}
++
++static int rtl8366rb_setup(struct dsa_switch *ds)
++{
++	struct realtek_smi *smi = ds->priv;
++	const struct rtl8366rb_jam_tbl_entry *jam_table;
++	struct rtl8366rb *rb;
++	u32 chip_ver = 0;
++	u32 chip_id = 0;
++	int jam_size;
++	u32 val;
++	int ret;
++	int i;
++
++	rb = smi->chip_data;
++
++	ret = regmap_read(smi->map, RTL8366RB_CHIP_ID_REG, &chip_id);
++	if (ret) {
++		dev_err(smi->dev, "unable to read chip id\n");
++		return ret;
++	}
++
++	switch (chip_id) {
++	case RTL8366RB_CHIP_ID_8366:
++		break;
++	default:
++		dev_err(smi->dev, "unknown chip id (%04x)\n", chip_id);
++		return -ENODEV;
++	}
++
++	ret = regmap_read(smi->map, RTL8366RB_CHIP_VERSION_CTRL_REG,
++			  &chip_ver);
++	if (ret) {
++		dev_err(smi->dev, "unable to read chip version\n");
++		return ret;
++	}
++
++	dev_info(smi->dev, "RTL%04x ver %u chip found\n",
++		 chip_id, chip_ver & RTL8366RB_CHIP_VERSION_MASK);
++
++	/* Do the init dance using the right jam table */
++	switch (chip_ver) {
++	case 0:
++		jam_table = rtl8366rb_init_jam_ver_0;
++		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_0);
++		break;
++	case 1:
++		jam_table = rtl8366rb_init_jam_ver_1;
++		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_1);
++		break;
++	case 2:
++		jam_table = rtl8366rb_init_jam_ver_2;
++		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_2);
++		break;
++	default:
++		jam_table = rtl8366rb_init_jam_ver_3;
++		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_3);
++		break;
++	}
++
++	/* Special jam tables for special routers
++	 * TODO: are these necessary? Maintainers, please test
++	 * without them, using just the off-the-shelf tables.
++	 */
++	if (of_machine_is_compatible("belkin,f5d8235-v1")) {
++		jam_table = rtl8366rb_init_jam_f5d8235;
++		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_f5d8235);
++	}
++	if (of_machine_is_compatible("netgear,dgn3500") ||
++	    of_machine_is_compatible("netgear,dgn3500b")) {
++		jam_table = rtl8366rb_init_jam_dgn3500;
++		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_dgn3500);
++	}
++
++	ret = rtl8366rb_jam_table(jam_table, jam_size, smi, true);
++	if (ret)
++		return ret;
++
++	/* Isolate all user ports so they can only send packets to themselves and the CPU port */
++	for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
++		ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(i),
++				   RTL8366RB_PORT_ISO_PORTS(BIT(RTL8366RB_PORT_NUM_CPU)) |
++				   RTL8366RB_PORT_ISO_EN);
++		if (ret)
++			return ret;
++	}
++	/* CPU port can send packets to all ports */
++	ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(RTL8366RB_PORT_NUM_CPU),
++			   RTL8366RB_PORT_ISO_PORTS(dsa_user_ports(ds)) |
++			   RTL8366RB_PORT_ISO_EN);
++	if (ret)
++		return ret;
++
++	/* Set up the "green ethernet" feature */
++	ret = rtl8366rb_jam_table(rtl8366rb_green_jam,
++				  ARRAY_SIZE(rtl8366rb_green_jam), smi, false);
++	if (ret)
++		return ret;
++
++	ret = regmap_write(smi->map,
++			   RTL8366RB_GREEN_FEATURE_REG,
++			   (chip_ver == 1) ? 0x0007 : 0x0003);
++	if (ret)
++		return ret;
++
++	/* Vendor driver sets 0x240 in registers 0xc and 0xd (undocumented) */
++	ret = regmap_write(smi->map, 0x0c, 0x240);
++	if (ret)
++		return ret;
++	ret = regmap_write(smi->map, 0x0d, 0x240);
++	if (ret)
++		return ret;
++
++	/* Set some random MAC address */
++	ret = rtl8366rb_set_addr(smi);
++	if (ret)
++		return ret;
++
++	/* Enable CPU port with custom DSA tag 8899.
++	 *
++	 * If you set RTL8368RB_CPU_NO_TAG (bit 15) in this register,
++	 * the custom tag is turned off.
++	 */
++	ret = regmap_update_bits(smi->map, RTL8368RB_CPU_CTRL_REG,
++				 0xFFFF,
++				 BIT(smi->cpu_port));
++	if (ret)
++		return ret;
++
++	/* Make sure we default-enable the fixed CPU port */
++	ret = regmap_update_bits(smi->map, RTL8366RB_PECR,
++				 BIT(smi->cpu_port),
++				 0);
++	if (ret)
++		return ret;
++
++	/* Set maximum packet length to 1536 bytes */
++	ret = regmap_update_bits(smi->map, RTL8366RB_SGCR,
++				 RTL8366RB_SGCR_MAX_LENGTH_MASK,
++				 RTL8366RB_SGCR_MAX_LENGTH_1536);
++	if (ret)
++		return ret;
++	for (i = 0; i < RTL8366RB_NUM_PORTS; i++)
++		/* layer 2 size, see rtl8366rb_change_mtu() */
++		rb->max_mtu[i] = 1532;
++
++	/* Disable learning for all ports */
++	ret = regmap_write(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
++			   RTL8366RB_PORT_ALL);
++	if (ret)
++		return ret;
++
++	/* Enable auto ageing for all ports */
++	ret = regmap_write(smi->map, RTL8366RB_SECURITY_CTRL, 0);
++	if (ret)
++		return ret;
++
++	/* Port 4 setup: this enables Port 4, usually the WAN port.
++	 * The common PHY IO mode is apparently mode 0, which is not what
++	 * the port is initialized to. There is no explanation of the
++	 * IO modes in the Realtek source code; if your WAN port is
++	 * connected to something exotic such as fiber, then this might
++	 * be worth experimenting with.
++	 */
++	ret = regmap_update_bits(smi->map, RTL8366RB_PMC0,
++				 RTL8366RB_PMC0_P4_IOMODE_MASK,
++				 0 << RTL8366RB_PMC0_P4_IOMODE_SHIFT);
++	if (ret)
++		return ret;
++
++	/* Accept all packets by default, we enable filtering on-demand */
++	ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
++			   0);
++	if (ret)
++		return ret;
++	ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
++			   0);
++	if (ret)
++		return ret;
++
++	/* Don't drop packets whose DA has not been learned */
++	ret = regmap_update_bits(smi->map, RTL8366RB_SSCR2,
++				 RTL8366RB_SSCR2_DROP_UNKNOWN_DA, 0);
++	if (ret)
++		return ret;
++
++	/* Set blinking, TODO: make this configurable */
++	ret = regmap_update_bits(smi->map, RTL8366RB_LED_BLINKRATE_REG,
++				 RTL8366RB_LED_BLINKRATE_MASK,
++				 RTL8366RB_LED_BLINKRATE_56MS);
++	if (ret)
++		return ret;
++
++	/* Set up LED activity:
++	 * Each port has 4 LEDs; we configure all ports to the same
++	 * behaviour (no individual config), but we can set up each
++	 * LED separately.
++	 */
++	if (smi->leds_disabled) {
++		/* Turn everything off */
++		regmap_update_bits(smi->map,
++				   RTL8366RB_LED_0_1_CTRL_REG,
++				   0x0FFF, 0);
++		regmap_update_bits(smi->map,
++				   RTL8366RB_LED_2_3_CTRL_REG,
++				   0x0FFF, 0);
++		regmap_update_bits(smi->map,
++				   RTL8366RB_INTERRUPT_CONTROL_REG,
++				   RTL8366RB_P4_RGMII_LED,
++				   0);
++		val = RTL8366RB_LED_OFF;
++	} else {
++		/* TODO: make this configurable per LED */
++		val = RTL8366RB_LED_FORCE;
++	}
++	for (i = 0; i < 4; i++) {
++		ret = regmap_update_bits(smi->map,
++					 RTL8366RB_LED_CTRL_REG,
++					 0xf << (i * 4),
++					 val << (i * 4));
++		if (ret)
++			return ret;
++	}
++
++	ret = rtl8366_reset_vlan(smi);
++	if (ret)
++		return ret;
++
++	ret = rtl8366rb_setup_cascaded_irq(smi);
++	if (ret)
++		dev_info(smi->dev, "no interrupt support\n");
++
++	ret = realtek_smi_setup_mdio(smi);
++	if (ret) {
++		dev_info(smi->dev, "could not set up MDIO bus\n");
++		return -ENODEV;
++	}
++
++	return 0;
++}
++
++static enum dsa_tag_protocol rtl8366_get_tag_protocol(struct dsa_switch *ds,
++						      int port,
++						      enum dsa_tag_protocol mp)
++{
++	/* This switch uses the 4 byte protocol A Realtek DSA tag */
++	return DSA_TAG_PROTO_RTL4_A;
++}
++
++static void
++rtl8366rb_mac_link_up(struct dsa_switch *ds, int port, unsigned int mode,
++		      phy_interface_t interface, struct phy_device *phydev,
++		      int speed, int duplex, bool tx_pause, bool rx_pause)
++{
++	struct realtek_smi *smi = ds->priv;
++	int ret;
++
++	if (port != smi->cpu_port)
++		return;
++
++	dev_dbg(smi->dev, "MAC link up on CPU port (%d)\n", port);
++
++	/* Force the fixed CPU port into 1Gbit mode, no autonegotiation */
++	ret = regmap_update_bits(smi->map, RTL8366RB_MAC_FORCE_CTRL_REG,
++				 BIT(port), BIT(port));
++	if (ret) {
++		dev_err(smi->dev, "failed to force 1Gbit on CPU port\n");
++		return;
++	}
++
++	ret = regmap_update_bits(smi->map, RTL8366RB_PAACR2,
++				 0xFF00U,
++				 RTL8366RB_PAACR_CPU_PORT << 8);
++	if (ret) {
++		dev_err(smi->dev, "failed to set PAACR on CPU port\n");
++		return;
++	}
++
++	/* Enable the CPU port */
++	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++				 0);
++	if (ret) {
++		dev_err(smi->dev, "failed to enable the CPU port\n");
++		return;
++	}
++}
++
++static void
++rtl8366rb_mac_link_down(struct dsa_switch *ds, int port, unsigned int mode,
++			phy_interface_t interface)
++{
++	struct realtek_smi *smi = ds->priv;
++	int ret;
++
++	if (port != smi->cpu_port)
++		return;
++
++	dev_dbg(smi->dev, "MAC link down on CPU port (%d)\n", port);
++
++	/* Disable the CPU port */
++	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++				 BIT(port));
++	if (ret) {
++		dev_err(smi->dev, "failed to disable the CPU port\n");
++		return;
++	}
++}
++
++static void rb8366rb_set_port_led(struct realtek_smi *smi,
++				  int port, bool enable)
++{
++	u16 val = enable ? 0x3f : 0;
++	int ret;
++
++	if (smi->leds_disabled)
++		return;
++
++	switch (port) {
++	case 0:
++		ret = regmap_update_bits(smi->map,
++					 RTL8366RB_LED_0_1_CTRL_REG,
++					 0x3F, val);
++		break;
++	case 1:
++		ret = regmap_update_bits(smi->map,
++					 RTL8366RB_LED_0_1_CTRL_REG,
++					 0x3F << RTL8366RB_LED_1_OFFSET,
++					 val << RTL8366RB_LED_1_OFFSET);
++		break;
++	case 2:
++		ret = regmap_update_bits(smi->map,
++					 RTL8366RB_LED_2_3_CTRL_REG,
++					 0x3F, val);
++		break;
++	case 3:
++		ret = regmap_update_bits(smi->map,
++					 RTL8366RB_LED_2_3_CTRL_REG,
++					 0x3F << RTL8366RB_LED_3_OFFSET,
++					 val << RTL8366RB_LED_3_OFFSET);
++		break;
++	case 4:
++		ret = regmap_update_bits(smi->map,
++					 RTL8366RB_INTERRUPT_CONTROL_REG,
++					 RTL8366RB_P4_RGMII_LED,
++					 enable ? RTL8366RB_P4_RGMII_LED : 0);
++		break;
++	default:
++		dev_err(smi->dev, "no LED for port %d\n", port);
++		return;
++	}
++	if (ret)
++		dev_err(smi->dev, "error updating LED on port %d\n", port);
++}
++
++static int
++rtl8366rb_port_enable(struct dsa_switch *ds, int port,
++		      struct phy_device *phy)
++{
++	struct realtek_smi *smi = ds->priv;
++	int ret;
++
++	dev_dbg(smi->dev, "enable port %d\n", port);
++	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++				 0);
++	if (ret)
++		return ret;
++
++	rb8366rb_set_port_led(smi, port, true);
++	return 0;
++}
++
++static void
++rtl8366rb_port_disable(struct dsa_switch *ds, int port)
++{
++	struct realtek_smi *smi = ds->priv;
++	int ret;
++
++	dev_dbg(smi->dev, "disable port %d\n", port);
++	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++				 BIT(port));
++	if (ret)
++		return;
++
++	rb8366rb_set_port_led(smi, port, false);
++}
++
++static int
++rtl8366rb_port_bridge_join(struct dsa_switch *ds, int port,
++			   struct net_device *bridge)
++{
++	struct realtek_smi *smi = ds->priv;
++	unsigned int port_bitmap = 0;
++	int ret, i;
++
++	/* Loop over all ports other than the current one */
++	for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
++		/* Current port handled last */
++		if (i == port)
++			continue;
++		/* Not on this bridge */
++		if (dsa_to_port(ds, i)->bridge_dev != bridge)
++			continue;
++		/* Join this port to each other port on the bridge */
++		ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
++					 RTL8366RB_PORT_ISO_PORTS(BIT(port)),
++					 RTL8366RB_PORT_ISO_PORTS(BIT(port)));
++		if (ret)
++			dev_err(smi->dev, "failed to join port %d\n", port);
++
++		port_bitmap |= BIT(i);
++	}
++
++	/* Set the bits for the ports we can access */
++	return regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
++				  RTL8366RB_PORT_ISO_PORTS(port_bitmap),
++				  RTL8366RB_PORT_ISO_PORTS(port_bitmap));
++}
++
++static void
++rtl8366rb_port_bridge_leave(struct dsa_switch *ds, int port,
++			    struct net_device *bridge)
++{
++	struct realtek_smi *smi = ds->priv;
++	unsigned int port_bitmap = 0;
++	int ret, i;
++
++	/* Loop over all ports other than this one */
++	for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
++		/* Current port handled last */
++		if (i == port)
++			continue;
++		/* Not on this bridge */
++		if (dsa_to_port(ds, i)->bridge_dev != bridge)
++			continue;
++		/* Remove this port from any other port on the bridge */
++		ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
++					 RTL8366RB_PORT_ISO_PORTS(BIT(port)), 0);
++		if (ret)
++			dev_err(smi->dev, "failed to leave port %d\n", port);
++
++		port_bitmap |= BIT(i);
++	}
++
++	/* Clear the bits for the ports we cannot access, leave ourselves */
++	regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
++			   RTL8366RB_PORT_ISO_PORTS(port_bitmap), 0);
++}
++
++/**
++ * rtl8366rb_drop_untagged() - make the switch drop untagged and C-tagged frames
++ * @smi: SMI state container
++ * @port: the port to drop untagged and C-tagged frames on
++ * @drop: whether to drop or pass untagged and C-tagged frames
++ *
++ * Return: zero for success, a negative number on error.
++ */
++static int rtl8366rb_drop_untagged(struct realtek_smi *smi, int port, bool drop)
++{
++	return regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
++				  RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port),
++				  drop ? RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port) : 0);
++}
++
++static int rtl8366rb_vlan_filtering(struct dsa_switch *ds, int port,
++				    bool vlan_filtering,
++				    struct netlink_ext_ack *extack)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8366rb *rb;
++	int ret;
++
++	rb = smi->chip_data;
++
++	dev_dbg(smi->dev, "port %d: %s VLAN filtering\n", port,
++		vlan_filtering ? "enable" : "disable");
++
++	/* If the port is not in the member set, the frame will be dropped */
++	ret = regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
++				 BIT(port), vlan_filtering ? BIT(port) : 0);
++	if (ret)
++		return ret;
++
++	/* If VLAN filtering is enabled and PVID is also enabled, we must
++	 * not drop any untagged or C-tagged frames. If we turn off VLAN
++	 * filtering on a port, we need to accept any frames.
++	 */
++	if (vlan_filtering)
++		ret = rtl8366rb_drop_untagged(smi, port, !rb->pvid_enabled[port]);
++	else
++		ret = rtl8366rb_drop_untagged(smi, port, false);
++
++	return ret;
++}
++
++static int
++rtl8366rb_port_pre_bridge_flags(struct dsa_switch *ds, int port,
++				struct switchdev_brport_flags flags,
++				struct netlink_ext_ack *extack)
++{
++	/* We support enabling/disabling learning */
++	if (flags.mask & ~(BR_LEARNING))
++		return -EINVAL;
++
++	return 0;
++}
++
++static int
++rtl8366rb_port_bridge_flags(struct dsa_switch *ds, int port,
++			    struct switchdev_brport_flags flags,
++			    struct netlink_ext_ack *extack)
++{
++	struct realtek_smi *smi = ds->priv;
++	int ret;
++
++	if (flags.mask & BR_LEARNING) {
++		ret = regmap_update_bits(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
++					 BIT(port),
++					 (flags.val & BR_LEARNING) ? 0 : BIT(port));
++		if (ret)
++			return ret;
++	}
++
++	return 0;
++}
++
++static void
++rtl8366rb_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
++{
++	struct realtek_smi *smi = ds->priv;
++	u32 val;
++	int i;
++
++	switch (state) {
++	case BR_STATE_DISABLED:
++		val = RTL8366RB_STP_STATE_DISABLED;
++		break;
++	case BR_STATE_BLOCKING:
++	case BR_STATE_LISTENING:
++		val = RTL8366RB_STP_STATE_BLOCKING;
++		break;
++	case BR_STATE_LEARNING:
++		val = RTL8366RB_STP_STATE_LEARNING;
++		break;
++	case BR_STATE_FORWARDING:
++		val = RTL8366RB_STP_STATE_FORWARDING;
++		break;
++	default:
++		dev_err(smi->dev, "unknown bridge state requested\n");
++		return;
++	}
++
++	/* Set the same status for the port on all the FIDs */
++	for (i = 0; i < RTL8366RB_NUM_FIDS; i++) {
++		regmap_update_bits(smi->map, RTL8366RB_STP_STATE_BASE + i,
++				   RTL8366RB_STP_STATE_MASK(port),
++				   RTL8366RB_STP_STATE(port, val));
++	}
++}
++
++static void
++rtl8366rb_port_fast_age(struct dsa_switch *ds, int port)
++{
++	struct realtek_smi *smi = ds->priv;
++
++	/* This will age out any learned L2 entries */
++	regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
++			   BIT(port), BIT(port));
++	/* Restore the normal state of things */
++	regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
++			   BIT(port), 0);
++}
++
++static int rtl8366rb_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
++{
++	struct realtek_smi *smi = ds->priv;
++	struct rtl8366rb *rb;
++	unsigned int max_mtu;
++	u32 len;
++	int i;
++
++	/* Cache the per-port MTU setting */
++	rb = smi->chip_data;
++	rb->max_mtu[port] = new_mtu;
++
++	/* Cap the frame length for the entire switch at the largest
++	 * MTU set on any one port: the biggest value set for any
++	 * single port becomes the maximum for the whole switch.
++	 *
++	 * The first setting, 1522 bytes, is the max IP packet of 1500
++	 * bytes plus 18 bytes of ethernet overhead (1518 bytes total)
++	 * plus the 4-byte CPU tag. This function should consider the
++	 * parameter an SDU, so the MTU passed for this setting is
++	 * 1518 bytes. The same logic of subtracting the 4-byte DSA
++	 * tag applies to the other settings.
++	 */
++	max_mtu = 1518;
++	for (i = 0; i < RTL8366RB_NUM_PORTS; i++) {
++		if (rb->max_mtu[i] > max_mtu)
++			max_mtu = rb->max_mtu[i];
++	}
++	if (max_mtu <= 1518)
++		len = RTL8366RB_SGCR_MAX_LENGTH_1522;
++	else if (max_mtu > 1518 && max_mtu <= 1532)
++		len = RTL8366RB_SGCR_MAX_LENGTH_1536;
++	else if (max_mtu > 1532 && max_mtu <= 1548)
++		len = RTL8366RB_SGCR_MAX_LENGTH_1552;
++	else
++		len = RTL8366RB_SGCR_MAX_LENGTH_16000;
++
++	return regmap_update_bits(smi->map, RTL8366RB_SGCR,
++				  RTL8366RB_SGCR_MAX_LENGTH_MASK,
++				  len);
++}
++
++static int rtl8366rb_max_mtu(struct dsa_switch *ds, int port)
++{
++	/* The max frame length is 16000 bytes, so we subtract the
++	 * 4-byte CPU tag and present a max MTU of 15996 bytes.
++	 */
++	return 15996;
++}
++
++static int rtl8366rb_get_vlan_4k(struct realtek_smi *smi, u32 vid,
++				 struct rtl8366_vlan_4k *vlan4k)
++{
++	u32 data[3];
++	int ret;
++	int i;
++
++	memset(vlan4k, '\0', sizeof(struct rtl8366_vlan_4k));
++
++	if (vid >= RTL8366RB_NUM_VIDS)
++		return -EINVAL;
++
++	/* write VID */
++	ret = regmap_write(smi->map, RTL8366RB_VLAN_TABLE_WRITE_BASE,
++			   vid & RTL8366RB_VLAN_VID_MASK);
++	if (ret)
++		return ret;
++
++	/* write table access control word */
++	ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
++			   RTL8366RB_TABLE_VLAN_READ_CTRL);
++	if (ret)
++		return ret;
++
++	for (i = 0; i < 3; i++) {
++		ret = regmap_read(smi->map,
++				  RTL8366RB_VLAN_TABLE_READ_BASE + i,
++				  &data[i]);
++		if (ret)
++			return ret;
++	}
++
++	vlan4k->vid = vid;
++	vlan4k->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
++			RTL8366RB_VLAN_UNTAG_MASK;
++	vlan4k->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
++	vlan4k->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
++
++	return 0;
++}
++
++static int rtl8366rb_set_vlan_4k(struct realtek_smi *smi,
++				 const struct rtl8366_vlan_4k *vlan4k)
++{
++	u32 data[3];
++	int ret;
++	int i;
++
++	if (vlan4k->vid >= RTL8366RB_NUM_VIDS ||
++	    vlan4k->member > RTL8366RB_VLAN_MEMBER_MASK ||
++	    vlan4k->untag > RTL8366RB_VLAN_UNTAG_MASK ||
++	    vlan4k->fid > RTL8366RB_FIDMAX)
++		return -EINVAL;
++
++	data[0] = vlan4k->vid & RTL8366RB_VLAN_VID_MASK;
++	data[1] = (vlan4k->member & RTL8366RB_VLAN_MEMBER_MASK) |
++		  ((vlan4k->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
++			RTL8366RB_VLAN_UNTAG_SHIFT);
++	data[2] = vlan4k->fid & RTL8366RB_VLAN_FID_MASK;
++
++	for (i = 0; i < 3; i++) {
++		ret = regmap_write(smi->map,
++				   RTL8366RB_VLAN_TABLE_WRITE_BASE + i,
++				   data[i]);
++		if (ret)
++			return ret;
++	}
++
++	/* write table access control word */
++	ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
++			   RTL8366RB_TABLE_VLAN_WRITE_CTRL);
++
++	return ret;
++}
++
++static int rtl8366rb_get_vlan_mc(struct realtek_smi *smi, u32 index,
++				 struct rtl8366_vlan_mc *vlanmc)
++{
++	u32 data[3];
++	int ret;
++	int i;
++
++	memset(vlanmc, '\0', sizeof(struct rtl8366_vlan_mc));
++
++	if (index >= RTL8366RB_NUM_VLANS)
++		return -EINVAL;
++
++	for (i = 0; i < 3; i++) {
++		ret = regmap_read(smi->map,
++				  RTL8366RB_VLAN_MC_BASE(index) + i,
++				  &data[i]);
++		if (ret)
++			return ret;
++	}
++
++	vlanmc->vid = data[0] & RTL8366RB_VLAN_VID_MASK;
++	vlanmc->priority = (data[0] >> RTL8366RB_VLAN_PRIORITY_SHIFT) &
++		RTL8366RB_VLAN_PRIORITY_MASK;
++	vlanmc->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
++		RTL8366RB_VLAN_UNTAG_MASK;
++	vlanmc->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
++	vlanmc->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
++
++	return 0;
++}
++
++static int rtl8366rb_set_vlan_mc(struct realtek_smi *smi, u32 index,
++				 const struct rtl8366_vlan_mc *vlanmc)
++{
++	u32 data[3];
++	int ret;
++	int i;
++
++	if (index >= RTL8366RB_NUM_VLANS ||
++	    vlanmc->vid >= RTL8366RB_NUM_VIDS ||
++	    vlanmc->priority > RTL8366RB_PRIORITYMAX ||
++	    vlanmc->member > RTL8366RB_VLAN_MEMBER_MASK ||
++	    vlanmc->untag > RTL8366RB_VLAN_UNTAG_MASK ||
++	    vlanmc->fid > RTL8366RB_FIDMAX)
++		return -EINVAL;
++
++	data[0] = (vlanmc->vid & RTL8366RB_VLAN_VID_MASK) |
++		  ((vlanmc->priority & RTL8366RB_VLAN_PRIORITY_MASK) <<
++			RTL8366RB_VLAN_PRIORITY_SHIFT);
++	data[1] = (vlanmc->member & RTL8366RB_VLAN_MEMBER_MASK) |
++		  ((vlanmc->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
++			RTL8366RB_VLAN_UNTAG_SHIFT);
++	data[2] = vlanmc->fid & RTL8366RB_VLAN_FID_MASK;
++
++	for (i = 0; i < 3; i++) {
++		ret = regmap_write(smi->map,
++				   RTL8366RB_VLAN_MC_BASE(index) + i,
++				   data[i]);
++		if (ret)
++			return ret;
++	}
++
++	return 0;
++}
++
++static int rtl8366rb_get_mc_index(struct realtek_smi *smi, int port, int *val)
++{
++	u32 data;
++	int ret;
++
++	if (port >= smi->num_ports)
++		return -EINVAL;
++
++	ret = regmap_read(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
++			  &data);
++	if (ret)
++		return ret;
++
++	*val = (data >> RTL8366RB_PORT_VLAN_CTRL_SHIFT(port)) &
++		RTL8366RB_PORT_VLAN_CTRL_MASK;
++
++	return 0;
++}
++
++static int rtl8366rb_set_mc_index(struct realtek_smi *smi, int port, int index)
++{
++	struct rtl8366rb *rb;
++	bool pvid_enabled;
++	int ret;
++
++	rb = smi->chip_data;
++	pvid_enabled = !!index;
++
++	if (port >= smi->num_ports || index >= RTL8366RB_NUM_VLANS)
++		return -EINVAL;
++
++	ret = regmap_update_bits(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
++				RTL8366RB_PORT_VLAN_CTRL_MASK <<
++					RTL8366RB_PORT_VLAN_CTRL_SHIFT(port),
++				(index & RTL8366RB_PORT_VLAN_CTRL_MASK) <<
++					RTL8366RB_PORT_VLAN_CTRL_SHIFT(port));
++	if (ret)
++		return ret;
++
++	rb->pvid_enabled[port] = pvid_enabled;
++
++	/* If VLAN filtering is enabled and PVID is also enabled, we must
++	 * not drop any untagged or C-tagged frames. Make sure to update the
++	 * filtering setting.
++	 */
++	if (dsa_port_is_vlan_filtering(dsa_to_port(smi->ds, port)))
++		ret = rtl8366rb_drop_untagged(smi, port, !pvid_enabled);
++
++	return ret;
++}
++
++static bool rtl8366rb_is_vlan_valid(struct realtek_smi *smi, unsigned int vlan)
++{
++	unsigned int max = RTL8366RB_NUM_VLANS - 1;
++
++	if (smi->vlan4k_enabled)
++		max = RTL8366RB_NUM_VIDS - 1;
++
++	if (vlan > max)
++		return false;
++
++	return true;
++}
++
++static int rtl8366rb_enable_vlan(struct realtek_smi *smi, bool enable)
++{
++	dev_dbg(smi->dev, "%s VLAN\n", enable ? "enable" : "disable");
++	return regmap_update_bits(smi->map,
++				  RTL8366RB_SGCR, RTL8366RB_SGCR_EN_VLAN,
++				  enable ? RTL8366RB_SGCR_EN_VLAN : 0);
++}
++
++static int rtl8366rb_enable_vlan4k(struct realtek_smi *smi, bool enable)
++{
++	dev_dbg(smi->dev, "%s VLAN 4k\n", enable ? "enable" : "disable");
++	return regmap_update_bits(smi->map, RTL8366RB_SGCR,
++				  RTL8366RB_SGCR_EN_VLAN_4KTB,
++				  enable ? RTL8366RB_SGCR_EN_VLAN_4KTB : 0);
++}
++
++static int rtl8366rb_phy_read(struct realtek_smi *smi, int phy, int regnum)
++{
++	u32 val;
++	u32 reg;
++	int ret;
++
++	if (phy > RTL8366RB_PHY_NO_MAX)
++		return -EINVAL;
++
++	ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
++			   RTL8366RB_PHY_CTRL_READ);
++	if (ret)
++		return ret;
++
++	reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
++
++	ret = regmap_write(smi->map, reg, 0);
++	if (ret) {
++		dev_err(smi->dev,
++			"failed to write PHY%d reg %04x @ %04x, ret %d\n",
++			phy, regnum, reg, ret);
++		return ret;
++	}
++
++	ret = regmap_read(smi->map, RTL8366RB_PHY_ACCESS_DATA_REG, &val);
++	if (ret)
++		return ret;
++
++	dev_dbg(smi->dev, "read PHY%d register 0x%04x @ %08x, val <- %04x\n",
++		phy, regnum, reg, val);
++
++	return val;
++}
++
++static int rtl8366rb_phy_write(struct realtek_smi *smi, int phy, int regnum,
++			       u16 val)
++{
++	u32 reg;
++	int ret;
++
++	if (phy > RTL8366RB_PHY_NO_MAX)
++		return -EINVAL;
++
++	ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
++			   RTL8366RB_PHY_CTRL_WRITE);
++	if (ret)
++		return ret;
++
++	reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
++
++	dev_dbg(smi->dev, "write PHY%d register 0x%04x @ %04x, val -> %04x\n",
++		phy, regnum, reg, val);
++
++	ret = regmap_write(smi->map, reg, val);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static int rtl8366rb_reset_chip(struct realtek_smi *smi)
++{
++	int timeout = 10;
++	u32 val;
++	int ret;
++
++	realtek_smi_write_reg_noack(smi, RTL8366RB_RESET_CTRL_REG,
++				    RTL8366RB_CHIP_CTRL_RESET_HW);
++	do {
++		usleep_range(20000, 25000);
++		ret = regmap_read(smi->map, RTL8366RB_RESET_CTRL_REG, &val);
++		if (ret)
++			return ret;
++
++		if (!(val & RTL8366RB_CHIP_CTRL_RESET_HW))
++			break;
++	} while (--timeout);
++
++	if (!timeout) {
++		dev_err(smi->dev, "timeout waiting for the switch to reset\n");
++		return -EIO;
++	}
++
++	return 0;
++}
++
++static int rtl8366rb_detect(struct realtek_smi *smi)
++{
++	struct device *dev = smi->dev;
++	int ret;
++	u32 val;
++
++	/* Detect device */
++	ret = regmap_read(smi->map, 0x5c, &val);
++	if (ret) {
++		dev_err(dev, "can't get chip ID (%d)\n", ret);
++		return ret;
++	}
++
++	switch (val) {
++	case 0x6027:
++		dev_info(dev, "found an RTL8366S switch\n");
++		dev_err(dev, "this switch is not yet supported, submit patches!\n");
++		return -ENODEV;
++	case 0x5937:
++		dev_info(dev, "found an RTL8366RB switch\n");
++		smi->cpu_port = RTL8366RB_PORT_NUM_CPU;
++		smi->num_ports = RTL8366RB_NUM_PORTS;
++		smi->num_vlan_mc = RTL8366RB_NUM_VLANS;
++		smi->mib_counters = rtl8366rb_mib_counters;
++		smi->num_mib_counters = ARRAY_SIZE(rtl8366rb_mib_counters);
++		break;
++	default:
++		dev_info(dev, "found an Unknown Realtek switch (id=0x%04x)\n",
++			 val);
++		break;
++	}
++
++	ret = rtl8366rb_reset_chip(smi);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
++static const struct dsa_switch_ops rtl8366rb_switch_ops = {
++	.get_tag_protocol = rtl8366_get_tag_protocol,
++	.setup = rtl8366rb_setup,
++	.phylink_mac_link_up = rtl8366rb_mac_link_up,
++	.phylink_mac_link_down = rtl8366rb_mac_link_down,
++	.get_strings = rtl8366_get_strings,
++	.get_ethtool_stats = rtl8366_get_ethtool_stats,
++	.get_sset_count = rtl8366_get_sset_count,
++	.port_bridge_join = rtl8366rb_port_bridge_join,
++	.port_bridge_leave = rtl8366rb_port_bridge_leave,
++	.port_vlan_filtering = rtl8366rb_vlan_filtering,
++	.port_vlan_add = rtl8366_vlan_add,
++	.port_vlan_del = rtl8366_vlan_del,
++	.port_enable = rtl8366rb_port_enable,
++	.port_disable = rtl8366rb_port_disable,
++	.port_pre_bridge_flags = rtl8366rb_port_pre_bridge_flags,
++	.port_bridge_flags = rtl8366rb_port_bridge_flags,
++	.port_stp_state_set = rtl8366rb_port_stp_state_set,
++	.port_fast_age = rtl8366rb_port_fast_age,
++	.port_change_mtu = rtl8366rb_change_mtu,
++	.port_max_mtu = rtl8366rb_max_mtu,
++};
++
++static const struct realtek_smi_ops rtl8366rb_smi_ops = {
++	.detect		= rtl8366rb_detect,
++	.get_vlan_mc	= rtl8366rb_get_vlan_mc,
++	.set_vlan_mc	= rtl8366rb_set_vlan_mc,
++	.get_vlan_4k	= rtl8366rb_get_vlan_4k,
++	.set_vlan_4k	= rtl8366rb_set_vlan_4k,
++	.get_mc_index	= rtl8366rb_get_mc_index,
++	.set_mc_index	= rtl8366rb_set_mc_index,
++	.get_mib_counter = rtl8366rb_get_mib_counter,
++	.is_vlan_valid	= rtl8366rb_is_vlan_valid,
++	.enable_vlan	= rtl8366rb_enable_vlan,
++	.enable_vlan4k	= rtl8366rb_enable_vlan4k,
++	.phy_read	= rtl8366rb_phy_read,
++	.phy_write	= rtl8366rb_phy_write,
++};
++
++const struct realtek_smi_variant rtl8366rb_variant = {
++	.ds_ops = &rtl8366rb_switch_ops,
++	.ops = &rtl8366rb_smi_ops,
++	.clk_delay = 10,
++	.cmd_read = 0xa9,
++	.cmd_write = 0xa8,
++	.chip_data_sz = sizeof(struct rtl8366rb),
++};
++EXPORT_SYMBOL_GPL(rtl8366rb_variant);
+diff --git a/drivers/net/dsa/rtl8365mb.c b/drivers/net/dsa/rtl8365mb.c
+deleted file mode 100644
+index 48c0e3e466001..0000000000000
+--- a/drivers/net/dsa/rtl8365mb.c
++++ /dev/null
+@@ -1,1986 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Realtek SMI subdriver for the Realtek RTL8365MB-VC ethernet switch.
+- *
+- * Copyright (C) 2021 Alvin Šipraga <alsi@bang-olufsen.dk>
+- * Copyright (C) 2021 Michael Rasmussen <mir@bang-olufsen.dk>
+- *
+- * The RTL8365MB-VC is a 4+1 port 10/100/1000M switch controller. It includes 4
+- * integrated PHYs for the user facing ports, and an extension interface which
+- * can be connected to the CPU - or another PHY - via either MII, RMII, or
+- * RGMII. The switch is configured via the Realtek Simple Management Interface
+- * (SMI), which uses the MDIO/MDC lines.
+- *
+- * Below is a simplified block diagram of the chip and its relevant interfaces.
+- *
+- *                          .-----------------------------------.
+- *                          |                                   |
+- *         UTP <---------------> Giga PHY <-> PCS <-> P0 GMAC   |
+- *         UTP <---------------> Giga PHY <-> PCS <-> P1 GMAC   |
+- *         UTP <---------------> Giga PHY <-> PCS <-> P2 GMAC   |
+- *         UTP <---------------> Giga PHY <-> PCS <-> P3 GMAC   |
+- *                          |                                   |
+- *     CPU/PHY <-MII/RMII/RGMII--->  Extension  <---> Extension |
+- *                          |       interface 1        GMAC 1   |
+- *                          |                                   |
+- *     SMI driver/ <-MDC/SCL---> Management    ~~~~~~~~~~~~~~   |
+- *        EEPROM   <-MDIO/SDA--> interface     ~REALTEK ~~~~~   |
+- *                          |                  ~RTL8365MB ~~~   |
+- *                          |                  ~GXXXC TAIWAN~   |
+- *        GPIO <--------------> Reset          ~~~~~~~~~~~~~~   |
+- *                          |                                   |
+- *      Interrupt  <----------> Link UP/DOWN events             |
+- *      controller          |                                   |
+- *                          '-----------------------------------'
+- *
+- * The driver uses DSA to integrate the 4 user and 1 extension ports into the
+- * kernel. Netdevices are created for the user ports, as are PHY devices for
+- * their integrated PHYs. The device tree firmware should also specify the link
+- * partner of the extension port - either via a fixed-link or other phy-handle.
+- * See the device tree bindings for more detailed information. Note that the
+- * driver has only been tested with a fixed-link, but in principle it should not
+- * matter.
+- *
+- * NOTE: Currently, only the RGMII interface is implemented in this driver.
+- *
+- * The interrupt line is asserted on link UP/DOWN events. The driver creates a
+- * custom irqchip to handle this interrupt and demultiplex the events by reading
+- * the status registers via SMI. Interrupts are then propagated to the relevant
+- * PHY device.
+- *
+- * The EEPROM contains initial register values which the chip will read over I2C
+- * upon hardware reset. It is also possible to omit the EEPROM. In both cases,
+- * the driver will manually reprogram some registers using jam tables to reach
+- * an initial state defined by the vendor driver.
+- *
+- * This Linux driver is written based on an OS-agnostic vendor driver from
+- * Realtek. The reference GPL-licensed sources can be found in the OpenWrt
+- * source tree under the name rtl8367c. The vendor driver claims to support a
+- * number of similar switch controllers from Realtek, but the only hardware we
+- * have is the RTL8365MB-VC. Moreover, there does not seem to be any chip under
+- * the name RTL8367C. Although one wishes that the 'C' stood for some kind of
+- * common hardware revision, there exist examples of chips with the suffix -VC
+- * which are explicitly not supported by the rtl8367c driver and which instead
+- * require the rtl8367d vendor driver. With all this uncertainty, the driver has
+- * been modestly named rtl8365mb. Future implementors may wish to rename things
+- * accordingly.
+- *
+- * In the same family of chips, some carry up to 8 user ports and up to 2
+- * extension ports. Where possible this driver tries to make things generic, but
+- * more work must be done to support these configurations. According to
+- * documentation from Realtek, the family should include the following chips:
+- *
+- *  - RTL8363NB
+- *  - RTL8363NB-VB
+- *  - RTL8363SC
+- *  - RTL8363SC-VB
+- *  - RTL8364NB
+- *  - RTL8364NB-VB
+- *  - RTL8365MB-VC
+- *  - RTL8366SC
+- *  - RTL8367RB-VB
+- *  - RTL8367SB
+- *  - RTL8367S
+- *  - RTL8370MB
+- *  - RTL8310SR
+- *
+- * Some of the register logic for these additional chips has been skipped over
+- * while implementing this driver. It is therefore not possible to assume that
+- * things will work out-of-the-box for other chips, and a careful review of the
+- * vendor driver may be needed to expand support. The RTL8365MB-VC seems to be
+- * one of the simpler chips.
+- */
+-
+-#include <linux/bitfield.h>
+-#include <linux/bitops.h>
+-#include <linux/interrupt.h>
+-#include <linux/irqdomain.h>
+-#include <linux/mutex.h>
+-#include <linux/of_irq.h>
+-#include <linux/regmap.h>
+-#include <linux/if_bridge.h>
+-
+-#include "realtek-smi-core.h"
+-
+-/* Chip-specific data and limits */
+-#define RTL8365MB_CHIP_ID_8365MB_VC		0x6367
+-#define RTL8365MB_CPU_PORT_NUM_8365MB_VC	6
+-#define RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC	2112
+-
+-/* Family-specific data and limits */
+-#define RTL8365MB_PHYADDRMAX	7
+-#define RTL8365MB_NUM_PHYREGS	32
+-#define RTL8365MB_PHYREGMAX	(RTL8365MB_NUM_PHYREGS - 1)
+-#define RTL8365MB_MAX_NUM_PORTS	(RTL8365MB_CPU_PORT_NUM_8365MB_VC + 1)
+-
+-/* Chip identification registers */
+-#define RTL8365MB_CHIP_ID_REG		0x1300
+-
+-#define RTL8365MB_CHIP_VER_REG		0x1301
+-
+-#define RTL8365MB_MAGIC_REG		0x13C2
+-#define   RTL8365MB_MAGIC_VALUE		0x0249
+-
+-/* Chip reset register */
+-#define RTL8365MB_CHIP_RESET_REG	0x1322
+-#define RTL8365MB_CHIP_RESET_SW_MASK	0x0002
+-#define RTL8365MB_CHIP_RESET_HW_MASK	0x0001
+-
+-/* Interrupt polarity register */
+-#define RTL8365MB_INTR_POLARITY_REG	0x1100
+-#define   RTL8365MB_INTR_POLARITY_MASK	0x0001
+-#define   RTL8365MB_INTR_POLARITY_HIGH	0
+-#define   RTL8365MB_INTR_POLARITY_LOW	1
+-
+-/* Interrupt control/status register - enable/check specific interrupt types */
+-#define RTL8365MB_INTR_CTRL_REG			0x1101
+-#define RTL8365MB_INTR_STATUS_REG		0x1102
+-#define   RTL8365MB_INTR_SLIENT_START_2_MASK	0x1000
+-#define   RTL8365MB_INTR_SLIENT_START_MASK	0x0800
+-#define   RTL8365MB_INTR_ACL_ACTION_MASK	0x0200
+-#define   RTL8365MB_INTR_CABLE_DIAG_FIN_MASK	0x0100
+-#define   RTL8365MB_INTR_INTERRUPT_8051_MASK	0x0080
+-#define   RTL8365MB_INTR_LOOP_DETECTION_MASK	0x0040
+-#define   RTL8365MB_INTR_GREEN_TIMER_MASK	0x0020
+-#define   RTL8365MB_INTR_SPECIAL_CONGEST_MASK	0x0010
+-#define   RTL8365MB_INTR_SPEED_CHANGE_MASK	0x0008
+-#define   RTL8365MB_INTR_LEARN_OVER_MASK	0x0004
+-#define   RTL8365MB_INTR_METER_EXCEEDED_MASK	0x0002
+-#define   RTL8365MB_INTR_LINK_CHANGE_MASK	0x0001
+-#define   RTL8365MB_INTR_ALL_MASK                      \
+-		(RTL8365MB_INTR_SLIENT_START_2_MASK |  \
+-		 RTL8365MB_INTR_SLIENT_START_MASK |    \
+-		 RTL8365MB_INTR_ACL_ACTION_MASK |      \
+-		 RTL8365MB_INTR_CABLE_DIAG_FIN_MASK |  \
+-		 RTL8365MB_INTR_INTERRUPT_8051_MASK |  \
+-		 RTL8365MB_INTR_LOOP_DETECTION_MASK |  \
+-		 RTL8365MB_INTR_GREEN_TIMER_MASK |     \
+-		 RTL8365MB_INTR_SPECIAL_CONGEST_MASK | \
+-		 RTL8365MB_INTR_SPEED_CHANGE_MASK |    \
+-		 RTL8365MB_INTR_LEARN_OVER_MASK |      \
+-		 RTL8365MB_INTR_METER_EXCEEDED_MASK |  \
+-		 RTL8365MB_INTR_LINK_CHANGE_MASK)
+-
+-/* Per-port interrupt type status registers */
+-#define RTL8365MB_PORT_LINKDOWN_IND_REG		0x1106
+-#define   RTL8365MB_PORT_LINKDOWN_IND_MASK	0x07FF
+-
+-#define RTL8365MB_PORT_LINKUP_IND_REG		0x1107
+-#define   RTL8365MB_PORT_LINKUP_IND_MASK	0x07FF
+-
+-/* PHY indirect access registers */
+-#define RTL8365MB_INDIRECT_ACCESS_CTRL_REG			0x1F00
+-#define   RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK		0x0002
+-#define   RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ		0
+-#define   RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE		1
+-#define   RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK		0x0001
+-#define   RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE		1
+-#define RTL8365MB_INDIRECT_ACCESS_STATUS_REG			0x1F01
+-#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG			0x1F02
+-#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK	GENMASK(4, 0)
+-#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK		GENMASK(7, 5)
+-#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK	GENMASK(11, 8)
+-#define   RTL8365MB_PHY_BASE					0x2000
+-#define RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG		0x1F03
+-#define RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG			0x1F04
+-
+-/* PHY OCP address prefix register */
+-#define RTL8365MB_GPHY_OCP_MSB_0_REG			0x1D15
+-#define   RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK	0x0FC0
+-#define RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK		0xFC00
+-
+-/* The PHY OCP addresses of PHY registers 0~31 start here */
+-#define RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE		0xA400
+-
+-/* EXT port interface mode values - used in DIGITAL_INTERFACE_SELECT */
+-#define RTL8365MB_EXT_PORT_MODE_DISABLE		0
+-#define RTL8365MB_EXT_PORT_MODE_RGMII		1
+-#define RTL8365MB_EXT_PORT_MODE_MII_MAC		2
+-#define RTL8365MB_EXT_PORT_MODE_MII_PHY		3
+-#define RTL8365MB_EXT_PORT_MODE_TMII_MAC	4
+-#define RTL8365MB_EXT_PORT_MODE_TMII_PHY	5
+-#define RTL8365MB_EXT_PORT_MODE_GMII		6
+-#define RTL8365MB_EXT_PORT_MODE_RMII_MAC	7
+-#define RTL8365MB_EXT_PORT_MODE_RMII_PHY	8
+-#define RTL8365MB_EXT_PORT_MODE_SGMII		9
+-#define RTL8365MB_EXT_PORT_MODE_HSGMII		10
+-#define RTL8365MB_EXT_PORT_MODE_1000X_100FX	11
+-#define RTL8365MB_EXT_PORT_MODE_1000X		12
+-#define RTL8365MB_EXT_PORT_MODE_100FX		13
+-
+-/* EXT port interface mode configuration registers 0~1 */
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0		0x1305
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG1		0x13C3
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(_extport)   \
+-		(RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0 + \
+-		 ((_extport) >> 1) * (0x13C3 - 0x1305))
+-#define   RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(_extport) \
+-		(0xF << (((_extport) % 2)))
+-#define   RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(_extport) \
+-		(((_extport) % 2) * 4)
+-
+-/* EXT port RGMII TX/RX delay configuration registers 1~2 */
+-#define RTL8365MB_EXT_RGMXF_REG1		0x1307
+-#define RTL8365MB_EXT_RGMXF_REG2		0x13C5
+-#define RTL8365MB_EXT_RGMXF_REG(_extport)   \
+-		(RTL8365MB_EXT_RGMXF_REG1 + \
+-		 (((_extport) >> 1) * (0x13C5 - 0x1307)))
+-#define   RTL8365MB_EXT_RGMXF_RXDELAY_MASK	0x0007
+-#define   RTL8365MB_EXT_RGMXF_TXDELAY_MASK	0x0008
+-
+-/* External port speed values - used in DIGITAL_INTERFACE_FORCE */
+-#define RTL8365MB_PORT_SPEED_10M	0
+-#define RTL8365MB_PORT_SPEED_100M	1
+-#define RTL8365MB_PORT_SPEED_1000M	2
+-
+-/* EXT port force configuration registers 0~2 */
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0			0x1310
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG1			0x1311
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG2			0x13C4
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(_extport)   \
+-		(RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0 + \
+-		 ((_extport) & 0x1) +                     \
+-		 ((((_extport) >> 1) & 0x1) * (0x13C4 - 0x1310)))
+-#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK		0x1000
+-#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_NWAY_MASK		0x0080
+-#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK	0x0040
+-#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK	0x0020
+-#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK		0x0010
+-#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK		0x0004
+-#define   RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK		0x0003
+-
+-/* CPU port mask register - controls which ports are treated as CPU ports */
+-#define RTL8365MB_CPU_PORT_MASK_REG	0x1219
+-#define   RTL8365MB_CPU_PORT_MASK_MASK	0x07FF
+-
+-/* CPU control register */
+-#define RTL8365MB_CPU_CTRL_REG			0x121A
+-#define   RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK	0x0400
+-#define   RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK	0x0200
+-#define   RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK	0x0080
+-#define   RTL8365MB_CPU_CTRL_TAG_POSITION_MASK	0x0040
+-#define   RTL8365MB_CPU_CTRL_TRAP_PORT_MASK	0x0038
+-#define   RTL8365MB_CPU_CTRL_INSERTMODE_MASK	0x0006
+-#define   RTL8365MB_CPU_CTRL_EN_MASK		0x0001
+-
+-/* Maximum packet length register */
+-#define RTL8365MB_CFG0_MAX_LEN_REG	0x088C
+-#define   RTL8365MB_CFG0_MAX_LEN_MASK	0x3FFF
+-
+-/* Port learning limit registers */
+-#define RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE		0x0A20
+-#define RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(_physport) \
+-		(RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE + (_physport))
+-
+-/* Port isolation (forwarding mask) registers */
+-#define RTL8365MB_PORT_ISOLATION_REG_BASE		0x08A2
+-#define RTL8365MB_PORT_ISOLATION_REG(_physport) \
+-		(RTL8365MB_PORT_ISOLATION_REG_BASE + (_physport))
+-#define   RTL8365MB_PORT_ISOLATION_MASK			0x07FF
+-
+-/* MSTP port state registers - indexed by tree instance */
+-#define RTL8365MB_MSTI_CTRL_BASE			0x0A00
+-#define RTL8365MB_MSTI_CTRL_REG(_msti, _physport) \
+-		(RTL8365MB_MSTI_CTRL_BASE + ((_msti) << 1) + ((_physport) >> 3))
+-#define   RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(_physport) ((_physport) << 1)
+-#define   RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(_physport) \
+-		(0x3 << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET((_physport)))
+-
+-/* MIB counter value registers */
+-#define RTL8365MB_MIB_COUNTER_BASE	0x1000
+-#define RTL8365MB_MIB_COUNTER_REG(_x)	(RTL8365MB_MIB_COUNTER_BASE + (_x))
+-
+-/* MIB counter address register */
+-#define RTL8365MB_MIB_ADDRESS_REG		0x1004
+-#define   RTL8365MB_MIB_ADDRESS_PORT_OFFSET	0x007C
+-#define   RTL8365MB_MIB_ADDRESS(_p, _x) \
+-		(((RTL8365MB_MIB_ADDRESS_PORT_OFFSET) * (_p) + (_x)) >> 2)
+-
+-#define RTL8365MB_MIB_CTRL0_REG			0x1005
+-#define   RTL8365MB_MIB_CTRL0_RESET_MASK	0x0002
+-#define   RTL8365MB_MIB_CTRL0_BUSY_MASK		0x0001
+-
+-/* The DSA callback .get_stats64 runs in atomic context, so we are not allowed
+- * to block. On the other hand, accessing MIB counters absolutely requires us to
+- * block. The solution is thus to schedule work which polls the MIB counters
+- * asynchronously and updates some private data, which the callback can then
+- * fetch atomically. Three seconds should be a good enough polling interval.
+- */
+-#define RTL8365MB_STATS_INTERVAL_JIFFIES	(3 * HZ)
+-
+-enum rtl8365mb_mib_counter_index {
+-	RTL8365MB_MIB_ifInOctets,
+-	RTL8365MB_MIB_dot3StatsFCSErrors,
+-	RTL8365MB_MIB_dot3StatsSymbolErrors,
+-	RTL8365MB_MIB_dot3InPauseFrames,
+-	RTL8365MB_MIB_dot3ControlInUnknownOpcodes,
+-	RTL8365MB_MIB_etherStatsFragments,
+-	RTL8365MB_MIB_etherStatsJabbers,
+-	RTL8365MB_MIB_ifInUcastPkts,
+-	RTL8365MB_MIB_etherStatsDropEvents,
+-	RTL8365MB_MIB_ifInMulticastPkts,
+-	RTL8365MB_MIB_ifInBroadcastPkts,
+-	RTL8365MB_MIB_inMldChecksumError,
+-	RTL8365MB_MIB_inIgmpChecksumError,
+-	RTL8365MB_MIB_inMldSpecificQuery,
+-	RTL8365MB_MIB_inMldGeneralQuery,
+-	RTL8365MB_MIB_inIgmpSpecificQuery,
+-	RTL8365MB_MIB_inIgmpGeneralQuery,
+-	RTL8365MB_MIB_inMldLeaves,
+-	RTL8365MB_MIB_inIgmpLeaves,
+-	RTL8365MB_MIB_etherStatsOctets,
+-	RTL8365MB_MIB_etherStatsUnderSizePkts,
+-	RTL8365MB_MIB_etherOversizeStats,
+-	RTL8365MB_MIB_etherStatsPkts64Octets,
+-	RTL8365MB_MIB_etherStatsPkts65to127Octets,
+-	RTL8365MB_MIB_etherStatsPkts128to255Octets,
+-	RTL8365MB_MIB_etherStatsPkts256to511Octets,
+-	RTL8365MB_MIB_etherStatsPkts512to1023Octets,
+-	RTL8365MB_MIB_etherStatsPkts1024to1518Octets,
+-	RTL8365MB_MIB_ifOutOctets,
+-	RTL8365MB_MIB_dot3StatsSingleCollisionFrames,
+-	RTL8365MB_MIB_dot3StatsMultipleCollisionFrames,
+-	RTL8365MB_MIB_dot3StatsDeferredTransmissions,
+-	RTL8365MB_MIB_dot3StatsLateCollisions,
+-	RTL8365MB_MIB_etherStatsCollisions,
+-	RTL8365MB_MIB_dot3StatsExcessiveCollisions,
+-	RTL8365MB_MIB_dot3OutPauseFrames,
+-	RTL8365MB_MIB_ifOutDiscards,
+-	RTL8365MB_MIB_dot1dTpPortInDiscards,
+-	RTL8365MB_MIB_ifOutUcastPkts,
+-	RTL8365MB_MIB_ifOutMulticastPkts,
+-	RTL8365MB_MIB_ifOutBroadcastPkts,
+-	RTL8365MB_MIB_outOampduPkts,
+-	RTL8365MB_MIB_inOampduPkts,
+-	RTL8365MB_MIB_inIgmpJoinsSuccess,
+-	RTL8365MB_MIB_inIgmpJoinsFail,
+-	RTL8365MB_MIB_inMldJoinsSuccess,
+-	RTL8365MB_MIB_inMldJoinsFail,
+-	RTL8365MB_MIB_inReportSuppressionDrop,
+-	RTL8365MB_MIB_inLeaveSuppressionDrop,
+-	RTL8365MB_MIB_outIgmpReports,
+-	RTL8365MB_MIB_outIgmpLeaves,
+-	RTL8365MB_MIB_outIgmpGeneralQuery,
+-	RTL8365MB_MIB_outIgmpSpecificQuery,
+-	RTL8365MB_MIB_outMldReports,
+-	RTL8365MB_MIB_outMldLeaves,
+-	RTL8365MB_MIB_outMldGeneralQuery,
+-	RTL8365MB_MIB_outMldSpecificQuery,
+-	RTL8365MB_MIB_inKnownMulticastPkts,
+-	RTL8365MB_MIB_END,
+-};
+-
+-struct rtl8365mb_mib_counter {
+-	u32 offset;
+-	u32 length;
+-	const char *name;
+-};
+-
+-#define RTL8365MB_MAKE_MIB_COUNTER(_offset, _length, _name) \
+-		[RTL8365MB_MIB_ ## _name] = { _offset, _length, #_name }
+-
+-static struct rtl8365mb_mib_counter rtl8365mb_mib_counters[] = {
+-	RTL8365MB_MAKE_MIB_COUNTER(0, 4, ifInOctets),
+-	RTL8365MB_MAKE_MIB_COUNTER(4, 2, dot3StatsFCSErrors),
+-	RTL8365MB_MAKE_MIB_COUNTER(6, 2, dot3StatsSymbolErrors),
+-	RTL8365MB_MAKE_MIB_COUNTER(8, 2, dot3InPauseFrames),
+-	RTL8365MB_MAKE_MIB_COUNTER(10, 2, dot3ControlInUnknownOpcodes),
+-	RTL8365MB_MAKE_MIB_COUNTER(12, 2, etherStatsFragments),
+-	RTL8365MB_MAKE_MIB_COUNTER(14, 2, etherStatsJabbers),
+-	RTL8365MB_MAKE_MIB_COUNTER(16, 2, ifInUcastPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(18, 2, etherStatsDropEvents),
+-	RTL8365MB_MAKE_MIB_COUNTER(20, 2, ifInMulticastPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(22, 2, ifInBroadcastPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(24, 2, inMldChecksumError),
+-	RTL8365MB_MAKE_MIB_COUNTER(26, 2, inIgmpChecksumError),
+-	RTL8365MB_MAKE_MIB_COUNTER(28, 2, inMldSpecificQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(30, 2, inMldGeneralQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(32, 2, inIgmpSpecificQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(34, 2, inIgmpGeneralQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(36, 2, inMldLeaves),
+-	RTL8365MB_MAKE_MIB_COUNTER(38, 2, inIgmpLeaves),
+-	RTL8365MB_MAKE_MIB_COUNTER(40, 4, etherStatsOctets),
+-	RTL8365MB_MAKE_MIB_COUNTER(44, 2, etherStatsUnderSizePkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(46, 2, etherOversizeStats),
+-	RTL8365MB_MAKE_MIB_COUNTER(48, 2, etherStatsPkts64Octets),
+-	RTL8365MB_MAKE_MIB_COUNTER(50, 2, etherStatsPkts65to127Octets),
+-	RTL8365MB_MAKE_MIB_COUNTER(52, 2, etherStatsPkts128to255Octets),
+-	RTL8365MB_MAKE_MIB_COUNTER(54, 2, etherStatsPkts256to511Octets),
+-	RTL8365MB_MAKE_MIB_COUNTER(56, 2, etherStatsPkts512to1023Octets),
+-	RTL8365MB_MAKE_MIB_COUNTER(58, 2, etherStatsPkts1024to1518Octets),
+-	RTL8365MB_MAKE_MIB_COUNTER(60, 4, ifOutOctets),
+-	RTL8365MB_MAKE_MIB_COUNTER(64, 2, dot3StatsSingleCollisionFrames),
+-	RTL8365MB_MAKE_MIB_COUNTER(66, 2, dot3StatsMultipleCollisionFrames),
+-	RTL8365MB_MAKE_MIB_COUNTER(68, 2, dot3StatsDeferredTransmissions),
+-	RTL8365MB_MAKE_MIB_COUNTER(70, 2, dot3StatsLateCollisions),
+-	RTL8365MB_MAKE_MIB_COUNTER(72, 2, etherStatsCollisions),
+-	RTL8365MB_MAKE_MIB_COUNTER(74, 2, dot3StatsExcessiveCollisions),
+-	RTL8365MB_MAKE_MIB_COUNTER(76, 2, dot3OutPauseFrames),
+-	RTL8365MB_MAKE_MIB_COUNTER(78, 2, ifOutDiscards),
+-	RTL8365MB_MAKE_MIB_COUNTER(80, 2, dot1dTpPortInDiscards),
+-	RTL8365MB_MAKE_MIB_COUNTER(82, 2, ifOutUcastPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(84, 2, ifOutMulticastPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(86, 2, ifOutBroadcastPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(88, 2, outOampduPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(90, 2, inOampduPkts),
+-	RTL8365MB_MAKE_MIB_COUNTER(92, 4, inIgmpJoinsSuccess),
+-	RTL8365MB_MAKE_MIB_COUNTER(96, 2, inIgmpJoinsFail),
+-	RTL8365MB_MAKE_MIB_COUNTER(98, 2, inMldJoinsSuccess),
+-	RTL8365MB_MAKE_MIB_COUNTER(100, 2, inMldJoinsFail),
+-	RTL8365MB_MAKE_MIB_COUNTER(102, 2, inReportSuppressionDrop),
+-	RTL8365MB_MAKE_MIB_COUNTER(104, 2, inLeaveSuppressionDrop),
+-	RTL8365MB_MAKE_MIB_COUNTER(106, 2, outIgmpReports),
+-	RTL8365MB_MAKE_MIB_COUNTER(108, 2, outIgmpLeaves),
+-	RTL8365MB_MAKE_MIB_COUNTER(110, 2, outIgmpGeneralQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(112, 2, outIgmpSpecificQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(114, 2, outMldReports),
+-	RTL8365MB_MAKE_MIB_COUNTER(116, 2, outMldLeaves),
+-	RTL8365MB_MAKE_MIB_COUNTER(118, 2, outMldGeneralQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(120, 2, outMldSpecificQuery),
+-	RTL8365MB_MAKE_MIB_COUNTER(122, 2, inKnownMulticastPkts),
+-};
+-
+-static_assert(ARRAY_SIZE(rtl8365mb_mib_counters) == RTL8365MB_MIB_END);
+-
+-struct rtl8365mb_jam_tbl_entry {
+-	u16 reg;
+-	u16 val;
+-};
+-
+-/* Lifted from the vendor driver sources */
+-static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_8365mb_vc[] = {
+-	{ 0x13EB, 0x15BB }, { 0x1303, 0x06D6 }, { 0x1304, 0x0700 },
+-	{ 0x13E2, 0x003F }, { 0x13F9, 0x0090 }, { 0x121E, 0x03CA },
+-	{ 0x1233, 0x0352 }, { 0x1237, 0x00A0 }, { 0x123A, 0x0030 },
+-	{ 0x1239, 0x0084 }, { 0x0301, 0x1000 }, { 0x1349, 0x001F },
+-	{ 0x18E0, 0x4004 }, { 0x122B, 0x241C }, { 0x1305, 0xC000 },
+-	{ 0x13F0, 0x0000 },
+-};
+-
+-static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_common[] = {
+-	{ 0x1200, 0x7FCB }, { 0x0884, 0x0003 }, { 0x06EB, 0x0001 },
+-	{ 0x03Fa, 0x0007 }, { 0x08C8, 0x00C0 }, { 0x0A30, 0x020E },
+-	{ 0x0800, 0x0000 }, { 0x0802, 0x0000 }, { 0x09DA, 0x0013 },
+-	{ 0x1D32, 0x0002 },
+-};
+-
+-enum rtl8365mb_stp_state {
+-	RTL8365MB_STP_STATE_DISABLED = 0,
+-	RTL8365MB_STP_STATE_BLOCKING = 1,
+-	RTL8365MB_STP_STATE_LEARNING = 2,
+-	RTL8365MB_STP_STATE_FORWARDING = 3,
+-};
+-
+-enum rtl8365mb_cpu_insert {
+-	RTL8365MB_CPU_INSERT_TO_ALL = 0,
+-	RTL8365MB_CPU_INSERT_TO_TRAPPING = 1,
+-	RTL8365MB_CPU_INSERT_TO_NONE = 2,
+-};
+-
+-enum rtl8365mb_cpu_position {
+-	RTL8365MB_CPU_POS_AFTER_SA = 0,
+-	RTL8365MB_CPU_POS_BEFORE_CRC = 1,
+-};
+-
+-enum rtl8365mb_cpu_format {
+-	RTL8365MB_CPU_FORMAT_8BYTES = 0,
+-	RTL8365MB_CPU_FORMAT_4BYTES = 1,
+-};
+-
+-enum rtl8365mb_cpu_rxlen {
+-	RTL8365MB_CPU_RXLEN_72BYTES = 0,
+-	RTL8365MB_CPU_RXLEN_64BYTES = 1,
+-};
+-
+-/**
+- * struct rtl8365mb_cpu - CPU port configuration
+- * @enable: enable/disable hardware insertion of CPU tag in switch->CPU frames
+- * @mask: port mask of ports that should parse CPU tags
+- * @trap_port: forward trapped frames to this port
+- * @insert: CPU tag insertion mode in switch->CPU frames
+- * @position: position of CPU tag in frame
+- * @rx_length: minimum CPU RX length
+- * @format: CPU tag format
+- *
+- * Represents the CPU tagging and CPU port configuration of the switch. These
+- * settings are configurable at runtime.
+- */
+-struct rtl8365mb_cpu {
+-	bool enable;
+-	u32 mask;
+-	u32 trap_port;
+-	enum rtl8365mb_cpu_insert insert;
+-	enum rtl8365mb_cpu_position position;
+-	enum rtl8365mb_cpu_rxlen rx_length;
+-	enum rtl8365mb_cpu_format format;
+-};
+-
+-/**
+- * struct rtl8365mb_port - private per-port data
+- * @smi: pointer to parent realtek_smi data
+- * @index: DSA port index, same as dsa_port::index
+- * @stats: link statistics populated by rtl8365mb_stats_poll, ready for atomic
+- *         access via rtl8365mb_get_stats64
+- * @stats_lock: protect the stats structure during read/update
+- * @mib_work: delayed work for polling MIB counters
+- */
+-struct rtl8365mb_port {
+-	struct realtek_smi *smi;
+-	unsigned int index;
+-	struct rtnl_link_stats64 stats;
+-	spinlock_t stats_lock;
+-	struct delayed_work mib_work;
+-};
+-
+-/**
+- * struct rtl8365mb - private chip-specific driver data
+- * @smi: pointer to parent realtek_smi data
+- * @irq: registered IRQ or zero
+- * @chip_id: chip identifier
+- * @chip_ver: chip silicon revision
+- * @port_mask: mask of all ports
+- * @learn_limit_max: maximum number of L2 addresses the chip can learn
+- * @cpu: CPU tagging and CPU port configuration for this chip
+- * @mib_lock: prevent concurrent reads of MIB counters
+- * @ports: per-port data
+- * @jam_table: chip-specific initialization jam table
+- * @jam_size: size of the chip's jam table
+- *
+- * Private data for this driver.
+- */
+-struct rtl8365mb {
+-	struct realtek_smi *smi;
+-	int irq;
+-	u32 chip_id;
+-	u32 chip_ver;
+-	u32 port_mask;
+-	u32 learn_limit_max;
+-	struct rtl8365mb_cpu cpu;
+-	struct mutex mib_lock;
+-	struct rtl8365mb_port ports[RTL8365MB_MAX_NUM_PORTS];
+-	const struct rtl8365mb_jam_tbl_entry *jam_table;
+-	size_t jam_size;
+-};
+-
+-static int rtl8365mb_phy_poll_busy(struct realtek_smi *smi)
+-{
+-	u32 val;
+-
+-	return regmap_read_poll_timeout(smi->map,
+-					RTL8365MB_INDIRECT_ACCESS_STATUS_REG,
+-					val, !val, 10, 100);
+-}
+-
+-static int rtl8365mb_phy_ocp_prepare(struct realtek_smi *smi, int phy,
+-				     u32 ocp_addr)
+-{
+-	u32 val;
+-	int ret;
+-
+-	/* Set OCP prefix */
+-	val = FIELD_GET(RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK, ocp_addr);
+-	ret = regmap_update_bits(
+-		smi->map, RTL8365MB_GPHY_OCP_MSB_0_REG,
+-		RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK,
+-		FIELD_PREP(RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK, val));
+-	if (ret)
+-		return ret;
+-
+-	/* Set PHY register address */
+-	val = RTL8365MB_PHY_BASE;
+-	val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK, phy);
+-	val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK,
+-			  ocp_addr >> 1);
+-	val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK,
+-			  ocp_addr >> 6);
+-	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG,
+-			   val);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static int rtl8365mb_phy_ocp_read(struct realtek_smi *smi, int phy,
+-				  u32 ocp_addr, u16 *data)
+-{
+-	u32 val;
+-	int ret;
+-
+-	ret = rtl8365mb_phy_poll_busy(smi);
+-	if (ret)
+-		return ret;
+-
+-	ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
+-	if (ret)
+-		return ret;
+-
+-	/* Execute read operation */
+-	val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
+-			 RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
+-	      FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
+-			 RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ);
+-	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
+-	if (ret)
+-		return ret;
+-
+-	ret = rtl8365mb_phy_poll_busy(smi);
+-	if (ret)
+-		return ret;
+-
+-	/* Get PHY register data */
+-	ret = regmap_read(smi->map, RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG,
+-			  &val);
+-	if (ret)
+-		return ret;
+-
+-	*data = val & 0xFFFF;
+-
+-	return 0;
+-}
+-
+-static int rtl8365mb_phy_ocp_write(struct realtek_smi *smi, int phy,
+-				   u32 ocp_addr, u16 data)
+-{
+-	u32 val;
+-	int ret;
+-
+-	ret = rtl8365mb_phy_poll_busy(smi);
+-	if (ret)
+-		return ret;
+-
+-	ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
+-	if (ret)
+-		return ret;
+-
+-	/* Set PHY register data */
+-	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG,
+-			   data);
+-	if (ret)
+-		return ret;
+-
+-	/* Execute write operation */
+-	val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
+-			 RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
+-	      FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
+-			 RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE);
+-	ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
+-	if (ret)
+-		return ret;
+-
+-	ret = rtl8365mb_phy_poll_busy(smi);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static int rtl8365mb_phy_read(struct realtek_smi *smi, int phy, int regnum)
+-{
+-	u32 ocp_addr;
+-	u16 val;
+-	int ret;
+-
+-	if (phy > RTL8365MB_PHYADDRMAX)
+-		return -EINVAL;
+-
+-	if (regnum > RTL8365MB_PHYREGMAX)
+-		return -EINVAL;
+-
+-	ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
+-
+-	ret = rtl8365mb_phy_ocp_read(smi, phy, ocp_addr, &val);
+-	if (ret) {
+-		dev_err(smi->dev,
+-			"failed to read PHY%d reg %02x @ %04x, ret %d\n", phy,
+-			regnum, ocp_addr, ret);
+-		return ret;
+-	}
+-
+-	dev_dbg(smi->dev, "read PHY%d register 0x%02x @ %04x, val <- %04x\n",
+-		phy, regnum, ocp_addr, val);
+-
+-	return val;
+-}
+-
+-static int rtl8365mb_phy_write(struct realtek_smi *smi, int phy, int regnum,
+-			       u16 val)
+-{
+-	u32 ocp_addr;
+-	int ret;
+-
+-	if (phy > RTL8365MB_PHYADDRMAX)
+-		return -EINVAL;
+-
+-	if (regnum > RTL8365MB_PHYREGMAX)
+-		return -EINVAL;
+-
+-	ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
+-
+-	ret = rtl8365mb_phy_ocp_write(smi, phy, ocp_addr, val);
+-	if (ret) {
+-		dev_err(smi->dev,
+-			"failed to write PHY%d reg %02x @ %04x, ret %d\n", phy,
+-			regnum, ocp_addr, ret);
+-		return ret;
+-	}
+-
+-	dev_dbg(smi->dev, "write PHY%d register 0x%02x @ %04x, val -> %04x\n",
+-		phy, regnum, ocp_addr, val);
+-
+-	return 0;
+-}
+-
+-static enum dsa_tag_protocol
+-rtl8365mb_get_tag_protocol(struct dsa_switch *ds, int port,
+-			   enum dsa_tag_protocol mp)
+-{
+-	return DSA_TAG_PROTO_RTL8_4;
+-}
+-
+-static int rtl8365mb_ext_config_rgmii(struct realtek_smi *smi, int port,
+-				      phy_interface_t interface)
+-{
+-	struct device_node *dn;
+-	struct dsa_port *dp;
+-	int tx_delay = 0;
+-	int rx_delay = 0;
+-	int ext_port;
+-	u32 val;
+-	int ret;
+-
+-	if (port == smi->cpu_port) {
+-		ext_port = 1;
+-	} else {
+-		dev_err(smi->dev, "only one EXT port is currently supported\n");
+-		return -EINVAL;
+-	}
+-
+-	dp = dsa_to_port(smi->ds, port);
+-	dn = dp->dn;
+-
+-	/* Set the RGMII TX/RX delay
+-	 *
+-	 * The Realtek vendor driver indicates the following possible
+-	 * configuration settings:
+-	 *
+-	 *   TX delay:
+-	 *     0 = no delay, 1 = 2 ns delay
+-	 *   RX delay:
+-	 *     0 = no delay, 7 = maximum delay
+-	 *     Each step is approximately 0.3 ns, so the maximum delay is about
+-	 *     2.1 ns.
+-	 *
+-	 * The vendor driver also states that this must be configured *before*
+-	 * forcing the external interface into a particular mode, which is done
+-	 * in the rtl8365mb_phylink_mac_link_{up,down} functions.
+-	 *
+-	 * Only configure an RGMII TX (resp. RX) delay if the
+-	 * tx-internal-delay-ps (resp. rx-internal-delay-ps) OF property is
+-	 * specified. We ignore the detail of the RGMII interface mode
+-	 * (RGMII_{RXID, TXID, etc.}), as this is considered to be a PHY-only
+-	 * property.
+-	 */
+-	if (!of_property_read_u32(dn, "tx-internal-delay-ps", &val)) {
+-		val = val / 1000; /* convert to ns */
+-
+-		if (val == 0 || val == 2)
+-			tx_delay = val / 2;
+-		else
+-			dev_warn(smi->dev,
+-				 "EXT port TX delay must be 0 or 2 ns\n");
+-	}
+-
+-	if (!of_property_read_u32(dn, "rx-internal-delay-ps", &val)) {
+-		val = DIV_ROUND_CLOSEST(val, 300); /* convert to 0.3 ns step */
+-
+-		if (val <= 7)
+-			rx_delay = val;
+-		else
+-			dev_warn(smi->dev,
+-				 "EXT port RX delay must be 0 to 2.1 ns\n");
+-	}
+-
+-	ret = regmap_update_bits(
+-		smi->map, RTL8365MB_EXT_RGMXF_REG(ext_port),
+-		RTL8365MB_EXT_RGMXF_TXDELAY_MASK |
+-			RTL8365MB_EXT_RGMXF_RXDELAY_MASK,
+-		FIELD_PREP(RTL8365MB_EXT_RGMXF_TXDELAY_MASK, tx_delay) |
+-			FIELD_PREP(RTL8365MB_EXT_RGMXF_RXDELAY_MASK, rx_delay));
+-	if (ret)
+-		return ret;
+-
+-	ret = regmap_update_bits(
+-		smi->map, RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(ext_port),
+-		RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(ext_port),
+-		RTL8365MB_EXT_PORT_MODE_RGMII
+-			<< RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(
+-				   ext_port));
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static int rtl8365mb_ext_config_forcemode(struct realtek_smi *smi, int port,
+-					  bool link, int speed, int duplex,
+-					  bool tx_pause, bool rx_pause)
+-{
+-	u32 r_tx_pause;
+-	u32 r_rx_pause;
+-	u32 r_duplex;
+-	u32 r_speed;
+-	u32 r_link;
+-	int ext_port;
+-	int val;
+-	int ret;
+-
+-	if (port == smi->cpu_port) {
+-		ext_port = 1;
+-	} else {
+-		dev_err(smi->dev, "only one EXT port is currently supported\n");
+-		return -EINVAL;
+-	}
+-
+-	if (link) {
+-		/* Force the link up with the desired configuration */
+-		r_link = 1;
+-		r_rx_pause = rx_pause ? 1 : 0;
+-		r_tx_pause = tx_pause ? 1 : 0;
+-
+-		if (speed == SPEED_1000) {
+-			r_speed = RTL8365MB_PORT_SPEED_1000M;
+-		} else if (speed == SPEED_100) {
+-			r_speed = RTL8365MB_PORT_SPEED_100M;
+-		} else if (speed == SPEED_10) {
+-			r_speed = RTL8365MB_PORT_SPEED_10M;
+-		} else {
+-			dev_err(smi->dev, "unsupported port speed %s\n",
+-				phy_speed_to_str(speed));
+-			return -EINVAL;
+-		}
+-
+-		if (duplex == DUPLEX_FULL) {
+-			r_duplex = 1;
+-		} else if (duplex == DUPLEX_HALF) {
+-			r_duplex = 0;
+-		} else {
+-			dev_err(smi->dev, "unsupported duplex %s\n",
+-				phy_duplex_to_str(duplex));
+-			return -EINVAL;
+-		}
+-	} else {
+-		/* Force the link down and reset any programmed configuration */
+-		r_link = 0;
+-		r_tx_pause = 0;
+-		r_rx_pause = 0;
+-		r_speed = 0;
+-		r_duplex = 0;
+-	}
+-
+-	val = FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK, 1) |
+-	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK,
+-			 r_tx_pause) |
+-	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK,
+-			 r_rx_pause) |
+-	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK, r_link) |
+-	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK,
+-			 r_duplex) |
+-	      FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK, r_speed);
+-	ret = regmap_write(smi->map,
+-			   RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(ext_port),
+-			   val);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static bool rtl8365mb_phy_mode_supported(struct dsa_switch *ds, int port,
+-					 phy_interface_t interface)
+-{
+-	if (dsa_is_user_port(ds, port) &&
+-	    (interface == PHY_INTERFACE_MODE_NA ||
+-	     interface == PHY_INTERFACE_MODE_INTERNAL))
+-		/* Internal PHY */
+-		return true;
+-	else if (dsa_is_cpu_port(ds, port) &&
+-		 phy_interface_mode_is_rgmii(interface))
+-		/* Extension MAC */
+-		return true;
+-
+-	return false;
+-}
+-
+-static void rtl8365mb_phylink_validate(struct dsa_switch *ds, int port,
+-				       unsigned long *supported,
+-				       struct phylink_link_state *state)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0 };
+-
+-	/* include/linux/phylink.h says:
+-	 *     When @state->interface is %PHY_INTERFACE_MODE_NA, phylink
+-	 *     expects the MAC driver to return all supported link modes.
+-	 */
+-	if (state->interface != PHY_INTERFACE_MODE_NA &&
+-	    !rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
+-		dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
+-			phy_modes(state->interface), port);
+-		linkmode_zero(supported);
+-		return;
+-	}
+-
+-	phylink_set_port_modes(mask);
+-
+-	phylink_set(mask, Autoneg);
+-	phylink_set(mask, Pause);
+-	phylink_set(mask, Asym_Pause);
+-
+-	phylink_set(mask, 10baseT_Half);
+-	phylink_set(mask, 10baseT_Full);
+-	phylink_set(mask, 100baseT_Half);
+-	phylink_set(mask, 100baseT_Full);
+-	phylink_set(mask, 1000baseT_Full);
+-
+-	linkmode_and(supported, supported, mask);
+-	linkmode_and(state->advertising, state->advertising, mask);
+-}
+-
+-static void rtl8365mb_phylink_mac_config(struct dsa_switch *ds, int port,
+-					 unsigned int mode,
+-					 const struct phylink_link_state *state)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int ret;
+-
+-	if (!rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
+-		dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
+-			phy_modes(state->interface), port);
+-		return;
+-	}
+-
+-	if (mode != MLO_AN_PHY && mode != MLO_AN_FIXED) {
+-		dev_err(smi->dev,
+-			"port %d supports only conventional PHY or fixed-link\n",
+-			port);
+-		return;
+-	}
+-
+-	if (phy_interface_mode_is_rgmii(state->interface)) {
+-		ret = rtl8365mb_ext_config_rgmii(smi, port, state->interface);
+-		if (ret)
+-			dev_err(smi->dev,
+-				"failed to configure RGMII mode on port %d: %d\n",
+-				port, ret);
+-		return;
+-	}
+-
+-	/* TODO: Implement MII and RMII modes, which the RTL8365MB-VC also
+-	 * supports
+-	 */
+-}
+-
+-static void rtl8365mb_phylink_mac_link_down(struct dsa_switch *ds, int port,
+-					    unsigned int mode,
+-					    phy_interface_t interface)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb_port *p;
+-	struct rtl8365mb *mb;
+-	int ret;
+-
+-	mb = smi->chip_data;
+-	p = &mb->ports[port];
+-	cancel_delayed_work_sync(&p->mib_work);
+-
+-	if (phy_interface_mode_is_rgmii(interface)) {
+-		ret = rtl8365mb_ext_config_forcemode(smi, port, false, 0, 0,
+-						     false, false);
+-		if (ret)
+-			dev_err(smi->dev,
+-				"failed to reset forced mode on port %d: %d\n",
+-				port, ret);
+-
+-		return;
+-	}
+-}
+-
+-static void rtl8365mb_phylink_mac_link_up(struct dsa_switch *ds, int port,
+-					  unsigned int mode,
+-					  phy_interface_t interface,
+-					  struct phy_device *phydev, int speed,
+-					  int duplex, bool tx_pause,
+-					  bool rx_pause)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb_port *p;
+-	struct rtl8365mb *mb;
+-	int ret;
+-
+-	mb = smi->chip_data;
+-	p = &mb->ports[port];
+-	schedule_delayed_work(&p->mib_work, 0);
+-
+-	if (phy_interface_mode_is_rgmii(interface)) {
+-		ret = rtl8365mb_ext_config_forcemode(smi, port, true, speed,
+-						     duplex, tx_pause,
+-						     rx_pause);
+-		if (ret)
+-			dev_err(smi->dev,
+-				"failed to force mode on port %d: %d\n", port,
+-				ret);
+-
+-		return;
+-	}
+-}
+-
+-static void rtl8365mb_port_stp_state_set(struct dsa_switch *ds, int port,
+-					 u8 state)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	enum rtl8365mb_stp_state val;
+-	int msti = 0;
+-
+-	switch (state) {
+-	case BR_STATE_DISABLED:
+-		val = RTL8365MB_STP_STATE_DISABLED;
+-		break;
+-	case BR_STATE_BLOCKING:
+-	case BR_STATE_LISTENING:
+-		val = RTL8365MB_STP_STATE_BLOCKING;
+-		break;
+-	case BR_STATE_LEARNING:
+-		val = RTL8365MB_STP_STATE_LEARNING;
+-		break;
+-	case BR_STATE_FORWARDING:
+-		val = RTL8365MB_STP_STATE_FORWARDING;
+-		break;
+-	default:
+-		dev_err(smi->dev, "invalid STP state: %u\n", state);
+-		return;
+-	}
+-
+-	regmap_update_bits(smi->map, RTL8365MB_MSTI_CTRL_REG(msti, port),
+-			   RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(port),
+-			   val << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(port));
+-}
+-
+-static int rtl8365mb_port_set_learning(struct realtek_smi *smi, int port,
+-				       bool enable)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-
+-	/* Enable/disable learning by limiting the number of L2 addresses the
+-	 * port can learn. Realtek documentation states that a limit of zero
+-	 * disables learning. When enabling learning, set it to the chip's
+-	 * maximum.
+-	 */
+-	return regmap_write(smi->map, RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(port),
+-			    enable ? mb->learn_limit_max : 0);
+-}
+-
+-static int rtl8365mb_port_set_isolation(struct realtek_smi *smi, int port,
+-					u32 mask)
+-{
+-	return regmap_write(smi->map, RTL8365MB_PORT_ISOLATION_REG(port), mask);
+-}
+-
+-static int rtl8365mb_mib_counter_read(struct realtek_smi *smi, int port,
+-				      u32 offset, u32 length, u64 *mibvalue)
+-{
+-	u64 tmpvalue = 0;
+-	u32 val;
+-	int ret;
+-	int i;
+-
+-	/* The MIB address is an SRAM address. We request a particular address
+-	 * and then poll the control register before reading the value from some
+-	 * counter registers.
+-	 */
+-	ret = regmap_write(smi->map, RTL8365MB_MIB_ADDRESS_REG,
+-			   RTL8365MB_MIB_ADDRESS(port, offset));
+-	if (ret)
+-		return ret;
+-
+-	/* Poll for completion */
+-	ret = regmap_read_poll_timeout(smi->map, RTL8365MB_MIB_CTRL0_REG, val,
+-				       !(val & RTL8365MB_MIB_CTRL0_BUSY_MASK),
+-				       10, 100);
+-	if (ret)
+-		return ret;
+-
+-	/* Presumably this indicates a MIB counter read failure */
+-	if (val & RTL8365MB_MIB_CTRL0_RESET_MASK)
+-		return -EIO;
+-
+-	/* There are four MIB counter registers each holding a 16 bit word of a
+-	 * MIB counter. Depending on the offset, we should read from the upper
+-	 * two or lower two registers. In case the MIB counter is 4 words, we
+-	 * read from all four registers.
+-	 */
+-	if (length == 4)
+-		offset = 3;
+-	else
+-		offset = (offset + 1) % 4;
+-
+-	/* Read the MIB counter 16 bits at a time */
+-	for (i = 0; i < length; i++) {
+-		ret = regmap_read(smi->map,
+-				  RTL8365MB_MIB_COUNTER_REG(offset - i), &val);
+-		if (ret)
+-			return ret;
+-
+-		tmpvalue = ((tmpvalue) << 16) | (val & 0xFFFF);
+-	}
+-
+-	/* Only commit the result if no error occurred */
+-	*mibvalue = tmpvalue;
+-
+-	return 0;
+-}
+-
+-static void rtl8365mb_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb *mb;
+-	int ret;
+-	int i;
+-
+-	mb = smi->chip_data;
+-
+-	mutex_lock(&mb->mib_lock);
+-	for (i = 0; i < RTL8365MB_MIB_END; i++) {
+-		struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
+-
+-		ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
+-						 mib->length, &data[i]);
+-		if (ret) {
+-			dev_err(smi->dev,
+-				"failed to read port %d counters: %d\n", port,
+-				ret);
+-			break;
+-		}
+-	}
+-	mutex_unlock(&mb->mib_lock);
+-}
+-
+-static void rtl8365mb_get_strings(struct dsa_switch *ds, int port, u32 stringset, u8 *data)
+-{
+-	int i;
+-
+-	if (stringset != ETH_SS_STATS)
+-		return;
+-
+-	for (i = 0; i < RTL8365MB_MIB_END; i++) {
+-		struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
+-
+-		strncpy(data + i * ETH_GSTRING_LEN, mib->name, ETH_GSTRING_LEN);
+-	}
+-}
+-
+-static int rtl8365mb_get_sset_count(struct dsa_switch *ds, int port, int sset)
+-{
+-	if (sset != ETH_SS_STATS)
+-		return -EOPNOTSUPP;
+-
+-	return RTL8365MB_MIB_END;
+-}
+-
+-static void rtl8365mb_get_phy_stats(struct dsa_switch *ds, int port,
+-				    struct ethtool_eth_phy_stats *phy_stats)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb_mib_counter *mib;
+-	struct rtl8365mb *mb;
+-
+-	mb = smi->chip_data;
+-	mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3StatsSymbolErrors];
+-
+-	mutex_lock(&mb->mib_lock);
+-	rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
+-				   &phy_stats->SymbolErrorDuringCarrier);
+-	mutex_unlock(&mb->mib_lock);
+-}
+-
+-static void rtl8365mb_get_mac_stats(struct dsa_switch *ds, int port,
+-				    struct ethtool_eth_mac_stats *mac_stats)
+-{
+-	u64 cnt[RTL8365MB_MIB_END] = {
+-		[RTL8365MB_MIB_ifOutOctets] = 1,
+-		[RTL8365MB_MIB_ifOutUcastPkts] = 1,
+-		[RTL8365MB_MIB_ifOutMulticastPkts] = 1,
+-		[RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
+-		[RTL8365MB_MIB_dot3OutPauseFrames] = 1,
+-		[RTL8365MB_MIB_ifOutDiscards] = 1,
+-		[RTL8365MB_MIB_ifInOctets] = 1,
+-		[RTL8365MB_MIB_ifInUcastPkts] = 1,
+-		[RTL8365MB_MIB_ifInMulticastPkts] = 1,
+-		[RTL8365MB_MIB_ifInBroadcastPkts] = 1,
+-		[RTL8365MB_MIB_dot3InPauseFrames] = 1,
+-		[RTL8365MB_MIB_dot3StatsSingleCollisionFrames] = 1,
+-		[RTL8365MB_MIB_dot3StatsMultipleCollisionFrames] = 1,
+-		[RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
+-		[RTL8365MB_MIB_dot3StatsDeferredTransmissions] = 1,
+-		[RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
+-		[RTL8365MB_MIB_dot3StatsExcessiveCollisions] = 1,
+-
+-	};
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb *mb;
+-	int ret;
+-	int i;
+-
+-	mb = smi->chip_data;
+-
+-	mutex_lock(&mb->mib_lock);
+-	for (i = 0; i < RTL8365MB_MIB_END; i++) {
+-		struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
+-
+-		/* Only fetch required MIB counters (marked = 1 above) */
+-		if (!cnt[i])
+-			continue;
+-
+-		ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
+-						 mib->length, &cnt[i]);
+-		if (ret)
+-			break;
+-	}
+-	mutex_unlock(&mb->mib_lock);
+-
+-	/* The RTL8365MB-VC exposes MIB objects, which we have to translate into
+-	 * IEEE 802.3 Managed Objects. This is not always completely faithful,
+-	 * but we try our best. See RFC 3635 for a detailed treatment of the
+-	 * subject.
+-	 */
+-
+-	mac_stats->FramesTransmittedOK = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
+-					 cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
+-					 cnt[RTL8365MB_MIB_ifOutBroadcastPkts] +
+-					 cnt[RTL8365MB_MIB_dot3OutPauseFrames] -
+-					 cnt[RTL8365MB_MIB_ifOutDiscards];
+-	mac_stats->SingleCollisionFrames =
+-		cnt[RTL8365MB_MIB_dot3StatsSingleCollisionFrames];
+-	mac_stats->MultipleCollisionFrames =
+-		cnt[RTL8365MB_MIB_dot3StatsMultipleCollisionFrames];
+-	mac_stats->FramesReceivedOK = cnt[RTL8365MB_MIB_ifInUcastPkts] +
+-				      cnt[RTL8365MB_MIB_ifInMulticastPkts] +
+-				      cnt[RTL8365MB_MIB_ifInBroadcastPkts] +
+-				      cnt[RTL8365MB_MIB_dot3InPauseFrames];
+-	mac_stats->FrameCheckSequenceErrors =
+-		cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
+-	mac_stats->OctetsTransmittedOK = cnt[RTL8365MB_MIB_ifOutOctets] -
+-					 18 * mac_stats->FramesTransmittedOK;
+-	mac_stats->FramesWithDeferredXmissions =
+-		cnt[RTL8365MB_MIB_dot3StatsDeferredTransmissions];
+-	mac_stats->LateCollisions = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
+-	mac_stats->FramesAbortedDueToXSColls =
+-		cnt[RTL8365MB_MIB_dot3StatsExcessiveCollisions];
+-	mac_stats->OctetsReceivedOK = cnt[RTL8365MB_MIB_ifInOctets] -
+-				      18 * mac_stats->FramesReceivedOK;
+-	mac_stats->MulticastFramesXmittedOK =
+-		cnt[RTL8365MB_MIB_ifOutMulticastPkts];
+-	mac_stats->BroadcastFramesXmittedOK =
+-		cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
+-	mac_stats->MulticastFramesReceivedOK =
+-		cnt[RTL8365MB_MIB_ifInMulticastPkts];
+-	mac_stats->BroadcastFramesReceivedOK =
+-		cnt[RTL8365MB_MIB_ifInBroadcastPkts];
+-}
+-
+-static void rtl8365mb_get_ctrl_stats(struct dsa_switch *ds, int port,
+-				     struct ethtool_eth_ctrl_stats *ctrl_stats)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb_mib_counter *mib;
+-	struct rtl8365mb *mb;
+-
+-	mb = smi->chip_data;
+-	mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3ControlInUnknownOpcodes];
+-
+-	mutex_lock(&mb->mib_lock);
+-	rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
+-				   &ctrl_stats->UnsupportedOpcodesReceived);
+-	mutex_unlock(&mb->mib_lock);
+-}
+-
+-static void rtl8365mb_stats_update(struct realtek_smi *smi, int port)
+-{
+-	u64 cnt[RTL8365MB_MIB_END] = {
+-		[RTL8365MB_MIB_ifOutOctets] = 1,
+-		[RTL8365MB_MIB_ifOutUcastPkts] = 1,
+-		[RTL8365MB_MIB_ifOutMulticastPkts] = 1,
+-		[RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
+-		[RTL8365MB_MIB_ifOutDiscards] = 1,
+-		[RTL8365MB_MIB_ifInOctets] = 1,
+-		[RTL8365MB_MIB_ifInUcastPkts] = 1,
+-		[RTL8365MB_MIB_ifInMulticastPkts] = 1,
+-		[RTL8365MB_MIB_ifInBroadcastPkts] = 1,
+-		[RTL8365MB_MIB_etherStatsDropEvents] = 1,
+-		[RTL8365MB_MIB_etherStatsCollisions] = 1,
+-		[RTL8365MB_MIB_etherStatsFragments] = 1,
+-		[RTL8365MB_MIB_etherStatsJabbers] = 1,
+-		[RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
+-		[RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
+-	};
+-	struct rtl8365mb *mb = smi->chip_data;
+-	struct rtnl_link_stats64 *stats;
+-	int ret;
+-	int i;
+-
+-	stats = &mb->ports[port].stats;
+-
+-	mutex_lock(&mb->mib_lock);
+-	for (i = 0; i < RTL8365MB_MIB_END; i++) {
+-		struct rtl8365mb_mib_counter *c = &rtl8365mb_mib_counters[i];
+-
+-		/* Only fetch required MIB counters (marked = 1 above) */
+-		if (!cnt[i])
+-			continue;
+-
+-		ret = rtl8365mb_mib_counter_read(smi, port, c->offset,
+-						 c->length, &cnt[i]);
+-		if (ret)
+-			break;
+-	}
+-	mutex_unlock(&mb->mib_lock);
+-
+-	/* Don't update statistics if there was an error reading the counters */
+-	if (ret)
+-		return;
+-
+-	spin_lock(&mb->ports[port].stats_lock);
+-
+-	stats->rx_packets = cnt[RTL8365MB_MIB_ifInUcastPkts] +
+-			    cnt[RTL8365MB_MIB_ifInMulticastPkts] +
+-			    cnt[RTL8365MB_MIB_ifInBroadcastPkts] -
+-			    cnt[RTL8365MB_MIB_ifOutDiscards];
+-
+-	stats->tx_packets = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
+-			    cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
+-			    cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
+-
+-	/* if{In,Out}Octets includes FCS - remove it */
+-	stats->rx_bytes = cnt[RTL8365MB_MIB_ifInOctets] - 4 * stats->rx_packets;
+-	stats->tx_bytes =
+-		cnt[RTL8365MB_MIB_ifOutOctets] - 4 * stats->tx_packets;
+-
+-	stats->rx_dropped = cnt[RTL8365MB_MIB_etherStatsDropEvents];
+-	stats->tx_dropped = cnt[RTL8365MB_MIB_ifOutDiscards];
+-
+-	stats->multicast = cnt[RTL8365MB_MIB_ifInMulticastPkts];
+-	stats->collisions = cnt[RTL8365MB_MIB_etherStatsCollisions];
+-
+-	stats->rx_length_errors = cnt[RTL8365MB_MIB_etherStatsFragments] +
+-				  cnt[RTL8365MB_MIB_etherStatsJabbers];
+-	stats->rx_crc_errors = cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
+-	stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors;
+-
+-	stats->tx_aborted_errors = cnt[RTL8365MB_MIB_ifOutDiscards];
+-	stats->tx_window_errors = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
+-	stats->tx_errors = stats->tx_aborted_errors + stats->tx_window_errors;
+-
+-	spin_unlock(&mb->ports[port].stats_lock);
+-}
+-
+-static void rtl8365mb_stats_poll(struct work_struct *work)
+-{
+-	struct rtl8365mb_port *p = container_of(to_delayed_work(work),
+-						struct rtl8365mb_port,
+-						mib_work);
+-	struct realtek_smi *smi = p->smi;
+-
+-	rtl8365mb_stats_update(smi, p->index);
+-
+-	schedule_delayed_work(&p->mib_work, RTL8365MB_STATS_INTERVAL_JIFFIES);
+-}
+-
+-static void rtl8365mb_get_stats64(struct dsa_switch *ds, int port,
+-				  struct rtnl_link_stats64 *s)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb_port *p;
+-	struct rtl8365mb *mb;
+-
+-	mb = smi->chip_data;
+-	p = &mb->ports[port];
+-
+-	spin_lock(&p->stats_lock);
+-	memcpy(s, &p->stats, sizeof(*s));
+-	spin_unlock(&p->stats_lock);
+-}
+-
+-static void rtl8365mb_stats_setup(struct realtek_smi *smi)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-	int i;
+-
+-	/* Per-chip global mutex to protect MIB counter access, since doing
+-	 * so requires accessing a series of registers in a particular order.
+-	 */
+-	mutex_init(&mb->mib_lock);
+-
+-	for (i = 0; i < smi->num_ports; i++) {
+-		struct rtl8365mb_port *p = &mb->ports[i];
+-
+-		if (dsa_is_unused_port(smi->ds, i))
+-			continue;
+-
+-		/* Per-port spinlock to protect the stats64 data */
+-		spin_lock_init(&p->stats_lock);
+-
+-		/* This work polls the MIB counters and keeps the stats64 data
+-		 * up-to-date.
+-		 */
+-		INIT_DELAYED_WORK(&p->mib_work, rtl8365mb_stats_poll);
+-	}
+-}
+-
+-static void rtl8365mb_stats_teardown(struct realtek_smi *smi)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-	int i;
+-
+-	for (i = 0; i < smi->num_ports; i++) {
+-		struct rtl8365mb_port *p = &mb->ports[i];
+-
+-		if (dsa_is_unused_port(smi->ds, i))
+-			continue;
+-
+-		cancel_delayed_work_sync(&p->mib_work);
+-	}
+-}
+-
+-static int rtl8365mb_get_and_clear_status_reg(struct realtek_smi *smi, u32 reg,
+-					      u32 *val)
+-{
+-	int ret;
+-
+-	ret = regmap_read(smi->map, reg, val);
+-	if (ret)
+-		return ret;
+-
+-	return regmap_write(smi->map, reg, *val);
+-}
+-
+-static irqreturn_t rtl8365mb_irq(int irq, void *data)
+-{
+-	struct realtek_smi *smi = data;
+-	unsigned long line_changes = 0;
+-	struct rtl8365mb *mb;
+-	u32 stat;
+-	int line;
+-	int ret;
+-
+-	mb = smi->chip_data;
+-
+-	ret = rtl8365mb_get_and_clear_status_reg(smi, RTL8365MB_INTR_STATUS_REG,
+-						 &stat);
+-	if (ret)
+-		goto out_error;
+-
+-	if (stat & RTL8365MB_INTR_LINK_CHANGE_MASK) {
+-		u32 linkdown_ind;
+-		u32 linkup_ind;
+-		u32 val;
+-
+-		ret = rtl8365mb_get_and_clear_status_reg(
+-			smi, RTL8365MB_PORT_LINKUP_IND_REG, &val);
+-		if (ret)
+-			goto out_error;
+-
+-		linkup_ind = FIELD_GET(RTL8365MB_PORT_LINKUP_IND_MASK, val);
+-
+-		ret = rtl8365mb_get_and_clear_status_reg(
+-			smi, RTL8365MB_PORT_LINKDOWN_IND_REG, &val);
+-		if (ret)
+-			goto out_error;
+-
+-		linkdown_ind = FIELD_GET(RTL8365MB_PORT_LINKDOWN_IND_MASK, val);
+-
+-		line_changes = (linkup_ind | linkdown_ind) & mb->port_mask;
+-	}
+-
+-	if (!line_changes)
+-		goto out_none;
+-
+-	for_each_set_bit(line, &line_changes, smi->num_ports) {
+-		int child_irq = irq_find_mapping(smi->irqdomain, line);
+-
+-		handle_nested_irq(child_irq);
+-	}
+-
+-	return IRQ_HANDLED;
+-
+-out_error:
+-	dev_err(smi->dev, "failed to read interrupt status: %d\n", ret);
+-
+-out_none:
+-	return IRQ_NONE;
+-}
+-
+-static struct irq_chip rtl8365mb_irq_chip = {
+-	.name = "rtl8365mb",
+-	/* The hardware doesn't support masking IRQs on a per-port basis */
+-};
+-
+-static int rtl8365mb_irq_map(struct irq_domain *domain, unsigned int irq,
+-			     irq_hw_number_t hwirq)
+-{
+-	irq_set_chip_data(irq, domain->host_data);
+-	irq_set_chip_and_handler(irq, &rtl8365mb_irq_chip, handle_simple_irq);
+-	irq_set_nested_thread(irq, 1);
+-	irq_set_noprobe(irq);
+-
+-	return 0;
+-}
+-
+-static void rtl8365mb_irq_unmap(struct irq_domain *d, unsigned int irq)
+-{
+-	irq_set_nested_thread(irq, 0);
+-	irq_set_chip_and_handler(irq, NULL, NULL);
+-	irq_set_chip_data(irq, NULL);
+-}
+-
+-static const struct irq_domain_ops rtl8365mb_irqdomain_ops = {
+-	.map = rtl8365mb_irq_map,
+-	.unmap = rtl8365mb_irq_unmap,
+-	.xlate = irq_domain_xlate_onecell,
+-};
+-
+-static int rtl8365mb_set_irq_enable(struct realtek_smi *smi, bool enable)
+-{
+-	return regmap_update_bits(smi->map, RTL8365MB_INTR_CTRL_REG,
+-				  RTL8365MB_INTR_LINK_CHANGE_MASK,
+-				  FIELD_PREP(RTL8365MB_INTR_LINK_CHANGE_MASK,
+-					     enable ? 1 : 0));
+-}
+-
+-static int rtl8365mb_irq_enable(struct realtek_smi *smi)
+-{
+-	return rtl8365mb_set_irq_enable(smi, true);
+-}
+-
+-static int rtl8365mb_irq_disable(struct realtek_smi *smi)
+-{
+-	return rtl8365mb_set_irq_enable(smi, false);
+-}
+-
+-static int rtl8365mb_irq_setup(struct realtek_smi *smi)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-	struct device_node *intc;
+-	u32 irq_trig;
+-	int virq;
+-	int irq;
+-	u32 val;
+-	int ret;
+-	int i;
+-
+-	intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
+-	if (!intc) {
+-		dev_err(smi->dev, "missing child interrupt-controller node\n");
+-		return -EINVAL;
+-	}
+-
+-	/* rtl8365mb IRQs cascade off this one */
+-	irq = of_irq_get(intc, 0);
+-	if (irq <= 0) {
+-		if (irq != -EPROBE_DEFER)
+-			dev_err(smi->dev, "failed to get parent irq: %d\n",
+-				irq);
+-		ret = irq ? irq : -EINVAL;
+-		goto out_put_node;
+-	}
+-
+-	smi->irqdomain = irq_domain_add_linear(intc, smi->num_ports,
+-					       &rtl8365mb_irqdomain_ops, smi);
+-	if (!smi->irqdomain) {
+-		dev_err(smi->dev, "failed to add irq domain\n");
+-		ret = -ENOMEM;
+-		goto out_put_node;
+-	}
+-
+-	for (i = 0; i < smi->num_ports; i++) {
+-		virq = irq_create_mapping(smi->irqdomain, i);
+-		if (!virq) {
+-			dev_err(smi->dev,
+-				"failed to create irq domain mapping\n");
+-			ret = -EINVAL;
+-			goto out_remove_irqdomain;
+-		}
+-
+-		irq_set_parent(virq, irq);
+-	}
+-
+-	/* Configure chip interrupt signal polarity */
+-	irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
+-	switch (irq_trig) {
+-	case IRQF_TRIGGER_RISING:
+-	case IRQF_TRIGGER_HIGH:
+-		val = RTL8365MB_INTR_POLARITY_HIGH;
+-		break;
+-	case IRQF_TRIGGER_FALLING:
+-	case IRQF_TRIGGER_LOW:
+-		val = RTL8365MB_INTR_POLARITY_LOW;
+-		break;
+-	default:
+-		dev_err(smi->dev, "unsupported irq trigger type %u\n",
+-			irq_trig);
+-		ret = -EINVAL;
+-		goto out_remove_irqdomain;
+-	}
+-
+-	ret = regmap_update_bits(smi->map, RTL8365MB_INTR_POLARITY_REG,
+-				 RTL8365MB_INTR_POLARITY_MASK,
+-				 FIELD_PREP(RTL8365MB_INTR_POLARITY_MASK, val));
+-	if (ret)
+-		goto out_remove_irqdomain;
+-
+-	/* Disable the interrupt in case the chip has it enabled on reset */
+-	ret = rtl8365mb_irq_disable(smi);
+-	if (ret)
+-		goto out_remove_irqdomain;
+-
+-	/* Clear the interrupt status register */
+-	ret = regmap_write(smi->map, RTL8365MB_INTR_STATUS_REG,
+-			   RTL8365MB_INTR_ALL_MASK);
+-	if (ret)
+-		goto out_remove_irqdomain;
+-
+-	ret = request_threaded_irq(irq, NULL, rtl8365mb_irq, IRQF_ONESHOT,
+-				   "rtl8365mb", smi);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to request irq: %d\n", ret);
+-		goto out_remove_irqdomain;
+-	}
+-
+-	/* Store the irq so that we know to free it during teardown */
+-	mb->irq = irq;
+-
+-	ret = rtl8365mb_irq_enable(smi);
+-	if (ret)
+-		goto out_free_irq;
+-
+-	of_node_put(intc);
+-
+-	return 0;
+-
+-out_free_irq:
+-	free_irq(mb->irq, smi);
+-	mb->irq = 0;
+-
+-out_remove_irqdomain:
+-	for (i = 0; i < smi->num_ports; i++) {
+-		virq = irq_find_mapping(smi->irqdomain, i);
+-		irq_dispose_mapping(virq);
+-	}
+-
+-	irq_domain_remove(smi->irqdomain);
+-	smi->irqdomain = NULL;
+-
+-out_put_node:
+-	of_node_put(intc);
+-
+-	return ret;
+-}
+-
+-static void rtl8365mb_irq_teardown(struct realtek_smi *smi)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-	int virq;
+-	int i;
+-
+-	if (mb->irq) {
+-		free_irq(mb->irq, smi);
+-		mb->irq = 0;
+-	}
+-
+-	if (smi->irqdomain) {
+-		for (i = 0; i < smi->num_ports; i++) {
+-			virq = irq_find_mapping(smi->irqdomain, i);
+-			irq_dispose_mapping(virq);
+-		}
+-
+-		irq_domain_remove(smi->irqdomain);
+-		smi->irqdomain = NULL;
+-	}
+-}
+-
+-static int rtl8365mb_cpu_config(struct realtek_smi *smi)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-	struct rtl8365mb_cpu *cpu = &mb->cpu;
+-	u32 val;
+-	int ret;
+-
+-	ret = regmap_update_bits(smi->map, RTL8365MB_CPU_PORT_MASK_REG,
+-				 RTL8365MB_CPU_PORT_MASK_MASK,
+-				 FIELD_PREP(RTL8365MB_CPU_PORT_MASK_MASK,
+-					    cpu->mask));
+-	if (ret)
+-		return ret;
+-
+-	val = FIELD_PREP(RTL8365MB_CPU_CTRL_EN_MASK, cpu->enable ? 1 : 0) |
+-	      FIELD_PREP(RTL8365MB_CPU_CTRL_INSERTMODE_MASK, cpu->insert) |
+-	      FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_POSITION_MASK, cpu->position) |
+-	      FIELD_PREP(RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK, cpu->rx_length) |
+-	      FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK, cpu->format) |
+-	      FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_MASK, cpu->trap_port) |
+-	      FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK,
+-			 cpu->trap_port >> 3);
+-	ret = regmap_write(smi->map, RTL8365MB_CPU_CTRL_REG, val);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static int rtl8365mb_switch_init(struct realtek_smi *smi)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-	int ret;
+-	int i;
+-
+-	/* Do any chip-specific init jam before getting to the common stuff */
+-	if (mb->jam_table) {
+-		for (i = 0; i < mb->jam_size; i++) {
+-			ret = regmap_write(smi->map, mb->jam_table[i].reg,
+-					   mb->jam_table[i].val);
+-			if (ret)
+-				return ret;
+-		}
+-	}
+-
+-	/* Common init jam */
+-	for (i = 0; i < ARRAY_SIZE(rtl8365mb_init_jam_common); i++) {
+-		ret = regmap_write(smi->map, rtl8365mb_init_jam_common[i].reg,
+-				   rtl8365mb_init_jam_common[i].val);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	return 0;
+-}
+-
+-static int rtl8365mb_reset_chip(struct realtek_smi *smi)
+-{
+-	u32 val;
+-
+-	realtek_smi_write_reg_noack(smi, RTL8365MB_CHIP_RESET_REG,
+-				    FIELD_PREP(RTL8365MB_CHIP_RESET_HW_MASK,
+-					       1));
+-
+-	/* Realtek documentation says the chip needs 1 second to reset. Sleep
+-	 * for 100 ms before accessing any registers to prevent ACK timeouts.
+-	 */
+-	msleep(100);
+-	return regmap_read_poll_timeout(smi->map, RTL8365MB_CHIP_RESET_REG, val,
+-					!(val & RTL8365MB_CHIP_RESET_HW_MASK),
+-					20000, 1e6);
+-}
+-
+-static int rtl8365mb_setup(struct dsa_switch *ds)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8365mb *mb;
+-	int ret;
+-	int i;
+-
+-	mb = smi->chip_data;
+-
+-	ret = rtl8365mb_reset_chip(smi);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to reset chip: %d\n", ret);
+-		goto out_error;
+-	}
+-
+-	/* Configure switch to vendor-defined initial state */
+-	ret = rtl8365mb_switch_init(smi);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to initialize switch: %d\n", ret);
+-		goto out_error;
+-	}
+-
+-	/* Set up cascading IRQs */
+-	ret = rtl8365mb_irq_setup(smi);
+-	if (ret == -EPROBE_DEFER)
+-		return ret;
+-	else if (ret)
+-		dev_info(smi->dev, "no interrupt support\n");
+-
+-	/* Configure CPU tagging */
+-	ret = rtl8365mb_cpu_config(smi);
+-	if (ret)
+-		goto out_teardown_irq;
+-
+-	/* Configure ports */
+-	for (i = 0; i < smi->num_ports; i++) {
+-		struct rtl8365mb_port *p = &mb->ports[i];
+-
+-		if (dsa_is_unused_port(smi->ds, i))
+-			continue;
+-
+-		/* Set up per-port private data */
+-		p->smi = smi;
+-		p->index = i;
+-
+-		/* Forward only to the CPU */
+-		ret = rtl8365mb_port_set_isolation(smi, i, BIT(smi->cpu_port));
+-		if (ret)
+-			goto out_teardown_irq;
+-
+-		/* Disable learning */
+-		ret = rtl8365mb_port_set_learning(smi, i, false);
+-		if (ret)
+-			goto out_teardown_irq;
+-
+-		/* Set the initial STP state of all ports to DISABLED, otherwise
+-		 * ports will still forward frames to the CPU despite being
+-		 * administratively down by default.
+-		 */
+-		rtl8365mb_port_stp_state_set(smi->ds, i, BR_STATE_DISABLED);
+-	}
+-
+-	/* Set maximum packet length to 1536 bytes */
+-	ret = regmap_update_bits(smi->map, RTL8365MB_CFG0_MAX_LEN_REG,
+-				 RTL8365MB_CFG0_MAX_LEN_MASK,
+-				 FIELD_PREP(RTL8365MB_CFG0_MAX_LEN_MASK, 1536));
+-	if (ret)
+-		goto out_teardown_irq;
+-
+-	ret = realtek_smi_setup_mdio(smi);
+-	if (ret) {
+-		dev_err(smi->dev, "could not set up MDIO bus\n");
+-		goto out_teardown_irq;
+-	}
+-
+-	/* Start statistics counter polling */
+-	rtl8365mb_stats_setup(smi);
+-
+-	return 0;
+-
+-out_teardown_irq:
+-	rtl8365mb_irq_teardown(smi);
+-
+-out_error:
+-	return ret;
+-}
+-
+-static void rtl8365mb_teardown(struct dsa_switch *ds)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-
+-	rtl8365mb_stats_teardown(smi);
+-	rtl8365mb_irq_teardown(smi);
+-}
+-
+-static int rtl8365mb_get_chip_id_and_ver(struct regmap *map, u32 *id, u32 *ver)
+-{
+-	int ret;
+-
+-	/* For some reason we have to write a magic value to an arbitrary
+-	 * register whenever accessing the chip ID/version registers.
+-	 */
+-	ret = regmap_write(map, RTL8365MB_MAGIC_REG, RTL8365MB_MAGIC_VALUE);
+-	if (ret)
+-		return ret;
+-
+-	ret = regmap_read(map, RTL8365MB_CHIP_ID_REG, id);
+-	if (ret)
+-		return ret;
+-
+-	ret = regmap_read(map, RTL8365MB_CHIP_VER_REG, ver);
+-	if (ret)
+-		return ret;
+-
+-	/* Reset magic register */
+-	ret = regmap_write(map, RTL8365MB_MAGIC_REG, 0);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static int rtl8365mb_detect(struct realtek_smi *smi)
+-{
+-	struct rtl8365mb *mb = smi->chip_data;
+-	u32 chip_id;
+-	u32 chip_ver;
+-	int ret;
+-
+-	ret = rtl8365mb_get_chip_id_and_ver(smi->map, &chip_id, &chip_ver);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to read chip id and version: %d\n",
+-			ret);
+-		return ret;
+-	}
+-
+-	switch (chip_id) {
+-	case RTL8365MB_CHIP_ID_8365MB_VC:
+-		dev_info(smi->dev,
+-			 "found an RTL8365MB-VC switch (ver=0x%04x)\n",
+-			 chip_ver);
+-
+-		smi->cpu_port = RTL8365MB_CPU_PORT_NUM_8365MB_VC;
+-		smi->num_ports = smi->cpu_port + 1;
+-
+-		mb->smi = smi;
+-		mb->chip_id = chip_id;
+-		mb->chip_ver = chip_ver;
+-		mb->port_mask = BIT(smi->num_ports) - 1;
+-		mb->learn_limit_max = RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC;
+-		mb->jam_table = rtl8365mb_init_jam_8365mb_vc;
+-		mb->jam_size = ARRAY_SIZE(rtl8365mb_init_jam_8365mb_vc);
+-
+-		mb->cpu.enable = 1;
+-		mb->cpu.mask = BIT(smi->cpu_port);
+-		mb->cpu.trap_port = smi->cpu_port;
+-		mb->cpu.insert = RTL8365MB_CPU_INSERT_TO_ALL;
+-		mb->cpu.position = RTL8365MB_CPU_POS_AFTER_SA;
+-		mb->cpu.rx_length = RTL8365MB_CPU_RXLEN_64BYTES;
+-		mb->cpu.format = RTL8365MB_CPU_FORMAT_8BYTES;
+-
+-		break;
+-	default:
+-		dev_err(smi->dev,
+-			"found an unknown Realtek switch (id=0x%04x, ver=0x%04x)\n",
+-			chip_id, chip_ver);
+-		return -ENODEV;
+-	}
+-
+-	return 0;
+-}
+-
+-static const struct dsa_switch_ops rtl8365mb_switch_ops = {
+-	.get_tag_protocol = rtl8365mb_get_tag_protocol,
+-	.setup = rtl8365mb_setup,
+-	.teardown = rtl8365mb_teardown,
+-	.phylink_validate = rtl8365mb_phylink_validate,
+-	.phylink_mac_config = rtl8365mb_phylink_mac_config,
+-	.phylink_mac_link_down = rtl8365mb_phylink_mac_link_down,
+-	.phylink_mac_link_up = rtl8365mb_phylink_mac_link_up,
+-	.port_stp_state_set = rtl8365mb_port_stp_state_set,
+-	.get_strings = rtl8365mb_get_strings,
+-	.get_ethtool_stats = rtl8365mb_get_ethtool_stats,
+-	.get_sset_count = rtl8365mb_get_sset_count,
+-	.get_eth_phy_stats = rtl8365mb_get_phy_stats,
+-	.get_eth_mac_stats = rtl8365mb_get_mac_stats,
+-	.get_eth_ctrl_stats = rtl8365mb_get_ctrl_stats,
+-	.get_stats64 = rtl8365mb_get_stats64,
+-};
+-
+-static const struct realtek_smi_ops rtl8365mb_smi_ops = {
+-	.detect = rtl8365mb_detect,
+-	.phy_read = rtl8365mb_phy_read,
+-	.phy_write = rtl8365mb_phy_write,
+-};
+-
+-const struct realtek_smi_variant rtl8365mb_variant = {
+-	.ds_ops = &rtl8365mb_switch_ops,
+-	.ops = &rtl8365mb_smi_ops,
+-	.clk_delay = 10,
+-	.cmd_read = 0xb9,
+-	.cmd_write = 0xb8,
+-	.chip_data_sz = sizeof(struct rtl8365mb),
+-};
+-EXPORT_SYMBOL_GPL(rtl8365mb_variant);
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+deleted file mode 100644
+index bdb8d8d348807..0000000000000
+--- a/drivers/net/dsa/rtl8366.c
++++ /dev/null
+@@ -1,448 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Realtek SMI library helpers for the RTL8366x variants
+- * RTL8366RB and RTL8366S
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
+- * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
+- * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
+- */
+-#include <linux/if_bridge.h>
+-#include <net/dsa.h>
+-
+-#include "realtek-smi-core.h"
+-
+-int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used)
+-{
+-	int ret;
+-	int i;
+-
+-	*used = 0;
+-	for (i = 0; i < smi->num_ports; i++) {
+-		int index = 0;
+-
+-		ret = smi->ops->get_mc_index(smi, i, &index);
+-		if (ret)
+-			return ret;
+-
+-		if (mc_index == index) {
+-			*used = 1;
+-			break;
+-		}
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_mc_is_used);
+-
+-/**
+- * rtl8366_obtain_mc() - retrieve or allocate a VLAN member configuration
+- * @smi: the Realtek SMI device instance
+- * @vid: the VLAN ID to look up or allocate
+- * @vlanmc: the pointer will be assigned to a pointer to a valid member config
+- * if successful
+- * @return: index of a new member config or negative error number
+- */
+-static int rtl8366_obtain_mc(struct realtek_smi *smi, int vid,
+-			     struct rtl8366_vlan_mc *vlanmc)
+-{
+-	struct rtl8366_vlan_4k vlan4k;
+-	int ret;
+-	int i;
+-
+-	/* Try to find an existing member config entry for this VID */
+-	for (i = 0; i < smi->num_vlan_mc; i++) {
+-		ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
+-		if (ret) {
+-			dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
+-				i, vid);
+-			return ret;
+-		}
+-
+-		if (vid == vlanmc->vid)
+-			return i;
+-	}
+-
+-	/* We have no MC entry for this VID, try to find an empty one */
+-	for (i = 0; i < smi->num_vlan_mc; i++) {
+-		ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
+-		if (ret) {
+-			dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
+-				i, vid);
+-			return ret;
+-		}
+-
+-		if (vlanmc->vid == 0 && vlanmc->member == 0) {
+-			/* Update the entry from the 4K table */
+-			ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+-			if (ret) {
+-				dev_err(smi->dev, "error looking for 4K VLAN MC %d for VID %d\n",
+-					i, vid);
+-				return ret;
+-			}
+-
+-			vlanmc->vid = vid;
+-			vlanmc->member = vlan4k.member;
+-			vlanmc->untag = vlan4k.untag;
+-			vlanmc->fid = vlan4k.fid;
+-			ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
+-			if (ret) {
+-				dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
+-					i, vid);
+-				return ret;
+-			}
+-
+-			dev_dbg(smi->dev, "created new MC at index %d for VID %d\n",
+-				i, vid);
+-			return i;
+-		}
+-	}
+-
+-	/* MC table is full, try to find an unused entry and replace it */
+-	for (i = 0; i < smi->num_vlan_mc; i++) {
+-		int used;
+-
+-		ret = rtl8366_mc_is_used(smi, i, &used);
+-		if (ret)
+-			return ret;
+-
+-		if (!used) {
+-			/* Update the entry from the 4K table */
+-			ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+-			if (ret)
+-				return ret;
+-
+-			vlanmc->vid = vid;
+-			vlanmc->member = vlan4k.member;
+-			vlanmc->untag = vlan4k.untag;
+-			vlanmc->fid = vlan4k.fid;
+-			ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
+-			if (ret) {
+-				dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
+-					i, vid);
+-				return ret;
+-			}
+-			dev_dbg(smi->dev, "recycled MC at index %i for VID %d\n",
+-				i, vid);
+-			return i;
+-		}
+-	}
+-
+-	dev_err(smi->dev, "all VLAN member configurations are in use\n");
+-	return -ENOSPC;
+-}
+-
+-int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+-		     u32 untag, u32 fid)
+-{
+-	struct rtl8366_vlan_mc vlanmc;
+-	struct rtl8366_vlan_4k vlan4k;
+-	int mc;
+-	int ret;
+-
+-	if (!smi->ops->is_vlan_valid(smi, vid))
+-		return -EINVAL;
+-
+-	dev_dbg(smi->dev,
+-		"setting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
+-		vid, member, untag);
+-
+-	/* Update the 4K table */
+-	ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+-	if (ret)
+-		return ret;
+-
+-	vlan4k.member |= member;
+-	vlan4k.untag |= untag;
+-	vlan4k.fid = fid;
+-	ret = smi->ops->set_vlan_4k(smi, &vlan4k);
+-	if (ret)
+-		return ret;
+-
+-	dev_dbg(smi->dev,
+-		"resulting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
+-		vid, vlan4k.member, vlan4k.untag);
+-
+-	/* Find or allocate a member config for this VID */
+-	ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
+-	if (ret < 0)
+-		return ret;
+-	mc = ret;
+-
+-	/* Update the MC entry */
+-	vlanmc.member |= member;
+-	vlanmc.untag |= untag;
+-	vlanmc.fid = fid;
+-
+-	/* Commit updates to the MC entry */
+-	ret = smi->ops->set_vlan_mc(smi, mc, &vlanmc);
+-	if (ret)
+-		dev_err(smi->dev, "failed to commit changes to VLAN MC index %d for VID %d\n",
+-			mc, vid);
+-	else
+-		dev_dbg(smi->dev,
+-			"resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
+-			vid, vlanmc.member, vlanmc.untag);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_set_vlan);
+-
+-int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
+-		     unsigned int vid)
+-{
+-	struct rtl8366_vlan_mc vlanmc;
+-	int mc;
+-	int ret;
+-
+-	if (!smi->ops->is_vlan_valid(smi, vid))
+-		return -EINVAL;
+-
+-	/* Find or allocate a member config for this VID */
+-	ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
+-	if (ret < 0)
+-		return ret;
+-	mc = ret;
+-
+-	ret = smi->ops->set_mc_index(smi, port, mc);
+-	if (ret) {
+-		dev_err(smi->dev, "set PVID: failed to set MC index %d for port %d\n",
+-			mc, port);
+-		return ret;
+-	}
+-
+-	dev_dbg(smi->dev, "set PVID: the PVID for port %d set to %d using existing MC index %d\n",
+-		port, vid, mc);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_set_pvid);
+-
+-int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable)
+-{
+-	int ret;
+-
+-	/* To enable 4k VLAN, ordinary VLAN must be enabled first,
+-	 * but if we disable 4k VLAN it is fine to leave ordinary
+-	 * VLAN enabled.
+-	 */
+-	if (enable) {
+-		/* Make sure VLAN is ON */
+-		ret = smi->ops->enable_vlan(smi, true);
+-		if (ret)
+-			return ret;
+-
+-		smi->vlan_enabled = true;
+-	}
+-
+-	ret = smi->ops->enable_vlan4k(smi, enable);
+-	if (ret)
+-		return ret;
+-
+-	smi->vlan4k_enabled = enable;
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_enable_vlan4k);
+-
+-int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable)
+-{
+-	int ret;
+-
+-	ret = smi->ops->enable_vlan(smi, enable);
+-	if (ret)
+-		return ret;
+-
+-	smi->vlan_enabled = enable;
+-
+-	/* If we turn VLAN off, make sure that we turn off
+-	 * 4k VLAN as well, if that happened to be on.
+-	 */
+-	if (!enable) {
+-		smi->vlan4k_enabled = false;
+-		ret = smi->ops->enable_vlan4k(smi, false);
+-	}
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_enable_vlan);
+-
+-int rtl8366_reset_vlan(struct realtek_smi *smi)
+-{
+-	struct rtl8366_vlan_mc vlanmc;
+-	int ret;
+-	int i;
+-
+-	rtl8366_enable_vlan(smi, false);
+-	rtl8366_enable_vlan4k(smi, false);
+-
+-	/* Clear the 16 VLAN member configurations */
+-	vlanmc.vid = 0;
+-	vlanmc.priority = 0;
+-	vlanmc.member = 0;
+-	vlanmc.untag = 0;
+-	vlanmc.fid = 0;
+-	for (i = 0; i < smi->num_vlan_mc; i++) {
+-		ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_reset_vlan);
+-
+-int rtl8366_vlan_add(struct dsa_switch *ds, int port,
+-		     const struct switchdev_obj_port_vlan *vlan,
+-		     struct netlink_ext_ack *extack)
+-{
+-	bool untagged = !!(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED);
+-	bool pvid = !!(vlan->flags & BRIDGE_VLAN_INFO_PVID);
+-	struct realtek_smi *smi = ds->priv;
+-	u32 member = 0;
+-	u32 untag = 0;
+-	int ret;
+-
+-	if (!smi->ops->is_vlan_valid(smi, vlan->vid)) {
+-		NL_SET_ERR_MSG_MOD(extack, "VLAN ID not valid");
+-		return -EINVAL;
+-	}
+-
+-	/* Enable VLAN in the hardware
+-	 * FIXME: what's with this 4k business?
+-	 * Just rtl8366_enable_vlan() seems inconclusive.
+-	 */
+-	ret = rtl8366_enable_vlan4k(smi, true);
+-	if (ret) {
+-		NL_SET_ERR_MSG_MOD(extack, "Failed to enable VLAN 4K");
+-		return ret;
+-	}
+-
+-	dev_dbg(smi->dev, "add VLAN %d on port %d, %s, %s\n",
+-		vlan->vid, port, untagged ? "untagged" : "tagged",
+-		pvid ? "PVID" : "no PVID");
+-
+-	member |= BIT(port);
+-
+-	if (untagged)
+-		untag |= BIT(port);
+-
+-	ret = rtl8366_set_vlan(smi, vlan->vid, member, untag, 0);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to set up VLAN %04x", vlan->vid);
+-		return ret;
+-	}
+-
+-	if (!pvid)
+-		return 0;
+-
+-	ret = rtl8366_set_pvid(smi, port, vlan->vid);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to set PVID on port %d to VLAN %04x",
+-			port, vlan->vid);
+-		return ret;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_vlan_add);
+-
+-int rtl8366_vlan_del(struct dsa_switch *ds, int port,
+-		     const struct switchdev_obj_port_vlan *vlan)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int ret, i;
+-
+-	dev_dbg(smi->dev, "del VLAN %d on port %d\n", vlan->vid, port);
+-
+-	for (i = 0; i < smi->num_vlan_mc; i++) {
+-		struct rtl8366_vlan_mc vlanmc;
+-
+-		ret = smi->ops->get_vlan_mc(smi, i, &vlanmc);
+-		if (ret)
+-			return ret;
+-
+-		if (vlan->vid == vlanmc.vid) {
+-			/* Remove this port from the VLAN */
+-			vlanmc.member &= ~BIT(port);
+-			vlanmc.untag &= ~BIT(port);
+-			/*
+-			 * If no ports are members of this VLAN
+-			 * anymore then clear the whole member
+-			 * config so it can be reused.
+-			 */
+-			if (!vlanmc.member) {
+-				vlanmc.vid = 0;
+-				vlanmc.priority = 0;
+-				vlanmc.fid = 0;
+-			}
+-			ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+-			if (ret) {
+-				dev_err(smi->dev,
+-					"failed to remove VLAN %04x\n",
+-					vlan->vid);
+-				return ret;
+-			}
+-			break;
+-		}
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_vlan_del);
+-
+-void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+-			 uint8_t *data)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8366_mib_counter *mib;
+-	int i;
+-
+-	if (port >= smi->num_ports)
+-		return;
+-
+-	for (i = 0; i < smi->num_mib_counters; i++) {
+-		mib = &smi->mib_counters[i];
+-		strncpy(data + i * ETH_GSTRING_LEN,
+-			mib->name, ETH_GSTRING_LEN);
+-	}
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_get_strings);
+-
+-int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-
+-	/* We only support SS_STATS */
+-	if (sset != ETH_SS_STATS)
+-		return 0;
+-	if (port >= smi->num_ports)
+-		return -EINVAL;
+-
+-	return smi->num_mib_counters;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_get_sset_count);
+-
+-void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int i;
+-	int ret;
+-
+-	if (port >= smi->num_ports)
+-		return;
+-
+-	for (i = 0; i < smi->num_mib_counters; i++) {
+-		struct rtl8366_mib_counter *mib;
+-		u64 mibvalue = 0;
+-
+-		mib = &smi->mib_counters[i];
+-		ret = smi->ops->get_mib_counter(smi, port, mib, &mibvalue);
+-		if (ret) {
+-			dev_err(smi->dev, "error reading MIB counter %s\n",
+-				mib->name);
+-		}
+-		data[i] = mibvalue;
+-	}
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_get_ethtool_stats);
+diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c
+deleted file mode 100644
+index 03deacd83e613..0000000000000
+--- a/drivers/net/dsa/rtl8366rb.c
++++ /dev/null
+@@ -1,1813 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Realtek SMI subdriver for the Realtek RTL8366RB ethernet switch
+- *
+- * This is a sparsely documented chip, the only viable documentation seems
+- * to be a patched up code drop from the vendor that appear in various
+- * GPL source trees.
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
+- * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
+- * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
+- */
+-
+-#include <linux/bitops.h>
+-#include <linux/etherdevice.h>
+-#include <linux/if_bridge.h>
+-#include <linux/interrupt.h>
+-#include <linux/irqdomain.h>
+-#include <linux/irqchip/chained_irq.h>
+-#include <linux/of_irq.h>
+-#include <linux/regmap.h>
+-
+-#include "realtek-smi-core.h"
+-
+-#define RTL8366RB_PORT_NUM_CPU		5
+-#define RTL8366RB_NUM_PORTS		6
+-#define RTL8366RB_PHY_NO_MAX		4
+-#define RTL8366RB_PHY_ADDR_MAX		31
+-
+-/* Switch Global Configuration register */
+-#define RTL8366RB_SGCR				0x0000
+-#define RTL8366RB_SGCR_EN_BC_STORM_CTRL		BIT(0)
+-#define RTL8366RB_SGCR_MAX_LENGTH(a)		((a) << 4)
+-#define RTL8366RB_SGCR_MAX_LENGTH_MASK		RTL8366RB_SGCR_MAX_LENGTH(0x3)
+-#define RTL8366RB_SGCR_MAX_LENGTH_1522		RTL8366RB_SGCR_MAX_LENGTH(0x0)
+-#define RTL8366RB_SGCR_MAX_LENGTH_1536		RTL8366RB_SGCR_MAX_LENGTH(0x1)
+-#define RTL8366RB_SGCR_MAX_LENGTH_1552		RTL8366RB_SGCR_MAX_LENGTH(0x2)
+-#define RTL8366RB_SGCR_MAX_LENGTH_16000		RTL8366RB_SGCR_MAX_LENGTH(0x3)
+-#define RTL8366RB_SGCR_EN_VLAN			BIT(13)
+-#define RTL8366RB_SGCR_EN_VLAN_4KTB		BIT(14)
+-
+-/* Port Enable Control register */
+-#define RTL8366RB_PECR				0x0001
+-
+-/* Switch per-port learning disablement register */
+-#define RTL8366RB_PORT_LEARNDIS_CTRL		0x0002
+-
+-/* Security control, actually aging register */
+-#define RTL8366RB_SECURITY_CTRL			0x0003
+-
+-#define RTL8366RB_SSCR2				0x0004
+-#define RTL8366RB_SSCR2_DROP_UNKNOWN_DA		BIT(0)
+-
+-/* Port Mode Control registers */
+-#define RTL8366RB_PMC0				0x0005
+-#define RTL8366RB_PMC0_SPI			BIT(0)
+-#define RTL8366RB_PMC0_EN_AUTOLOAD		BIT(1)
+-#define RTL8366RB_PMC0_PROBE			BIT(2)
+-#define RTL8366RB_PMC0_DIS_BISR			BIT(3)
+-#define RTL8366RB_PMC0_ADCTEST			BIT(4)
+-#define RTL8366RB_PMC0_SRAM_DIAG		BIT(5)
+-#define RTL8366RB_PMC0_EN_SCAN			BIT(6)
+-#define RTL8366RB_PMC0_P4_IOMODE_SHIFT		7
+-#define RTL8366RB_PMC0_P4_IOMODE_MASK		GENMASK(9, 7)
+-#define RTL8366RB_PMC0_P5_IOMODE_SHIFT		10
+-#define RTL8366RB_PMC0_P5_IOMODE_MASK		GENMASK(12, 10)
+-#define RTL8366RB_PMC0_SDSMODE_SHIFT		13
+-#define RTL8366RB_PMC0_SDSMODE_MASK		GENMASK(15, 13)
+-#define RTL8366RB_PMC1				0x0006
+-
+-/* Port Mirror Control Register */
+-#define RTL8366RB_PMCR				0x0007
+-#define RTL8366RB_PMCR_SOURCE_PORT(a)		(a)
+-#define RTL8366RB_PMCR_SOURCE_PORT_MASK		0x000f
+-#define RTL8366RB_PMCR_MONITOR_PORT(a)		((a) << 4)
+-#define RTL8366RB_PMCR_MONITOR_PORT_MASK	0x00f0
+-#define RTL8366RB_PMCR_MIRROR_RX		BIT(8)
+-#define RTL8366RB_PMCR_MIRROR_TX		BIT(9)
+-#define RTL8366RB_PMCR_MIRROR_SPC		BIT(10)
+-#define RTL8366RB_PMCR_MIRROR_ISO		BIT(11)
+-
+-/* bits 0..7 = port 0, bits 8..15 = port 1 */
+-#define RTL8366RB_PAACR0		0x0010
+-/* bits 0..7 = port 2, bits 8..15 = port 3 */
+-#define RTL8366RB_PAACR1		0x0011
+-/* bits 0..7 = port 4, bits 8..15 = port 5 */
+-#define RTL8366RB_PAACR2		0x0012
+-#define RTL8366RB_PAACR_SPEED_10M	0
+-#define RTL8366RB_PAACR_SPEED_100M	1
+-#define RTL8366RB_PAACR_SPEED_1000M	2
+-#define RTL8366RB_PAACR_FULL_DUPLEX	BIT(2)
+-#define RTL8366RB_PAACR_LINK_UP		BIT(4)
+-#define RTL8366RB_PAACR_TX_PAUSE	BIT(5)
+-#define RTL8366RB_PAACR_RX_PAUSE	BIT(6)
+-#define RTL8366RB_PAACR_AN		BIT(7)
+-
+-#define RTL8366RB_PAACR_CPU_PORT	(RTL8366RB_PAACR_SPEED_1000M | \
+-					 RTL8366RB_PAACR_FULL_DUPLEX | \
+-					 RTL8366RB_PAACR_LINK_UP | \
+-					 RTL8366RB_PAACR_TX_PAUSE | \
+-					 RTL8366RB_PAACR_RX_PAUSE)
+-
+-/* bits 0..7 = port 0, bits 8..15 = port 1 */
+-#define RTL8366RB_PSTAT0		0x0014
+-/* bits 0..7 = port 2, bits 8..15 = port 3 */
+-#define RTL8366RB_PSTAT1		0x0015
+-/* bits 0..7 = port 4, bits 8..15 = port 5 */
+-#define RTL8366RB_PSTAT2		0x0016
+-
+-#define RTL8366RB_POWER_SAVING_REG	0x0021
+-
+-/* Spanning tree status (STP) control, two bits per port per FID */
+-#define RTL8366RB_STP_STATE_BASE	0x0050 /* 0x0050..0x0057 */
+-#define RTL8366RB_STP_STATE_DISABLED	0x0
+-#define RTL8366RB_STP_STATE_BLOCKING	0x1
+-#define RTL8366RB_STP_STATE_LEARNING	0x2
+-#define RTL8366RB_STP_STATE_FORWARDING	0x3
+-#define RTL8366RB_STP_MASK		GENMASK(1, 0)
+-#define RTL8366RB_STP_STATE(port, state) \
+-	((state) << ((port) * 2))
+-#define RTL8366RB_STP_STATE_MASK(port) \
+-	RTL8366RB_STP_STATE((port), RTL8366RB_STP_MASK)
+-
+-/* CPU port control reg */
+-#define RTL8368RB_CPU_CTRL_REG		0x0061
+-#define RTL8368RB_CPU_PORTS_MSK		0x00FF
+-/* Disables inserting custom tag length/type 0x8899 */
+-#define RTL8368RB_CPU_NO_TAG		BIT(15)
+-
+-#define RTL8366RB_SMAR0			0x0070 /* bits 0..15 */
+-#define RTL8366RB_SMAR1			0x0071 /* bits 16..31 */
+-#define RTL8366RB_SMAR2			0x0072 /* bits 32..47 */
+-
+-#define RTL8366RB_RESET_CTRL_REG		0x0100
+-#define RTL8366RB_CHIP_CTRL_RESET_HW		BIT(0)
+-#define RTL8366RB_CHIP_CTRL_RESET_SW		BIT(1)
+-
+-#define RTL8366RB_CHIP_ID_REG			0x0509
+-#define RTL8366RB_CHIP_ID_8366			0x5937
+-#define RTL8366RB_CHIP_VERSION_CTRL_REG		0x050A
+-#define RTL8366RB_CHIP_VERSION_MASK		0xf
+-
+-/* PHY registers control */
+-#define RTL8366RB_PHY_ACCESS_CTRL_REG		0x8000
+-#define RTL8366RB_PHY_CTRL_READ			BIT(0)
+-#define RTL8366RB_PHY_CTRL_WRITE		0
+-#define RTL8366RB_PHY_ACCESS_BUSY_REG		0x8001
+-#define RTL8366RB_PHY_INT_BUSY			BIT(0)
+-#define RTL8366RB_PHY_EXT_BUSY			BIT(4)
+-#define RTL8366RB_PHY_ACCESS_DATA_REG		0x8002
+-#define RTL8366RB_PHY_EXT_CTRL_REG		0x8010
+-#define RTL8366RB_PHY_EXT_WRDATA_REG		0x8011
+-#define RTL8366RB_PHY_EXT_RDDATA_REG		0x8012
+-
+-#define RTL8366RB_PHY_REG_MASK			0x1f
+-#define RTL8366RB_PHY_PAGE_OFFSET		5
+-#define RTL8366RB_PHY_PAGE_MASK			(0xf << 5)
+-#define RTL8366RB_PHY_NO_OFFSET			9
+-#define RTL8366RB_PHY_NO_MASK			(0x1f << 9)
+-
+-/* VLAN Ingress Control Register 1, one bit per port.
+- * bit 0 .. 5 will make the switch drop ingress frames without
+- * VID such as untagged or priority-tagged frames for respective
+- * port.
+- * bit 6 .. 11 will make the switch drop ingress frames carrying
+- * a C-tag with VID != 0 for respective port.
+- */
+-#define RTL8366RB_VLAN_INGRESS_CTRL1_REG	0x037E
+-#define RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port)	(BIT((port)) | BIT((port) + 6))
+-
+-/* VLAN Ingress Control Register 2, one bit per port.
+- * bit0 .. bit5 will make the switch drop all ingress frames with
+- * a VLAN classification that does not include the port is in its
+- * member set.
+- */
+-#define RTL8366RB_VLAN_INGRESS_CTRL2_REG	0x037f
+-
+-/* LED control registers */
+-#define RTL8366RB_LED_BLINKRATE_REG		0x0430
+-#define RTL8366RB_LED_BLINKRATE_MASK		0x0007
+-#define RTL8366RB_LED_BLINKRATE_28MS		0x0000
+-#define RTL8366RB_LED_BLINKRATE_56MS		0x0001
+-#define RTL8366RB_LED_BLINKRATE_84MS		0x0002
+-#define RTL8366RB_LED_BLINKRATE_111MS		0x0003
+-#define RTL8366RB_LED_BLINKRATE_222MS		0x0004
+-#define RTL8366RB_LED_BLINKRATE_446MS		0x0005
+-
+-#define RTL8366RB_LED_CTRL_REG			0x0431
+-#define RTL8366RB_LED_OFF			0x0
+-#define RTL8366RB_LED_DUP_COL			0x1
+-#define RTL8366RB_LED_LINK_ACT			0x2
+-#define RTL8366RB_LED_SPD1000			0x3
+-#define RTL8366RB_LED_SPD100			0x4
+-#define RTL8366RB_LED_SPD10			0x5
+-#define RTL8366RB_LED_SPD1000_ACT		0x6
+-#define RTL8366RB_LED_SPD100_ACT		0x7
+-#define RTL8366RB_LED_SPD10_ACT			0x8
+-#define RTL8366RB_LED_SPD100_10_ACT		0x9
+-#define RTL8366RB_LED_FIBER			0xa
+-#define RTL8366RB_LED_AN_FAULT			0xb
+-#define RTL8366RB_LED_LINK_RX			0xc
+-#define RTL8366RB_LED_LINK_TX			0xd
+-#define RTL8366RB_LED_MASTER			0xe
+-#define RTL8366RB_LED_FORCE			0xf
+-#define RTL8366RB_LED_0_1_CTRL_REG		0x0432
+-#define RTL8366RB_LED_1_OFFSET			6
+-#define RTL8366RB_LED_2_3_CTRL_REG		0x0433
+-#define RTL8366RB_LED_3_OFFSET			6
+-
+-#define RTL8366RB_MIB_COUNT			33
+-#define RTL8366RB_GLOBAL_MIB_COUNT		1
+-#define RTL8366RB_MIB_COUNTER_PORT_OFFSET	0x0050
+-#define RTL8366RB_MIB_COUNTER_BASE		0x1000
+-#define RTL8366RB_MIB_CTRL_REG			0x13F0
+-#define RTL8366RB_MIB_CTRL_USER_MASK		0x0FFC
+-#define RTL8366RB_MIB_CTRL_BUSY_MASK		BIT(0)
+-#define RTL8366RB_MIB_CTRL_RESET_MASK		BIT(1)
+-#define RTL8366RB_MIB_CTRL_PORT_RESET(_p)	BIT(2 + (_p))
+-#define RTL8366RB_MIB_CTRL_GLOBAL_RESET		BIT(11)
+-
+-#define RTL8366RB_PORT_VLAN_CTRL_BASE		0x0063
+-#define RTL8366RB_PORT_VLAN_CTRL_REG(_p)  \
+-		(RTL8366RB_PORT_VLAN_CTRL_BASE + (_p) / 4)
+-#define RTL8366RB_PORT_VLAN_CTRL_MASK		0xf
+-#define RTL8366RB_PORT_VLAN_CTRL_SHIFT(_p)	(4 * ((_p) % 4))
+-
+-#define RTL8366RB_VLAN_TABLE_READ_BASE		0x018C
+-#define RTL8366RB_VLAN_TABLE_WRITE_BASE		0x0185
+-
+-#define RTL8366RB_TABLE_ACCESS_CTRL_REG		0x0180
+-#define RTL8366RB_TABLE_VLAN_READ_CTRL		0x0E01
+-#define RTL8366RB_TABLE_VLAN_WRITE_CTRL		0x0F01
+-
+-#define RTL8366RB_VLAN_MC_BASE(_x)		(0x0020 + (_x) * 3)
+-
+-#define RTL8366RB_PORT_LINK_STATUS_BASE		0x0014
+-#define RTL8366RB_PORT_STATUS_SPEED_MASK	0x0003
+-#define RTL8366RB_PORT_STATUS_DUPLEX_MASK	0x0004
+-#define RTL8366RB_PORT_STATUS_LINK_MASK		0x0010
+-#define RTL8366RB_PORT_STATUS_TXPAUSE_MASK	0x0020
+-#define RTL8366RB_PORT_STATUS_RXPAUSE_MASK	0x0040
+-#define RTL8366RB_PORT_STATUS_AN_MASK		0x0080
+-
+-#define RTL8366RB_NUM_VLANS		16
+-#define RTL8366RB_NUM_LEDGROUPS		4
+-#define RTL8366RB_NUM_VIDS		4096
+-#define RTL8366RB_PRIORITYMAX		7
+-#define RTL8366RB_NUM_FIDS		8
+-#define RTL8366RB_FIDMAX		7
+-
+-#define RTL8366RB_PORT_1		BIT(0) /* In userspace port 0 */
+-#define RTL8366RB_PORT_2		BIT(1) /* In userspace port 1 */
+-#define RTL8366RB_PORT_3		BIT(2) /* In userspace port 2 */
+-#define RTL8366RB_PORT_4		BIT(3) /* In userspace port 3 */
+-#define RTL8366RB_PORT_5		BIT(4) /* In userspace port 4 */
+-
+-#define RTL8366RB_PORT_CPU		BIT(5) /* CPU port */
+-
+-#define RTL8366RB_PORT_ALL		(RTL8366RB_PORT_1 |	\
+-					 RTL8366RB_PORT_2 |	\
+-					 RTL8366RB_PORT_3 |	\
+-					 RTL8366RB_PORT_4 |	\
+-					 RTL8366RB_PORT_5 |	\
+-					 RTL8366RB_PORT_CPU)
+-
+-#define RTL8366RB_PORT_ALL_BUT_CPU	(RTL8366RB_PORT_1 |	\
+-					 RTL8366RB_PORT_2 |	\
+-					 RTL8366RB_PORT_3 |	\
+-					 RTL8366RB_PORT_4 |	\
+-					 RTL8366RB_PORT_5)
+-
+-#define RTL8366RB_PORT_ALL_EXTERNAL	(RTL8366RB_PORT_1 |	\
+-					 RTL8366RB_PORT_2 |	\
+-					 RTL8366RB_PORT_3 |	\
+-					 RTL8366RB_PORT_4)
+-
+-#define RTL8366RB_PORT_ALL_INTERNAL	 RTL8366RB_PORT_CPU
+-
+-/* First configuration word per member config, VID and prio */
+-#define RTL8366RB_VLAN_VID_MASK		0xfff
+-#define RTL8366RB_VLAN_PRIORITY_SHIFT	12
+-#define RTL8366RB_VLAN_PRIORITY_MASK	0x7
+-/* Second configuration word per member config, member and untagged */
+-#define RTL8366RB_VLAN_UNTAG_SHIFT	8
+-#define RTL8366RB_VLAN_UNTAG_MASK	0xff
+-#define RTL8366RB_VLAN_MEMBER_MASK	0xff
+-/* Third config word per member config, STAG currently unused */
+-#define RTL8366RB_VLAN_STAG_MBR_MASK	0xff
+-#define RTL8366RB_VLAN_STAG_MBR_SHIFT	8
+-#define RTL8366RB_VLAN_STAG_IDX_MASK	0x7
+-#define RTL8366RB_VLAN_STAG_IDX_SHIFT	5
+-#define RTL8366RB_VLAN_FID_MASK		0x7
+-
+-/* Port ingress bandwidth control */
+-#define RTL8366RB_IB_BASE		0x0200
+-#define RTL8366RB_IB_REG(pnum)		(RTL8366RB_IB_BASE + (pnum))
+-#define RTL8366RB_IB_BDTH_MASK		0x3fff
+-#define RTL8366RB_IB_PREIFG		BIT(14)
+-
+-/* Port egress bandwidth control */
+-#define RTL8366RB_EB_BASE		0x02d1
+-#define RTL8366RB_EB_REG(pnum)		(RTL8366RB_EB_BASE + (pnum))
+-#define RTL8366RB_EB_BDTH_MASK		0x3fff
+-#define RTL8366RB_EB_PREIFG_REG		0x02f8
+-#define RTL8366RB_EB_PREIFG		BIT(9)
+-
+-#define RTL8366RB_BDTH_SW_MAX		1048512 /* 1048576? */
+-#define RTL8366RB_BDTH_UNIT		64
+-#define RTL8366RB_BDTH_REG_DEFAULT	16383
+-
+-/* QOS */
+-#define RTL8366RB_QOS			BIT(15)
+-/* Include/Exclude Preamble and IFG (20 bytes). 0:Exclude, 1:Include. */
+-#define RTL8366RB_QOS_DEFAULT_PREIFG	1
+-
+-/* Interrupt handling */
+-#define RTL8366RB_INTERRUPT_CONTROL_REG	0x0440
+-#define RTL8366RB_INTERRUPT_POLARITY	BIT(0)
+-#define RTL8366RB_P4_RGMII_LED		BIT(2)
+-#define RTL8366RB_INTERRUPT_MASK_REG	0x0441
+-#define RTL8366RB_INTERRUPT_LINK_CHGALL	GENMASK(11, 0)
+-#define RTL8366RB_INTERRUPT_ACLEXCEED	BIT(8)
+-#define RTL8366RB_INTERRUPT_STORMEXCEED	BIT(9)
+-#define RTL8366RB_INTERRUPT_P4_FIBER	BIT(12)
+-#define RTL8366RB_INTERRUPT_P4_UTP	BIT(13)
+-#define RTL8366RB_INTERRUPT_VALID	(RTL8366RB_INTERRUPT_LINK_CHGALL | \
+-					 RTL8366RB_INTERRUPT_ACLEXCEED | \
+-					 RTL8366RB_INTERRUPT_STORMEXCEED | \
+-					 RTL8366RB_INTERRUPT_P4_FIBER | \
+-					 RTL8366RB_INTERRUPT_P4_UTP)
+-#define RTL8366RB_INTERRUPT_STATUS_REG	0x0442
+-#define RTL8366RB_NUM_INTERRUPT		14 /* 0..13 */
+-
+-/* Port isolation registers */
+-#define RTL8366RB_PORT_ISO_BASE		0x0F08
+-#define RTL8366RB_PORT_ISO(pnum)	(RTL8366RB_PORT_ISO_BASE + (pnum))
+-#define RTL8366RB_PORT_ISO_EN		BIT(0)
+-#define RTL8366RB_PORT_ISO_PORTS_MASK	GENMASK(7, 1)
+-#define RTL8366RB_PORT_ISO_PORTS(pmask)	((pmask) << 1)
+-
+-/* bits 0..5 enable force when cleared */
+-#define RTL8366RB_MAC_FORCE_CTRL_REG	0x0F11
+-
+-#define RTL8366RB_OAM_PARSER_REG	0x0F14
+-#define RTL8366RB_OAM_MULTIPLEXER_REG	0x0F15
+-
+-#define RTL8366RB_GREEN_FEATURE_REG	0x0F51
+-#define RTL8366RB_GREEN_FEATURE_MSK	0x0007
+-#define RTL8366RB_GREEN_FEATURE_TX	BIT(0)
+-#define RTL8366RB_GREEN_FEATURE_RX	BIT(2)
+-
+-/**
+- * struct rtl8366rb - RTL8366RB-specific data
+- * @max_mtu: per-port max MTU setting
+- * @pvid_enabled: if PVID is set for respective port
+- */
+-struct rtl8366rb {
+-	unsigned int max_mtu[RTL8366RB_NUM_PORTS];
+-	bool pvid_enabled[RTL8366RB_NUM_PORTS];
+-};
+-
+-static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = {
+-	{ 0,  0, 4, "IfInOctets"				},
+-	{ 0,  4, 4, "EtherStatsOctets"				},
+-	{ 0,  8, 2, "EtherStatsUnderSizePkts"			},
+-	{ 0, 10, 2, "EtherFragments"				},
+-	{ 0, 12, 2, "EtherStatsPkts64Octets"			},
+-	{ 0, 14, 2, "EtherStatsPkts65to127Octets"		},
+-	{ 0, 16, 2, "EtherStatsPkts128to255Octets"		},
+-	{ 0, 18, 2, "EtherStatsPkts256to511Octets"		},
+-	{ 0, 20, 2, "EtherStatsPkts512to1023Octets"		},
+-	{ 0, 22, 2, "EtherStatsPkts1024to1518Octets"		},
+-	{ 0, 24, 2, "EtherOversizeStats"			},
+-	{ 0, 26, 2, "EtherStatsJabbers"				},
+-	{ 0, 28, 2, "IfInUcastPkts"				},
+-	{ 0, 30, 2, "EtherStatsMulticastPkts"			},
+-	{ 0, 32, 2, "EtherStatsBroadcastPkts"			},
+-	{ 0, 34, 2, "EtherStatsDropEvents"			},
+-	{ 0, 36, 2, "Dot3StatsFCSErrors"			},
+-	{ 0, 38, 2, "Dot3StatsSymbolErrors"			},
+-	{ 0, 40, 2, "Dot3InPauseFrames"				},
+-	{ 0, 42, 2, "Dot3ControlInUnknownOpcodes"		},
+-	{ 0, 44, 4, "IfOutOctets"				},
+-	{ 0, 48, 2, "Dot3StatsSingleCollisionFrames"		},
+-	{ 0, 50, 2, "Dot3StatMultipleCollisionFrames"		},
+-	{ 0, 52, 2, "Dot3sDeferredTransmissions"		},
+-	{ 0, 54, 2, "Dot3StatsLateCollisions"			},
+-	{ 0, 56, 2, "EtherStatsCollisions"			},
+-	{ 0, 58, 2, "Dot3StatsExcessiveCollisions"		},
+-	{ 0, 60, 2, "Dot3OutPauseFrames"			},
+-	{ 0, 62, 2, "Dot1dBasePortDelayExceededDiscards"	},
+-	{ 0, 64, 2, "Dot1dTpPortInDiscards"			},
+-	{ 0, 66, 2, "IfOutUcastPkts"				},
+-	{ 0, 68, 2, "IfOutMulticastPkts"			},
+-	{ 0, 70, 2, "IfOutBroadcastPkts"			},
+-};
+-
+-static int rtl8366rb_get_mib_counter(struct realtek_smi *smi,
+-				     int port,
+-				     struct rtl8366_mib_counter *mib,
+-				     u64 *mibvalue)
+-{
+-	u32 addr, val;
+-	int ret;
+-	int i;
+-
+-	addr = RTL8366RB_MIB_COUNTER_BASE +
+-		RTL8366RB_MIB_COUNTER_PORT_OFFSET * (port) +
+-		mib->offset;
+-
+-	/* Writing access counter address first
+-	 * then ASIC will prepare 64bits counter wait for being retrived
+-	 */
+-	ret = regmap_write(smi->map, addr, 0); /* Write whatever */
+-	if (ret)
+-		return ret;
+-
+-	/* Read MIB control register */
+-	ret = regmap_read(smi->map, RTL8366RB_MIB_CTRL_REG, &val);
+-	if (ret)
+-		return -EIO;
+-
+-	if (val & RTL8366RB_MIB_CTRL_BUSY_MASK)
+-		return -EBUSY;
+-
+-	if (val & RTL8366RB_MIB_CTRL_RESET_MASK)
+-		return -EIO;
+-
+-	/* Read each individual MIB 16 bits at the time */
+-	*mibvalue = 0;
+-	for (i = mib->length; i > 0; i--) {
+-		ret = regmap_read(smi->map, addr + (i - 1), &val);
+-		if (ret)
+-			return ret;
+-		*mibvalue = (*mibvalue << 16) | (val & 0xFFFF);
+-	}
+-	return 0;
+-}
+-
+-static u32 rtl8366rb_get_irqmask(struct irq_data *d)
+-{
+-	int line = irqd_to_hwirq(d);
+-	u32 val;
+-
+-	/* For line interrupts we combine link down in bits
+-	 * 6..11 with link up in bits 0..5 into one interrupt.
+-	 */
+-	if (line < 12)
+-		val = BIT(line) | BIT(line + 6);
+-	else
+-		val = BIT(line);
+-	return val;
+-}
+-
+-static void rtl8366rb_mask_irq(struct irq_data *d)
+-{
+-	struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
+-	int ret;
+-
+-	ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
+-				 rtl8366rb_get_irqmask(d), 0);
+-	if (ret)
+-		dev_err(smi->dev, "could not mask IRQ\n");
+-}
+-
+-static void rtl8366rb_unmask_irq(struct irq_data *d)
+-{
+-	struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
+-	int ret;
+-
+-	ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
+-				 rtl8366rb_get_irqmask(d),
+-				 rtl8366rb_get_irqmask(d));
+-	if (ret)
+-		dev_err(smi->dev, "could not unmask IRQ\n");
+-}
+-
+-static irqreturn_t rtl8366rb_irq(int irq, void *data)
+-{
+-	struct realtek_smi *smi = data;
+-	u32 stat;
+-	int ret;
+-
+-	/* This clears the IRQ status register */
+-	ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
+-			  &stat);
+-	if (ret) {
+-		dev_err(smi->dev, "can't read interrupt status\n");
+-		return IRQ_NONE;
+-	}
+-	stat &= RTL8366RB_INTERRUPT_VALID;
+-	if (!stat)
+-		return IRQ_NONE;
+-	while (stat) {
+-		int line = __ffs(stat);
+-		int child_irq;
+-
+-		stat &= ~BIT(line);
+-		/* For line interrupts we combine link down in bits
+-		 * 6..11 with link up in bits 0..5 into one interrupt.
+-		 */
+-		if (line < 12 && line > 5)
+-			line -= 5;
+-		child_irq = irq_find_mapping(smi->irqdomain, line);
+-		handle_nested_irq(child_irq);
+-	}
+-	return IRQ_HANDLED;
+-}
+-
+-static struct irq_chip rtl8366rb_irq_chip = {
+-	.name = "RTL8366RB",
+-	.irq_mask = rtl8366rb_mask_irq,
+-	.irq_unmask = rtl8366rb_unmask_irq,
+-};
+-
+-static int rtl8366rb_irq_map(struct irq_domain *domain, unsigned int irq,
+-			     irq_hw_number_t hwirq)
+-{
+-	irq_set_chip_data(irq, domain->host_data);
+-	irq_set_chip_and_handler(irq, &rtl8366rb_irq_chip, handle_simple_irq);
+-	irq_set_nested_thread(irq, 1);
+-	irq_set_noprobe(irq);
+-
+-	return 0;
+-}
+-
+-static void rtl8366rb_irq_unmap(struct irq_domain *d, unsigned int irq)
+-{
+-	irq_set_nested_thread(irq, 0);
+-	irq_set_chip_and_handler(irq, NULL, NULL);
+-	irq_set_chip_data(irq, NULL);
+-}
+-
+-static const struct irq_domain_ops rtl8366rb_irqdomain_ops = {
+-	.map = rtl8366rb_irq_map,
+-	.unmap = rtl8366rb_irq_unmap,
+-	.xlate  = irq_domain_xlate_onecell,
+-};
+-
+-static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
+-{
+-	struct device_node *intc;
+-	unsigned long irq_trig;
+-	int irq;
+-	int ret;
+-	u32 val;
+-	int i;
+-
+-	intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
+-	if (!intc) {
+-		dev_err(smi->dev, "missing child interrupt-controller node\n");
+-		return -EINVAL;
+-	}
+-	/* RB8366RB IRQs cascade off this one */
+-	irq = of_irq_get(intc, 0);
+-	if (irq <= 0) {
+-		dev_err(smi->dev, "failed to get parent IRQ\n");
+-		ret = irq ? irq : -EINVAL;
+-		goto out_put_node;
+-	}
+-
+-	/* This clears the IRQ status register */
+-	ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
+-			  &val);
+-	if (ret) {
+-		dev_err(smi->dev, "can't read interrupt status\n");
+-		goto out_put_node;
+-	}
+-
+-	/* Fetch IRQ edge information from the descriptor */
+-	irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
+-	switch (irq_trig) {
+-	case IRQF_TRIGGER_RISING:
+-	case IRQF_TRIGGER_HIGH:
+-		dev_info(smi->dev, "active high/rising IRQ\n");
+-		val = 0;
+-		break;
+-	case IRQF_TRIGGER_FALLING:
+-	case IRQF_TRIGGER_LOW:
+-		dev_info(smi->dev, "active low/falling IRQ\n");
+-		val = RTL8366RB_INTERRUPT_POLARITY;
+-		break;
+-	}
+-	ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_CONTROL_REG,
+-				 RTL8366RB_INTERRUPT_POLARITY,
+-				 val);
+-	if (ret) {
+-		dev_err(smi->dev, "could not configure IRQ polarity\n");
+-		goto out_put_node;
+-	}
+-
+-	ret = devm_request_threaded_irq(smi->dev, irq, NULL,
+-					rtl8366rb_irq, IRQF_ONESHOT,
+-					"RTL8366RB", smi);
+-	if (ret) {
+-		dev_err(smi->dev, "unable to request irq: %d\n", ret);
+-		goto out_put_node;
+-	}
+-	smi->irqdomain = irq_domain_add_linear(intc,
+-					       RTL8366RB_NUM_INTERRUPT,
+-					       &rtl8366rb_irqdomain_ops,
+-					       smi);
+-	if (!smi->irqdomain) {
+-		dev_err(smi->dev, "failed to create IRQ domain\n");
+-		ret = -EINVAL;
+-		goto out_put_node;
+-	}
+-	for (i = 0; i < smi->num_ports; i++)
+-		irq_set_parent(irq_create_mapping(smi->irqdomain, i), irq);
+-
+-out_put_node:
+-	of_node_put(intc);
+-	return ret;
+-}
+-
+-static int rtl8366rb_set_addr(struct realtek_smi *smi)
+-{
+-	u8 addr[ETH_ALEN];
+-	u16 val;
+-	int ret;
+-
+-	eth_random_addr(addr);
+-
+-	dev_info(smi->dev, "set MAC: %02X:%02X:%02X:%02X:%02X:%02X\n",
+-		 addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
+-	val = addr[0] << 8 | addr[1];
+-	ret = regmap_write(smi->map, RTL8366RB_SMAR0, val);
+-	if (ret)
+-		return ret;
+-	val = addr[2] << 8 | addr[3];
+-	ret = regmap_write(smi->map, RTL8366RB_SMAR1, val);
+-	if (ret)
+-		return ret;
+-	val = addr[4] << 8 | addr[5];
+-	ret = regmap_write(smi->map, RTL8366RB_SMAR2, val);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-/* Found in a vendor driver */
+-
+-/* Struct for handling the jam tables' entries */
+-struct rtl8366rb_jam_tbl_entry {
+-	u16 reg;
+-	u16 val;
+-};
+-
+-/* For the "version 0" early silicon, appear in most source releases */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_0[] = {
+-	{0x000B, 0x0001}, {0x03A6, 0x0100}, {0x03A7, 0x0001}, {0x02D1, 0x3FFF},
+-	{0x02D2, 0x3FFF}, {0x02D3, 0x3FFF}, {0x02D4, 0x3FFF}, {0x02D5, 0x3FFF},
+-	{0x02D6, 0x3FFF}, {0x02D7, 0x3FFF}, {0x02D8, 0x3FFF}, {0x022B, 0x0688},
+-	{0x022C, 0x0FAC}, {0x03D0, 0x4688}, {0x03D1, 0x01F5}, {0x0000, 0x0830},
+-	{0x02F9, 0x0200}, {0x02F7, 0x7FFF}, {0x02F8, 0x03FF}, {0x0080, 0x03E8},
+-	{0x0081, 0x00CE}, {0x0082, 0x00DA}, {0x0083, 0x0230}, {0xBE0F, 0x2000},
+-	{0x0231, 0x422A}, {0x0232, 0x422A}, {0x0233, 0x422A}, {0x0234, 0x422A},
+-	{0x0235, 0x422A}, {0x0236, 0x422A}, {0x0237, 0x422A}, {0x0238, 0x422A},
+-	{0x0239, 0x422A}, {0x023A, 0x422A}, {0x023B, 0x422A}, {0x023C, 0x422A},
+-	{0x023D, 0x422A}, {0x023E, 0x422A}, {0x023F, 0x422A}, {0x0240, 0x422A},
+-	{0x0241, 0x422A}, {0x0242, 0x422A}, {0x0243, 0x422A}, {0x0244, 0x422A},
+-	{0x0245, 0x422A}, {0x0246, 0x422A}, {0x0247, 0x422A}, {0x0248, 0x422A},
+-	{0x0249, 0x0146}, {0x024A, 0x0146}, {0x024B, 0x0146}, {0xBE03, 0xC961},
+-	{0x024D, 0x0146}, {0x024E, 0x0146}, {0x024F, 0x0146}, {0x0250, 0x0146},
+-	{0xBE64, 0x0226}, {0x0252, 0x0146}, {0x0253, 0x0146}, {0x024C, 0x0146},
+-	{0x0251, 0x0146}, {0x0254, 0x0146}, {0xBE62, 0x3FD0}, {0x0084, 0x0320},
+-	{0x0255, 0x0146}, {0x0256, 0x0146}, {0x0257, 0x0146}, {0x0258, 0x0146},
+-	{0x0259, 0x0146}, {0x025A, 0x0146}, {0x025B, 0x0146}, {0x025C, 0x0146},
+-	{0x025D, 0x0146}, {0x025E, 0x0146}, {0x025F, 0x0146}, {0x0260, 0x0146},
+-	{0x0261, 0xA23F}, {0x0262, 0x0294}, {0x0263, 0xA23F}, {0x0264, 0x0294},
+-	{0x0265, 0xA23F}, {0x0266, 0x0294}, {0x0267, 0xA23F}, {0x0268, 0x0294},
+-	{0x0269, 0xA23F}, {0x026A, 0x0294}, {0x026B, 0xA23F}, {0x026C, 0x0294},
+-	{0x026D, 0xA23F}, {0x026E, 0x0294}, {0x026F, 0xA23F}, {0x0270, 0x0294},
+-	{0x02F5, 0x0048}, {0xBE09, 0x0E00}, {0xBE1E, 0x0FA0}, {0xBE14, 0x8448},
+-	{0xBE15, 0x1007}, {0xBE4A, 0xA284}, {0xC454, 0x3F0B}, {0xC474, 0x3F0B},
+-	{0xBE48, 0x3672}, {0xBE4B, 0x17A7}, {0xBE4C, 0x0B15}, {0xBE52, 0x0EDD},
+-	{0xBE49, 0x8C00}, {0xBE5B, 0x785C}, {0xBE5C, 0x785C}, {0xBE5D, 0x785C},
+-	{0xBE61, 0x368A}, {0xBE63, 0x9B84}, {0xC456, 0xCC13}, {0xC476, 0xCC13},
+-	{0xBE65, 0x307D}, {0xBE6D, 0x0005}, {0xBE6E, 0xE120}, {0xBE2E, 0x7BAF},
+-};
+-
+-/* This v1 init sequence is from Belkin F5D8235 U-Boot release */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_1[] = {
+-	{0x0000, 0x0830}, {0x0001, 0x8000}, {0x0400, 0x8130}, {0xBE78, 0x3C3C},
+-	{0x0431, 0x5432}, {0xBE37, 0x0CE4}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
+-	{0xC44C, 0x1585}, {0xC44C, 0x1185}, {0xC44C, 0x1585}, {0xC46C, 0x1585},
+-	{0xC46C, 0x1185}, {0xC46C, 0x1585}, {0xC451, 0x2135}, {0xC471, 0x2135},
+-	{0xBE10, 0x8140}, {0xBE15, 0x0007}, {0xBE6E, 0xE120}, {0xBE69, 0xD20F},
+-	{0xBE6B, 0x0320}, {0xBE24, 0xB000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF20},
+-	{0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800}, {0xBE24, 0x0000},
+-	{0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60}, {0xBE21, 0x0140},
+-	{0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000}, {0xBE2E, 0x7B7A},
+-	{0xBE36, 0x0CE4}, {0x02F5, 0x0048}, {0xBE77, 0x2940}, {0x000A, 0x83E0},
+-	{0xBE79, 0x3C3C}, {0xBE00, 0x1340},
+-};
+-
+-/* This v2 init sequence is from Belkin F5D8235 U-Boot release */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_2[] = {
+-	{0x0450, 0x0000}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
+-	{0xC44F, 0x6250}, {0xC46F, 0x6250}, {0xC456, 0x0C14}, {0xC476, 0x0C14},
+-	{0xC44C, 0x1C85}, {0xC44C, 0x1885}, {0xC44C, 0x1C85}, {0xC46C, 0x1C85},
+-	{0xC46C, 0x1885}, {0xC46C, 0x1C85}, {0xC44C, 0x0885}, {0xC44C, 0x0881},
+-	{0xC44C, 0x0885}, {0xC46C, 0x0885}, {0xC46C, 0x0881}, {0xC46C, 0x0885},
+-	{0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
+-	{0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6E, 0x0320},
+-	{0xBE77, 0x2940}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
+-	{0x8000, 0x0001}, {0xBE15, 0x1007}, {0x8000, 0x0000}, {0xBE15, 0x1007},
+-	{0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160}, {0xBE10, 0x8140},
+-	{0xBE00, 0x1340}, {0x0F51, 0x0010},
+-};
+-
+-/* Appears in a DDWRT code dump */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_3[] = {
+-	{0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
+-	{0x0F51, 0x0017}, {0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
+-	{0xC456, 0x0C14}, {0xC476, 0x0C14}, {0xC454, 0x3F8B}, {0xC474, 0x3F8B},
+-	{0xC450, 0x2071}, {0xC470, 0x2071}, {0xC451, 0x226B}, {0xC471, 0x226B},
+-	{0xC452, 0xA293}, {0xC472, 0xA293}, {0xC44C, 0x1585}, {0xC44C, 0x1185},
+-	{0xC44C, 0x1585}, {0xC46C, 0x1585}, {0xC46C, 0x1185}, {0xC46C, 0x1585},
+-	{0xC44C, 0x0185}, {0xC44C, 0x0181}, {0xC44C, 0x0185}, {0xC46C, 0x0185},
+-	{0xC46C, 0x0181}, {0xC46C, 0x0185}, {0xBE24, 0xB000}, {0xBE23, 0xFF51},
+-	{0xBE22, 0xDF20}, {0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800},
+-	{0xBE24, 0x0000}, {0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60},
+-	{0xBE21, 0x0140}, {0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000},
+-	{0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
+-	{0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6B, 0x0320},
+-	{0xBE77, 0x2800}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
+-	{0x8000, 0x0001}, {0xBE10, 0x8140}, {0x8000, 0x0000}, {0xBE10, 0x8140},
+-	{0xBE15, 0x1007}, {0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160},
+-	{0xBE10, 0x8140}, {0xBE00, 0x1340}, {0x0450, 0x0000}, {0x0401, 0x0000},
+-};
+-
+-/* Belkin F5D8235 v1, "belkin,f5d8235-v1" */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_f5d8235[] = {
+-	{0x0242, 0x02BF}, {0x0245, 0x02BF}, {0x0248, 0x02BF}, {0x024B, 0x02BF},
+-	{0x024E, 0x02BF}, {0x0251, 0x02BF}, {0x0254, 0x0A3F}, {0x0256, 0x0A3F},
+-	{0x0258, 0x0A3F}, {0x025A, 0x0A3F}, {0x025C, 0x0A3F}, {0x025E, 0x0A3F},
+-	{0x0263, 0x007C}, {0x0100, 0x0004}, {0xBE5B, 0x3500}, {0x800E, 0x200F},
+-	{0xBE1D, 0x0F00}, {0x8001, 0x5011}, {0x800A, 0xA2F4}, {0x800B, 0x17A3},
+-	{0xBE4B, 0x17A3}, {0xBE41, 0x5011}, {0xBE17, 0x2100}, {0x8000, 0x8304},
+-	{0xBE40, 0x8304}, {0xBE4A, 0xA2F4}, {0x800C, 0xA8D5}, {0x8014, 0x5500},
+-	{0x8015, 0x0004}, {0xBE4C, 0xA8D5}, {0xBE59, 0x0008}, {0xBE09, 0x0E00},
+-	{0xBE36, 0x1036}, {0xBE37, 0x1036}, {0x800D, 0x00FF}, {0xBE4D, 0x00FF},
+-};
+-
+-/* DGN3500, "netgear,dgn3500", "netgear,dgn3500b" */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_dgn3500[] = {
+-	{0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0F51, 0x0017},
+-	{0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0}, {0x0450, 0x0000},
+-	{0x0401, 0x0000}, {0x0431, 0x0960},
+-};
+-
+-/* This jam table activates "green ethernet", which means low power mode
+- * and is claimed to detect the cable length and not use more power than
+- * necessary, and the ports should enter power saving mode 10 seconds after
+- * a cable is disconnected. Seems to always be the same.
+- */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_green_jam[] = {
+-	{0xBE78, 0x323C}, {0xBE77, 0x5000}, {0xBE2E, 0x7BA7},
+-	{0xBE59, 0x3459}, {0xBE5A, 0x745A}, {0xBE5B, 0x785C},
+-	{0xBE5C, 0x785C}, {0xBE6E, 0xE120}, {0xBE79, 0x323C},
+-};
+-
+-/* Function that jams the tables in the proper registers */
+-static int rtl8366rb_jam_table(const struct rtl8366rb_jam_tbl_entry *jam_table,
+-			       int jam_size, struct realtek_smi *smi,
+-			       bool write_dbg)
+-{
+-	u32 val;
+-	int ret;
+-	int i;
+-
+-	for (i = 0; i < jam_size; i++) {
+-		if ((jam_table[i].reg & 0xBE00) == 0xBE00) {
+-			ret = regmap_read(smi->map,
+-					  RTL8366RB_PHY_ACCESS_BUSY_REG,
+-					  &val);
+-			if (ret)
+-				return ret;
+-			if (!(val & RTL8366RB_PHY_INT_BUSY)) {
+-				ret = regmap_write(smi->map,
+-						RTL8366RB_PHY_ACCESS_CTRL_REG,
+-						RTL8366RB_PHY_CTRL_WRITE);
+-				if (ret)
+-					return ret;
+-			}
+-		}
+-		if (write_dbg)
+-			dev_dbg(smi->dev, "jam %04x into register %04x\n",
+-				jam_table[i].val,
+-				jam_table[i].reg);
+-		ret = regmap_write(smi->map,
+-				   jam_table[i].reg,
+-				   jam_table[i].val);
+-		if (ret)
+-			return ret;
+-	}
+-	return 0;
+-}
+-
+-static int rtl8366rb_setup(struct dsa_switch *ds)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	const struct rtl8366rb_jam_tbl_entry *jam_table;
+-	struct rtl8366rb *rb;
+-	u32 chip_ver = 0;
+-	u32 chip_id = 0;
+-	int jam_size;
+-	u32 val;
+-	int ret;
+-	int i;
+-
+-	rb = smi->chip_data;
+-
+-	ret = regmap_read(smi->map, RTL8366RB_CHIP_ID_REG, &chip_id);
+-	if (ret) {
+-		dev_err(smi->dev, "unable to read chip id\n");
+-		return ret;
+-	}
+-
+-	switch (chip_id) {
+-	case RTL8366RB_CHIP_ID_8366:
+-		break;
+-	default:
+-		dev_err(smi->dev, "unknown chip id (%04x)\n", chip_id);
+-		return -ENODEV;
+-	}
+-
+-	ret = regmap_read(smi->map, RTL8366RB_CHIP_VERSION_CTRL_REG,
+-			  &chip_ver);
+-	if (ret) {
+-		dev_err(smi->dev, "unable to read chip version\n");
+-		return ret;
+-	}
+-
+-	dev_info(smi->dev, "RTL%04x ver %u chip found\n",
+-		 chip_id, chip_ver & RTL8366RB_CHIP_VERSION_MASK);
+-
+-	/* Do the init dance using the right jam table */
+-	switch (chip_ver) {
+-	case 0:
+-		jam_table = rtl8366rb_init_jam_ver_0;
+-		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_0);
+-		break;
+-	case 1:
+-		jam_table = rtl8366rb_init_jam_ver_1;
+-		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_1);
+-		break;
+-	case 2:
+-		jam_table = rtl8366rb_init_jam_ver_2;
+-		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_2);
+-		break;
+-	default:
+-		jam_table = rtl8366rb_init_jam_ver_3;
+-		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_3);
+-		break;
+-	}
+-
+-	/* Special jam tables for special routers
+-	 * TODO: are these necessary? Maintainers, please test
+-	 * without them, using just the off-the-shelf tables.
+-	 */
+-	if (of_machine_is_compatible("belkin,f5d8235-v1")) {
+-		jam_table = rtl8366rb_init_jam_f5d8235;
+-		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_f5d8235);
+-	}
+-	if (of_machine_is_compatible("netgear,dgn3500") ||
+-	    of_machine_is_compatible("netgear,dgn3500b")) {
+-		jam_table = rtl8366rb_init_jam_dgn3500;
+-		jam_size = ARRAY_SIZE(rtl8366rb_init_jam_dgn3500);
+-	}
+-
+-	ret = rtl8366rb_jam_table(jam_table, jam_size, smi, true);
+-	if (ret)
+-		return ret;
+-
+-	/* Isolate all user ports so they can only send packets to itself and the CPU port */
+-	for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
+-		ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(i),
+-				   RTL8366RB_PORT_ISO_PORTS(BIT(RTL8366RB_PORT_NUM_CPU)) |
+-				   RTL8366RB_PORT_ISO_EN);
+-		if (ret)
+-			return ret;
+-	}
+-	/* CPU port can send packets to all ports */
+-	ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(RTL8366RB_PORT_NUM_CPU),
+-			   RTL8366RB_PORT_ISO_PORTS(dsa_user_ports(ds)) |
+-			   RTL8366RB_PORT_ISO_EN);
+-	if (ret)
+-		return ret;
+-
+-	/* Set up the "green ethernet" feature */
+-	ret = rtl8366rb_jam_table(rtl8366rb_green_jam,
+-				  ARRAY_SIZE(rtl8366rb_green_jam), smi, false);
+-	if (ret)
+-		return ret;
+-
+-	ret = regmap_write(smi->map,
+-			   RTL8366RB_GREEN_FEATURE_REG,
+-			   (chip_ver == 1) ? 0x0007 : 0x0003);
+-	if (ret)
+-		return ret;
+-
+-	/* Vendor driver sets 0x240 in registers 0xc and 0xd (undocumented) */
+-	ret = regmap_write(smi->map, 0x0c, 0x240);
+-	if (ret)
+-		return ret;
+-	ret = regmap_write(smi->map, 0x0d, 0x240);
+-	if (ret)
+-		return ret;
+-
+-	/* Set some random MAC address */
+-	ret = rtl8366rb_set_addr(smi);
+-	if (ret)
+-		return ret;
+-
+-	/* Enable CPU port with custom DSA tag 8899.
+-	 *
+-	 * If you set RTL8368RB_CPU_NO_TAG (bit 15) in this registers
+-	 * the custom tag is turned off.
+-	 */
+-	ret = regmap_update_bits(smi->map, RTL8368RB_CPU_CTRL_REG,
+-				 0xFFFF,
+-				 BIT(smi->cpu_port));
+-	if (ret)
+-		return ret;
+-
+-	/* Make sure we default-enable the fixed CPU port */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PECR,
+-				 BIT(smi->cpu_port),
+-				 0);
+-	if (ret)
+-		return ret;
+-
+-	/* Set maximum packet length to 1536 bytes */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_SGCR,
+-				 RTL8366RB_SGCR_MAX_LENGTH_MASK,
+-				 RTL8366RB_SGCR_MAX_LENGTH_1536);
+-	if (ret)
+-		return ret;
+-	for (i = 0; i < RTL8366RB_NUM_PORTS; i++)
+-		/* layer 2 size, see rtl8366rb_change_mtu() */
+-		rb->max_mtu[i] = 1532;
+-
+-	/* Disable learning for all ports */
+-	ret = regmap_write(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
+-			   RTL8366RB_PORT_ALL);
+-	if (ret)
+-		return ret;
+-
+-	/* Enable auto ageing for all ports */
+-	ret = regmap_write(smi->map, RTL8366RB_SECURITY_CTRL, 0);
+-	if (ret)
+-		return ret;
+-
+-	/* Port 4 setup: this enables Port 4, usually the WAN port,
+-	 * common PHY IO mode is apparently mode 0, and this is not what
+-	 * the port is initialized to. There is no explanation of the
+-	 * IO modes in the Realtek source code, if your WAN port is
+-	 * connected to something exotic such as fiber, then this might
+-	 * be worth experimenting with.
+-	 */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PMC0,
+-				 RTL8366RB_PMC0_P4_IOMODE_MASK,
+-				 0 << RTL8366RB_PMC0_P4_IOMODE_SHIFT);
+-	if (ret)
+-		return ret;
+-
+-	/* Accept all packets by default, we enable filtering on-demand */
+-	ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
+-			   0);
+-	if (ret)
+-		return ret;
+-	ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
+-			   0);
+-	if (ret)
+-		return ret;
+-
+-	/* Don't drop packets whose DA has not been learned */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_SSCR2,
+-				 RTL8366RB_SSCR2_DROP_UNKNOWN_DA, 0);
+-	if (ret)
+-		return ret;
+-
+-	/* Set blinking, TODO: make this configurable */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_LED_BLINKRATE_REG,
+-				 RTL8366RB_LED_BLINKRATE_MASK,
+-				 RTL8366RB_LED_BLINKRATE_56MS);
+-	if (ret)
+-		return ret;
+-
+-	/* Set up LED activity:
+-	 * Each port has 4 LEDs, we configure all ports to the same
+-	 * behaviour (no individual config) but we can set up each
+-	 * LED separately.
+-	 */
+-	if (smi->leds_disabled) {
+-		/* Turn everything off */
+-		regmap_update_bits(smi->map,
+-				   RTL8366RB_LED_0_1_CTRL_REG,
+-				   0x0FFF, 0);
+-		regmap_update_bits(smi->map,
+-				   RTL8366RB_LED_2_3_CTRL_REG,
+-				   0x0FFF, 0);
+-		regmap_update_bits(smi->map,
+-				   RTL8366RB_INTERRUPT_CONTROL_REG,
+-				   RTL8366RB_P4_RGMII_LED,
+-				   0);
+-		val = RTL8366RB_LED_OFF;
+-	} else {
+-		/* TODO: make this configurable per LED */
+-		val = RTL8366RB_LED_FORCE;
+-	}
+-	for (i = 0; i < 4; i++) {
+-		ret = regmap_update_bits(smi->map,
+-					 RTL8366RB_LED_CTRL_REG,
+-					 0xf << (i * 4),
+-					 val << (i * 4));
+-		if (ret)
+-			return ret;
+-	}
+-
+-	ret = rtl8366_reset_vlan(smi);
+-	if (ret)
+-		return ret;
+-
+-	ret = rtl8366rb_setup_cascaded_irq(smi);
+-	if (ret)
+-		dev_info(smi->dev, "no interrupt support\n");
+-
+-	ret = realtek_smi_setup_mdio(smi);
+-	if (ret) {
+-		dev_info(smi->dev, "could not set up MDIO bus\n");
+-		return -ENODEV;
+-	}
+-
+-	return 0;
+-}
+-
+-static enum dsa_tag_protocol rtl8366_get_tag_protocol(struct dsa_switch *ds,
+-						      int port,
+-						      enum dsa_tag_protocol mp)
+-{
+-	/* This switch uses the 4 byte protocol A Realtek DSA tag */
+-	return DSA_TAG_PROTO_RTL4_A;
+-}
+-
+-static void
+-rtl8366rb_mac_link_up(struct dsa_switch *ds, int port, unsigned int mode,
+-		      phy_interface_t interface, struct phy_device *phydev,
+-		      int speed, int duplex, bool tx_pause, bool rx_pause)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int ret;
+-
+-	if (port != smi->cpu_port)
+-		return;
+-
+-	dev_dbg(smi->dev, "MAC link up on CPU port (%d)\n", port);
+-
+-	/* Force the fixed CPU port into 1Gbit mode, no autonegotiation */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_MAC_FORCE_CTRL_REG,
+-				 BIT(port), BIT(port));
+-	if (ret) {
+-		dev_err(smi->dev, "failed to force 1Gbit on CPU port\n");
+-		return;
+-	}
+-
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PAACR2,
+-				 0xFF00U,
+-				 RTL8366RB_PAACR_CPU_PORT << 8);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to set PAACR on CPU port\n");
+-		return;
+-	}
+-
+-	/* Enable the CPU port */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+-				 0);
+-	if (ret) {
+-		dev_err(smi->dev, "failed to enable the CPU port\n");
+-		return;
+-	}
+-}
+-
+-static void
+-rtl8366rb_mac_link_down(struct dsa_switch *ds, int port, unsigned int mode,
+-			phy_interface_t interface)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int ret;
+-
+-	if (port != smi->cpu_port)
+-		return;
+-
+-	dev_dbg(smi->dev, "MAC link down on CPU port (%d)\n", port);
+-
+-	/* Disable the CPU port */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+-				 BIT(port));
+-	if (ret) {
+-		dev_err(smi->dev, "failed to disable the CPU port\n");
+-		return;
+-	}
+-}
+-
+-static void rb8366rb_set_port_led(struct realtek_smi *smi,
+-				  int port, bool enable)
+-{
+-	u16 val = enable ? 0x3f : 0;
+-	int ret;
+-
+-	if (smi->leds_disabled)
+-		return;
+-
+-	switch (port) {
+-	case 0:
+-		ret = regmap_update_bits(smi->map,
+-					 RTL8366RB_LED_0_1_CTRL_REG,
+-					 0x3F, val);
+-		break;
+-	case 1:
+-		ret = regmap_update_bits(smi->map,
+-					 RTL8366RB_LED_0_1_CTRL_REG,
+-					 0x3F << RTL8366RB_LED_1_OFFSET,
+-					 val << RTL8366RB_LED_1_OFFSET);
+-		break;
+-	case 2:
+-		ret = regmap_update_bits(smi->map,
+-					 RTL8366RB_LED_2_3_CTRL_REG,
+-					 0x3F, val);
+-		break;
+-	case 3:
+-		ret = regmap_update_bits(smi->map,
+-					 RTL8366RB_LED_2_3_CTRL_REG,
+-					 0x3F << RTL8366RB_LED_3_OFFSET,
+-					 val << RTL8366RB_LED_3_OFFSET);
+-		break;
+-	case 4:
+-		ret = regmap_update_bits(smi->map,
+-					 RTL8366RB_INTERRUPT_CONTROL_REG,
+-					 RTL8366RB_P4_RGMII_LED,
+-					 enable ? RTL8366RB_P4_RGMII_LED : 0);
+-		break;
+-	default:
+-		dev_err(smi->dev, "no LED for port %d\n", port);
+-		return;
+-	}
+-	if (ret)
+-		dev_err(smi->dev, "error updating LED on port %d\n", port);
+-}
+-
+-static int
+-rtl8366rb_port_enable(struct dsa_switch *ds, int port,
+-		      struct phy_device *phy)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int ret;
+-
+-	dev_dbg(smi->dev, "enable port %d\n", port);
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+-				 0);
+-	if (ret)
+-		return ret;
+-
+-	rb8366rb_set_port_led(smi, port, true);
+-	return 0;
+-}
+-
+-static void
+-rtl8366rb_port_disable(struct dsa_switch *ds, int port)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int ret;
+-
+-	dev_dbg(smi->dev, "disable port %d\n", port);
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+-				 BIT(port));
+-	if (ret)
+-		return;
+-
+-	rb8366rb_set_port_led(smi, port, false);
+-}
+-
+-static int
+-rtl8366rb_port_bridge_join(struct dsa_switch *ds, int port,
+-			   struct net_device *bridge)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	unsigned int port_bitmap = 0;
+-	int ret, i;
+-
+-	/* Loop over all other ports than the current one */
+-	for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
+-		/* Current port handled last */
+-		if (i == port)
+-			continue;
+-		/* Not on this bridge */
+-		if (dsa_to_port(ds, i)->bridge_dev != bridge)
+-			continue;
+-		/* Join this port to each other port on the bridge */
+-		ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
+-					 RTL8366RB_PORT_ISO_PORTS(BIT(port)),
+-					 RTL8366RB_PORT_ISO_PORTS(BIT(port)));
+-		if (ret)
+-			dev_err(smi->dev, "failed to join port %d\n", port);
+-
+-		port_bitmap |= BIT(i);
+-	}
+-
+-	/* Set the bits for the ports we can access */
+-	return regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
+-				  RTL8366RB_PORT_ISO_PORTS(port_bitmap),
+-				  RTL8366RB_PORT_ISO_PORTS(port_bitmap));
+-}
+-
+-static void
+-rtl8366rb_port_bridge_leave(struct dsa_switch *ds, int port,
+-			    struct net_device *bridge)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	unsigned int port_bitmap = 0;
+-	int ret, i;
+-
+-	/* Loop over all other ports than this one */
+-	for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
+-		/* Current port handled last */
+-		if (i == port)
+-			continue;
+-		/* Not on this bridge */
+-		if (dsa_to_port(ds, i)->bridge_dev != bridge)
+-			continue;
+-		/* Remove this port from any other port on the bridge */
+-		ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
+-					 RTL8366RB_PORT_ISO_PORTS(BIT(port)), 0);
+-		if (ret)
+-			dev_err(smi->dev, "failed to leave port %d\n", port);
+-
+-		port_bitmap |= BIT(i);
+-	}
+-
+-	/* Clear the bits for the ports we can not access, leave ourselves */
+-	regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
+-			   RTL8366RB_PORT_ISO_PORTS(port_bitmap), 0);
+-}
+-
+-/**
+- * rtl8366rb_drop_untagged() - make the switch drop untagged and C-tagged frames
+- * @smi: SMI state container
+- * @port: the port to drop untagged and C-tagged frames on
+- * @drop: whether to drop or pass untagged and C-tagged frames
+- */
+-static int rtl8366rb_drop_untagged(struct realtek_smi *smi, int port, bool drop)
+-{
+-	return regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
+-				  RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port),
+-				  drop ? RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port) : 0);
+-}
+-
+-static int rtl8366rb_vlan_filtering(struct dsa_switch *ds, int port,
+-				    bool vlan_filtering,
+-				    struct netlink_ext_ack *extack)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8366rb *rb;
+-	int ret;
+-
+-	rb = smi->chip_data;
+-
+-	dev_dbg(smi->dev, "port %d: %s VLAN filtering\n", port,
+-		vlan_filtering ? "enable" : "disable");
+-
+-	/* If the port is not in the member set, the frame will be dropped */
+-	ret = regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
+-				 BIT(port), vlan_filtering ? BIT(port) : 0);
+-	if (ret)
+-		return ret;
+-
+-	/* If VLAN filtering is enabled and PVID is also enabled, we must
+-	 * not drop any untagged or C-tagged frames. If we turn off VLAN
+-	 * filtering on a port, we need to accept any frames.
+-	 */
+-	if (vlan_filtering)
+-		ret = rtl8366rb_drop_untagged(smi, port, !rb->pvid_enabled[port]);
+-	else
+-		ret = rtl8366rb_drop_untagged(smi, port, false);
+-
+-	return ret;
+-}
+-
+-static int
+-rtl8366rb_port_pre_bridge_flags(struct dsa_switch *ds, int port,
+-				struct switchdev_brport_flags flags,
+-				struct netlink_ext_ack *extack)
+-{
+-	/* We support enabling/disabling learning */
+-	if (flags.mask & ~(BR_LEARNING))
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-static int
+-rtl8366rb_port_bridge_flags(struct dsa_switch *ds, int port,
+-			    struct switchdev_brport_flags flags,
+-			    struct netlink_ext_ack *extack)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	int ret;
+-
+-	if (flags.mask & BR_LEARNING) {
+-		ret = regmap_update_bits(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
+-					 BIT(port),
+-					 (flags.val & BR_LEARNING) ? 0 : BIT(port));
+-		if (ret)
+-			return ret;
+-	}
+-
+-	return 0;
+-}
+-
+-static void
+-rtl8366rb_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	u32 val;
+-	int i;
+-
+-	switch (state) {
+-	case BR_STATE_DISABLED:
+-		val = RTL8366RB_STP_STATE_DISABLED;
+-		break;
+-	case BR_STATE_BLOCKING:
+-	case BR_STATE_LISTENING:
+-		val = RTL8366RB_STP_STATE_BLOCKING;
+-		break;
+-	case BR_STATE_LEARNING:
+-		val = RTL8366RB_STP_STATE_LEARNING;
+-		break;
+-	case BR_STATE_FORWARDING:
+-		val = RTL8366RB_STP_STATE_FORWARDING;
+-		break;
+-	default:
+-		dev_err(smi->dev, "unknown bridge state requested\n");
+-		return;
+-	}
+-
+-	/* Set the same status for the port on all the FIDs */
+-	for (i = 0; i < RTL8366RB_NUM_FIDS; i++) {
+-		regmap_update_bits(smi->map, RTL8366RB_STP_STATE_BASE + i,
+-				   RTL8366RB_STP_STATE_MASK(port),
+-				   RTL8366RB_STP_STATE(port, val));
+-	}
+-}
+-
+-static void
+-rtl8366rb_port_fast_age(struct dsa_switch *ds, int port)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-
+-	/* This will age out any learned L2 entries */
+-	regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
+-			   BIT(port), BIT(port));
+-	/* Restore the normal state of things */
+-	regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
+-			   BIT(port), 0);
+-}
+-
+-static int rtl8366rb_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+-{
+-	struct realtek_smi *smi = ds->priv;
+-	struct rtl8366rb *rb;
+-	unsigned int max_mtu;
+-	u32 len;
+-	int i;
+-
+-	/* Cache the per-port MTU setting */
+-	rb = smi->chip_data;
+-	rb->max_mtu[port] = new_mtu;
+-
+-	/* Roof out the MTU for the entire switch to the greatest
+-	 * common denominator: the biggest set for any one port will
+-	 * be the biggest MTU for the switch.
+-	 *
+-	 * The first setting, 1522 bytes, is max IP packet 1500 bytes,
+-	 * plus ethernet header, 1518 bytes, plus CPU tag, 4 bytes.
+-	 * This function should consider the parameter an SDU, so the
+-	 * MTU passed for this setting is 1518 bytes. The same logic
+-	 * of subtracting the DSA tag of 4 bytes apply to the other
+-	 * settings.
+-	 */
+-	max_mtu = 1518;
+-	for (i = 0; i < RTL8366RB_NUM_PORTS; i++) {
+-		if (rb->max_mtu[i] > max_mtu)
+-			max_mtu = rb->max_mtu[i];
+-	}
+-	if (max_mtu <= 1518)
+-		len = RTL8366RB_SGCR_MAX_LENGTH_1522;
+-	else if (max_mtu > 1518 && max_mtu <= 1532)
+-		len = RTL8366RB_SGCR_MAX_LENGTH_1536;
+-	else if (max_mtu > 1532 && max_mtu <= 1548)
+-		len = RTL8366RB_SGCR_MAX_LENGTH_1552;
+-	else
+-		len = RTL8366RB_SGCR_MAX_LENGTH_16000;
+-
+-	return regmap_update_bits(smi->map, RTL8366RB_SGCR,
+-				  RTL8366RB_SGCR_MAX_LENGTH_MASK,
+-				  len);
+-}
+-
+-static int rtl8366rb_max_mtu(struct dsa_switch *ds, int port)
+-{
+-	/* The max MTU is 16000 bytes, so we subtract the CPU tag
+-	 * and the max presented to the system is 15996 bytes.
+-	 */
+-	return 15996;
+-}
+-
+-static int rtl8366rb_get_vlan_4k(struct realtek_smi *smi, u32 vid,
+-				 struct rtl8366_vlan_4k *vlan4k)
+-{
+-	u32 data[3];
+-	int ret;
+-	int i;
+-
+-	memset(vlan4k, '\0', sizeof(struct rtl8366_vlan_4k));
+-
+-	if (vid >= RTL8366RB_NUM_VIDS)
+-		return -EINVAL;
+-
+-	/* write VID */
+-	ret = regmap_write(smi->map, RTL8366RB_VLAN_TABLE_WRITE_BASE,
+-			   vid & RTL8366RB_VLAN_VID_MASK);
+-	if (ret)
+-		return ret;
+-
+-	/* write table access control word */
+-	ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
+-			   RTL8366RB_TABLE_VLAN_READ_CTRL);
+-	if (ret)
+-		return ret;
+-
+-	for (i = 0; i < 3; i++) {
+-		ret = regmap_read(smi->map,
+-				  RTL8366RB_VLAN_TABLE_READ_BASE + i,
+-				  &data[i]);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	vlan4k->vid = vid;
+-	vlan4k->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
+-			RTL8366RB_VLAN_UNTAG_MASK;
+-	vlan4k->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
+-	vlan4k->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
+-
+-	return 0;
+-}
+-
+-static int rtl8366rb_set_vlan_4k(struct realtek_smi *smi,
+-				 const struct rtl8366_vlan_4k *vlan4k)
+-{
+-	u32 data[3];
+-	int ret;
+-	int i;
+-
+-	if (vlan4k->vid >= RTL8366RB_NUM_VIDS ||
+-	    vlan4k->member > RTL8366RB_VLAN_MEMBER_MASK ||
+-	    vlan4k->untag > RTL8366RB_VLAN_UNTAG_MASK ||
+-	    vlan4k->fid > RTL8366RB_FIDMAX)
+-		return -EINVAL;
+-
+-	data[0] = vlan4k->vid & RTL8366RB_VLAN_VID_MASK;
+-	data[1] = (vlan4k->member & RTL8366RB_VLAN_MEMBER_MASK) |
+-		  ((vlan4k->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
+-			RTL8366RB_VLAN_UNTAG_SHIFT);
+-	data[2] = vlan4k->fid & RTL8366RB_VLAN_FID_MASK;
+-
+-	for (i = 0; i < 3; i++) {
+-		ret = regmap_write(smi->map,
+-				   RTL8366RB_VLAN_TABLE_WRITE_BASE + i,
+-				   data[i]);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	/* write table access control word */
+-	ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
+-			   RTL8366RB_TABLE_VLAN_WRITE_CTRL);
+-
+-	return ret;
+-}
+-
+-static int rtl8366rb_get_vlan_mc(struct realtek_smi *smi, u32 index,
+-				 struct rtl8366_vlan_mc *vlanmc)
+-{
+-	u32 data[3];
+-	int ret;
+-	int i;
+-
+-	memset(vlanmc, '\0', sizeof(struct rtl8366_vlan_mc));
+-
+-	if (index >= RTL8366RB_NUM_VLANS)
+-		return -EINVAL;
+-
+-	for (i = 0; i < 3; i++) {
+-		ret = regmap_read(smi->map,
+-				  RTL8366RB_VLAN_MC_BASE(index) + i,
+-				  &data[i]);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	vlanmc->vid = data[0] & RTL8366RB_VLAN_VID_MASK;
+-	vlanmc->priority = (data[0] >> RTL8366RB_VLAN_PRIORITY_SHIFT) &
+-		RTL8366RB_VLAN_PRIORITY_MASK;
+-	vlanmc->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
+-		RTL8366RB_VLAN_UNTAG_MASK;
+-	vlanmc->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
+-	vlanmc->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
+-
+-	return 0;
+-}
+-
+-static int rtl8366rb_set_vlan_mc(struct realtek_smi *smi, u32 index,
+-				 const struct rtl8366_vlan_mc *vlanmc)
+-{
+-	u32 data[3];
+-	int ret;
+-	int i;
+-
+-	if (index >= RTL8366RB_NUM_VLANS ||
+-	    vlanmc->vid >= RTL8366RB_NUM_VIDS ||
+-	    vlanmc->priority > RTL8366RB_PRIORITYMAX ||
+-	    vlanmc->member > RTL8366RB_VLAN_MEMBER_MASK ||
+-	    vlanmc->untag > RTL8366RB_VLAN_UNTAG_MASK ||
+-	    vlanmc->fid > RTL8366RB_FIDMAX)
+-		return -EINVAL;
+-
+-	data[0] = (vlanmc->vid & RTL8366RB_VLAN_VID_MASK) |
+-		  ((vlanmc->priority & RTL8366RB_VLAN_PRIORITY_MASK) <<
+-			RTL8366RB_VLAN_PRIORITY_SHIFT);
+-	data[1] = (vlanmc->member & RTL8366RB_VLAN_MEMBER_MASK) |
+-		  ((vlanmc->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
+-			RTL8366RB_VLAN_UNTAG_SHIFT);
+-	data[2] = vlanmc->fid & RTL8366RB_VLAN_FID_MASK;
+-
+-	for (i = 0; i < 3; i++) {
+-		ret = regmap_write(smi->map,
+-				   RTL8366RB_VLAN_MC_BASE(index) + i,
+-				   data[i]);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	return 0;
+-}
+-
+-static int rtl8366rb_get_mc_index(struct realtek_smi *smi, int port, int *val)
+-{
+-	u32 data;
+-	int ret;
+-
+-	if (port >= smi->num_ports)
+-		return -EINVAL;
+-
+-	ret = regmap_read(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
+-			  &data);
+-	if (ret)
+-		return ret;
+-
+-	*val = (data >> RTL8366RB_PORT_VLAN_CTRL_SHIFT(port)) &
+-		RTL8366RB_PORT_VLAN_CTRL_MASK;
+-
+-	return 0;
+-}
+-
+-static int rtl8366rb_set_mc_index(struct realtek_smi *smi, int port, int index)
+-{
+-	struct rtl8366rb *rb;
+-	bool pvid_enabled;
+-	int ret;
+-
+-	rb = smi->chip_data;
+-	pvid_enabled = !!index;
+-
+-	if (port >= smi->num_ports || index >= RTL8366RB_NUM_VLANS)
+-		return -EINVAL;
+-
+-	ret = regmap_update_bits(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
+-				RTL8366RB_PORT_VLAN_CTRL_MASK <<
+-					RTL8366RB_PORT_VLAN_CTRL_SHIFT(port),
+-				(index & RTL8366RB_PORT_VLAN_CTRL_MASK) <<
+-					RTL8366RB_PORT_VLAN_CTRL_SHIFT(port));
+-	if (ret)
+-		return ret;
+-
+-	rb->pvid_enabled[port] = pvid_enabled;
+-
+-	/* If VLAN filtering is enabled and PVID is also enabled, we must
+-	 * not drop any untagged or C-tagged frames. Make sure to update the
+-	 * filtering setting.
+-	 */
+-	if (dsa_port_is_vlan_filtering(dsa_to_port(smi->ds, port)))
+-		ret = rtl8366rb_drop_untagged(smi, port, !pvid_enabled);
+-
+-	return ret;
+-}
+-
+-static bool rtl8366rb_is_vlan_valid(struct realtek_smi *smi, unsigned int vlan)
+-{
+-	unsigned int max = RTL8366RB_NUM_VLANS - 1;
+-
+-	if (smi->vlan4k_enabled)
+-		max = RTL8366RB_NUM_VIDS - 1;
+-
+-	if (vlan > max)
+-		return false;
+-
+-	return true;
+-}
+-
+-static int rtl8366rb_enable_vlan(struct realtek_smi *smi, bool enable)
+-{
+-	dev_dbg(smi->dev, "%s VLAN\n", enable ? "enable" : "disable");
+-	return regmap_update_bits(smi->map,
+-				  RTL8366RB_SGCR, RTL8366RB_SGCR_EN_VLAN,
+-				  enable ? RTL8366RB_SGCR_EN_VLAN : 0);
+-}
+-
+-static int rtl8366rb_enable_vlan4k(struct realtek_smi *smi, bool enable)
+-{
+-	dev_dbg(smi->dev, "%s VLAN 4k\n", enable ? "enable" : "disable");
+-	return regmap_update_bits(smi->map, RTL8366RB_SGCR,
+-				  RTL8366RB_SGCR_EN_VLAN_4KTB,
+-				  enable ? RTL8366RB_SGCR_EN_VLAN_4KTB : 0);
+-}
+-
+-static int rtl8366rb_phy_read(struct realtek_smi *smi, int phy, int regnum)
+-{
+-	u32 val;
+-	u32 reg;
+-	int ret;
+-
+-	if (phy > RTL8366RB_PHY_NO_MAX)
+-		return -EINVAL;
+-
+-	ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
+-			   RTL8366RB_PHY_CTRL_READ);
+-	if (ret)
+-		return ret;
+-
+-	reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
+-
+-	ret = regmap_write(smi->map, reg, 0);
+-	if (ret) {
+-		dev_err(smi->dev,
+-			"failed to write PHY%d reg %04x @ %04x, ret %d\n",
+-			phy, regnum, reg, ret);
+-		return ret;
+-	}
+-
+-	ret = regmap_read(smi->map, RTL8366RB_PHY_ACCESS_DATA_REG, &val);
+-	if (ret)
+-		return ret;
+-
+-	dev_dbg(smi->dev, "read PHY%d register 0x%04x @ %08x, val <- %04x\n",
+-		phy, regnum, reg, val);
+-
+-	return val;
+-}
+-
+-static int rtl8366rb_phy_write(struct realtek_smi *smi, int phy, int regnum,
+-			       u16 val)
+-{
+-	u32 reg;
+-	int ret;
+-
+-	if (phy > RTL8366RB_PHY_NO_MAX)
+-		return -EINVAL;
+-
+-	ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
+-			   RTL8366RB_PHY_CTRL_WRITE);
+-	if (ret)
+-		return ret;
+-
+-	reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
+-
+-	dev_dbg(smi->dev, "write PHY%d register 0x%04x @ %04x, val -> %04x\n",
+-		phy, regnum, reg, val);
+-
+-	ret = regmap_write(smi->map, reg, val);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static int rtl8366rb_reset_chip(struct realtek_smi *smi)
+-{
+-	int timeout = 10;
+-	u32 val;
+-	int ret;
+-
+-	realtek_smi_write_reg_noack(smi, RTL8366RB_RESET_CTRL_REG,
+-				    RTL8366RB_CHIP_CTRL_RESET_HW);
+-	do {
+-		usleep_range(20000, 25000);
+-		ret = regmap_read(smi->map, RTL8366RB_RESET_CTRL_REG, &val);
+-		if (ret)
+-			return ret;
+-
+-		if (!(val & RTL8366RB_CHIP_CTRL_RESET_HW))
+-			break;
+-	} while (--timeout);
+-
+-	if (!timeout) {
+-		dev_err(smi->dev, "timeout waiting for the switch to reset\n");
+-		return -EIO;
+-	}
+-
+-	return 0;
+-}
+-
+-static int rtl8366rb_detect(struct realtek_smi *smi)
+-{
+-	struct device *dev = smi->dev;
+-	int ret;
+-	u32 val;
+-
+-	/* Detect device */
+-	ret = regmap_read(smi->map, 0x5c, &val);
+-	if (ret) {
+-		dev_err(dev, "can't get chip ID (%d)\n", ret);
+-		return ret;
+-	}
+-
+-	switch (val) {
+-	case 0x6027:
+-		dev_info(dev, "found an RTL8366S switch\n");
+-		dev_err(dev, "this switch is not yet supported, submit patches!\n");
+-		return -ENODEV;
+-	case 0x5937:
+-		dev_info(dev, "found an RTL8366RB switch\n");
+-		smi->cpu_port = RTL8366RB_PORT_NUM_CPU;
+-		smi->num_ports = RTL8366RB_NUM_PORTS;
+-		smi->num_vlan_mc = RTL8366RB_NUM_VLANS;
+-		smi->mib_counters = rtl8366rb_mib_counters;
+-		smi->num_mib_counters = ARRAY_SIZE(rtl8366rb_mib_counters);
+-		break;
+-	default:
+-		dev_info(dev, "found an Unknown Realtek switch (id=0x%04x)\n",
+-			 val);
+-		break;
+-	}
+-
+-	ret = rtl8366rb_reset_chip(smi);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-
+-static const struct dsa_switch_ops rtl8366rb_switch_ops = {
+-	.get_tag_protocol = rtl8366_get_tag_protocol,
+-	.setup = rtl8366rb_setup,
+-	.phylink_mac_link_up = rtl8366rb_mac_link_up,
+-	.phylink_mac_link_down = rtl8366rb_mac_link_down,
+-	.get_strings = rtl8366_get_strings,
+-	.get_ethtool_stats = rtl8366_get_ethtool_stats,
+-	.get_sset_count = rtl8366_get_sset_count,
+-	.port_bridge_join = rtl8366rb_port_bridge_join,
+-	.port_bridge_leave = rtl8366rb_port_bridge_leave,
+-	.port_vlan_filtering = rtl8366rb_vlan_filtering,
+-	.port_vlan_add = rtl8366_vlan_add,
+-	.port_vlan_del = rtl8366_vlan_del,
+-	.port_enable = rtl8366rb_port_enable,
+-	.port_disable = rtl8366rb_port_disable,
+-	.port_pre_bridge_flags = rtl8366rb_port_pre_bridge_flags,
+-	.port_bridge_flags = rtl8366rb_port_bridge_flags,
+-	.port_stp_state_set = rtl8366rb_port_stp_state_set,
+-	.port_fast_age = rtl8366rb_port_fast_age,
+-	.port_change_mtu = rtl8366rb_change_mtu,
+-	.port_max_mtu = rtl8366rb_max_mtu,
+-};
+-
+-static const struct realtek_smi_ops rtl8366rb_smi_ops = {
+-	.detect		= rtl8366rb_detect,
+-	.get_vlan_mc	= rtl8366rb_get_vlan_mc,
+-	.set_vlan_mc	= rtl8366rb_set_vlan_mc,
+-	.get_vlan_4k	= rtl8366rb_get_vlan_4k,
+-	.set_vlan_4k	= rtl8366rb_set_vlan_4k,
+-	.get_mc_index	= rtl8366rb_get_mc_index,
+-	.set_mc_index	= rtl8366rb_set_mc_index,
+-	.get_mib_counter = rtl8366rb_get_mib_counter,
+-	.is_vlan_valid	= rtl8366rb_is_vlan_valid,
+-	.enable_vlan	= rtl8366rb_enable_vlan,
+-	.enable_vlan4k	= rtl8366rb_enable_vlan4k,
+-	.phy_read	= rtl8366rb_phy_read,
+-	.phy_write	= rtl8366rb_phy_write,
+-};
+-
+-const struct realtek_smi_variant rtl8366rb_variant = {
+-	.ds_ops = &rtl8366rb_switch_ops,
+-	.ops = &rtl8366rb_smi_ops,
+-	.clk_delay = 10,
+-	.cmd_read = 0xa9,
+-	.cmd_write = 0xa8,
+-	.chip_data_sz = sizeof(struct rtl8366rb),
+-};
+-EXPORT_SYMBOL_GPL(rtl8366rb_variant);
+diff --git a/drivers/net/ethernet/8390/mcf8390.c b/drivers/net/ethernet/8390/mcf8390.c
+index e320cccba61a6..90cd7bdf06f5d 100644
+--- a/drivers/net/ethernet/8390/mcf8390.c
++++ b/drivers/net/ethernet/8390/mcf8390.c
+@@ -405,12 +405,12 @@ static int mcf8390_init(struct net_device *dev)
+ static int mcf8390_probe(struct platform_device *pdev)
+ {
+ 	struct net_device *dev;
+-	struct resource *mem, *irq;
++	struct resource *mem;
+ 	resource_size_t msize;
+-	int ret;
++	int ret, irq;
+ 
+-	irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (irq == NULL) {
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
+ 		dev_err(&pdev->dev, "no IRQ specified?\n");
+ 		return -ENXIO;
+ 	}
+@@ -433,7 +433,7 @@ static int mcf8390_probe(struct platform_device *pdev)
+ 	SET_NETDEV_DEV(dev, &pdev->dev);
+ 	platform_set_drvdata(pdev, dev);
+ 
+-	dev->irq = irq->start;
++	dev->irq = irq;
+ 	dev->base_addr = mem->start;
+ 
+ 	ret = mcf8390_init(dev);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index 8388be119f9a0..4eca32a29c9b0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -329,7 +329,7 @@ static int bnxt_ptp_enable(struct ptp_clock_info *ptp_info,
+ 	struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
+ 						ptp_info);
+ 	struct bnxt *bp = ptp->bp;
+-	u8 pin_id;
++	int pin_id;
+ 	int rc;
+ 
+ 	switch (rq->type) {
+@@ -337,6 +337,8 @@ static int bnxt_ptp_enable(struct ptp_clock_info *ptp_info,
+ 		/* Configure an External PPS IN */
+ 		pin_id = ptp_find_pin(ptp->ptp_clock, PTP_PF_EXTTS,
+ 				      rq->extts.index);
++		if (!TSIO_PIN_VALID(pin_id))
++			return -EOPNOTSUPP;
+ 		if (!on)
+ 			break;
+ 		rc = bnxt_ptp_cfg_pin(bp, pin_id, BNXT_PPS_PIN_PPS_IN);
+@@ -350,6 +352,8 @@ static int bnxt_ptp_enable(struct ptp_clock_info *ptp_info,
+ 		/* Configure a Periodic PPS OUT */
+ 		pin_id = ptp_find_pin(ptp->ptp_clock, PTP_PF_PEROUT,
+ 				      rq->perout.index);
++		if (!TSIO_PIN_VALID(pin_id))
++			return -EOPNOTSUPP;
+ 		if (!on)
+ 			break;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index 7c528e1f8713e..8205140db829e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -31,7 +31,7 @@ struct pps_pin {
+ 	u8 state;
+ };
+ 
+-#define TSIO_PIN_VALID(pin) ((pin) < (BNXT_MAX_TSIO_PINS))
++#define TSIO_PIN_VALID(pin) ((pin) >= 0 && (pin) < (BNXT_MAX_TSIO_PINS))
+ 
+ #define EVENT_DATA2_PPS_EVENT_TYPE(data2)				\
+ 	((data2) & ASYNC_EVENT_CMPL_PPS_TIMESTAMP_EVENT_DATA2_EVENT_TYPE)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 2da804f84b480..bd5998012a876 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -76,7 +76,7 @@ static inline void bcmgenet_writel(u32 value, void __iomem *offset)
+ 	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		__raw_writel(value, offset);
+ 	else
+-		writel_relaxed(value, offset);
++		writel(value, offset);
+ }
+ 
+ static inline u32 bcmgenet_readl(void __iomem *offset)
+@@ -84,7 +84,7 @@ static inline u32 bcmgenet_readl(void __iomem *offset)
+ 	if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ 		return __raw_readl(offset);
+ 	else
+-		return readl_relaxed(offset);
++		return readl(offset);
+ }
+ 
+ static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index 910b9f722504a..d62c188c87480 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -672,7 +672,10 @@ static int enetc_get_ts_info(struct net_device *ndev,
+ #ifdef CONFIG_FSL_ENETC_PTP_CLOCK
+ 	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ 				SOF_TIMESTAMPING_RX_HARDWARE |
+-				SOF_TIMESTAMPING_RAW_HARDWARE;
++				SOF_TIMESTAMPING_RAW_HARDWARE |
++				SOF_TIMESTAMPING_TX_SOFTWARE |
++				SOF_TIMESTAMPING_RX_SOFTWARE |
++				SOF_TIMESTAMPING_SOFTWARE;
+ 
+ 	info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ 			 (1 << HWTSTAMP_TX_ON) |
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 0536d2c76fbc4..d779dde522c86 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -45,6 +45,7 @@ void enetc_sched_speed_set(struct enetc_ndev_priv *priv, int speed)
+ 		      | pspeed);
+ }
+ 
++#define ENETC_QOS_ALIGN	64
+ static int enetc_setup_taprio(struct net_device *ndev,
+ 			      struct tc_taprio_qopt_offload *admin_conf)
+ {
+@@ -52,10 +53,11 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ 	struct enetc_cbd cbd = {.cmd = 0};
+ 	struct tgs_gcl_conf *gcl_config;
+ 	struct tgs_gcl_data *gcl_data;
++	dma_addr_t dma, dma_align;
+ 	struct gce *gce;
+-	dma_addr_t dma;
+ 	u16 data_size;
+ 	u16 gcl_len;
++	void *tmp;
+ 	u32 tge;
+ 	int err;
+ 	int i;
+@@ -82,9 +84,16 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ 	gcl_config = &cbd.gcl_conf;
+ 
+ 	data_size = struct_size(gcl_data, entry, gcl_len);
+-	gcl_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+-	if (!gcl_data)
++	tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++				 data_size + ENETC_QOS_ALIGN,
++				 &dma, GFP_KERNEL);
++	if (!tmp) {
++		dev_err(&priv->si->pdev->dev,
++			"DMA mapping of taprio gate list failed!\n");
+ 		return -ENOMEM;
++	}
++	dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++	gcl_data = (struct tgs_gcl_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
+ 
+ 	gce = (struct gce *)(gcl_data + 1);
+ 
+@@ -110,16 +119,8 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ 	cbd.length = cpu_to_le16(data_size);
+ 	cbd.status_flags = 0;
+ 
+-	dma = dma_map_single(&priv->si->pdev->dev, gcl_data,
+-			     data_size, DMA_TO_DEVICE);
+-	if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+-		netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+-		kfree(gcl_data);
+-		return -ENOMEM;
+-	}
+-
+-	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+-	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+ 	cbd.cls = BDCR_CMD_PORT_GCL;
+ 	cbd.status_flags = 0;
+ 
+@@ -132,8 +133,8 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ 			 ENETC_QBV_PTGCR_OFFSET,
+ 			 tge & (~ENETC_QBV_TGE));
+ 
+-	dma_unmap_single(&priv->si->pdev->dev, dma, data_size, DMA_TO_DEVICE);
+-	kfree(gcl_data);
++	dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++			  tmp, dma);
+ 
+ 	return err;
+ }
+@@ -463,8 +464,9 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 	struct enetc_cbd cbd = {.cmd = 0};
+ 	struct streamid_data *si_data;
+ 	struct streamid_conf *si_conf;
++	dma_addr_t dma, dma_align;
+ 	u16 data_size;
+-	dma_addr_t dma;
++	void *tmp;
+ 	int port;
+ 	int err;
+ 
+@@ -485,21 +487,20 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 	cbd.status_flags = 0;
+ 
+ 	data_size = sizeof(struct streamid_data);
+-	si_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+-	if (!si_data)
++	tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++				 data_size + ENETC_QOS_ALIGN,
++				 &dma, GFP_KERNEL);
++	if (!tmp) {
++		dev_err(&priv->si->pdev->dev,
++			"DMA mapping of stream identify failed!\n");
+ 		return -ENOMEM;
+-	cbd.length = cpu_to_le16(data_size);
+-
+-	dma = dma_map_single(&priv->si->pdev->dev, si_data,
+-			     data_size, DMA_FROM_DEVICE);
+-	if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+-		netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+-		err = -ENOMEM;
+-		goto out;
+ 	}
++	dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++	si_data = (struct streamid_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
+ 
+-	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+-	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++	cbd.length = cpu_to_le16(data_size);
++	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+ 	eth_broadcast_addr(si_data->dmac);
+ 	si_data->vid_vidm_tg = (ENETC_CBDR_SID_VID_MASK
+ 			       + ((0x3 << 14) | ENETC_CBDR_SID_VIDM));
+@@ -539,8 +540,8 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 
+ 	cbd.length = cpu_to_le16(data_size);
+ 
+-	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+-	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+ 
+ 	/* VIDM default to be 1.
+ 	 * VID Match. If set (b1) then the VID must match, otherwise
+@@ -561,10 +562,8 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 
+ 	err = enetc_send_cmd(priv->si, &cbd);
+ out:
+-	if (!dma_mapping_error(&priv->si->pdev->dev, dma))
+-		dma_unmap_single(&priv->si->pdev->dev, dma, data_size, DMA_FROM_DEVICE);
+-
+-	kfree(si_data);
++	dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++			  tmp, dma);
+ 
+ 	return err;
+ }
+@@ -633,8 +632,9 @@ static int enetc_streamcounter_hw_get(struct enetc_ndev_priv *priv,
+ {
+ 	struct enetc_cbd cbd = { .cmd = 2 };
+ 	struct sfi_counter_data *data_buf;
+-	dma_addr_t dma;
++	dma_addr_t dma, dma_align;
+ 	u16 data_size;
++	void *tmp;
+ 	int err;
+ 
+ 	cbd.index = cpu_to_le16((u16)index);
+@@ -643,19 +643,19 @@ static int enetc_streamcounter_hw_get(struct enetc_ndev_priv *priv,
+ 	cbd.status_flags = 0;
+ 
+ 	data_size = sizeof(struct sfi_counter_data);
+-	data_buf = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+-	if (!data_buf)
++	tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++				 data_size + ENETC_QOS_ALIGN,
++				 &dma, GFP_KERNEL);
++	if (!tmp) {
++		dev_err(&priv->si->pdev->dev,
++			"DMA mapping of stream counter failed!\n");
+ 		return -ENOMEM;
+-
+-	dma = dma_map_single(&priv->si->pdev->dev, data_buf,
+-			     data_size, DMA_FROM_DEVICE);
+-	if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+-		netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+-		err = -ENOMEM;
+-		goto exit;
+ 	}
+-	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+-	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++	dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++	data_buf = (struct sfi_counter_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
++
++	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+ 
+ 	cbd.length = cpu_to_le16(data_size);
+ 
+@@ -684,7 +684,9 @@ static int enetc_streamcounter_hw_get(struct enetc_ndev_priv *priv,
+ 				data_buf->flow_meter_dropl;
+ 
+ exit:
+-	kfree(data_buf);
++	dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++			  tmp, dma);
++
+ 	return err;
+ }
+ 
+@@ -723,9 +725,10 @@ static int enetc_streamgate_hw_set(struct enetc_ndev_priv *priv,
+ 	struct sgcl_conf *sgcl_config;
+ 	struct sgcl_data *sgcl_data;
+ 	struct sgce *sgce;
+-	dma_addr_t dma;
++	dma_addr_t dma, dma_align;
+ 	u16 data_size;
+ 	int err, i;
++	void *tmp;
+ 	u64 now;
+ 
+ 	cbd.index = cpu_to_le16(sgi->index);
+@@ -772,24 +775,20 @@ static int enetc_streamgate_hw_set(struct enetc_ndev_priv *priv,
+ 	sgcl_config->acl_len = (sgi->num_entries - 1) & 0x3;
+ 
+ 	data_size = struct_size(sgcl_data, sgcl, sgi->num_entries);
+-
+-	sgcl_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+-	if (!sgcl_data)
+-		return -ENOMEM;
+-
+-	cbd.length = cpu_to_le16(data_size);
+-
+-	dma = dma_map_single(&priv->si->pdev->dev,
+-			     sgcl_data, data_size,
+-			     DMA_FROM_DEVICE);
+-	if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+-		netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+-		kfree(sgcl_data);
++	tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++				 data_size + ENETC_QOS_ALIGN,
++				 &dma, GFP_KERNEL);
++	if (!tmp) {
++		dev_err(&priv->si->pdev->dev,
++			"DMA mapping of stream counter failed!\n");
+ 		return -ENOMEM;
+ 	}
++	dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++	sgcl_data = (struct sgcl_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
+ 
+-	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+-	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++	cbd.length = cpu_to_le16(data_size);
++	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++	cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+ 
+ 	sgce = &sgcl_data->sgcl[0];
+ 
+@@ -844,7 +843,8 @@ static int enetc_streamgate_hw_set(struct enetc_ndev_priv *priv,
+ 	err = enetc_send_cmd(priv->si, &cbd);
+ 
+ exit:
+-	kfree(sgcl_data);
++	dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++			  tmp, dma);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 63f5abcc6bf41..8184a954f6481 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -536,6 +536,8 @@ struct hnae3_ae_dev {
+  *   Get 1588 rx hwstamp
+  * get_ts_info
+  *   Get phc info
++ * clean_vf_config
++ *   Clean residual vf info after disable sriov
+  */
+ struct hnae3_ae_ops {
+ 	int (*init_ae_dev)(struct hnae3_ae_dev *ae_dev);
+@@ -729,6 +731,7 @@ struct hnae3_ae_ops {
+ 			   struct ethtool_ts_info *info);
+ 	int (*get_link_diagnosis_info)(struct hnae3_handle *handle,
+ 				       u32 *status_code);
++	void (*clean_vf_config)(struct hnae3_ae_dev *ae_dev, int num_vfs);
+ };
+ 
+ struct hnae3_dcb_ops {
+@@ -841,6 +844,7 @@ struct hnae3_handle {
+ 	struct dentry *hnae3_dbgfs;
+ 	/* protects concurrent contention between debugfs commands */
+ 	struct mutex dbgfs_lock;
++	char **dbgfs_buf;
+ 
+ 	/* Network interface message level enabled bits */
+ 	u32 msg_enable;
+@@ -861,6 +865,20 @@ struct hnae3_handle {
+ #define hnae3_get_bit(origin, shift) \
+ 	hnae3_get_field(origin, 0x1 << (shift), shift)
+ 
++#define HNAE3_FORMAT_MAC_ADDR_LEN	18
++#define HNAE3_FORMAT_MAC_ADDR_OFFSET_0	0
++#define HNAE3_FORMAT_MAC_ADDR_OFFSET_4	4
++#define HNAE3_FORMAT_MAC_ADDR_OFFSET_5	5
++
++static inline void hnae3_format_mac_addr(char *format_mac_addr,
++					 const u8 *mac_addr)
++{
++	snprintf(format_mac_addr, HNAE3_FORMAT_MAC_ADDR_LEN, "%02x:**:**:**:%02x:%02x",
++		 mac_addr[HNAE3_FORMAT_MAC_ADDR_OFFSET_0],
++		 mac_addr[HNAE3_FORMAT_MAC_ADDR_OFFSET_4],
++		 mac_addr[HNAE3_FORMAT_MAC_ADDR_OFFSET_5]);
++}
++
+ int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev);
+ void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index c381f8af67f08..1b155d7452186 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -1227,7 +1227,7 @@ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
+ 		return ret;
+ 
+ 	mutex_lock(&handle->dbgfs_lock);
+-	save_buf = &hns3_dbg_cmd[index].buf;
++	save_buf = &handle->dbgfs_buf[index];
+ 
+ 	if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
+ 	    test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) {
+@@ -1332,6 +1332,13 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ 	int ret;
+ 	u32 i;
+ 
++	handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev,
++					 ARRAY_SIZE(hns3_dbg_cmd),
++					 sizeof(*handle->dbgfs_buf),
++					 GFP_KERNEL);
++	if (!handle->dbgfs_buf)
++		return -ENOMEM;
++
+ 	hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry =
+ 				debugfs_create_dir(name, hns3_dbgfs_root);
+ 	handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry;
+@@ -1380,9 +1387,9 @@ void hns3_dbg_uninit(struct hnae3_handle *handle)
+ 	u32 i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++)
+-		if (hns3_dbg_cmd[i].buf) {
+-			kvfree(hns3_dbg_cmd[i].buf);
+-			hns3_dbg_cmd[i].buf = NULL;
++		if (handle->dbgfs_buf[i]) {
++			kvfree(handle->dbgfs_buf[i]);
++			handle->dbgfs_buf[i] = NULL;
+ 		}
+ 
+ 	mutex_destroy(&handle->dbgfs_lock);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
+index bd8801065e024..814f7491ca08d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
+@@ -47,7 +47,6 @@ struct hns3_dbg_cmd_info {
+ 	enum hnae3_dbg_cmd cmd;
+ 	enum hns3_dbg_dentry_type dentry;
+ 	u32 buf_len;
+-	char *buf;
+ 	int (*init)(struct hnae3_handle *handle, unsigned int cmd);
+ };
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 9ccebbaa0d696..ef75223b0b55e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -2255,6 +2255,8 @@ out_err_tx_ok:
+ 
+ static int hns3_nic_net_set_mac_address(struct net_device *netdev, void *p)
+ {
++	char format_mac_addr_perm[HNAE3_FORMAT_MAC_ADDR_LEN];
++	char format_mac_addr_sa[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hnae3_handle *h = hns3_get_handle(netdev);
+ 	struct sockaddr *mac_addr = p;
+ 	int ret;
+@@ -2263,8 +2265,9 @@ static int hns3_nic_net_set_mac_address(struct net_device *netdev, void *p)
+ 		return -EADDRNOTAVAIL;
+ 
+ 	if (ether_addr_equal(netdev->dev_addr, mac_addr->sa_data)) {
+-		netdev_info(netdev, "already using mac address %pM\n",
+-			    mac_addr->sa_data);
++		hnae3_format_mac_addr(format_mac_addr_sa, mac_addr->sa_data);
++		netdev_info(netdev, "already using mac address %s\n",
++			    format_mac_addr_sa);
+ 		return 0;
+ 	}
+ 
+@@ -2273,8 +2276,10 @@ static int hns3_nic_net_set_mac_address(struct net_device *netdev, void *p)
+ 	 */
+ 	if (!hns3_is_phys_func(h->pdev) &&
+ 	    !is_zero_ether_addr(netdev->perm_addr)) {
+-		netdev_err(netdev, "has permanent MAC %pM, user MAC %pM not allow\n",
+-			   netdev->perm_addr, mac_addr->sa_data);
++		hnae3_format_mac_addr(format_mac_addr_perm, netdev->perm_addr);
++		hnae3_format_mac_addr(format_mac_addr_sa, mac_addr->sa_data);
++		netdev_err(netdev, "has permanent MAC %s, user MAC %s not allow\n",
++			   format_mac_addr_perm, format_mac_addr_sa);
+ 		return -EPERM;
+ 	}
+ 
+@@ -2836,14 +2841,16 @@ static int hns3_nic_set_vf_rate(struct net_device *ndev, int vf,
+ static int hns3_nic_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
+ {
+ 	struct hnae3_handle *h = hns3_get_handle(netdev);
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 
+ 	if (!h->ae_algo->ops->set_vf_mac)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (is_multicast_ether_addr(mac)) {
++		hnae3_format_mac_addr(format_mac_addr, mac);
+ 		netdev_err(netdev,
+-			   "Invalid MAC:%pM specified. Could not set MAC\n",
+-			   mac);
++			   "Invalid MAC:%s specified. Could not set MAC\n",
++			   format_mac_addr);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -2947,6 +2954,21 @@ static int hns3_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	return ret;
+ }
+ 
++/**
++ * hns3_clean_vf_config
++ * @pdev: pointer to a pci_dev structure
++ * @num_vfs: number of VFs allocated
++ *
++ * Clean residual vf config after disable sriov
++ **/
++static void hns3_clean_vf_config(struct pci_dev *pdev, int num_vfs)
++{
++	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
++
++	if (ae_dev->ops->clean_vf_config)
++		ae_dev->ops->clean_vf_config(ae_dev, num_vfs);
++}
++
+ /* hns3_remove - Device removal routine
+  * @pdev: PCI device information struct
+  */
+@@ -2985,7 +3007,10 @@ static int hns3_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
+ 		else
+ 			return num_vfs;
+ 	} else if (!pci_vfs_assigned(pdev)) {
++		int num_vfs_pre = pci_num_vf(pdev);
++
+ 		pci_disable_sriov(pdev);
++		hns3_clean_vf_config(pdev, num_vfs_pre);
+ 	} else {
+ 		dev_warn(&pdev->dev,
+ 			 "Unable to free VFs because some are assigned to VMs.\n");
+@@ -4934,6 +4959,7 @@ static void hns3_uninit_all_ring(struct hns3_nic_priv *priv)
+ static int hns3_init_mac_addr(struct net_device *netdev)
+ {
+ 	struct hns3_nic_priv *priv = netdev_priv(netdev);
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hnae3_handle *h = priv->ae_handle;
+ 	u8 mac_addr_temp[ETH_ALEN];
+ 	int ret = 0;
+@@ -4944,8 +4970,9 @@ static int hns3_init_mac_addr(struct net_device *netdev)
+ 	/* Check if the MAC address is valid, if not get a random one */
+ 	if (!is_valid_ether_addr(mac_addr_temp)) {
+ 		eth_hw_addr_random(netdev);
+-		dev_warn(priv->dev, "using random MAC address %pM\n",
+-			 netdev->dev_addr);
++		hnae3_format_mac_addr(format_mac_addr, netdev->dev_addr);
++		dev_warn(priv->dev, "using random MAC address %s\n",
++			 format_mac_addr);
+ 	} else if (!ether_addr_equal(netdev->dev_addr, mac_addr_temp)) {
+ 		eth_hw_addr_set(netdev, mac_addr_temp);
+ 		ether_addr_copy(netdev->perm_addr, mac_addr_temp);
+@@ -4997,8 +5024,10 @@ static void hns3_client_stop(struct hnae3_handle *handle)
+ static void hns3_info_show(struct hns3_nic_priv *priv)
+ {
+ 	struct hnae3_knic_private_info *kinfo = &priv->ae_handle->kinfo;
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 
+-	dev_info(priv->dev, "MAC address: %pM\n", priv->netdev->dev_addr);
++	hnae3_format_mac_addr(format_mac_addr, priv->netdev->dev_addr);
++	dev_info(priv->dev, "MAC address: %s\n", format_mac_addr);
+ 	dev_info(priv->dev, "Task queue pairs numbers: %u\n", kinfo->num_tqps);
+ 	dev_info(priv->dev, "RSS size: %u\n", kinfo->rss_size);
+ 	dev_info(priv->dev, "Allocated RSS size: %u\n", kinfo->req_rss_size);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index c2a58101144e3..4ad08849e41ae 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1956,6 +1956,7 @@ static int hclge_alloc_vport(struct hclge_dev *hdev)
+ 		vport->vf_info.link_state = IFLA_VF_LINK_STATE_AUTO;
+ 		vport->mps = HCLGE_MAC_DEFAULT_FRAME;
+ 		vport->port_base_vlan_cfg.state = HNAE3_PORT_BASE_VLAN_DISABLE;
++		vport->port_base_vlan_cfg.tbl_sta = true;
+ 		vport->rxvlan_cfg.rx_vlan_offload_en = true;
+ 		vport->req_vlan_fltr_en = true;
+ 		INIT_LIST_HEAD(&vport->vlan_list);
+@@ -8743,6 +8744,7 @@ int hclge_update_mac_list(struct hclge_vport *vport,
+ 			  enum HCLGE_MAC_ADDR_TYPE mac_type,
+ 			  const unsigned char *addr)
+ {
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclge_dev *hdev = vport->back;
+ 	struct hclge_mac_node *mac_node;
+ 	struct list_head *list;
+@@ -8767,9 +8769,10 @@ int hclge_update_mac_list(struct hclge_vport *vport,
+ 	/* if this address is never added, unnecessary to delete */
+ 	if (state == HCLGE_MAC_TO_DEL) {
+ 		spin_unlock_bh(&vport->mac_list_lock);
++		hnae3_format_mac_addr(format_mac_addr, addr);
+ 		dev_err(&hdev->pdev->dev,
+-			"failed to delete address %pM from mac list\n",
+-			addr);
++			"failed to delete address %s from mac list\n",
++			format_mac_addr);
+ 		return -ENOENT;
+ 	}
+ 
+@@ -8802,6 +8805,7 @@ static int hclge_add_uc_addr(struct hnae3_handle *handle,
+ int hclge_add_uc_addr_common(struct hclge_vport *vport,
+ 			     const unsigned char *addr)
+ {
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclge_dev *hdev = vport->back;
+ 	struct hclge_mac_vlan_tbl_entry_cmd req;
+ 	struct hclge_desc desc;
+@@ -8812,9 +8816,10 @@ int hclge_add_uc_addr_common(struct hclge_vport *vport,
+ 	if (is_zero_ether_addr(addr) ||
+ 	    is_broadcast_ether_addr(addr) ||
+ 	    is_multicast_ether_addr(addr)) {
++		hnae3_format_mac_addr(format_mac_addr, addr);
+ 		dev_err(&hdev->pdev->dev,
+-			"Set_uc mac err! invalid mac:%pM. is_zero:%d,is_br=%d,is_mul=%d\n",
+-			 addr, is_zero_ether_addr(addr),
++			"Set_uc mac err! invalid mac:%s. is_zero:%d,is_br=%d,is_mul=%d\n",
++			 format_mac_addr, is_zero_ether_addr(addr),
+ 			 is_broadcast_ether_addr(addr),
+ 			 is_multicast_ether_addr(addr));
+ 		return -EINVAL;
+@@ -8871,6 +8876,7 @@ static int hclge_rm_uc_addr(struct hnae3_handle *handle,
+ int hclge_rm_uc_addr_common(struct hclge_vport *vport,
+ 			    const unsigned char *addr)
+ {
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclge_dev *hdev = vport->back;
+ 	struct hclge_mac_vlan_tbl_entry_cmd req;
+ 	int ret;
+@@ -8879,8 +8885,9 @@ int hclge_rm_uc_addr_common(struct hclge_vport *vport,
+ 	if (is_zero_ether_addr(addr) ||
+ 	    is_broadcast_ether_addr(addr) ||
+ 	    is_multicast_ether_addr(addr)) {
+-		dev_dbg(&hdev->pdev->dev, "Remove mac err! invalid mac:%pM.\n",
+-			addr);
++		hnae3_format_mac_addr(format_mac_addr, addr);
++		dev_dbg(&hdev->pdev->dev, "Remove mac err! invalid mac:%s.\n",
++			format_mac_addr);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -8888,12 +8895,11 @@ int hclge_rm_uc_addr_common(struct hclge_vport *vport,
+ 	hnae3_set_bit(req.entry_type, HCLGE_MAC_VLAN_BIT0_EN_B, 0);
+ 	hclge_prepare_mac_addr(&req, addr, false);
+ 	ret = hclge_remove_mac_vlan_tbl(vport, &req);
+-	if (!ret) {
++	if (!ret || ret == -ENOENT) {
+ 		mutex_lock(&hdev->vport_lock);
+ 		hclge_update_umv_space(vport, true);
+ 		mutex_unlock(&hdev->vport_lock);
+-	} else if (ret == -ENOENT) {
+-		ret = 0;
++		return 0;
+ 	}
+ 
+ 	return ret;
+@@ -8911,6 +8917,7 @@ static int hclge_add_mc_addr(struct hnae3_handle *handle,
+ int hclge_add_mc_addr_common(struct hclge_vport *vport,
+ 			     const unsigned char *addr)
+ {
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclge_dev *hdev = vport->back;
+ 	struct hclge_mac_vlan_tbl_entry_cmd req;
+ 	struct hclge_desc desc[3];
+@@ -8919,9 +8926,10 @@ int hclge_add_mc_addr_common(struct hclge_vport *vport,
+ 
+ 	/* mac addr check */
+ 	if (!is_multicast_ether_addr(addr)) {
++		hnae3_format_mac_addr(format_mac_addr, addr);
+ 		dev_err(&hdev->pdev->dev,
+-			"Add mc mac err! invalid mac:%pM.\n",
+-			 addr);
++			"Add mc mac err! invalid mac:%s.\n",
++			 format_mac_addr);
+ 		return -EINVAL;
+ 	}
+ 	memset(&req, 0, sizeof(req));
+@@ -8973,6 +8981,7 @@ static int hclge_rm_mc_addr(struct hnae3_handle *handle,
+ int hclge_rm_mc_addr_common(struct hclge_vport *vport,
+ 			    const unsigned char *addr)
+ {
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclge_dev *hdev = vport->back;
+ 	struct hclge_mac_vlan_tbl_entry_cmd req;
+ 	enum hclge_cmd_status status;
+@@ -8980,9 +8989,10 @@ int hclge_rm_mc_addr_common(struct hclge_vport *vport,
+ 
+ 	/* mac addr check */
+ 	if (!is_multicast_ether_addr(addr)) {
++		hnae3_format_mac_addr(format_mac_addr, addr);
+ 		dev_dbg(&hdev->pdev->dev,
+-			"Remove mc mac err! invalid mac:%pM.\n",
+-			 addr);
++			"Remove mc mac err! invalid mac:%s.\n",
++			 format_mac_addr);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -9422,30 +9432,37 @@ static int hclge_set_vf_mac(struct hnae3_handle *handle, int vf,
+ 			    u8 *mac_addr)
+ {
+ 	struct hclge_vport *vport = hclge_get_vport(handle);
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclge_dev *hdev = vport->back;
+ 
+ 	vport = hclge_get_vf_vport(hdev, vf);
+ 	if (!vport)
+ 		return -EINVAL;
+ 
++	hnae3_format_mac_addr(format_mac_addr, mac_addr);
+ 	if (ether_addr_equal(mac_addr, vport->vf_info.mac)) {
+ 		dev_info(&hdev->pdev->dev,
+-			 "Specified MAC(=%pM) is same as before, no change committed!\n",
+-			 mac_addr);
++			 "Specified MAC(=%s) is same as before, no change committed!\n",
++			 format_mac_addr);
+ 		return 0;
+ 	}
+ 
+ 	ether_addr_copy(vport->vf_info.mac, mac_addr);
+ 
++	/* there is a timewindow for PF to know VF unalive, it may
++	 * cause send mailbox fail, but it doesn't matter, VF will
++	 * query it when reinit.
++	 */
+ 	if (test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state)) {
+ 		dev_info(&hdev->pdev->dev,
+-			 "MAC of VF %d has been set to %pM, and it will be reinitialized!\n",
+-			 vf, mac_addr);
+-		return hclge_inform_reset_assert_to_vf(vport);
++			 "MAC of VF %d has been set to %s, and it will be reinitialized!\n",
++			 vf, format_mac_addr);
++		(void)hclge_inform_reset_assert_to_vf(vport);
++		return 0;
+ 	}
+ 
+-	dev_info(&hdev->pdev->dev, "MAC of VF %d has been set to %pM\n",
+-		 vf, mac_addr);
++	dev_info(&hdev->pdev->dev, "MAC of VF %d has been set to %s\n",
++		 vf, format_mac_addr);
+ 	return 0;
+ }
+ 
+@@ -9549,6 +9566,7 @@ static int hclge_set_mac_addr(struct hnae3_handle *handle, const void *p,
+ {
+ 	const unsigned char *new_addr = (const unsigned char *)p;
+ 	struct hclge_vport *vport = hclge_get_vport(handle);
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclge_dev *hdev = vport->back;
+ 	unsigned char *old_addr = NULL;
+ 	int ret;
+@@ -9557,9 +9575,10 @@ static int hclge_set_mac_addr(struct hnae3_handle *handle, const void *p,
+ 	if (is_zero_ether_addr(new_addr) ||
+ 	    is_broadcast_ether_addr(new_addr) ||
+ 	    is_multicast_ether_addr(new_addr)) {
++		hnae3_format_mac_addr(format_mac_addr, new_addr);
+ 		dev_err(&hdev->pdev->dev,
+-			"change uc mac err! invalid mac: %pM.\n",
+-			 new_addr);
++			"change uc mac err! invalid mac: %s.\n",
++			 format_mac_addr);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -9577,9 +9596,10 @@ static int hclge_set_mac_addr(struct hnae3_handle *handle, const void *p,
+ 	spin_lock_bh(&vport->mac_list_lock);
+ 	ret = hclge_update_mac_node_for_dev_addr(vport, old_addr, new_addr);
+ 	if (ret) {
++		hnae3_format_mac_addr(format_mac_addr, new_addr);
+ 		dev_err(&hdev->pdev->dev,
+-			"failed to change the mac addr:%pM, ret = %d\n",
+-			new_addr, ret);
++			"failed to change the mac addr:%s, ret = %d\n",
++			format_mac_addr, ret);
+ 		spin_unlock_bh(&vport->mac_list_lock);
+ 
+ 		if (!is_first)
+@@ -10237,19 +10257,28 @@ static void hclge_add_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ 				       bool writen_to_tbl)
+ {
+ 	struct hclge_vport_vlan_cfg *vlan, *tmp;
++	struct hclge_dev *hdev = vport->back;
+ 
+-	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node)
+-		if (vlan->vlan_id == vlan_id)
++	mutex_lock(&hdev->vport_lock);
++
++	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
++		if (vlan->vlan_id == vlan_id) {
++			mutex_unlock(&hdev->vport_lock);
+ 			return;
++		}
++	}
+ 
+ 	vlan = kzalloc(sizeof(*vlan), GFP_KERNEL);
+-	if (!vlan)
++	if (!vlan) {
++		mutex_unlock(&hdev->vport_lock);
+ 		return;
++	}
+ 
+ 	vlan->hd_tbl_status = writen_to_tbl;
+ 	vlan->vlan_id = vlan_id;
+ 
+ 	list_add_tail(&vlan->node, &vport->vlan_list);
++	mutex_unlock(&hdev->vport_lock);
+ }
+ 
+ static int hclge_add_vport_all_vlan_table(struct hclge_vport *vport)
+@@ -10258,6 +10287,8 @@ static int hclge_add_vport_all_vlan_table(struct hclge_vport *vport)
+ 	struct hclge_dev *hdev = vport->back;
+ 	int ret;
+ 
++	mutex_lock(&hdev->vport_lock);
++
+ 	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+ 		if (!vlan->hd_tbl_status) {
+ 			ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
+@@ -10267,12 +10298,16 @@ static int hclge_add_vport_all_vlan_table(struct hclge_vport *vport)
+ 				dev_err(&hdev->pdev->dev,
+ 					"restore vport vlan list failed, ret=%d\n",
+ 					ret);
++
++				mutex_unlock(&hdev->vport_lock);
+ 				return ret;
+ 			}
+ 		}
+ 		vlan->hd_tbl_status = true;
+ 	}
+ 
++	mutex_unlock(&hdev->vport_lock);
++
+ 	return 0;
+ }
+ 
+@@ -10282,6 +10317,8 @@ static void hclge_rm_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ 	struct hclge_vport_vlan_cfg *vlan, *tmp;
+ 	struct hclge_dev *hdev = vport->back;
+ 
++	mutex_lock(&hdev->vport_lock);
++
+ 	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+ 		if (vlan->vlan_id == vlan_id) {
+ 			if (is_write_tbl && vlan->hd_tbl_status)
+@@ -10296,6 +10333,8 @@ static void hclge_rm_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ 			break;
+ 		}
+ 	}
++
++	mutex_unlock(&hdev->vport_lock);
+ }
+ 
+ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list)
+@@ -10303,6 +10342,8 @@ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list)
+ 	struct hclge_vport_vlan_cfg *vlan, *tmp;
+ 	struct hclge_dev *hdev = vport->back;
+ 
++	mutex_lock(&hdev->vport_lock);
++
+ 	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+ 		if (vlan->hd_tbl_status)
+ 			hclge_set_vlan_filter_hw(hdev,
+@@ -10318,6 +10359,7 @@ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list)
+ 		}
+ 	}
+ 	clear_bit(vport->vport_id, hdev->vf_vlan_full);
++	mutex_unlock(&hdev->vport_lock);
+ }
+ 
+ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
+@@ -10326,6 +10368,8 @@ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
+ 	struct hclge_vport *vport;
+ 	int i;
+ 
++	mutex_lock(&hdev->vport_lock);
++
+ 	for (i = 0; i < hdev->num_alloc_vport; i++) {
+ 		vport = &hdev->vport[i];
+ 		list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+@@ -10333,37 +10377,61 @@ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
+ 			kfree(vlan);
+ 		}
+ 	}
++
++	mutex_unlock(&hdev->vport_lock);
+ }
+ 
+-void hclge_restore_vport_vlan_table(struct hclge_vport *vport)
++void hclge_restore_vport_port_base_vlan_config(struct hclge_dev *hdev)
+ {
+-	struct hclge_vport_vlan_cfg *vlan, *tmp;
+-	struct hclge_dev *hdev = vport->back;
++	struct hclge_vlan_info *vlan_info;
++	struct hclge_vport *vport;
+ 	u16 vlan_proto;
+ 	u16 vlan_id;
+ 	u16 state;
++	int vf_id;
+ 	int ret;
+ 
+-	vlan_proto = vport->port_base_vlan_cfg.vlan_info.vlan_proto;
+-	vlan_id = vport->port_base_vlan_cfg.vlan_info.vlan_tag;
+-	state = vport->port_base_vlan_cfg.state;
++	/* PF should restore all vfs port base vlan */
++	for (vf_id = 0; vf_id < hdev->num_alloc_vfs; vf_id++) {
++		vport = &hdev->vport[vf_id + HCLGE_VF_VPORT_START_NUM];
++		vlan_info = vport->port_base_vlan_cfg.tbl_sta ?
++			    &vport->port_base_vlan_cfg.vlan_info :
++			    &vport->port_base_vlan_cfg.old_vlan_info;
+ 
+-	if (state != HNAE3_PORT_BASE_VLAN_DISABLE) {
+-		clear_bit(vport->vport_id, hdev->vlan_table[vlan_id]);
+-		hclge_set_vlan_filter_hw(hdev, htons(vlan_proto),
+-					 vport->vport_id, vlan_id,
+-					 false);
+-		return;
++		vlan_id = vlan_info->vlan_tag;
++		vlan_proto = vlan_info->vlan_proto;
++		state = vport->port_base_vlan_cfg.state;
++
++		if (state != HNAE3_PORT_BASE_VLAN_DISABLE) {
++			clear_bit(vport->vport_id, hdev->vlan_table[vlan_id]);
++			ret = hclge_set_vlan_filter_hw(hdev, htons(vlan_proto),
++						       vport->vport_id,
++						       vlan_id, false);
++			vport->port_base_vlan_cfg.tbl_sta = ret == 0;
++		}
+ 	}
++}
+ 
+-	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+-		ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
+-					       vport->vport_id,
+-					       vlan->vlan_id, false);
+-		if (ret)
+-			break;
+-		vlan->hd_tbl_status = true;
++void hclge_restore_vport_vlan_table(struct hclge_vport *vport)
++{
++	struct hclge_vport_vlan_cfg *vlan, *tmp;
++	struct hclge_dev *hdev = vport->back;
++	int ret;
++
++	mutex_lock(&hdev->vport_lock);
++
++	if (vport->port_base_vlan_cfg.state == HNAE3_PORT_BASE_VLAN_DISABLE) {
++		list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
++			ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
++						       vport->vport_id,
++						       vlan->vlan_id, false);
++			if (ret)
++				break;
++			vlan->hd_tbl_status = true;
++		}
+ 	}
++
++	mutex_unlock(&hdev->vport_lock);
+ }
+ 
+ /* For global reset and imp reset, hardware will clear the mac table,
+@@ -10403,6 +10471,7 @@ static void hclge_restore_hw_table(struct hclge_dev *hdev)
+ 	struct hnae3_handle *handle = &vport->nic;
+ 
+ 	hclge_restore_mac_table_common(vport);
++	hclge_restore_vport_port_base_vlan_config(hdev);
+ 	hclge_restore_vport_vlan_table(vport);
+ 	set_bit(HCLGE_STATE_FD_USER_DEF_CHANGED, &hdev->state);
+ 	hclge_restore_fd_entries(handle);
+@@ -10459,6 +10528,8 @@ static int hclge_update_vlan_filter_entries(struct hclge_vport *vport,
+ 						 false);
+ 	}
+ 
++	vport->port_base_vlan_cfg.tbl_sta = false;
++
+ 	/* force add VLAN 0 */
+ 	ret = hclge_set_vf_vlan_common(hdev, vport->vport_id, false, 0);
+ 	if (ret)
+@@ -10545,7 +10616,9 @@ out:
+ 	else
+ 		nic->port_base_vlan_state = HNAE3_PORT_BASE_VLAN_ENABLE;
+ 
++	vport->port_base_vlan_cfg.old_vlan_info = *old_vlan_info;
+ 	vport->port_base_vlan_cfg.vlan_info = *vlan_info;
++	vport->port_base_vlan_cfg.tbl_sta = true;
+ 	hclge_set_vport_vlan_fltr_change(vport);
+ 
+ 	return 0;
+@@ -10613,14 +10686,17 @@ static int hclge_set_vf_vlan_filter(struct hnae3_handle *handle, int vfid,
+ 		return ret;
+ 	}
+ 
+-	/* for DEVICE_VERSION_V3, vf doesn't need to know about the port based
++	/* there is a timewindow for PF to know VF unalive, it may
++	 * cause send mailbox fail, but it doesn't matter, VF will
++	 * query it when reinit.
++	 * for DEVICE_VERSION_V3, vf doesn't need to know about the port based
+ 	 * VLAN state.
+ 	 */
+ 	if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3 &&
+ 	    test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state))
+-		hclge_push_vf_port_base_vlan_info(&hdev->vport[0],
+-						  vport->vport_id, state,
+-						  &vlan_info);
++		(void)hclge_push_vf_port_base_vlan_info(&hdev->vport[0],
++							vport->vport_id,
++							state, &vlan_info);
+ 
+ 	return 0;
+ }
+@@ -10678,11 +10754,11 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
+ 	}
+ 
+ 	if (!ret) {
+-		if (is_kill)
+-			hclge_rm_vport_vlan_table(vport, vlan_id, false);
+-		else
++		if (!is_kill)
+ 			hclge_add_vport_vlan_table(vport, vlan_id,
+ 						   writen_to_tbl);
++		else if (is_kill && vlan_id != 0)
++			hclge_rm_vport_vlan_table(vport, vlan_id, false);
+ 	} else if (is_kill) {
+ 		/* when remove hw vlan filter failed, record the vlan id,
+ 		 * and try to remove it from hw later, to be consistence
+@@ -12256,8 +12332,8 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+ 	hclge_misc_irq_uninit(hdev);
+ 	hclge_devlink_uninit(hdev);
+ 	hclge_pci_uninit(hdev);
+-	mutex_destroy(&hdev->vport_lock);
+ 	hclge_uninit_vport_vlan_table(hdev);
++	mutex_destroy(&hdev->vport_lock);
+ 	ae_dev->priv = NULL;
+ }
+ 
+@@ -13070,6 +13146,55 @@ static int hclge_get_link_diagnosis_info(struct hnae3_handle *handle,
+ 	return 0;
+ }
+ 
++/* After disable sriov, VF still has some config and info need clean,
++ * which configed by PF.
++ */
++static void hclge_clear_vport_vf_info(struct hclge_vport *vport, int vfid)
++{
++	struct hclge_dev *hdev = vport->back;
++	struct hclge_vlan_info vlan_info;
++	int ret;
++
++	/* after disable sriov, clean VF rate configured by PF */
++	ret = hclge_tm_qs_shaper_cfg(vport, 0);
++	if (ret)
++		dev_err(&hdev->pdev->dev,
++			"failed to clean vf%d rate config, ret = %d\n",
++			vfid, ret);
++
++	vlan_info.vlan_tag = 0;
++	vlan_info.qos = 0;
++	vlan_info.vlan_proto = ETH_P_8021Q;
++	ret = hclge_update_port_base_vlan_cfg(vport,
++					      HNAE3_PORT_BASE_VLAN_DISABLE,
++					      &vlan_info);
++	if (ret)
++		dev_err(&hdev->pdev->dev,
++			"failed to clean vf%d port base vlan, ret = %d\n",
++			vfid, ret);
++
++	ret = hclge_set_vf_spoofchk_hw(hdev, vport->vport_id, false);
++	if (ret)
++		dev_err(&hdev->pdev->dev,
++			"failed to clean vf%d spoof config, ret = %d\n",
++			vfid, ret);
++
++	memset(&vport->vf_info, 0, sizeof(vport->vf_info));
++}
++
++static void hclge_clean_vport_config(struct hnae3_ae_dev *ae_dev, int num_vfs)
++{
++	struct hclge_dev *hdev = ae_dev->priv;
++	struct hclge_vport *vport;
++	int i;
++
++	for (i = 0; i < num_vfs; i++) {
++		vport = &hdev->vport[i + HCLGE_VF_VPORT_START_NUM];
++
++		hclge_clear_vport_vf_info(vport, i);
++	}
++}
++
+ static const struct hnae3_ae_ops hclge_ops = {
+ 	.init_ae_dev = hclge_init_ae_dev,
+ 	.uninit_ae_dev = hclge_uninit_ae_dev,
+@@ -13171,6 +13296,7 @@ static const struct hnae3_ae_ops hclge_ops = {
+ 	.get_rx_hwts = hclge_ptp_get_rx_hwts,
+ 	.get_ts_info = hclge_ptp_get_ts_info,
+ 	.get_link_diagnosis_info = hclge_get_link_diagnosis_info,
++	.clean_vf_config = hclge_clean_vport_config,
+ };
+ 
+ static struct hnae3_ae_algo ae_algo = {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index ebba603483a04..c841cce2d0253 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -1030,7 +1030,9 @@ struct hclge_vlan_info {
+ 
+ struct hclge_port_base_vlan_config {
+ 	u16 state;
++	bool tbl_sta;
+ 	struct hclge_vlan_info vlan_info;
++	struct hclge_vlan_info old_vlan_info;
+ };
+ 
+ struct hclge_vf_info {
+@@ -1085,6 +1087,7 @@ struct hclge_vport {
+ 	spinlock_t mac_list_lock; /* protect mac address need to add/detele */
+ 	struct list_head uc_mac_list;   /* Store VF unicast table */
+ 	struct list_head mc_mac_list;   /* Store VF multicast table */
++
+ 	struct list_head vlan_list;     /* Store VF vlan table */
+ };
+ 
+@@ -1154,6 +1157,7 @@ void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list,
+ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list);
+ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev);
+ void hclge_restore_mac_table_common(struct hclge_vport *vport);
++void hclge_restore_vport_port_base_vlan_config(struct hclge_dev *hdev);
+ void hclge_restore_vport_vlan_table(struct hclge_vport *vport);
+ int hclge_update_port_base_vlan_cfg(struct hclge_vport *vport, u16 state,
+ 				    struct hclge_vlan_info *vlan_info);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 70491e07b0ff6..7e30bad08356c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1514,15 +1514,18 @@ static void hclgevf_config_mac_list(struct hclgevf_dev *hdev,
+ 				    struct list_head *list,
+ 				    enum HCLGEVF_MAC_ADDR_TYPE mac_type)
+ {
++	char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN];
+ 	struct hclgevf_mac_addr_node *mac_node, *tmp;
+ 	int ret;
+ 
+ 	list_for_each_entry_safe(mac_node, tmp, list, node) {
+ 		ret = hclgevf_add_del_mac_addr(hdev, mac_node, mac_type);
+ 		if  (ret) {
++			hnae3_format_mac_addr(format_mac_addr,
++					      mac_node->mac_addr);
+ 			dev_err(&hdev->pdev->dev,
+-				"failed to configure mac %pM, state = %d, ret = %d\n",
+-				mac_node->mac_addr, mac_node->state, ret);
++				"failed to configure mac %s, state = %d, ret = %d\n",
++				format_mac_addr, mac_node->state, ret);
+ 			return;
+ 		}
+ 		if (mac_node->state == HCLGEVF_MAC_TO_ADD) {
+@@ -3341,6 +3344,11 @@ static int hclgevf_reset_hdev(struct hclgevf_dev *hdev)
+ 		return ret;
+ 	}
+ 
++	/* get current port based vlan state from PF */
++	ret = hclgevf_get_port_base_vlan_filter_state(hdev);
++	if (ret)
++		return ret;
++
+ 	set_bit(HCLGEVF_STATE_PROMISC_CHANGED, &hdev->state);
+ 
+ 	hclgevf_init_rxd_adv_layout(hdev);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index a8b65c072f64e..82f47bf4d67ca 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1429,6 +1429,15 @@ static int __ibmvnic_open(struct net_device *netdev)
+ 		return rc;
+ 	}
+ 
++	adapter->tx_queues_active = true;
++
++	/* Since queues were stopped until now, there shouldn't be any
++	 * one in ibmvnic_complete_tx() or ibmvnic_xmit() so maybe we
++	 * don't need the synchronize_rcu()? Leaving it for consistency
++	 * with setting ->tx_queues_active = false.
++	 */
++	synchronize_rcu();
++
+ 	netif_tx_start_all_queues(netdev);
+ 
+ 	if (prev_state == VNIC_CLOSED) {
+@@ -1603,6 +1612,14 @@ static void ibmvnic_cleanup(struct net_device *netdev)
+ 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+ 
+ 	/* ensure that transmissions are stopped if called by do_reset */
++
++	adapter->tx_queues_active = false;
++
++	/* Ensure complete_tx() and ibmvnic_xmit() see ->tx_queues_active
++	 * update so they don't restart a queue after we stop it below.
++	 */
++	synchronize_rcu();
++
+ 	if (test_bit(0, &adapter->resetting))
+ 		netif_tx_disable(netdev);
+ 	else
+@@ -1842,14 +1859,21 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
+ 		tx_buff->skb = NULL;
+ 		adapter->netdev->stats.tx_dropped++;
+ 	}
++
+ 	ind_bufp->index = 0;
++
+ 	if (atomic_sub_return(entries, &tx_scrq->used) <=
+ 	    (adapter->req_tx_entries_per_subcrq / 2) &&
+-	    __netif_subqueue_stopped(adapter->netdev, queue_num) &&
+-	    !test_bit(0, &adapter->resetting)) {
+-		netif_wake_subqueue(adapter->netdev, queue_num);
+-		netdev_dbg(adapter->netdev, "Started queue %d\n",
+-			   queue_num);
++	    __netif_subqueue_stopped(adapter->netdev, queue_num)) {
++		rcu_read_lock();
++
++		if (adapter->tx_queues_active) {
++			netif_wake_subqueue(adapter->netdev, queue_num);
++			netdev_dbg(adapter->netdev, "Started queue %d\n",
++				   queue_num);
++		}
++
++		rcu_read_unlock();
+ 	}
+ }
+ 
+@@ -1904,11 +1928,12 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	int index = 0;
+ 	u8 proto = 0;
+ 
+-	tx_scrq = adapter->tx_scrq[queue_num];
+-	txq = netdev_get_tx_queue(netdev, queue_num);
+-	ind_bufp = &tx_scrq->ind_buf;
+-
+-	if (test_bit(0, &adapter->resetting)) {
++	/* If a reset is in progress, drop the packet since
++	 * the scrqs may get torn down. Otherwise use the
++	 * rcu to ensure reset waits for us to complete.
++	 */
++	rcu_read_lock();
++	if (!adapter->tx_queues_active) {
+ 		dev_kfree_skb_any(skb);
+ 
+ 		tx_send_failed++;
+@@ -1917,6 +1942,10 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 		goto out;
+ 	}
+ 
++	tx_scrq = adapter->tx_scrq[queue_num];
++	txq = netdev_get_tx_queue(netdev, queue_num);
++	ind_bufp = &tx_scrq->ind_buf;
++
+ 	if (ibmvnic_xmit_workarounds(skb, netdev)) {
+ 		tx_dropped++;
+ 		tx_send_failed++;
+@@ -1924,6 +1953,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 		ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+ 		goto out;
+ 	}
++
+ 	if (skb_is_gso(skb))
+ 		tx_pool = &adapter->tso_pool[queue_num];
+ 	else
+@@ -2078,6 +2108,7 @@ tx_err:
+ 		netif_carrier_off(netdev);
+ 	}
+ out:
++	rcu_read_unlock();
+ 	netdev->stats.tx_dropped += tx_dropped;
+ 	netdev->stats.tx_bytes += tx_bytes;
+ 	netdev->stats.tx_packets += tx_packets;
+@@ -3728,9 +3759,15 @@ restart_loop:
+ 		    (adapter->req_tx_entries_per_subcrq / 2) &&
+ 		    __netif_subqueue_stopped(adapter->netdev,
+ 					     scrq->pool_index)) {
+-			netif_wake_subqueue(adapter->netdev, scrq->pool_index);
+-			netdev_dbg(adapter->netdev, "Started queue %d\n",
+-				   scrq->pool_index);
++			rcu_read_lock();
++			if (adapter->tx_queues_active) {
++				netif_wake_subqueue(adapter->netdev,
++						    scrq->pool_index);
++				netdev_dbg(adapter->netdev,
++					   "Started queue %d\n",
++					   scrq->pool_index);
++			}
++			rcu_read_unlock();
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index 549a9b7b1a706..1b2bdb85ad675 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -1009,11 +1009,14 @@ struct ibmvnic_adapter {
+ 	struct work_struct ibmvnic_reset;
+ 	struct delayed_work ibmvnic_delayed_reset;
+ 	unsigned long resetting;
+-	bool napi_enabled, from_passive_init;
+-	bool login_pending;
+ 	/* last device reset time */
+ 	unsigned long last_reset_time;
+ 
++	bool napi_enabled;
++	bool from_passive_init;
++	bool login_pending;
++	/* protected by rcu */
++	bool tx_queues_active;
+ 	bool failover_pending;
+ 	bool force_reset_recovery;
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index ea06e957393e6..0e8cf275e084d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -218,7 +218,6 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
+ 	ntu += nb_buffs;
+ 	if (ntu == rx_ring->count) {
+ 		rx_desc = I40E_RX_DESC(rx_ring, 0);
+-		xdp = i40e_rx_bi(rx_ring, 0);
+ 		ntu = 0;
+ 	}
+ 
+@@ -241,21 +240,25 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
+ static struct sk_buff *i40e_construct_skb_zc(struct i40e_ring *rx_ring,
+ 					     struct xdp_buff *xdp)
+ {
++	unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ 	unsigned int metasize = xdp->data - xdp->data_meta;
+-	unsigned int datasize = xdp->data_end - xdp->data;
+ 	struct sk_buff *skb;
+ 
++	net_prefetch(xdp->data_meta);
++
+ 	/* allocate a skb to store the frags */
+-	skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+-			       xdp->data_end - xdp->data_hard_start,
++	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, totalsize,
+ 			       GFP_ATOMIC | __GFP_NOWARN);
+ 	if (unlikely(!skb))
+ 		goto out;
+ 
+-	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+-	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+-	if (metasize)
++	memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++	       ALIGN(totalsize, sizeof(long)));
++
++	if (metasize) {
+ 		skb_metadata_set(skb, metasize);
++		__skb_pull(skb, metasize);
++	}
+ 
+ out:
+ 	xsk_buff_free(xdp);
+@@ -324,11 +327,11 @@ static void i40e_handle_xdp_result_zc(struct i40e_ring *rx_ring,
+ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
+ {
+ 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+-	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
+ 	u16 next_to_clean = rx_ring->next_to_clean;
+ 	u16 count_mask = rx_ring->count - 1;
+ 	unsigned int xdp_res, xdp_xmit = 0;
+ 	bool failure = false;
++	u16 cleaned_count;
+ 
+ 	while (likely(total_rx_packets < (unsigned int)budget)) {
+ 		union i40e_rx_desc *rx_desc;
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 8093346f163b1..a474715860475 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -290,6 +290,7 @@ enum ice_pf_state {
+ 	ICE_LINK_DEFAULT_OVERRIDE_PENDING,
+ 	ICE_PHY_INIT_COMPLETE,
+ 	ICE_FD_VF_FLUSH_CTX,		/* set at FD Rx IRQ or timeout */
++	ICE_AUX_ERR_PENDING,
+ 	ICE_STATE_NBITS		/* must be last */
+ };
+ 
+@@ -557,6 +558,7 @@ struct ice_pf {
+ 	wait_queue_head_t reset_wait_queue;
+ 
+ 	u32 hw_csum_rx_error;
++	u32 oicr_err_reg;
+ 	u16 oicr_idx;		/* Other interrupt cause MSIX vector index */
+ 	u16 num_avail_sw_msix;	/* remaining MSIX SW vectors left unclaimed */
+ 	u16 max_pf_txqs;	/* Total Tx queues PF wide */
+@@ -707,7 +709,7 @@ static inline struct xsk_buff_pool *ice_tx_xsk_pool(struct ice_tx_ring *ring)
+ 	struct ice_vsi *vsi = ring->vsi;
+ 	u16 qid;
+ 
+-	qid = ring->q_index - vsi->num_xdp_txq;
++	qid = ring->q_index - vsi->alloc_txq;
+ 
+ 	if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps))
+ 		return NULL;
+diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
+index adcc9a251595a..a2714988dd96f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_idc.c
++++ b/drivers/net/ethernet/intel/ice/ice_idc.c
+@@ -34,6 +34,9 @@ void ice_send_event_to_aux(struct ice_pf *pf, struct iidc_event *event)
+ {
+ 	struct iidc_auxiliary_drv *iadrv;
+ 
++	if (WARN_ON_ONCE(!in_task()))
++		return;
++
+ 	if (!pf->adev)
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index b449b3408a1cb..523877c2f8b6b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2237,6 +2237,19 @@ static void ice_service_task(struct work_struct *work)
+ 		return;
+ 	}
+ 
++	if (test_and_clear_bit(ICE_AUX_ERR_PENDING, pf->state)) {
++		struct iidc_event *event;
++
++		event = kzalloc(sizeof(*event), GFP_KERNEL);
++		if (event) {
++			set_bit(IIDC_EVENT_CRIT_ERR, event->type);
++			/* report the entire OICR value to AUX driver */
++			swap(event->reg, pf->oicr_err_reg);
++			ice_send_event_to_aux(pf, event);
++			kfree(event);
++		}
++	}
++
+ 	if (test_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) {
+ 		/* Plug aux device per request */
+ 		ice_plug_aux_dev(pf);
+@@ -3023,17 +3036,9 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
+ 
+ #define ICE_AUX_CRIT_ERR (PFINT_OICR_PE_CRITERR_M | PFINT_OICR_HMC_ERR_M | PFINT_OICR_PE_PUSH_M)
+ 	if (oicr & ICE_AUX_CRIT_ERR) {
+-		struct iidc_event *event;
+-
++		pf->oicr_err_reg |= oicr;
++		set_bit(ICE_AUX_ERR_PENDING, pf->state);
+ 		ena_mask &= ~ICE_AUX_CRIT_ERR;
+-		event = kzalloc(sizeof(*event), GFP_ATOMIC);
+-		if (event) {
+-			set_bit(IIDC_EVENT_CRIT_ERR, event->type);
+-			/* report the entire OICR value to AUX driver */
+-			event->reg = oicr;
+-			ice_send_event_to_aux(pf, event);
+-			kfree(event);
+-		}
+ 	}
+ 
+ 	/* Report any remaining unexpected interrupts */
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index c895351b25e0a..ac97cf3c58040 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -428,20 +428,24 @@ static void ice_bump_ntc(struct ice_rx_ring *rx_ring)
+ static struct sk_buff *
+ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
+ {
+-	unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
++	unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ 	unsigned int metasize = xdp->data - xdp->data_meta;
+-	unsigned int datasize = xdp->data_end - xdp->data;
+ 	struct sk_buff *skb;
+ 
+-	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,
++	net_prefetch(xdp->data_meta);
++
++	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, totalsize,
+ 			       GFP_ATOMIC | __GFP_NOWARN);
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+-	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+-	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+-	if (metasize)
++	memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++	       ALIGN(totalsize, sizeof(long)));
++
++	if (metasize) {
+ 		skb_metadata_set(skb, metasize);
++		__skb_pull(skb, metasize);
++	}
+ 
+ 	xsk_buff_free(xdp);
+ 	return skb;
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index fb1029352c3e7..3cbb5a89b336f 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -961,10 +961,6 @@ static int igb_set_ringparam(struct net_device *netdev,
+ 			memcpy(&temp_ring[i], adapter->rx_ring[i],
+ 			       sizeof(struct igb_ring));
+ 
+-			/* Clear copied XDP RX-queue info */
+-			memset(&temp_ring[i].xdp_rxq, 0,
+-			       sizeof(temp_ring[i].xdp_rxq));
+-
+ 			temp_ring[i].count = new_rx_count;
+ 			err = igb_setup_rx_resources(&temp_ring[i]);
+ 			if (err) {
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 446894dde1820..5034ebb57b65c 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4352,7 +4352,18 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
+ {
+ 	struct igb_adapter *adapter = netdev_priv(rx_ring->netdev);
+ 	struct device *dev = rx_ring->dev;
+-	int size;
++	int size, res;
++
++	/* XDP RX-queue info */
++	if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
++		xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
++	res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
++			       rx_ring->queue_index, 0);
++	if (res < 0) {
++		dev_err(dev, "Failed to register xdp_rxq index %u\n",
++			rx_ring->queue_index);
++		return res;
++	}
+ 
+ 	size = sizeof(struct igb_rx_buffer) * rx_ring->count;
+ 
+@@ -4375,14 +4386,10 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
+ 
+ 	rx_ring->xdp_prog = adapter->xdp_prog;
+ 
+-	/* XDP RX-queue info */
+-	if (xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
+-			     rx_ring->queue_index, 0) < 0)
+-		goto err;
+-
+ 	return 0;
+ 
+ err:
++	xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+ 	vfree(rx_ring->rx_buffer_info);
+ 	rx_ring->rx_buffer_info = NULL;
+ 	dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n");
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index d83e665b3a4f2..0000eae0d7296 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -505,6 +505,9 @@ int igc_setup_rx_resources(struct igc_ring *rx_ring)
+ 	u8 index = rx_ring->queue_index;
+ 	int size, desc_len, res;
+ 
++	/* XDP RX-queue info */
++	if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
++		xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+ 	res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, ndev, index,
+ 			       rx_ring->q_vector->napi.napi_id);
+ 	if (res < 0) {
+@@ -2435,19 +2438,20 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
+ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+ 					    struct xdp_buff *xdp)
+ {
++	unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ 	unsigned int metasize = xdp->data - xdp->data_meta;
+-	unsigned int datasize = xdp->data_end - xdp->data;
+-	unsigned int totalsize = metasize + datasize;
+ 	struct sk_buff *skb;
+ 
+-	skb = __napi_alloc_skb(&ring->q_vector->napi,
+-			       xdp->data_end - xdp->data_hard_start,
++	net_prefetch(xdp->data_meta);
++
++	skb = __napi_alloc_skb(&ring->q_vector->napi, totalsize,
+ 			       GFP_ATOMIC | __GFP_NOWARN);
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+-	skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
+-	memcpy(__skb_put(skb, totalsize), xdp->data_meta, totalsize);
++	memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++	       ALIGN(totalsize, sizeof(long)));
++
+ 	if (metasize) {
+ 		skb_metadata_set(skb, metasize);
+ 		__skb_pull(skb, metasize);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index 666ff2c07ab90..92a5080f82012 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -207,26 +207,28 @@ bool ixgbe_alloc_rx_buffers_zc(struct ixgbe_ring *rx_ring, u16 count)
+ }
+ 
+ static struct sk_buff *ixgbe_construct_skb_zc(struct ixgbe_ring *rx_ring,
+-					      struct ixgbe_rx_buffer *bi)
++					      const struct xdp_buff *xdp)
+ {
+-	unsigned int metasize = bi->xdp->data - bi->xdp->data_meta;
+-	unsigned int datasize = bi->xdp->data_end - bi->xdp->data;
++	unsigned int totalsize = xdp->data_end - xdp->data_meta;
++	unsigned int metasize = xdp->data - xdp->data_meta;
+ 	struct sk_buff *skb;
+ 
++	net_prefetch(xdp->data_meta);
++
+ 	/* allocate a skb to store the frags */
+-	skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+-			       bi->xdp->data_end - bi->xdp->data_hard_start,
++	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, totalsize,
+ 			       GFP_ATOMIC | __GFP_NOWARN);
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+-	skb_reserve(skb, bi->xdp->data - bi->xdp->data_hard_start);
+-	memcpy(__skb_put(skb, datasize), bi->xdp->data, datasize);
+-	if (metasize)
++	memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++	       ALIGN(totalsize, sizeof(long)));
++
++	if (metasize) {
+ 		skb_metadata_set(skb, metasize);
++		__skb_pull(skb, metasize);
++	}
+ 
+-	xsk_buff_free(bi->xdp);
+-	bi->xdp = NULL;
+ 	return skb;
+ }
+ 
+@@ -317,12 +319,15 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
+ 		}
+ 
+ 		/* XDP_PASS path */
+-		skb = ixgbe_construct_skb_zc(rx_ring, bi);
++		skb = ixgbe_construct_skb_zc(rx_ring, bi->xdp);
+ 		if (!skb) {
+ 			rx_ring->rx_stats.alloc_rx_buff_failed++;
+ 			break;
+ 		}
+ 
++		xsk_buff_free(bi->xdp);
++		bi->xdp = NULL;
++
+ 		cleaned_count++;
+ 		ixgbe_inc_ntc(rx_ring);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 91f86d77cd41b..3a31fb8cc1554 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -605,7 +605,7 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
+ 	struct npc_install_flow_req req = { 0 };
+ 	struct npc_install_flow_rsp rsp = { 0 };
+ 	struct npc_mcam *mcam = &rvu->hw->mcam;
+-	struct nix_rx_action action;
++	struct nix_rx_action action = { 0 };
+ 	int blkaddr, index;
+ 
+ 	/* AF's and SDP VFs work in promiscuous mode */
+@@ -626,7 +626,6 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
+ 		*(u64 *)&action = npc_get_mcam_action(rvu, mcam,
+ 						      blkaddr, index);
+ 	} else {
+-		*(u64 *)&action = 0x00;
+ 		action.op = NIX_RX_ACTIONOP_UCAST;
+ 		action.pf_func = pcifunc;
+ 	}
+@@ -657,7 +656,7 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
+ 	struct npc_mcam *mcam = &rvu->hw->mcam;
+ 	struct rvu_hwinfo *hw = rvu->hw;
+ 	int blkaddr, ucast_idx, index;
+-	struct nix_rx_action action;
++	struct nix_rx_action action = { 0 };
+ 	u64 relaxed_mask;
+ 
+ 	if (!hw->cap.nix_rx_multicast && is_cgx_vf(rvu, pcifunc))
+@@ -685,14 +684,14 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
+ 						      blkaddr, ucast_idx);
+ 
+ 	if (action.op != NIX_RX_ACTIONOP_RSS) {
+-		*(u64 *)&action = 0x00;
++		*(u64 *)&action = 0;
+ 		action.op = NIX_RX_ACTIONOP_UCAST;
+ 	}
+ 
+ 	/* RX_ACTION set to MCAST for CGX PF's */
+ 	if (hw->cap.nix_rx_multicast && pfvf->use_mce_list &&
+ 	    is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) {
+-		*(u64 *)&action = 0x00;
++		*(u64 *)&action = 0;
+ 		action.op = NIX_RX_ACTIONOP_MCAST;
+ 		pfvf = rvu_get_pfvf(rvu, pcifunc & ~RVU_PFVF_FUNC_MASK);
+ 		action.index = pfvf->promisc_mce_idx;
+@@ -832,7 +831,7 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
+ 	struct rvu_hwinfo *hw = rvu->hw;
+ 	int blkaddr, ucast_idx, index;
+ 	u8 mac_addr[ETH_ALEN] = { 0 };
+-	struct nix_rx_action action;
++	struct nix_rx_action action = { 0 };
+ 	struct rvu_pfvf *pfvf;
+ 	u16 vf_func;
+ 
+@@ -861,14 +860,14 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
+ 							blkaddr, ucast_idx);
+ 
+ 	if (action.op != NIX_RX_ACTIONOP_RSS) {
+-		*(u64 *)&action = 0x00;
++		*(u64 *)&action = 0;
+ 		action.op = NIX_RX_ACTIONOP_UCAST;
+ 		action.pf_func = pcifunc;
+ 	}
+ 
+ 	/* RX_ACTION set to MCAST for CGX PF's */
+ 	if (hw->cap.nix_rx_multicast && pfvf->use_mce_list) {
+-		*(u64 *)&action = 0x00;
++		*(u64 *)&action = 0;
+ 		action.op = NIX_RX_ACTIONOP_MCAST;
+ 		action.index = pfvf->mcast_mce_idx;
+ 	}
+diff --git a/drivers/net/ethernet/microchip/sparx5/Kconfig b/drivers/net/ethernet/microchip/sparx5/Kconfig
+index 7bdbb2d09a148..cc5e48e1bb4c3 100644
+--- a/drivers/net/ethernet/microchip/sparx5/Kconfig
++++ b/drivers/net/ethernet/microchip/sparx5/Kconfig
+@@ -4,6 +4,8 @@ config SPARX5_SWITCH
+ 	depends on HAS_IOMEM
+ 	depends on OF
+ 	depends on ARCH_SPARX5 || COMPILE_TEST
++	depends on PTP_1588_CLOCK_OPTIONAL
++	depends on BRIDGE || BRIDGE=n
+ 	select PHYLINK
+ 	select PHY_SPARX5_SERDES
+ 	select RESET_CONTROLLER
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c b/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c
+index 7436f62fa1525..174ad95e746a3 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c
+@@ -420,6 +420,8 @@ static int sparx5_fdma_tx_alloc(struct sparx5 *sparx5)
+ 			db_hw->dataptr = phys;
+ 			db_hw->status = 0;
+ 			db = devm_kzalloc(sparx5->dev, sizeof(*db), GFP_KERNEL);
++			if (!db)
++				return -ENOMEM;
+ 			db->cpu_addr = cpu_addr;
+ 			list_add_tail(&db->list, &tx->db_list);
+ 		}
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+index 7e296fa71b368..40fa5bce2ac2c 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+@@ -331,6 +331,9 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto err_out_deregister_lifs;
+ 	}
+ 
++	mod_timer(&ionic->watchdog_timer,
++		  round_jiffies(jiffies + ionic->watchdog_period));
++
+ 	return 0;
+ 
+ err_out_deregister_lifs:
+@@ -348,7 +351,6 @@ err_out_port_reset:
+ err_out_reset:
+ 	ionic_reset(ionic);
+ err_out_teardown:
+-	del_timer_sync(&ionic->watchdog_timer);
+ 	pci_clear_master(pdev);
+ 	/* Don't fail the probe for these errors, keep
+ 	 * the hw interface around for inspection
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+index d57e80d44c9df..2c7ce820a1fa7 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+@@ -122,9 +122,6 @@ int ionic_dev_setup(struct ionic *ionic)
+ 	idev->fw_generation = IONIC_FW_STS_F_GENERATION &
+ 			      ioread8(&idev->dev_info_regs->fw_status);
+ 
+-	mod_timer(&ionic->watchdog_timer,
+-		  round_jiffies(jiffies + ionic->watchdog_period));
+-
+ 	idev->db_pages = bar->vaddr;
+ 	idev->phy_db_pages = bar->bus_addr;
+ 
+@@ -132,6 +129,16 @@ int ionic_dev_setup(struct ionic *ionic)
+ }
+ 
+ /* Devcmd Interface */
++bool ionic_is_fw_running(struct ionic_dev *idev)
++{
++	u8 fw_status = ioread8(&idev->dev_info_regs->fw_status);
++
++	/* firmware is useful only if the running bit is set and
++	 * fw_status != 0xff (bad PCI read)
++	 */
++	return (fw_status != 0xff) && (fw_status & IONIC_FW_STS_F_RUNNING);
++}
++
+ int ionic_heartbeat_check(struct ionic *ionic)
+ {
+ 	struct ionic_dev *idev = &ionic->idev;
+@@ -155,13 +162,10 @@ do_check_time:
+ 		goto do_check_time;
+ 	}
+ 
+-	/* firmware is useful only if the running bit is set and
+-	 * fw_status != 0xff (bad PCI read)
+-	 * If fw_status is not ready don't bother with the generation.
+-	 */
+ 	fw_status = ioread8(&idev->dev_info_regs->fw_status);
+ 
+-	if (fw_status == 0xff || !(fw_status & IONIC_FW_STS_F_RUNNING)) {
++	/* If fw_status is not ready don't bother with the generation */
++	if (!ionic_is_fw_running(idev)) {
+ 		fw_status_ready = false;
+ 	} else {
+ 		fw_generation = fw_status & IONIC_FW_STS_F_GENERATION;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+index e5acf3bd62b2d..73b950ac12722 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+@@ -353,5 +353,6 @@ void ionic_q_rewind(struct ionic_queue *q, struct ionic_desc_info *start);
+ void ionic_q_service(struct ionic_queue *q, struct ionic_cq_info *cq_info,
+ 		     unsigned int stop_index);
+ int ionic_heartbeat_check(struct ionic *ionic);
++bool ionic_is_fw_running(struct ionic_dev *idev);
+ 
+ #endif /* _IONIC_DEV_H_ */
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index 875f4ec42efee..a0f9136b2d899 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -215,9 +215,13 @@ static void ionic_adminq_flush(struct ionic_lif *lif)
+ void ionic_adminq_netdev_err_print(struct ionic_lif *lif, u8 opcode,
+ 				   u8 status, int err)
+ {
++	const char *stat_str;
++
++	stat_str = (err == -ETIMEDOUT) ? "TIMEOUT" :
++					 ionic_error_to_str(status);
++
+ 	netdev_err(lif->netdev, "%s (%d) failed: %s (%d)\n",
+-		   ionic_opcode_to_str(opcode), opcode,
+-		   ionic_error_to_str(status), err);
++		   ionic_opcode_to_str(opcode), opcode, stat_str, err);
+ }
+ 
+ static int ionic_adminq_check_err(struct ionic_lif *lif,
+@@ -318,6 +322,7 @@ int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
+ 		if (do_msg && !test_bit(IONIC_LIF_F_FW_RESET, lif->state))
+ 			netdev_err(netdev, "Posting of %s (%d) failed: %d\n",
+ 				   name, ctx->cmd.cmd.opcode, err);
++		ctx->comp.comp.status = IONIC_RC_ERROR;
+ 		return err;
+ 	}
+ 
+@@ -336,6 +341,7 @@ int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
+ 			if (do_msg)
+ 				netdev_err(netdev, "%s (%d) interrupted, FW in reset\n",
+ 					   name, ctx->cmd.cmd.opcode);
++			ctx->comp.comp.status = IONIC_RC_ERROR;
+ 			return -ENXIO;
+ 		}
+ 
+@@ -370,10 +376,10 @@ int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *
+ 
+ static void ionic_dev_cmd_clean(struct ionic *ionic)
+ {
+-	union __iomem ionic_dev_cmd_regs *regs = ionic->idev.dev_cmd_regs;
++	struct ionic_dev *idev = &ionic->idev;
+ 
+-	iowrite32(0, &regs->doorbell);
+-	memset_io(&regs->cmd, 0, sizeof(regs->cmd));
++	iowrite32(0, &idev->dev_cmd_regs->doorbell);
++	memset_io(&idev->dev_cmd_regs->cmd, 0, sizeof(idev->dev_cmd_regs->cmd));
+ }
+ 
+ int ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds)
+@@ -540,6 +546,9 @@ int ionic_reset(struct ionic *ionic)
+ 	struct ionic_dev *idev = &ionic->idev;
+ 	int err;
+ 
++	if (!ionic_is_fw_running(idev))
++		return 0;
++
+ 	mutex_lock(&ionic->dev_cmd_lock);
+ 	ionic_dev_cmd_reset(idev);
+ 	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+@@ -612,15 +621,17 @@ int ionic_port_init(struct ionic *ionic)
+ int ionic_port_reset(struct ionic *ionic)
+ {
+ 	struct ionic_dev *idev = &ionic->idev;
+-	int err;
++	int err = 0;
+ 
+ 	if (!idev->port_info)
+ 		return 0;
+ 
+-	mutex_lock(&ionic->dev_cmd_lock);
+-	ionic_dev_cmd_port_reset(idev);
+-	err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+-	mutex_unlock(&ionic->dev_cmd_lock);
++	if (ionic_is_fw_running(idev)) {
++		mutex_lock(&ionic->dev_cmd_lock);
++		ionic_dev_cmd_port_reset(idev);
++		err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
++		mutex_unlock(&ionic->dev_cmd_lock);
++	}
+ 
+ 	dma_free_coherent(ionic->dev, idev->port_info_sz,
+ 			  idev->port_info, idev->port_info_pa);
+@@ -628,9 +639,6 @@ int ionic_port_reset(struct ionic *ionic)
+ 	idev->port_info = NULL;
+ 	idev->port_info_pa = 0;
+ 
+-	if (err)
+-		dev_err(ionic->dev, "Failed to reset port\n");
+-
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index 48cf4355bc47a..0848b5529d48a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -2984,12 +2984,16 @@ static int qed_iov_pre_update_vport(struct qed_hwfn *hwfn,
+ 	u8 mask = QED_ACCEPT_UCAST_UNMATCHED | QED_ACCEPT_MCAST_UNMATCHED;
+ 	struct qed_filter_accept_flags *flags = &params->accept_flags;
+ 	struct qed_public_vf_info *vf_info;
++	u16 tlv_mask;
++
++	tlv_mask = BIT(QED_IOV_VP_UPDATE_ACCEPT_PARAM) |
++		   BIT(QED_IOV_VP_UPDATE_ACCEPT_ANY_VLAN);
+ 
+ 	/* Untrusted VFs can't even be trusted to know that fact.
+ 	 * Simply indicate everything is configured fine, and trace
+ 	 * configuration 'behind their back'.
+ 	 */
+-	if (!(*tlvs & BIT(QED_IOV_VP_UPDATE_ACCEPT_PARAM)))
++	if (!(*tlvs & tlv_mask))
+ 		return 0;
+ 
+ 	vf_info = qed_iov_get_public_vf_info(hwfn, vfid, true);
+@@ -3006,6 +3010,13 @@ static int qed_iov_pre_update_vport(struct qed_hwfn *hwfn,
+ 			flags->tx_accept_filter &= ~mask;
+ 	}
+ 
++	if (params->update_accept_any_vlan_flg) {
++		vf_info->accept_any_vlan = params->accept_any_vlan;
++
++		if (vf_info->forced_vlan && !vf_info->is_trusted_configured)
++			params->accept_any_vlan = false;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -4719,6 +4730,7 @@ static int qed_get_vf_config(struct qed_dev *cdev,
+ 	tx_rate = vf_info->tx_rate;
+ 	ivi->max_tx_rate = tx_rate ? tx_rate : link.speed;
+ 	ivi->min_tx_rate = qed_iov_get_vf_min_rate(hwfn, vf_id);
++	ivi->trusted = vf_info->is_trusted_request;
+ 
+ 	return 0;
+ }
+@@ -5149,6 +5161,12 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
+ 
+ 		params.update_ctl_frame_check = 1;
+ 		params.mac_chk_en = !vf_info->is_trusted_configured;
++		params.update_accept_any_vlan_flg = 0;
++
++		if (vf_info->accept_any_vlan && vf_info->forced_vlan) {
++			params.update_accept_any_vlan_flg = 1;
++			params.accept_any_vlan = vf_info->accept_any_vlan;
++		}
+ 
+ 		if (vf_info->rx_accept_mode & mask) {
+ 			flags->update_rx_mode_config = 1;
+@@ -5164,13 +5182,20 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
+ 		if (!vf_info->is_trusted_configured) {
+ 			flags->rx_accept_filter &= ~mask;
+ 			flags->tx_accept_filter &= ~mask;
++			params.accept_any_vlan = false;
+ 		}
+ 
+ 		if (flags->update_rx_mode_config ||
+ 		    flags->update_tx_mode_config ||
+-		    params.update_ctl_frame_check)
++		    params.update_ctl_frame_check ||
++		    params.update_accept_any_vlan_flg) {
++			DP_VERBOSE(hwfn, QED_MSG_IOV,
++				   "vport update config for %s VF[abs 0x%x rel 0x%x]\n",
++				   vf_info->is_trusted_configured ? "trusted" : "untrusted",
++				   vf->abs_vf_id, vf->relative_vf_id);
+ 			qed_sp_vport_update(hwfn, &params,
+ 					    QED_SPQ_MODE_EBLOCK, NULL);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.h b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
+index f448e3dd6c8ba..6ee2493de1642 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
+@@ -62,6 +62,7 @@ struct qed_public_vf_info {
+ 	bool is_trusted_request;
+ 	u8 rx_accept_mode;
+ 	u8 tx_accept_mode;
++	bool accept_any_vlan;
+ };
+ 
+ struct qed_iov_vf_init_params {
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+index 5d79ee4370bcd..7519773eaca6e 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+@@ -51,7 +51,7 @@ static inline int qlcnic_dcb_get_hw_capability(struct qlcnic_dcb *dcb)
+ 	if (dcb && dcb->ops->get_hw_capability)
+ 		return dcb->ops->get_hw_capability(dcb);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline void qlcnic_dcb_free(struct qlcnic_dcb *dcb)
+@@ -65,7 +65,7 @@ static inline int qlcnic_dcb_attach(struct qlcnic_dcb *dcb)
+ 	if (dcb && dcb->ops->attach)
+ 		return dcb->ops->attach(dcb);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline int
+@@ -74,7 +74,7 @@ qlcnic_dcb_query_hw_capability(struct qlcnic_dcb *dcb, char *buf)
+ 	if (dcb && dcb->ops->query_hw_capability)
+ 		return dcb->ops->query_hw_capability(dcb, buf);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline void qlcnic_dcb_get_info(struct qlcnic_dcb *dcb)
+@@ -89,7 +89,7 @@ qlcnic_dcb_query_cee_param(struct qlcnic_dcb *dcb, char *buf, u8 type)
+ 	if (dcb && dcb->ops->query_cee_param)
+ 		return dcb->ops->query_cee_param(dcb, buf, type);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline int qlcnic_dcb_get_cee_cfg(struct qlcnic_dcb *dcb)
+@@ -97,7 +97,7 @@ static inline int qlcnic_dcb_get_cee_cfg(struct qlcnic_dcb *dcb)
+ 	if (dcb && dcb->ops->get_cee_cfg)
+ 		return dcb->ops->get_cee_cfg(dcb);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ static inline void qlcnic_dcb_aen_handler(struct qlcnic_dcb *dcb, void *msg)
+diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c
+index ad9029ae68485..77e5dffb558f4 100644
+--- a/drivers/net/ethernet/sun/sunhme.c
++++ b/drivers/net/ethernet/sun/sunhme.c
+@@ -3146,7 +3146,7 @@ static int happy_meal_pci_probe(struct pci_dev *pdev,
+ 	if (err) {
+ 		printk(KERN_ERR "happymeal(PCI): Cannot register net device, "
+ 		       "aborting.\n");
+-		goto err_out_iounmap;
++		goto err_out_free_coherent;
+ 	}
+ 
+ 	pci_set_drvdata(pdev, hp);
+@@ -3179,6 +3179,10 @@ static int happy_meal_pci_probe(struct pci_dev *pdev,
+ 
+ 	return 0;
+ 
++err_out_free_coherent:
++	dma_free_coherent(hp->dma_dev, PAGE_SIZE,
++			  hp->happy_block, hp->hblock_dvma);
++
+ err_out_iounmap:
+ 	iounmap(hp->gregs);
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
+index 158c8d3793f43..b5bae6324970a 100644
+--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
++++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
+@@ -364,11 +364,9 @@ int cpsw_ethtool_op_begin(struct net_device *ndev)
+ 	struct cpsw_common *cpsw = priv->cpsw;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(cpsw->dev);
+-	if (ret < 0) {
++	ret = pm_runtime_resume_and_get(cpsw->dev);
++	if (ret < 0)
+ 		cpsw_err(priv, drv, "ethtool begin failed %d\n", ret);
+-		pm_runtime_put_noidle(cpsw->dev);
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index f12eb5beaded4..0e066f2432dc2 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -857,46 +857,53 @@ static void axienet_recv(struct net_device *ndev)
+ 	while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
+ 		dma_addr_t phys;
+ 
+-		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
+-
+ 		/* Ensure we see complete descriptor update */
+ 		dma_rmb();
+-		phys = desc_get_phys_addr(lp, cur_p);
+-		dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
+-				 DMA_FROM_DEVICE);
+ 
+ 		skb = cur_p->skb;
+ 		cur_p->skb = NULL;
+-		length = cur_p->app4 & 0x0000FFFF;
+-
+-		skb_put(skb, length);
+-		skb->protocol = eth_type_trans(skb, ndev);
+-		/*skb_checksum_none_assert(skb);*/
+-		skb->ip_summed = CHECKSUM_NONE;
+-
+-		/* if we're doing Rx csum offload, set it up */
+-		if (lp->features & XAE_FEATURE_FULL_RX_CSUM) {
+-			csumstatus = (cur_p->app2 &
+-				      XAE_FULL_CSUM_STATUS_MASK) >> 3;
+-			if ((csumstatus == XAE_IP_TCP_CSUM_VALIDATED) ||
+-			    (csumstatus == XAE_IP_UDP_CSUM_VALIDATED)) {
+-				skb->ip_summed = CHECKSUM_UNNECESSARY;
++
++		/* skb could be NULL if a previous pass already received the
++		 * packet for this slot in the ring, but failed to refill it
++		 * with a newly allocated buffer. In this case, don't try to
++		 * receive it again.
++		 */
++		if (likely(skb)) {
++			length = cur_p->app4 & 0x0000FFFF;
++
++			phys = desc_get_phys_addr(lp, cur_p);
++			dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
++					 DMA_FROM_DEVICE);
++
++			skb_put(skb, length);
++			skb->protocol = eth_type_trans(skb, ndev);
++			/*skb_checksum_none_assert(skb);*/
++			skb->ip_summed = CHECKSUM_NONE;
++
++			/* if we're doing Rx csum offload, set it up */
++			if (lp->features & XAE_FEATURE_FULL_RX_CSUM) {
++				csumstatus = (cur_p->app2 &
++					      XAE_FULL_CSUM_STATUS_MASK) >> 3;
++				if (csumstatus == XAE_IP_TCP_CSUM_VALIDATED ||
++				    csumstatus == XAE_IP_UDP_CSUM_VALIDATED) {
++					skb->ip_summed = CHECKSUM_UNNECESSARY;
++				}
++			} else if ((lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) != 0 &&
++				   skb->protocol == htons(ETH_P_IP) &&
++				   skb->len > 64) {
++				skb->csum = be32_to_cpu(cur_p->app3 & 0xFFFF);
++				skb->ip_summed = CHECKSUM_COMPLETE;
+ 			}
+-		} else if ((lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) != 0 &&
+-			   skb->protocol == htons(ETH_P_IP) &&
+-			   skb->len > 64) {
+-			skb->csum = be32_to_cpu(cur_p->app3 & 0xFFFF);
+-			skb->ip_summed = CHECKSUM_COMPLETE;
+-		}
+ 
+-		netif_rx(skb);
++			netif_rx(skb);
+ 
+-		size += length;
+-		packets++;
++			size += length;
++			packets++;
++		}
+ 
+ 		new_skb = netdev_alloc_skb_ip_align(ndev, lp->max_frm_size);
+ 		if (!new_skb)
+-			return;
++			break;
+ 
+ 		phys = dma_map_single(ndev->dev.parent, new_skb->data,
+ 				      lp->max_frm_size,
+@@ -905,7 +912,7 @@ static void axienet_recv(struct net_device *ndev)
+ 			if (net_ratelimit())
+ 				netdev_err(ndev, "RX DMA mapping error\n");
+ 			dev_kfree_skb(new_skb);
+-			return;
++			break;
+ 		}
+ 		desc_set_phys_addr(lp, phys, cur_p);
+ 
+@@ -913,6 +920,11 @@ static void axienet_recv(struct net_device *ndev)
+ 		cur_p->status = 0;
+ 		cur_p->skb = new_skb;
+ 
++		/* Only update tail_p to mark this slot as usable after it has
++		 * been successfully refilled.
++		 */
++		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
++
+ 		if (++lp->rx_bd_ci >= lp->rx_bd_num)
+ 			lp->rx_bd_ci = 0;
+ 		cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
+diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c
+index 32eeed6728619..7e0f817d817f8 100644
+--- a/drivers/net/phy/at803x.c
++++ b/drivers/net/phy/at803x.c
+@@ -784,25 +784,7 @@ static int at803x_probe(struct phy_device *phydev)
+ 			return ret;
+ 	}
+ 
+-	/* Some bootloaders leave the fiber page selected.
+-	 * Switch to the copper page, as otherwise we read
+-	 * the PHY capabilities from the fiber side.
+-	 */
+-	if (phydev->drv->phy_id == ATH8031_PHY_ID) {
+-		phy_lock_mdio_bus(phydev);
+-		ret = at803x_write_page(phydev, AT803X_PAGE_COPPER);
+-		phy_unlock_mdio_bus(phydev);
+-		if (ret)
+-			goto err;
+-	}
+-
+ 	return 0;
+-
+-err:
+-	if (priv->vddio)
+-		regulator_disable(priv->vddio);
+-
+-	return ret;
+ }
+ 
+ static void at803x_remove(struct phy_device *phydev)
+@@ -912,6 +894,22 @@ static int at803x_config_init(struct phy_device *phydev)
+ {
+ 	int ret;
+ 
++	if (phydev->drv->phy_id == ATH8031_PHY_ID) {
++		/* Some bootloaders leave the fiber page selected.
++		 * Switch to the copper page, as otherwise we read
++		 * the PHY capabilities from the fiber side.
++		 */
++		phy_lock_mdio_bus(phydev);
++		ret = at803x_write_page(phydev, AT803X_PAGE_COPPER);
++		phy_unlock_mdio_bus(phydev);
++		if (ret)
++			return ret;
++
++		ret = at8031_pll_config(phydev);
++		if (ret < 0)
++			return ret;
++	}
++
+ 	/* The RX and TX delay default is:
+ 	 *   after HW reset: RX delay enabled and TX delay disabled
+ 	 *   after SW reset: RX delay enabled, while TX delay retains the
+@@ -941,12 +939,6 @@ static int at803x_config_init(struct phy_device *phydev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (phydev->drv->phy_id == ATH8031_PHY_ID) {
+-		ret = at8031_pll_config(phydev);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+ 	/* Ar803x extended next page bit is enabled by default. Cisco
+ 	 * multigig switches read this bit and attempt to negotiate 10Gbps
+ 	 * rates even if the next page bit is disabled. This is incorrect
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 3c683e0e40e9e..e36809aa6d300 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -11,6 +11,7 @@
+  */
+ 
+ #include "bcm-phy-lib.h"
++#include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/phy.h>
+ #include <linux/brcmphy.h>
+@@ -602,6 +603,26 @@ static int brcm_fet_config_init(struct phy_device *phydev)
+ 	if (err < 0)
+ 		return err;
+ 
++	/* The datasheet indicates the PHY needs up to 1us to complete a reset,
++	 * build some slack here.
++	 */
++	usleep_range(1000, 2000);
++
++	/* The PHY requires 65 MDC clock cycles to complete a write operation
++	 * and turnaround the line properly.
++	 *
++	 * We ignore -EIO here as the MDIO controller (e.g.: mdio-bcm-unimac)
++	 * may flag the lack of turn-around as a read failure. This is
++	 * particularly true with this combination since the MDIO controller
++	 * only used 64 MDC cycles. This is not a critical failure in this
++	 * specific case and it has no functional impact otherwise, so we let
++	 * that one go through. If there is a genuine bus error, the next read
++	 * of MII_BRCM_FET_INTREG will error out.
++	 */
++	err = phy_read(phydev, MII_BMCR);
++	if (err < 0 && err != -EIO)
++		return err;
++
+ 	reg = phy_read(phydev, MII_BRCM_FET_INTREG);
+ 	if (reg < 0)
+ 		return reg;
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 76ef4e019ca92..15fe0fa78092f 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1575,11 +1575,13 @@ static int lanphy_read_page_reg(struct phy_device *phydev, int page, u32 addr)
+ {
+ 	u32 data;
+ 
+-	phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
+-	phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
+-	phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
+-		  (page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC));
+-	data = phy_read(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA);
++	phy_lock_mdio_bus(phydev);
++	__phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
++	__phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
++	__phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
++		    (page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC));
++	data = __phy_read(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA);
++	phy_unlock_mdio_bus(phydev);
+ 
+ 	return data;
+ }
+@@ -1587,18 +1589,18 @@ static int lanphy_read_page_reg(struct phy_device *phydev, int page, u32 addr)
+ static int lanphy_write_page_reg(struct phy_device *phydev, int page, u16 addr,
+ 				 u16 val)
+ {
+-	phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
+-	phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
+-	phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
+-		  (page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC));
+-
+-	val = phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, val);
+-	if (val) {
++	phy_lock_mdio_bus(phydev);
++	__phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
++	__phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
++	__phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
++		    page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC);
++
++	val = __phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, val);
++	if (val != 0)
+ 		phydev_err(phydev, "Error: phy_write has returned error %d\n",
+ 			   val);
+-		return val;
+-	}
+-	return 0;
++	phy_unlock_mdio_bus(phydev);
++	return val;
+ }
+ 
+ static int lan8804_config_init(struct phy_device *phydev)
+diff --git a/drivers/net/usb/asix.h b/drivers/net/usb/asix.h
+index 2a1e31defe718..4334aafab59a4 100644
+--- a/drivers/net/usb/asix.h
++++ b/drivers/net/usb/asix.h
+@@ -192,8 +192,8 @@ extern const struct driver_info ax88172a_info;
+ /* ASIX specific flags */
+ #define FLAG_EEPROM_MAC		(1UL << 0)  /* init device MAC from eeprom */
+ 
+-int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+-		  u16 size, void *data, int in_pm);
++int __must_check asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
++			       u16 size, void *data, int in_pm);
+ 
+ int asix_write_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+ 		   u16 size, void *data, int in_pm);
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index 71682970be584..524805285019a 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -11,8 +11,8 @@
+ 
+ #define AX_HOST_EN_RETRIES	30
+ 
+-int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+-		  u16 size, void *data, int in_pm)
++int __must_check asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
++			       u16 size, void *data, int in_pm)
+ {
+ 	int ret;
+ 	int (*fn)(struct usbnet *, u8, u8, u16, u16, void *, u16);
+@@ -27,9 +27,12 @@ int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+ 	ret = fn(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 		 value, index, data, size);
+ 
+-	if (unlikely(ret < 0))
++	if (unlikely(ret < size)) {
++		ret = ret < 0 ? ret : -ENODATA;
++
+ 		netdev_warn(dev->net, "Failed to read reg index 0x%04x: %d\n",
+ 			    index, ret);
++	}
+ 
+ 	return ret;
+ }
+@@ -79,7 +82,7 @@ static int asix_check_host_enable(struct usbnet *dev, int in_pm)
+ 				    0, 0, 1, &smsr, in_pm);
+ 		if (ret == -ENODEV)
+ 			break;
+-		else if (ret < sizeof(smsr))
++		else if (ret < 0)
+ 			continue;
+ 		else if (smsr & AX_HOST_EN)
+ 			break;
+@@ -579,8 +582,12 @@ int asix_mdio_read_nopm(struct net_device *netdev, int phy_id, int loc)
+ 		return ret;
+ 	}
+ 
+-	asix_read_cmd(dev, AX_CMD_READ_MII_REG, phy_id,
+-		      (__u16)loc, 2, &res, 1);
++	ret = asix_read_cmd(dev, AX_CMD_READ_MII_REG, phy_id,
++			    (__u16)loc, 2, &res, 1);
++	if (ret < 0) {
++		mutex_unlock(&dev->phy_mutex);
++		return ret;
++	}
+ 	asix_set_hw_mii(dev, 1);
+ 	mutex_unlock(&dev->phy_mutex);
+ 
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 4514d35ef4c48..6b2fbdf4e0fde 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -755,7 +755,12 @@ static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	priv->phy_addr = ret;
+ 	priv->embd_phy = ((priv->phy_addr & 0x1f) == 0x10);
+ 
+-	asix_read_cmd(dev, AX_CMD_STATMNGSTS_REG, 0, 0, 1, &chipcode, 0);
++	ret = asix_read_cmd(dev, AX_CMD_STATMNGSTS_REG, 0, 0, 1, &chipcode, 0);
++	if (ret < 0) {
++		netdev_dbg(dev->net, "Failed to read STATMNGSTS_REG: %d\n", ret);
++		return ret;
++	}
++
+ 	chipcode &= AX_CHIPCODE_MASK;
+ 
+ 	ret = (chipcode == AX_AX88772_CHIPCODE) ? ax88772_hw_reset(dev, 0) :
+@@ -920,11 +925,21 @@ static int ax88178_reset(struct usbnet *dev)
+ 	int gpio0 = 0;
+ 	u32 phyid;
+ 
+-	asix_read_cmd(dev, AX_CMD_READ_GPIOS, 0, 0, 1, &status, 0);
++	ret = asix_read_cmd(dev, AX_CMD_READ_GPIOS, 0, 0, 1, &status, 0);
++	if (ret < 0) {
++		netdev_dbg(dev->net, "Failed to read GPIOS: %d\n", ret);
++		return ret;
++	}
++
+ 	netdev_dbg(dev->net, "GPIO Status: 0x%04x\n", status);
+ 
+ 	asix_write_cmd(dev, AX_CMD_WRITE_ENABLE, 0, 0, 0, NULL, 0);
+-	asix_read_cmd(dev, AX_CMD_READ_EEPROM, 0x0017, 0, 2, &eeprom, 0);
++	ret = asix_read_cmd(dev, AX_CMD_READ_EEPROM, 0x0017, 0, 2, &eeprom, 0);
++	if (ret < 0) {
++		netdev_dbg(dev->net, "Failed to read EEPROM: %d\n", ret);
++		return ret;
++	}
++
+ 	asix_write_cmd(dev, AX_CMD_WRITE_DISABLE, 0, 0, 0, NULL, 0);
+ 
+ 	netdev_dbg(dev->net, "EEPROM index 0x17 is 0x%04x\n", eeprom);
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 1de413b19e342..8084e7408c0ae 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include "queueing.h"
++#include <linux/skb_array.h>
+ 
+ struct multicore_worker __percpu *
+ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
+@@ -42,7 +43,7 @@ void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
+ {
+ 	free_percpu(queue->worker);
+ 	WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
+-	ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL);
++	ptr_ring_cleanup(&queue->ring, purge ? __skb_array_destroy_skb : NULL);
+ }
+ 
+ #define NEXT(skb) ((skb)->prev)
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index 6f07b949cb81d..0414d7a6ce741 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -160,6 +160,7 @@ out:
+ 	rcu_read_unlock_bh();
+ 	return ret;
+ #else
++	kfree_skb(skb);
+ 	return -EAFNOSUPPORT;
+ #endif
+ }
+@@ -241,7 +242,7 @@ int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
+ 		endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
+ 		endpoint->src4.s_addr = ip_hdr(skb)->daddr;
+ 		endpoint->src_if4 = skb->skb_iif;
+-	} else if (skb->protocol == htons(ETH_P_IPV6)) {
++	} else if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
+ 		endpoint->addr6.sin6_family = AF_INET6;
+ 		endpoint->addr6.sin6_port = udp_hdr(skb)->source;
+ 		endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr;
+@@ -284,7 +285,7 @@ void wg_socket_set_peer_endpoint(struct wg_peer *peer,
+ 		peer->endpoint.addr4 = endpoint->addr4;
+ 		peer->endpoint.src4 = endpoint->src4;
+ 		peer->endpoint.src_if4 = endpoint->src_if4;
+-	} else if (endpoint->addr.sa_family == AF_INET6) {
++	} else if (IS_ENABLED(CONFIG_IPV6) && endpoint->addr.sa_family == AF_INET6) {
+ 		peer->endpoint.addr6 = endpoint->addr6;
+ 		peer->endpoint.src6 = endpoint->src6;
+ 	} else {
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index 9513ab696fff1..f79dd9a716906 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -1556,11 +1556,11 @@ static int ath10k_setup_msa_resources(struct ath10k *ar, u32 msa_size)
+ 	node = of_parse_phandle(dev->of_node, "memory-region", 0);
+ 	if (node) {
+ 		ret = of_address_to_resource(node, 0, &r);
++		of_node_put(node);
+ 		if (ret) {
+ 			dev_err(dev, "failed to resolve msa fixed region\n");
+ 			return ret;
+ 		}
+-		of_node_put(node);
+ 
+ 		ar->msa.paddr = r.start;
+ 		ar->msa.mem_size = resource_size(&r);
+diff --git a/drivers/net/wireless/ath/ath10k/wow.c b/drivers/net/wireless/ath/ath10k/wow.c
+index 7d65c115669fe..20b9aa8ddf7d5 100644
+--- a/drivers/net/wireless/ath/ath10k/wow.c
++++ b/drivers/net/wireless/ath/ath10k/wow.c
+@@ -337,14 +337,15 @@ static int ath10k_vif_wow_set_wakeups(struct ath10k_vif *arvif,
+ 			if (patterns[i].mask[j / 8] & BIT(j % 8))
+ 				bitmask[j] = 0xff;
+ 		old_pattern.mask = bitmask;
+-		new_pattern = old_pattern;
+ 
+ 		if (ar->wmi.rx_decap_mode == ATH10K_HW_TXRX_NATIVE_WIFI) {
+-			if (patterns[i].pkt_offset < ETH_HLEN)
++			if (patterns[i].pkt_offset < ETH_HLEN) {
+ 				ath10k_wow_convert_8023_to_80211(&new_pattern,
+ 								 &old_pattern);
+-			else
++			} else {
++				new_pattern = old_pattern;
+ 				new_pattern.pkt_offset += WOW_HDR_LEN - ETH_HLEN;
++			}
+ 		}
+ 
+ 		if (WARN_ON(new_pattern.pattern_len > WOW_MAX_PATTERN_SIZE))
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 621372c568d2c..38102617c3427 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2596,13 +2596,11 @@ free_out:
+ static void ath11k_dp_rx_process_received_packets(struct ath11k_base *ab,
+ 						  struct napi_struct *napi,
+ 						  struct sk_buff_head *msdu_list,
+-						  int *quota, int ring_id)
++						  int *quota, int mac_id)
+ {
+-	struct ath11k_skb_rxcb *rxcb;
+ 	struct sk_buff *msdu;
+ 	struct ath11k *ar;
+ 	struct ieee80211_rx_status rx_status = {0};
+-	u8 mac_id;
+ 	int ret;
+ 
+ 	if (skb_queue_empty(msdu_list))
+@@ -2610,20 +2608,20 @@ static void ath11k_dp_rx_process_received_packets(struct ath11k_base *ab,
+ 
+ 	rcu_read_lock();
+ 
+-	while (*quota && (msdu = __skb_dequeue(msdu_list))) {
+-		rxcb = ATH11K_SKB_RXCB(msdu);
+-		mac_id = rxcb->mac_id;
+-		ar = ab->pdevs[mac_id].ar;
+-		if (!rcu_dereference(ab->pdevs_active[mac_id])) {
+-			dev_kfree_skb_any(msdu);
+-			continue;
+-		}
++	ar = ab->pdevs[mac_id].ar;
++	if (!rcu_dereference(ab->pdevs_active[mac_id])) {
++		__skb_queue_purge(msdu_list);
++		rcu_read_unlock();
++		return;
++	}
+ 
+-		if (test_bit(ATH11K_CAC_RUNNING, &ar->dev_flags)) {
+-			dev_kfree_skb_any(msdu);
+-			continue;
+-		}
++	if (test_bit(ATH11K_CAC_RUNNING, &ar->dev_flags)) {
++		__skb_queue_purge(msdu_list);
++		rcu_read_unlock();
++		return;
++	}
+ 
++	while ((msdu = __skb_dequeue(msdu_list))) {
+ 		ret = ath11k_dp_rx_process_msdu(ar, msdu, msdu_list, &rx_status);
+ 		if (ret) {
+ 			ath11k_dbg(ab, ATH11K_DBG_DATA,
+@@ -2645,7 +2643,7 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ 	struct ath11k_dp *dp = &ab->dp;
+ 	struct dp_rxdma_ring *rx_ring;
+ 	int num_buffs_reaped[MAX_RADIOS] = {0};
+-	struct sk_buff_head msdu_list;
++	struct sk_buff_head msdu_list[MAX_RADIOS];
+ 	struct ath11k_skb_rxcb *rxcb;
+ 	int total_msdu_reaped = 0;
+ 	struct hal_srng *srng;
+@@ -2654,25 +2652,26 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ 	bool done = false;
+ 	int buf_id, mac_id;
+ 	struct ath11k *ar;
+-	u32 *rx_desc;
++	struct hal_reo_dest_ring *desc;
++	enum hal_reo_dest_ring_push_reason push_reason;
++	u32 cookie;
+ 	int i;
+ 
+-	__skb_queue_head_init(&msdu_list);
++	for (i = 0; i < MAX_RADIOS; i++)
++		__skb_queue_head_init(&msdu_list[i]);
+ 
+ 	srng = &ab->hal.srng_list[dp->reo_dst_ring[ring_id].ring_id];
+ 
+ 	spin_lock_bh(&srng->lock);
+ 
+-	ath11k_hal_srng_access_begin(ab, srng);
+-
+ try_again:
+-	while ((rx_desc = ath11k_hal_srng_dst_get_next_entry(ab, srng))) {
+-		struct hal_reo_dest_ring desc = *(struct hal_reo_dest_ring *)rx_desc;
+-		enum hal_reo_dest_ring_push_reason push_reason;
+-		u32 cookie;
++	ath11k_hal_srng_access_begin(ab, srng);
+ 
++	while (likely(desc =
++	      (struct hal_reo_dest_ring *)ath11k_hal_srng_dst_get_next_entry(ab,
++									     srng))) {
+ 		cookie = FIELD_GET(BUFFER_ADDR_INFO1_SW_COOKIE,
+-				   desc.buf_addr_info.info1);
++				   desc->buf_addr_info.info1);
+ 		buf_id = FIELD_GET(DP_RXDMA_BUF_COOKIE_BUF_ID,
+ 				   cookie);
+ 		mac_id = FIELD_GET(DP_RXDMA_BUF_COOKIE_PDEV_ID, cookie);
+@@ -2700,7 +2699,7 @@ try_again:
+ 		total_msdu_reaped++;
+ 
+ 		push_reason = FIELD_GET(HAL_REO_DEST_RING_INFO0_PUSH_REASON,
+-					desc.info0);
++					desc->info0);
+ 		if (push_reason !=
+ 		    HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION) {
+ 			dev_kfree_skb_any(msdu);
+@@ -2708,21 +2707,21 @@ try_again:
+ 			continue;
+ 		}
+ 
+-		rxcb->is_first_msdu = !!(desc.rx_msdu_info.info0 &
++		rxcb->is_first_msdu = !!(desc->rx_msdu_info.info0 &
+ 					 RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU);
+-		rxcb->is_last_msdu = !!(desc.rx_msdu_info.info0 &
++		rxcb->is_last_msdu = !!(desc->rx_msdu_info.info0 &
+ 					RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU);
+-		rxcb->is_continuation = !!(desc.rx_msdu_info.info0 &
++		rxcb->is_continuation = !!(desc->rx_msdu_info.info0 &
+ 					   RX_MSDU_DESC_INFO0_MSDU_CONTINUATION);
+ 		rxcb->peer_id = FIELD_GET(RX_MPDU_DESC_META_DATA_PEER_ID,
+-					  desc.rx_mpdu_info.meta_data);
++					  desc->rx_mpdu_info.meta_data);
+ 		rxcb->seq_no = FIELD_GET(RX_MPDU_DESC_INFO0_SEQ_NUM,
+-					 desc.rx_mpdu_info.info0);
++					 desc->rx_mpdu_info.info0);
+ 		rxcb->tid = FIELD_GET(HAL_REO_DEST_RING_INFO0_RX_QUEUE_NUM,
+-				      desc.info0);
++				      desc->info0);
+ 
+ 		rxcb->mac_id = mac_id;
+-		__skb_queue_tail(&msdu_list, msdu);
++		__skb_queue_tail(&msdu_list[mac_id], msdu);
+ 
+ 		if (total_msdu_reaped >= quota && !rxcb->is_continuation) {
+ 			done = true;
+@@ -2752,16 +2751,15 @@ try_again:
+ 		if (!num_buffs_reaped[i])
+ 			continue;
+ 
++		ath11k_dp_rx_process_received_packets(ab, napi, &msdu_list[i],
++						      &quota, i);
++
+ 		ar = ab->pdevs[i].ar;
+ 		rx_ring = &ar->dp.rx_refill_buf_ring;
+ 
+ 		ath11k_dp_rxbufs_replenish(ab, i, rx_ring, num_buffs_reaped[i],
+ 					   ab->hw_params.hal_params->rx_buf_rbm);
+ 	}
+-
+-	ath11k_dp_rx_process_received_packets(ab, napi, &msdu_list,
+-					      &quota, ring_id);
+-
+ exit:
+ 	return budget - quota;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index a7400ade7a0cf..8d8eae438567c 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -2039,6 +2039,9 @@ static void ath11k_peer_assoc_h_he_6ghz(struct ath11k *ar,
+ 	if (!arg->he_flag || band != NL80211_BAND_6GHZ || !sta->he_6ghz_capa.capa)
+ 		return;
+ 
++	if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
++		arg->bw_40 = true;
++
+ 	if (sta->bandwidth == IEEE80211_STA_RX_BW_80)
+ 		arg->bw_80 = true;
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index 510e61e97dbcb..994ec48b2f669 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -30,6 +30,7 @@ static int htc_issue_send(struct htc_target *target, struct sk_buff* skb,
+ 	hdr->endpoint_id = epid;
+ 	hdr->flags = flags;
+ 	hdr->payload_len = cpu_to_be16(len);
++	memset(hdr->control, 0, sizeof(hdr->control));
+ 
+ 	status = target->hif->send(target->hif_dev, endpoint->ul_pipeid, skb);
+ 
+@@ -272,6 +273,10 @@ int htc_connect_service(struct htc_target *target,
+ 	conn_msg->dl_pipeid = endpoint->dl_pipeid;
+ 	conn_msg->ul_pipeid = endpoint->ul_pipeid;
+ 
++	/* To prevent infoleak */
++	conn_msg->svc_meta_len = 0;
++	conn_msg->pad = 0;
++
+ 	ret = htc_issue_send(target, skb, skb->len, 0, ENDPOINT0);
+ 	if (ret)
+ 		goto err;
+diff --git a/drivers/net/wireless/ath/carl9170/main.c b/drivers/net/wireless/ath/carl9170/main.c
+index cca3b086aa701..a87476383c540 100644
+--- a/drivers/net/wireless/ath/carl9170/main.c
++++ b/drivers/net/wireless/ath/carl9170/main.c
+@@ -1915,7 +1915,7 @@ static int carl9170_parse_eeprom(struct ar9170 *ar)
+ 		WARN_ON(!(tx_streams >= 1 && tx_streams <=
+ 			IEEE80211_HT_MCS_TX_MAX_STREAMS));
+ 
+-		tx_params = (tx_streams - 1) <<
++		tx_params |= (tx_streams - 1) <<
+ 			    IEEE80211_HT_MCS_TX_MAX_STREAMS_SHIFT;
+ 
+ 		carl9170_band_2GHz.ht_cap.mcs.tx_params |= tx_params;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+index d99140960a820..dcbe55b56e437 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+@@ -207,6 +207,8 @@ static int brcmf_init_nvram_parser(struct nvram_parser *nvp,
+ 		size = BRCMF_FW_MAX_NVRAM_SIZE;
+ 	else
+ 		size = data_len;
++	/* Add space for properties we may add */
++	size += strlen(BRCMF_FW_DEFAULT_BOARDREV) + 1;
+ 	/* Alloc for extra 0 byte + roundup by 4 + length field */
+ 	size += 1 + 3 + sizeof(u32);
+ 	nvp->nvram = kzalloc(size, GFP_KERNEL);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 8b149996fc000..3ff4997e1c97a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -12,6 +12,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/bcma/bcma.h>
+ #include <linux/sched.h>
++#include <linux/io.h>
+ #include <asm/unaligned.h>
+ 
+ #include <soc.h>
+@@ -59,6 +60,13 @@ BRCMF_FW_DEF(4366B, "brcmfmac4366b-pcie");
+ BRCMF_FW_DEF(4366C, "brcmfmac4366c-pcie");
+ BRCMF_FW_DEF(4371, "brcmfmac4371-pcie");
+ 
++/* firmware config files */
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.txt");
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.txt");
++
++/* per-board firmware binaries */
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.bin");
++
+ static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
+ 	BRCMF_FW_ENTRY(BRCM_CC_43602_CHIP_ID, 0xFFFFFFFF, 43602),
+ 	BRCMF_FW_ENTRY(BRCM_CC_43465_CHIP_ID, 0xFFFFFFF0, 4366C),
+@@ -447,47 +455,6 @@ brcmf_pcie_write_ram32(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+ }
+ 
+ 
+-static void
+-brcmf_pcie_copy_mem_todev(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+-			  void *srcaddr, u32 len)
+-{
+-	void __iomem *address = devinfo->tcm + mem_offset;
+-	__le32 *src32;
+-	__le16 *src16;
+-	u8 *src8;
+-
+-	if (((ulong)address & 4) || ((ulong)srcaddr & 4) || (len & 4)) {
+-		if (((ulong)address & 2) || ((ulong)srcaddr & 2) || (len & 2)) {
+-			src8 = (u8 *)srcaddr;
+-			while (len) {
+-				iowrite8(*src8, address);
+-				address++;
+-				src8++;
+-				len--;
+-			}
+-		} else {
+-			len = len / 2;
+-			src16 = (__le16 *)srcaddr;
+-			while (len) {
+-				iowrite16(le16_to_cpu(*src16), address);
+-				address += 2;
+-				src16++;
+-				len--;
+-			}
+-		}
+-	} else {
+-		len = len / 4;
+-		src32 = (__le32 *)srcaddr;
+-		while (len) {
+-			iowrite32(le32_to_cpu(*src32), address);
+-			address += 4;
+-			src32++;
+-			len--;
+-		}
+-	}
+-}
+-
+-
+ static void
+ brcmf_pcie_copy_dev_tomem(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+ 			  void *dstaddr, u32 len)
+@@ -1348,6 +1315,18 @@ static void brcmf_pcie_down(struct device *dev)
+ {
+ }
+ 
++static int brcmf_pcie_preinit(struct device *dev)
++{
++	struct brcmf_bus *bus_if = dev_get_drvdata(dev);
++	struct brcmf_pciedev *buspub = bus_if->bus_priv.pcie;
++
++	brcmf_dbg(PCIE, "Enter\n");
++
++	brcmf_pcie_intr_enable(buspub->devinfo);
++	brcmf_pcie_hostready(buspub->devinfo);
++
++	return 0;
++}
+ 
+ static int brcmf_pcie_tx(struct device *dev, struct sk_buff *skb)
+ {
+@@ -1456,6 +1435,7 @@ static int brcmf_pcie_reset(struct device *dev)
+ }
+ 
+ static const struct brcmf_bus_ops brcmf_pcie_bus_ops = {
++	.preinit = brcmf_pcie_preinit,
+ 	.txdata = brcmf_pcie_tx,
+ 	.stop = brcmf_pcie_down,
+ 	.txctl = brcmf_pcie_tx_ctlpkt,
+@@ -1563,8 +1543,8 @@ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ 		return err;
+ 
+ 	brcmf_dbg(PCIE, "Download FW %s\n", devinfo->fw_name);
+-	brcmf_pcie_copy_mem_todev(devinfo, devinfo->ci->rambase,
+-				  (void *)fw->data, fw->size);
++	memcpy_toio(devinfo->tcm + devinfo->ci->rambase,
++		    (void *)fw->data, fw->size);
+ 
+ 	resetintr = get_unaligned_le32(fw->data);
+ 	release_firmware(fw);
+@@ -1578,7 +1558,7 @@ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ 		brcmf_dbg(PCIE, "Download NVRAM %s\n", devinfo->nvram_name);
+ 		address = devinfo->ci->rambase + devinfo->ci->ramsize -
+ 			  nvram_len;
+-		brcmf_pcie_copy_mem_todev(devinfo, address, nvram, nvram_len);
++		memcpy_toio(devinfo->tcm + address, nvram, nvram_len);
+ 		brcmf_fw_nvram_free(nvram);
+ 	} else {
+ 		brcmf_dbg(PCIE, "No matching NVRAM file found %s\n",
+@@ -1777,6 +1757,8 @@ static void brcmf_pcie_setup(struct device *dev, int ret,
+ 	ret = brcmf_chip_get_raminfo(devinfo->ci);
+ 	if (ret) {
+ 		brcmf_err(bus, "Failed to get RAM info\n");
++		release_firmware(fw);
++		brcmf_fw_nvram_free(nvram);
+ 		goto fail;
+ 	}
+ 
+@@ -1826,9 +1808,6 @@ static void brcmf_pcie_setup(struct device *dev, int ret,
+ 
+ 	init_waitqueue_head(&devinfo->mbdata_resp_wait);
+ 
+-	brcmf_pcie_intr_enable(devinfo);
+-	brcmf_pcie_hostready(devinfo);
+-
+ 	ret = brcmf_attach(&devinfo->pdev->dev);
+ 	if (ret)
+ 		goto fail;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 8effeb7a7269b..5d156e591b35c 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -629,7 +629,6 @@ BRCMF_FW_CLM_DEF(43752, "brcmfmac43752-sdio");
+ 
+ /* firmware config files */
+ MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-sdio.*.txt");
+-MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.txt");
+ 
+ /* per-board firmware binaries */
+ MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-sdio.*.bin");
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
+index 754876cd27ce8..e8bd4f0e3d2dc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
+@@ -299,7 +299,7 @@ static int iwlagn_mac_start(struct ieee80211_hw *hw)
+ 
+ 	priv->is_open = 1;
+ 	IWL_DEBUG_MAC80211(priv, "leave\n");
+-	return 0;
++	return ret;
+ }
+ 
+ static void iwlagn_mac_stop(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index a39013c401c92..e2001e88a4b4b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -1562,8 +1562,6 @@ iwl_dump_ini_dbgi_sram_iter(struct iwl_fw_runtime *fwrt,
+ 		return -EBUSY;
+ 
+ 	range->range_data_size = reg->dev_addr.size;
+-	iwl_write_prph_no_grab(fwrt->trans, DBGI_SRAM_TARGET_ACCESS_CFG,
+-			       DBGI_SRAM_TARGET_ACCESS_CFG_RESET_ADDRESS_MSK);
+ 	for (i = 0; i < (le32_to_cpu(reg->dev_addr.size) / 4); i++) {
+ 		prph_data = iwl_read_prph(fwrt->trans, (i % 2) ?
+ 					  DBGI_SRAM_TARGET_ACCESS_RDATA_MSB :
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 7ab98b419cc1f..d412b6d0b28e2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -849,11 +849,18 @@ static void iwl_dbg_tlv_apply_config(struct iwl_fw_runtime *fwrt,
+ 		case IWL_FW_INI_CONFIG_SET_TYPE_DBGC_DRAM_ADDR: {
+ 			struct iwl_dbgc1_info dram_info = {};
+ 			struct iwl_dram_data *frags = &fwrt->trans->dbg.fw_mon_ini[1].frags[0];
+-			__le64 dram_base_addr = cpu_to_le64(frags->physical);
+-			__le32 dram_size = cpu_to_le32(frags->size);
+-			u64  dram_addr = le64_to_cpu(dram_base_addr);
++			__le64 dram_base_addr;
++			__le32 dram_size;
++			u64 dram_addr;
+ 			u32 ret;
+ 
++			if (!frags)
++				break;
++
++			dram_base_addr = cpu_to_le64(frags->physical);
++			dram_size = cpu_to_le32(frags->size);
++			dram_addr = le64_to_cpu(dram_base_addr);
++
+ 			IWL_DEBUG_FW(fwrt, "WRT: dram_base_addr 0x%016llx, dram_size 0x%x\n",
+ 				     dram_base_addr, dram_size);
+ 			IWL_DEBUG_FW(fwrt, "WRT: config_list->addr_offset: %u\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+index a84ab02cf9d7e..7d3fedfdb3487 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+@@ -356,8 +356,6 @@
+ #define WFPM_GP2			0xA030B4
+ 
+ /* DBGI SRAM Register details */
+-#define DBGI_SRAM_TARGET_ACCESS_CFG			0x00A2E14C
+-#define DBGI_SRAM_TARGET_ACCESS_CFG_RESET_ADDRESS_MSK	0x10000
+ #define DBGI_SRAM_TARGET_ACCESS_RDATA_LSB		0x00A2E154
+ #define DBGI_SRAM_TARGET_ACCESS_RDATA_MSB		0x00A2E158
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index a19f646a324f4..75e9776001b89 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2566,7 +2566,9 @@ static int iwl_mvm_d3_test_open(struct inode *inode, struct file *file)
+ 
+ 	/* start pseudo D3 */
+ 	rtnl_lock();
++	wiphy_lock(mvm->hw->wiphy);
+ 	err = __iwl_mvm_suspend(mvm->hw, mvm->hw->wiphy->wowlan_config, true);
++	wiphy_unlock(mvm->hw->wiphy);
+ 	rtnl_unlock();
+ 	if (err > 0)
+ 		err = -EINVAL;
+@@ -2622,7 +2624,9 @@ static int iwl_mvm_d3_test_release(struct inode *inode, struct file *file)
+ 	iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt);
+ 
+ 	rtnl_lock();
++	wiphy_lock(mvm->hw->wiphy);
+ 	__iwl_mvm_resume(mvm, true);
++	wiphy_unlock(mvm->hw->wiphy);
+ 	rtnl_unlock();
+ 
+ 	iwl_mvm_resume_tcm(mvm);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 58d5395acf73c..6d17d7a71182a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1553,8 +1553,10 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ 	while (!sband && i < NUM_NL80211_BANDS)
+ 		sband = mvm->hw->wiphy->bands[i++];
+ 
+-	if (WARN_ON_ONCE(!sband))
++	if (WARN_ON_ONCE(!sband)) {
++		ret = -ENODEV;
+ 		goto error;
++	}
+ 
+ 	chan = &sband->channels[0];
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 364f6aefae81d..5b7d89e33431c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -237,7 +237,8 @@ static void iwl_mvm_rx_thermal_dual_chain_req(struct iwl_mvm *mvm,
+ 	 */
+ 	mvm->fw_static_smps_request =
+ 		req->event == cpu_to_le32(THERMAL_DUAL_CHAIN_REQ_DISABLE);
+-	ieee80211_iterate_interfaces(mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
++	ieee80211_iterate_interfaces(mvm->hw,
++				     IEEE80211_IFACE_SKIP_SDATA_NOT_IN_DRIVER,
+ 				     iwl_mvm_intf_dual_chain_req, NULL);
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 0f96d422d6e06..bbd394a7b5314 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -272,15 +272,14 @@ static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm,
+ 
+ 	/* info->control is only relevant for non HW rate control */
+ 	if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {
+-		struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+-
+ 		/* HT rate doesn't make sense for a non data frame */
+ 		WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS &&
+ 			  !ieee80211_is_data(fc),
+ 			  "Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n",
+ 			  info->control.rates[0].flags,
+ 			  info->control.rates[0].idx,
+-			  le16_to_cpu(fc), sta ? mvmsta->sta_state : -1);
++			  le16_to_cpu(fc),
++			  sta ? iwl_mvm_sta_from_mac80211(sta)->sta_state : -1);
+ 
+ 		rate_idx = info->control.rates[0].idx;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 3b38c426575bc..39cd7dfea78f7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1084,7 +1084,7 @@ static const struct iwl_causes_list causes_list_pre_bz[] = {
+ };
+ 
+ static const struct iwl_causes_list causes_list_bz[] = {
+-	{MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ,	CSR_MSIX_HW_INT_MASK_AD, 0x29},
++	{MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ,	CSR_MSIX_HW_INT_MASK_AD, 0x15},
+ };
+ 
+ static void iwl_pcie_map_list(struct iwl_trans *trans,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/main.c b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+index 7ac4cd247a730..689fcee282b3e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+@@ -623,6 +623,9 @@ mt7603_sta_rate_tbl_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	struct ieee80211_sta_rates *sta_rates = rcu_dereference(sta->rates);
+ 	int i;
+ 
++	if (!sta_rates)
++		return;
++
+ 	spin_lock_bh(&dev->mt76.lock);
+ 	for (i = 0; i < ARRAY_SIZE(msta->rates); i++) {
+ 		msta->rates[i].idx = sta_rates->rate[i].idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index c79abce543f3b..ed8f7bc18977d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -2000,6 +2000,14 @@ void mt7615_pm_power_save_work(struct work_struct *work)
+ 	    test_bit(MT76_HW_SCHED_SCANNING, &dev->mphy.state))
+ 		goto out;
+ 
++	if (mutex_is_locked(&dev->mt76.mutex))
++		/* if mt76 mutex is held we should not put the device
++		 * to sleep since we are currently accessing device
++		 * register map. We need to wait for the next power_save
++		 * trigger.
++		 */
++		goto out;
++
+ 	if (time_is_after_jiffies(dev->pm.last_activity + delta)) {
+ 		delta = dev->pm.last_activity + delta - jiffies;
+ 		goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 1fdcada157d6b..18a320d00d8d2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -686,6 +686,9 @@ static void mt7615_sta_rate_tbl_update(struct ieee80211_hw *hw,
+ 	struct ieee80211_sta_rates *sta_rates = rcu_dereference(sta->rates);
+ 	int i;
+ 
++	if (!sta_rates)
++		return;
++
+ 	spin_lock_bh(&dev->mt76.lock);
+ 	for (i = 0; i < ARRAY_SIZE(msta->rates); i++) {
+ 		msta->rates[i].idx = sta_rates->rate[i].idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 1fb8432aa27ca..6235ed3a332b2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -313,7 +313,7 @@ mt76_connac_mcu_alloc_wtbl_req(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ 	}
+ 
+ 	if (sta_hdr)
+-		sta_hdr->len = cpu_to_le16(sizeof(hdr));
++		le16_add_cpu(&sta_hdr->len, sizeof(hdr));
+ 
+ 	return skb_put_data(nskb, &hdr, sizeof(hdr));
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index acb9a286d3546..43a8b3651dc2d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -598,7 +598,8 @@ enum {
+ 	MCU_CE_CMD_SET_BSS_CONNECTED = 0x16,
+ 	MCU_CE_CMD_SET_BSS_ABORT = 0x17,
+ 	MCU_CE_CMD_CANCEL_HW_SCAN = 0x1b,
+-	MCU_CE_CMD_SET_ROC = 0x1d,
++	MCU_CE_CMD_SET_ROC = 0x1c,
++	MCU_CE_CMD_SET_EDCA_PARMS = 0x1d,
+ 	MCU_CE_CMD_SET_P2P_OPPPS = 0x33,
+ 	MCU_CE_CMD_SET_RATE_TX_POWER = 0x5d,
+ 	MCU_CE_CMD_SCHED_SCAN_ENABLE = 0x61,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 38d66411444a1..a1da514ca2564 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1412,7 +1412,6 @@ mt7915_mac_add_txs_skb(struct mt7915_dev *dev, struct mt76_wcid *wcid, int pid,
+ 		break;
+ 	case MT_PHY_TYPE_HT:
+ 	case MT_PHY_TYPE_HT_GF:
+-		rate.mcs += (rate.nss - 1) * 8;
+ 		if (rate.mcs > 31)
+ 			goto out;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 8215b3d79bbdc..5ea82f05fd916 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -211,24 +211,12 @@ mt7915_mcu_get_sta_nss(u16 mcs_map)
+ 
+ static void
+ mt7915_mcu_set_sta_he_mcs(struct ieee80211_sta *sta, __le16 *he_mcs,
+-			  const u16 *mask)
++			  u16 mcs_map)
+ {
+ 	struct mt7915_sta *msta = (struct mt7915_sta *)sta->drv_priv;
+-	struct cfg80211_chan_def *chandef = &msta->vif->phy->mt76->chandef;
++	enum nl80211_band band = msta->vif->phy->mt76->chandef.chan->band;
++	const u16 *mask = msta->vif->bitrate_mask.control[band].he_mcs;
+ 	int nss, max_nss = sta->rx_nss > 3 ? 4 : sta->rx_nss;
+-	u16 mcs_map;
+-
+-	switch (chandef->width) {
+-	case NL80211_CHAN_WIDTH_80P80:
+-		mcs_map = le16_to_cpu(sta->he_cap.he_mcs_nss_supp.rx_mcs_80p80);
+-		break;
+-	case NL80211_CHAN_WIDTH_160:
+-		mcs_map = le16_to_cpu(sta->he_cap.he_mcs_nss_supp.rx_mcs_160);
+-		break;
+-	default:
+-		mcs_map = le16_to_cpu(sta->he_cap.he_mcs_nss_supp.rx_mcs_80);
+-		break;
+-	}
+ 
+ 	for (nss = 0; nss < max_nss; nss++) {
+ 		int mcs;
+@@ -1263,8 +1251,11 @@ mt7915_mcu_wtbl_generic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	generic = (struct wtbl_generic *)tlv;
+ 
+ 	if (sta) {
++		if (vif->type == NL80211_IFTYPE_STATION)
++			generic->partial_aid = cpu_to_le16(vif->bss_conf.aid);
++		else
++			generic->partial_aid = cpu_to_le16(sta->aid);
+ 		memcpy(generic->peer_addr, sta->addr, ETH_ALEN);
+-		generic->partial_aid = cpu_to_le16(sta->aid);
+ 		generic->muar_idx = mvif->omac_idx;
+ 		generic->qos = sta->wme;
+ 	} else {
+@@ -1318,12 +1309,15 @@ mt7915_mcu_sta_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	case NL80211_IFTYPE_MESH_POINT:
+ 	case NL80211_IFTYPE_AP:
+ 		basic->conn_type = cpu_to_le32(CONNECTION_INFRA_STA);
++		basic->aid = cpu_to_le16(sta->aid);
+ 		break;
+ 	case NL80211_IFTYPE_STATION:
+ 		basic->conn_type = cpu_to_le32(CONNECTION_INFRA_AP);
++		basic->aid = cpu_to_le16(vif->bss_conf.aid);
+ 		break;
+ 	case NL80211_IFTYPE_ADHOC:
+ 		basic->conn_type = cpu_to_le32(CONNECTION_IBSS_ADHOC);
++		basic->aid = cpu_to_le16(sta->aid);
+ 		break;
+ 	default:
+ 		WARN_ON(1);
+@@ -1331,7 +1325,6 @@ mt7915_mcu_sta_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	memcpy(basic->peer_addr, sta->addr, ETH_ALEN);
+-	basic->aid = cpu_to_le16(sta->aid);
+ 	basic->qos = sta->wme;
+ }
+ 
+@@ -1339,11 +1332,9 @@ static void
+ mt7915_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ 		      struct ieee80211_vif *vif)
+ {
+-	struct mt7915_sta *msta = (struct mt7915_sta *)sta->drv_priv;
+ 	struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
+ 	struct ieee80211_he_cap_elem *elem = &sta->he_cap.he_cap_elem;
+-	enum nl80211_band band = msta->vif->phy->mt76->chandef.chan->band;
+-	const u16 *mcs_mask = msta->vif->bitrate_mask.control[band].he_mcs;
++	struct ieee80211_he_mcs_nss_supp mcs_map;
+ 	struct sta_rec_he *he;
+ 	struct tlv *tlv;
+ 	u32 cap = 0;
+@@ -1433,22 +1424,23 @@ mt7915_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ 
+ 	he->he_cap = cpu_to_le32(cap);
+ 
++	mcs_map = sta->he_cap.he_mcs_nss_supp;
+ 	switch (sta->bandwidth) {
+ 	case IEEE80211_STA_RX_BW_160:
+ 		if (elem->phy_cap_info[0] &
+ 		    IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G)
+ 			mt7915_mcu_set_sta_he_mcs(sta,
+ 						  &he->max_nss_mcs[CMD_HE_MCS_BW8080],
+-						  mcs_mask);
++						  le16_to_cpu(mcs_map.rx_mcs_80p80));
+ 
+ 		mt7915_mcu_set_sta_he_mcs(sta,
+ 					  &he->max_nss_mcs[CMD_HE_MCS_BW160],
+-					  mcs_mask);
++					  le16_to_cpu(mcs_map.rx_mcs_160));
+ 		fallthrough;
+ 	default:
+ 		mt7915_mcu_set_sta_he_mcs(sta,
+ 					  &he->max_nss_mcs[CMD_HE_MCS_BW80],
+-					  mcs_mask);
++					  le16_to_cpu(mcs_map.rx_mcs_80));
+ 		break;
+ 	}
+ 
+@@ -1523,9 +1515,6 @@ mt7915_mcu_sta_muru_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ 	    vif->type != NL80211_IFTYPE_AP)
+ 		return;
+ 
+-	if (!sta->vht_cap.vht_supported)
+-		return;
+-
+ 	tlv = mt7915_mcu_add_tlv(skb, STA_REC_MURU, sizeof(*muru));
+ 
+ 	muru = (struct sta_rec_muru *)tlv;
+@@ -1533,9 +1522,12 @@ mt7915_mcu_sta_muru_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ 	muru->cfg.mimo_dl_en = mvif->cap.he_mu_ebfer ||
+ 			       mvif->cap.vht_mu_ebfer ||
+ 			       mvif->cap.vht_mu_ebfee;
++	muru->cfg.mimo_ul_en = true;
++	muru->cfg.ofdma_dl_en = true;
+ 
+-	muru->mimo_dl.vht_mu_bfee =
+-		!!(sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
++	if (sta->vht_cap.vht_supported)
++		muru->mimo_dl.vht_mu_bfee =
++			!!(sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
+ 
+ 	if (!sta->he_cap.has_he)
+ 		return;
+@@ -1543,13 +1535,11 @@ mt7915_mcu_sta_muru_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ 	muru->mimo_dl.partial_bw_dl_mimo =
+ 		HE_PHY(CAP6_PARTIAL_BANDWIDTH_DL_MUMIMO, elem->phy_cap_info[6]);
+ 
+-	muru->cfg.mimo_ul_en = true;
+ 	muru->mimo_ul.full_ul_mimo =
+ 		HE_PHY(CAP2_UL_MU_FULL_MU_MIMO, elem->phy_cap_info[2]);
+ 	muru->mimo_ul.partial_ul_mimo =
+ 		HE_PHY(CAP2_UL_MU_PARTIAL_MU_MIMO, elem->phy_cap_info[2]);
+ 
+-	muru->cfg.ofdma_dl_en = true;
+ 	muru->ofdma_dl.punc_pream_rx =
+ 		HE_PHY(CAP1_PREAMBLE_PUNC_RX_MASK, elem->phy_cap_info[1]);
+ 	muru->ofdma_dl.he_20m_in_40m_2g =
+@@ -2132,9 +2122,12 @@ mt7915_mcu_add_rate_ctrl_fixed(struct mt7915_dev *dev,
+ 			phy.sgi |= gi << (i << (_he));				\
+ 			phy.he_ltf |= mask->control[band].he_ltf << (i << (_he));\
+ 		}								\
+-		for (i = 0; i < ARRAY_SIZE(mask->control[band]._mcs); i++) 	\
+-			nrates += hweight16(mask->control[band]._mcs[i]);  	\
+-		phy.mcs = ffs(mask->control[band]._mcs[0]) - 1;			\
++		for (i = 0; i < ARRAY_SIZE(mask->control[band]._mcs); i++) {	\
++			if (!mask->control[band]._mcs[i])			\
++				continue;					\
++			nrates += hweight16(mask->control[band]._mcs[i]);	\
++			phy.mcs = ffs(mask->control[band]._mcs[i]) - 1;		\
++		}								\
+ 	} while (0)
+ 
+ 	if (sta->he_cap.has_he) {
+@@ -2392,8 +2385,10 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	ret = mt7915_mcu_sta_wtbl_tlv(dev, skb, vif, sta);
+-	if (ret)
++	if (ret) {
++		dev_kfree_skb(skb);
+ 		return ret;
++	}
+ 
+ 	if (sta && sta->ht_cap.ht_supported) {
+ 		/* starec amsdu */
+@@ -2407,8 +2402,10 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	ret = mt7915_mcu_add_group(dev, vif, sta);
+-	if (ret)
++	if (ret) {
++		dev_kfree_skb(skb);
+ 		return ret;
++	}
+ out:
+ 	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ 				     MCU_EXT_CMD(STA_REC_UPDATE), true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+index 86fd7292b229f..196b50e616fe0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+@@ -129,23 +129,22 @@ mt7921_queues_acq(struct seq_file *s, void *data)
+ 
+ 	mt7921_mutex_acquire(dev);
+ 
+-	for (i = 0; i < 16; i++) {
+-		int j, acs = i / 4, index = i % 4;
++	for (i = 0; i < 4; i++) {
+ 		u32 ctrl, val, qlen = 0;
++		int j;
+ 
+-		val = mt76_rr(dev, MT_PLE_AC_QEMPTY(acs, index));
+-		ctrl = BIT(31) | BIT(15) | (acs << 8);
++		val = mt76_rr(dev, MT_PLE_AC_QEMPTY(i));
++		ctrl = BIT(31) | BIT(11) | (i << 24);
+ 
+ 		for (j = 0; j < 32; j++) {
+ 			if (val & BIT(j))
+ 				continue;
+ 
+-			mt76_wr(dev, MT_PLE_FL_Q0_CTRL,
+-				ctrl | (j + (index << 5)));
++			mt76_wr(dev, MT_PLE_FL_Q0_CTRL, ctrl | j);
+ 			qlen += mt76_get_field(dev, MT_PLE_FL_Q3_CTRL,
+ 					       GENMASK(11, 0));
+ 		}
+-		seq_printf(s, "AC%d%d: queued=%d\n", acs, index, qlen);
++		seq_printf(s, "AC%d: queued=%d\n", i, qlen);
+ 	}
+ 
+ 	mt7921_mutex_release(dev);
+@@ -291,13 +290,12 @@ mt7921_pm_set(void *data, u64 val)
+ 	pm->enable = false;
+ 	mt76_connac_pm_wake(&dev->mphy, pm);
+ 
++	pm->enable = val;
+ 	ieee80211_iterate_active_interfaces(mt76_hw(dev),
+ 					    IEEE80211_IFACE_ITER_RESUME_ALL,
+ 					    mt7921_pm_interface_iter, dev);
+ 
+ 	mt76_connac_mcu_set_deep_sleep(&dev->mt76, pm->ds_enable);
+-
+-	pm->enable = val;
+ 	mt76_connac_power_save_sched(&dev->mphy, pm);
+ out:
+ 	mutex_unlock(&dev->mt76.mutex);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index cdff1fd52d93a..39d6ce4ecddd7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -78,110 +78,6 @@ static void mt7921_dma_prefetch(struct mt7921_dev *dev)
+ 	mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
+ }
+ 
+-static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
+-{
+-	static const struct {
+-		u32 phys;
+-		u32 mapped;
+-		u32 size;
+-	} fixed_map[] = {
+-		{ 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
+-		{ 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
+-		{ 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
+-		{ 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
+-		{ 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
+-		{ 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
+-		{ 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
+-		{ 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
+-		{ 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
+-		{ 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
+-		{ 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
+-		{ 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
+-		{ 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
+-		{ 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
+-		{ 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
+-		{ 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
+-		{ 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
+-		{ 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
+-		{ 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
+-		{ 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
+-		{ 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
+-		{ 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
+-		{ 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
+-		{ 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
+-		{ 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
+-		{ 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
+-		{ 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
+-		{ 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
+-		{ 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
+-		{ 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
+-		{ 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
+-		{ 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
+-		{ 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
+-		{ 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
+-		{ 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
+-		{ 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
+-		{ 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
+-		{ 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
+-		{ 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
+-		{ 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
+-		{ 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
+-		{ 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
+-		{ 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
+-	};
+-	int i;
+-
+-	if (addr < 0x100000)
+-		return addr;
+-
+-	for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
+-		u32 ofs;
+-
+-		if (addr < fixed_map[i].phys)
+-			continue;
+-
+-		ofs = addr - fixed_map[i].phys;
+-		if (ofs > fixed_map[i].size)
+-			continue;
+-
+-		return fixed_map[i].mapped + ofs;
+-	}
+-
+-	if ((addr >= 0x18000000 && addr < 0x18c00000) ||
+-	    (addr >= 0x70000000 && addr < 0x78000000) ||
+-	    (addr >= 0x7c000000 && addr < 0x7c400000))
+-		return mt7921_reg_map_l1(dev, addr);
+-
+-	dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
+-		addr);
+-
+-	return 0;
+-}
+-
+-static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
+-{
+-	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	u32 addr = __mt7921_reg_addr(dev, offset);
+-
+-	return dev->bus_ops->rr(mdev, addr);
+-}
+-
+-static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
+-{
+-	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	u32 addr = __mt7921_reg_addr(dev, offset);
+-
+-	dev->bus_ops->wr(mdev, addr, val);
+-}
+-
+-static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
+-{
+-	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+-	u32 addr = __mt7921_reg_addr(dev, offset);
+-
+-	return dev->bus_ops->rmw(mdev, addr, mask, val);
+-}
+-
+ static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
+ {
+ 	if (force) {
+@@ -341,23 +237,8 @@ int mt7921_wpdma_reinit_cond(struct mt7921_dev *dev)
+ 
+ int mt7921_dma_init(struct mt7921_dev *dev)
+ {
+-	struct mt76_bus_ops *bus_ops;
+ 	int ret;
+ 
+-	dev->phy.dev = dev;
+-	dev->phy.mt76 = &dev->mt76.phy;
+-	dev->mt76.phy.priv = &dev->phy;
+-	dev->bus_ops = dev->mt76.bus;
+-	bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
+-			       GFP_KERNEL);
+-	if (!bus_ops)
+-		return -ENOMEM;
+-
+-	bus_ops->rr = mt7921_rr;
+-	bus_ops->wr = mt7921_wr;
+-	bus_ops->rmw = mt7921_rmw;
+-	dev->mt76.bus = bus_ops;
+-
+ 	mt76_dma_attach(&dev->mt76);
+ 
+ 	ret = mt7921_dma_disable(dev, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index fc21a78b37c49..57fd626b40da9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -836,9 +836,15 @@ mt7921_mac_write_txwi_80211(struct mt7921_dev *dev, __le32 *txwi,
+ 		txwi[3] |= cpu_to_le32(val);
+ 	}
+ 
+-	val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+-	      FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
+-	txwi[7] |= cpu_to_le32(val);
++	if (mt76_is_mmio(&dev->mt76)) {
++		val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
++		      FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
++		txwi[7] |= cpu_to_le32(val);
++	} else {
++		val = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
++		      FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
++		txwi[8] |= cpu_to_le32(val);
++	}
+ }
+ 
+ void mt7921_mac_write_txwi(struct mt7921_dev *dev, __le32 *txwi,
+@@ -1014,7 +1020,6 @@ mt7921_mac_add_txs_skb(struct mt7921_dev *dev, struct mt76_wcid *wcid, int pid,
+ 		break;
+ 	case MT_PHY_TYPE_HT:
+ 	case MT_PHY_TYPE_HT_GF:
+-		rate.mcs += (rate.nss - 1) * 8;
+ 		if (rate.mcs > 31)
+ 			goto out;
+ 
+@@ -1472,6 +1477,14 @@ void mt7921_pm_power_save_work(struct work_struct *work)
+ 	    test_bit(MT76_HW_SCHED_SCANNING, &mphy->state))
+ 		goto out;
+ 
++	if (mutex_is_locked(&dev->mt76.mutex))
++		/* if mt76 mutex is held we should not put the device
++		 * to sleep since we are currently accessing device
++		 * register map. We need to wait for the next power_save
++		 * trigger.
++		 */
++		goto out;
++
+ 	if (time_is_after_jiffies(dev->pm.last_activity + delta)) {
+ 		delta = dev->pm.last_activity + delta - jiffies;
+ 		goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+index 544a1c33126a4..12e1cf8abe6ea 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+@@ -284,6 +284,9 @@ enum tx_mcu_port_q_idx {
+ #define MT_TXD7_HW_AMSDU		BIT(10)
+ #define MT_TXD7_TX_TIME			GENMASK(9, 0)
+ 
++#define MT_TXD8_L_TYPE			GENMASK(5, 4)
++#define MT_TXD8_L_SUB_TYPE		GENMASK(3, 0)
++
+ #define MT_TX_RATE_STBC			BIT(13)
+ #define MT_TX_RATE_NSS			GENMASK(12, 10)
+ #define MT_TX_RATE_MODE			GENMASK(9, 6)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 4c6adbb969550..50c953b08ed0c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -912,33 +912,28 @@ EXPORT_SYMBOL_GPL(mt7921_mcu_exit);
+ 
+ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ {
+-#define WMM_AIFS_SET		BIT(0)
+-#define WMM_CW_MIN_SET		BIT(1)
+-#define WMM_CW_MAX_SET		BIT(2)
+-#define WMM_TXOP_SET		BIT(3)
+-#define WMM_PARAM_SET		GENMASK(3, 0)
+-#define TX_CMD_MODE		1
++	struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
++
+ 	struct edca {
+-		u8 queue;
+-		u8 set;
+-		u8 aifs;
+-		u8 cw_min;
++		__le16 cw_min;
+ 		__le16 cw_max;
+ 		__le16 txop;
+-	};
++		__le16 aifs;
++		u8 guardtime;
++		u8 acm;
++	} __packed;
+ 	struct mt7921_mcu_tx {
+-		u8 total;
+-		u8 action;
+-		u8 valid;
+-		u8 mode;
+-
+ 		struct edca edca[IEEE80211_NUM_ACS];
++		u8 bss_idx;
++		u8 qos;
++		u8 wmm_idx;
++		u8 pad;
+ 	} __packed req = {
+-		.valid = true,
+-		.mode = TX_CMD_MODE,
+-		.total = IEEE80211_NUM_ACS,
++		.bss_idx = mvif->mt76.idx,
++		.qos = vif->bss_conf.qos,
++		.wmm_idx = mvif->mt76.wmm_idx,
+ 	};
+-	struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
++
+ 	struct mu_edca {
+ 		u8 cw_min;
+ 		u8 cw_max;
+@@ -962,30 +957,29 @@ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ 		.qos = vif->bss_conf.qos,
+ 		.wmm_idx = mvif->mt76.wmm_idx,
+ 	};
++	int to_aci[] = {1, 0, 2, 3};
+ 	int ac, ret;
+ 
+ 	for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+ 		struct ieee80211_tx_queue_params *q = &mvif->queue_params[ac];
+-		struct edca *e = &req.edca[ac];
++		struct edca *e = &req.edca[to_aci[ac]];
+ 
+-		e->set = WMM_PARAM_SET;
+-		e->queue = ac + mvif->mt76.wmm_idx * MT7921_MAX_WMM_SETS;
+ 		e->aifs = q->aifs;
+ 		e->txop = cpu_to_le16(q->txop);
+ 
+ 		if (q->cw_min)
+-			e->cw_min = fls(q->cw_min);
++			e->cw_min = cpu_to_le16(q->cw_min);
+ 		else
+ 			e->cw_min = 5;
+ 
+ 		if (q->cw_max)
+-			e->cw_max = cpu_to_le16(fls(q->cw_max));
++			e->cw_max = cpu_to_le16(q->cw_max);
+ 		else
+ 			e->cw_max = cpu_to_le16(10);
+ 	}
+ 
+-	ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(EDCA_UPDATE),
+-				&req, sizeof(req), true);
++	ret = mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_EDCA_PARMS), &req,
++				sizeof(req), false);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -995,7 +989,6 @@ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ 	for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+ 		struct ieee80211_he_mu_edca_param_ac_rec *q;
+ 		struct mu_edca *e;
+-		int to_aci[] = {1, 0, 2, 3};
+ 
+ 		if (!mvif->queue_params[ac].mu_edca)
+ 			break;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 96647801850a5..33f8e5b541b35 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -452,6 +452,7 @@ int mt7921e_mcu_init(struct mt7921_dev *dev);
+ int mt7921s_wfsys_reset(struct mt7921_dev *dev);
+ int mt7921s_mac_reset(struct mt7921_dev *dev);
+ int mt7921s_init_reset(struct mt7921_dev *dev);
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 40186e6cd865e..006e6a11e6faf 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -119,6 +119,110 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
+ 	mt76_free_device(&dev->mt76);
+ }
+ 
++static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
++{
++	static const struct {
++		u32 phys;
++		u32 mapped;
++		u32 size;
++	} fixed_map[] = {
++		{ 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
++		{ 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
++		{ 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
++		{ 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
++		{ 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
++		{ 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
++		{ 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
++		{ 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
++		{ 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
++		{ 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
++		{ 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
++		{ 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
++		{ 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
++		{ 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
++		{ 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
++		{ 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
++		{ 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
++		{ 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
++		{ 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
++		{ 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
++		{ 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
++		{ 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
++		{ 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
++		{ 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
++		{ 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
++		{ 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
++		{ 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
++		{ 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
++		{ 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
++		{ 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
++		{ 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
++		{ 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
++		{ 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
++		{ 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
++		{ 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
++		{ 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
++		{ 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
++		{ 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
++		{ 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
++		{ 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
++		{ 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
++		{ 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
++		{ 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
++	};
++	int i;
++
++	if (addr < 0x100000)
++		return addr;
++
++	for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
++		u32 ofs;
++
++		if (addr < fixed_map[i].phys)
++			continue;
++
++		ofs = addr - fixed_map[i].phys;
++		if (ofs > fixed_map[i].size)
++			continue;
++
++		return fixed_map[i].mapped + ofs;
++	}
++
++	if ((addr >= 0x18000000 && addr < 0x18c00000) ||
++	    (addr >= 0x70000000 && addr < 0x78000000) ||
++	    (addr >= 0x7c000000 && addr < 0x7c400000))
++		return mt7921_reg_map_l1(dev, addr);
++
++	dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
++		addr);
++
++	return 0;
++}
++
++static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
++{
++	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++	u32 addr = __mt7921_reg_addr(dev, offset);
++
++	return dev->bus_ops->rr(mdev, addr);
++}
++
++static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
++{
++	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++	u32 addr = __mt7921_reg_addr(dev, offset);
++
++	dev->bus_ops->wr(mdev, addr, val);
++}
++
++static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
++{
++	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++	u32 addr = __mt7921_reg_addr(dev, offset);
++
++	return dev->bus_ops->rmw(mdev, addr, mask, val);
++}
++
+ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 			    const struct pci_device_id *id)
+ {
+@@ -149,6 +253,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 		.fw_own = mt7921e_mcu_fw_pmctrl,
+ 	};
+ 
++	struct mt76_bus_ops *bus_ops;
+ 	struct mt7921_dev *dev;
+ 	struct mt76_dev *mdev;
+ 	int ret;
+@@ -186,6 +291,25 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 
+ 	mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
+ 	tasklet_init(&dev->irq_tasklet, mt7921_irq_tasklet, (unsigned long)dev);
++
++	dev->phy.dev = dev;
++	dev->phy.mt76 = &dev->mt76.phy;
++	dev->mt76.phy.priv = &dev->phy;
++	dev->bus_ops = dev->mt76.bus;
++	bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
++			       GFP_KERNEL);
++	if (!bus_ops)
++		return -ENOMEM;
++
++	bus_ops->rr = mt7921_rr;
++	bus_ops->wr = mt7921_wr;
++	bus_ops->rmw = mt7921_rmw;
++	dev->mt76.bus = bus_ops;
++
++	ret = __mt7921e_mcu_drv_pmctrl(dev);
++	if (ret)
++		return ret;
++
+ 	mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
+ 		    (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
+ 	dev_err(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+index 7b34c7f2ab3a6..3f80beca965ad 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+@@ -59,10 +59,8 @@ int mt7921e_mcu_init(struct mt7921_dev *dev)
+ 	return err;
+ }
+ 
+-int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ {
+-	struct mt76_phy *mphy = &dev->mt76.phy;
+-	struct mt76_connac_pm *pm = &dev->pm;
+ 	int i, err = 0;
+ 
+ 	for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
+@@ -75,9 +73,21 @@ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ 	if (i == MT7921_DRV_OWN_RETRY_COUNT) {
+ 		dev_err(dev->mt76.dev, "driver own failed\n");
+ 		err = -EIO;
+-		goto out;
+ 	}
+ 
++	return err;
++}
++
++int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++{
++	struct mt76_phy *mphy = &dev->mt76.phy;
++	struct mt76_connac_pm *pm = &dev->pm;
++	int err;
++
++	err = __mt7921e_mcu_drv_pmctrl(dev);
++	if (err < 0)
++		goto out;
++
+ 	mt7921_wpdma_reinit_cond(dev);
+ 	clear_bit(MT76_STATE_PM, &mphy->state);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+index cbd38122c510f..c8c92faa4624f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+@@ -17,13 +17,12 @@
+ #define MT_PLE_BASE			0x820c0000
+ #define MT_PLE(ofs)			(MT_PLE_BASE + (ofs))
+ 
+-#define MT_PLE_FL_Q0_CTRL		MT_PLE(0x1b0)
+-#define MT_PLE_FL_Q1_CTRL		MT_PLE(0x1b4)
+-#define MT_PLE_FL_Q2_CTRL		MT_PLE(0x1b8)
+-#define MT_PLE_FL_Q3_CTRL		MT_PLE(0x1bc)
++#define MT_PLE_FL_Q0_CTRL		MT_PLE(0x3e0)
++#define MT_PLE_FL_Q1_CTRL		MT_PLE(0x3e4)
++#define MT_PLE_FL_Q2_CTRL		MT_PLE(0x3e8)
++#define MT_PLE_FL_Q3_CTRL		MT_PLE(0x3ec)
+ 
+-#define MT_PLE_AC_QEMPTY(ac, n)		MT_PLE(0x300 + 0x10 * (ac) + \
+-					       ((n) << 2))
++#define MT_PLE_AC_QEMPTY(_n)		MT_PLE(0x500 + 0x40 * (_n))
+ #define MT_PLE_AMSDU_PACK_MSDU_CNT(n)	MT_PLE(0x10e0 + ((n) << 2))
+ 
+ #define MT_MDP_BASE			0x820cd000
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
+index 437cddad9a90c..353d99fef065a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
+@@ -49,6 +49,26 @@ mt7921s_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 	return ret;
+ }
+ 
++static u32 mt7921s_read_rm3r(struct mt7921_dev *dev)
++{
++	struct mt76_sdio *sdio = &dev->mt76.sdio;
++
++	return sdio_readl(sdio->func, MCR_D2HRM3R, NULL);
++}
++
++static u32 mt7921s_clear_rm3r_drv_own(struct mt7921_dev *dev)
++{
++	struct mt76_sdio *sdio = &dev->mt76.sdio;
++	u32 val;
++
++	val = sdio_readl(sdio->func, MCR_D2HRM3R, NULL);
++	if (val)
++		sdio_writel(sdio->func, H2D_SW_INT_CLEAR_MAILBOX_ACK,
++			    MCR_WSICR, NULL);
++
++	return val;
++}
++
+ int mt7921s_mcu_init(struct mt7921_dev *dev)
+ {
+ 	static const struct mt76_mcu_ops mt7921s_mcu_ops = {
+@@ -88,6 +108,12 @@ int mt7921s_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ 
+ 	err = readx_poll_timeout(mt76s_read_pcr, &dev->mt76, status,
+ 				 status & WHLPCR_IS_DRIVER_OWN, 2000, 1000000);
++
++	if (!err && test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state))
++		err = readx_poll_timeout(mt7921s_read_rm3r, dev, status,
++					 status & D2HRM3R_IS_DRIVER_OWN,
++					 2000, 1000000);
++
+ 	sdio_release_host(func);
+ 
+ 	if (err < 0) {
+@@ -115,12 +141,24 @@ int mt7921s_mcu_fw_pmctrl(struct mt7921_dev *dev)
+ 
+ 	sdio_claim_host(func);
+ 
++	if (test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state)) {
++		err = readx_poll_timeout(mt7921s_clear_rm3r_drv_own,
++					 dev, status,
++					 !(status & D2HRM3R_IS_DRIVER_OWN),
++					 2000, 1000000);
++		if (err < 0) {
++			dev_err(dev->mt76.dev, "mailbox ACK not cleared\n");
++			goto err;
++		}
++	}
++
+ 	sdio_writel(func, WHLPCR_FW_OWN_REQ_SET, MCR_WHLPCR, NULL);
+ 
+ 	err = readx_poll_timeout(mt76s_read_pcr, &dev->mt76, status,
+ 				 !(status & WHLPCR_IS_DRIVER_OWN), 2000, 1000000);
+ 	sdio_release_host(func);
+ 
++err:
+ 	if (err < 0) {
+ 		dev_err(dev->mt76.dev, "firmware own failed\n");
+ 		clear_bit(MT76_STATE_PM, &mphy->state);
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.h b/drivers/net/wireless/mediatek/mt76/sdio.h
+index 99db4ad93b7c7..27d5d2077ebae 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.h
++++ b/drivers/net/wireless/mediatek/mt76/sdio.h
+@@ -65,6 +65,7 @@
+ #define MCR_H2DSM0R			0x0070
+ #define H2D_SW_INT_READ			BIT(16)
+ #define H2D_SW_INT_WRITE		BIT(17)
++#define H2D_SW_INT_CLEAR_MAILBOX_ACK	BIT(22)
+ 
+ #define MCR_H2DSM1R			0x0074
+ #define MCR_D2HRM0R			0x0078
+@@ -109,6 +110,7 @@
+ #define MCR_H2DSM2R			0x0160 /* supported in CONNAC2 */
+ #define MCR_H2DSM3R			0x0164 /* supported in CONNAC2 */
+ #define MCR_D2HRM3R			0x0174 /* supported in CONNAC2 */
++#define D2HRM3R_IS_DRIVER_OWN		BIT(0)
+ #define MCR_WTQCR8			0x0190 /* supported in CONNAC2 */
+ #define MCR_WTQCR9			0x0194 /* supported in CONNAC2 */
+ #define MCR_WTQCR10			0x0198 /* supported in CONNAC2 */
+diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
+index e3a3dc3e45b43..03f0953440b50 100644
+--- a/drivers/net/wireless/ray_cs.c
++++ b/drivers/net/wireless/ray_cs.c
+@@ -382,6 +382,8 @@ static int ray_config(struct pcmcia_device *link)
+ 		goto failed;
+ 	local->sram = ioremap(link->resource[2]->start,
+ 			resource_size(link->resource[2]));
++	if (!local->sram)
++		goto failed;
+ 
+ /*** Set up 16k window for shared memory (receive buffer) ***************/
+ 	link->resource[3]->flags |=
+@@ -396,6 +398,8 @@ static int ray_config(struct pcmcia_device *link)
+ 		goto failed;
+ 	local->rmem = ioremap(link->resource[3]->start,
+ 			resource_size(link->resource[3]));
++	if (!local->rmem)
++		goto failed;
+ 
+ /*** Set up window for attribute memory ***********************************/
+ 	link->resource[4]->flags |=
+@@ -410,6 +414,8 @@ static int ray_config(struct pcmcia_device *link)
+ 		goto failed;
+ 	local->amem = ioremap(link->resource[4]->start,
+ 			resource_size(link->resource[4]));
++	if (!local->amem)
++		goto failed;
+ 
+ 	dev_dbg(&link->dev, "ray_config sram=%p\n", local->sram);
+ 	dev_dbg(&link->dev, "ray_config rmem=%p\n", local->rmem);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index 9ccf3d6087993..70ad891a76bae 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -1025,6 +1025,9 @@ static unsigned long default_align(struct nd_region *nd_region)
+ 		}
+ 	}
+ 
++	if (nd_region->ndr_size < MEMREMAP_COMPAT_ALIGN_MAX)
++		align = PAGE_SIZE;
++
+ 	mappings = max_t(u16, 1, nd_region->ndr_mappings);
+ 	div_u64_rem(align, mappings, &remainder);
+ 	if (remainder)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 5785f6abf1945..ecb98d1ad9f64 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1685,13 +1685,6 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
+ 		blk_queue_max_write_zeroes_sectors(queue, UINT_MAX);
+ }
+ 
+-static bool nvme_ns_ids_valid(struct nvme_ns_ids *ids)
+-{
+-	return !uuid_is_null(&ids->uuid) ||
+-		memchr_inv(ids->nguid, 0, sizeof(ids->nguid)) ||
+-		memchr_inv(ids->eui64, 0, sizeof(ids->eui64));
+-}
+-
+ static bool nvme_ns_ids_equal(struct nvme_ns_ids *a, struct nvme_ns_ids *b)
+ {
+ 	return uuid_equal(&a->uuid, &b->uuid) &&
+@@ -1867,9 +1860,6 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ 	nvme_config_discard(disk, ns);
+ 	blk_queue_max_write_zeroes_sectors(disk->queue,
+ 					   ns->ctrl->max_zeroes_sectors);
+-
+-	set_disk_ro(disk, (id->nsattr & NVME_NS_ATTR_RO) ||
+-		test_bit(NVME_NS_FORCE_RO, &ns->flags));
+ }
+ 
+ static inline bool nvme_first_scan(struct gendisk *disk)
+@@ -1930,6 +1920,8 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ 			goto out_unfreeze;
+ 	}
+ 
++	set_disk_ro(ns->disk, (id->nsattr & NVME_NS_ATTR_RO) ||
++		test_bit(NVME_NS_FORCE_RO, &ns->flags));
+ 	set_bit(NVME_NS_READY, &ns->flags);
+ 	blk_mq_unfreeze_queue(ns->disk->queue);
+ 
+@@ -1942,6 +1934,9 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ 	if (nvme_ns_head_multipath(ns->head)) {
+ 		blk_mq_freeze_queue(ns->head->disk->queue);
+ 		nvme_update_disk_info(ns->head->disk, ns, id);
++		set_disk_ro(ns->head->disk,
++			    (id->nsattr & NVME_NS_ATTR_RO) ||
++				    test_bit(NVME_NS_FORCE_RO, &ns->flags));
+ 		nvme_mpath_revalidate_paths(ns);
+ 		blk_stack_limits(&ns->head->disk->queue->limits,
+ 				 &ns->queue->limits, 0);
+@@ -3588,15 +3583,20 @@ static const struct attribute_group *nvme_dev_attr_groups[] = {
+ 	NULL,
+ };
+ 
+-static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
++static struct nvme_ns_head *nvme_find_ns_head(struct nvme_ctrl *ctrl,
+ 		unsigned nsid)
+ {
+ 	struct nvme_ns_head *h;
+ 
+-	lockdep_assert_held(&subsys->lock);
++	lockdep_assert_held(&ctrl->subsys->lock);
+ 
+-	list_for_each_entry(h, &subsys->nsheads, entry) {
+-		if (h->ns_id != nsid)
++	list_for_each_entry(h, &ctrl->subsys->nsheads, entry) {
++		/*
++		 * Private namespaces can share NSIDs under some conditions.
++		 * In that case we can't use the same ns_head for namespaces
++		 * with the same NSID.
++		 */
++		if (h->ns_id != nsid || !nvme_is_unique_nsid(ctrl, h))
+ 			continue;
+ 		if (!list_empty(&h->list) && nvme_tryget_ns_head(h))
+ 			return h;
+@@ -3605,16 +3605,24 @@ static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
+ 	return NULL;
+ }
+ 
+-static int __nvme_check_ids(struct nvme_subsystem *subsys,
+-		struct nvme_ns_head *new)
++static int nvme_subsys_check_duplicate_ids(struct nvme_subsystem *subsys,
++		struct nvme_ns_ids *ids)
+ {
++	bool has_uuid = !uuid_is_null(&ids->uuid);
++	bool has_nguid = memchr_inv(ids->nguid, 0, sizeof(ids->nguid));
++	bool has_eui64 = memchr_inv(ids->eui64, 0, sizeof(ids->eui64));
+ 	struct nvme_ns_head *h;
+ 
+ 	lockdep_assert_held(&subsys->lock);
+ 
+ 	list_for_each_entry(h, &subsys->nsheads, entry) {
+-		if (nvme_ns_ids_valid(&new->ids) &&
+-		    nvme_ns_ids_equal(&new->ids, &h->ids))
++		if (has_uuid && uuid_equal(&ids->uuid, &h->ids.uuid))
++			return -EINVAL;
++		if (has_nguid &&
++		    memcmp(&ids->nguid, &h->ids.nguid, sizeof(ids->nguid)) == 0)
++			return -EINVAL;
++		if (has_eui64 &&
++		    memcmp(&ids->eui64, &h->ids.eui64, sizeof(ids->eui64)) == 0)
+ 			return -EINVAL;
+ 	}
+ 
+@@ -3713,7 +3721,7 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
+ 	head->ids = *ids;
+ 	kref_init(&head->ref);
+ 
+-	ret = __nvme_check_ids(ctrl->subsys, head);
++	ret = nvme_subsys_check_duplicate_ids(ctrl->subsys, &head->ids);
+ 	if (ret) {
+ 		dev_err(ctrl->device,
+ 			"duplicate IDs for nsid %d\n", nsid);
+@@ -3756,7 +3764,7 @@ static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
+ 	int ret = 0;
+ 
+ 	mutex_lock(&ctrl->subsys->lock);
+-	head = nvme_find_ns_head(ctrl->subsys, nsid);
++	head = nvme_find_ns_head(ctrl, nsid);
+ 	if (!head) {
+ 		head = nvme_alloc_ns_head(ctrl, nsid, ids);
+ 		if (IS_ERR(head)) {
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 99c2307b04e2c..65fe28895ef39 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -468,10 +468,11 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
+ 
+ 	/*
+ 	 * Add a multipath node if the subsystems supports multiple controllers.
+-	 * We also do this for private namespaces as the namespace sharing data could
+-	 * change after a rescan.
++	 * We also do this for private namespaces as the namespace sharing flag
++	 * could change after a rescan.
+ 	 */
+-	if (!(ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) || !multipath)
++	if (!(ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) ||
++	    !nvme_is_unique_nsid(ctrl, head) || !multipath)
+ 		return 0;
+ 
+ 	head->disk = blk_alloc_disk(ctrl->numa_node);
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 9b095ee013649..ec5d3d4dc2cca 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -712,6 +712,25 @@ static inline bool nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+ 		return queue_live;
+ 	return __nvme_check_ready(ctrl, rq, queue_live);
+ }
++
++/*
++ * NSID shall be unique for all shared namespaces, or if at least one of the
++ * following conditions is met:
++ *   1. Namespace Management is supported by the controller
++ *   2. ANA is supported by the controller
++ *   3. NVM Set are supported by the controller
++ *
++ * In other case, private namespace are not required to report a unique NSID.
++ */
++static inline bool nvme_is_unique_nsid(struct nvme_ctrl *ctrl,
++		struct nvme_ns_head *head)
++{
++	return head->shared ||
++		(ctrl->oacs & NVME_CTRL_OACS_NS_MNGT_SUPP) ||
++		(ctrl->subsys->cmic & NVME_CTRL_CMIC_ANA) ||
++		(ctrl->ctratt & NVME_CTRL_CTRATT_NVM_SETS);
++}
++
+ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
+ 		void *buf, unsigned bufflen);
+ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 65e00c64a588b..d66e2de044e0a 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -30,6 +30,44 @@ static int so_priority;
+ module_param(so_priority, int, 0644);
+ MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
+ 
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++/* lockdep can detect a circular dependency of the form
++ *   sk_lock -> mmap_lock (page fault) -> fs locks -> sk_lock
++ * because dependencies are tracked for both nvme-tcp and user contexts. Using
++ * a separate class prevents lockdep from conflating nvme-tcp socket use with
++ * user-space socket API use.
++ */
++static struct lock_class_key nvme_tcp_sk_key[2];
++static struct lock_class_key nvme_tcp_slock_key[2];
++
++static void nvme_tcp_reclassify_socket(struct socket *sock)
++{
++	struct sock *sk = sock->sk;
++
++	if (WARN_ON_ONCE(!sock_allow_reclassification(sk)))
++		return;
++
++	switch (sk->sk_family) {
++	case AF_INET:
++		sock_lock_init_class_and_name(sk, "slock-AF_INET-NVME",
++					      &nvme_tcp_slock_key[0],
++					      "sk_lock-AF_INET-NVME",
++					      &nvme_tcp_sk_key[0]);
++		break;
++	case AF_INET6:
++		sock_lock_init_class_and_name(sk, "slock-AF_INET6-NVME",
++					      &nvme_tcp_slock_key[1],
++					      "sk_lock-AF_INET6-NVME",
++					      &nvme_tcp_sk_key[1]);
++		break;
++	default:
++		WARN_ON_ONCE(1);
++	}
++}
++#else
++static void nvme_tcp_reclassify_socket(struct socket *sock) { }
++#endif
++
+ enum nvme_tcp_send_state {
+ 	NVME_TCP_SEND_CMD_PDU = 0,
+ 	NVME_TCP_SEND_H2C_PDU,
+@@ -1469,6 +1507,8 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+ 		goto err_destroy_mutex;
+ 	}
+ 
++	nvme_tcp_reclassify_socket(queue->sock);
++
+ 	/* Single syn retry */
+ 	tcp_sock_set_syncnt(queue->sock->sk, 1);
+ 
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index 46935695cfb90..8d0d1f61c650d 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -160,9 +160,12 @@ int pci_generic_config_write32(struct pci_bus *bus, unsigned int devfn,
+ 	 * write happen to have any RW1C (write-one-to-clear) bits set, we
+ 	 * just inadvertently cleared something we shouldn't have.
+ 	 */
+-	dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
+-			     size, pci_domain_nr(bus), bus->number,
+-			     PCI_SLOT(devfn), PCI_FUNC(devfn), where);
++	if (!bus->unsafe_warn) {
++		dev_warn(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
++			 size, pci_domain_nr(bus), bus->number,
++			 PCI_SLOT(devfn), PCI_FUNC(devfn), where);
++		bus->unsafe_warn = 1;
++	}
+ 
+ 	mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
+ 	tmp = readl(addr) & mask;
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 26f49f797b0fe..bbc3a46549f83 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -779,9 +779,7 @@ static int imx6_pcie_start_link(struct dw_pcie *pci)
+ 	/* Start LTSSM. */
+ 	imx6_pcie_ltssm_enable(dev);
+ 
+-	ret = dw_pcie_wait_for_link(pci);
+-	if (ret)
+-		goto err_reset_phy;
++	dw_pcie_wait_for_link(pci);
+ 
+ 	if (pci->link_gen == 2) {
+ 		/* Allow Gen2 mode after the link is up. */
+@@ -817,11 +815,7 @@ static int imx6_pcie_start_link(struct dw_pcie *pci)
+ 		}
+ 
+ 		/* Make sure link training is finished as well! */
+-		ret = dw_pcie_wait_for_link(pci);
+-		if (ret) {
+-			dev_err(dev, "Failed to bring link up!\n");
+-			goto err_reset_phy;
+-		}
++		dw_pcie_wait_for_link(pci);
+ 	} else {
+ 		dev_info(dev, "Link: Gen2 disabled\n");
+ 	}
+diff --git a/drivers/pci/controller/dwc/pcie-fu740.c b/drivers/pci/controller/dwc/pcie-fu740.c
+index 00cde9a248b5a..78d002be4f821 100644
+--- a/drivers/pci/controller/dwc/pcie-fu740.c
++++ b/drivers/pci/controller/dwc/pcie-fu740.c
+@@ -181,10 +181,59 @@ static int fu740_pcie_start_link(struct dw_pcie *pci)
+ {
+ 	struct device *dev = pci->dev;
+ 	struct fu740_pcie *afp = dev_get_drvdata(dev);
++	u8 cap_exp = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
++	int ret;
++	u32 orig, tmp;
++
++	/*
++	 * Force 2.5GT/s when starting the link, due to some devices not
++	 * probing at higher speeds. This happens with the PCIe switch
++	 * on the Unmatched board when U-Boot has not initialised the PCIe.
++	 * The fix in U-Boot is to force 2.5GT/s, which then gets cleared
++	 * by the soft reset done by this driver.
++	 */
++	dev_dbg(dev, "cap_exp at %x\n", cap_exp);
++	dw_pcie_dbi_ro_wr_en(pci);
++
++	tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
++	orig = tmp & PCI_EXP_LNKCAP_SLS;
++	tmp &= ~PCI_EXP_LNKCAP_SLS;
++	tmp |= PCI_EXP_LNKCAP_SLS_2_5GB;
++	dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);
+ 
+ 	/* Enable LTSSM */
+ 	writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_LTSSM_ENABLE);
+-	return 0;
++
++	ret = dw_pcie_wait_for_link(pci);
++	if (ret) {
++		dev_err(dev, "error: link did not start\n");
++		goto err;
++	}
++
++	tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
++	if ((tmp & PCI_EXP_LNKCAP_SLS) != orig) {
++		dev_dbg(dev, "changing speed back to original\n");
++
++		tmp &= ~PCI_EXP_LNKCAP_SLS;
++		tmp |= orig;
++		dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);
++
++		tmp = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
++		tmp |= PORT_LOGIC_SPEED_CHANGE;
++		dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp);
++
++		ret = dw_pcie_wait_for_link(pci);
++		if (ret) {
++			dev_err(dev, "error: link did not start at new speed\n");
++			goto err;
++		}
++	}
++
++	ret = 0;
++err:
++	WARN_ON(ret);	/* we assume that errors will be very rare */
++	dw_pcie_dbi_ro_wr_dis(pci);
++	return ret;
+ }
+ 
+ static int fu740_pcie_host_init(struct pcie_port *pp)
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index b2217e2b3efde..a924564fdbbc3 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -844,7 +844,9 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_EXP_RTSTA: {
+ 		u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
+ 		u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
+-		*value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16);
++		*value = msglog >> 16;
++		if (isr0 & PCIE_MSG_PM_PME_MASK)
++			*value |= PCI_EXP_RTSTA_PME;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
+@@ -1381,7 +1383,6 @@ static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie)
+ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ {
+ 	u32 msi_val, msi_mask, msi_status, msi_idx;
+-	u16 msi_data;
+ 
+ 	msi_mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
+ 	msi_val = advk_readl(pcie, PCIE_MSI_STATUS_REG);
+@@ -1391,13 +1392,9 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ 		if (!(BIT(msi_idx) & msi_status))
+ 			continue;
+ 
+-		/*
+-		 * msi_idx contains bits [4:0] of the msi_data and msi_data
+-		 * contains 16bit MSI interrupt number
+-		 */
+ 		advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
+-		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
+-		generic_handle_irq(msi_data);
++		if (generic_handle_domain_irq(pcie->msi_inner_domain, msi_idx) == -EINVAL)
++			dev_err_ratelimited(&pcie->pdev->dev, "unexpected MSI 0x%02x\n", msi_idx);
+ 	}
+ 
+ 	advk_writel(pcie, PCIE_ISR0_MSI_INT_PENDING,
+diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
+index d83dbd9774182..02869d3ed0314 100644
+--- a/drivers/pci/controller/pci-xgene.c
++++ b/drivers/pci/controller/pci-xgene.c
+@@ -465,7 +465,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ 		return 1;
+ 	}
+ 
+-	if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
++	if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
+ 		*ib_reg_mask |= (1 << 0);
+ 		return 0;
+ 	}
+@@ -479,28 +479,27 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ }
+ 
+ static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port,
+-				    struct resource_entry *entry,
+-				    u8 *ib_reg_mask)
++				    struct of_pci_range *range, u8 *ib_reg_mask)
+ {
+ 	void __iomem *cfg_base = port->cfg_base;
+ 	struct device *dev = port->dev;
+ 	void __iomem *bar_addr;
+ 	u32 pim_reg;
+-	u64 cpu_addr = entry->res->start;
+-	u64 pci_addr = cpu_addr - entry->offset;
+-	u64 size = resource_size(entry->res);
++	u64 cpu_addr = range->cpu_addr;
++	u64 pci_addr = range->pci_addr;
++	u64 size = range->size;
+ 	u64 mask = ~(size - 1) | EN_REG;
+ 	u32 flags = PCI_BASE_ADDRESS_MEM_TYPE_64;
+ 	u32 bar_low;
+ 	int region;
+ 
+-	region = xgene_pcie_select_ib_reg(ib_reg_mask, size);
++	region = xgene_pcie_select_ib_reg(ib_reg_mask, range->size);
+ 	if (region < 0) {
+ 		dev_warn(dev, "invalid pcie dma-range config\n");
+ 		return;
+ 	}
+ 
+-	if (entry->res->flags & IORESOURCE_PREFETCH)
++	if (range->flags & IORESOURCE_PREFETCH)
+ 		flags |= PCI_BASE_ADDRESS_MEM_PREFETCH;
+ 
+ 	bar_low = pcie_bar_low_val((u32)cpu_addr, flags);
+@@ -531,13 +530,25 @@ static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port,
+ 
+ static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie_port *port)
+ {
+-	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port);
+-	struct resource_entry *entry;
++	struct device_node *np = port->node;
++	struct of_pci_range range;
++	struct of_pci_range_parser parser;
++	struct device *dev = port->dev;
+ 	u8 ib_reg_mask = 0;
+ 
+-	resource_list_for_each_entry(entry, &bridge->dma_ranges)
+-		xgene_pcie_setup_ib_reg(port, entry, &ib_reg_mask);
++	if (of_pci_dma_range_parser_init(&parser, np)) {
++		dev_err(dev, "missing dma-ranges property\n");
++		return -EINVAL;
++	}
++
++	/* Get the dma-ranges from DT */
++	for_each_of_pci_range(&parser, &range) {
++		u64 end = range.cpu_addr + range.size - 1;
+ 
++		dev_dbg(dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n",
++			range.flags, range.cpu_addr, end, range.pci_addr);
++		xgene_pcie_setup_ib_reg(port, &range, &ib_reg_mask);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 1d3108e6c1284..0e765e0644c22 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -98,6 +98,8 @@ static int pcie_poll_cmd(struct controller *ctrl, int timeout)
+ 		if (slot_status & PCI_EXP_SLTSTA_CC) {
+ 			pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+ 						   PCI_EXP_SLTSTA_CC);
++			ctrl->cmd_busy = 0;
++			smp_mb();
+ 			return 1;
+ 		}
+ 		msleep(10);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index db864bf634a3e..6272b122f400d 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1811,6 +1811,18 @@ static void quirk_alder_ioapic(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL,	PCI_DEVICE_ID_INTEL_EESSC,	quirk_alder_ioapic);
+ #endif
+ 
++static void quirk_no_msi(struct pci_dev *dev)
++{
++	pci_info(dev, "avoiding MSI to work around a hardware defect\n");
++	dev->no_msi = 1;
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_no_msi);
++
+ static void quirk_pcie_mch(struct pci_dev *pdev)
+ {
+ 	pdev->no_msi = 1;
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init.c b/drivers/phy/broadcom/phy-brcm-usb-init.c
+index 9391ab42a12b3..dd0f66288fbdd 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init.c
++++ b/drivers/phy/broadcom/phy-brcm-usb-init.c
+@@ -79,6 +79,7 @@
+ 
+ enum brcm_family_type {
+ 	BRCM_FAMILY_3390A0,
++	BRCM_FAMILY_4908,
+ 	BRCM_FAMILY_7250B0,
+ 	BRCM_FAMILY_7271A0,
+ 	BRCM_FAMILY_7364A0,
+@@ -96,6 +97,7 @@ enum brcm_family_type {
+ 
+ static const char *family_names[BRCM_FAMILY_COUNT] = {
+ 	USB_BRCM_FAMILY(3390A0),
++	USB_BRCM_FAMILY(4908),
+ 	USB_BRCM_FAMILY(7250B0),
+ 	USB_BRCM_FAMILY(7271A0),
+ 	USB_BRCM_FAMILY(7364A0),
+@@ -203,6 +205,27 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ 		USB_CTRL_USB_PM_USB20_HC_RESETB_VAR_MASK,
+ 		ENDIAN_SETTINGS, /* USB_CTRL_SETUP ENDIAN bits */
+ 	},
++	/* 4908 */
++	[BRCM_FAMILY_4908] = {
++		0, /* USB_CTRL_SETUP_SCB1_EN_MASK */
++		0, /* USB_CTRL_SETUP_SCB2_EN_MASK */
++		0, /* USB_CTRL_SETUP_SS_EHCI64BIT_EN_MASK */
++		0, /* USB_CTRL_SETUP_STRAP_IPP_SEL_MASK */
++		0, /* USB_CTRL_SETUP_OC3_DISABLE_MASK */
++		0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
++		0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
++		USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK,
++		USB_CTRL_USB_PM_USB_PWRDN_MASK,
++		0, /* USB_CTRL_USB30_CTL1_XHC_SOFT_RESETB_MASK */
++		0, /* USB_CTRL_USB30_CTL1_USB3_IOC_MASK */
++		0, /* USB_CTRL_USB30_CTL1_USB3_IPP_MASK */
++		0, /* USB_CTRL_USB_DEVICE_CTL1_PORT_MODE_MASK */
++		0, /* USB_CTRL_USB_PM_SOFT_RESET_MASK */
++		0, /* USB_CTRL_SETUP_CC_DRD_MODE_ENABLE_MASK */
++		0, /* USB_CTRL_SETUP_STRAP_CC_DRD_MODE_ENABLE_SEL_MASK */
++		0, /* USB_CTRL_USB_PM_USB20_HC_RESETB_VAR_MASK */
++		0, /* USB_CTRL_SETUP ENDIAN bits */
++	},
+ 	/* 7250b0 */
+ 	[BRCM_FAMILY_7250B0] = {
+ 		USB_CTRL_SETUP_SCB1_EN_MASK,
+@@ -559,6 +582,7 @@ static void brcmusb_usb3_pll_54mhz(struct brcm_usb_init_params *params)
+ 	 */
+ 	switch (params->selected_family) {
+ 	case BRCM_FAMILY_3390A0:
++	case BRCM_FAMILY_4908:
+ 	case BRCM_FAMILY_7250B0:
+ 	case BRCM_FAMILY_7366C0:
+ 	case BRCM_FAMILY_74371A0:
+@@ -1004,6 +1028,18 @@ static const struct brcm_usb_init_ops bcm7445_ops = {
+ 	.set_dual_select = usb_set_dual_select,
+ };
+ 
++void brcm_usb_dvr_init_4908(struct brcm_usb_init_params *params)
++{
++	int fam;
++
++	fam = BRCM_FAMILY_4908;
++	params->selected_family = fam;
++	params->usb_reg_bits_map =
++		&usb_reg_bits_map_table[fam][0];
++	params->family_name = family_names[fam];
++	params->ops = &bcm7445_ops;
++}
++
+ void brcm_usb_dvr_init_7445(struct brcm_usb_init_params *params)
+ {
+ 	int fam;
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init.h b/drivers/phy/broadcom/phy-brcm-usb-init.h
+index a39f30fa2e991..1ccb5ddab865c 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init.h
++++ b/drivers/phy/broadcom/phy-brcm-usb-init.h
+@@ -64,6 +64,7 @@ struct  brcm_usb_init_params {
+ 	bool suspend_with_clocks;
+ };
+ 
++void brcm_usb_dvr_init_4908(struct brcm_usb_init_params *params);
+ void brcm_usb_dvr_init_7445(struct brcm_usb_init_params *params);
+ void brcm_usb_dvr_init_7216(struct brcm_usb_init_params *params);
+ void brcm_usb_dvr_init_7211b0(struct brcm_usb_init_params *params);
+diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c
+index 0f1deb6e0eabf..2cb3779fcdf82 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -283,6 +283,15 @@ static const struct attribute_group brcm_usb_phy_group = {
+ 	.attrs = brcm_usb_phy_attrs,
+ };
+ 
++static const struct match_chip_info chip_info_4908 = {
++	.init_func = &brcm_usb_dvr_init_4908,
++	.required_regs = {
++		BRCM_REGS_CTRL,
++		BRCM_REGS_XHCI_EC,
++		-1,
++	},
++};
++
+ static const struct match_chip_info chip_info_7216 = {
+ 	.init_func = &brcm_usb_dvr_init_7216,
+ 	.required_regs = {
+@@ -318,7 +327,7 @@ static const struct match_chip_info chip_info_7445 = {
+ static const struct of_device_id brcm_usb_dt_ids[] = {
+ 	{
+ 		.compatible = "brcm,bcm4908-usb-phy",
+-		.data = &chip_info_7445,
++		.data = &chip_info_4908,
+ 	},
+ 	{
+ 		.compatible = "brcm,bcm7216-usb-phy",
+diff --git a/drivers/phy/phy-core-mipi-dphy.c b/drivers/phy/phy-core-mipi-dphy.c
+index ccb4045685cdd..929e86d6558e0 100644
+--- a/drivers/phy/phy-core-mipi-dphy.c
++++ b/drivers/phy/phy-core-mipi-dphy.c
+@@ -64,10 +64,10 @@ int phy_mipi_dphy_get_default_config(unsigned long pixel_clock,
+ 	cfg->hs_trail = max(4 * 8 * ui, 60000 + 4 * 4 * ui);
+ 
+ 	cfg->init = 100;
+-	cfg->lpx = 60000;
++	cfg->lpx = 50000;
+ 	cfg->ta_get = 5 * cfg->lpx;
+ 	cfg->ta_go = 4 * cfg->lpx;
+-	cfg->ta_sure = 2 * cfg->lpx;
++	cfg->ta_sure = cfg->lpx;
+ 	cfg->wakeup = 1000;
+ 
+ 	cfg->hs_clk_rate = hs_clk_rate;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+index 5f7c421ab6e76..334cb85855a93 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+@@ -1038,6 +1038,7 @@ int mtk_pctrl_init(struct platform_device *pdev,
+ 	node = of_parse_phandle(np, "mediatek,pctl-regmap", 0);
+ 	if (node) {
+ 		pctl->regmap1 = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(pctl->regmap1))
+ 			return PTR_ERR(pctl->regmap1);
+ 	} else if (regmap) {
+@@ -1051,6 +1052,7 @@ int mtk_pctrl_init(struct platform_device *pdev,
+ 	node = of_parse_phandle(np, "mediatek,pctl-regmap", 1);
+ 	if (node) {
+ 		pctl->regmap2 = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(pctl->regmap2))
+ 			return PTR_ERR(pctl->regmap2);
+ 	}
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index 4c6f6d967b18a..da6c2c7f1c0dc 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -96,20 +96,16 @@ static int mtk_pinconf_get(struct pinctrl_dev *pctldev,
+ 			err = hw->soc->bias_get_combo(hw, desc, &pullup, &ret);
+ 			if (err)
+ 				goto out;
++			if (ret == MTK_PUPD_SET_R1R0_00)
++				ret = MTK_DISABLE;
+ 			if (param == PIN_CONFIG_BIAS_DISABLE) {
+-				if (ret == MTK_PUPD_SET_R1R0_00)
+-					ret = MTK_DISABLE;
++				if (ret != MTK_DISABLE)
++					err = -EINVAL;
+ 			} else if (param == PIN_CONFIG_BIAS_PULL_UP) {
+-				/* When desire to get pull-up value, return
+-				 *  error if current setting is pull-down
+-				 */
+-				if (!pullup)
++				if (!pullup || ret == MTK_DISABLE)
+ 					err = -EINVAL;
+ 			} else if (param == PIN_CONFIG_BIAS_PULL_DOWN) {
+-				/* When desire to get pull-down value, return
+-				 *  error if current setting is pull-up
+-				 */
+-				if (pullup)
++				if (pullup || ret == MTK_DISABLE)
+ 					err = -EINVAL;
+ 			}
+ 		} else {
+@@ -188,8 +184,7 @@ out:
+ }
+ 
+ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+-			   enum pin_config_param param,
+-			   enum pin_config_param arg)
++			   enum pin_config_param param, u32 arg)
+ {
+ 	struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
+ 	const struct mtk_pin_desc *desc;
+@@ -586,6 +581,9 @@ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
+ 	if (gpio >= hw->soc->npins)
+ 		return -EINVAL;
+ 
++	if (mtk_is_virt_gpio(hw, gpio))
++		return -EINVAL;
++
+ 	desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio];
+ 	pinmux = mtk_pctrl_get_pinmux(hw, gpio);
+ 	if (pinmux >= hw->soc->nfuncs)
+@@ -737,10 +735,10 @@ static int mtk_pconf_group_get(struct pinctrl_dev *pctldev, unsigned group,
+ 			       unsigned long *config)
+ {
+ 	struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
++	struct mtk_pinctrl_group *grp = &hw->groups[group];
+ 
+-	*config = hw->groups[group].config;
+-
+-	return 0;
++	 /* One pin per group only */
++	return mtk_pinconf_get(pctldev, grp->pin, config);
+ }
+ 
+ static int mtk_pconf_group_set(struct pinctrl_dev *pctldev, unsigned group,
+@@ -756,8 +754,6 @@ static int mtk_pconf_group_set(struct pinctrl_dev *pctldev, unsigned group,
+ 				      pinconf_to_config_argument(configs[i]));
+ 		if (ret < 0)
+ 			return ret;
+-
+-		grp->config = configs[i];
+ 	}
+ 
+ 	return 0;
+@@ -989,7 +985,7 @@ int mtk_paris_pinctrl_probe(struct platform_device *pdev,
+ 	hw->nbase = hw->soc->nbase_names;
+ 
+ 	if (of_find_property(hw->dev->of_node,
+-			     "mediatek,rsel_resistance_in_si_unit", NULL))
++			     "mediatek,rsel-resistance-in-si-unit", NULL))
+ 		hw->rsel_si_unit = true;
+ 	else
+ 		hw->rsel_si_unit = false;
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index 39828e9c3120a..4757bf964d3cd 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -1883,8 +1883,10 @@ static int nmk_pinctrl_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	prcm_np = of_parse_phandle(np, "prcm", 0);
+-	if (prcm_np)
++	if (prcm_np) {
+ 		npct->prcm_base = of_iomap(prcm_np, 0);
++		of_node_put(prcm_np);
++	}
+ 	if (!npct->prcm_base) {
+ 		if (version == PINCTRL_NMK_STN8815) {
+ 			dev_info(&pdev->dev,
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+index 4d81908d6725d..41136f63014a4 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+@@ -78,7 +78,6 @@ struct npcm7xx_gpio {
+ 	struct gpio_chip	gc;
+ 	int			irqbase;
+ 	int			irq;
+-	void			*priv;
+ 	struct irq_chip		irq_chip;
+ 	u32			pinctrl_id;
+ 	int (*direction_input)(struct gpio_chip *chip, unsigned offset);
+@@ -226,7 +225,7 @@ static void npcmgpio_irq_handler(struct irq_desc *desc)
+ 	chained_irq_enter(chip, desc);
+ 	sts = ioread32(bank->base + NPCM7XX_GP_N_EVST);
+ 	en  = ioread32(bank->base + NPCM7XX_GP_N_EVEN);
+-	dev_dbg(chip->parent_device, "==> got irq sts %.8x %.8x\n", sts,
++	dev_dbg(bank->gc.parent, "==> got irq sts %.8x %.8x\n", sts,
+ 		en);
+ 
+ 	sts &= en;
+@@ -241,33 +240,33 @@ static int npcmgpio_set_irq_type(struct irq_data *d, unsigned int type)
+ 		gpiochip_get_data(irq_data_get_irq_chip_data(d));
+ 	unsigned int gpio = BIT(d->hwirq);
+ 
+-	dev_dbg(d->chip->parent_device, "setirqtype: %u.%u = %u\n", gpio,
++	dev_dbg(bank->gc.parent, "setirqtype: %u.%u = %u\n", gpio,
+ 		d->irq, type);
+ 	switch (type) {
+ 	case IRQ_TYPE_EDGE_RISING:
+-		dev_dbg(d->chip->parent_device, "edge.rising\n");
++		dev_dbg(bank->gc.parent, "edge.rising\n");
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	case IRQ_TYPE_EDGE_FALLING:
+-		dev_dbg(d->chip->parent_device, "edge.falling\n");
++		dev_dbg(bank->gc.parent, "edge.falling\n");
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ 		npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	case IRQ_TYPE_EDGE_BOTH:
+-		dev_dbg(d->chip->parent_device, "edge.both\n");
++		dev_dbg(bank->gc.parent, "edge.both\n");
+ 		npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+-		dev_dbg(d->chip->parent_device, "level.low\n");
++		dev_dbg(bank->gc.parent, "level.low\n");
+ 		npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+-		dev_dbg(d->chip->parent_device, "level.high\n");
++		dev_dbg(bank->gc.parent, "level.high\n");
+ 		npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ 		break;
+ 	default:
+-		dev_dbg(d->chip->parent_device, "invalid irq type\n");
++		dev_dbg(bank->gc.parent, "invalid irq type\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -289,7 +288,7 @@ static void npcmgpio_irq_ack(struct irq_data *d)
+ 		gpiochip_get_data(irq_data_get_irq_chip_data(d));
+ 	unsigned int gpio = d->hwirq;
+ 
+-	dev_dbg(d->chip->parent_device, "irq_ack: %u.%u\n", gpio, d->irq);
++	dev_dbg(bank->gc.parent, "irq_ack: %u.%u\n", gpio, d->irq);
+ 	iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVST);
+ }
+ 
+@@ -301,7 +300,7 @@ static void npcmgpio_irq_mask(struct irq_data *d)
+ 	unsigned int gpio = d->hwirq;
+ 
+ 	/* Clear events */
+-	dev_dbg(d->chip->parent_device, "irq_mask: %u.%u\n", gpio, d->irq);
++	dev_dbg(bank->gc.parent, "irq_mask: %u.%u\n", gpio, d->irq);
+ 	iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVENC);
+ }
+ 
+@@ -313,7 +312,7 @@ static void npcmgpio_irq_unmask(struct irq_data *d)
+ 	unsigned int gpio = d->hwirq;
+ 
+ 	/* Enable events */
+-	dev_dbg(d->chip->parent_device, "irq_unmask: %u.%u\n", gpio, d->irq);
++	dev_dbg(bank->gc.parent, "irq_unmask: %u.%u\n", gpio, d->irq);
+ 	iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVENS);
+ }
+ 
+@@ -323,7 +322,7 @@ static unsigned int npcmgpio_irq_startup(struct irq_data *d)
+ 	unsigned int gpio = d->hwirq;
+ 
+ 	/* active-high, input, clear interrupt, enable interrupt */
+-	dev_dbg(d->chip->parent_device, "startup: %u.%u\n", gpio, d->irq);
++	dev_dbg(gc->parent, "startup: %u.%u\n", gpio, d->irq);
+ 	npcmgpio_direction_input(gc, gpio);
+ 	npcmgpio_irq_ack(d);
+ 	npcmgpio_irq_unmask(d);
+@@ -905,7 +904,7 @@ static struct npcm7xx_func npcm7xx_funcs[] = {
+ #define DRIVE_STRENGTH_HI_SHIFT		12
+ #define DRIVE_STRENGTH_MASK		0x0000FF00
+ 
+-#define DS(lo, hi)	(((lo) << DRIVE_STRENGTH_LO_SHIFT) | \
++#define DSTR(lo, hi)	(((lo) << DRIVE_STRENGTH_LO_SHIFT) | \
+ 			 ((hi) << DRIVE_STRENGTH_HI_SHIFT))
+ #define DSLO(x)		(((x) >> DRIVE_STRENGTH_LO_SHIFT) & 0xF)
+ #define DSHI(x)		(((x) >> DRIVE_STRENGTH_HI_SHIFT) & 0xF)
+@@ -925,31 +924,31 @@ struct npcm7xx_pincfg {
+ static const struct npcm7xx_pincfg pincfg[] = {
+ 	/*		PIN	  FUNCTION 1		   FUNCTION 2		  FUNCTION 3	    FLAGS */
+ 	NPCM7XX_PINCFG(0,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(1,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(2,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(1,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(2,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(3,	 iox1, MFSEL1, 30,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(4,	 iox2, MFSEL3, 14,	 smb1d, I2CSEGSEL, 7,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(5,	 iox2, MFSEL3, 14,	 smb1d, I2CSEGSEL, 7,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(6,	 iox2, MFSEL3, 14,	 smb2d, I2CSEGSEL, 10,  none, NONE, 0,       SLEW),
+ 	NPCM7XX_PINCFG(7,	 iox2, MFSEL3, 14,	 smb2d, I2CSEGSEL, 10,  none, NONE, 0,       SLEW),
+-	NPCM7XX_PINCFG(8,      lkgpo1, FLOCKR1, 4,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(9,      lkgpo2, FLOCKR1, 8,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(10,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(11,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(8,      lkgpo1, FLOCKR1, 4,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(9,      lkgpo2, FLOCKR1, 8,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(10,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(11,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(12,	 gspi, MFSEL1, 24,	 smb5b, I2CSEGSEL, 19,  none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(13,	 gspi, MFSEL1, 24,	 smb5b, I2CSEGSEL, 19,  none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(14,	 gspi, MFSEL1, 24,	 smb5c, I2CSEGSEL, 20,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(15,	 gspi, MFSEL1, 24,	 smb5c, I2CSEGSEL, 20,	none, NONE, 0,	     SLEW),
+-	NPCM7XX_PINCFG(16,     lkgpo0, FLOCKR1, 0,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(17,      pspi2, MFSEL3, 13,     smb4den, I2CSEGSEL, 23,  none, NONE, 0,       DS(8, 12)),
+-	NPCM7XX_PINCFG(18,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(19,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(16,     lkgpo0, FLOCKR1, 0,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(17,      pspi2, MFSEL3, 13,     smb4den, I2CSEGSEL, 23,  none, NONE, 0,       DSTR(8, 12)),
++	NPCM7XX_PINCFG(18,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(19,      pspi2, MFSEL3, 13,	 smb4b, I2CSEGSEL, 14,  none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(20,	smb4c, I2CSEGSEL, 15,    smb15, MFSEL3, 8,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(21,	smb4c, I2CSEGSEL, 15,    smb15, MFSEL3, 8,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(22,      smb4d, I2CSEGSEL, 16,	 smb14, MFSEL3, 7,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(23,      smb4d, I2CSEGSEL, 16,	 smb14, MFSEL3, 7,      none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(24,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(25,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(24,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(25,	 ioxh, MFSEL3, 18,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(26,	 smb5, MFSEL1, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(27,	 smb5, MFSEL1, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(28,	 smb4, MFSEL1, 1,	  none, NONE, 0,	none, NONE, 0,	     0),
+@@ -965,12 +964,12 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(39,	smb3b, I2CSEGSEL, 11,	  none, NONE, 0,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(40,	smb3b, I2CSEGSEL, 11,	  none, NONE, 0,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(41,  bmcuart0a, MFSEL1, 9,         none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(42,  bmcuart0a, MFSEL1, 9,         none, NONE, 0,	none, NONE, 0,	     DS(2, 4) | GPO),
++	NPCM7XX_PINCFG(42,  bmcuart0a, MFSEL1, 9,         none, NONE, 0,	none, NONE, 0,	     DSTR(2, 4) | GPO),
+ 	NPCM7XX_PINCFG(43,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,  bmcuart1, MFSEL3, 24,    0),
+ 	NPCM7XX_PINCFG(44,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,  bmcuart1, MFSEL3, 24,    0),
+ 	NPCM7XX_PINCFG(45,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(46,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DS(2, 8)),
+-	NPCM7XX_PINCFG(47,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DS(2, 8)),
++	NPCM7XX_PINCFG(46,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DSTR(2, 8)),
++	NPCM7XX_PINCFG(47,      uart1, MFSEL1, 10,	 jtag2, MFSEL4, 0,	none, NONE, 0,	     DSTR(2, 8)),
+ 	NPCM7XX_PINCFG(48,	uart2, MFSEL1, 11,   bmcuart0b, MFSEL4, 1,      none, NONE, 0,	     GPO),
+ 	NPCM7XX_PINCFG(49,	uart2, MFSEL1, 11,   bmcuart0b, MFSEL4, 1,      none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(50,	uart2, MFSEL1, 11,	  none, NONE, 0,        none, NONE, 0,	     0),
+@@ -980,8 +979,8 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(54,	uart2, MFSEL1, 11,	  none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(55,	uart2, MFSEL1, 11,	  none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(56,	r1err, MFSEL1, 12,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(57,       r1md, MFSEL1, 13,        none, NONE, 0,        none, NONE, 0,       DS(2, 4)),
+-	NPCM7XX_PINCFG(58,       r1md, MFSEL1, 13,        none, NONE, 0,	none, NONE, 0,	     DS(2, 4)),
++	NPCM7XX_PINCFG(57,       r1md, MFSEL1, 13,        none, NONE, 0,        none, NONE, 0,       DSTR(2, 4)),
++	NPCM7XX_PINCFG(58,       r1md, MFSEL1, 13,        none, NONE, 0,	none, NONE, 0,	     DSTR(2, 4)),
+ 	NPCM7XX_PINCFG(59,	smb3d, I2CSEGSEL, 13,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(60,	smb3d, I2CSEGSEL, 13,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(61,      uart1, MFSEL1, 10,	  none, NONE, 0,	none, NONE, 0,     GPO),
+@@ -1004,19 +1003,19 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(77,    fanin13, MFSEL2, 13,        none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(78,    fanin14, MFSEL2, 14,        none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(79,    fanin15, MFSEL2, 15,        none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(80,	 pwm0, MFSEL2, 16,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(81,	 pwm1, MFSEL2, 17,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(82,	 pwm2, MFSEL2, 18,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(83,	 pwm3, MFSEL2, 19,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(84,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(85,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(86,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(80,	 pwm0, MFSEL2, 16,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(81,	 pwm1, MFSEL2, 17,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(82,	 pwm2, MFSEL2, 18,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(83,	 pwm3, MFSEL2, 19,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(84,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(85,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(86,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(87,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(88,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(89,         r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(90,      r2err, MFSEL1, 15,        none, NONE, 0,        none, NONE, 0,       0),
+-	NPCM7XX_PINCFG(91,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DS(2, 4)),
+-	NPCM7XX_PINCFG(92,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DS(2, 4)),
++	NPCM7XX_PINCFG(91,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DSTR(2, 4)),
++	NPCM7XX_PINCFG(92,       r2md, MFSEL1, 16,	  none, NONE, 0,        none, NONE, 0,	     DSTR(2, 4)),
+ 	NPCM7XX_PINCFG(93,    ga20kbc, MFSEL1, 17,	 smb5d, I2CSEGSEL, 21,  none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(94,    ga20kbc, MFSEL1, 17,	 smb5d, I2CSEGSEL, 21,  none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(95,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    0),
+@@ -1062,34 +1061,34 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(133,	smb10, MFSEL4, 13,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(134,	smb11, MFSEL4, 14,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(135,	smb11, MFSEL4, 14,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(136,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(137,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(138,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(139,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(140,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(136,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(137,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(138,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(139,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(140,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(141,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(142,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(142,	  sd1, MFSEL3, 12,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(143,       sd1, MFSEL3, 12,      sd1pwr, MFSEL4, 5,      none, NONE, 0,       0),
+-	NPCM7XX_PINCFG(144,	 pwm4, MFSEL2, 20,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(145,	 pwm5, MFSEL2, 21,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(146,	 pwm6, MFSEL2, 22,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(147,	 pwm7, MFSEL2, 23,	  none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(148,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(149,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(150,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(151,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(152,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(144,	 pwm4, MFSEL2, 20,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(145,	 pwm5, MFSEL2, 21,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(146,	 pwm6, MFSEL2, 22,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(147,	 pwm7, MFSEL2, 23,	  none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(148,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(149,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(150,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(151,	 mmc8, MFSEL3, 11,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(152,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(153,     mmcwp, FLOCKR1, 24,       none, NONE, 0,	none, NONE, 0,	     0),  /* Z1/A1 */
+-	NPCM7XX_PINCFG(154,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(154,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(155,     mmccd, MFSEL3, 25,      mmcrst, MFSEL4, 6,      none, NONE, 0,       0),  /* Z1/A1 */
+-	NPCM7XX_PINCFG(156,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(157,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(158,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(159,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-
+-	NPCM7XX_PINCFG(160,    clkout, MFSEL1, 21,        none, NONE, 0,        none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(161,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    DS(8, 12)),
+-	NPCM7XX_PINCFG(162,    serirq, NONE, 0,           gpio, MFSEL1, 31,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(156,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(157,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(158,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(159,	  mmc, MFSEL3, 10,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++
++	NPCM7XX_PINCFG(160,    clkout, MFSEL1, 21,        none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(161,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    DSTR(8, 12)),
++	NPCM7XX_PINCFG(162,    serirq, NONE, 0,           gpio, MFSEL1, 31,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(163,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    0),
+ 	NPCM7XX_PINCFG(164,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    SLEWLPC),
+ 	NPCM7XX_PINCFG(165,	  lpc, NONE, 0,		  espi, MFSEL4, 8,      gpio, MFSEL1, 26,    SLEWLPC),
+@@ -1102,25 +1101,25 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(172,	 smb6, MFSEL3, 1,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(173,	 smb7, MFSEL3, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(174,	 smb7, MFSEL3, 2,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(175,	pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(176,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(177,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(178,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(179,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(180,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
++	NPCM7XX_PINCFG(175,	pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(176,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(177,     pspi1, MFSEL3, 4,       faninx, MFSEL3, 3,      none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(178,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(179,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(180,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
+ 	NPCM7XX_PINCFG(181,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(182,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(183,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(184,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(185,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(186,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(187,   spi3cs1, MFSEL4, 17,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
+-	NPCM7XX_PINCFG(188,  spi3quad, MFSEL4, 20,     spi3cs2, MFSEL4, 18,     none, NONE, 0,    DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(189,  spi3quad, MFSEL4, 20,     spi3cs3, MFSEL4, 19,     none, NONE, 0,    DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(190,      gpio, FLOCKR1, 20,   nprd_smi, NONE, 0,	none, NONE, 0,	     DS(2, 4)),
+-	NPCM7XX_PINCFG(191,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),  /* XX */
+-
+-	NPCM7XX_PINCFG(192,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),  /* XX */
++	NPCM7XX_PINCFG(183,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(184,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(185,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(186,     spi3, MFSEL4, 16,	  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(187,   spi3cs1, MFSEL4, 17,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
++	NPCM7XX_PINCFG(188,  spi3quad, MFSEL4, 20,     spi3cs2, MFSEL4, 18,     none, NONE, 0,    DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(189,  spi3quad, MFSEL4, 20,     spi3cs3, MFSEL4, 19,     none, NONE, 0,    DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(190,      gpio, FLOCKR1, 20,   nprd_smi, NONE, 0,	none, NONE, 0,	     DSTR(2, 4)),
++	NPCM7XX_PINCFG(191,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),  /* XX */
++
++	NPCM7XX_PINCFG(192,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),  /* XX */
+ 	NPCM7XX_PINCFG(193,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(194,	smb0b, I2CSEGSEL, 0,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(195,	smb0b, I2CSEGSEL, 0,	  none, NONE, 0,	none, NONE, 0,	     0),
+@@ -1131,11 +1130,11 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(200,        r2, MFSEL1, 14,        none, NONE, 0,        none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(201,	   r1, MFSEL3, 9,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(202,	smb0c, I2CSEGSEL, 1,	  none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(203,    faninx, MFSEL3, 3,         none, NONE, 0,	none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(203,    faninx, MFSEL3, 3,         none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(204,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     SLEW),
+ 	NPCM7XX_PINCFG(205,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     SLEW),
+-	NPCM7XX_PINCFG(206,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DS(4, 8)),
+-	NPCM7XX_PINCFG(207,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DS(4, 8)),
++	NPCM7XX_PINCFG(206,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DSTR(4, 8)),
++	NPCM7XX_PINCFG(207,	  ddc, NONE, 0,           gpio, MFSEL3, 22,	none, NONE, 0,	     DSTR(4, 8)),
+ 	NPCM7XX_PINCFG(208,       rg2, MFSEL4, 24,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(209,       rg2, MFSEL4, 24,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(210,       rg2, MFSEL4, 24,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+@@ -1147,20 +1146,20 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ 	NPCM7XX_PINCFG(216,   rg2mdio, MFSEL4, 23,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(217,   rg2mdio, MFSEL4, 23,         ddr, MFSEL3, 26,     none, NONE, 0,       0),
+ 	NPCM7XX_PINCFG(218,     wdog1, MFSEL3, 19,        none, NONE, 0,	none, NONE, 0,	     0),
+-	NPCM7XX_PINCFG(219,     wdog2, MFSEL3, 20,        none, NONE, 0,	none, NONE, 0,	     DS(4, 8)),
++	NPCM7XX_PINCFG(219,     wdog2, MFSEL3, 20,        none, NONE, 0,	none, NONE, 0,	     DSTR(4, 8)),
+ 	NPCM7XX_PINCFG(220,	smb12, MFSEL3, 5,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(221,	smb12, MFSEL3, 5,	  none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(222,     smb13, MFSEL3, 6,         none, NONE, 0,	none, NONE, 0,	     0),
+ 	NPCM7XX_PINCFG(223,     smb13, MFSEL3, 6,         none, NONE, 0,	none, NONE, 0,	     0),
+ 
+ 	NPCM7XX_PINCFG(224,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     SLEW),
+-	NPCM7XX_PINCFG(225,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(226,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW | GPO),
+-	NPCM7XX_PINCFG(227,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(228,   spixcs1, MFSEL4, 28,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(229,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(230,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DS(8, 12) | SLEW),
+-	NPCM7XX_PINCFG(231,    clkreq, MFSEL4, 9,         none, NONE, 0,        none, NONE, 0,	     DS(8, 12)),
++	NPCM7XX_PINCFG(225,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(226,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW | GPO),
++	NPCM7XX_PINCFG(227,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(228,   spixcs1, MFSEL4, 28,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(229,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(230,	 spix, MFSEL4, 27,        none, NONE, 0,	none, NONE, 0,	     DSTR(8, 12) | SLEW),
++	NPCM7XX_PINCFG(231,    clkreq, MFSEL4, 9,         none, NONE, 0,        none, NONE, 0,	     DSTR(8, 12)),
+ 	NPCM7XX_PINCFG(253,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     GPI), /* SDHC1 power */
+ 	NPCM7XX_PINCFG(254,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     GPI), /* SDHC2 power */
+ 	NPCM7XX_PINCFG(255,	 none, NONE, 0,		  none, NONE, 0,	none, NONE, 0,	     GPI), /* DACOSEL */
+@@ -1561,7 +1560,7 @@ static int npcm7xx_get_groups_count(struct pinctrl_dev *pctldev)
+ {
+ 	struct npcm7xx_pinctrl *npcm = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	dev_dbg(npcm->dev, "group size: %d\n", ARRAY_SIZE(npcm7xx_groups));
++	dev_dbg(npcm->dev, "group size: %zu\n", ARRAY_SIZE(npcm7xx_groups));
+ 	return ARRAY_SIZE(npcm7xx_groups);
+ }
+ 
+diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
+index 22e8d4c4040e1..b1db28007986e 100644
+--- a/drivers/pinctrl/pinconf-generic.c
++++ b/drivers/pinctrl/pinconf-generic.c
+@@ -30,10 +30,10 @@ static const struct pin_config_item conf_items[] = {
+ 	PCONFDUMP(PIN_CONFIG_BIAS_BUS_HOLD, "input bias bus hold", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_BIAS_DISABLE, "input bias disabled", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_BIAS_HIGH_IMPEDANCE, "input bias high impedance", NULL, false),
+-	PCONFDUMP(PIN_CONFIG_BIAS_PULL_DOWN, "input bias pull down", NULL, false),
++	PCONFDUMP(PIN_CONFIG_BIAS_PULL_DOWN, "input bias pull down", "ohms", true),
+ 	PCONFDUMP(PIN_CONFIG_BIAS_PULL_PIN_DEFAULT,
+-				"input bias pull to pin specific state", NULL, false),
+-	PCONFDUMP(PIN_CONFIG_BIAS_PULL_UP, "input bias pull up", NULL, false),
++				"input bias pull to pin specific state", "ohms", true),
++	PCONFDUMP(PIN_CONFIG_BIAS_PULL_UP, "input bias pull up", "ohms", true),
+ 	PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_DRAIN, "output drive open drain", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_SOURCE, "output drive open source", NULL, false),
+ 	PCONFDUMP(PIN_CONFIG_DRIVE_PUSH_PULL, "output drive push pull", NULL, false),
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 2712f51eb2381..fa6becca17889 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -119,6 +119,8 @@ struct ingenic_chip_info {
+ 	unsigned int num_functions;
+ 
+ 	const u32 *pull_ups, *pull_downs;
++
++	const struct regmap_access_table *access_table;
+ };
+ 
+ struct ingenic_pinctrl {
+@@ -2179,6 +2181,17 @@ static const struct function_desc x1000_functions[] = {
+ 	{ "mac", x1000_mac_groups, ARRAY_SIZE(x1000_mac_groups), },
+ };
+ 
++static const struct regmap_range x1000_access_ranges[] = {
++	regmap_reg_range(0x000, 0x400 - 4),
++	regmap_reg_range(0x700, 0x800 - 4),
++};
++
++/* shared with X1500 */
++static const struct regmap_access_table x1000_access_table = {
++	.yes_ranges = x1000_access_ranges,
++	.n_yes_ranges = ARRAY_SIZE(x1000_access_ranges),
++};
++
+ static const struct ingenic_chip_info x1000_chip_info = {
+ 	.num_chips = 4,
+ 	.reg_offset = 0x100,
+@@ -2189,6 +2202,7 @@ static const struct ingenic_chip_info x1000_chip_info = {
+ 	.num_functions = ARRAY_SIZE(x1000_functions),
+ 	.pull_ups = x1000_pull_ups,
+ 	.pull_downs = x1000_pull_downs,
++	.access_table = &x1000_access_table,
+ };
+ 
+ static int x1500_uart0_data_pins[] = { 0x4a, 0x4b, };
+@@ -2300,6 +2314,7 @@ static const struct ingenic_chip_info x1500_chip_info = {
+ 	.num_functions = ARRAY_SIZE(x1500_functions),
+ 	.pull_ups = x1000_pull_ups,
+ 	.pull_downs = x1000_pull_downs,
++	.access_table = &x1000_access_table,
+ };
+ 
+ static const u32 x1830_pull_ups[4] = {
+@@ -2506,6 +2521,16 @@ static const struct function_desc x1830_functions[] = {
+ 	{ "mac", x1830_mac_groups, ARRAY_SIZE(x1830_mac_groups), },
+ };
+ 
++static const struct regmap_range x1830_access_ranges[] = {
++	regmap_reg_range(0x0000, 0x4000 - 4),
++	regmap_reg_range(0x7000, 0x8000 - 4),
++};
++
++static const struct regmap_access_table x1830_access_table = {
++	.yes_ranges = x1830_access_ranges,
++	.n_yes_ranges = ARRAY_SIZE(x1830_access_ranges),
++};
++
+ static const struct ingenic_chip_info x1830_chip_info = {
+ 	.num_chips = 4,
+ 	.reg_offset = 0x1000,
+@@ -2516,6 +2541,7 @@ static const struct ingenic_chip_info x1830_chip_info = {
+ 	.num_functions = ARRAY_SIZE(x1830_functions),
+ 	.pull_ups = x1830_pull_ups,
+ 	.pull_downs = x1830_pull_downs,
++	.access_table = &x1830_access_table,
+ };
+ 
+ static const u32 x2000_pull_ups[5] = {
+@@ -2969,6 +2995,17 @@ static const struct function_desc x2000_functions[] = {
+ 	{ "otg", x2000_otg_groups, ARRAY_SIZE(x2000_otg_groups), },
+ };
+ 
++static const struct regmap_range x2000_access_ranges[] = {
++	regmap_reg_range(0x000, 0x500 - 4),
++	regmap_reg_range(0x700, 0x800 - 4),
++};
++
++/* shared with X2100 */
++static const struct regmap_access_table x2000_access_table = {
++	.yes_ranges = x2000_access_ranges,
++	.n_yes_ranges = ARRAY_SIZE(x2000_access_ranges),
++};
++
+ static const struct ingenic_chip_info x2000_chip_info = {
+ 	.num_chips = 5,
+ 	.reg_offset = 0x100,
+@@ -2979,6 +3016,7 @@ static const struct ingenic_chip_info x2000_chip_info = {
+ 	.num_functions = ARRAY_SIZE(x2000_functions),
+ 	.pull_ups = x2000_pull_ups,
+ 	.pull_downs = x2000_pull_downs,
++	.access_table = &x2000_access_table,
+ };
+ 
+ static const u32 x2100_pull_ups[5] = {
+@@ -3189,6 +3227,7 @@ static const struct ingenic_chip_info x2100_chip_info = {
+ 	.num_functions = ARRAY_SIZE(x2100_functions),
+ 	.pull_ups = x2100_pull_ups,
+ 	.pull_downs = x2100_pull_downs,
++	.access_table = &x2000_access_table,
+ };
+ 
+ static u32 ingenic_gpio_read_reg(struct ingenic_gpio_chip *jzgc, u8 reg)
+@@ -4168,7 +4207,12 @@ static int __init ingenic_pinctrl_probe(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	regmap_config = ingenic_pinctrl_regmap_config;
+-	regmap_config.max_register = chip_info->num_chips * chip_info->reg_offset;
++	if (chip_info->access_table) {
++		regmap_config.rd_table = chip_info->access_table;
++		regmap_config.wr_table = chip_info->access_table;
++	} else {
++		regmap_config.max_register = chip_info->num_chips * chip_info->reg_offset - 4;
++	}
+ 
+ 	jzpc->map = devm_regmap_init_mmio(dev, base, &regmap_config);
+ 	if (IS_ERR(jzpc->map)) {
+diff --git a/drivers/pinctrl/pinctrl-microchip-sgpio.c b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+index 78765faa245ae..dfa374195694d 100644
+--- a/drivers/pinctrl/pinctrl-microchip-sgpio.c
++++ b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+@@ -18,6 +18,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/property.h>
+ #include <linux/reset.h>
++#include <linux/spinlock.h>
+ 
+ #include "core.h"
+ #include "pinconf.h"
+@@ -115,6 +116,7 @@ struct sgpio_priv {
+ 	u32 clock;
+ 	u32 __iomem *regs;
+ 	const struct sgpio_properties *properties;
++	spinlock_t lock;
+ };
+ 
+ struct sgpio_port_addr {
+@@ -216,6 +218,7 @@ static void sgpio_output_set(struct sgpio_priv *priv,
+ 			     int value)
+ {
+ 	unsigned int bit = SGPIO_SRC_BITS * addr->bit;
++	unsigned long flags;
+ 	u32 clr, set;
+ 
+ 	switch (priv->properties->arch) {
+@@ -234,7 +237,10 @@ static void sgpio_output_set(struct sgpio_priv *priv,
+ 	default:
+ 		return;
+ 	}
++
++	spin_lock_irqsave(&priv->lock, flags);
+ 	sgpio_clrsetbits(priv, REG_PORT_CONFIG, addr->port, clr, set);
++	spin_unlock_irqrestore(&priv->lock, flags);
+ }
+ 
+ static int sgpio_output_get(struct sgpio_priv *priv,
+@@ -562,10 +568,13 @@ static void microchip_sgpio_irq_settype(struct irq_data *data,
+ 	struct sgpio_bank *bank = gpiochip_get_data(chip);
+ 	unsigned int gpio = irqd_to_hwirq(data);
+ 	struct sgpio_port_addr addr;
++	unsigned long flags;
+ 	u32 ena;
+ 
+ 	sgpio_pin_to_addr(bank->priv, gpio, &addr);
+ 
++	spin_lock_irqsave(&bank->priv->lock, flags);
++
+ 	/* Disable interrupt while changing type */
+ 	ena = sgpio_readl(bank->priv, REG_INT_ENABLE, addr.bit);
+ 	sgpio_writel(bank->priv, ena & ~BIT(addr.port), REG_INT_ENABLE, addr.bit);
+@@ -582,6 +591,8 @@ static void microchip_sgpio_irq_settype(struct irq_data *data,
+ 
+ 	/* Possibly re-enable interrupts */
+ 	sgpio_writel(bank->priv, ena, REG_INT_ENABLE, addr.bit);
++
++	spin_unlock_irqrestore(&bank->priv->lock, flags);
+ }
+ 
+ static void microchip_sgpio_irq_setreg(struct irq_data *data,
+@@ -592,13 +603,16 @@ static void microchip_sgpio_irq_setreg(struct irq_data *data,
+ 	struct sgpio_bank *bank = gpiochip_get_data(chip);
+ 	unsigned int gpio = irqd_to_hwirq(data);
+ 	struct sgpio_port_addr addr;
++	unsigned long flags;
+ 
+ 	sgpio_pin_to_addr(bank->priv, gpio, &addr);
+ 
++	spin_lock_irqsave(&bank->priv->lock, flags);
+ 	if (clear)
+ 		sgpio_clrsetbits(bank->priv, reg, addr.bit, BIT(addr.port), 0);
+ 	else
+ 		sgpio_clrsetbits(bank->priv, reg, addr.bit, 0, BIT(addr.port));
++	spin_unlock_irqrestore(&bank->priv->lock, flags);
+ }
+ 
+ static void microchip_sgpio_irq_mask(struct irq_data *data)
+@@ -814,6 +828,7 @@ static int microchip_sgpio_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	priv->dev = dev;
++	spin_lock_init(&priv->lock);
+ 
+ 	reset = devm_reset_control_get_optional_shared(&pdev->dev, "switch");
+ 	if (IS_ERR(reset))
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index dc52da94af0b9..923ff21a44c05 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -2702,6 +2702,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
+ 	node = of_parse_phandle(np, "rockchip,grf", 0);
+ 	if (node) {
+ 		info->regmap_base = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(info->regmap_base))
+ 			return PTR_ERR(info->regmap_base);
+ 	} else {
+@@ -2738,6 +2739,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
+ 	node = of_parse_phandle(np, "rockchip,pmu", 0);
+ 	if (node) {
+ 		info->regmap_pmu = syscon_node_to_regmap(node);
++		of_node_put(node);
+ 		if (IS_ERR(info->regmap_pmu))
+ 			return PTR_ERR(info->regmap_pmu);
+ 	}
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 0d4ea2e22a535..12d41ac017b53 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -741,7 +741,7 @@ static int sh_pfc_suspend_init(struct sh_pfc *pfc) { return 0; }
+ 
+ #ifdef DEBUG
+ #define SH_PFC_MAX_REGS		300
+-#define SH_PFC_MAX_ENUMS	3000
++#define SH_PFC_MAX_ENUMS	5000
+ 
+ static unsigned int sh_pfc_errors __initdata;
+ static unsigned int sh_pfc_warnings __initdata;
+@@ -865,7 +865,8 @@ static void __init sh_pfc_check_cfg_reg(const char *drvname,
+ 			 GENMASK(cfg_reg->reg_width - 1, 0));
+ 
+ 	if (cfg_reg->field_width) {
+-		n = cfg_reg->reg_width / cfg_reg->field_width;
++		fw = cfg_reg->field_width;
++		n = (cfg_reg->reg_width / fw) << fw;
+ 		/* Skip field checks (done at build time) */
+ 		goto check_enum_ids;
+ 	}
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77470.c b/drivers/pinctrl/renesas/pfc-r8a77470.c
+index e6e5487691c16..cf7153d06a953 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77470.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77470.c
+@@ -2140,7 +2140,7 @@ static const unsigned int vin0_clk_mux[] = {
+ 	VI0_CLK_MARK,
+ };
+ /* - VIN1 ------------------------------------------------------------------- */
+-static const union vin_data vin1_data_pins = {
++static const union vin_data12 vin1_data_pins = {
+ 	.data12 = {
+ 		RCAR_GP_PIN(3,  1), RCAR_GP_PIN(3, 2),
+ 		RCAR_GP_PIN(3,  3), RCAR_GP_PIN(3, 4),
+@@ -2150,7 +2150,7 @@ static const union vin_data vin1_data_pins = {
+ 		RCAR_GP_PIN(3, 15), RCAR_GP_PIN(3, 16),
+ 	},
+ };
+-static const union vin_data vin1_data_mux = {
++static const union vin_data12 vin1_data_mux = {
+ 	.data12 = {
+ 		VI1_DATA0_MARK, VI1_DATA1_MARK,
+ 		VI1_DATA2_MARK, VI1_DATA3_MARK,
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+index 6b77fd24571e1..5c00be179082d 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+@@ -504,13 +504,11 @@ static const struct samsung_pin_ctrl exynos850_pin_ctrl[] __initconst = {
+ 		/* pin-controller instance 0 ALIVE data */
+ 		.pin_banks	= exynos850_pin_banks0,
+ 		.nr_banks	= ARRAY_SIZE(exynos850_pin_banks0),
+-		.eint_gpio_init = exynos_eint_gpio_init,
+ 		.eint_wkup_init = exynos_eint_wkup_init,
+ 	}, {
+ 		/* pin-controller instance 1 CMGP data */
+ 		.pin_banks	= exynos850_pin_banks1,
+ 		.nr_banks	= ARRAY_SIZE(exynos850_pin_banks1),
+-		.eint_gpio_init = exynos_eint_gpio_init,
+ 		.eint_wkup_init = exynos_eint_wkup_init,
+ 	}, {
+ 		/* pin-controller instance 2 AUD data */
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 23f355ae9ca01..81d512b9956cf 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1002,6 +1002,16 @@ samsung_pinctrl_get_soc_data_for_of_alias(struct platform_device *pdev)
+ 	return &(of_data->ctrl[id]);
+ }
+ 
++static void samsung_banks_of_node_put(struct samsung_pinctrl_drv_data *d)
++{
++	struct samsung_pin_bank *bank;
++	unsigned int i;
++
++	bank = d->pin_banks;
++	for (i = 0; i < d->nr_banks; ++i, ++bank)
++		of_node_put(bank->of_node);
++}
++
+ /* retrieve the soc specific data */
+ static const struct samsung_pin_ctrl *
+ samsung_pinctrl_get_soc_data(struct samsung_pinctrl_drv_data *d,
+@@ -1116,19 +1126,19 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ 	if (ctrl->retention_data) {
+ 		drvdata->retention_ctrl = ctrl->retention_data->init(drvdata,
+ 							  ctrl->retention_data);
+-		if (IS_ERR(drvdata->retention_ctrl))
+-			return PTR_ERR(drvdata->retention_ctrl);
++		if (IS_ERR(drvdata->retention_ctrl)) {
++			ret = PTR_ERR(drvdata->retention_ctrl);
++			goto err_put_banks;
++		}
+ 	}
+ 
+ 	ret = samsung_pinctrl_register(pdev, drvdata);
+ 	if (ret)
+-		return ret;
++		goto err_put_banks;
+ 
+ 	ret = samsung_gpiolib_register(pdev, drvdata);
+-	if (ret) {
+-		samsung_pinctrl_unregister(pdev, drvdata);
+-		return ret;
+-	}
++	if (ret)
++		goto err_unregister;
+ 
+ 	if (ctrl->eint_gpio_init)
+ 		ctrl->eint_gpio_init(drvdata);
+@@ -1138,6 +1148,12 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, drvdata);
+ 
+ 	return 0;
++
++err_unregister:
++	samsung_pinctrl_unregister(pdev, drvdata);
++err_put_banks:
++	samsung_banks_of_node_put(drvdata);
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/platform/chrome/Makefile b/drivers/platform/chrome/Makefile
+index f901d2e43166c..88cbc434c06b2 100644
+--- a/drivers/platform/chrome/Makefile
++++ b/drivers/platform/chrome/Makefile
+@@ -2,6 +2,7 @@
+ 
+ # tell define_trace.h where to find the cros ec trace header
+ CFLAGS_cros_ec_trace.o:=		-I$(src)
++CFLAGS_cros_ec_sensorhub_ring.o:=	-I$(src)
+ 
+ obj-$(CONFIG_CHROMEOS_LAPTOP)		+= chromeos_laptop.o
+ obj-$(CONFIG_CHROMEOS_PSTORE)		+= chromeos_pstore.o
+@@ -20,7 +21,7 @@ obj-$(CONFIG_CROS_EC_CHARDEV)		+= cros_ec_chardev.o
+ obj-$(CONFIG_CROS_EC_LIGHTBAR)		+= cros_ec_lightbar.o
+ obj-$(CONFIG_CROS_EC_VBC)		+= cros_ec_vbc.o
+ obj-$(CONFIG_CROS_EC_DEBUGFS)		+= cros_ec_debugfs.o
+-cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o cros_ec_trace.o
++cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o
+ obj-$(CONFIG_CROS_EC_SENSORHUB)		+= cros-ec-sensorhub.o
+ obj-$(CONFIG_CROS_EC_SYSFS)		+= cros_ec_sysfs.o
+ obj-$(CONFIG_CROS_USBPD_LOGGER)		+= cros_usbpd_logger.o
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_ring.c b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+index 98e37080f7609..71948dade0e2a 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub_ring.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+@@ -17,7 +17,8 @@
+ #include <linux/sort.h>
+ #include <linux/slab.h>
+ 
+-#include "cros_ec_trace.h"
++#define CREATE_TRACE_POINTS
++#include "cros_ec_sensorhub_trace.h"
+ 
+ /* Precision of fixed point for the m values from the filter */
+ #define M_PRECISION BIT(23)
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_trace.h b/drivers/platform/chrome/cros_ec_sensorhub_trace.h
+new file mode 100644
+index 0000000000000..57d9b47859692
+--- /dev/null
++++ b/drivers/platform/chrome/cros_ec_sensorhub_trace.h
+@@ -0,0 +1,123 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Trace events for the ChromeOS Sensorhub kernel module
++ *
++ * Copyright 2021 Google LLC.
++ */
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM cros_ec
++
++#if !defined(_CROS_EC_SENSORHUB_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
++#define _CROS_EC_SENSORHUB_TRACE_H_
++
++#include <linux/types.h>
++#include <linux/platform_data/cros_ec_sensorhub.h>
++
++#include <linux/tracepoint.h>
++
++TRACE_EVENT(cros_ec_sensorhub_timestamp,
++	    TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
++		current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sample_timestamp)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sample_timestamp = ec_sample_timestamp;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sample_timestamp,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_data,
++	    TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sensor_num)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sensor_num = ec_sensor_num;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sensor_num,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_filter,
++	    TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
++	TP_ARGS(state, dx, dy),
++	TP_STRUCT__entry(
++		__field(s64, dx)
++		__field(s64, dy)
++		__field(s64, median_m)
++		__field(s64, median_error)
++		__field(s64, history_len)
++		__field(s64, x)
++		__field(s64, y)
++	),
++	TP_fast_assign(
++		__entry->dx = dx;
++		__entry->dy = dy;
++		__entry->median_m = state->median_m;
++		__entry->median_error = state->median_error;
++		__entry->history_len = state->history_len;
++		__entry->x = state->x_offset;
++		__entry->y = state->y_offset;
++	),
++	TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
++		  __entry->dx,
++		__entry->dy,
++		__entry->median_m,
++		__entry->median_error,
++		__entry->history_len,
++		__entry->x,
++		__entry->y
++	)
++);
++
++
++#endif /* _CROS_EC_SENSORHUB_TRACE_H_ */
++
++/* this part must be outside header guard */
++
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++
++#undef TRACE_INCLUDE_FILE
++#define TRACE_INCLUDE_FILE cros_ec_sensorhub_trace
++
++#include <trace/define_trace.h>
+diff --git a/drivers/platform/chrome/cros_ec_trace.h b/drivers/platform/chrome/cros_ec_trace.h
+index 7e7cfc98657a4..9bb5cd2c98b8b 100644
+--- a/drivers/platform/chrome/cros_ec_trace.h
++++ b/drivers/platform/chrome/cros_ec_trace.h
+@@ -15,7 +15,6 @@
+ #include <linux/types.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
+-#include <linux/platform_data/cros_ec_sensorhub.h>
+ 
+ #include <linux/tracepoint.h>
+ 
+@@ -71,100 +70,6 @@ TRACE_EVENT(cros_ec_request_done,
+ 		  __entry->retval)
+ );
+ 
+-TRACE_EVENT(cros_ec_sensorhub_timestamp,
+-	    TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
+-		     s64 current_timestamp, s64 current_time),
+-	TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
+-		current_time),
+-	TP_STRUCT__entry(
+-		__field(u32, ec_sample_timestamp)
+-		__field(u32, ec_fifo_timestamp)
+-		__field(s64, fifo_timestamp)
+-		__field(s64, current_timestamp)
+-		__field(s64, current_time)
+-		__field(s64, delta)
+-	),
+-	TP_fast_assign(
+-		__entry->ec_sample_timestamp = ec_sample_timestamp;
+-		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
+-		__entry->fifo_timestamp = fifo_timestamp;
+-		__entry->current_timestamp = current_timestamp;
+-		__entry->current_time = current_time;
+-		__entry->delta = current_timestamp - current_time;
+-	),
+-	TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
+-		  __entry->ec_sample_timestamp,
+-		__entry->ec_fifo_timestamp,
+-		__entry->fifo_timestamp,
+-		__entry->current_timestamp,
+-		__entry->current_time,
+-		__entry->delta
+-	)
+-);
+-
+-TRACE_EVENT(cros_ec_sensorhub_data,
+-	    TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
+-		     s64 current_timestamp, s64 current_time),
+-	TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
+-	TP_STRUCT__entry(
+-		__field(u32, ec_sensor_num)
+-		__field(u32, ec_fifo_timestamp)
+-		__field(s64, fifo_timestamp)
+-		__field(s64, current_timestamp)
+-		__field(s64, current_time)
+-		__field(s64, delta)
+-	),
+-	TP_fast_assign(
+-		__entry->ec_sensor_num = ec_sensor_num;
+-		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
+-		__entry->fifo_timestamp = fifo_timestamp;
+-		__entry->current_timestamp = current_timestamp;
+-		__entry->current_time = current_time;
+-		__entry->delta = current_timestamp - current_time;
+-	),
+-	TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
+-		  __entry->ec_sensor_num,
+-		__entry->ec_fifo_timestamp,
+-		__entry->fifo_timestamp,
+-		__entry->current_timestamp,
+-		__entry->current_time,
+-		__entry->delta
+-	)
+-);
+-
+-TRACE_EVENT(cros_ec_sensorhub_filter,
+-	    TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
+-	TP_ARGS(state, dx, dy),
+-	TP_STRUCT__entry(
+-		__field(s64, dx)
+-		__field(s64, dy)
+-		__field(s64, median_m)
+-		__field(s64, median_error)
+-		__field(s64, history_len)
+-		__field(s64, x)
+-		__field(s64, y)
+-	),
+-	TP_fast_assign(
+-		__entry->dx = dx;
+-		__entry->dy = dy;
+-		__entry->median_m = state->median_m;
+-		__entry->median_error = state->median_error;
+-		__entry->history_len = state->history_len;
+-		__entry->x = state->x_offset;
+-		__entry->y = state->y_offset;
+-	),
+-	TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
+-		  __entry->dx,
+-		__entry->dy,
+-		__entry->median_m,
+-		__entry->median_error,
+-		__entry->history_len,
+-		__entry->x,
+-		__entry->y
+-	)
+-);
+-
+-
+ #endif /* _CROS_EC_TRACE_H_ */
+ 
+ /* this part must be outside header guard */
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 5de0bfb0bc4d9..952c1756f59ee 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -1075,7 +1075,13 @@ static int cros_typec_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	typec->dev = dev;
++
+ 	typec->ec = dev_get_drvdata(pdev->dev.parent);
++	if (!typec->ec) {
++		dev_err(dev, "couldn't find parent EC device\n");
++		return -ENODEV;
++	}
++
+ 	platform_set_drvdata(pdev, typec);
+ 
+ 	ret = cros_typec_get_cmd_version(typec);
+diff --git a/drivers/platform/x86/huawei-wmi.c b/drivers/platform/x86/huawei-wmi.c
+index a2d846c4a7eef..eac3e6b4ea113 100644
+--- a/drivers/platform/x86/huawei-wmi.c
++++ b/drivers/platform/x86/huawei-wmi.c
+@@ -470,10 +470,17 @@ static DEVICE_ATTR_RW(charge_control_thresholds);
+ 
+ static int huawei_wmi_battery_add(struct power_supply *battery)
+ {
+-	device_create_file(&battery->dev, &dev_attr_charge_control_start_threshold);
+-	device_create_file(&battery->dev, &dev_attr_charge_control_end_threshold);
++	int err = 0;
+ 
+-	return 0;
++	err = device_create_file(&battery->dev, &dev_attr_charge_control_start_threshold);
++	if (err)
++		return err;
++
++	err = device_create_file(&battery->dev, &dev_attr_charge_control_end_threshold);
++	if (err)
++		device_remove_file(&battery->dev, &dev_attr_charge_control_start_threshold);
++
++	return err;
+ }
+ 
+ static int huawei_wmi_battery_remove(struct power_supply *battery)
+diff --git a/drivers/power/reset/gemini-poweroff.c b/drivers/power/reset/gemini-poweroff.c
+index 90e35c07240ae..b7f7a8225f22e 100644
+--- a/drivers/power/reset/gemini-poweroff.c
++++ b/drivers/power/reset/gemini-poweroff.c
+@@ -107,8 +107,8 @@ static int gemini_poweroff_probe(struct platform_device *pdev)
+ 		return PTR_ERR(gpw->base);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq)
+-		return -EINVAL;
++	if (irq < 0)
++		return irq;
+ 
+ 	gpw->dev = dev;
+ 
+diff --git a/drivers/power/supply/ab8500_chargalg.c b/drivers/power/supply/ab8500_chargalg.c
+index ff4b26b1cecae..b809fa5abbbaf 100644
+--- a/drivers/power/supply/ab8500_chargalg.c
++++ b/drivers/power/supply/ab8500_chargalg.c
+@@ -2019,11 +2019,11 @@ static int ab8500_chargalg_probe(struct platform_device *pdev)
+ 	psy_cfg.drv_data = di;
+ 
+ 	/* Initilialize safety timer */
+-	hrtimer_init(&di->safety_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
++	hrtimer_init(&di->safety_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	di->safety_timer.function = ab8500_chargalg_safety_timer_expired;
+ 
+ 	/* Initilialize maintenance timer */
+-	hrtimer_init(&di->maintenance_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
++	hrtimer_init(&di->maintenance_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	di->maintenance_timer.function =
+ 		ab8500_chargalg_maintenance_timer_expired;
+ 
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index 05fe9724ba508..57799a8079d44 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -2545,8 +2545,10 @@ static int ab8500_fg_sysfs_init(struct ab8500_fg *di)
+ 	ret = kobject_init_and_add(&di->fg_kobject,
+ 		&ab8500_fg_ktype,
+ 		NULL, "battery");
+-	if (ret < 0)
++	if (ret < 0) {
++		kobject_put(&di->fg_kobject);
+ 		dev_err(di->dev, "failed to create sysfs entry\n");
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index 35ff0c8fe96f5..16c4876fe5afb 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -39,6 +39,7 @@
+ #define BQ24190_REG_POC_CHG_CONFIG_DISABLE		0x0
+ #define BQ24190_REG_POC_CHG_CONFIG_CHARGE		0x1
+ #define BQ24190_REG_POC_CHG_CONFIG_OTG			0x2
++#define BQ24190_REG_POC_CHG_CONFIG_OTG_ALT		0x3
+ #define BQ24190_REG_POC_SYS_MIN_MASK		(BIT(3) | BIT(2) | BIT(1))
+ #define BQ24190_REG_POC_SYS_MIN_SHIFT		1
+ #define BQ24190_REG_POC_SYS_MIN_MIN			3000
+@@ -550,7 +551,11 @@ static int bq24190_vbus_is_enabled(struct regulator_dev *dev)
+ 	pm_runtime_mark_last_busy(bdi->dev);
+ 	pm_runtime_put_autosuspend(bdi->dev);
+ 
+-	return ret ? ret : val == BQ24190_REG_POC_CHG_CONFIG_OTG;
++	if (ret)
++		return ret;
++
++	return (val == BQ24190_REG_POC_CHG_CONFIG_OTG ||
++		val == BQ24190_REG_POC_CHG_CONFIG_OTG_ALT);
+ }
+ 
+ static const struct regulator_ops bq24190_vbus_ops = {
+diff --git a/drivers/power/supply/sbs-charger.c b/drivers/power/supply/sbs-charger.c
+index 6fa65d118ec12..b08f7d0c41815 100644
+--- a/drivers/power/supply/sbs-charger.c
++++ b/drivers/power/supply/sbs-charger.c
+@@ -18,6 +18,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/regmap.h>
+ #include <linux/bitops.h>
++#include <linux/devm-helpers.h>
+ 
+ #define SBS_CHARGER_REG_SPEC_INFO		0x11
+ #define SBS_CHARGER_REG_STATUS			0x13
+@@ -209,7 +210,12 @@ static int sbs_probe(struct i2c_client *client,
+ 		if (ret)
+ 			return dev_err_probe(&client->dev, ret, "Failed to request irq\n");
+ 	} else {
+-		INIT_DELAYED_WORK(&chip->work, sbs_delayed_work);
++		ret = devm_delayed_work_autocancel(&client->dev, &chip->work,
++						   sbs_delayed_work);
++		if (ret)
++			return dev_err_probe(&client->dev, ret,
++					     "Failed to init work for polling\n");
++
+ 		schedule_delayed_work(&chip->work,
+ 				      msecs_to_jiffies(SBS_CHARGER_POLL_TIME));
+ 	}
+@@ -220,15 +226,6 @@ static int sbs_probe(struct i2c_client *client,
+ 	return 0;
+ }
+ 
+-static int sbs_remove(struct i2c_client *client)
+-{
+-	struct sbs_info *chip = i2c_get_clientdata(client);
+-
+-	cancel_delayed_work_sync(&chip->work);
+-
+-	return 0;
+-}
+-
+ #ifdef CONFIG_OF
+ static const struct of_device_id sbs_dt_ids[] = {
+ 	{ .compatible = "sbs,sbs-charger" },
+@@ -245,7 +242,6 @@ MODULE_DEVICE_TABLE(i2c, sbs_id);
+ 
+ static struct i2c_driver sbs_driver = {
+ 	.probe		= sbs_probe,
+-	.remove		= sbs_remove,
+ 	.id_table	= sbs_id,
+ 	.driver = {
+ 		.name	= "sbs-charger",
+diff --git a/drivers/power/supply/wm8350_power.c b/drivers/power/supply/wm8350_power.c
+index e05cee457471b..908cfd45d2624 100644
+--- a/drivers/power/supply/wm8350_power.c
++++ b/drivers/power/supply/wm8350_power.c
+@@ -408,44 +408,112 @@ static const struct power_supply_desc wm8350_usb_desc = {
+  *		Initialisation
+  *********************************************************************/
+ 
+-static void wm8350_init_charger(struct wm8350 *wm8350)
++static int wm8350_init_charger(struct wm8350 *wm8350)
+ {
++	int ret;
++
+ 	/* register our interest in charger events */
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT,
+ 			    wm8350_charger_handler, 0, "Battery hot", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD,
++	if (ret)
++		goto err;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD,
+ 			    wm8350_charger_handler, 0, "Battery cold", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL,
++	if (ret)
++		goto free_chg_bat_hot;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL,
+ 			    wm8350_charger_handler, 0, "Battery fail", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_TO,
++	if (ret)
++		goto free_chg_bat_cold;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_TO,
+ 			    wm8350_charger_handler, 0,
+ 			    "Charger timeout", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_END,
++	if (ret)
++		goto free_chg_bat_fail;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_END,
+ 			    wm8350_charger_handler, 0,
+ 			    "Charge end", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_START,
++	if (ret)
++		goto free_chg_to;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_START,
+ 			    wm8350_charger_handler, 0,
+ 			    "Charge start", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY,
++	if (ret)
++		goto free_chg_end;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY,
+ 			    wm8350_charger_handler, 0,
+ 			    "Fast charge ready", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9,
++	if (ret)
++		goto free_chg_start;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9,
+ 			    wm8350_charger_handler, 0,
+ 			    "Battery <3.9V", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1,
++	if (ret)
++		goto free_chg_fast_rdy;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1,
+ 			    wm8350_charger_handler, 0,
+ 			    "Battery <3.1V", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85,
++	if (ret)
++		goto free_chg_vbatt_lt_3p9;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85,
+ 			    wm8350_charger_handler, 0,
+ 			    "Battery <2.85V", wm8350);
++	if (ret)
++		goto free_chg_vbatt_lt_3p1;
+ 
+ 	/* and supply change events */
+-	wm8350_register_irq(wm8350, WM8350_IRQ_EXT_USB_FB,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_USB_FB,
+ 			    wm8350_charger_handler, 0, "USB", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_EXT_WALL_FB,
++	if (ret)
++		goto free_chg_vbatt_lt_2p85;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_WALL_FB,
+ 			    wm8350_charger_handler, 0, "Wall", wm8350);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_EXT_BAT_FB,
++	if (ret)
++		goto free_ext_usb_fb;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_BAT_FB,
+ 			    wm8350_charger_handler, 0, "Battery", wm8350);
++	if (ret)
++		goto free_ext_wall_fb;
++
++	return 0;
++
++free_ext_wall_fb:
++	wm8350_free_irq(wm8350, WM8350_IRQ_EXT_WALL_FB, wm8350);
++free_ext_usb_fb:
++	wm8350_free_irq(wm8350, WM8350_IRQ_EXT_USB_FB, wm8350);
++free_chg_vbatt_lt_2p85:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85, wm8350);
++free_chg_vbatt_lt_3p1:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1, wm8350);
++free_chg_vbatt_lt_3p9:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9, wm8350);
++free_chg_fast_rdy:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY, wm8350);
++free_chg_start:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_START, wm8350);
++free_chg_end:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_END, wm8350);
++free_chg_to:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_TO, wm8350);
++free_chg_bat_fail:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL, wm8350);
++free_chg_bat_cold:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD, wm8350);
++free_chg_bat_hot:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT, wm8350);
++err:
++	return ret;
+ }
+ 
+ static void free_charger_irq(struct wm8350 *wm8350)
+@@ -456,6 +524,7 @@ static void free_charger_irq(struct wm8350 *wm8350)
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_TO, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_END, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_START, wm8350);
++	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1, wm8350);
+ 	wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85, wm8350);
+diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
+index b740866b228d9..1e8cac699646c 100644
+--- a/drivers/powercap/dtpm_cpu.c
++++ b/drivers/powercap/dtpm_cpu.c
+@@ -150,10 +150,17 @@ static int update_pd_power_uw(struct dtpm *dtpm)
+ static void pd_release(struct dtpm *dtpm)
+ {
+ 	struct dtpm_cpu *dtpm_cpu = to_dtpm_cpu(dtpm);
++	struct cpufreq_policy *policy;
+ 
+ 	if (freq_qos_request_active(&dtpm_cpu->qos_req))
+ 		freq_qos_remove_request(&dtpm_cpu->qos_req);
+ 
++	policy = cpufreq_cpu_get(dtpm_cpu->cpu);
++	if (policy) {
++		for_each_cpu(dtpm_cpu->cpu, policy->related_cpus)
++			per_cpu(dtpm_per_cpu, dtpm_cpu->cpu) = NULL;
++	}
++	
+ 	kfree(dtpm_cpu);
+ }
+ 
+diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c
+index 35799e6401c99..2f4b11b4dfcd9 100644
+--- a/drivers/pps/clients/pps-gpio.c
++++ b/drivers/pps/clients/pps-gpio.c
+@@ -169,7 +169,7 @@ static int pps_gpio_probe(struct platform_device *pdev)
+ 	/* GPIO setup */
+ 	ret = pps_gpio_setup(dev);
+ 	if (ret)
+-		return -EINVAL;
++		return ret;
+ 
+ 	/* IRQ setup */
+ 	ret = gpiod_to_irq(data->gpio_pin);
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 0e4bc8b9329dd..b6f2cfd15dd2d 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -317,11 +317,18 @@ no_memory:
+ }
+ EXPORT_SYMBOL(ptp_clock_register);
+ 
++static int unregister_vclock(struct device *dev, void *data)
++{
++	struct ptp_clock *ptp = dev_get_drvdata(dev);
++
++	ptp_vclock_unregister(info_to_vclock(ptp->info));
++	return 0;
++}
++
+ int ptp_clock_unregister(struct ptp_clock *ptp)
+ {
+ 	if (ptp_vclock_in_use(ptp)) {
+-		pr_err("ptp: virtual clock in use\n");
+-		return -EBUSY;
++		device_for_each_child(&ptp->dev, NULL, unregister_vclock);
+ 	}
+ 
+ 	ptp->defunct = 1;
+diff --git a/drivers/pwm/pwm-lpc18xx-sct.c b/drivers/pwm/pwm-lpc18xx-sct.c
+index 8e461f3baa05a..8cc8ae16553cf 100644
+--- a/drivers/pwm/pwm-lpc18xx-sct.c
++++ b/drivers/pwm/pwm-lpc18xx-sct.c
+@@ -395,12 +395,6 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ 	lpc18xx_pwm_writel(lpc18xx_pwm, LPC18XX_PWM_LIMIT,
+ 			   BIT(lpc18xx_pwm->period_event));
+ 
+-	ret = pwmchip_add(&lpc18xx_pwm->chip);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "pwmchip_add failed: %d\n", ret);
+-		goto disable_pwmclk;
+-	}
+-
+ 	for (i = 0; i < lpc18xx_pwm->chip.npwm; i++) {
+ 		struct lpc18xx_pwm_data *data;
+ 
+@@ -410,14 +404,12 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ 				    GFP_KERNEL);
+ 		if (!data) {
+ 			ret = -ENOMEM;
+-			goto remove_pwmchip;
++			goto disable_pwmclk;
+ 		}
+ 
+ 		pwm_set_chip_data(pwm, data);
+ 	}
+ 
+-	platform_set_drvdata(pdev, lpc18xx_pwm);
+-
+ 	val = lpc18xx_pwm_readl(lpc18xx_pwm, LPC18XX_PWM_CTRL);
+ 	val &= ~LPC18XX_PWM_BIDIR;
+ 	val &= ~LPC18XX_PWM_CTRL_HALT;
+@@ -425,10 +417,16 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ 	val |= LPC18XX_PWM_PRE(0);
+ 	lpc18xx_pwm_writel(lpc18xx_pwm, LPC18XX_PWM_CTRL, val);
+ 
++	ret = pwmchip_add(&lpc18xx_pwm->chip);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "pwmchip_add failed: %d\n", ret);
++		goto disable_pwmclk;
++	}
++
++	platform_set_drvdata(pdev, lpc18xx_pwm);
++
+ 	return 0;
+ 
+-remove_pwmchip:
+-	pwmchip_remove(&lpc18xx_pwm->chip);
+ disable_pwmclk:
+ 	clk_disable_unprepare(lpc18xx_pwm->pwm_clk);
+ 	return ret;
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 9fc666107a06c..8490aa8eecb1a 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -1317,8 +1317,10 @@ static int rpm_reg_probe(struct platform_device *pdev)
+ 
+ 	for_each_available_child_of_node(dev->of_node, node) {
+ 		vreg = devm_kzalloc(&pdev->dev, sizeof(*vreg), GFP_KERNEL);
+-		if (!vreg)
++		if (!vreg) {
++			of_node_put(node);
+ 			return -ENOMEM;
++		}
+ 
+ 		ret = rpm_regulator_init_vreg(vreg, dev, node, rpm, vreg_data);
+ 
+diff --git a/drivers/regulator/rpi-panel-attiny-regulator.c b/drivers/regulator/rpi-panel-attiny-regulator.c
+index ee46bfbf5eee7..991b4730d7687 100644
+--- a/drivers/regulator/rpi-panel-attiny-regulator.c
++++ b/drivers/regulator/rpi-panel-attiny-regulator.c
+@@ -37,11 +37,24 @@ static const struct regmap_config attiny_regmap_config = {
+ static int attiny_lcd_power_enable(struct regulator_dev *rdev)
+ {
+ 	unsigned int data;
++	int ret, i;
+ 
+ 	regmap_write(rdev->regmap, REG_POWERON, 1);
++	msleep(80);
++
+ 	/* Wait for nPWRDWN to go low to indicate poweron is done. */
+-	regmap_read_poll_timeout(rdev->regmap, REG_PORTB, data,
+-					data & BIT(0), 10, 1000000);
++	for (i = 0; i < 20; i++) {
++		ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++		if (!ret) {
++			if (data & BIT(0))
++				break;
++		}
++		usleep_range(10000, 12000);
++	}
++	usleep_range(10000, 12000);
++
++	if (ret)
++		pr_err("%s: regmap_read_poll_timeout failed %d\n", __func__, ret);
+ 
+ 	/* Default to the same orientation as the closed source
+ 	 * firmware used for the panel.  Runtime rotation
+@@ -57,23 +70,34 @@ static int attiny_lcd_power_disable(struct regulator_dev *rdev)
+ {
+ 	regmap_write(rdev->regmap, REG_PWM, 0);
+ 	regmap_write(rdev->regmap, REG_POWERON, 0);
+-	udelay(1);
++	msleep(30);
+ 	return 0;
+ }
+ 
+ static int attiny_lcd_power_is_enabled(struct regulator_dev *rdev)
+ {
+ 	unsigned int data;
+-	int ret;
++	int ret, i;
+ 
+-	ret = regmap_read(rdev->regmap, REG_POWERON, &data);
++	for (i = 0; i < 10; i++) {
++		ret = regmap_read(rdev->regmap, REG_POWERON, &data);
++		if (!ret)
++			break;
++		usleep_range(10000, 12000);
++	}
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	if (!(data & BIT(0)))
+ 		return 0;
+ 
+-	ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++	for (i = 0; i < 10; i++) {
++		ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++		if (!ret)
++			break;
++		usleep_range(10000, 12000);
++	}
++
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -103,20 +127,32 @@ static int attiny_update_status(struct backlight_device *bl)
+ {
+ 	struct regmap *regmap = bl_get_data(bl);
+ 	int brightness = bl->props.brightness;
++	int ret, i;
+ 
+ 	if (bl->props.power != FB_BLANK_UNBLANK ||
+ 	    bl->props.fb_blank != FB_BLANK_UNBLANK)
+ 		brightness = 0;
+ 
+-	return regmap_write(regmap, REG_PWM, brightness);
++	for (i = 0; i < 10; i++) {
++		ret = regmap_write(regmap, REG_PWM, brightness);
++		if (!ret)
++			break;
++	}
++
++	return ret;
+ }
+ 
+ static int attiny_get_brightness(struct backlight_device *bl)
+ {
+ 	struct regmap *regmap = bl_get_data(bl);
+-	int ret, brightness;
++	int ret, brightness, i;
++
++	for (i = 0; i < 10; i++) {
++		ret = regmap_read(regmap, REG_PWM, &brightness);
++		if (!ret)
++			break;
++	}
+ 
+-	ret = regmap_read(regmap, REG_PWM, &brightness);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -166,7 +202,7 @@ static int attiny_i2c_probe(struct i2c_client *i2c,
+ 	}
+ 
+ 	regmap_write(regmap, REG_POWERON, 0);
+-	mdelay(1);
++	msleep(30);
+ 
+ 	config.dev = &i2c->dev;
+ 	config.regmap = regmap;
+diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
+index 098362e6e233b..7c02bc1322479 100644
+--- a/drivers/remoteproc/qcom_q6v5_adsp.c
++++ b/drivers/remoteproc/qcom_q6v5_adsp.c
+@@ -408,6 +408,7 @@ static int adsp_alloc_memory_region(struct qcom_adsp *adsp)
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 43ea8455546ca..b9ab91540b00d 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1806,18 +1806,20 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ 	 * reserved memory regions from device's memory-region property.
+ 	 */
+ 	child = of_get_child_by_name(qproc->dev->of_node, "mba");
+-	if (!child)
++	if (!child) {
+ 		node = of_parse_phandle(qproc->dev->of_node,
+ 					"memory-region", 0);
+-	else
++	} else {
+ 		node = of_parse_phandle(child, "memory-region", 0);
++		of_node_put(child);
++	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret) {
+ 		dev_err(qproc->dev, "unable to resolve mba region\n");
+ 		return ret;
+ 	}
+-	of_node_put(node);
+ 
+ 	qproc->mba_phys = r.start;
+ 	qproc->mba_size = resource_size(&r);
+@@ -1828,14 +1830,15 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ 	} else {
+ 		child = of_get_child_by_name(qproc->dev->of_node, "mpss");
+ 		node = of_parse_phandle(child, "memory-region", 0);
++		of_node_put(child);
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret) {
+ 		dev_err(qproc->dev, "unable to resolve mpss region\n");
+ 		return ret;
+ 	}
+-	of_node_put(node);
+ 
+ 	qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ 	qproc->mpss_size = resource_size(&r);
+diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c
+index 80bbafee98463..9a223d394087f 100644
+--- a/drivers/remoteproc/qcom_wcnss.c
++++ b/drivers/remoteproc/qcom_wcnss.c
+@@ -500,6 +500,7 @@ static int wcnss_alloc_memory_region(struct qcom_wcnss *wcnss)
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
++	of_node_put(node);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/remoteproc/remoteproc_debugfs.c b/drivers/remoteproc/remoteproc_debugfs.c
+index b5a1e3b697d9f..581930483ef84 100644
+--- a/drivers/remoteproc/remoteproc_debugfs.c
++++ b/drivers/remoteproc/remoteproc_debugfs.c
+@@ -76,7 +76,7 @@ static ssize_t rproc_coredump_write(struct file *filp,
+ 	int ret, err = 0;
+ 	char buf[20];
+ 
+-	if (count > sizeof(buf))
++	if (count < 1 || count > sizeof(buf))
+ 		return -EINVAL;
+ 
+ 	ret = copy_from_user(buf, user_buf, count);
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index d8e8357981537..9edd662c69ace 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -804,9 +804,13 @@ static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
+ 	struct timerqueue_node *next = timerqueue_getnext(&rtc->timerqueue);
+ 	struct rtc_time tm;
+ 	ktime_t now;
++	int err;
++
++	err = __rtc_read_time(rtc, &tm);
++	if (err)
++		return err;
+ 
+ 	timer->enabled = 1;
+-	__rtc_read_time(rtc, &tm);
+ 	now = rtc_tm_to_ktime(tm);
+ 
+ 	/* Skip over expired timers */
+@@ -820,7 +824,6 @@ static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
+ 	trace_rtc_timer_enqueue(timer);
+ 	if (!next || ktime_before(timer->node.expires, next->expires)) {
+ 		struct rtc_wkalrm alarm;
+-		int err;
+ 
+ 		alarm.time = rtc_ktime_to_tm(timer->node.expires);
+ 		alarm.enabled = 1;
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index 2065842f775d6..04b05e3b68cb8 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -176,8 +176,10 @@ int mc146818_set_time(struct rtc_time *time)
+ 	if (yrs >= 100)
+ 		yrs -= 100;
+ 
+-	if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY)
+-	    || RTC_ALWAYS_BCD) {
++	spin_lock_irqsave(&rtc_lock, flags);
++	save_control = CMOS_READ(RTC_CONTROL);
++	spin_unlock_irqrestore(&rtc_lock, flags);
++	if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ 		sec = bin2bcd(sec);
+ 		min = bin2bcd(min);
+ 		hrs = bin2bcd(hrs);
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index e38ee88483855..bad6a5d9c6839 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -350,9 +350,6 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+ 		}
+ 	}
+ 
+-	if (!adev->irq[0])
+-		clear_bit(RTC_FEATURE_ALARM, ldata->rtc->features);
+-
+ 	device_init_wakeup(&adev->dev, true);
+ 	ldata->rtc = devm_rtc_allocate_device(&adev->dev);
+ 	if (IS_ERR(ldata->rtc)) {
+@@ -360,6 +357,9 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+ 		goto out;
+ 	}
+ 
++	if (!adev->irq[0])
++		clear_bit(RTC_FEATURE_ALARM, ldata->rtc->features);
++
+ 	ldata->rtc->ops = ops;
+ 	ldata->rtc->range_min = vendor->range_min;
+ 	ldata->rtc->range_max = vendor->range_max;
+diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
+index 88c549f257dbf..65047806a5410 100644
+--- a/drivers/scsi/fnic/fnic_scsi.c
++++ b/drivers/scsi/fnic/fnic_scsi.c
+@@ -604,7 +604,7 @@ out:
+ 
+ 	FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no,
+ 		  tag, sc, io_req, sg_count, cmd_trace,
+-		  (((u64)CMD_FLAGS(sc) >> 32) | CMD_STATE(sc)));
++		  (((u64)CMD_FLAGS(sc) << 32) | CMD_STATE(sc)));
+ 
+ 	/* if only we issued IO, will we have the io lock */
+ 	if (io_lock_acquired)
+@@ -986,8 +986,6 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
+ 	CMD_SP(sc) = NULL;
+ 	CMD_FLAGS(sc) |= FNIC_IO_DONE;
+ 
+-	spin_unlock_irqrestore(io_lock, flags);
+-
+ 	if (hdr_status != FCPIO_SUCCESS) {
+ 		atomic64_inc(&fnic_stats->io_stats.io_failures);
+ 		shost_printk(KERN_ERR, fnic->lport->host, "hdr status = %s\n",
+@@ -996,8 +994,6 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
+ 
+ 	fnic_release_ioreq_buf(fnic, io_req, sc);
+ 
+-	mempool_free(io_req, fnic->io_req_pool);
+-
+ 	cmd_trace = ((u64)hdr_status << 56) |
+ 		  (u64)icmnd_cmpl->scsi_status << 48 |
+ 		  (u64)icmnd_cmpl->flags << 40 | (u64)sc->cmnd[0] << 32 |
+@@ -1021,6 +1017,12 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
+ 	} else
+ 		fnic->lport->host_stats.fcp_control_requests++;
+ 
++	/* Call SCSI completion function to complete the IO */
++	scsi_done(sc);
++	spin_unlock_irqrestore(io_lock, flags);
++
++	mempool_free(io_req, fnic->io_req_pool);
++
+ 	atomic64_dec(&fnic_stats->io_stats.active_ios);
+ 	if (atomic64_read(&fnic->io_cmpl_skip))
+ 		atomic64_dec(&fnic->io_cmpl_skip);
+@@ -1049,9 +1051,6 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
+ 		if(io_duration_time > atomic64_read(&fnic_stats->io_stats.current_max_io_time))
+ 			atomic64_set(&fnic_stats->io_stats.current_max_io_time, io_duration_time);
+ 	}
+-
+-	/* Call SCSI completion function to complete the IO */
+-	scsi_done(sc);
+ }
+ 
+ /* fnic_fcpio_itmf_cmpl_handler
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 11a44d9dd9b2d..b3bdbea3c9556 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -530,7 +530,7 @@ MODULE_PARM_DESC(intr_conv, "interrupt converge enable (0-1)");
+ 
+ /* permit overriding the host protection capabilities mask (EEDP/T10 PI) */
+ static int prot_mask;
+-module_param(prot_mask, int, 0);
++module_param(prot_mask, int, 0444);
+ MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=0x0 ");
+ 
+ static void debugfs_work_handler_v3_hw(struct work_struct *work);
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index a315715b36227..7e0cde710fc3c 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -197,7 +197,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ 		task->total_xfer_len = qc->nbytes;
+ 		task->num_scatter = qc->n_elem;
+ 		task->data_dir = qc->dma_dir;
+-	} else if (qc->tf.protocol == ATA_PROT_NODATA) {
++	} else if (!ata_is_data(qc->tf.protocol)) {
+ 		task->data_dir = DMA_NONE;
+ 	} else {
+ 		for_each_sg(qc->sg, sg, qc->n_elem, si)
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 0d37c4aca175c..c38e689432054 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -5737,14 +5737,13 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+  */
+ 
+ static int
+-mpt3sas_check_same_4gb_region(long reply_pool_start_address, u32 pool_sz)
++mpt3sas_check_same_4gb_region(dma_addr_t start_address, u32 pool_sz)
+ {
+-	long reply_pool_end_address;
++	dma_addr_t end_address;
+ 
+-	reply_pool_end_address = reply_pool_start_address + pool_sz;
++	end_address = start_address + pool_sz - 1;
+ 
+-	if (upper_32_bits(reply_pool_start_address) ==
+-		upper_32_bits(reply_pool_end_address))
++	if (upper_32_bits(start_address) == upper_32_bits(end_address))
+ 		return 1;
+ 	else
+ 		return 0;
+@@ -5805,7 +5804,7 @@ _base_allocate_pcie_sgl_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ 		}
+ 
+ 		if (!mpt3sas_check_same_4gb_region(
+-		    (long)ioc->pcie_sg_lookup[i].pcie_sgl, sz)) {
++		    ioc->pcie_sg_lookup[i].pcie_sgl_dma, sz)) {
+ 			ioc_err(ioc, "PCIE SGLs are not in same 4G !! pcie sgl (0x%p) dma = (0x%llx)\n",
+ 			    ioc->pcie_sg_lookup[i].pcie_sgl,
+ 			    (unsigned long long)
+@@ -5860,8 +5859,8 @@ _base_allocate_chain_dma_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ 			    GFP_KERNEL, &ctr->chain_buffer_dma);
+ 			if (!ctr->chain_buffer)
+ 				return -EAGAIN;
+-			if (!mpt3sas_check_same_4gb_region((long)
+-			    ctr->chain_buffer, ioc->chain_segment_sz)) {
++			if (!mpt3sas_check_same_4gb_region(
++			    ctr->chain_buffer_dma, ioc->chain_segment_sz)) {
+ 				ioc_err(ioc,
+ 				    "Chain buffers are not in same 4G !!! Chain buff (0x%p) dma = (0x%llx)\n",
+ 				    ctr->chain_buffer,
+@@ -5897,7 +5896,7 @@ _base_allocate_sense_dma_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ 	    GFP_KERNEL, &ioc->sense_dma);
+ 	if (!ioc->sense)
+ 		return -EAGAIN;
+-	if (!mpt3sas_check_same_4gb_region((long)ioc->sense, sz)) {
++	if (!mpt3sas_check_same_4gb_region(ioc->sense_dma, sz)) {
+ 		dinitprintk(ioc, pr_err(
+ 		    "Bad Sense Pool! sense (0x%p) sense_dma = (0x%llx)\n",
+ 		    ioc->sense, (unsigned long long) ioc->sense_dma));
+@@ -5930,7 +5929,7 @@ _base_allocate_reply_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ 	    &ioc->reply_dma);
+ 	if (!ioc->reply)
+ 		return -EAGAIN;
+-	if (!mpt3sas_check_same_4gb_region((long)ioc->reply_free, sz)) {
++	if (!mpt3sas_check_same_4gb_region(ioc->reply_dma, sz)) {
+ 		dinitprintk(ioc, pr_err(
+ 		    "Bad Reply Pool! Reply (0x%p) Reply dma = (0x%llx)\n",
+ 		    ioc->reply, (unsigned long long) ioc->reply_dma));
+@@ -5965,7 +5964,7 @@ _base_allocate_reply_free_dma_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ 	    GFP_KERNEL, &ioc->reply_free_dma);
+ 	if (!ioc->reply_free)
+ 		return -EAGAIN;
+-	if (!mpt3sas_check_same_4gb_region((long)ioc->reply_free, sz)) {
++	if (!mpt3sas_check_same_4gb_region(ioc->reply_free_dma, sz)) {
+ 		dinitprintk(ioc,
+ 		    pr_err("Bad Reply Free Pool! Reply Free (0x%p) Reply Free dma = (0x%llx)\n",
+ 		    ioc->reply_free, (unsigned long long) ioc->reply_free_dma));
+@@ -6004,7 +6003,7 @@ _base_allocate_reply_post_free_array(struct MPT3SAS_ADAPTER *ioc,
+ 	    GFP_KERNEL, &ioc->reply_post_free_array_dma);
+ 	if (!ioc->reply_post_free_array)
+ 		return -EAGAIN;
+-	if (!mpt3sas_check_same_4gb_region((long)ioc->reply_post_free_array,
++	if (!mpt3sas_check_same_4gb_region(ioc->reply_post_free_array_dma,
+ 	    reply_post_free_array_sz)) {
+ 		dinitprintk(ioc, pr_err(
+ 		    "Bad Reply Free Pool! Reply Free (0x%p) Reply Free dma = (0x%llx)\n",
+@@ -6069,7 +6068,7 @@ base_alloc_rdpq_dma_pool(struct MPT3SAS_ADAPTER *ioc, int sz)
+ 			 * resources and set DMA mask to 32 and allocate.
+ 			 */
+ 			if (!mpt3sas_check_same_4gb_region(
+-				(long)ioc->reply_post[i].reply_post_free, sz)) {
++				ioc->reply_post[i].reply_post_free_dma, sz)) {
+ 				dinitprintk(ioc,
+ 				    ioc_err(ioc, "bad Replypost free pool(0x%p)"
+ 				    "reply_post_free_dma = (0x%llx)\n",
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 066290dd57565..6332cf23bf6b2 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1783,6 +1783,7 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->device = pm8001_ha_dev;
+ 	ccb->ccb_tag = ccb_tag;
+ 	ccb->task = task;
++	ccb->n_elem = 0;
+ 
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 
+@@ -1844,6 +1845,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->device = pm8001_ha_dev;
+ 	ccb->ccb_tag = ccb_tag;
+ 	ccb->task = task;
++	ccb->n_elem = 0;
+ 	pm8001_ha_dev->id |= NCQ_READ_LOG_FLAG;
+ 	pm8001_ha_dev->id |= NCQ_2ND_RLE_FLAG;
+ 
+@@ -1860,7 +1862,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	sata_cmd.tag = cpu_to_le32(ccb_tag);
+ 	sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
+-	sata_cmd.ncqtag_atap_dir_m |= ((0x1 << 7) | (0x5 << 9));
++	sata_cmd.ncqtag_atap_dir_m = cpu_to_le32((0x1 << 7) | (0x5 << 9));
+ 	memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis));
+ 
+ 	res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd,
+@@ -2421,7 +2423,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				len = sizeof(struct pio_setup_fis);
+ 				pm8001_dbg(pm8001_ha, IO,
+ 					   "PIO read len = %d\n", len);
+-			} else if (t->ata_task.use_ncq) {
++			} else if (t->ata_task.use_ncq &&
++				   t->data_dir != DMA_NONE) {
+ 				len = sizeof(struct set_dev_bits_fis);
+ 				pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
+ 					   len);
+@@ -4282,22 +4285,22 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	u32  opc = OPC_INB_SATA_HOST_OPSTART;
+ 	memset(&sata_cmd, 0, sizeof(sata_cmd));
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+-	if (task->data_dir == DMA_NONE) {
++
++	if (task->data_dir == DMA_NONE && !task->ata_task.use_ncq) {
+ 		ATAP = 0x04;  /* no data*/
+ 		pm8001_dbg(pm8001_ha, IO, "no data\n");
+ 	} else if (likely(!task->ata_task.device_control_reg_update)) {
+-		if (task->ata_task.dma_xfer) {
++		if (task->ata_task.use_ncq &&
++		    dev->sata_dev.class != ATA_DEV_ATAPI) {
++			ATAP = 0x07; /* FPDMA */
++			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
++		} else if (task->ata_task.dma_xfer) {
+ 			ATAP = 0x06; /* DMA */
+ 			pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ 		} else {
+ 			ATAP = 0x05; /* PIO*/
+ 			pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ 		}
+-		if (task->ata_task.use_ncq &&
+-			dev->sata_dev.class != ATA_DEV_ATAPI) {
+-			ATAP = 0x07; /* FPDMA */
+-			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+-		}
+ 	}
+ 	if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+ 		task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3);
+@@ -4637,7 +4640,7 @@ int pm8001_chip_ssp_tm_req(struct pm8001_hba_info *pm8001_ha,
+ 	memcpy(sspTMCmd.lun, task->ssp_task.LUN, 8);
+ 	sspTMCmd.tag = cpu_to_le32(ccb->ccb_tag);
+ 	if (pm8001_ha->chip_id != chip_8001)
+-		sspTMCmd.ds_ads_m = 0x08;
++		sspTMCmd.ds_ads_m = cpu_to_le32(0x08);
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sspTMCmd,
+ 			sizeof(sspTMCmd), 0);
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index ca4820d99dc70..48b0154211c75 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -1202,9 +1202,11 @@ pm80xx_set_thermal_config(struct pm8001_hba_info *pm8001_ha)
+ 	else
+ 		page_code = THERMAL_PAGE_CODE_8H;
+ 
+-	payload.cfg_pg[0] = (THERMAL_LOG_ENABLE << 9) |
+-				(THERMAL_ENABLE << 8) | page_code;
+-	payload.cfg_pg[1] = (LTEMPHIL << 24) | (RTEMPHIL << 8);
++	payload.cfg_pg[0] =
++		cpu_to_le32((THERMAL_LOG_ENABLE << 9) |
++			    (THERMAL_ENABLE << 8) | page_code);
++	payload.cfg_pg[1] =
++		cpu_to_le32((LTEMPHIL << 24) | (RTEMPHIL << 8));
+ 
+ 	pm8001_dbg(pm8001_ha, DEV,
+ 		   "Setting up thermal config. cfg_pg 0 0x%x cfg_pg 1 0x%x\n",
+@@ -1244,43 +1246,41 @@ pm80xx_set_sas_protocol_timer_config(struct pm8001_hba_info *pm8001_ha)
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 	payload.tag = cpu_to_le32(tag);
+ 
+-	SASConfigPage.pageCode        =  SAS_PROTOCOL_TIMER_CONFIG_PAGE;
+-	SASConfigPage.MST_MSI         =  3 << 15;
+-	SASConfigPage.STP_SSP_MCT_TMO =  (STP_MCT_TMO << 16) | SSP_MCT_TMO;
+-	SASConfigPage.STP_FRM_TMO     = (SAS_MAX_OPEN_TIME << 24) |
+-				(SMP_MAX_CONN_TIMER << 16) | STP_FRM_TIMER;
+-	SASConfigPage.STP_IDLE_TMO    =  STP_IDLE_TIME;
+-
+-	if (SASConfigPage.STP_IDLE_TMO > 0x3FFFFFF)
+-		SASConfigPage.STP_IDLE_TMO = 0x3FFFFFF;
+-
+-
+-	SASConfigPage.OPNRJT_RTRY_INTVL =         (SAS_MFD << 16) |
+-						SAS_OPNRJT_RTRY_INTVL;
+-	SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO =  (SAS_DOPNRJT_RTRY_TMO << 16)
+-						| SAS_COPNRJT_RTRY_TMO;
+-	SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR =  (SAS_DOPNRJT_RTRY_THR << 16)
+-						| SAS_COPNRJT_RTRY_THR;
+-	SASConfigPage.MAX_AIP =  SAS_MAX_AIP;
++	SASConfigPage.pageCode = cpu_to_le32(SAS_PROTOCOL_TIMER_CONFIG_PAGE);
++	SASConfigPage.MST_MSI = cpu_to_le32(3 << 15);
++	SASConfigPage.STP_SSP_MCT_TMO =
++		cpu_to_le32((STP_MCT_TMO << 16) | SSP_MCT_TMO);
++	SASConfigPage.STP_FRM_TMO =
++		cpu_to_le32((SAS_MAX_OPEN_TIME << 24) |
++			    (SMP_MAX_CONN_TIMER << 16) | STP_FRM_TIMER);
++	SASConfigPage.STP_IDLE_TMO = cpu_to_le32(STP_IDLE_TIME);
++
++	SASConfigPage.OPNRJT_RTRY_INTVL =
++		cpu_to_le32((SAS_MFD << 16) | SAS_OPNRJT_RTRY_INTVL);
++	SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO =
++		cpu_to_le32((SAS_DOPNRJT_RTRY_TMO << 16) | SAS_COPNRJT_RTRY_TMO);
++	SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR =
++		cpu_to_le32((SAS_DOPNRJT_RTRY_THR << 16) | SAS_COPNRJT_RTRY_THR);
++	SASConfigPage.MAX_AIP = cpu_to_le32(SAS_MAX_AIP);
+ 
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.pageCode 0x%08x\n",
+-		   SASConfigPage.pageCode);
++		   le32_to_cpu(SASConfigPage.pageCode));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MST_MSI  0x%08x\n",
+-		   SASConfigPage.MST_MSI);
++		   le32_to_cpu(SASConfigPage.MST_MSI));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_SSP_MCT_TMO  0x%08x\n",
+-		   SASConfigPage.STP_SSP_MCT_TMO);
++		   le32_to_cpu(SASConfigPage.STP_SSP_MCT_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_FRM_TMO  0x%08x\n",
+-		   SASConfigPage.STP_FRM_TMO);
++		   le32_to_cpu(SASConfigPage.STP_FRM_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_IDLE_TMO  0x%08x\n",
+-		   SASConfigPage.STP_IDLE_TMO);
++		   le32_to_cpu(SASConfigPage.STP_IDLE_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.OPNRJT_RTRY_INTVL  0x%08x\n",
+-		   SASConfigPage.OPNRJT_RTRY_INTVL);
++		   le32_to_cpu(SASConfigPage.OPNRJT_RTRY_INTVL));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO  0x%08x\n",
+-		   SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO);
++		   le32_to_cpu(SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR  0x%08x\n",
+-		   SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR);
++		   le32_to_cpu(SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR));
+ 	pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MAX_AIP  0x%08x\n",
+-		   SASConfigPage.MAX_AIP);
++		   le32_to_cpu(SASConfigPage.MAX_AIP));
+ 
+ 	memcpy(&payload.cfg_pg, &SASConfigPage,
+ 			 sizeof(SASProtocolTimerConfig_t));
+@@ -1406,12 +1406,13 @@ static int pm80xx_encrypt_update(struct pm8001_hba_info *pm8001_ha)
+ 	/* Currently only one key is used. New KEK index is 1.
+ 	 * Current KEK index is 1. Store KEK to NVRAM is 1.
+ 	 */
+-	payload.new_curidx_ksop = ((1 << 24) | (1 << 16) | (1 << 8) |
+-					KEK_MGMT_SUBOP_KEYCARDUPDATE);
++	payload.new_curidx_ksop =
++		cpu_to_le32(((1 << 24) | (1 << 16) | (1 << 8) |
++			     KEK_MGMT_SUBOP_KEYCARDUPDATE));
+ 
+ 	pm8001_dbg(pm8001_ha, DEV,
+ 		   "Saving Encryption info to flash. payload 0x%x\n",
+-		   payload.new_curidx_ksop);
++		   le32_to_cpu(payload.new_curidx_ksop));
+ 
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
+@@ -1800,6 +1801,7 @@ static void pm80xx_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->device = pm8001_ha_dev;
+ 	ccb->ccb_tag = ccb_tag;
+ 	ccb->task = task;
++	ccb->n_elem = 0;
+ 
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ 
+@@ -1881,7 +1883,7 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	sata_cmd.tag = cpu_to_le32(ccb_tag);
+ 	sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
+-	sata_cmd.ncqtag_atap_dir_m_dad |= ((0x1 << 7) | (0x5 << 9));
++	sata_cmd.ncqtag_atap_dir_m_dad = cpu_to_le32(((0x1 << 7) | (0x5 << 9)));
+ 	memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis));
+ 
+ 	res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd,
+@@ -2517,7 +2519,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
+ 				len = sizeof(struct pio_setup_fis);
+ 				pm8001_dbg(pm8001_ha, IO,
+ 					   "PIO read len = %d\n", len);
+-			} else if (t->ata_task.use_ncq) {
++			} else if (t->ata_task.use_ncq &&
++				   t->data_dir != DMA_NONE) {
+ 				len = sizeof(struct set_dev_bits_fis);
+ 				pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
+ 					   len);
+@@ -4388,13 +4391,15 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 	struct ssp_ini_io_start_req ssp_cmd;
+ 	u32 tag = ccb->ccb_tag;
+ 	int ret;
+-	u64 phys_addr, start_addr, end_addr;
++	u64 phys_addr, end_addr;
+ 	u32 end_addr_high, end_addr_low;
+ 	struct inbound_queue_table *circularQ;
+ 	u32 q_index, cpu_id;
+ 	u32 opc = OPC_INB_SSPINIIOSTART;
++
+ 	memset(&ssp_cmd, 0, sizeof(ssp_cmd));
+ 	memcpy(ssp_cmd.ssp_iu.lun, task->ssp_task.LUN, 8);
++
+ 	/* data address domain added for spcv; set to 0 by host,
+ 	 * used internally by controller
+ 	 * 0 for SAS 1.1 and SAS 2.0 compatible TLR
+@@ -4405,7 +4410,7 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 	ssp_cmd.device_id = cpu_to_le32(pm8001_dev->device_id);
+ 	ssp_cmd.tag = cpu_to_le32(tag);
+ 	if (task->ssp_task.enable_first_burst)
+-		ssp_cmd.ssp_iu.efb_prio_attr |= 0x80;
++		ssp_cmd.ssp_iu.efb_prio_attr = 0x80;
+ 	ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_prio << 3);
+ 	ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_attr & 7);
+ 	memcpy(ssp_cmd.ssp_iu.cdb, task->ssp_task.cmd->cmnd,
+@@ -4437,21 +4442,24 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			ssp_cmd.enc_esgl = cpu_to_le32(1<<31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
++
+ 			ssp_cmd.enc_addr_low =
+ 				cpu_to_le32(lower_32_bits(dma_addr));
+ 			ssp_cmd.enc_addr_high =
+ 				cpu_to_le32(upper_32_bits(dma_addr));
+ 			ssp_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ 			ssp_cmd.enc_esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + ssp_cmd.enc_len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+-			if (end_addr_high != ssp_cmd.enc_addr_high) {
++			end_addr = dma_addr + le32_to_cpu(ssp_cmd.enc_len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
++
++			if (end_addr_high != le32_to_cpu(ssp_cmd.enc_addr_high)) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, ssp_cmd.enc_len,
++					   dma_addr,
++					   le32_to_cpu(ssp_cmd.enc_len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+@@ -4460,7 +4468,7 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 					cpu_to_le32(lower_32_bits(phys_addr));
+ 				ssp_cmd.enc_addr_high =
+ 					cpu_to_le32(upper_32_bits(phys_addr));
+-				ssp_cmd.enc_esgl = cpu_to_le32(1<<31);
++				ssp_cmd.enc_esgl = cpu_to_le32(1U<<31);
+ 			}
+ 		} else if (task->num_scatter == 0) {
+ 			ssp_cmd.enc_addr_low = 0;
+@@ -4468,8 +4476,10 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			ssp_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ 			ssp_cmd.enc_esgl = 0;
+ 		}
++
+ 		/* XTS mode. All other fields are 0 */
+-		ssp_cmd.key_cmode = 0x6 << 4;
++		ssp_cmd.key_cmode = cpu_to_le32(0x6 << 4);
++
+ 		/* set tweak values. Should be the start lba */
+ 		ssp_cmd.twk_val0 = cpu_to_le32((task->ssp_task.cmd->cmnd[2] << 24) |
+ 						(task->ssp_task.cmd->cmnd[3] << 16) |
+@@ -4491,20 +4501,22 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ 			ssp_cmd.esgl = cpu_to_le32(1<<31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
++
+ 			ssp_cmd.addr_low = cpu_to_le32(lower_32_bits(dma_addr));
+ 			ssp_cmd.addr_high =
+ 				cpu_to_le32(upper_32_bits(dma_addr));
+ 			ssp_cmd.len = cpu_to_le32(task->total_xfer_len);
+ 			ssp_cmd.esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + ssp_cmd.len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+-			if (end_addr_high != ssp_cmd.addr_high) {
++			end_addr = dma_addr + le32_to_cpu(ssp_cmd.len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
++			if (end_addr_high != le32_to_cpu(ssp_cmd.addr_high)) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, ssp_cmd.len,
++					   dma_addr,
++					   le32_to_cpu(ssp_cmd.len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+@@ -4538,7 +4550,7 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	u32 q_index, cpu_id;
+ 	struct sata_start_req sata_cmd;
+ 	u32 hdr_tag, ncg_tag = 0;
+-	u64 phys_addr, start_addr, end_addr;
++	u64 phys_addr, end_addr;
+ 	u32 end_addr_high, end_addr_low;
+ 	u32 ATAP = 0x0;
+ 	u32 dir;
+@@ -4550,22 +4562,21 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 	q_index = (u32) (cpu_id) % (pm8001_ha->max_q_num);
+ 	circularQ = &pm8001_ha->inbnd_q_tbl[q_index];
+ 
+-	if (task->data_dir == DMA_NONE) {
++	if (task->data_dir == DMA_NONE && !task->ata_task.use_ncq) {
+ 		ATAP = 0x04; /* no data*/
+ 		pm8001_dbg(pm8001_ha, IO, "no data\n");
+ 	} else if (likely(!task->ata_task.device_control_reg_update)) {
+-		if (task->ata_task.dma_xfer) {
++		if (task->ata_task.use_ncq &&
++		    dev->sata_dev.class != ATA_DEV_ATAPI) {
++			ATAP = 0x07; /* FPDMA */
++			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
++		} else if (task->ata_task.dma_xfer) {
+ 			ATAP = 0x06; /* DMA */
+ 			pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ 		} else {
+ 			ATAP = 0x05; /* PIO*/
+ 			pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ 		}
+-		if (task->ata_task.use_ncq &&
+-		    dev->sata_dev.class != ATA_DEV_ATAPI) {
+-			ATAP = 0x07; /* FPDMA */
+-			pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+-		}
+ 	}
+ 	if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+ 		task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3);
+@@ -4599,32 +4610,38 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			pm8001_chip_make_sg(task->scatter,
+ 						ccb->n_elem, ccb->buf_prd);
+ 			phys_addr = ccb->ccb_dma_handle;
+-			sata_cmd.enc_addr_low = lower_32_bits(phys_addr);
+-			sata_cmd.enc_addr_high = upper_32_bits(phys_addr);
++			sata_cmd.enc_addr_low =
++				cpu_to_le32(lower_32_bits(phys_addr));
++			sata_cmd.enc_addr_high =
++				cpu_to_le32(upper_32_bits(phys_addr));
+ 			sata_cmd.enc_esgl = cpu_to_le32(1 << 31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
+-			sata_cmd.enc_addr_low = lower_32_bits(dma_addr);
+-			sata_cmd.enc_addr_high = upper_32_bits(dma_addr);
++
++			sata_cmd.enc_addr_low =
++				cpu_to_le32(lower_32_bits(dma_addr));
++			sata_cmd.enc_addr_high =
++				cpu_to_le32(upper_32_bits(dma_addr));
+ 			sata_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ 			sata_cmd.enc_esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + sata_cmd.enc_len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+-			if (end_addr_high != sata_cmd.enc_addr_high) {
++			end_addr = dma_addr + le32_to_cpu(sata_cmd.enc_len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
++			if (end_addr_high != le32_to_cpu(sata_cmd.enc_addr_high)) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, sata_cmd.enc_len,
++					   dma_addr,
++					   le32_to_cpu(sata_cmd.enc_len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+ 				sata_cmd.enc_addr_low =
+-					lower_32_bits(phys_addr);
++					cpu_to_le32(lower_32_bits(phys_addr));
+ 				sata_cmd.enc_addr_high =
+-					upper_32_bits(phys_addr);
++					cpu_to_le32(upper_32_bits(phys_addr));
+ 				sata_cmd.enc_esgl =
+ 					cpu_to_le32(1 << 31);
+ 			}
+@@ -4635,7 +4652,8 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			sata_cmd.enc_esgl = 0;
+ 		}
+ 		/* XTS mode. All other fields are 0 */
+-		sata_cmd.key_index_mode = 0x6 << 4;
++		sata_cmd.key_index_mode = cpu_to_le32(0x6 << 4);
++
+ 		/* set tweak values. Should be the start lba */
+ 		sata_cmd.twk_val0 =
+ 			cpu_to_le32((sata_cmd.sata_fis.lbal_exp << 24) |
+@@ -4661,31 +4679,31 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			phys_addr = ccb->ccb_dma_handle;
+ 			sata_cmd.addr_low = lower_32_bits(phys_addr);
+ 			sata_cmd.addr_high = upper_32_bits(phys_addr);
+-			sata_cmd.esgl = cpu_to_le32(1 << 31);
++			sata_cmd.esgl = cpu_to_le32(1U << 31);
+ 		} else if (task->num_scatter == 1) {
+ 			u64 dma_addr = sg_dma_address(task->scatter);
++
+ 			sata_cmd.addr_low = lower_32_bits(dma_addr);
+ 			sata_cmd.addr_high = upper_32_bits(dma_addr);
+ 			sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+ 			sata_cmd.esgl = 0;
++
+ 			/* Check 4G Boundary */
+-			start_addr = cpu_to_le64(dma_addr);
+-			end_addr = (start_addr + sata_cmd.len) - 1;
+-			end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+-			end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
++			end_addr = dma_addr + le32_to_cpu(sata_cmd.len) - 1;
++			end_addr_low = lower_32_bits(end_addr);
++			end_addr_high = upper_32_bits(end_addr);
+ 			if (end_addr_high != sata_cmd.addr_high) {
+ 				pm8001_dbg(pm8001_ha, FAIL,
+ 					   "The sg list address start_addr=0x%016llx data_len=0x%xend_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+-					   start_addr, sata_cmd.len,
++					   dma_addr,
++					   le32_to_cpu(sata_cmd.len),
+ 					   end_addr_high, end_addr_low);
+ 				pm8001_chip_make_sg(task->scatter, 1,
+ 					ccb->buf_prd);
+ 				phys_addr = ccb->ccb_dma_handle;
+-				sata_cmd.addr_low =
+-					lower_32_bits(phys_addr);
+-				sata_cmd.addr_high =
+-					upper_32_bits(phys_addr);
+-				sata_cmd.esgl = cpu_to_le32(1 << 31);
++				sata_cmd.addr_low = lower_32_bits(phys_addr);
++				sata_cmd.addr_high = upper_32_bits(phys_addr);
++				sata_cmd.esgl = cpu_to_le32(1U << 31);
+ 			}
+ 		} else if (task->num_scatter == 0) {
+ 			sata_cmd.addr_low = 0;
+@@ -4693,27 +4711,28 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ 			sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+ 			sata_cmd.esgl = 0;
+ 		}
++
+ 		/* scsi cdb */
+ 		sata_cmd.atapi_scsi_cdb[0] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[0]) |
+-			(task->ata_task.atapi_packet[1] << 8) |
+-			(task->ata_task.atapi_packet[2] << 16) |
+-			(task->ata_task.atapi_packet[3] << 24)));
++				     (task->ata_task.atapi_packet[1] << 8) |
++				     (task->ata_task.atapi_packet[2] << 16) |
++				     (task->ata_task.atapi_packet[3] << 24)));
+ 		sata_cmd.atapi_scsi_cdb[1] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[4]) |
+-			(task->ata_task.atapi_packet[5] << 8) |
+-			(task->ata_task.atapi_packet[6] << 16) |
+-			(task->ata_task.atapi_packet[7] << 24)));
++				     (task->ata_task.atapi_packet[5] << 8) |
++				     (task->ata_task.atapi_packet[6] << 16) |
++				     (task->ata_task.atapi_packet[7] << 24)));
+ 		sata_cmd.atapi_scsi_cdb[2] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[8]) |
+-			(task->ata_task.atapi_packet[9] << 8) |
+-			(task->ata_task.atapi_packet[10] << 16) |
+-			(task->ata_task.atapi_packet[11] << 24)));
++				     (task->ata_task.atapi_packet[9] << 8) |
++				     (task->ata_task.atapi_packet[10] << 16) |
++				     (task->ata_task.atapi_packet[11] << 24)));
+ 		sata_cmd.atapi_scsi_cdb[3] =
+ 			cpu_to_le32(((task->ata_task.atapi_packet[12]) |
+-			(task->ata_task.atapi_packet[13] << 8) |
+-			(task->ata_task.atapi_packet[14] << 16) |
+-			(task->ata_task.atapi_packet[15] << 24)));
++				     (task->ata_task.atapi_packet[13] << 8) |
++				     (task->ata_task.atapi_packet[14] << 16) |
++				     (task->ata_task.atapi_packet[15] << 24)));
+ 	}
+ 
+ 	/* Check for read log for failed drive and return */
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 032efb294ee52..42e7298be5219 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -555,7 +555,7 @@ qla2x00_sysfs_read_vpd(struct file *filp, struct kobject *kobj,
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EINVAL;
+ 
+-	if (IS_NOCACHE_VPD_TYPE(ha))
++	if (!IS_NOCACHE_VPD_TYPE(ha))
+ 		goto skip;
+ 
+ 	faddr = ha->flt_region_vpd << 2;
+@@ -745,7 +745,7 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ 		ql_log(ql_log_info, vha, 0x706f,
+ 		    "Issuing MPI reset.\n");
+ 
+-		if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
++		if (IS_QLA83XX(ha)) {
+ 			uint32_t idc_control;
+ 
+ 			qla83xx_idc_lock(vha, 0);
+@@ -1056,9 +1056,6 @@ qla2x00_free_sysfs_attr(scsi_qla_host_t *vha, bool stop_beacon)
+ 			continue;
+ 		if (iter->type == 3 && !(IS_CNA_CAPABLE(ha)))
+ 			continue;
+-		if (iter->type == 0x27 &&
+-		    (!IS_QLA27XX(ha) || !IS_QLA28XX(ha)))
+-			continue;
+ 
+ 		sysfs_remove_bin_file(&host->shost_gendev.kobj,
+ 		    iter->attr);
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 9da8034ccad40..c2f00f076f799 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -29,7 +29,8 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
+ 	    "%s: sp hdl %x, result=%x bsg ptr %p\n",
+ 	    __func__, sp->handle, res, bsg_job);
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 
+ 	bsg_reply->result = res;
+ 	bsg_job_done(bsg_job, bsg_reply->result,
+@@ -3013,7 +3014,8 @@ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+ 
+ done:
+ 	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 9ebf4a234d9a9..aefb29d7c7aee 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -726,6 +726,11 @@ typedef struct srb {
+ 	 * code.
+ 	 */
+ 	void (*put_fn)(struct kref *kref);
++
++	/*
++	 * Report completion for asynchronous commands.
++	 */
++	void (*async_done)(struct srb *sp, int res);
+ } srb_t;
+ 
+ #define GET_CMD_SP(sp) (sp->u.scmd.cmd)
+@@ -2886,7 +2891,11 @@ struct ct_fdmi2_hba_attributes {
+ #define FDMI_PORT_SPEED_8GB		0x10
+ #define FDMI_PORT_SPEED_16GB		0x20
+ #define FDMI_PORT_SPEED_32GB		0x40
+-#define FDMI_PORT_SPEED_64GB		0x80
++#define FDMI_PORT_SPEED_20GB		0x80
++#define FDMI_PORT_SPEED_40GB		0x100
++#define FDMI_PORT_SPEED_128GB		0x200
++#define FDMI_PORT_SPEED_64GB		0x400
++#define FDMI_PORT_SPEED_256GB		0x800
+ #define FDMI_PORT_SPEED_UNKNOWN		0x8000
+ 
+ #define FC_CLASS_2	0x04
+@@ -4262,8 +4271,10 @@ struct qla_hw_data {
+ #define QLA_ABTS_WAIT_ENABLED(_sp) \
+ 	(QLA_NVME_IOS(_sp) && QLA_ABTS_FW_ENABLED(_sp->fcport->vha->hw))
+ 
+-#define IS_PI_UNINIT_CAPABLE(ha)	(IS_QLA83XX(ha) || IS_QLA27XX(ha))
+-#define IS_PI_IPGUARD_CAPABLE(ha)	(IS_QLA83XX(ha) || IS_QLA27XX(ha))
++#define IS_PI_UNINIT_CAPABLE(ha)	(IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
++					 IS_QLA28XX(ha))
++#define IS_PI_IPGUARD_CAPABLE(ha)	(IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
++					 IS_QLA28XX(ha))
+ #define IS_PI_DIFB_DIX0_CAPABLE(ha)	(0)
+ #define IS_PI_SPLIT_DET_CAPABLE_HBA(ha)	(IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
+ 					IS_QLA28XX(ha))
+@@ -4610,6 +4621,7 @@ struct qla_hw_data {
+ 	struct workqueue_struct *wq;
+ 	struct work_struct heartbeat_work;
+ 	struct qlfc_fw fw_buf;
++	unsigned long last_heartbeat_run_jiffies;
+ 
+ 	/* FCP_CMND priority support */
+ 	struct qla_fcp_prio_cfg *fcp_prio_cfg;
+@@ -5427,4 +5439,8 @@ struct ql_vnd_tgt_stats_resp {
+ #include "qla_gbl.h"
+ #include "qla_dbg.h"
+ #include "qla_inline.h"
++
++#define IS_SESSION_DELETED(_fcport) (_fcport->disc_state == DSC_DELETE_PEND || \
++				      _fcport->disc_state == DSC_DELETED)
++
+ #endif
+diff --git a/drivers/scsi/qla2xxx/qla_edif.c b/drivers/scsi/qla2xxx/qla_edif.c
+index 53d2b85620271..0628633c7c7e9 100644
+--- a/drivers/scsi/qla2xxx/qla_edif.c
++++ b/drivers/scsi/qla2xxx/qla_edif.c
+@@ -668,6 +668,11 @@ qla_edif_app_authok(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ 	    bsg_job->request_payload.sg_cnt, &appplogiok,
+ 	    sizeof(struct auth_complete_cmd));
+ 
++	/* silent unaligned access warning */
++	portid.b.domain = appplogiok.u.d_id.b.domain;
++	portid.b.area   = appplogiok.u.d_id.b.area;
++	portid.b.al_pa  = appplogiok.u.d_id.b.al_pa;
++
+ 	switch (appplogiok.type) {
+ 	case PL_TYPE_WWPN:
+ 		fcport = qla2x00_find_fcport_by_wwpn(vha,
+@@ -678,7 +683,7 @@ qla_edif_app_authok(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ 			    __func__, appplogiok.u.wwpn);
+ 		break;
+ 	case PL_TYPE_DID:
+-		fcport = qla2x00_find_fcport_by_pid(vha, &appplogiok.u.d_id);
++		fcport = qla2x00_find_fcport_by_pid(vha, &portid);
+ 		if (!fcport)
+ 			ql_dbg(ql_dbg_edif, vha, 0x911d,
+ 			    "%s d_id lookup failed: %x\n", __func__,
+@@ -777,6 +782,11 @@ qla_edif_app_authfail(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ 	    bsg_job->request_payload.sg_cnt, &appplogifail,
+ 	    sizeof(struct auth_complete_cmd));
+ 
++	/* silent unaligned access warning */
++	portid.b.domain = appplogifail.u.d_id.b.domain;
++	portid.b.area   = appplogifail.u.d_id.b.area;
++	portid.b.al_pa  = appplogifail.u.d_id.b.al_pa;
++
+ 	/*
+ 	 * TODO: edif: app has failed this plogi. Inform driver to
+ 	 * take any action (if any).
+@@ -788,7 +798,7 @@ qla_edif_app_authfail(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ 		SET_DID_STATUS(bsg_reply->result, DID_OK);
+ 		break;
+ 	case PL_TYPE_DID:
+-		fcport = qla2x00_find_fcport_by_pid(vha, &appplogifail.u.d_id);
++		fcport = qla2x00_find_fcport_by_pid(vha, &portid);
+ 		if (!fcport)
+ 			ql_dbg(ql_dbg_edif, vha, 0x911d,
+ 			    "%s d_id lookup failed: %x\n", __func__,
+@@ -1253,6 +1263,7 @@ qla24xx_sadb_update(struct bsg_job *bsg_job)
+ 	int result = 0;
+ 	struct qla_sa_update_frame sa_frame;
+ 	struct srb_iocb *iocb_cmd;
++	port_id_t portid;
+ 
+ 	ql_dbg(ql_dbg_edif + ql_dbg_verbose, vha, 0x911d,
+ 	    "%s entered, vha: 0x%p\n", __func__, vha);
+@@ -1276,7 +1287,12 @@ qla24xx_sadb_update(struct bsg_job *bsg_job)
+ 		goto done;
+ 	}
+ 
+-	fcport = qla2x00_find_fcport_by_pid(vha, &sa_frame.port_id);
++	/* silent unaligned access warning */
++	portid.b.domain = sa_frame.port_id.b.domain;
++	portid.b.area   = sa_frame.port_id.b.area;
++	portid.b.al_pa  = sa_frame.port_id.b.al_pa;
++
++	fcport = qla2x00_find_fcport_by_pid(vha, &portid);
+ 	if (fcport) {
+ 		found = 1;
+ 		if (sa_frame.flags == QLA_SA_UPDATE_FLAGS_TX_KEY)
+@@ -2146,7 +2162,8 @@ edif_doorbell_show(struct device *dev, struct device_attribute *attr,
+ 
+ static void qla_noop_sp_done(srb_t *sp, int res)
+ {
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ /*
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 8d8503a284790..3f8b8bbabe6de 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -316,7 +316,8 @@ extern int qla2x00_start_sp(srb_t *);
+ extern int qla24xx_dif_start_scsi(srb_t *);
+ extern int qla2x00_start_bidir(srb_t *, struct scsi_qla_host *, uint32_t);
+ extern int qla2xxx_dif_start_scsi_mq(srb_t *);
+-extern void qla2x00_init_timer(srb_t *sp, unsigned long tmo);
++extern void qla2x00_init_async_sp(srb_t *sp, unsigned long tmo,
++				  void (*done)(struct srb *, int));
+ extern unsigned long qla2x00_get_async_timeout(struct scsi_qla_host *);
+ 
+ extern void *qla2x00_alloc_iocbs(struct scsi_qla_host *, srb_t *);
+@@ -332,6 +333,7 @@ extern int qla24xx_get_one_block_sg(uint32_t, struct qla2_sgx *, uint32_t *);
+ extern int qla24xx_configure_prot_mode(srb_t *, uint16_t *);
+ extern int qla24xx_issue_sa_replace_iocb(scsi_qla_host_t *vha,
+ 	struct qla_work_evt *e);
++void qla2x00_sp_release(struct kref *kref);
+ 
+ /*
+  * Global Function Prototypes in qla_mbx.c source file.
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 28b574e20ef32..6b67bd561810d 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -529,7 +529,6 @@ static void qla2x00_async_sns_sp_done(srb_t *sp, int rc)
+ 		if (!e)
+ 			goto err2;
+ 
+-		del_timer(&sp->u.iocb_cmd.timer);
+ 		e->u.iosb.sp = sp;
+ 		qla2x00_post_work(vha, e);
+ 		return;
+@@ -556,8 +555,8 @@ err2:
+ 			sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ 		}
+ 
+-		sp->free(sp);
+-
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 		return;
+ 	}
+ 
+@@ -592,13 +591,15 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ 	if (!vha->flags.online)
+ 		goto done;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_CT_PTHRU_CMD;
+ 	sp->name = "rft_id";
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_sns_sp_done);
+ 
+ 	sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ 	    sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -638,8 +639,6 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ 	sp->u.iocb_cmd.u.ctarg.req_size = RFT_ID_REQ_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = RFT_ID_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	sp->done = qla2x00_async_sns_sp_done;
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s - hdl=%x portid %06x.\n",
+@@ -653,7 +652,8 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ 	}
+ 	return rval;
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+@@ -676,8 +676,7 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type)
+ 		return (QLA_SUCCESS);
+ 	}
+ 
+-	return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha),
+-	    FC4_TYPE_FCP_SCSI);
++	return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha), type);
+ }
+ 
+ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+@@ -688,13 +687,15 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ 	srb_t *sp;
+ 	struct ct_sns_pkt *ct_sns;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_CT_PTHRU_CMD;
+ 	sp->name = "rff_id";
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_sns_sp_done);
+ 
+ 	sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ 	    sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -727,13 +728,11 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ 	/* Prepare CT arguments -- port_id, FC-4 feature, FC-4 type */
+ 	ct_req->req.rff_id.port_id = port_id_to_be_id(*d_id);
+ 	ct_req->req.rff_id.fc4_feature = fc4feature;
+-	ct_req->req.rff_id.fc4_type = fc4type;		/* SCSI - FCP */
++	ct_req->req.rff_id.fc4_type = fc4type;		/* SCSI-FCP or FC-NVMe */
+ 
+ 	sp->u.iocb_cmd.u.ctarg.req_size = RFF_ID_REQ_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = RFF_ID_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	sp->done = qla2x00_async_sns_sp_done;
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s - hdl=%x portid %06x feature %x type %x.\n",
+@@ -749,7 +748,8 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+@@ -779,13 +779,15 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ 	srb_t *sp;
+ 	struct ct_sns_pkt *ct_sns;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_CT_PTHRU_CMD;
+ 	sp->name = "rnid";
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_sns_sp_done);
+ 
+ 	sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ 	    sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -823,9 +825,6 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = RNN_ID_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	sp->done = qla2x00_async_sns_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s - hdl=%x portid %06x\n",
+ 	    sp->name, sp->handle, d_id->b24);
+@@ -840,7 +839,8 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+@@ -886,13 +886,15 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ 	srb_t *sp;
+ 	struct ct_sns_pkt *ct_sns;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_CT_PTHRU_CMD;
+ 	sp->name = "rsnn_nn";
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_sns_sp_done);
+ 
+ 	sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ 	    sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -936,9 +938,6 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = RSNN_NN_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	sp->done = qla2x00_async_sns_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s - hdl=%x.\n",
+ 	    sp->name, sp->handle);
+@@ -953,7 +952,8 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+@@ -2893,7 +2893,8 @@ static void qla24xx_async_gpsc_sp_done(srb_t *sp, int res)
+ 	qla24xx_handle_gpsc_event(vha, &ea);
+ 
+ done:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+@@ -2905,6 +2906,7 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ 		return rval;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+@@ -2913,8 +2915,8 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->name = "gpsc";
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
+-
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla24xx_async_gpsc_sp_done);
+ 
+ 	/* CT_IU preamble  */
+ 	ct_req = qla24xx_prep_ct_fm_req(fcport->ct_desc.ct_sns, GPSC_CMD,
+@@ -2932,9 +2934,6 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = GPSC_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = vha->mgmt_svr_loop_id;
+ 
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	sp->done = qla24xx_async_gpsc_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0x205e,
+ 	    "Async-%s %8phC hdl=%x loopid=%x portid=%02x%02x%02x.\n",
+ 	    sp->name, fcport->port_name, sp->handle,
+@@ -2947,7 +2946,8 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+@@ -2996,7 +2996,8 @@ void qla24xx_sp_unmap(scsi_qla_host_t *vha, srb_t *sp)
+ 		break;
+ 	}
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ void qla24xx_handle_gpnid_event(scsi_qla_host_t *vha, struct event_arg *ea)
+@@ -3135,13 +3136,15 @@ static void qla2x00_async_gpnid_sp_done(srb_t *sp, int res)
+ 	if (res) {
+ 		if (res == QLA_FUNCTION_TIMEOUT) {
+ 			qla24xx_post_gpnid_work(sp->vha, &ea.id);
+-			sp->free(sp);
++			/* ref: INIT */
++			kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 			return;
+ 		}
+ 	} else if (sp->gen1) {
+ 		/* There was another RSCN for this Nport ID */
+ 		qla24xx_post_gpnid_work(sp->vha, &ea.id);
+-		sp->free(sp);
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 		return;
+ 	}
+ 
+@@ -3162,7 +3165,8 @@ static void qla2x00_async_gpnid_sp_done(srb_t *sp, int res)
+ 				  sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ 		sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ 
+-		sp->free(sp);
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 		return;
+ 	}
+ 
+@@ -3182,6 +3186,7 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ 	if (!vha->flags.online)
+ 		goto done;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+@@ -3190,14 +3195,16 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ 	sp->name = "gpnid";
+ 	sp->u.iocb_cmd.u.ctarg.id = *id;
+ 	sp->gen1 = 0;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_gpnid_sp_done);
+ 
+ 	spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+ 	list_for_each_entry(tsp, &vha->gpnid_list, elem) {
+ 		if (tsp->u.iocb_cmd.u.ctarg.id.b24 == id->b24) {
+ 			tsp->gen1++;
+ 			spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+-			sp->free(sp);
++			/* ref: INIT */
++			kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 			goto done;
+ 		}
+ 	}
+@@ -3238,9 +3245,6 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = GPN_ID_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	sp->done = qla2x00_async_gpnid_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0x2067,
+ 	    "Async-%s hdl=%x ID %3phC.\n", sp->name,
+ 	    sp->handle, &ct_req->req.port_id.port_id);
+@@ -3270,8 +3274,8 @@ done_free_sp:
+ 			sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ 		sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ 	}
+-
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+@@ -3326,7 +3330,8 @@ void qla24xx_async_gffid_sp_done(srb_t *sp, int res)
+ 	ea.rc = res;
+ 
+ 	qla24xx_handle_gffid_event(vha, &ea);
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ /* Get FC4 Feature with Nport ID. */
+@@ -3339,6 +3344,7 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ 		return rval;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		return rval;
+@@ -3348,9 +3354,8 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->name = "gffid";
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
+-
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla24xx_async_gffid_sp_done);
+ 
+ 	/* CT_IU preamble  */
+ 	ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GFF_ID_CMD,
+@@ -3368,8 +3373,6 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = GFF_ID_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->done = qla24xx_async_gffid_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0x2132,
+ 	    "Async-%s hdl=%x  %8phC.\n", sp->name,
+ 	    sp->handle, fcport->port_name);
+@@ -3380,7 +3383,8 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 
+ 	return rval;
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	fcport->flags &= ~FCF_ASYNC_SENT;
+ 	return rval;
+ }
+@@ -3767,7 +3771,6 @@ static void qla2x00_async_gpnft_gnnft_sp_done(srb_t *sp, int res)
+ 	    "Async done-%s res %x FC4Type %x\n",
+ 	    sp->name, res, sp->gen2);
+ 
+-	del_timer(&sp->u.iocb_cmd.timer);
+ 	sp->rc = res;
+ 	if (res) {
+ 		unsigned long flags;
+@@ -3892,9 +3895,8 @@ static int qla24xx_async_gnnft(scsi_qla_host_t *vha, struct srb *sp,
+ 	sp->name = "gnnft";
+ 	sp->gen1 = vha->hw->base_qpair->chip_reset;
+ 	sp->gen2 = fc4_type;
+-
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_gpnft_gnnft_sp_done);
+ 
+ 	memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
+ 	memset(sp->u.iocb_cmd.u.ctarg.req, 0, sp->u.iocb_cmd.u.ctarg.req_size);
+@@ -3910,8 +3912,6 @@ static int qla24xx_async_gnnft(scsi_qla_host_t *vha, struct srb *sp,
+ 	sp->u.iocb_cmd.u.ctarg.req_size = GNN_FT_REQ_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->done = qla2x00_async_gpnft_gnnft_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s hdl=%x FC4Type %x.\n", sp->name,
+ 	    sp->handle, ct_req->req.gpn_ft.port_type);
+@@ -3938,8 +3938,8 @@ done_free_sp:
+ 		    sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ 		sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ 	}
+-
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 
+ 	spin_lock_irqsave(&vha->work_lock, flags);
+ 	vha->scan.scan_flags &= ~SF_SCANNING;
+@@ -3991,9 +3991,12 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
+ 		    "%s: Performing FCP Scan\n", __func__);
+ 
+-		if (sp)
+-			sp->free(sp); /* should not happen */
++		if (sp) {
++			/* ref: INIT */
++			kref_put(&sp->cmd_kref, qla2x00_sp_release);
++		}
+ 
++		/* ref: INIT */
+ 		sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 		if (!sp) {
+ 			spin_lock_irqsave(&vha->work_lock, flags);
+@@ -4038,6 +4041,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 			    sp->u.iocb_cmd.u.ctarg.req,
+ 			    sp->u.iocb_cmd.u.ctarg.req_dma);
+ 			sp->u.iocb_cmd.u.ctarg.req = NULL;
++			/* ref: INIT */
+ 			qla2x00_rel_sp(sp);
+ 			return rval;
+ 		}
+@@ -4057,9 +4061,8 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 	sp->name = "gpnft";
+ 	sp->gen1 = vha->hw->base_qpair->chip_reset;
+ 	sp->gen2 = fc4_type;
+-
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_gpnft_gnnft_sp_done);
+ 
+ 	rspsz = sp->u.iocb_cmd.u.ctarg.rsp_size;
+ 	memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
+@@ -4074,8 +4077,6 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->done = qla2x00_async_gpnft_gnnft_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s hdl=%x FC4Type %x.\n", sp->name,
+ 	    sp->handle, ct_req->req.gpn_ft.port_type);
+@@ -4103,7 +4104,8 @@ done_free_sp:
+ 		sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ 	}
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 
+ 	spin_lock_irqsave(&vha->work_lock, flags);
+ 	vha->scan.scan_flags &= ~SF_SCANNING;
+@@ -4167,7 +4169,8 @@ static void qla2x00_async_gnnid_sp_done(srb_t *sp, int res)
+ 
+ 	qla24xx_handle_gnnid_event(vha, &ea);
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+@@ -4180,6 +4183,7 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 		return rval;
+ 
+ 	qla2x00_set_fcport_disc_state(fcport, DSC_GNN_ID);
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
+ 	if (!sp)
+ 		goto done;
+@@ -4189,9 +4193,8 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->name = "gnnid";
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
+-
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_gnnid_sp_done);
+ 
+ 	/* CT_IU preamble  */
+ 	ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GNN_ID_CMD,
+@@ -4210,8 +4213,6 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = GNN_ID_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->done = qla2x00_async_gnnid_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s - %8phC hdl=%x loopid=%x portid %06x.\n",
+ 	    sp->name, fcport->port_name,
+@@ -4223,7 +4224,8 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ 	return rval;
+@@ -4297,7 +4299,8 @@ static void qla2x00_async_gfpnid_sp_done(srb_t *sp, int res)
+ 
+ 	qla24xx_handle_gfpnid_event(vha, &ea);
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+@@ -4309,6 +4312,7 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ 		return rval;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
+ 	if (!sp)
+ 		goto done;
+@@ -4317,9 +4321,8 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->name = "gfpnid";
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
+-
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_gfpnid_sp_done);
+ 
+ 	/* CT_IU preamble  */
+ 	ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GFPN_ID_CMD,
+@@ -4338,8 +4341,6 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	sp->u.iocb_cmd.u.ctarg.rsp_size = GFPN_ID_RSP_SIZE;
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	sp->done = qla2x00_async_gfpnid_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0xffff,
+ 	    "Async-%s - %8phC hdl=%x loopid=%x portid %06x.\n",
+ 	    sp->name, fcport->port_name,
+@@ -4352,7 +4353,8 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 070b636802d04..56ad42bb17422 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -51,6 +51,9 @@ qla2x00_sp_timeout(struct timer_list *t)
+ 	WARN_ON(irqs_disabled());
+ 	iocb = &sp->u.iocb_cmd;
+ 	iocb->timeout(sp);
++
++	/* ref: TMR */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ void qla2x00_sp_free(srb_t *sp)
+@@ -125,8 +128,13 @@ static void qla24xx_abort_iocb_timeout(void *data)
+ 	}
+ 	spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+ 
+-	if (sp->cmd_sp)
++	if (sp->cmd_sp) {
++		/*
++		 * This done function should take care of
++		 * original command ref: INIT
++		 */
+ 		sp->cmd_sp->done(sp->cmd_sp, QLA_OS_TIMER_EXPIRED);
++	}
+ 
+ 	abt->u.abt.comp_status = cpu_to_le16(CS_TIMEOUT);
+ 	sp->done(sp, QLA_OS_TIMER_EXPIRED);
+@@ -140,11 +148,11 @@ static void qla24xx_abort_sp_done(srb_t *sp, int res)
+ 	if (orig_sp)
+ 		qla_wait_nvme_release_cmd_kref(orig_sp);
+ 
+-	del_timer(&sp->u.iocb_cmd.timer);
+ 	if (sp->flags & SRB_WAKEUP_ON_COMP)
+ 		complete(&abt->u.abt.comp);
+ 	else
+-		sp->free(sp);
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+@@ -154,6 +162,7 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ 	srb_t *sp;
+ 	int rval = QLA_FUNCTION_FAILED;
+ 
++	/* ref: INIT for ABTS command */
+ 	sp = qla2xxx_get_qpair_sp(cmd_sp->vha, cmd_sp->qpair, cmd_sp->fcport,
+ 				  GFP_ATOMIC);
+ 	if (!sp)
+@@ -167,23 +176,22 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ 	if (wait)
+ 		sp->flags = SRB_WAKEUP_ON_COMP;
+ 
+-	abt_iocb->timeout = qla24xx_abort_iocb_timeout;
+ 	init_completion(&abt_iocb->u.abt.comp);
+ 	/* FW can send 2 x ABTS's timeout/20s */
+-	qla2x00_init_timer(sp, 42);
++	qla2x00_init_async_sp(sp, 42, qla24xx_abort_sp_done);
++	sp->u.iocb_cmd.timeout = qla24xx_abort_iocb_timeout;
+ 
+ 	abt_iocb->u.abt.cmd_hndl = cmd_sp->handle;
+ 	abt_iocb->u.abt.req_que_no = cpu_to_le16(cmd_sp->qpair->req->id);
+ 
+-	sp->done = qla24xx_abort_sp_done;
+-
+ 	ql_dbg(ql_dbg_async, vha, 0x507c,
+ 	       "Abort command issued - hdl=%x, type=%x\n", cmd_sp->handle,
+ 	       cmd_sp->type);
+ 
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+-		sp->free(sp);
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 		return rval;
+ 	}
+ 
+@@ -191,7 +199,8 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ 		wait_for_completion(&abt_iocb->u.abt.comp);
+ 		rval = abt_iocb->u.abt.comp_status == CS_COMPLETE ?
+ 			QLA_SUCCESS : QLA_ERR_FROM_FW;
+-		sp->free(sp);
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	}
+ 
+ 	return rval;
+@@ -286,10 +295,13 @@ static void qla2x00_async_login_sp_done(srb_t *sp, int res)
+ 		ea.iop[0] = lio->u.logio.iop[0];
+ 		ea.iop[1] = lio->u.logio.iop[1];
+ 		ea.sp = sp;
++		if (res)
++			ea.data[0] = MBS_COMMAND_ERROR;
+ 		qla24xx_handle_plogi_done_event(vha, &ea);
+ 	}
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int
+@@ -308,6 +320,7 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 		return rval;
+ 	}
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+@@ -320,12 +333,10 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 	sp->name = "login";
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_login_sp_done);
+ 
+ 	lio = &sp->u.iocb_cmd;
+-	lio->timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+-	sp->done = qla2x00_async_login_sp_done;
+ 	if (N2N_TOPO(fcport->vha->hw) && fcport_is_bigger(fcport)) {
+ 		lio->u.logio.flags |= SRB_LOGIN_PRLI_ONLY;
+ 	} else {
+@@ -358,7 +369,8 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ 	fcport->flags &= ~FCF_ASYNC_ACTIVE;
+@@ -370,29 +382,26 @@ static void qla2x00_async_logout_sp_done(srb_t *sp, int res)
+ 	sp->fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 	sp->fcport->login_gen++;
+ 	qlt_logo_completion_handler(sp->fcport, sp->u.iocb_cmd.u.logio.data[0]);
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int
+ qla2x00_async_logout(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ 	srb_t *sp;
+-	struct srb_iocb *lio;
+ 	int rval = QLA_FUNCTION_FAILED;
+ 
+ 	fcport->flags |= FCF_ASYNC_SENT;
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_LOGOUT_CMD;
+ 	sp->name = "logout";
+-
+-	lio = &sp->u.iocb_cmd;
+-	lio->timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+-	sp->done = qla2x00_async_logout_sp_done;
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_logout_sp_done),
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x2070,
+ 	    "Async-logout - hdl=%x loop-id=%x portid=%02x%02x%02x %8phC explicit %d.\n",
+@@ -406,7 +415,8 @@ qla2x00_async_logout(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 	return rval;
+@@ -432,29 +442,26 @@ static void qla2x00_async_prlo_sp_done(srb_t *sp, int res)
+ 	if (!test_bit(UNLOADING, &vha->dpc_flags))
+ 		qla2x00_post_async_prlo_done_work(sp->fcport->vha, sp->fcport,
+ 		    lio->u.logio.data);
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int
+ qla2x00_async_prlo(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ 	srb_t *sp;
+-	struct srb_iocb *lio;
+ 	int rval;
+ 
+ 	rval = QLA_FUNCTION_FAILED;
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_PRLO_CMD;
+ 	sp->name = "prlo";
+-
+-	lio = &sp->u.iocb_cmd;
+-	lio->timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+-	sp->done = qla2x00_async_prlo_sp_done;
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_prlo_sp_done);
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x2070,
+ 	    "Async-prlo - hdl=%x loop-id=%x portid=%02x%02x%02x.\n",
+@@ -468,7 +475,8 @@ qla2x00_async_prlo(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	fcport->flags &= ~FCF_ASYNC_ACTIVE;
+ 	return rval;
+@@ -551,10 +559,12 @@ static void qla2x00_async_adisc_sp_done(srb_t *sp, int res)
+ 	ea.iop[1] = lio->u.logio.iop[1];
+ 	ea.fcport = sp->fcport;
+ 	ea.sp = sp;
++	if (res)
++		ea.data[0] = MBS_COMMAND_ERROR;
+ 
+ 	qla24xx_handle_adisc_event(vha, &ea);
+-
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int
+@@ -565,26 +575,34 @@ qla2x00_async_adisc(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 	struct srb_iocb *lio;
+ 	int rval = QLA_FUNCTION_FAILED;
+ 
++	if (IS_SESSION_DELETED(fcport)) {
++		ql_log(ql_log_warn, vha, 0xffff,
++		       "%s: %8phC is being delete - not sending command.\n",
++		       __func__, fcport->port_name);
++		fcport->flags &= ~FCF_ASYNC_ACTIVE;
++		return rval;
++	}
++
+ 	if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ 		return rval;
+ 
+ 	fcport->flags |= FCF_ASYNC_SENT;
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_ADISC_CMD;
+ 	sp->name = "adisc";
+-
+-	lio = &sp->u.iocb_cmd;
+-	lio->timeout = qla2x00_async_iocb_timeout;
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_adisc_sp_done);
+ 
+-	sp->done = qla2x00_async_adisc_sp_done;
+-	if (data[1] & QLA_LOGIO_LOGIN_RETRIED)
++	if (data[1] & QLA_LOGIO_LOGIN_RETRIED) {
++		lio = &sp->u.iocb_cmd;
+ 		lio->u.logio.flags |= SRB_LOGIN_RETRIED;
++	}
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x206f,
+ 	    "Async-adisc - hdl=%x loopid=%x portid=%06x %8phC.\n",
+@@ -597,7 +615,8 @@ qla2x00_async_adisc(struct scsi_qla_host *vha, fc_port_t *fcport,
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 	qla2x00_post_async_adisc_work(vha, fcport, data);
+@@ -963,6 +982,9 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ 				set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ 			}
+ 			break;
++		case ISP_CFG_NL:
++			qla24xx_fcport_handle_login(vha, fcport);
++			break;
+ 		default:
+ 			break;
+ 		}
+@@ -1078,13 +1100,13 @@ static void qla24xx_async_gnl_sp_done(srb_t *sp, int res)
+ 	}
+ 	spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ 	srb_t *sp;
+-	struct srb_iocb *mbx;
+ 	int rval = QLA_FUNCTION_FAILED;
+ 	unsigned long flags;
+ 	u16 *mb;
+@@ -1109,6 +1131,7 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	vha->gnl.sent = 1;
+ 	spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+@@ -1117,10 +1140,8 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	sp->name = "gnlist";
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
+-
+-	mbx = &sp->u.iocb_cmd;
+-	mbx->timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha)+2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla24xx_async_gnl_sp_done);
+ 
+ 	mb = sp->u.iocb_cmd.u.mbx.out_mb;
+ 	mb[0] = MBC_PORT_NODE_NAME_LIST;
+@@ -1132,8 +1153,6 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	mb[8] = vha->gnl.size;
+ 	mb[9] = vha->vp_idx;
+ 
+-	sp->done = qla24xx_async_gnl_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0x20da,
+ 	    "Async-%s - OUT WWPN %8phC hndl %x\n",
+ 	    sp->name, fcport->port_name, sp->handle);
+@@ -1145,7 +1164,8 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	fcport->flags &= ~(FCF_ASYNC_ACTIVE | FCF_ASYNC_SENT);
+ 	return rval;
+@@ -1191,7 +1211,7 @@ done:
+ 	dma_pool_free(ha->s_dma_pool, sp->u.iocb_cmd.u.mbx.in,
+ 		sp->u.iocb_cmd.u.mbx.in_dma);
+ 
+-	sp->free(sp);
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport)
+@@ -1232,11 +1252,13 @@ static void qla2x00_async_prli_sp_done(srb_t *sp, int res)
+ 		ea.sp = sp;
+ 		if (res == QLA_OS_TIMER_EXPIRED)
+ 			ea.data[0] = QLA_OS_TIMER_EXPIRED;
++		else if (res)
++			ea.data[0] = MBS_COMMAND_ERROR;
+ 
+ 		qla24xx_handle_prli_done_event(vha, &ea);
+ 	}
+ 
+-	sp->free(sp);
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int
+@@ -1269,12 +1291,10 @@ qla24xx_async_prli(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 
+ 	sp->type = SRB_PRLI_CMD;
+ 	sp->name = "prli";
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_prli_sp_done);
+ 
+ 	lio = &sp->u.iocb_cmd;
+-	lio->timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+-	sp->done = qla2x00_async_prli_sp_done;
+ 	lio->u.logio.flags = 0;
+ 
+ 	if (NVME_TARGET(vha->hw, fcport))
+@@ -1296,7 +1316,8 @@ qla24xx_async_prli(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	fcport->flags &= ~FCF_ASYNC_SENT;
+ 	return rval;
+ }
+@@ -1325,14 +1346,21 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ 	struct port_database_24xx *pd;
+ 	struct qla_hw_data *ha = vha->hw;
+ 
+-	if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT) ||
+-	    fcport->loop_id == FC_NO_LOOP_ID) {
++	if (IS_SESSION_DELETED(fcport)) {
+ 		ql_log(ql_log_warn, vha, 0xffff,
+-		    "%s: %8phC - not sending command.\n",
+-		    __func__, fcport->port_name);
++		       "%s: %8phC is being deleted - not sending command.\n",
++		       __func__, fcport->port_name);
++		fcport->flags &= ~FCF_ASYNC_ACTIVE;
+ 		return rval;
+ 	}
+ 
++	if (!vha->flags.online || fcport->flags & FCF_ASYNC_SENT) {
++		ql_log(ql_log_warn, vha, 0xffff,
++		    "%s: %8phC online %d flags %x - not sending command.\n",
++		    __func__, fcport->port_name, vha->flags.online, fcport->flags);
++		goto done;
++	}
++
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+@@ -1344,10 +1372,8 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ 	sp->name = "gpdb";
+ 	sp->gen1 = fcport->rscn_gen;
+ 	sp->gen2 = fcport->login_gen;
+-
+-	mbx = &sp->u.iocb_cmd;
+-	mbx->timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla24xx_async_gpdb_sp_done);
+ 
+ 	pd = dma_pool_zalloc(ha->s_dma_pool, GFP_KERNEL, &pd_dma);
+ 	if (pd == NULL) {
+@@ -1366,11 +1392,10 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ 	mb[9] = vha->vp_idx;
+ 	mb[10] = opt;
+ 
+-	mbx->u.mbx.in = pd;
++	mbx = &sp->u.iocb_cmd;
++	mbx->u.mbx.in = (void *)pd;
+ 	mbx->u.mbx.in_dma = pd_dma;
+ 
+-	sp->done = qla24xx_async_gpdb_sp_done;
+-
+ 	ql_dbg(ql_dbg_disc, vha, 0x20dc,
+ 	    "Async-%s %8phC hndl %x opt %x\n",
+ 	    sp->name, fcport->port_name, sp->handle, opt);
+@@ -1384,7 +1409,7 @@ done_free_sp:
+ 	if (pd)
+ 		dma_pool_free(ha->s_dma_pool, pd, pd_dma);
+ 
+-	sp->free(sp);
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ 	fcport->flags &= ~FCF_ASYNC_ACTIVE;
+@@ -1556,6 +1581,11 @@ static void qla_chk_n2n_b4_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	u8 login = 0;
+ 	int rc;
+ 
++	ql_dbg(ql_dbg_disc, vha, 0x307b,
++	    "%s %8phC DS %d LS %d lid %d retries=%d\n",
++	    __func__, fcport->port_name, fcport->disc_state,
++	    fcport->fw_login_state, fcport->loop_id, fcport->login_retry);
++
+ 	if (qla_tgt_mode_enabled(vha))
+ 		return;
+ 
+@@ -1614,7 +1644,8 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	    fcport->login_gen, fcport->loop_id, fcport->scan_state,
+ 	    fcport->fc4_type);
+ 
+-	if (fcport->scan_state != QLA_FCPORT_FOUND)
++	if (fcport->scan_state != QLA_FCPORT_FOUND ||
++	    fcport->disc_state == DSC_DELETE_PEND)
+ 		return 0;
+ 
+ 	if ((fcport->loop_id != FC_NO_LOOP_ID) &&
+@@ -1635,7 +1666,7 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	if (vha->host->active_mode == MODE_TARGET && !N2N_TOPO(vha->hw))
+ 		return 0;
+ 
+-	if (fcport->flags & FCF_ASYNC_SENT) {
++	if (fcport->flags & (FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE)) {
+ 		set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ 		return 0;
+ 	}
+@@ -1970,22 +2001,21 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+ 	srb_t *sp;
+ 	int rval = QLA_FUNCTION_FAILED;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+-	tm_iocb = &sp->u.iocb_cmd;
+ 	sp->type = SRB_TM_CMD;
+ 	sp->name = "tmf";
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha),
++			      qla2x00_tmf_sp_done);
++	sp->u.iocb_cmd.timeout = qla2x00_tmf_iocb_timeout;
+ 
+-	tm_iocb->timeout = qla2x00_tmf_iocb_timeout;
++	tm_iocb = &sp->u.iocb_cmd;
+ 	init_completion(&tm_iocb->u.tmf.comp);
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha));
+-
+ 	tm_iocb->u.tmf.flags = flags;
+ 	tm_iocb->u.tmf.lun = lun;
+-	tm_iocb->u.tmf.data = tag;
+-	sp->done = qla2x00_tmf_sp_done;
+ 
+ 	ql_dbg(ql_dbg_taskm, vha, 0x802f,
+ 	    "Async-tmf hdl=%x loop-id=%x portid=%02x%02x%02x.\n",
+@@ -2015,7 +2045,8 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+ 	}
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ 	return rval;
+@@ -2074,13 +2105,6 @@ qla24xx_handle_prli_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ 		qla24xx_post_gpdb_work(vha, ea->fcport, 0);
+ 		break;
+ 	default:
+-		if ((ea->iop[0] == LSC_SCODE_ELS_REJECT) &&
+-		    (ea->iop[1] == 0x50000)) {   /* reson 5=busy expl:0x0 */
+-			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+-			ea->fcport->fw_login_state = DSC_LS_PLOGI_COMP;
+-			break;
+-		}
+-
+ 		sp = ea->sp;
+ 		ql_dbg(ql_dbg_disc, vha, 0x2118,
+ 		       "%s %d %8phC priority %s, fc4type %x prev try %s\n",
+@@ -2224,12 +2248,7 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ 		ql_dbg(ql_dbg_disc, vha, 0x20eb, "%s %d %8phC cmd error %x\n",
+ 		    __func__, __LINE__, ea->fcport->port_name, ea->data[1]);
+ 
+-		ea->fcport->flags &= ~FCF_ASYNC_SENT;
+-		qla2x00_set_fcport_disc_state(ea->fcport, DSC_LOGIN_FAILED);
+-		if (ea->data[1] & QLA_LOGIO_LOGIN_RETRIED)
+-			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+-		else
+-			qla2x00_mark_device_lost(vha, ea->fcport, 1);
++		qlt_schedule_sess_for_deletion(ea->fcport);
+ 		break;
+ 	case MBS_LOOP_ID_USED:
+ 		/* data[1] = IO PARAM 1 = nport ID  */
+@@ -3472,6 +3491,14 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 	struct rsp_que *rsp = ha->rsp_q_map[0];
+ 	struct qla2xxx_fw_dump *fw_dump;
+ 
++	if (ha->fw_dump) {
++		ql_dbg(ql_dbg_init, vha, 0x00bd,
++		    "Firmware dump already allocated.\n");
++		return;
++	}
++
++	ha->fw_dumped = 0;
++	ha->fw_dump_cap_flags = 0;
+ 	dump_size = fixed_size = mem_size = eft_size = fce_size = mq_size = 0;
+ 	req_q_size = rsp_q_size = 0;
+ 
+@@ -3482,7 +3509,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 		mem_size = (ha->fw_memory_size - 0x11000 + 1) *
+ 		    sizeof(uint16_t);
+ 	} else if (IS_FWI2_CAPABLE(ha)) {
+-		if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
++		if (IS_QLA83XX(ha))
+ 			fixed_size = offsetof(struct qla83xx_fw_dump, ext_mem);
+ 		else if (IS_QLA81XX(ha))
+ 			fixed_size = offsetof(struct qla81xx_fw_dump, ext_mem);
+@@ -3494,8 +3521,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 		mem_size = (ha->fw_memory_size - 0x100000 + 1) *
+ 		    sizeof(uint32_t);
+ 		if (ha->mqenable) {
+-			if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha) &&
+-			    !IS_QLA28XX(ha))
++			if (!IS_QLA83XX(ha))
+ 				mq_size = sizeof(struct qla2xxx_mq_chain);
+ 			/*
+ 			 * Allocate maximum buffer size for all queues - Q0.
+@@ -4056,8 +4082,7 @@ enable_82xx_npiv:
+ 			    ha->fw_major_version, ha->fw_minor_version,
+ 			    ha->fw_subminor_version);
+ 
+-			if (IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+-			    IS_QLA28XX(ha)) {
++			if (IS_QLA83XX(ha)) {
+ 				ha->flags.fac_supported = 0;
+ 				rval = QLA_SUCCESS;
+ 			}
+@@ -5602,6 +5627,13 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ 			memcpy(fcport->node_name, new_fcport->node_name,
+ 			    WWN_SIZE);
+ 			fcport->scan_state = QLA_FCPORT_FOUND;
++			if (fcport->login_retry == 0) {
++				fcport->login_retry = vha->hw->login_retry_count;
++				ql_dbg(ql_dbg_disc, vha, 0x2135,
++				    "Port login retry %8phN, lid 0x%04x retry cnt=%d.\n",
++				    fcport->port_name, fcport->loop_id,
++				    fcport->login_retry);
++			}
+ 			found++;
+ 			break;
+ 		}
+@@ -5735,6 +5767,8 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 	if (atomic_read(&fcport->state) == FCS_ONLINE)
+ 		return;
+ 
++	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ 	rport_ids.node_name = wwn_to_u64(fcport->node_name);
+ 	rport_ids.port_name = wwn_to_u64(fcport->port_name);
+ 	rport_ids.port_id = fcport->d_id.b.domain << 16 |
+@@ -5842,6 +5876,7 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 		qla2x00_reg_remote_port(vha, fcport);
+ 		break;
+ 	case MODE_TARGET:
++		qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+ 		if (!vha->vha_tgt.qla_tgt->tgt_stop &&
+ 			!vha->vha_tgt.qla_tgt->tgt_stopped)
+ 			qlt_fc_port_added(vha, fcport);
+@@ -5856,8 +5891,6 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 		break;
+ 	}
+ 
+-	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+-
+ 	if (IS_IIDMA_CAPABLE(vha->hw) && vha->hw->flags.gpsc_supported) {
+ 		if (fcport->id_changed) {
+ 			fcport->id_changed = 0;
+@@ -9394,7 +9427,7 @@ struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *vha, int qos,
+ 		qpair->rsp->req = qpair->req;
+ 		qpair->rsp->qpair = qpair;
+ 		/* init qpair to this cpu. Will adjust at run time. */
+-		qla_cpu_update(qpair, smp_processor_id());
++		qla_cpu_update(qpair, raw_smp_processor_id());
+ 
+ 		if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) {
+ 			if (ha->fw_attributes & BIT_4)
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index 5f3b7995cc8f3..db17f7f410cdd 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -184,6 +184,8 @@ static void qla2xxx_init_sp(srb_t *sp, scsi_qla_host_t *vha,
+ 	sp->vha = vha;
+ 	sp->qpair = qpair;
+ 	sp->cmd_type = TYPE_SRB;
++	/* ref : INIT - normal flow */
++	kref_init(&sp->cmd_kref);
+ 	INIT_LIST_HEAD(&sp->elem);
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index ed604f2185bf2..e0fe9ddb4bd2c 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2560,11 +2560,38 @@ qla24xx_tm_iocb(srb_t *sp, struct tsk_mgmt_entry *tsk)
+ 	}
+ }
+ 
+-void qla2x00_init_timer(srb_t *sp, unsigned long tmo)
++static void
++qla2x00_async_done(struct srb *sp, int res)
++{
++	if (del_timer(&sp->u.iocb_cmd.timer)) {
++		/*
++		 * Successfully cancelled the timeout handler
++		 * ref: TMR
++		 */
++		if (kref_put(&sp->cmd_kref, qla2x00_sp_release))
++			return;
++	}
++	sp->async_done(sp, res);
++}
++
++void
++qla2x00_sp_release(struct kref *kref)
++{
++	struct srb *sp = container_of(kref, struct srb, cmd_kref);
++
++	sp->free(sp);
++}
++
++void
++qla2x00_init_async_sp(srb_t *sp, unsigned long tmo,
++		     void (*done)(struct srb *sp, int res))
+ {
+ 	timer_setup(&sp->u.iocb_cmd.timer, qla2x00_sp_timeout, 0);
+-	sp->u.iocb_cmd.timer.expires = jiffies + tmo * HZ;
++	sp->done = qla2x00_async_done;
++	sp->async_done = done;
+ 	sp->free = qla2x00_sp_free;
++	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
++	sp->u.iocb_cmd.timer.expires = jiffies + tmo * HZ;
+ 	if (IS_QLAFX00(sp->vha->hw) && sp->type == SRB_FXIOCB_DCMD)
+ 		init_completion(&sp->u.iocb_cmd.u.fxiocb.fxiocb_comp);
+ 	sp->start_timer = 1;
+@@ -2651,7 +2678,9 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	       return -ENOMEM;
+ 	}
+ 
+-	/* Alloc SRB structure */
++	/* Alloc SRB structure
++	 * ref: INIT
++	 */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp) {
+ 		kfree(fcport);
+@@ -2672,18 +2701,19 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	sp->type = SRB_ELS_DCMD;
+ 	sp->name = "ELS_DCMD";
+ 	sp->fcport = fcport;
+-	elsio->timeout = qla2x00_els_dcmd_iocb_timeout;
+-	qla2x00_init_timer(sp, ELS_DCMD_TIMEOUT);
+-	init_completion(&sp->u.iocb_cmd.u.els_logo.comp);
+-	sp->done = qla2x00_els_dcmd_sp_done;
++	qla2x00_init_async_sp(sp, ELS_DCMD_TIMEOUT,
++			      qla2x00_els_dcmd_sp_done);
+ 	sp->free = qla2x00_els_dcmd_sp_free;
++	sp->u.iocb_cmd.timeout = qla2x00_els_dcmd_iocb_timeout;
++	init_completion(&sp->u.iocb_cmd.u.els_logo.comp);
+ 
+ 	elsio->u.els_logo.els_logo_pyld = dma_alloc_coherent(&ha->pdev->dev,
+ 			    DMA_POOL_SIZE, &elsio->u.els_logo.els_logo_pyld_dma,
+ 			    GFP_KERNEL);
+ 
+ 	if (!elsio->u.els_logo.els_logo_pyld) {
+-		sp->free(sp);
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 		return QLA_FUNCTION_FAILED;
+ 	}
+ 
+@@ -2706,7 +2736,8 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+-		sp->free(sp);
++		/* ref: INIT */
++		kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 		return QLA_FUNCTION_FAILED;
+ 	}
+ 
+@@ -2717,7 +2748,8 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 
+ 	wait_for_completion(&elsio->u.els_logo.comp);
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	return rval;
+ }
+ 
+@@ -2850,7 +2882,6 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 	    sp->name, res, sp->handle, fcport->d_id.b24, fcport->port_name);
+ 
+ 	fcport->flags &= ~(FCF_ASYNC_SENT|FCF_ASYNC_ACTIVE);
+-	del_timer(&sp->u.iocb_cmd.timer);
+ 
+ 	if (sp->flags & SRB_WAKEUP_ON_COMP)
+ 		complete(&lio->u.els_plogi.comp);
+@@ -2927,6 +2958,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 					set_bit(ISP_ABORT_NEEDED,
+ 					    &vha->dpc_flags);
+ 					qla2xxx_wake_dpc(vha);
++					break;
+ 				}
+ 				fallthrough;
+ 			default:
+@@ -2936,9 +2968,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 				    fw_status[0], fw_status[1], fw_status[2]);
+ 
+ 				fcport->flags &= ~FCF_ASYNC_SENT;
+-				qla2x00_set_fcport_disc_state(fcport,
+-				    DSC_LOGIN_FAILED);
+-				set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++				qlt_schedule_sess_for_deletion(fcport);
+ 				break;
+ 			}
+ 			break;
+@@ -2950,8 +2980,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 			    fw_status[0], fw_status[1], fw_status[2]);
+ 
+ 			sp->fcport->flags &= ~FCF_ASYNC_SENT;
+-			qla2x00_set_fcport_disc_state(fcport, DSC_LOGIN_FAILED);
+-			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++			qlt_schedule_sess_for_deletion(fcport);
+ 			break;
+ 		}
+ 
+@@ -2960,7 +2989,8 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 			struct srb_iocb *elsio = &sp->u.iocb_cmd;
+ 
+ 			qla2x00_els_dcmd2_free(vha, &elsio->u.els_plogi);
+-			sp->free(sp);
++			/* ref: INIT */
++			kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 			return;
+ 		}
+ 		e->u.iosb.sp = sp;
+@@ -2978,7 +3008,9 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	int rval = QLA_SUCCESS;
+ 	void	*ptr, *resp_ptr;
+ 
+-	/* Alloc SRB structure */
++	/* Alloc SRB structure
++	 * ref: INIT
++	 */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp) {
+ 		ql_log(ql_log_info, vha, 0x70e6,
+@@ -2993,17 +3025,16 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	ql_dbg(ql_dbg_io, vha, 0x3073,
+ 	       "%s Enter: PLOGI portid=%06x\n", __func__, fcport->d_id.b24);
+ 
+-	sp->type = SRB_ELS_DCMD;
+-	sp->name = "ELS_DCMD";
+-	sp->fcport = fcport;
+-
+-	elsio->timeout = qla2x00_els_dcmd2_iocb_timeout;
+ 	if (wait)
+ 		sp->flags = SRB_WAKEUP_ON_COMP;
+ 
+-	qla2x00_init_timer(sp, ELS_DCMD_TIMEOUT + 2);
++	sp->type = SRB_ELS_DCMD;
++	sp->name = "ELS_DCMD";
++	sp->fcport = fcport;
++	qla2x00_init_async_sp(sp, ELS_DCMD_TIMEOUT + 2,
++			     qla2x00_els_dcmd2_sp_done);
++	sp->u.iocb_cmd.timeout = qla2x00_els_dcmd2_iocb_timeout;
+ 
+-	sp->done = qla2x00_els_dcmd2_sp_done;
+ 	elsio->u.els_plogi.tx_size = elsio->u.els_plogi.rx_size = DMA_POOL_SIZE;
+ 
+ 	ptr = elsio->u.els_plogi.els_plogi_pyld =
+@@ -3068,7 +3099,8 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ out:
+ 	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 	qla2x00_els_dcmd2_free(vha, &elsio->u.els_plogi);
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+@@ -3879,8 +3911,15 @@ qla2x00_start_sp(srb_t *sp)
+ 		break;
+ 	}
+ 
+-	if (sp->start_timer)
++	if (sp->start_timer) {
++		/* ref: TMR timer ref
++		 * This code should sit just before the start_iocbs call,
++		 * so that the caller does not need to do a kref_put
++		 * even on failure.
++		 */
++		kref_get(&sp->cmd_kref);
+ 		add_timer(&sp->u.iocb_cmd.timer);
++	}
+ 
+ 	wmb();
+ 	qla2x00_start_iocbs(vha, qp->req);
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index aaf6504570fdd..198b782d77901 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -2498,6 +2498,7 @@ qla24xx_tm_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, void *tsk)
+ 		iocb->u.tmf.data = QLA_FUNCTION_FAILED;
+ 	} else if ((le16_to_cpu(sts->scsi_status) &
+ 	    SS_RESPONSE_INFO_LEN_VALID)) {
++		host_to_fcp_swap(sts->data, sizeof(sts->data));
+ 		if (le32_to_cpu(sts->rsp_data_len) < 4) {
+ 			ql_log(ql_log_warn, fcport->vha, 0x503b,
+ 			    "Async-%s error - hdl=%x not enough response(%d).\n",
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 10d2655ef6767..7f236db058869 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -9,6 +9,12 @@
+ #include <linux/delay.h>
+ #include <linux/gfp.h>
+ 
++#ifdef CONFIG_PPC
++#define IS_PPCARCH      true
++#else
++#define IS_PPCARCH      false
++#endif
++
+ static struct mb_cmd_name {
+ 	uint16_t cmd;
+ 	const char *str;
+@@ -728,6 +734,9 @@ again:
+ 				vha->min_supported_speed =
+ 				    nv->min_supported_speed;
+ 			}
++
++			if (IS_PPCARCH)
++				mcp->mb[11] |= BIT_4;
+ 		}
+ 
+ 		if (ha->flags.exlogins_enabled)
+@@ -3029,8 +3038,7 @@ qla2x00_get_resource_cnts(scsi_qla_host_t *vha)
+ 		ha->orig_fw_iocb_count = mcp->mb[10];
+ 		if (ha->flags.npiv_supported)
+ 			ha->max_npiv_vports = mcp->mb[11];
+-		if (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+-		    IS_QLA28XX(ha))
++		if (IS_QLA81XX(ha) || IS_QLA83XX(ha))
+ 			ha->fw_max_fcf_count = mcp->mb[12];
+ 	}
+ 
+@@ -5621,7 +5629,7 @@ qla2x00_get_data_rate(scsi_qla_host_t *vha)
+ 	mcp->out_mb = MBX_1|MBX_0;
+ 	mcp->in_mb = MBX_2|MBX_1|MBX_0;
+ 	if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
+-		mcp->in_mb |= MBX_3;
++		mcp->in_mb |= MBX_4|MBX_3;
+ 	mcp->tov = MBX_TOV_SECONDS;
+ 	mcp->flags = 0;
+ 	rval = qla2x00_mailbox_command(vha, mcp);
+@@ -6479,23 +6487,21 @@ int qla24xx_send_mb_cmd(struct scsi_qla_host *vha, mbx_cmd_t *mcp)
+ 	if (!vha->hw->flags.fw_started)
+ 		goto done;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+-	sp->type = SRB_MB_IOCB;
+-	sp->name = mb_to_str(mcp->mb[0]);
+-
+ 	c = &sp->u.iocb_cmd;
+-	c->timeout = qla2x00_async_iocb_timeout;
+ 	init_completion(&c->u.mbx.comp);
+ 
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	sp->type = SRB_MB_IOCB;
++	sp->name = mb_to_str(mcp->mb[0]);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_mb_sp_done);
+ 
+ 	memcpy(sp->u.iocb_cmd.u.mbx.out_mb, mcp->mb, SIZEOF_IOCB_MB_REG);
+ 
+-	sp->done = qla2x00_async_mb_sp_done;
+-
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+ 		ql_dbg(ql_dbg_mbx, vha, 0x1018,
+@@ -6527,7 +6533,8 @@ int qla24xx_send_mb_cmd(struct scsi_qla_host *vha, mbx_cmd_t *mcp)
+ 	}
+ 
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index 1c024055f8c50..e6b5c4ccce97b 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -965,6 +965,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ 	if (vp_index == 0 || vp_index >= ha->max_npiv_vports)
+ 		return QLA_PARAMETER_ERROR;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(base_vha, NULL, GFP_KERNEL);
+ 	if (!sp)
+ 		return rval;
+@@ -972,9 +973,8 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ 	sp->type = SRB_CTRL_VP;
+ 	sp->name = "ctrl_vp";
+ 	sp->comp = &comp;
+-	sp->done = qla_ctrlvp_sp_done;
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla_ctrlvp_sp_done);
+ 	sp->u.iocb_cmd.u.ctrlvp.cmd = cmd;
+ 	sp->u.iocb_cmd.u.ctrlvp.vp_index = vp_index;
+ 
+@@ -1008,6 +1008,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ 		break;
+ 	}
+ done:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mr.c b/drivers/scsi/qla2xxx/qla_mr.c
+index 350b0c4346fb6..f726eb8449c5e 100644
+--- a/drivers/scsi/qla2xxx/qla_mr.c
++++ b/drivers/scsi/qla2xxx/qla_mr.c
+@@ -1787,17 +1787,18 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
+ 	struct register_host_info *preg_hsi;
+ 	struct new_utsname *p_sysid = NULL;
+ 
++	/* ref: INIT */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp)
+ 		goto done;
+ 
+ 	sp->type = SRB_FXIOCB_DCMD;
+ 	sp->name = "fxdisc";
++	qla2x00_init_async_sp(sp, FXDISC_TIMEOUT,
++			      qla2x00_fxdisc_sp_done);
++	sp->u.iocb_cmd.timeout = qla2x00_fxdisc_iocb_timeout;
+ 
+ 	fdisc = &sp->u.iocb_cmd;
+-	fdisc->timeout = qla2x00_fxdisc_iocb_timeout;
+-	qla2x00_init_timer(sp, FXDISC_TIMEOUT);
+-
+ 	switch (fx_type) {
+ 	case FXDISC_GET_CONFIG_INFO:
+ 	fdisc->u.fxiocb.flags =
+@@ -1898,7 +1899,6 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
+ 	}
+ 
+ 	fdisc->u.fxiocb.req_func_type = cpu_to_le16(fx_type);
+-	sp->done = qla2x00_fxdisc_sp_done;
+ 
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS)
+@@ -1974,7 +1974,8 @@ done_unmap_req:
+ 		dma_free_coherent(&ha->pdev->dev, fdisc->u.fxiocb.req_len,
+ 		    fdisc->u.fxiocb.req_addr, fdisc->u.fxiocb.req_dma_handle);
+ done_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 138ffdb5c92cd..fd4c1167b2ced 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -37,6 +37,11 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
+ 		(fcport->nvme_flag & NVME_FLAG_REGISTERED))
+ 		return 0;
+ 
++	if (atomic_read(&fcport->state) == FCS_ONLINE)
++		return 0;
++
++	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ 	fcport->nvme_flag &= ~NVME_FLAG_RESETTING;
+ 
+ 	memset(&req, 0, sizeof(struct nvme_fc_port_info));
+@@ -167,6 +172,18 @@ out:
+ 	qla2xxx_rel_qpair_sp(sp->qpair, sp);
+ }
+ 
++static void qla_nvme_ls_unmap(struct srb *sp, struct nvmefc_ls_req *fd)
++{
++	if (sp->flags & SRB_DMA_VALID) {
++		struct srb_iocb *nvme = &sp->u.iocb_cmd;
++		struct qla_hw_data *ha = sp->fcport->vha->hw;
++
++		dma_unmap_single(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
++				 fd->rqstlen, DMA_TO_DEVICE);
++		sp->flags &= ~SRB_DMA_VALID;
++	}
++}
++
+ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ {
+ 	struct srb *sp = container_of(kref, struct srb, cmd_kref);
+@@ -183,6 +200,8 @@ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ 	spin_unlock_irqrestore(&priv->cmd_lock, flags);
+ 
+ 	fd = priv->fd;
++
++	qla_nvme_ls_unmap(sp, fd);
+ 	fd->done(fd, priv->comp_status);
+ out:
+ 	qla2x00_rel_sp(sp);
+@@ -353,6 +372,8 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ 	dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+ 	    fd->rqstlen, DMA_TO_DEVICE);
+ 
++	sp->flags |= SRB_DMA_VALID;
++
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+ 		ql_log(ql_log_warn, vha, 0x700e,
+@@ -360,6 +381,7 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ 		wake_up(&sp->nvme_ls_waitq);
+ 		sp->priv = NULL;
+ 		priv->sp = NULL;
++		qla_nvme_ls_unmap(sp, fd);
+ 		qla2x00_rel_sp(sp);
+ 		return rval;
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index abcd309172638..6dc2189badd33 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -728,7 +728,8 @@ void qla2x00_sp_compl(srb_t *sp, int res)
+ 	struct scsi_cmnd *cmd = GET_CMD_SP(sp);
+ 	struct completion *comp = sp->comp;
+ 
+-	sp->free(sp);
++	/* kref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	cmd->result = res;
+ 	CMD_SP(cmd) = NULL;
+ 	scsi_done(cmd);
+@@ -819,7 +820,8 @@ void qla2xxx_qpair_sp_compl(srb_t *sp, int res)
+ 	struct scsi_cmnd *cmd = GET_CMD_SP(sp);
+ 	struct completion *comp = sp->comp;
+ 
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	cmd->result = res;
+ 	CMD_SP(cmd) = NULL;
+ 	scsi_done(cmd);
+@@ -919,6 +921,7 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 		goto qc24_target_busy;
+ 
+ 	sp = scsi_cmd_priv(cmd);
++	/* ref: INIT */
+ 	qla2xxx_init_sp(sp, vha, vha->hw->base_qpair, fcport);
+ 
+ 	sp->u.scmd.cmd = cmd;
+@@ -938,7 +941,8 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 	return 0;
+ 
+ qc24_host_busy_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 
+ qc24_target_busy:
+ 	return SCSI_MLQUEUE_TARGET_BUSY;
+@@ -1008,6 +1012,7 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ 		goto qc24_target_busy;
+ 
+ 	sp = scsi_cmd_priv(cmd);
++	/* ref: INIT */
+ 	qla2xxx_init_sp(sp, vha, qpair, fcport);
+ 
+ 	sp->u.scmd.cmd = cmd;
+@@ -1026,7 +1031,8 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ 	return 0;
+ 
+ qc24_host_busy_free_sp:
+-	sp->free(sp);
++	/* ref: INIT */
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 
+ qc24_target_busy:
+ 	return SCSI_MLQUEUE_TARGET_BUSY;
+@@ -3748,8 +3754,7 @@ qla2x00_unmap_iobases(struct qla_hw_data *ha)
+ 		if (ha->mqiobase)
+ 			iounmap(ha->mqiobase);
+ 
+-		if ((IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) &&
+-		    ha->msixbase)
++		if (ha->msixbase)
+ 			iounmap(ha->msixbase);
+ 	}
+ }
+@@ -3891,6 +3896,8 @@ qla24xx_free_purex_list(struct purex_list *list)
+ 	spin_lock_irqsave(&list->lock, flags);
+ 	list_for_each_entry_safe(item, next, &list->head, list) {
+ 		list_del(&item->list);
++		if (item == &item->vha->default_item)
++			continue;
+ 		kfree(item);
+ 	}
+ 	spin_unlock_irqrestore(&list->lock, flags);
+@@ -5526,6 +5533,11 @@ void qla2x00_relogin(struct scsi_qla_host *vha)
+ 					memset(&ea, 0, sizeof(ea));
+ 					ea.fcport = fcport;
+ 					qla24xx_handle_relogin_event(vha, &ea);
++				} else if (vha->hw->current_topology ==
++					 ISP_CFG_NL &&
++					IS_QLA2XXX_MIDTYPE(vha->hw)) {
++					(void)qla24xx_fcport_handle_login(vha,
++									fcport);
+ 				} else if (vha->hw->current_topology ==
+ 				    ISP_CFG_NL) {
+ 					fcport->login_retry--;
+@@ -7199,7 +7211,7 @@ skip:
+ 	return do_heartbeat;
+ }
+ 
+-static void qla_heart_beat(struct scsi_qla_host *vha)
++static void qla_heart_beat(struct scsi_qla_host *vha, u16 dpc_started)
+ {
+ 	struct qla_hw_data *ha = vha->hw;
+ 
+@@ -7209,8 +7221,19 @@ static void qla_heart_beat(struct scsi_qla_host *vha)
+ 	if (vha->hw->flags.eeh_busy || qla2x00_chip_is_down(vha))
+ 		return;
+ 
+-	if (qla_do_heartbeat(vha))
++	/*
++	 * The dpc thread cannot run if the heartbeat is running at the
++	 * same time. We also do not want to starve the heartbeat task.
++	 * Therefore, run the heartbeat task at least once every 5 seconds.
++	 */
++	if (dpc_started &&
++	    time_before(jiffies, ha->last_heartbeat_run_jiffies + 5 * HZ))
++		return;
++
++	if (qla_do_heartbeat(vha)) {
++		ha->last_heartbeat_run_jiffies = jiffies;
+ 		queue_work(ha->wq, &ha->heartbeat_work);
++	}
+ }
+ 
+ /**************************************************************************
+@@ -7401,6 +7424,8 @@ qla2x00_timer(struct timer_list *t)
+ 		start_dpc++;
+ 	}
+ 
++	/* borrowing w to signify dpc will run */
++	w = 0;
+ 	/* Schedule the DPC routine if needed */
+ 	if ((test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags) ||
+ 	    test_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags) ||
+@@ -7433,9 +7458,10 @@ qla2x00_timer(struct timer_list *t)
+ 		    test_bit(RELOGIN_NEEDED, &vha->dpc_flags),
+ 		    test_bit(PROCESS_PUREX_IOCB, &vha->dpc_flags));
+ 		qla2xxx_wake_dpc(vha);
++		w = 1;
+ 	}
+ 
+-	qla_heart_beat(vha);
++	qla_heart_beat(vha, w);
+ 
+ 	qla2x00_restart_timer(vha, WATCH_INTERVAL);
+ }
+@@ -7633,7 +7659,7 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+ 
+ 	switch (state) {
+ 	case pci_channel_io_normal:
+-		ha->flags.eeh_busy = 0;
++		qla_pci_set_eeh_busy(vha);
+ 		if (ql2xmqsupport || ql2xnvmeenable) {
+ 			set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
+ 			qla2xxx_wake_dpc(vha);
+@@ -7674,9 +7700,16 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
+ 	       "mmio enabled\n");
+ 
+ 	ha->pci_error_state = QLA_PCI_MMIO_ENABLED;
++
+ 	if (IS_QLA82XX(ha))
+ 		return PCI_ERS_RESULT_RECOVERED;
+ 
++	if (qla2x00_isp_reg_stat(ha)) {
++		ql_log(ql_log_info, base_vha, 0x803f,
++		    "During mmio enabled, PCI/Register disconnect still detected.\n");
++		goto out;
++	}
++
+ 	spin_lock_irqsave(&ha->hardware_lock, flags);
+ 	if (IS_QLA2100(ha) || IS_QLA2200(ha)){
+ 		stat = rd_reg_word(&reg->hccr);
+@@ -7698,6 +7731,7 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
+ 		    "RISC paused -- mmio_enabled, Dumping firmware.\n");
+ 		qla2xxx_dump_fw(base_vha);
+ 	}
++out:
+ 	/* set PCI_ERS_RESULT_NEED_RESET to trigger call to qla2xxx_pci_slot_reset */
+ 	ql_dbg(ql_dbg_aer, base_vha, 0x600d,
+ 	       "mmio enabled returning.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index a0aeba69513d4..c092a6b1ced4f 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -844,7 +844,7 @@ qla2xxx_get_flt_info(scsi_qla_host_t *vha, uint32_t flt_addr)
+ 				ha->flt_region_nvram = start;
+ 			break;
+ 		case FLT_REG_IMG_PRI_27XX:
+-			if (IS_QLA27XX(ha) && !IS_QLA28XX(ha))
++			if (IS_QLA27XX(ha) || IS_QLA28XX(ha))
+ 				ha->flt_region_img_status_pri = start;
+ 			break;
+ 		case FLT_REG_IMG_SEC_27XX:
+@@ -1356,7 +1356,7 @@ next:
+ 		    flash_data_addr(ha, faddr), le32_to_cpu(*dwptr));
+ 		if (ret) {
+ 			ql_dbg(ql_dbg_user, vha, 0x7006,
+-			    "Failed slopw write %x (%x)\n", faddr, *dwptr);
++			    "Failed slow write %x (%x)\n", faddr, *dwptr);
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 8993d438e0b72..b109716d44fb7 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -620,7 +620,7 @@ static void qla2x00_async_nack_sp_done(srb_t *sp, int res)
+ 	}
+ 	spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 
+-	sp->free(sp);
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+ 
+ int qla24xx_async_notify_ack(scsi_qla_host_t *vha, fc_port_t *fcport,
+@@ -656,12 +656,10 @@ int qla24xx_async_notify_ack(scsi_qla_host_t *vha, fc_port_t *fcport,
+ 
+ 	sp->type = type;
+ 	sp->name = "nack";
+-
+-	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+-	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha)+2);
++	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_nack_sp_done);
+ 
+ 	sp->u.iocb_cmd.u.nack.ntfy = ntfy;
+-	sp->done = qla2x00_async_nack_sp_done;
+ 
+ 	ql_dbg(ql_dbg_disc, vha, 0x20f4,
+ 	    "Async-%s %8phC hndl %x %s\n",
+@@ -674,7 +672,7 @@ int qla24xx_async_notify_ack(scsi_qla_host_t *vha, fc_port_t *fcport,
+ 	return rval;
+ 
+ done_free_sp:
+-	sp->free(sp);
++	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ 	fcport->flags &= ~FCF_ASYNC_SENT;
+ 	return rval;
+@@ -3320,6 +3318,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ 			"RESET-RSP online/active/old-count/new-count = %d/%d/%d/%d.\n",
+ 			vha->flags.online, qla2x00_reset_active(vha),
+ 			cmd->reset_count, qpair->chip_reset);
++		res = 0;
+ 		goto out_unmap_unlock;
+ 	}
+ 
+@@ -7221,8 +7220,7 @@ qlt_probe_one_stage1(struct scsi_qla_host *base_vha, struct qla_hw_data *ha)
+ 	if (!QLA_TGT_MODE_ENABLED())
+ 		return;
+ 
+-	if  ((ql2xenablemsix == 0) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+-	    IS_QLA28XX(ha)) {
++	if  (ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+ 		ISP_ATIO_Q_IN(base_vha) = &ha->mqiobase->isp25mq.atio_q_in;
+ 		ISP_ATIO_Q_OUT(base_vha) = &ha->mqiobase->isp25mq.atio_q_out;
+ 	} else {
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index 26c13a953b975..b0a74b036cf4b 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -435,8 +435,13 @@ qla27xx_fwdt_entry_t266(struct scsi_qla_host *vha,
+ {
+ 	ql_dbg(ql_dbg_misc, vha, 0xd20a,
+ 	    "%s: reset risc [%lx]\n", __func__, *len);
+-	if (buf)
+-		WARN_ON_ONCE(qla24xx_soft_reset(vha->hw) != QLA_SUCCESS);
++	if (buf) {
++		if (qla24xx_soft_reset(vha->hw) != QLA_SUCCESS) {
++			ql_dbg(ql_dbg_async, vha, 0x5001,
++			    "%s: unable to soft reset\n", __func__);
++			return INVALID_ENTRY;
++		}
++	}
+ 
+ 	return qla27xx_next_entry(ent);
+ }
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 2371edbc3af4b..d327e1fa5c0b7 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -483,8 +483,13 @@ static void scsi_report_sense(struct scsi_device *sdev,
+ 
+ 		if (sshdr->asc == 0x29) {
+ 			evt_type = SDEV_EVT_POWER_ON_RESET_OCCURRED;
+-			sdev_printk(KERN_WARNING, sdev,
+-				    "Power-on or device reset occurred\n");
++			/*
++			 * Do not print message if it is an expected side-effect
++			 * of runtime PM.
++			 */
++			if (!sdev->silence_suspend)
++				sdev_printk(KERN_WARNING, sdev,
++					    "Power-on or device reset occurred\n");
+ 		}
+ 
+ 		if (sshdr->asc == 0x2a && sshdr->ascq == 0x01) {
+diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
+index 60e406bcf42a9..a2524106206db 100644
+--- a/drivers/scsi/scsi_transport_fc.c
++++ b/drivers/scsi/scsi_transport_fc.c
+@@ -34,7 +34,7 @@ static int fc_bsg_hostadd(struct Scsi_Host *, struct fc_host_attrs *);
+ static int fc_bsg_rportadd(struct Scsi_Host *, struct fc_rport *);
+ static void fc_bsg_remove(struct request_queue *);
+ static void fc_bsg_goose_queue(struct fc_rport *);
+-static void fc_li_stats_update(struct fc_fn_li_desc *li_desc,
++static void fc_li_stats_update(u16 event_type,
+ 			       struct fc_fpin_stats *stats);
+ static void fc_delivery_stats_update(u32 reason_code,
+ 				     struct fc_fpin_stats *stats);
+@@ -670,42 +670,34 @@ fc_find_rport_by_wwpn(struct Scsi_Host *shost, u64 wwpn)
+ EXPORT_SYMBOL(fc_find_rport_by_wwpn);
+ 
+ static void
+-fc_li_stats_update(struct fc_fn_li_desc *li_desc,
++fc_li_stats_update(u16 event_type,
+ 		   struct fc_fpin_stats *stats)
+ {
+-	stats->li += be32_to_cpu(li_desc->event_count);
+-	switch (be16_to_cpu(li_desc->event_type)) {
++	stats->li++;
++	switch (event_type) {
+ 	case FPIN_LI_UNKNOWN:
+-		stats->li_failure_unknown +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_failure_unknown++;
+ 		break;
+ 	case FPIN_LI_LINK_FAILURE:
+-		stats->li_link_failure_count +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_link_failure_count++;
+ 		break;
+ 	case FPIN_LI_LOSS_OF_SYNC:
+-		stats->li_loss_of_sync_count +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_loss_of_sync_count++;
+ 		break;
+ 	case FPIN_LI_LOSS_OF_SIG:
+-		stats->li_loss_of_signals_count +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_loss_of_signals_count++;
+ 		break;
+ 	case FPIN_LI_PRIM_SEQ_ERR:
+-		stats->li_prim_seq_err_count +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_prim_seq_err_count++;
+ 		break;
+ 	case FPIN_LI_INVALID_TX_WD:
+-		stats->li_invalid_tx_word_count +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_invalid_tx_word_count++;
+ 		break;
+ 	case FPIN_LI_INVALID_CRC:
+-		stats->li_invalid_crc_count +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_invalid_crc_count++;
+ 		break;
+ 	case FPIN_LI_DEVICE_SPEC:
+-		stats->li_device_specific +=
+-		    be32_to_cpu(li_desc->event_count);
++		stats->li_device_specific++;
+ 		break;
+ 	}
+ }
+@@ -767,6 +759,7 @@ fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
+ 	struct fc_rport *attach_rport = NULL;
+ 	struct fc_host_attrs *fc_host = shost_to_fc_host(shost);
+ 	struct fc_fn_li_desc *li_desc = (struct fc_fn_li_desc *)tlv;
++	u16 event_type = be16_to_cpu(li_desc->event_type);
+ 	u64 wwpn;
+ 
+ 	rport = fc_find_rport_by_wwpn(shost,
+@@ -775,7 +768,7 @@ fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
+ 	    (rport->roles & FC_PORT_ROLE_FCP_TARGET ||
+ 	     rport->roles & FC_PORT_ROLE_NVME_TARGET)) {
+ 		attach_rport = rport;
+-		fc_li_stats_update(li_desc, &attach_rport->fpin_stats);
++		fc_li_stats_update(event_type, &attach_rport->fpin_stats);
+ 	}
+ 
+ 	if (be32_to_cpu(li_desc->pname_count) > 0) {
+@@ -789,14 +782,14 @@ fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
+ 			    rport->roles & FC_PORT_ROLE_NVME_TARGET)) {
+ 				if (rport == attach_rport)
+ 					continue;
+-				fc_li_stats_update(li_desc,
++				fc_li_stats_update(event_type,
+ 						   &rport->fpin_stats);
+ 			}
+ 		}
+ 	}
+ 
+ 	if (fc_host->port_name == be64_to_cpu(li_desc->attached_wwpn))
+-		fc_li_stats_update(li_desc, &fc_host->fpin_stats);
++		fc_li_stats_update(event_type, &fc_host->fpin_stats);
+ }
+ 
+ /*
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 65875a598d629..dfca484dd0c40 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3754,7 +3754,8 @@ static int sd_suspend_common(struct device *dev, bool ignore_stop_errors)
+ 		return 0;
+ 
+ 	if (sdkp->WCE && sdkp->media_present) {
+-		sd_printk(KERN_NOTICE, sdkp, "Synchronizing SCSI cache\n");
++		if (!sdkp->device->silence_suspend)
++			sd_printk(KERN_NOTICE, sdkp, "Synchronizing SCSI cache\n");
+ 		ret = sd_sync_cache(sdkp, &sshdr);
+ 
+ 		if (ret) {
+@@ -3776,7 +3777,8 @@ static int sd_suspend_common(struct device *dev, bool ignore_stop_errors)
+ 	}
+ 
+ 	if (sdkp->device->manage_start_stop) {
+-		sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
++		if (!sdkp->device->silence_suspend)
++			sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
+ 		/* an error is not worth aborting a system sleep */
+ 		ret = sd_start_stop_device(sdkp, 0);
+ 		if (ignore_stop_errors)
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 9e7aa3a2fdf54..4faea70561b80 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -585,7 +585,12 @@ static void ufshcd_print_pwr_info(struct ufs_hba *hba)
+ 		"INVALID MODE",
+ 	};
+ 
+-	dev_err(hba->dev, "%s:[RX, TX]: gear=[%d, %d], lane[%d, %d], pwr[%s, %s], rate = %d\n",
++	/*
++	 * Using dev_dbg to avoid messages during runtime PM to avoid
++	 * never-ending cycles of messages written back to storage by user space
++	 * causing runtime resume, causing more messages and so on.
++	 */
++	dev_dbg(hba->dev, "%s:[RX, TX]: gear=[%d, %d], lane[%d, %d], pwr[%s, %s], rate = %d\n",
+ 		 __func__,
+ 		 hba->pwr_info.gear_rx, hba->pwr_info.gear_tx,
+ 		 hba->pwr_info.lane_rx, hba->pwr_info.lane_tx,
+@@ -4999,6 +5004,12 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
+ 		pm_runtime_get_noresume(&sdev->sdev_gendev);
+ 	else if (ufshcd_is_rpm_autosuspend_allowed(hba))
+ 		sdev->rpm_autosuspend = 1;
++	/*
++	 * Do not print messages during runtime PM to avoid never-ending cycles
++	 * of messages written back to storage by user space causing runtime
++	 * resume, causing more messages and so on.
++	 */
++	sdev->silence_suspend = 1;
+ 
+ 	ufshcd_crypto_register(hba, q);
+ 
+@@ -7285,7 +7296,13 @@ static u32 ufshcd_find_max_sup_active_icc_level(struct ufs_hba *hba,
+ 
+ 	if (!hba->vreg_info.vcc || !hba->vreg_info.vccq ||
+ 						!hba->vreg_info.vccq2) {
+-		dev_err(hba->dev,
++		/*
++		 * Using dev_dbg to avoid messages during runtime PM to avoid
++		 * never-ending cycles of messages written back to storage by
++		 * user space causing runtime resume, causing more messages and
++		 * so on.
++		 */
++		dev_dbg(hba->dev,
+ 			"%s: Regulator capability was not set, actvIccLevel=%d",
+ 							__func__, icc_level);
+ 		goto out;
+diff --git a/drivers/soc/mediatek/mtk-pm-domains.c b/drivers/soc/mediatek/mtk-pm-domains.c
+index b762bc40f56bd..afd2fd74802d2 100644
+--- a/drivers/soc/mediatek/mtk-pm-domains.c
++++ b/drivers/soc/mediatek/mtk-pm-domains.c
+@@ -443,6 +443,9 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ 	pd->genpd.power_off = scpsys_power_off;
+ 	pd->genpd.power_on = scpsys_power_on;
+ 
++	if (MTK_SCPD_CAPS(pd, MTK_SCPD_ACTIVE_WAKEUP))
++		pd->genpd.flags |= GENPD_FLAG_ACTIVE_WAKEUP;
++
+ 	if (MTK_SCPD_CAPS(pd, MTK_SCPD_KEEP_DEFAULT_OFF))
+ 		pm_genpd_init(&pd->genpd, NULL, true);
+ 	else
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index d2dacbbaafbd1..97fd24c178f8d 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -206,6 +206,7 @@ struct ocmem *of_get_ocmem(struct device *dev)
+ 	ocmem = platform_get_drvdata(pdev);
+ 	if (!ocmem) {
+ 		dev_err(dev, "Cannot get ocmem\n");
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-ENODEV);
+ 	}
+ 	return ocmem;
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index 34acf58bbb0d6..55c156aaf68a9 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -451,7 +451,11 @@ struct qmp *qmp_get(struct device *dev)
+ 
+ 	qmp = platform_get_drvdata(pdev);
+ 
+-	return qmp ? qmp : ERR_PTR(-EPROBE_DEFER);
++	if (!qmp) {
++		put_device(&pdev->dev);
++		return ERR_PTR(-EPROBE_DEFER);
++	}
++	return qmp;
+ }
+ EXPORT_SYMBOL(qmp_get);
+ 
+@@ -497,7 +501,7 @@ static int qmp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	ret = devm_request_irq(&pdev->dev, irq, qmp_intr, IRQF_ONESHOT,
++	ret = devm_request_irq(&pdev->dev, irq, qmp_intr, 0,
+ 			       "aoss-qmp", qmp);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to request interrupt\n");
+diff --git a/drivers/soc/qcom/rpmpd.c b/drivers/soc/qcom/rpmpd.c
+index 4f69fb9b2e0e7..69b587822e39c 100644
+--- a/drivers/soc/qcom/rpmpd.c
++++ b/drivers/soc/qcom/rpmpd.c
+@@ -570,6 +570,9 @@ static int rpmpd_probe(struct platform_device *pdev)
+ 
+ 	data->domains = devm_kcalloc(&pdev->dev, num, sizeof(*data->domains),
+ 				     GFP_KERNEL);
++	if (!data->domains)
++		return -ENOMEM;
++
+ 	data->num_domains = num;
+ 
+ 	for (i = 0; i < num; i++) {
+diff --git a/drivers/soc/ti/wkup_m3_ipc.c b/drivers/soc/ti/wkup_m3_ipc.c
+index 72386bd393fed..2f03ced0f4113 100644
+--- a/drivers/soc/ti/wkup_m3_ipc.c
++++ b/drivers/soc/ti/wkup_m3_ipc.c
+@@ -450,9 +450,9 @@ static int wkup_m3_ipc_probe(struct platform_device *pdev)
+ 		return PTR_ERR(m3_ipc->ipc_mem_base);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq) {
++	if (irq < 0) {
+ 		dev_err(&pdev->dev, "no irq resource\n");
+-		return -ENXIO;
++		return irq;
+ 	}
+ 
+ 	ret = devm_request_irq(dev, irq, wkup_m3_txev_handler,
+diff --git a/drivers/soundwire/dmi-quirks.c b/drivers/soundwire/dmi-quirks.c
+index 0ca2a3e3a02e2..747983743a14b 100644
+--- a/drivers/soundwire/dmi-quirks.c
++++ b/drivers/soundwire/dmi-quirks.c
+@@ -59,7 +59,7 @@ static const struct dmi_system_id adr_remap_quirk_table[] = {
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Convertible"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Conv"),
+ 		},
+ 		.driver_data = (void *)intel_tgl_bios,
+ 	},
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 78037ffdb09ba..f72d36654ac2a 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -448,8 +448,8 @@ static void intel_shim_wake(struct sdw_intel *sdw, bool wake_enable)
+ 
+ 		/* Clear wake status */
+ 		wake_sts = intel_readw(shim, SDW_SHIM_WAKESTS);
+-		wake_sts |= (SDW_SHIM_WAKEEN_ENABLE << link_id);
+-		intel_writew(shim, SDW_SHIM_WAKESTS_STATUS, wake_sts);
++		wake_sts |= (SDW_SHIM_WAKESTS_STATUS << link_id);
++		intel_writew(shim, SDW_SHIM_WAKESTS, wake_sts);
+ 	}
+ 	mutex_unlock(sdw->link_res->shim_lock);
+ }
+diff --git a/drivers/spi/spi-fsi.c b/drivers/spi/spi-fsi.c
+index b6c7467f0b590..d403a7a3021d0 100644
+--- a/drivers/spi/spi-fsi.c
++++ b/drivers/spi/spi-fsi.c
+@@ -25,6 +25,7 @@
+ 
+ #define SPI_FSI_BASE			0x70000
+ #define SPI_FSI_INIT_TIMEOUT_MS		1000
++#define SPI_FSI_STATUS_TIMEOUT_MS	100
+ #define SPI_FSI_MAX_RX_SIZE		8
+ #define SPI_FSI_MAX_TX_SIZE		40
+ 
+@@ -299,6 +300,7 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ 				 struct spi_transfer *transfer)
+ {
+ 	int rc = 0;
++	unsigned long end;
+ 	u64 status = 0ULL;
+ 
+ 	if (transfer->tx_buf) {
+@@ -315,10 +317,14 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ 			if (rc)
+ 				return rc;
+ 
++			end = jiffies + msecs_to_jiffies(SPI_FSI_STATUS_TIMEOUT_MS);
+ 			do {
+ 				rc = fsi_spi_status(ctx, &status, "TX");
+ 				if (rc)
+ 					return rc;
++
++				if (time_after(jiffies, end))
++					return -ETIMEDOUT;
+ 			} while (status & SPI_FSI_STATUS_TDR_FULL);
+ 
+ 			sent += nb;
+@@ -329,10 +335,14 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ 		u8 *rx = transfer->rx_buf;
+ 
+ 		while (transfer->len > recv) {
++			end = jiffies + msecs_to_jiffies(SPI_FSI_STATUS_TIMEOUT_MS);
+ 			do {
+ 				rc = fsi_spi_status(ctx, &status, "RX");
+ 				if (rc)
+ 					return rc;
++
++				if (time_after(jiffies, end))
++					return -ETIMEDOUT;
+ 			} while (!(status & SPI_FSI_STATUS_RDR_FULL));
+ 
+ 			rc = fsi_spi_read_reg(ctx, SPI_FSI_DATA_RX, &in);
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 753bd313e6fda..2ca19b01948a2 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -43,8 +43,11 @@
+ #define SPI_CFG1_PACKET_LOOP_OFFSET       8
+ #define SPI_CFG1_PACKET_LENGTH_OFFSET     16
+ #define SPI_CFG1_GET_TICK_DLY_OFFSET      29
++#define SPI_CFG1_GET_TICK_DLY_OFFSET_V1   30
+ 
+ #define SPI_CFG1_GET_TICK_DLY_MASK        0xe0000000
++#define SPI_CFG1_GET_TICK_DLY_MASK_V1     0xc0000000
++
+ #define SPI_CFG1_CS_IDLE_MASK             0xff
+ #define SPI_CFG1_PACKET_LOOP_MASK         0xff00
+ #define SPI_CFG1_PACKET_LENGTH_MASK       0x3ff0000
+@@ -346,9 +349,15 @@ static int mtk_spi_prepare_message(struct spi_master *master,
+ 
+ 	/* tick delay */
+ 	reg_val = readl(mdata->base + SPI_CFG1_REG);
+-	reg_val &= ~SPI_CFG1_GET_TICK_DLY_MASK;
+-	reg_val |= ((chip_config->tick_delay & 0x7)
+-		<< SPI_CFG1_GET_TICK_DLY_OFFSET);
++	if (mdata->dev_comp->enhance_timing) {
++		reg_val &= ~SPI_CFG1_GET_TICK_DLY_MASK;
++		reg_val |= ((chip_config->tick_delay & 0x7)
++			    << SPI_CFG1_GET_TICK_DLY_OFFSET);
++	} else {
++		reg_val &= ~SPI_CFG1_GET_TICK_DLY_MASK_V1;
++		reg_val |= ((chip_config->tick_delay & 0x3)
++			    << SPI_CFG1_GET_TICK_DLY_OFFSET_V1);
++	}
+ 	writel(reg_val, mdata->base + SPI_CFG1_REG);
+ 
+ 	/* set hw cs timing */
+diff --git a/drivers/spi/spi-mxic.c b/drivers/spi/spi-mxic.c
+index 45889947afed8..03fce4493aa79 100644
+--- a/drivers/spi/spi-mxic.c
++++ b/drivers/spi/spi-mxic.c
+@@ -304,25 +304,21 @@ static int mxic_spi_data_xfer(struct mxic_spi *mxic, const void *txbuf,
+ 
+ 		writel(data, mxic->regs + TXD(nbytes % 4));
+ 
++		ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
++					 sts & INT_TX_EMPTY, 0, USEC_PER_SEC);
++		if (ret)
++			return ret;
++
++		ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
++					 sts & INT_RX_NOT_EMPTY, 0,
++					 USEC_PER_SEC);
++		if (ret)
++			return ret;
++
++		data = readl(mxic->regs + RXD);
+ 		if (rxbuf) {
+-			ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
+-						 sts & INT_TX_EMPTY, 0,
+-						 USEC_PER_SEC);
+-			if (ret)
+-				return ret;
+-
+-			ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
+-						 sts & INT_RX_NOT_EMPTY, 0,
+-						 USEC_PER_SEC);
+-			if (ret)
+-				return ret;
+-
+-			data = readl(mxic->regs + RXD);
+ 			data >>= (8 * (4 - nbytes));
+ 			memcpy(rxbuf + pos, &data, nbytes);
+-			WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY);
+-		} else {
+-			readl(mxic->regs + RXD);
+ 		}
+ 		WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY);
+ 
+diff --git a/drivers/spi/spi-pxa2xx-pci.c b/drivers/spi/spi-pxa2xx-pci.c
+index 2e134eb4bd2c9..6502fda6243e0 100644
+--- a/drivers/spi/spi-pxa2xx-pci.c
++++ b/drivers/spi/spi-pxa2xx-pci.c
+@@ -76,14 +76,23 @@ static bool lpss_dma_filter(struct dma_chan *chan, void *param)
+ 	return true;
+ }
+ 
++static void lpss_dma_put_device(void *dma_dev)
++{
++	pci_dev_put(dma_dev);
++}
++
+ static int lpss_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ {
+ 	struct pci_dev *dma_dev;
++	int ret;
+ 
+ 	c->num_chipselect = 1;
+ 	c->max_clk_rate = 50000000;
+ 
+ 	dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
++	ret = devm_add_action_or_reset(&dev->dev, lpss_dma_put_device, dma_dev);
++	if (ret)
++		return ret;
+ 
+ 	if (c->tx_param) {
+ 		struct dw_dma_slave *slave = c->tx_param;
+@@ -107,8 +116,9 @@ static int lpss_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ 
+ static int mrfld_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ {
+-	struct pci_dev *dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(21, 0));
+ 	struct dw_dma_slave *tx, *rx;
++	struct pci_dev *dma_dev;
++	int ret;
+ 
+ 	switch (PCI_FUNC(dev->devfn)) {
+ 	case 0:
+@@ -133,6 +143,11 @@ static int mrfld_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ 		return -ENODEV;
+ 	}
+ 
++	dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(21, 0));
++	ret = devm_add_action_or_reset(&dev->dev, lpss_dma_put_device, dma_dev);
++	if (ret)
++		return ret;
++
+ 	tx = c->tx_param;
+ 	tx->dma_dev = &dma_dev->dev;
+ 
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index e9de1d958bbd2..8f345247a8c32 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -1352,6 +1352,10 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 	tspi->phys = r->start;
+ 
+ 	spi_irq = platform_get_irq(pdev, 0);
++	if (spi_irq < 0) {
++		ret = spi_irq;
++		goto exit_free_master;
++	}
+ 	tspi->irq = spi_irq;
+ 
+ 	tspi->clk = devm_clk_get(&pdev->dev, "spi");
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index e8204e155484d..746bbafa13781 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1003,14 +1003,8 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 	struct resource		*r;
+ 	int ret, spi_irq;
+ 	const struct tegra_slink_chip_data *cdata = NULL;
+-	const struct of_device_id *match;
+ 
+-	match = of_match_device(tegra_slink_of_match, &pdev->dev);
+-	if (!match) {
+-		dev_err(&pdev->dev, "Error: No device match found\n");
+-		return -ENODEV;
+-	}
+-	cdata = match->data;
++	cdata = of_device_get_match_data(&pdev->dev);
+ 
+ 	master = spi_alloc_master(&pdev->dev, sizeof(*tspi));
+ 	if (!master) {
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index c0f9a75b44b5d..27e6ce4fbd35f 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -1249,6 +1249,8 @@ static int tegra_qspi_probe(struct platform_device *pdev)
+ 
+ 	tqspi->phys = r->start;
+ 	qspi_irq = platform_get_irq(pdev, 0);
++	if (qspi_irq < 0)
++		return qspi_irq;
+ 	tqspi->irq = qspi_irq;
+ 
+ 	tqspi->clk = devm_clk_get(&pdev->dev, "qspi");
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index 328b6559bb19a..2b5afae8ff7fc 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1172,7 +1172,10 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_all;
+ 	}
+ 
+-	dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++	if (ret)
++		goto clk_dis_all;
++
+ 	ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
+ 	ctlr->num_chipselect = GQSPI_DEFAULT_NUM_CS;
+ 	ctlr->mem_ops = &zynqmp_qspi_mem_ops;
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 8ba87b7f8f1a8..4e459516b74c0 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1021,10 +1021,10 @@ int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
+ 	int i, ret;
+ 
+ 	if (vmalloced_buf || kmap_buf) {
+-		desc_len = min_t(int, max_seg_size, PAGE_SIZE);
++		desc_len = min_t(unsigned long, max_seg_size, PAGE_SIZE);
+ 		sgs = DIV_ROUND_UP(len + offset_in_page(buf), desc_len);
+ 	} else if (virt_addr_valid(buf)) {
+-		desc_len = min_t(int, max_seg_size, ctlr->max_dma_len);
++		desc_len = min_t(size_t, max_seg_size, ctlr->max_dma_len);
+ 		sgs = DIV_ROUND_UP(len, desc_len);
+ 	} else {
+ 		return -EINVAL;
+diff --git a/drivers/staging/iio/adc/ad7280a.c b/drivers/staging/iio/adc/ad7280a.c
+index fef0055b89909..20183b2ea1279 100644
+--- a/drivers/staging/iio/adc/ad7280a.c
++++ b/drivers/staging/iio/adc/ad7280a.c
+@@ -107,9 +107,9 @@
+ static unsigned int ad7280a_devaddr(unsigned int addr)
+ {
+ 	return ((addr & 0x1) << 4) |
+-	       ((addr & 0x2) << 3) |
++	       ((addr & 0x2) << 2) |
+ 	       (addr & 0x4) |
+-	       ((addr & 0x8) >> 3) |
++	       ((addr & 0x8) >> 2) |
+ 	       ((addr & 0x10) >> 4);
+ }
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_acc.c b/drivers/staging/media/atomisp/pci/atomisp_acc.c
+index 9a1751895ab03..28cb271663c47 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_acc.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_acc.c
+@@ -439,6 +439,18 @@ int atomisp_acc_s_mapped_arg(struct atomisp_sub_device *asd,
+ 	return 0;
+ }
+ 
++static void atomisp_acc_unload_some_extensions(struct atomisp_sub_device *asd,
++					      int i,
++					      struct atomisp_acc_fw *acc_fw)
++{
++	while (--i >= 0) {
++		if (acc_fw->flags & acc_flag_to_pipe[i].flag) {
++			atomisp_css_unload_acc_extension(asd, acc_fw->fw,
++							 acc_flag_to_pipe[i].pipe_id);
++		}
++	}
++}
++
+ /*
+  * Appends the loaded acceleration binary extensions to the
+  * current ISP mode. Must be called just before sh_css_start().
+@@ -479,16 +491,20 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ 								     acc_fw->fw,
+ 								     acc_flag_to_pipe[i].pipe_id,
+ 								     acc_fw->type);
+-				if (ret)
++				if (ret) {
++					atomisp_acc_unload_some_extensions(asd, i, acc_fw);
+ 					goto error;
++				}
+ 
+ 				ext_loaded = true;
+ 			}
+ 		}
+ 
+ 		ret = atomisp_css_set_acc_parameters(acc_fw);
+-		if (ret < 0)
++		if (ret < 0) {
++			atomisp_acc_unload_some_extensions(asd, i, acc_fw);
+ 			goto error;
++		}
+ 	}
+ 
+ 	if (!ext_loaded)
+@@ -497,6 +513,7 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ 	ret = atomisp_css_update_stream(asd);
+ 	if (ret) {
+ 		dev_err(isp->dev, "%s: update stream failed.\n", __func__);
++		atomisp_acc_unload_extensions(asd);
+ 		goto error;
+ 	}
+ 
+@@ -504,13 +521,6 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ 	return 0;
+ 
+ error:
+-	while (--i >= 0) {
+-		if (acc_fw->flags & acc_flag_to_pipe[i].flag) {
+-			atomisp_css_unload_acc_extension(asd, acc_fw->fw,
+-							 acc_flag_to_pipe[i].pipe_id);
+-		}
+-	}
+-
+ 	list_for_each_entry_continue_reverse(acc_fw, &asd->acc.fw, list) {
+ 		if (acc_fw->type != ATOMISP_ACC_FW_LOAD_TYPE_OUTPUT &&
+ 		    acc_fw->type != ATOMISP_ACC_FW_LOAD_TYPE_VIEWFINDER)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index 62dc06e224765..cd0a771454da4 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -729,6 +729,21 @@ static int axp_regulator_set(struct device *dev, struct gmin_subdev *gs,
+ 	return 0;
+ }
+ 
++/*
++ * Some boards contain a hw-bug where turning eldo2 back on after having turned
++ * it off causes the CPLM3218 ambient-light-sensor on the image-sensor's I2C bus
++ * to crash, hanging the bus. Do not turn eldo2 off on these systems.
++ */
++static const struct dmi_system_id axp_leave_eldo2_on_ids[] = {
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TrekStor"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "SurfTab duo W1 10.1 (VT4)"),
++		},
++	},
++	{ }
++};
++
+ static int axp_v1p8_on(struct device *dev, struct gmin_subdev *gs)
+ {
+ 	int ret;
+@@ -763,6 +778,9 @@ static int axp_v1p8_off(struct device *dev, struct gmin_subdev *gs)
+ 	if (ret)
+ 		return ret;
+ 
++	if (dmi_check_system(axp_leave_eldo2_on_ids))
++		return 0;
++
+ 	ret = axp_regulator_set(dev, gs, gs->eldo2_sel_reg, gs->eldo2_1p8v,
+ 				ELDO_CTRL_REG, gs->eldo2_ctrl_shift, false);
+ 	return ret;
+diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+index 6a5ee46070898..c1cda16f2dc01 100644
+--- a/drivers/staging/media/atomisp/pci/hmm/hmm.c
++++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+@@ -39,7 +39,7 @@
+ struct hmm_bo_device bo_device;
+ struct hmm_pool	dynamic_pool;
+ struct hmm_pool	reserved_pool;
+-static ia_css_ptr dummy_ptr;
++static ia_css_ptr dummy_ptr = mmgr_EXCEPTION;
+ static bool hmm_initialized;
+ struct _hmm_mem_stat hmm_mem_stat;
+ 
+@@ -209,7 +209,7 @@ int hmm_init(void)
+ 
+ void hmm_cleanup(void)
+ {
+-	if (!dummy_ptr)
++	if (dummy_ptr == mmgr_EXCEPTION)
+ 		return;
+ 	sysfs_remove_group(&atomisp_dev->kobj, atomisp_attribute_group);
+ 
+@@ -288,7 +288,8 @@ void hmm_free(ia_css_ptr virt)
+ 
+ 	dev_dbg(atomisp_dev, "%s: free 0x%08x\n", __func__, virt);
+ 
+-	WARN_ON(!virt);
++	if (WARN_ON(virt == mmgr_EXCEPTION))
++		return;
+ 
+ 	bo = hmm_bo_device_search_start(&bo_device, (unsigned int)virt);
+ 
+diff --git a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+index 9cd713c02a455..686d813f5c626 100644
+--- a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
++++ b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+@@ -23,7 +23,7 @@ static void hantro_h1_set_src_img_ctrl(struct hantro_dev *vpu,
+ 
+ 	reg = H1_REG_IN_IMG_CTRL_ROW_LEN(pix_fmt->width)
+ 		| H1_REG_IN_IMG_CTRL_OVRFLR_D4(0)
+-		| H1_REG_IN_IMG_CTRL_OVRFLB_D4(0)
++		| H1_REG_IN_IMG_CTRL_OVRFLB(0)
+ 		| H1_REG_IN_IMG_CTRL_FMT(ctx->vpu_src_fmt->enc_fmt);
+ 	vepu_write_relaxed(vpu, reg, H1_REG_IN_IMG_CTRL);
+ }
+diff --git a/drivers/staging/media/hantro/hantro_h1_regs.h b/drivers/staging/media/hantro/hantro_h1_regs.h
+index d6e9825bb5c7b..30e7e7b920b55 100644
+--- a/drivers/staging/media/hantro/hantro_h1_regs.h
++++ b/drivers/staging/media/hantro/hantro_h1_regs.h
+@@ -47,7 +47,7 @@
+ #define H1_REG_IN_IMG_CTRL				0x03c
+ #define     H1_REG_IN_IMG_CTRL_ROW_LEN(x)		((x) << 12)
+ #define     H1_REG_IN_IMG_CTRL_OVRFLR_D4(x)		((x) << 10)
+-#define     H1_REG_IN_IMG_CTRL_OVRFLB_D4(x)		((x) << 6)
++#define     H1_REG_IN_IMG_CTRL_OVRFLB(x)		((x) << 6)
+ #define     H1_REG_IN_IMG_CTRL_FMT(x)			((x) << 2)
+ #define H1_REG_ENC_CTRL0				0x040
+ #define    H1_REG_ENC_CTRL0_INIT_QP(x)			((x) << 26)
+diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
+index 2b73fa55c938b..9ea723bb5f209 100644
+--- a/drivers/staging/media/imx/imx7-mipi-csis.c
++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
+@@ -32,7 +32,6 @@
+ #include <media/v4l2-subdev.h>
+ 
+ #define CSIS_DRIVER_NAME			"imx7-mipi-csis"
+-#define CSIS_SUBDEV_NAME			CSIS_DRIVER_NAME
+ 
+ #define CSIS_PAD_SINK				0
+ #define CSIS_PAD_SOURCE				1
+@@ -311,7 +310,6 @@ struct csi_state {
+ 	struct reset_control *mrst;
+ 	struct regulator *mipi_phy_regulator;
+ 	const struct mipi_csis_info *info;
+-	u8 index;
+ 
+ 	struct v4l2_subdev sd;
+ 	struct media_pad pads[CSIS_PADS_NUM];
+@@ -1303,8 +1301,8 @@ static int mipi_csis_subdev_init(struct csi_state *state)
+ 
+ 	v4l2_subdev_init(sd, &mipi_csis_subdev_ops);
+ 	sd->owner = THIS_MODULE;
+-	snprintf(sd->name, sizeof(sd->name), "%s.%d",
+-		 CSIS_SUBDEV_NAME, state->index);
++	snprintf(sd->name, sizeof(sd->name), "csis-%s",
++		 dev_name(state->dev));
+ 
+ 	sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+ 	sd->ctrl_handler = NULL;
+diff --git a/drivers/staging/media/imx/imx8mq-mipi-csi2.c b/drivers/staging/media/imx/imx8mq-mipi-csi2.c
+index 7adbdd14daa93..3b9fa75efac6b 100644
+--- a/drivers/staging/media/imx/imx8mq-mipi-csi2.c
++++ b/drivers/staging/media/imx/imx8mq-mipi-csi2.c
+@@ -398,9 +398,6 @@ static int imx8mq_mipi_csi_s_stream(struct v4l2_subdev *sd, int enable)
+ 	struct csi_state *state = mipi_sd_to_csi2_state(sd);
+ 	int ret = 0;
+ 
+-	imx8mq_mipi_csi_write(state, CSI2RX_IRQ_MASK,
+-			      CSI2RX_IRQ_MASK_ULPS_STATUS_CHANGE);
+-
+ 	if (enable) {
+ 		ret = pm_runtime_resume_and_get(state->dev);
+ 		if (ret < 0)
+@@ -696,7 +693,7 @@ err_parse:
+  * Suspend/resume
+  */
+ 
+-static int imx8mq_mipi_csi_pm_suspend(struct device *dev, bool runtime)
++static int imx8mq_mipi_csi_pm_suspend(struct device *dev)
+ {
+ 	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ 	struct csi_state *state = mipi_sd_to_csi2_state(sd);
+@@ -708,36 +705,21 @@ static int imx8mq_mipi_csi_pm_suspend(struct device *dev, bool runtime)
+ 		imx8mq_mipi_csi_stop_stream(state);
+ 		imx8mq_mipi_csi_clk_disable(state);
+ 		state->state &= ~ST_POWERED;
+-		if (!runtime)
+-			state->state |= ST_SUSPENDED;
+ 	}
+ 
+ 	mutex_unlock(&state->lock);
+ 
+-	ret = icc_set_bw(state->icc_path, 0, 0);
+-	if (ret)
+-		dev_err(dev, "icc_set_bw failed with %d\n", ret);
+-
+ 	return ret ? -EAGAIN : 0;
+ }
+ 
+-static int imx8mq_mipi_csi_pm_resume(struct device *dev, bool runtime)
++static int imx8mq_mipi_csi_pm_resume(struct device *dev)
+ {
+ 	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ 	struct csi_state *state = mipi_sd_to_csi2_state(sd);
+ 	int ret = 0;
+ 
+-	ret = icc_set_bw(state->icc_path, 0, state->icc_path_bw);
+-	if (ret) {
+-		dev_err(dev, "icc_set_bw failed with %d\n", ret);
+-		return ret;
+-	}
+-
+ 	mutex_lock(&state->lock);
+ 
+-	if (!runtime && !(state->state & ST_SUSPENDED))
+-		goto unlock;
+-
+ 	if (!(state->state & ST_POWERED)) {
+ 		state->state |= ST_POWERED;
+ 		ret = imx8mq_mipi_csi_clk_enable(state);
+@@ -758,22 +740,60 @@ unlock:
+ 
+ static int __maybe_unused imx8mq_mipi_csi_suspend(struct device *dev)
+ {
+-	return imx8mq_mipi_csi_pm_suspend(dev, false);
++	struct v4l2_subdev *sd = dev_get_drvdata(dev);
++	struct csi_state *state = mipi_sd_to_csi2_state(sd);
++	int ret;
++
++	ret = imx8mq_mipi_csi_pm_suspend(dev);
++	if (ret)
++		return ret;
++
++	state->state |= ST_SUSPENDED;
++
++	return ret;
+ }
+ 
+ static int __maybe_unused imx8mq_mipi_csi_resume(struct device *dev)
+ {
+-	return imx8mq_mipi_csi_pm_resume(dev, false);
++	struct v4l2_subdev *sd = dev_get_drvdata(dev);
++	struct csi_state *state = mipi_sd_to_csi2_state(sd);
++
++	if (!(state->state & ST_SUSPENDED))
++		return 0;
++
++	return imx8mq_mipi_csi_pm_resume(dev);
+ }
+ 
+ static int __maybe_unused imx8mq_mipi_csi_runtime_suspend(struct device *dev)
+ {
+-	return imx8mq_mipi_csi_pm_suspend(dev, true);
++	struct v4l2_subdev *sd = dev_get_drvdata(dev);
++	struct csi_state *state = mipi_sd_to_csi2_state(sd);
++	int ret;
++
++	ret = imx8mq_mipi_csi_pm_suspend(dev);
++	if (ret)
++		return ret;
++
++	ret = icc_set_bw(state->icc_path, 0, 0);
++	if (ret)
++		dev_err(dev, "icc_set_bw failed with %d\n", ret);
++
++	return ret;
+ }
+ 
+ static int __maybe_unused imx8mq_mipi_csi_runtime_resume(struct device *dev)
+ {
+-	return imx8mq_mipi_csi_pm_resume(dev, true);
++	struct v4l2_subdev *sd = dev_get_drvdata(dev);
++	struct csi_state *state = mipi_sd_to_csi2_state(sd);
++	int ret;
++
++	ret = icc_set_bw(state->icc_path, 0, state->icc_path_bw);
++	if (ret) {
++		dev_err(dev, "icc_set_bw failed with %d\n", ret);
++		return ret;
++	}
++
++	return imx8mq_mipi_csi_pm_resume(dev);
+ }
+ 
+ static const struct dev_pm_ops imx8mq_mipi_csi_pm_ops = {
+@@ -921,7 +941,7 @@ static int imx8mq_mipi_csi_probe(struct platform_device *pdev)
+ 	/* Enable runtime PM. */
+ 	pm_runtime_enable(dev);
+ 	if (!pm_runtime_enabled(dev)) {
+-		ret = imx8mq_mipi_csi_pm_resume(dev, true);
++		ret = imx8mq_mipi_csi_runtime_resume(dev);
+ 		if (ret < 0)
+ 			goto icc;
+ 	}
+@@ -934,7 +954,7 @@ static int imx8mq_mipi_csi_probe(struct platform_device *pdev)
+ 
+ cleanup:
+ 	pm_runtime_disable(&pdev->dev);
+-	imx8mq_mipi_csi_pm_suspend(&pdev->dev, true);
++	imx8mq_mipi_csi_runtime_suspend(&pdev->dev);
+ 
+ 	media_entity_cleanup(&state->sd.entity);
+ 	v4l2_async_nf_unregister(&state->notifier);
+@@ -958,7 +978,7 @@ static int imx8mq_mipi_csi_remove(struct platform_device *pdev)
+ 	v4l2_async_unregister_subdev(&state->sd);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+-	imx8mq_mipi_csi_pm_suspend(&pdev->dev, true);
++	imx8mq_mipi_csi_runtime_suspend(&pdev->dev);
+ 	media_entity_cleanup(&state->sd.entity);
+ 	mutex_destroy(&state->lock);
+ 	pm_runtime_set_suspended(&pdev->dev);
+diff --git a/drivers/staging/media/meson/vdec/esparser.c b/drivers/staging/media/meson/vdec/esparser.c
+index db7022707ff8d..86ccc8937afca 100644
+--- a/drivers/staging/media/meson/vdec/esparser.c
++++ b/drivers/staging/media/meson/vdec/esparser.c
+@@ -328,7 +328,12 @@ esparser_queue(struct amvdec_session *sess, struct vb2_v4l2_buffer *vbuf)
+ 
+ 	offset = esparser_get_offset(sess);
+ 
+-	amvdec_add_ts(sess, vb->timestamp, vbuf->timecode, offset, vbuf->flags);
++	ret = amvdec_add_ts(sess, vb->timestamp, vbuf->timecode, offset, vbuf->flags);
++	if (ret) {
++		v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
++		return ret;
++	}
++
+ 	dev_dbg(core->dev, "esparser: ts = %llu pld_size = %u offset = %08X flags = %08X\n",
+ 		vb->timestamp, payload_size, offset, vbuf->flags);
+ 
+diff --git a/drivers/staging/media/meson/vdec/vdec_helpers.c b/drivers/staging/media/meson/vdec/vdec_helpers.c
+index b9125c295d1d3..06fd66539797a 100644
+--- a/drivers/staging/media/meson/vdec/vdec_helpers.c
++++ b/drivers/staging/media/meson/vdec/vdec_helpers.c
+@@ -227,13 +227,16 @@ int amvdec_set_canvases(struct amvdec_session *sess,
+ }
+ EXPORT_SYMBOL_GPL(amvdec_set_canvases);
+ 
+-void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+-		   struct v4l2_timecode tc, u32 offset, u32 vbuf_flags)
++int amvdec_add_ts(struct amvdec_session *sess, u64 ts,
++		  struct v4l2_timecode tc, u32 offset, u32 vbuf_flags)
+ {
+ 	struct amvdec_timestamp *new_ts;
+ 	unsigned long flags;
+ 
+ 	new_ts = kzalloc(sizeof(*new_ts), GFP_KERNEL);
++	if (!new_ts)
++		return -ENOMEM;
++
+ 	new_ts->ts = ts;
+ 	new_ts->tc = tc;
+ 	new_ts->offset = offset;
+@@ -242,6 +245,7 @@ void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+ 	spin_lock_irqsave(&sess->ts_spinlock, flags);
+ 	list_add_tail(&new_ts->list, &sess->timestamps);
+ 	spin_unlock_irqrestore(&sess->ts_spinlock, flags);
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(amvdec_add_ts);
+ 
+diff --git a/drivers/staging/media/meson/vdec/vdec_helpers.h b/drivers/staging/media/meson/vdec/vdec_helpers.h
+index 88137d15aa3ad..4bf3e61d081b3 100644
+--- a/drivers/staging/media/meson/vdec/vdec_helpers.h
++++ b/drivers/staging/media/meson/vdec/vdec_helpers.h
+@@ -56,8 +56,8 @@ void amvdec_dst_buf_done_offset(struct amvdec_session *sess,
+  * @offset: offset in the VIFIFO where the associated packet was written
+  * @flags: the vb2_v4l2_buffer flags
+  */
+-void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+-		   struct v4l2_timecode tc, u32 offset, u32 flags);
++int amvdec_add_ts(struct amvdec_session *sess, u64 ts,
++		  struct v4l2_timecode tc, u32 offset, u32 flags);
+ void amvdec_remove_ts(struct amvdec_session *sess, u64 ts);
+ 
+ /**
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+index b4173a8926d69..d8fb93035470e 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+@@ -38,7 +38,7 @@ struct cedrus_h264_sram_ref_pic {
+ 
+ #define CEDRUS_H264_FRAME_NUM		18
+ 
+-#define CEDRUS_NEIGHBOR_INFO_BUF_SIZE	(16 * SZ_1K)
++#define CEDRUS_NEIGHBOR_INFO_BUF_SIZE	(32 * SZ_1K)
+ #define CEDRUS_MIN_PIC_INFO_BUF_SIZE       (130 * SZ_1K)
+ 
+ static void cedrus_h264_write_sram(struct cedrus_dev *dev,
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index 8829a7bab07ec..ffade5cbd2e40 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -23,7 +23,7 @@
+  * Subsequent BSP implementations seem to double the neighbor info buffer size
+  * for the H6 SoC, which may be related to 10 bit H265 support.
+  */
+-#define CEDRUS_H265_NEIGHBOR_INFO_BUF_SIZE	(397 * SZ_1K)
++#define CEDRUS_H265_NEIGHBOR_INFO_BUF_SIZE	(794 * SZ_1K)
+ #define CEDRUS_H265_ENTRY_POINTS_BUF_SIZE	(4 * SZ_1K)
+ #define CEDRUS_H265_MV_COL_BUF_UNIT_CTB_SIZE	160
+ 
+diff --git a/drivers/staging/media/zoran/zoran.h b/drivers/staging/media/zoran/zoran.h
+index b1ad2a2b914cd..50d5a7acfab6c 100644
+--- a/drivers/staging/media/zoran/zoran.h
++++ b/drivers/staging/media/zoran/zoran.h
+@@ -313,6 +313,6 @@ static inline struct zoran *to_zoran(struct v4l2_device *v4l2_dev)
+ 
+ #endif
+ 
+-int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq);
++int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq, int dir);
+ void zoran_queue_exit(struct zoran *zr);
+ int zr_set_buf(struct zoran *zr);
+diff --git a/drivers/staging/media/zoran/zoran_card.c b/drivers/staging/media/zoran/zoran_card.c
+index f259585b06897..11d415c0c05d2 100644
+--- a/drivers/staging/media/zoran/zoran_card.c
++++ b/drivers/staging/media/zoran/zoran_card.c
+@@ -803,6 +803,52 @@ int zoran_check_jpg_settings(struct zoran *zr,
+ 	return 0;
+ }
+ 
++static int zoran_init_video_device(struct zoran *zr, struct video_device *video_dev, int dir)
++{
++	int err;
++
++	/* Now add the template and register the device unit. */
++	*video_dev = zoran_template;
++	video_dev->v4l2_dev = &zr->v4l2_dev;
++	video_dev->lock = &zr->lock;
++	video_dev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_READWRITE | dir;
++
++	strscpy(video_dev->name, ZR_DEVNAME(zr), sizeof(video_dev->name));
++	/*
++	 * It's not a mem2mem device, but you can both capture and output from one and the same
++	 * device. This should really be split up into two device nodes, but that's a job for
++	 * another day.
++	 */
++	video_dev->vfl_dir = VFL_DIR_M2M;
++	zoran_queue_init(zr, &zr->vq, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++
++	err = video_register_device(video_dev, VFL_TYPE_VIDEO, video_nr[zr->id]);
++	if (err < 0)
++		return err;
++	video_set_drvdata(video_dev, zr);
++	return 0;
++}
++
++static void zoran_exit_video_devices(struct zoran *zr)
++{
++	video_unregister_device(zr->video_dev);
++	kfree(zr->video_dev);
++}
++
++static int zoran_init_video_devices(struct zoran *zr)
++{
++	int err;
++
++	zr->video_dev = video_device_alloc();
++	if (!zr->video_dev)
++		return -ENOMEM;
++
++	err = zoran_init_video_device(zr, zr->video_dev, V4L2_CAP_VIDEO_CAPTURE);
++	if (err)
++		kfree(zr->video_dev);
++	return err;
++}
++
+ void zoran_open_init_params(struct zoran *zr)
+ {
+ 	int i;
+@@ -874,17 +920,11 @@ static int zr36057_init(struct zoran *zr)
+ 	zoran_open_init_params(zr);
+ 
+ 	/* allocate memory *before* doing anything to the hardware in case allocation fails */
+-	zr->video_dev = video_device_alloc();
+-	if (!zr->video_dev) {
+-		err = -ENOMEM;
+-		goto exit;
+-	}
+ 	zr->stat_com = dma_alloc_coherent(&zr->pci_dev->dev,
+ 					  BUZ_NUM_STAT_COM * sizeof(u32),
+ 					  &zr->p_sc, GFP_KERNEL);
+ 	if (!zr->stat_com) {
+-		err = -ENOMEM;
+-		goto exit_video;
++		return -ENOMEM;
+ 	}
+ 	for (j = 0; j < BUZ_NUM_STAT_COM; j++)
+ 		zr->stat_com[j] = cpu_to_le32(1); /* mark as unavailable to zr36057 */
+@@ -897,26 +937,9 @@ static int zr36057_init(struct zoran *zr)
+ 		goto exit_statcom;
+ 	}
+ 
+-	/* Now add the template and register the device unit. */
+-	*zr->video_dev = zoran_template;
+-	zr->video_dev->v4l2_dev = &zr->v4l2_dev;
+-	zr->video_dev->lock = &zr->lock;
+-	zr->video_dev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_VIDEO_CAPTURE;
+-
+-	strscpy(zr->video_dev->name, ZR_DEVNAME(zr), sizeof(zr->video_dev->name));
+-	/*
+-	 * It's not a mem2mem device, but you can both capture and output from one and the same
+-	 * device. This should really be split up into two device nodes, but that's a job for
+-	 * another day.
+-	 */
+-	zr->video_dev->vfl_dir = VFL_DIR_M2M;
+-
+-	zoran_queue_init(zr, &zr->vq);
+-
+-	err = video_register_device(zr->video_dev, VFL_TYPE_VIDEO, video_nr[zr->id]);
+-	if (err < 0)
++	err = zoran_init_video_devices(zr);
++	if (err)
+ 		goto exit_statcomb;
+-	video_set_drvdata(zr->video_dev, zr);
+ 
+ 	zoran_init_hardware(zr);
+ 	if (!pass_through) {
+@@ -931,9 +954,6 @@ exit_statcomb:
+ 	dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32) * 2, zr->stat_comb, zr->p_scb);
+ exit_statcom:
+ 	dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32), zr->stat_com, zr->p_sc);
+-exit_video:
+-	kfree(zr->video_dev);
+-exit:
+ 	return err;
+ }
+ 
+@@ -965,7 +985,7 @@ static void zoran_remove(struct pci_dev *pdev)
+ 	dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32) * 2, zr->stat_comb, zr->p_scb);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(zr->pci_dev);
+-	video_unregister_device(zr->video_dev);
++	zoran_exit_video_devices(zr);
+ exit_free:
+ 	v4l2_ctrl_handler_free(&zr->hdl);
+ 	v4l2_device_unregister(&zr->v4l2_dev);
+@@ -1069,8 +1089,10 @@ static int zoran_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ 	if (err)
+-		return -ENODEV;
+-	vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
++		return err;
++	err = vb2_dma_contig_set_max_seg_size(&pdev->dev, U32_MAX);
++	if (err)
++		return err;
+ 
+ 	nr = zoran_num++;
+ 	if (nr >= BUZ_MAX) {
+diff --git a/drivers/staging/media/zoran/zoran_device.c b/drivers/staging/media/zoran/zoran_device.c
+index 5b12a730a2290..fb1f0465ca87f 100644
+--- a/drivers/staging/media/zoran/zoran_device.c
++++ b/drivers/staging/media/zoran/zoran_device.c
+@@ -814,7 +814,7 @@ static void zoran_reap_stat_com(struct zoran *zr)
+ 		if (zr->jpg_settings.tmp_dcm == 1)
+ 			i = (zr->jpg_dma_tail - zr->jpg_err_shift) & BUZ_MASK_STAT_COM;
+ 		else
+-			i = ((zr->jpg_dma_tail - zr->jpg_err_shift) & 1) * 2 + 1;
++			i = ((zr->jpg_dma_tail - zr->jpg_err_shift) & 1) * 2;
+ 
+ 		stat_com = le32_to_cpu(zr->stat_com[i]);
+ 		if ((stat_com & 1) == 0) {
+@@ -826,6 +826,11 @@ static void zoran_reap_stat_com(struct zoran *zr)
+ 		size = (stat_com & GENMASK(22, 1)) >> 1;
+ 
+ 		buf = zr->inuse[i];
++		if (!buf) {
++			spin_unlock_irqrestore(&zr->queued_bufs_lock, flags);
++			pci_err(zr->pci_dev, "No buffer at slot %d\n", i);
++			return;
++		}
+ 		buf->vbuf.vb2_buf.timestamp = ktime_get_ns();
+ 
+ 		if (zr->codec_mode == BUZ_MODE_MOTION_COMPRESS) {
+diff --git a/drivers/staging/media/zoran/zoran_driver.c b/drivers/staging/media/zoran/zoran_driver.c
+index 46382e43f1bf7..84665637ebb79 100644
+--- a/drivers/staging/media/zoran/zoran_driver.c
++++ b/drivers/staging/media/zoran/zoran_driver.c
+@@ -255,8 +255,6 @@ static int zoran_querycap(struct file *file, void *__fh, struct v4l2_capability
+ 	strscpy(cap->card, ZR_DEVNAME(zr), sizeof(cap->card));
+ 	strscpy(cap->driver, "zoran", sizeof(cap->driver));
+ 	snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s", pci_name(zr->pci_dev));
+-	cap->device_caps = zr->video_dev->device_caps;
+-	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+ 	return 0;
+ }
+ 
+@@ -582,6 +580,9 @@ static int zoran_s_std(struct file *file, void *__fh, v4l2_std_id std)
+ 	struct zoran *zr = video_drvdata(file);
+ 	int res = 0;
+ 
++	if (zr->norm == std)
++		return 0;
++
+ 	if (zr->running != ZORAN_MAP_MODE_NONE)
+ 		return -EBUSY;
+ 
+@@ -739,6 +740,7 @@ static int zoran_g_parm(struct file *file, void *priv, struct v4l2_streamparm *p
+ 	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ 		return -EINVAL;
+ 
++	parm->parm.capture.readbuffers = 9;
+ 	return 0;
+ }
+ 
+@@ -869,6 +871,10 @@ int zr_set_buf(struct zoran *zr)
+ 		vbuf = &buf->vbuf;
+ 
+ 		buf->vbuf.field = V4L2_FIELD_INTERLACED;
++		if (BUZ_MAX_HEIGHT < (zr->v4l_settings.height * 2))
++			buf->vbuf.field = V4L2_FIELD_INTERLACED;
++		else
++			buf->vbuf.field = V4L2_FIELD_TOP;
+ 		vb2_set_plane_payload(&buf->vbuf.vb2_buf, 0, zr->buffer_size);
+ 		vb2_buffer_done(&buf->vbuf.vb2_buf, VB2_BUF_STATE_DONE);
+ 		zr->inuse[0] = NULL;
+@@ -928,6 +934,7 @@ static int zr_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 		zr->stat_com[j] = cpu_to_le32(1);
+ 		zr->inuse[j] = NULL;
+ 	}
++	zr->vbseq = 0;
+ 
+ 	if (zr->map_mode != ZORAN_MAP_MODE_RAW) {
+ 		pci_info(zr->pci_dev, "START JPG\n");
+@@ -1008,7 +1015,7 @@ static const struct vb2_ops zr_video_qops = {
+ 	.wait_finish            = vb2_ops_wait_finish,
+ };
+ 
+-int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq)
++int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq, int dir)
+ {
+ 	int err;
+ 
+@@ -1016,8 +1023,9 @@ int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq)
+ 	INIT_LIST_HEAD(&zr->queued_bufs);
+ 
+ 	vq->dev = &zr->pci_dev->dev;
+-	vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+-	vq->io_modes = VB2_USERPTR | VB2_DMABUF | VB2_MMAP | VB2_READ | VB2_WRITE;
++	vq->type = dir;
++
++	vq->io_modes = VB2_DMABUF | VB2_MMAP | VB2_READ | VB2_WRITE;
+ 	vq->drv_priv = zr;
+ 	vq->buf_struct_size = sizeof(struct zr_buffer);
+ 	vq->ops = &zr_video_qops;
+diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
+index e38a083811e54..5ae94b1ad5998 100644
+--- a/drivers/staging/mt7621-dts/gbpc1.dts
++++ b/drivers/staging/mt7621-dts/gbpc1.dts
+@@ -12,7 +12,8 @@
+ 
+ 	memory@0 {
+ 		device_type = "memory";
+-		reg = <0x0 0x1c000000>, <0x20000000 0x4000000>;
++		reg = <0x00000000 0x1c000000>,
++		      <0x20000000 0x04000000>;
+ 	};
+ 
+ 	chosen {
+@@ -38,24 +39,16 @@
+ 	gpio-leds {
+ 		compatible = "gpio-leds";
+ 
+-		system {
+-			label = "gb-pc1:green:system";
++		power {
++			label = "green:power";
+ 			gpios = <&gpio 6 GPIO_ACTIVE_LOW>;
++			linux,default-trigger = "default-on";
+ 		};
+ 
+-		status {
+-			label = "gb-pc1:green:status";
++		system {
++			label = "green:system";
+ 			gpios = <&gpio 8 GPIO_ACTIVE_LOW>;
+-		};
+-
+-		lan1 {
+-			label = "gb-pc1:green:lan1";
+-			gpios = <&gpio 24 GPIO_ACTIVE_LOW>;
+-		};
+-
+-		lan2 {
+-			label = "gb-pc1:green:lan2";
+-			gpios = <&gpio 25 GPIO_ACTIVE_LOW>;
++			linux,default-trigger = "disk-activity";
+ 		};
+ 	};
+ };
+@@ -95,9 +88,8 @@
+ 
+ 		partition@50000 {
+ 			label = "firmware";
+-			reg = <0x50000 0x1FB0000>;
++			reg = <0x50000 0x1fb0000>;
+ 		};
+-
+ 	};
+ };
+ 
+@@ -106,9 +98,12 @@
+ };
+ 
+ &pinctrl {
+-	state_default: pinctrl0 {
+-		default_gpio: gpio {
+-			groups = "wdt", "rgmii2", "uart3";
++	pinctrl-names = "default";
++	pinctrl-0 = <&state_default>;
++
++	state_default: state-default {
++		gpio-pinmux {
++			groups = "rgmii2", "uart3", "wdt";
+ 			function = "gpio";
+ 		};
+ 	};
+@@ -117,12 +112,13 @@
+ &switch0 {
+ 	ports {
+ 		port@0 {
++			status = "okay";
+ 			label = "ethblack";
+-			status = "ok";
+ 		};
++
+ 		port@4 {
++			status = "okay";
+ 			label = "ethblue";
+-			status = "ok";
+ 		};
+ 	};
+ };
+diff --git a/drivers/staging/mt7621-dts/gbpc2.dts b/drivers/staging/mt7621-dts/gbpc2.dts
+index 6fe603c7711d7..a7fce8de61472 100644
+--- a/drivers/staging/mt7621-dts/gbpc2.dts
++++ b/drivers/staging/mt7621-dts/gbpc2.dts
+@@ -1,22 +1,122 @@
+ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ /dts-v1/;
+ 
+-#include "gbpc1.dts"
++#include "mt7621.dtsi"
++
++#include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
+ 
+ / {
+ 	compatible = "gnubee,gb-pc2", "mediatek,mt7621-soc";
+ 	model = "GB-PC2";
++
++	memory@0 {
++		device_type = "memory";
++		reg = <0x00000000 0x1c000000>,
++		      <0x20000000 0x04000000>;
++	};
++
++	chosen {
++		bootargs = "console=ttyS0,57600";
++	};
++
++	palmbus: palmbus@1e000000 {
++		i2c@900 {
++			status = "okay";
++		};
++	};
++
++	gpio-keys {
++		compatible = "gpio-keys";
++
++		reset {
++			label = "reset";
++			gpios = <&gpio 18 GPIO_ACTIVE_HIGH>;
++			linux,code = <KEY_RESTART>;
++		};
++	};
++};
++
++&sdhci {
++	status = "okay";
++};
++
++&spi0 {
++	status = "okay";
++
++	m25p80@0 {
++		#address-cells = <1>;
++		#size-cells = <1>;
++		compatible = "jedec,spi-nor";
++		reg = <0>;
++		spi-max-frequency = <50000000>;
++		broken-flash-reset;
++
++		partition@0 {
++			label = "u-boot";
++			reg = <0x0 0x30000>;
++			read-only;
++		};
++
++		partition@30000 {
++			label = "u-boot-env";
++			reg = <0x30000 0x10000>;
++			read-only;
++		};
++
++		factory: partition@40000 {
++			label = "factory";
++			reg = <0x40000 0x10000>;
++			read-only;
++		};
++
++		partition@50000 {
++			label = "firmware";
++			reg = <0x50000 0x1fb0000>;
++		};
++	};
+ };
+ 
+-&default_gpio {
+-	groups = "wdt", "uart3";
+-	function = "gpio";
++&pcie {
++	status = "okay";
+ };
+ 
+-&gmac1 {
+-	status = "ok";
++&pinctrl {
++	pinctrl-names = "default";
++	pinctrl-0 = <&state_default>;
++
++	state_default: state-default {
++		gpio-pinmux {
++			groups = "wdt";
++			function = "gpio";
++		};
++	};
+ };
+ 
+-&phy_external {
+-	status = "ok";
++&ethernet {
++	gmac1: mac@1 {
++		status = "okay";
++		phy-handle = <&ethphy7>;
++	};
++
++	mdio-bus {
++		ethphy7: ethernet-phy@7 {
++			reg = <7>;
++			phy-mode = "rgmii-rxid";
++		};
++	};
++};
++
++&switch0 {
++	ports {
++		port@0 {
++			status = "okay";
++			label = "ethblack";
++		};
++
++		port@4 {
++			status = "okay";
++			label = "ethblue";
++		};
++	};
+ };
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index 6d158e4f4b8c7..263ef53e1e92c 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -44,9 +44,9 @@
+ 		regulator-max-microvolt = <3300000>;
+ 		enable-active-high;
+ 		regulator-always-on;
+-	  };
++	};
+ 
+-	  mmc_fixed_1v8_io: fixedregulator@1 {
++	mmc_fixed_1v8_io: fixedregulator@1 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "mmc_io";
+ 		regulator-min-microvolt = <1800000>;
+@@ -363,37 +363,32 @@
+ 
+ 		mediatek,ethsys = <&sysc>;
+ 
++		pinctrl-names = "default";
++		pinctrl-0 = <&mdio_pins>, <&rgmii1_pins>, <&rgmii2_pins>;
+ 
+ 		gmac0: mac@0 {
+ 			compatible = "mediatek,eth-mac";
+ 			reg = <0>;
+ 			phy-mode = "rgmii";
++
+ 			fixed-link {
+ 				speed = <1000>;
+ 				full-duplex;
+ 				pause;
+ 			};
+ 		};
++
+ 		gmac1: mac@1 {
+ 			compatible = "mediatek,eth-mac";
+ 			reg = <1>;
+ 			status = "off";
+ 			phy-mode = "rgmii-rxid";
+-			phy-handle = <&phy_external>;
+ 		};
++
+ 		mdio-bus {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			phy_external: ethernet-phy@5 {
+-				status = "off";
+-				reg = <5>;
+-				phy-mode = "rgmii-rxid";
+-
+-				pinctrl-names = "default";
+-				pinctrl-0 = <&rgmii2_pins>;
+-			};
+-
+ 			switch0: switch0@0 {
+ 				compatible = "mediatek,mt7621";
+ 				#address-cells = <1>;
+@@ -411,36 +406,43 @@
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 					reg = <0>;
++
+ 					port@0 {
+ 						status = "off";
+ 						reg = <0>;
+ 						label = "lan0";
+ 					};
++
+ 					port@1 {
+ 						status = "off";
+ 						reg = <1>;
+ 						label = "lan1";
+ 					};
++
+ 					port@2 {
+ 						status = "off";
+ 						reg = <2>;
+ 						label = "lan2";
+ 					};
++
+ 					port@3 {
+ 						status = "off";
+ 						reg = <3>;
+ 						label = "lan3";
+ 					};
++
+ 					port@4 {
+ 						status = "off";
+ 						reg = <4>;
+ 						label = "lan4";
+ 					};
++
+ 					port@6 {
+ 						reg = <6>;
+ 						label = "cpu";
+ 						ethernet = <&gmac0>;
+ 						phy-mode = "trgmii";
++
+ 						fixed-link {
+ 							speed = <1000>;
+ 							full-duplex;
+diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
+index 9873bb2a9ee4f..113a3efd12e95 100644
+--- a/drivers/staging/qlge/qlge_main.c
++++ b/drivers/staging/qlge/qlge_main.c
+@@ -4605,14 +4605,12 @@ static int qlge_probe(struct pci_dev *pdev,
+ 	err = register_netdev(ndev);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "net device registration failed.\n");
+-		qlge_release_all(pdev);
+-		pci_disable_device(pdev);
+-		goto netdev_free;
++		goto cleanup_pdev;
+ 	}
+ 
+ 	err = qlge_health_create_reporters(qdev);
+ 	if (err)
+-		goto netdev_free;
++		goto unregister_netdev;
+ 
+ 	/* Start up the timer to trigger EEH if
+ 	 * the bus goes dead
+@@ -4626,6 +4624,11 @@ static int qlge_probe(struct pci_dev *pdev,
+ 	devlink_register(devlink);
+ 	return 0;
+ 
++unregister_netdev:
++	unregister_netdev(ndev);
++cleanup_pdev:
++	qlge_release_all(pdev);
++	pci_disable_device(pdev);
+ netdev_free:
+ 	free_netdev(ndev);
+ devlink_free:
+diff --git a/drivers/staging/r8188eu/core/rtw_recv.c b/drivers/staging/r8188eu/core/rtw_recv.c
+index 51a13262a226f..d120d61454a35 100644
+--- a/drivers/staging/r8188eu/core/rtw_recv.c
++++ b/drivers/staging/r8188eu/core/rtw_recv.c
+@@ -1853,8 +1853,7 @@ static int recv_func(struct adapter *padapter, struct recv_frame *rframe)
+ 		struct recv_frame *pending_frame;
+ 		int cnt = 0;
+ 
+-		pending_frame = rtw_alloc_recvframe(&padapter->recvpriv.uc_swdec_pending_queue);
+-		while (pending_frame) {
++		while ((pending_frame = rtw_alloc_recvframe(&padapter->recvpriv.uc_swdec_pending_queue))) {
+ 			cnt++;
+ 			recv_func_posthandle(padapter, pending_frame);
+ 		}
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+index 8c00f2dd67da6..8656d40610593 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+@@ -540,10 +540,10 @@ static int load_firmware(struct rt_firmware *pFirmware, struct device *device)
+ 	}
+ 	memcpy(pFirmware->szFwBuffer, fw->data, fw->size);
+ 	pFirmware->ulFwLength = fw->size;
+-	release_firmware(fw);
+-	DBG_88E_LEVEL(_drv_info_, "+%s: !bUsedWoWLANFw, FmrmwareLen:%d+\n", __func__, pFirmware->ulFwLength);
++	dev_dbg(device, "!bUsedWoWLANFw, FmrmwareLen:%d+\n", pFirmware->ulFwLength);
+ 
+ Exit:
++	release_firmware(fw);
+ 	return rtStatus;
+ }
+ 
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 68f61a7389303..f232a53329e16 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -53,7 +53,7 @@ struct int3400_thermal_priv {
+ 	struct art *arts;
+ 	int trt_count;
+ 	struct trt *trts;
+-	u8 uuid_bitmap;
++	u32 uuid_bitmap;
+ 	int rel_misc_dev_res;
+ 	int current_uuid_index;
+ 	char *data_vault;
+@@ -468,6 +468,11 @@ static void int3400_setup_gddv(struct int3400_thermal_priv *priv)
+ 	priv->data_vault = kmemdup(obj->package.elements[0].buffer.pointer,
+ 				   obj->package.elements[0].buffer.length,
+ 				   GFP_KERNEL);
++	if (!priv->data_vault) {
++		kfree(buffer.pointer);
++		return;
++	}
++
+ 	bin_attr_data_vault.private = priv->data_vault;
+ 	bin_attr_data_vault.size = obj->package.elements[0].buffer.length;
+ 	kfree(buffer.pointer);
+diff --git a/drivers/tty/hvc/hvc_iucv.c b/drivers/tty/hvc/hvc_iucv.c
+index 82a76cac94deb..32366caca6623 100644
+--- a/drivers/tty/hvc/hvc_iucv.c
++++ b/drivers/tty/hvc/hvc_iucv.c
+@@ -1417,7 +1417,9 @@ out_error:
+  */
+ static	int __init hvc_iucv_config(char *val)
+ {
+-	 return kstrtoul(val, 10, &hvc_iucv_devices);
++	if (kstrtoul(val, 10, &hvc_iucv_devices))
++		pr_warn("hvc_iucv= invalid parameter value '%s'\n", val);
++	return 1;
+ }
+ 
+ 
+diff --git a/drivers/tty/mxser.c b/drivers/tty/mxser.c
+index 39458b42df7b0..88d2f16fbf89a 100644
+--- a/drivers/tty/mxser.c
++++ b/drivers/tty/mxser.c
+@@ -719,6 +719,7 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 	struct mxser_port *info = container_of(port, struct mxser_port, port);
+ 	unsigned long page;
+ 	unsigned long flags;
++	int ret;
+ 
+ 	page = __get_free_page(GFP_KERNEL);
+ 	if (!page)
+@@ -728,9 +729,9 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 
+ 	if (!info->type) {
+ 		set_bit(TTY_IO_ERROR, &tty->flags);
+-		free_page(page);
+ 		spin_unlock_irqrestore(&info->slock, flags);
+-		return 0;
++		ret = 0;
++		goto err_free_xmit;
+ 	}
+ 	info->port.xmit_buf = (unsigned char *) page;
+ 
+@@ -750,8 +751,10 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 		if (capable(CAP_SYS_ADMIN)) {
+ 			set_bit(TTY_IO_ERROR, &tty->flags);
+ 			return 0;
+-		} else
+-			return -ENODEV;
++		}
++
++		ret = -ENODEV;
++		goto err_free_xmit;
+ 	}
+ 
+ 	/*
+@@ -796,6 +799,10 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ 	spin_unlock_irqrestore(&info->slock, flags);
+ 
+ 	return 0;
++err_free_xmit:
++	free_page(page);
++	info->port.xmit_buf = NULL;
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+index 2350fb3bb5e4c..c2cecc6f47db4 100644
+--- a/drivers/tty/serial/8250/8250_aspeed_vuart.c
++++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+@@ -487,7 +487,7 @@ static int aspeed_vuart_probe(struct platform_device *pdev)
+ 	port.port.irq = irq_of_parse_and_map(np, 0);
+ 	port.port.handle_irq = aspeed_vuart_handle_irq;
+ 	port.port.iotype = UPIO_MEM;
+-	port.port.type = PORT_16550A;
++	port.port.type = PORT_ASPEED_VUART;
+ 	port.port.uartclk = clk;
+ 	port.port.flags = UPF_SHARE_IRQ | UPF_BOOT_AUTOCONF | UPF_IOREMAP
+ 		| UPF_FIXED_PORT | UPF_FIXED_TYPE | UPF_NO_THRE_TEST;
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index 890fa7ddaa7f3..b3c3f7e5851ab 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -64,10 +64,19 @@ int serial8250_tx_dma(struct uart_8250_port *p)
+ 	struct uart_8250_dma		*dma = p->dma;
+ 	struct circ_buf			*xmit = &p->port.state->xmit;
+ 	struct dma_async_tx_descriptor	*desc;
++	struct uart_port		*up = &p->port;
+ 	int ret;
+ 
+-	if (dma->tx_running)
++	if (dma->tx_running) {
++		if (up->x_char) {
++			dmaengine_pause(dma->txchan);
++			uart_xchar_out(up, UART_TX);
++			dmaengine_resume(dma->txchan);
++		}
+ 		return 0;
++	} else if (up->x_char) {
++		uart_xchar_out(up, UART_TX);
++	}
+ 
+ 	if (uart_tx_stopped(&p->port) || uart_circ_empty(xmit)) {
+ 		/* We have been called from __dma_tx_complete() */
+diff --git a/drivers/tty/serial/8250/8250_lpss.c b/drivers/tty/serial/8250/8250_lpss.c
+index d3bafec7619da..0f5af061e0b45 100644
+--- a/drivers/tty/serial/8250/8250_lpss.c
++++ b/drivers/tty/serial/8250/8250_lpss.c
+@@ -117,8 +117,7 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ {
+ 	struct dw_dma_slave *param = &lpss->dma_param;
+ 	struct pci_dev *pdev = to_pci_dev(port->dev);
+-	unsigned int dma_devfn = PCI_DEVFN(PCI_SLOT(pdev->devfn), 0);
+-	struct pci_dev *dma_dev = pci_get_slot(pdev->bus, dma_devfn);
++	struct pci_dev *dma_dev;
+ 
+ 	switch (pdev->device) {
+ 	case PCI_DEVICE_ID_INTEL_BYT_UART1:
+@@ -137,6 +136,8 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ 		return -EINVAL;
+ 	}
+ 
++	dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), 0));
++
+ 	param->dma_dev = &dma_dev->dev;
+ 	param->m_master = 0;
+ 	param->p_master = 1;
+@@ -152,6 +153,14 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ 	return 0;
+ }
+ 
++static void byt_serial_exit(struct lpss8250 *lpss)
++{
++	struct dw_dma_slave *param = &lpss->dma_param;
++
++	/* Paired with pci_get_slot() in the byt_serial_setup() above */
++	put_device(param->dma_dev);
++}
++
+ static int ehl_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ {
+ 	struct uart_8250_dma *dma = &lpss->data.dma;
+@@ -170,6 +179,13 @@ static int ehl_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ 	return 0;
+ }
+ 
++static void ehl_serial_exit(struct lpss8250 *lpss)
++{
++	struct uart_8250_port *up = serial8250_get_port(lpss->data.line);
++
++	up->dma = NULL;
++}
++
+ #ifdef CONFIG_SERIAL_8250_DMA
+ static const struct dw_dma_platform_data qrk_serial_dma_pdata = {
+ 	.nr_channels = 2,
+@@ -344,8 +360,7 @@ static int lpss8250_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ err_exit:
+-	if (lpss->board->exit)
+-		lpss->board->exit(lpss);
++	lpss->board->exit(lpss);
+ 	pci_free_irq_vectors(pdev);
+ 	return ret;
+ }
+@@ -356,8 +371,7 @@ static void lpss8250_remove(struct pci_dev *pdev)
+ 
+ 	serial8250_unregister_port(lpss->data.line);
+ 
+-	if (lpss->board->exit)
+-		lpss->board->exit(lpss);
++	lpss->board->exit(lpss);
+ 	pci_free_irq_vectors(pdev);
+ }
+ 
+@@ -365,12 +379,14 @@ static const struct lpss8250_board byt_board = {
+ 	.freq = 100000000,
+ 	.base_baud = 2764800,
+ 	.setup = byt_serial_setup,
++	.exit = byt_serial_exit,
+ };
+ 
+ static const struct lpss8250_board ehl_board = {
+ 	.freq = 200000000,
+ 	.base_baud = 12500000,
+ 	.setup = ehl_serial_setup,
++	.exit = ehl_serial_exit,
+ };
+ 
+ static const struct lpss8250_board qrk_board = {
+diff --git a/drivers/tty/serial/8250/8250_mid.c b/drivers/tty/serial/8250/8250_mid.c
+index efa0515139f8e..e6c1791609ddf 100644
+--- a/drivers/tty/serial/8250/8250_mid.c
++++ b/drivers/tty/serial/8250/8250_mid.c
+@@ -73,6 +73,11 @@ static int pnw_setup(struct mid8250 *mid, struct uart_port *p)
+ 	return 0;
+ }
+ 
++static void pnw_exit(struct mid8250 *mid)
++{
++	pci_dev_put(mid->dma_dev);
++}
++
+ static int tng_handle_irq(struct uart_port *p)
+ {
+ 	struct mid8250 *mid = p->private_data;
+@@ -124,6 +129,11 @@ static int tng_setup(struct mid8250 *mid, struct uart_port *p)
+ 	return 0;
+ }
+ 
++static void tng_exit(struct mid8250 *mid)
++{
++	pci_dev_put(mid->dma_dev);
++}
++
+ static int dnv_handle_irq(struct uart_port *p)
+ {
+ 	struct mid8250 *mid = p->private_data;
+@@ -330,9 +340,9 @@ static int mid8250_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	pci_set_drvdata(pdev, mid);
+ 	return 0;
++
+ err:
+-	if (mid->board->exit)
+-		mid->board->exit(mid);
++	mid->board->exit(mid);
+ 	return ret;
+ }
+ 
+@@ -342,8 +352,7 @@ static void mid8250_remove(struct pci_dev *pdev)
+ 
+ 	serial8250_unregister_port(mid->line);
+ 
+-	if (mid->board->exit)
+-		mid->board->exit(mid);
++	mid->board->exit(mid);
+ }
+ 
+ static const struct mid8250_board pnw_board = {
+@@ -351,6 +360,7 @@ static const struct mid8250_board pnw_board = {
+ 	.freq = 50000000,
+ 	.base_baud = 115200,
+ 	.setup = pnw_setup,
++	.exit = pnw_exit,
+ };
+ 
+ static const struct mid8250_board tng_board = {
+@@ -358,6 +368,7 @@ static const struct mid8250_board tng_board = {
+ 	.freq = 38400000,
+ 	.base_baud = 1843200,
+ 	.setup = tng_setup,
++	.exit = tng_exit,
+ };
+ 
+ static const struct mid8250_board dnv_board = {
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 46e2079ad1aa2..1e1750a9614e4 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -307,6 +307,14 @@ static const struct serial8250_config uart_config[] = {
+ 		.rxtrig_bytes	= {1, 32, 64, 112},
+ 		.flags		= UART_CAP_FIFO | UART_CAP_SLEEP,
+ 	},
++	[PORT_ASPEED_VUART] = {
++		.name		= "ASPEED VUART",
++		.fifo_size	= 16,
++		.tx_loadsz	= 16,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_00,
++		.rxtrig_bytes	= {1, 4, 8, 14},
++		.flags		= UART_CAP_FIFO,
++	},
+ };
+ 
+ /* Uart divisor latch read */
+@@ -1615,6 +1623,18 @@ static inline void start_tx_rs485(struct uart_port *port)
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	struct uart_8250_em485 *em485 = up->em485;
+ 
++	/*
++	 * While serial8250_em485_handle_stop_tx() is a noop if
++	 * em485->active_timer != &em485->stop_tx_timer, it might happen that
++	 * the timer is still armed and triggers only after the current bunch of
++	 * chars is send and em485->active_timer == &em485->stop_tx_timer again.
++	 * So cancel the timer. There is still a theoretical race condition if
++	 * the timer is already running and only comes around to check for
++	 * em485->active_timer when &em485->stop_tx_timer is armed again.
++	 */
++	if (em485->active_timer == &em485->stop_tx_timer)
++		hrtimer_try_to_cancel(&em485->stop_tx_timer);
++
+ 	em485->active_timer = NULL;
+ 
+ 	if (em485->tx_stopped) {
+@@ -1799,9 +1819,7 @@ void serial8250_tx_chars(struct uart_8250_port *up)
+ 	int count;
+ 
+ 	if (port->x_char) {
+-		serial_out(up, UART_TX, port->x_char);
+-		port->icount.tx++;
+-		port->x_char = 0;
++		uart_xchar_out(port, UART_TX);
+ 		return;
+ 	}
+ 	if (uart_tx_stopped(port)) {
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index 49d0c7f2b29b8..79b7db8580e05 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -403,16 +403,16 @@ static int kgdboc_option_setup(char *opt)
+ {
+ 	if (!opt) {
+ 		pr_err("config string not provided\n");
+-		return -EINVAL;
++		return 1;
+ 	}
+ 
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		pr_err("config string too long\n");
+-		return -ENOSPC;
++		return 1;
+ 	}
+ 	strcpy(config, opt);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("kgdboc=", kgdboc_option_setup);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index dc6129ddef85d..eb15423f935a3 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -652,6 +652,20 @@ static void uart_flush_buffer(struct tty_struct *tty)
+ 	tty_port_tty_wakeup(&state->port);
+ }
+ 
++/*
++ * This function performs low-level write of high-priority XON/XOFF
++ * character and accounting for it.
++ *
++ * Requires uart_port to implement .serial_out().
++ */
++void uart_xchar_out(struct uart_port *uport, int offset)
++{
++	serial_port_out(uport, offset, uport->x_char);
++	uport->icount.tx++;
++	uport->x_char = 0;
++}
++EXPORT_SYMBOL_GPL(uart_xchar_out);
++
+ /*
+  * This function is used to send a high-priority XON/XOFF character to
+  * the device
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index df3522dab31b5..1e7dc130c39a6 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -762,7 +762,7 @@ static int xhci_exit_test_mode(struct xhci_hcd *xhci)
+ 	}
+ 	pm_runtime_allow(xhci_to_hcd(xhci)->self.controller);
+ 	xhci->test_mode = 0;
+-	return xhci_reset(xhci);
++	return xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ }
+ 
+ void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port,
+@@ -1088,6 +1088,9 @@ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status,
+ 		if (link_state == XDEV_U2)
+ 			*status |= USB_PORT_STAT_L1;
+ 		if (link_state == XDEV_U0) {
++			if (bus_state->resume_done[portnum])
++				usb_hcd_end_port_resume(&port->rhub->hcd->self,
++							portnum);
+ 			bus_state->resume_done[portnum] = 0;
+ 			clear_bit(portnum, &bus_state->resuming_ports);
+ 			if (bus_state->suspended_ports & (1 << portnum)) {
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 0e312066c5c63..b398d3fdabf61 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2583,7 +2583,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ 
+ fail:
+ 	xhci_halt(xhci);
+-	xhci_reset(xhci);
++	xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ 	xhci_mem_cleanup(xhci);
+ 	return -ENOMEM;
+ }
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index d7c0bf494d930..2c1cc94808875 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -65,7 +65,7 @@ static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
+  * handshake done).  There are two failure modes:  "usec" have passed (major
+  * hardware flakeout), or the register reads as all-ones (hardware removed).
+  */
+-int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
++int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us)
+ {
+ 	u32	result;
+ 	int	ret;
+@@ -73,7 +73,7 @@ int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
+ 	ret = readl_poll_timeout_atomic(ptr, result,
+ 					(result & mask) == done ||
+ 					result == U32_MAX,
+-					1, usec);
++					1, timeout_us);
+ 	if (result == U32_MAX)		/* card removed */
+ 		return -ENODEV;
+ 
+@@ -162,7 +162,7 @@ int xhci_start(struct xhci_hcd *xhci)
+  * Transactions will be terminated immediately, and operational registers
+  * will be set to their defaults.
+  */
+-int xhci_reset(struct xhci_hcd *xhci)
++int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ {
+ 	u32 command;
+ 	u32 state;
+@@ -195,8 +195,7 @@ int xhci_reset(struct xhci_hcd *xhci)
+ 	if (xhci->quirks & XHCI_INTEL_HOST)
+ 		udelay(1000);
+ 
+-	ret = xhci_handshake(&xhci->op_regs->command,
+-			CMD_RESET, 0, 10 * 1000 * 1000);
++	ret = xhci_handshake(&xhci->op_regs->command, CMD_RESET, 0, timeout_us);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -209,8 +208,7 @@ int xhci_reset(struct xhci_hcd *xhci)
+ 	 * xHCI cannot write to any doorbells or operational registers other
+ 	 * than status until the "Controller Not Ready" flag is cleared.
+ 	 */
+-	ret = xhci_handshake(&xhci->op_regs->status,
+-			STS_CNR, 0, 10 * 1000 * 1000);
++	ret = xhci_handshake(&xhci->op_regs->status, STS_CNR, 0, timeout_us);
+ 
+ 	xhci->usb2_rhub.bus_state.port_c_suspend = 0;
+ 	xhci->usb2_rhub.bus_state.suspended_ports = 0;
+@@ -731,7 +729,7 @@ static void xhci_stop(struct usb_hcd *hcd)
+ 	xhci->xhc_state |= XHCI_STATE_HALTED;
+ 	xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+ 	xhci_halt(xhci);
+-	xhci_reset(xhci);
++	xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ 	spin_unlock_irq(&xhci->lock);
+ 
+ 	xhci_cleanup_msix(xhci);
+@@ -784,7 +782,7 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ 	xhci_halt(xhci);
+ 	/* Workaround for spurious wakeups at shutdown with HSW */
+ 	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+-		xhci_reset(xhci);
++		xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ 	spin_unlock_irq(&xhci->lock);
+ 
+ 	xhci_cleanup_msix(xhci);
+@@ -1170,7 +1168,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 		xhci_dbg(xhci, "Stop HCD\n");
+ 		xhci_halt(xhci);
+ 		xhci_zero_64b_regs(xhci);
+-		retval = xhci_reset(xhci);
++		retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
+ 		spin_unlock_irq(&xhci->lock);
+ 		if (retval)
+ 			return retval;
+@@ -5318,7 +5316,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+ 
+ 	xhci_dbg(xhci, "Resetting HCD\n");
+ 	/* Reset the internal HC memory state and registers. */
+-	retval = xhci_reset(xhci);
++	retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
+ 	if (retval)
+ 		return retval;
+ 	xhci_dbg(xhci, "Reset complete\n");
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 5a75fe5631238..bc0789229527f 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -229,6 +229,9 @@ struct xhci_op_regs {
+ #define CMD_ETE		(1 << 14)
+ /* bits 15:31 are reserved (and should be preserved on writes). */
+ 
++#define XHCI_RESET_LONG_USEC		(10 * 1000 * 1000)
++#define XHCI_RESET_SHORT_USEC		(250 * 1000)
++
+ /* IMAN - Interrupt Management Register */
+ #define IMAN_IE		(1 << 1)
+ #define IMAN_IP		(1 << 0)
+@@ -2083,11 +2086,11 @@ void xhci_free_container_ctx(struct xhci_hcd *xhci,
+ 
+ /* xHCI host controller glue */
+ typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
+-int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec);
++int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us);
+ void xhci_quiesce(struct xhci_hcd *xhci);
+ int xhci_halt(struct xhci_hcd *xhci);
+ int xhci_start(struct xhci_hcd *xhci);
+-int xhci_reset(struct xhci_hcd *xhci);
++int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us);
+ int xhci_run(struct usb_hcd *hcd);
+ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks);
+ void xhci_shutdown(struct usb_hcd *hcd);
+@@ -2467,6 +2470,8 @@ static inline const char *xhci_decode_ctrl_ctx(char *str,
+ 	unsigned int	bit;
+ 	int		ret = 0;
+ 
++	str[0] = '\0';
++
+ 	if (drop) {
+ 		ret = sprintf(str, "Drop:");
+ 		for_each_set_bit(bit, &drop, 32)
+@@ -2624,8 +2629,11 @@ static inline const char *xhci_decode_usbsts(char *str, u32 usbsts)
+ {
+ 	int ret = 0;
+ 
++	ret = sprintf(str, " 0x%08x", usbsts);
++
+ 	if (usbsts == ~(u32)0)
+-		return " 0xffffffff";
++		return str;
++
+ 	if (usbsts & STS_HALT)
+ 		ret += sprintf(str + ret, " HCHalted");
+ 	if (usbsts & STS_FATAL)
+diff --git a/drivers/usb/serial/Kconfig b/drivers/usb/serial/Kconfig
+index de5c012570603..ef8d1c73c7545 100644
+--- a/drivers/usb/serial/Kconfig
++++ b/drivers/usb/serial/Kconfig
+@@ -66,6 +66,7 @@ config USB_SERIAL_SIMPLE
+ 		- Libtransistor USB console
+ 		- a number of Motorola phones
+ 		- Motorola Tetra devices
++		- Nokia mobile phones
+ 		- Novatel Wireless GPS receivers
+ 		- Siemens USB/MPI adapter.
+ 		- ViVOtech ViVOpay USB device.
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index a70fd86f735ca..88b284d61681a 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -116,6 +116,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530GC_PRODUCT_ID) },
+ 	{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
+ 	{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
++	{ USB_DEVICE(IBM_VENDOR_ID, IBM_PRODUCT_ID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+@@ -435,6 +436,7 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ 		case 0x105:
+ 		case 0x305:
+ 		case 0x405:
++		case 0x605:
+ 			/*
+ 			 * Assume it's an HXN-type if the device doesn't
+ 			 * support the old read request value.
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 6097ee8fccb25..c5406452b774e 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -35,6 +35,9 @@
+ #define ATEN_PRODUCT_UC232B	0x2022
+ #define ATEN_PRODUCT_ID2	0x2118
+ 
++#define IBM_VENDOR_ID		0x04b3
++#define IBM_PRODUCT_ID		0x4016
++
+ #define IODATA_VENDOR_ID	0x04bb
+ #define IODATA_PRODUCT_ID	0x0a03
+ #define IODATA_PRODUCT_ID_RSAQ5	0x0a0e
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index bd23a7cb1be2b..4c6747889a194 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -91,6 +91,11 @@ DEVICE(moto_modem, MOTO_IDS);
+ 	{ USB_DEVICE(0x0cad, 0x9016) }	/* TPG2200 */
+ DEVICE(motorola_tetra, MOTOROLA_TETRA_IDS);
+ 
++/* Nokia mobile phone driver */
++#define NOKIA_IDS()			\
++	{ USB_DEVICE(0x0421, 0x069a) }	/* Nokia 130 (RM-1035) */
++DEVICE(nokia, NOKIA_IDS);
++
+ /* Novatel Wireless GPS driver */
+ #define NOVATEL_IDS()			\
+ 	{ USB_DEVICE(0x09d7, 0x0100) }	/* NovAtel FlexPack GPS */
+@@ -123,6 +128,7 @@ static struct usb_serial_driver * const serial_drivers[] = {
+ 	&vivopay_device,
+ 	&moto_modem_device,
+ 	&motorola_tetra_device,
++	&nokia_device,
+ 	&novatel_gps_device,
+ 	&hp4x_device,
+ 	&suunto_device,
+@@ -140,6 +146,7 @@ static const struct usb_device_id id_table[] = {
+ 	VIVOPAY_IDS(),
+ 	MOTO_IDS(),
+ 	MOTOROLA_TETRA_IDS(),
++	NOKIA_IDS(),
+ 	NOVATEL_IDS(),
+ 	HP4X_IDS(),
+ 	SUUNTO_IDS(),
+diff --git a/drivers/usb/storage/ene_ub6250.c b/drivers/usb/storage/ene_ub6250.c
+index 5f7d678502be4..6012603f3630e 100644
+--- a/drivers/usb/storage/ene_ub6250.c
++++ b/drivers/usb/storage/ene_ub6250.c
+@@ -237,36 +237,33 @@ static struct us_unusual_dev ene_ub6250_unusual_dev_list[] = {
+ #define memstick_logaddr(logadr1, logadr0) ((((u16)(logadr1)) << 8) | (logadr0))
+ 
+ 
+-struct SD_STATUS {
+-	u8    Insert:1;
+-	u8    Ready:1;
+-	u8    MediaChange:1;
+-	u8    IsMMC:1;
+-	u8    HiCapacity:1;
+-	u8    HiSpeed:1;
+-	u8    WtP:1;
+-	u8    Reserved:1;
+-};
+-
+-struct MS_STATUS {
+-	u8    Insert:1;
+-	u8    Ready:1;
+-	u8    MediaChange:1;
+-	u8    IsMSPro:1;
+-	u8    IsMSPHG:1;
+-	u8    Reserved1:1;
+-	u8    WtP:1;
+-	u8    Reserved2:1;
+-};
+-
+-struct SM_STATUS {
+-	u8    Insert:1;
+-	u8    Ready:1;
+-	u8    MediaChange:1;
+-	u8    Reserved:3;
+-	u8    WtP:1;
+-	u8    IsMS:1;
+-};
++/* SD_STATUS bits */
++#define SD_Insert	BIT(0)
++#define SD_Ready	BIT(1)
++#define SD_MediaChange	BIT(2)
++#define SD_IsMMC	BIT(3)
++#define SD_HiCapacity	BIT(4)
++#define SD_HiSpeed	BIT(5)
++#define SD_WtP		BIT(6)
++			/* Bit 7 reserved */
++
++/* MS_STATUS bits */
++#define MS_Insert	BIT(0)
++#define MS_Ready	BIT(1)
++#define MS_MediaChange	BIT(2)
++#define MS_IsMSPro	BIT(3)
++#define MS_IsMSPHG	BIT(4)
++			/* Bit 5 reserved */
++#define MS_WtP		BIT(6)
++			/* Bit 7 reserved */
++
++/* SM_STATUS bits */
++#define SM_Insert	BIT(0)
++#define SM_Ready	BIT(1)
++#define SM_MediaChange	BIT(2)
++			/* Bits 3-5 reserved */
++#define SM_WtP		BIT(6)
++#define SM_IsMS		BIT(7)
+ 
+ struct ms_bootblock_cis {
+ 	u8 bCistplDEVICE[6];    /* 0 */
+@@ -437,9 +434,9 @@ struct ene_ub6250_info {
+ 	u8		*bbuf;
+ 
+ 	/* for 6250 code */
+-	struct SD_STATUS	SD_Status;
+-	struct MS_STATUS	MS_Status;
+-	struct SM_STATUS	SM_Status;
++	u8		SD_Status;
++	u8		MS_Status;
++	u8		SM_Status;
+ 
+ 	/* ----- SD Control Data ---------------- */
+ 	/*SD_REGISTER SD_Regs; */
+@@ -602,7 +599,7 @@ static int sd_scsi_test_unit_ready(struct us_data *us, struct scsi_cmnd *srb)
+ {
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ 
+-	if (info->SD_Status.Insert && info->SD_Status.Ready)
++	if ((info->SD_Status & SD_Insert) && (info->SD_Status & SD_Ready))
+ 		return USB_STOR_TRANSPORT_GOOD;
+ 	else {
+ 		ene_sd_init(us);
+@@ -622,7 +619,7 @@ static int sd_scsi_mode_sense(struct us_data *us, struct scsi_cmnd *srb)
+ 		0x0b, 0x00, 0x80, 0x08, 0x00, 0x00,
+ 		0x71, 0xc0, 0x00, 0x00, 0x02, 0x00 };
+ 
+-	if (info->SD_Status.WtP)
++	if (info->SD_Status & SD_WtP)
+ 		usb_stor_set_xfer_buf(mediaWP, 12, srb);
+ 	else
+ 		usb_stor_set_xfer_buf(mediaNoWP, 12, srb);
+@@ -641,9 +638,9 @@ static int sd_scsi_read_capacity(struct us_data *us, struct scsi_cmnd *srb)
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ 
+ 	usb_stor_dbg(us, "sd_scsi_read_capacity\n");
+-	if (info->SD_Status.HiCapacity) {
++	if (info->SD_Status & SD_HiCapacity) {
+ 		bl_len = 0x200;
+-		if (info->SD_Status.IsMMC)
++		if (info->SD_Status & SD_IsMMC)
+ 			bl_num = info->HC_C_SIZE-1;
+ 		else
+ 			bl_num = (info->HC_C_SIZE + 1) * 1024 - 1;
+@@ -693,7 +690,7 @@ static int sd_scsi_read(struct us_data *us, struct scsi_cmnd *srb)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 
+-	if (info->SD_Status.HiCapacity)
++	if (info->SD_Status & SD_HiCapacity)
+ 		bnByte = bn;
+ 
+ 	/* set up the command wrapper */
+@@ -733,7 +730,7 @@ static int sd_scsi_write(struct us_data *us, struct scsi_cmnd *srb)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 
+-	if (info->SD_Status.HiCapacity)
++	if (info->SD_Status & SD_HiCapacity)
+ 		bnByte = bn;
+ 
+ 	/* set up the command wrapper */
+@@ -1456,7 +1453,7 @@ static int ms_scsi_test_unit_ready(struct us_data *us, struct scsi_cmnd *srb)
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+ 
+ 	/* pr_info("MS_SCSI_Test_Unit_Ready\n"); */
+-	if (info->MS_Status.Insert && info->MS_Status.Ready) {
++	if ((info->MS_Status & MS_Insert) && (info->MS_Status & MS_Ready)) {
+ 		return USB_STOR_TRANSPORT_GOOD;
+ 	} else {
+ 		ene_ms_init(us);
+@@ -1476,7 +1473,7 @@ static int ms_scsi_mode_sense(struct us_data *us, struct scsi_cmnd *srb)
+ 		0x0b, 0x00, 0x80, 0x08, 0x00, 0x00,
+ 		0x71, 0xc0, 0x00, 0x00, 0x02, 0x00 };
+ 
+-	if (info->MS_Status.WtP)
++	if (info->MS_Status & MS_WtP)
+ 		usb_stor_set_xfer_buf(mediaWP, 12, srb);
+ 	else
+ 		usb_stor_set_xfer_buf(mediaNoWP, 12, srb);
+@@ -1495,7 +1492,7 @@ static int ms_scsi_read_capacity(struct us_data *us, struct scsi_cmnd *srb)
+ 
+ 	usb_stor_dbg(us, "ms_scsi_read_capacity\n");
+ 	bl_len = 0x200;
+-	if (info->MS_Status.IsMSPro)
++	if (info->MS_Status & MS_IsMSPro)
+ 		bl_num = info->MSP_TotalBlock - 1;
+ 	else
+ 		bl_num = info->MS_Lib.NumberOfLogBlock * info->MS_Lib.blockSize * 2 - 1;
+@@ -1650,7 +1647,7 @@ static int ms_scsi_read(struct us_data *us, struct scsi_cmnd *srb)
+ 	if (bn > info->bl_num)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 
+-	if (info->MS_Status.IsMSPro) {
++	if (info->MS_Status & MS_IsMSPro) {
+ 		result = ene_load_bincode(us, MSP_RW_PATTERN);
+ 		if (result != USB_STOR_XFER_GOOD) {
+ 			usb_stor_dbg(us, "Load MPS RW pattern Fail !!\n");
+@@ -1751,7 +1748,7 @@ static int ms_scsi_write(struct us_data *us, struct scsi_cmnd *srb)
+ 	if (bn > info->bl_num)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 
+-	if (info->MS_Status.IsMSPro) {
++	if (info->MS_Status & MS_IsMSPro) {
+ 		result = ene_load_bincode(us, MSP_RW_PATTERN);
+ 		if (result != USB_STOR_XFER_GOOD) {
+ 			pr_info("Load MSP RW pattern Fail !!\n");
+@@ -1859,12 +1856,12 @@ static int ene_get_card_status(struct us_data *us, u8 *buf)
+ 
+ 	tmpreg = (u16) reg4b;
+ 	reg4b = *(u32 *)(&buf[0x14]);
+-	if (info->SD_Status.HiCapacity && !info->SD_Status.IsMMC)
++	if ((info->SD_Status & SD_HiCapacity) && !(info->SD_Status & SD_IsMMC))
+ 		info->HC_C_SIZE = (reg4b >> 8) & 0x3fffff;
+ 
+ 	info->SD_C_SIZE = ((tmpreg & 0x03) << 10) | (u16)(reg4b >> 22);
+ 	info->SD_C_SIZE_MULT = (u8)(reg4b >> 7)  & 0x07;
+-	if (info->SD_Status.HiCapacity && info->SD_Status.IsMMC)
++	if ((info->SD_Status & SD_HiCapacity) && (info->SD_Status & SD_IsMMC))
+ 		info->HC_C_SIZE = *(u32 *)(&buf[0x100]);
+ 
+ 	if (info->SD_READ_BL_LEN > SD_BLOCK_LEN) {
+@@ -2076,6 +2073,7 @@ static int ene_ms_init(struct us_data *us)
+ 	u16 MSP_BlockSize, MSP_UserAreaBlocks;
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ 	u8 *bbuf = info->bbuf;
++	unsigned int s;
+ 
+ 	printk(KERN_INFO "transport --- ENE_MSInit\n");
+ 
+@@ -2100,15 +2098,16 @@ static int ene_ms_init(struct us_data *us)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 	/* the same part to test ENE */
+-	info->MS_Status = *(struct MS_STATUS *) bbuf;
+-
+-	if (info->MS_Status.Insert && info->MS_Status.Ready) {
+-		printk(KERN_INFO "Insert     = %x\n", info->MS_Status.Insert);
+-		printk(KERN_INFO "Ready      = %x\n", info->MS_Status.Ready);
+-		printk(KERN_INFO "IsMSPro    = %x\n", info->MS_Status.IsMSPro);
+-		printk(KERN_INFO "IsMSPHG    = %x\n", info->MS_Status.IsMSPHG);
+-		printk(KERN_INFO "WtP= %x\n", info->MS_Status.WtP);
+-		if (info->MS_Status.IsMSPro) {
++	info->MS_Status = bbuf[0];
++
++	s = info->MS_Status;
++	if ((s & MS_Insert) && (s & MS_Ready)) {
++		printk(KERN_INFO "Insert     = %x\n", !!(s & MS_Insert));
++		printk(KERN_INFO "Ready      = %x\n", !!(s & MS_Ready));
++		printk(KERN_INFO "IsMSPro    = %x\n", !!(s & MS_IsMSPro));
++		printk(KERN_INFO "IsMSPHG    = %x\n", !!(s & MS_IsMSPHG));
++		printk(KERN_INFO "WtP= %x\n", !!(s & MS_WtP));
++		if (s & MS_IsMSPro) {
+ 			MSP_BlockSize      = (bbuf[6] << 8) | bbuf[7];
+ 			MSP_UserAreaBlocks = (bbuf[10] << 8) | bbuf[11];
+ 			info->MSP_TotalBlock = MSP_BlockSize * MSP_UserAreaBlocks;
+@@ -2169,17 +2168,17 @@ static int ene_sd_init(struct us_data *us)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 
+-	info->SD_Status =  *(struct SD_STATUS *) bbuf;
+-	if (info->SD_Status.Insert && info->SD_Status.Ready) {
+-		struct SD_STATUS *s = &info->SD_Status;
++	info->SD_Status = bbuf[0];
++	if ((info->SD_Status & SD_Insert) && (info->SD_Status & SD_Ready)) {
++		unsigned int s = info->SD_Status;
+ 
+ 		ene_get_card_status(us, bbuf);
+-		usb_stor_dbg(us, "Insert     = %x\n", s->Insert);
+-		usb_stor_dbg(us, "Ready      = %x\n", s->Ready);
+-		usb_stor_dbg(us, "IsMMC      = %x\n", s->IsMMC);
+-		usb_stor_dbg(us, "HiCapacity = %x\n", s->HiCapacity);
+-		usb_stor_dbg(us, "HiSpeed    = %x\n", s->HiSpeed);
+-		usb_stor_dbg(us, "WtP        = %x\n", s->WtP);
++		usb_stor_dbg(us, "Insert     = %x\n", !!(s & SD_Insert));
++		usb_stor_dbg(us, "Ready      = %x\n", !!(s & SD_Ready));
++		usb_stor_dbg(us, "IsMMC      = %x\n", !!(s & SD_IsMMC));
++		usb_stor_dbg(us, "HiCapacity = %x\n", !!(s & SD_HiCapacity));
++		usb_stor_dbg(us, "HiSpeed    = %x\n", !!(s & SD_HiSpeed));
++		usb_stor_dbg(us, "WtP        = %x\n", !!(s & SD_WtP));
+ 	} else {
+ 		usb_stor_dbg(us, "SD Card Not Ready --- %x\n", bbuf[0]);
+ 		return USB_STOR_TRANSPORT_ERROR;
+@@ -2201,14 +2200,14 @@ static int ene_init(struct us_data *us)
+ 
+ 	misc_reg03 = bbuf[0];
+ 	if (misc_reg03 & 0x01) {
+-		if (!info->SD_Status.Ready) {
++		if (!(info->SD_Status & SD_Ready)) {
+ 			result = ene_sd_init(us);
+ 			if (result != USB_STOR_XFER_GOOD)
+ 				return USB_STOR_TRANSPORT_ERROR;
+ 		}
+ 	}
+ 	if (misc_reg03 & 0x02) {
+-		if (!info->MS_Status.Ready) {
++		if (!(info->MS_Status & MS_Ready)) {
+ 			result = ene_ms_init(us);
+ 			if (result != USB_STOR_XFER_GOOD)
+ 				return USB_STOR_TRANSPORT_ERROR;
+@@ -2307,14 +2306,14 @@ static int ene_transport(struct scsi_cmnd *srb, struct us_data *us)
+ 
+ 	/*US_DEBUG(usb_stor_show_command(us, srb)); */
+ 	scsi_set_resid(srb, 0);
+-	if (unlikely(!(info->SD_Status.Ready || info->MS_Status.Ready)))
++	if (unlikely(!(info->SD_Status & SD_Ready) || (info->MS_Status & MS_Ready)))
+ 		result = ene_init(us);
+ 	if (result == USB_STOR_XFER_GOOD) {
+ 		result = USB_STOR_TRANSPORT_ERROR;
+-		if (info->SD_Status.Ready)
++		if (info->SD_Status & SD_Ready)
+ 			result = sd_scsi_irp(us, srb);
+ 
+-		if (info->MS_Status.Ready)
++		if (info->MS_Status & MS_Ready)
+ 			result = ms_scsi_irp(us, srb);
+ 	}
+ 	return result;
+@@ -2378,7 +2377,6 @@ static int ene_ub6250_probe(struct usb_interface *intf,
+ 
+ static int ene_ub6250_resume(struct usb_interface *iface)
+ {
+-	u8 tmp = 0;
+ 	struct us_data *us = usb_get_intfdata(iface);
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+ 
+@@ -2390,17 +2388,16 @@ static int ene_ub6250_resume(struct usb_interface *iface)
+ 	mutex_unlock(&us->dev_mutex);
+ 
+ 	info->Power_IsResum = true;
+-	/*info->SD_Status.Ready = 0; */
+-	info->SD_Status = *(struct SD_STATUS *)&tmp;
+-	info->MS_Status = *(struct MS_STATUS *)&tmp;
+-	info->SM_Status = *(struct SM_STATUS *)&tmp;
++	/* info->SD_Status &= ~SD_Ready; */
++	info->SD_Status = 0;
++	info->MS_Status = 0;
++	info->SM_Status = 0;
+ 
+ 	return 0;
+ }
+ 
+ static int ene_ub6250_reset_resume(struct usb_interface *iface)
+ {
+-	u8 tmp = 0;
+ 	struct us_data *us = usb_get_intfdata(iface);
+ 	struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+ 
+@@ -2412,10 +2409,10 @@ static int ene_ub6250_reset_resume(struct usb_interface *iface)
+ 	 * the device
+ 	 */
+ 	info->Power_IsResum = true;
+-	/*info->SD_Status.Ready = 0; */
+-	info->SD_Status = *(struct SD_STATUS *)&tmp;
+-	info->MS_Status = *(struct MS_STATUS *)&tmp;
+-	info->SM_Status = *(struct SM_STATUS *)&tmp;
++	/* info->SD_Status &= ~SD_Ready; */
++	info->SD_Status = 0;
++	info->MS_Status = 0;
++	info->SM_Status = 0;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c
+index 3789698d9d3c6..0c423916d7bfa 100644
+--- a/drivers/usb/storage/realtek_cr.c
++++ b/drivers/usb/storage/realtek_cr.c
+@@ -365,7 +365,7 @@ static int rts51x_read_mem(struct us_data *us, u16 addr, u8 *data, u16 len)
+ 
+ 	buf = kmalloc(len, GFP_NOIO);
+ 	if (buf == NULL)
+-		return USB_STOR_TRANSPORT_ERROR;
++		return -ENOMEM;
+ 
+ 	usb_stor_dbg(us, "addr = 0x%x, len = %d\n", addr, len);
+ 
+diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
+index 7ffcda94d323a..16b4560216ba6 100644
+--- a/drivers/usb/typec/tipd/core.c
++++ b/drivers/usb/typec/tipd/core.c
+@@ -256,6 +256,10 @@ static int tps6598x_connect(struct tps6598x *tps, u32 status)
+ 	typec_set_pwr_opmode(tps->port, mode);
+ 	typec_set_pwr_role(tps->port, TPS_STATUS_TO_TYPEC_PORTROLE(status));
+ 	typec_set_vconn_role(tps->port, TPS_STATUS_TO_TYPEC_VCONN(status));
++	if (TPS_STATUS_TO_UPSIDE_DOWN(status))
++		typec_set_orientation(tps->port, TYPEC_ORIENTATION_REVERSE);
++	else
++		typec_set_orientation(tps->port, TYPEC_ORIENTATION_NORMAL);
+ 	tps6598x_set_data_role(tps, TPS_STATUS_TO_TYPEC_DATAROLE(status), true);
+ 
+ 	tps->partner = typec_register_partner(tps->port, &desc);
+@@ -278,6 +282,7 @@ static void tps6598x_disconnect(struct tps6598x *tps, u32 status)
+ 	typec_set_pwr_opmode(tps->port, TYPEC_PWR_MODE_USB);
+ 	typec_set_pwr_role(tps->port, TPS_STATUS_TO_TYPEC_PORTROLE(status));
+ 	typec_set_vconn_role(tps->port, TPS_STATUS_TO_TYPEC_VCONN(status));
++	typec_set_orientation(tps->port, TYPEC_ORIENTATION_NONE);
+ 	tps6598x_set_data_role(tps, TPS_STATUS_TO_TYPEC_DATAROLE(status), false);
+ 
+ 	power_supply_changed(tps->psy);
+diff --git a/drivers/usb/typec/tipd/tps6598x.h b/drivers/usb/typec/tipd/tps6598x.h
+index 3dae84c524fb5..527857549d699 100644
+--- a/drivers/usb/typec/tipd/tps6598x.h
++++ b/drivers/usb/typec/tipd/tps6598x.h
+@@ -17,6 +17,7 @@
+ /* TPS_REG_STATUS bits */
+ #define TPS_STATUS_PLUG_PRESENT		BIT(0)
+ #define TPS_STATUS_PLUG_UPSIDE_DOWN	BIT(4)
++#define TPS_STATUS_TO_UPSIDE_DOWN(s)	(!!((s) & TPS_STATUS_PLUG_UPSIDE_DOWN))
+ #define TPS_STATUS_PORTROLE		BIT(5)
+ #define TPS_STATUS_TO_TYPEC_PORTROLE(s) (!!((s) & TPS_STATUS_PORTROLE))
+ #define TPS_STATUS_DATAROLE		BIT(6)
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 7b4ab7cfc3599..57bed3d740608 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -1680,7 +1680,7 @@ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+ 		return;
+ 
+ 	if (unlikely(is_ctrl_vq_idx(mvdev, idx))) {
+-		if (!mvdev->cvq.ready)
++		if (!mvdev->wq || !mvdev->cvq.ready)
+ 			return;
+ 
+ 		wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
+@@ -1916,11 +1916,25 @@ static u64 mlx5_vdpa_get_features(struct vdpa_device *vdev)
+ 	return ndev->mvdev.mlx_features;
+ }
+ 
+-static int verify_min_features(struct mlx5_vdpa_dev *mvdev, u64 features)
++static int verify_driver_features(struct mlx5_vdpa_dev *mvdev, u64 features)
+ {
++	/* Minimum features to expect */
+ 	if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)))
+ 		return -EOPNOTSUPP;
+ 
++	/* Double check features combination sent down by the driver.
++	 * Fail invalid features due to absence of the depended feature.
++	 *
++	 * Per VIRTIO v1.1 specification, section 5.1.3.1 Feature bit
++	 * requirements: "VIRTIO_NET_F_MQ Requires VIRTIO_NET_F_CTRL_VQ".
++	 * By failing the invalid features sent down by untrusted drivers,
++	 * we're assured the assumption made upon is_index_valid() and
++	 * is_ctrl_vq_idx() will not be compromised.
++	 */
++	if ((features & (BIT_ULL(VIRTIO_NET_F_MQ) | BIT_ULL(VIRTIO_NET_F_CTRL_VQ))) ==
++            BIT_ULL(VIRTIO_NET_F_MQ))
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+@@ -1996,7 +2010,7 @@ static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features)
+ 
+ 	print_features(mvdev, features, true);
+ 
+-	err = verify_min_features(mvdev, features);
++	err = verify_driver_features(mvdev, features);
+ 	if (err)
+ 		return err;
+ 
+@@ -2659,9 +2673,12 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
+ 	struct mlx5_vdpa_mgmtdev *mgtdev = container_of(v_mdev, struct mlx5_vdpa_mgmtdev, mgtdev);
+ 	struct mlx5_vdpa_dev *mvdev = to_mvdev(dev);
+ 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
++	struct workqueue_struct *wq;
+ 
+ 	mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+-	destroy_workqueue(mvdev->wq);
++	wq = mvdev->wq;
++	mvdev->wq = NULL;
++	destroy_workqueue(wq);
+ 	_vdpa_unregister_device(dev);
+ 	mgtdev->ndev = NULL;
+ }
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index f948e6cd29939..2e6409cc11ada 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -228,6 +228,19 @@ int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev, pci_power_t stat
+ 	if (!ret) {
+ 		/* D3 might be unsupported via quirk, skip unless in D3 */
+ 		if (needs_save && pdev->current_state >= PCI_D3hot) {
++			/*
++			 * The current PCI state will be saved locally in
++			 * 'pm_save' during the D3hot transition. When the
++			 * device state is changed to D0 again with the current
++			 * function, then pci_store_saved_state() will restore
++			 * the state and will free the memory pointed by
++			 * 'pm_save'. There are few cases where the PCI power
++			 * state can be changed to D0 without the involvement
++			 * of the driver. For these cases, free the earlier
++			 * allocated memory first before overwriting 'pm_save'
++			 * to prevent the memory leak.
++			 */
++			kfree(vdev->pm_save);
+ 			vdev->pm_save = pci_store_saved_state(pdev);
+ 		} else if (needs_restore) {
+ 			pci_load_and_free_saved_state(pdev, &vdev->pm_save);
+@@ -322,6 +335,17 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
+ 	/* For needs_reset */
+ 	lockdep_assert_held(&vdev->vdev.dev_set->lock);
+ 
++	/*
++	 * This function can be invoked while the power state is non-D0.
++	 * This function calls __pci_reset_function_locked() which internally
++	 * can use pci_pm_reset() for the function reset. pci_pm_reset() will
++	 * fail if the power state is non-D0. Also, for the devices which
++	 * have NoSoftRst-, the reset function can cause the PCI config space
++	 * reset without restoring the original state (saved locally in
++	 * 'vdev->pm_save').
++	 */
++	vfio_pci_set_power_state(vdev, PCI_D0);
++
+ 	/* Stop the device from further DMA */
+ 	pci_clear_master(pdev);
+ 
+@@ -921,6 +945,19 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
+ 			return -EINVAL;
+ 
+ 		vfio_pci_zap_and_down_write_memory_lock(vdev);
++
++		/*
++		 * This function can be invoked while the power state is non-D0.
++		 * If pci_try_reset_function() has been called while the power
++		 * state is non-D0, then pci_try_reset_function() will
++		 * internally set the power state to D0 without vfio driver
++		 * involvement. For the devices which have NoSoftRst-, the
++		 * reset function can cause the PCI config space reset without
++		 * restoring the original state (saved locally in
++		 * 'vdev->pm_save').
++		 */
++		vfio_pci_set_power_state(vdev, PCI_D0);
++
+ 		ret = pci_try_reset_function(vdev->pdev);
+ 		up_write(&vdev->memory_lock);
+ 
+@@ -2055,6 +2092,18 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
+ 	}
+ 	cur_mem = NULL;
+ 
++	/*
++	 * The pci_reset_bus() will reset all the devices in the bus.
++	 * The power state can be non-D0 for some of the devices in the bus.
++	 * For these devices, the pci_reset_bus() will internally set
++	 * the power state to D0 without vfio driver involvement.
++	 * For the devices which have NoSoftRst-, the reset function can
++	 * cause the PCI config space reset without restoring the original
++	 * state (saved locally in 'vdev->pm_save').
++	 */
++	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
++		vfio_pci_set_power_state(cur, PCI_D0);
++
+ 	ret = pci_reset_bus(pdev);
+ 
+ err_undo:
+@@ -2108,6 +2157,18 @@ static bool vfio_pci_dev_set_try_reset(struct vfio_device_set *dev_set)
+ 	if (!pdev)
+ 		return false;
+ 
++	/*
++	 * The pci_reset_bus() will reset all the devices in the bus.
++	 * The power state can be non-D0 for some of the devices in the bus.
++	 * For these devices, the pci_reset_bus() will internally set
++	 * the power state to D0 without vfio driver involvement.
++	 * For the devices which have NoSoftRst-, the reset function can
++	 * cause the PCI config space reset without restoring the original
++	 * state (saved locally in 'vdev->pm_save').
++	 */
++	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
++		vfio_pci_set_power_state(cur, PCI_D0);
++
+ 	ret = pci_reset_bus(pdev);
+ 	if (ret)
+ 		return false;
+diff --git a/drivers/vhost/iotlb.c b/drivers/vhost/iotlb.c
+index 40b098320b2a7..5829cf2d0552d 100644
+--- a/drivers/vhost/iotlb.c
++++ b/drivers/vhost/iotlb.c
+@@ -62,8 +62,12 @@ int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
+ 	 */
+ 	if (start == 0 && last == ULONG_MAX) {
+ 		u64 mid = last / 2;
++		int err = vhost_iotlb_add_range_ctx(iotlb, start, mid, addr,
++				perm, opaque);
++
++		if (err)
++			return err;
+ 
+-		vhost_iotlb_add_range_ctx(iotlb, start, mid, addr, perm, opaque);
+ 		addr += mid + 1;
+ 		start = mid + 1;
+ 	}
+diff --git a/drivers/video/fbdev/atafb.c b/drivers/video/fbdev/atafb.c
+index e3812a8ff55a4..29e650ecfceb1 100644
+--- a/drivers/video/fbdev/atafb.c
++++ b/drivers/video/fbdev/atafb.c
+@@ -1683,9 +1683,9 @@ static int falcon_setcolreg(unsigned int regno, unsigned int red,
+ 			   ((blue & 0xfc00) >> 8));
+ 	if (regno < 16) {
+ 		shifter_tt.color_reg[regno] =
+-			(((red & 0xe000) >> 13) | ((red & 0x1000) >> 12) << 8) |
+-			(((green & 0xe000) >> 13) | ((green & 0x1000) >> 12) << 4) |
+-			((blue & 0xe000) >> 13) | ((blue & 0x1000) >> 12);
++			((((red & 0xe000) >> 13)   | ((red & 0x1000) >> 12)) << 8)   |
++			((((green & 0xe000) >> 13) | ((green & 0x1000) >> 12)) << 4) |
++			   ((blue & 0xe000) >> 13) | ((blue & 0x1000) >> 12);
+ 		((u32 *)info->pseudo_palette)[regno] = ((red & 0xf800) |
+ 						       ((green & 0xfc00) >> 5) |
+ 						       ((blue & 0xf800) >> 11));
+@@ -1971,9 +1971,9 @@ static int stste_setcolreg(unsigned int regno, unsigned int red,
+ 	green >>= 12;
+ 	if (ATARIHW_PRESENT(EXTD_SHIFTER))
+ 		shifter_tt.color_reg[regno] =
+-			(((red & 0xe) >> 1) | ((red & 1) << 3) << 8) |
+-			(((green & 0xe) >> 1) | ((green & 1) << 3) << 4) |
+-			((blue & 0xe) >> 1) | ((blue & 1) << 3);
++			((((red & 0xe)   >> 1) | ((red & 1)   << 3)) << 8) |
++			((((green & 0xe) >> 1) | ((green & 1) << 3)) << 4) |
++			  ((blue & 0xe)  >> 1) | ((blue & 1)  << 3);
+ 	else
+ 		shifter_tt.color_reg[regno] =
+ 			((red & 0xe) << 7) |
+diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c
+index 355b6120dc4f0..1fc8de4ecbebf 100644
+--- a/drivers/video/fbdev/atmel_lcdfb.c
++++ b/drivers/video/fbdev/atmel_lcdfb.c
+@@ -1062,15 +1062,16 @@ static int __init atmel_lcdfb_probe(struct platform_device *pdev)
+ 
+ 	INIT_LIST_HEAD(&info->modelist);
+ 
+-	if (pdev->dev.of_node) {
+-		ret = atmel_lcdfb_of_init(sinfo);
+-		if (ret)
+-			goto free_info;
+-	} else {
++	if (!pdev->dev.of_node) {
+ 		dev_err(dev, "cannot get default configuration\n");
+ 		goto free_info;
+ 	}
+ 
++	ret = atmel_lcdfb_of_init(sinfo);
++	if (ret)
++		goto free_info;
++
++	ret = -ENODEV;
+ 	if (!sinfo->config)
+ 		goto free_info;
+ 
+diff --git a/drivers/video/fbdev/cirrusfb.c b/drivers/video/fbdev/cirrusfb.c
+index 93802abbbc72a..3d47c347b8970 100644
+--- a/drivers/video/fbdev/cirrusfb.c
++++ b/drivers/video/fbdev/cirrusfb.c
+@@ -469,7 +469,7 @@ static int cirrusfb_check_mclk(struct fb_info *info, long freq)
+ 	return 0;
+ }
+ 
+-static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
++static int cirrusfb_check_pixclock(struct fb_var_screeninfo *var,
+ 				   struct fb_info *info)
+ {
+ 	long freq;
+@@ -478,9 +478,7 @@ static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
+ 	unsigned maxclockidx = var->bits_per_pixel >> 3;
+ 
+ 	/* convert from ps to kHz */
+-	freq = PICOS2KHZ(var->pixclock);
+-
+-	dev_dbg(info->device, "desired pixclock: %ld kHz\n", freq);
++	freq = PICOS2KHZ(var->pixclock ? : 1);
+ 
+ 	maxclock = cirrusfb_board_info[cinfo->btype].maxclock[maxclockidx];
+ 	cinfo->multiplexing = 0;
+@@ -488,11 +486,13 @@ static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
+ 	/* If the frequency is greater than we can support, we might be able
+ 	 * to use multiplexing for the video mode */
+ 	if (freq > maxclock) {
+-		dev_err(info->device,
+-			"Frequency greater than maxclock (%ld kHz)\n",
+-			maxclock);
+-		return -EINVAL;
++		var->pixclock = KHZ2PICOS(maxclock);
++
++		while ((freq = PICOS2KHZ(var->pixclock)) > maxclock)
++			var->pixclock++;
+ 	}
++	dev_dbg(info->device, "desired pixclock: %ld kHz\n", freq);
++
+ 	/*
+ 	 * Additional constraint: 8bpp uses DAC clock doubling to allow maximum
+ 	 * pixel clock
+diff --git a/drivers/video/fbdev/controlfb.c b/drivers/video/fbdev/controlfb.c
+index 509311471d515..bd59e7b11ed53 100644
+--- a/drivers/video/fbdev/controlfb.c
++++ b/drivers/video/fbdev/controlfb.c
+@@ -67,7 +67,9 @@
+ #define out_8(addr, val)	(void)(val)
+ #define in_le32(addr)		0
+ #define out_le32(addr, val)	(void)(val)
++#ifndef pgprot_cached_wthru
+ #define pgprot_cached_wthru(prot) (prot)
++#endif
+ #else
+ static void invalid_vram_cache(void __force *addr)
+ {
+diff --git a/drivers/video/fbdev/core/fbcvt.c b/drivers/video/fbdev/core/fbcvt.c
+index 55d2bd0ce5c02..64843464c6613 100644
+--- a/drivers/video/fbdev/core/fbcvt.c
++++ b/drivers/video/fbdev/core/fbcvt.c
+@@ -214,9 +214,11 @@ static u32 fb_cvt_aspect_ratio(struct fb_cvt_data *cvt)
+ static void fb_cvt_print_name(struct fb_cvt_data *cvt)
+ {
+ 	u32 pixcount, pixcount_mod;
+-	int cnt = 255, offset = 0, read = 0;
+-	u8 *buf = kzalloc(256, GFP_KERNEL);
++	int size = 256;
++	int off = 0;
++	u8 *buf;
+ 
++	buf = kzalloc(size, GFP_KERNEL);
+ 	if (!buf)
+ 		return;
+ 
+@@ -224,43 +226,30 @@ static void fb_cvt_print_name(struct fb_cvt_data *cvt)
+ 	pixcount_mod = (cvt->xres * (cvt->yres/cvt->interlace)) % 1000000;
+ 	pixcount_mod /= 1000;
+ 
+-	read = snprintf(buf+offset, cnt, "fbcvt: %dx%d@%d: CVT Name - ",
+-			cvt->xres, cvt->yres, cvt->refresh);
+-	offset += read;
+-	cnt -= read;
++	off += scnprintf(buf + off, size - off, "fbcvt: %dx%d@%d: CVT Name - ",
++			    cvt->xres, cvt->yres, cvt->refresh);
+ 
+-	if (cvt->status)
+-		snprintf(buf+offset, cnt, "Not a CVT standard - %d.%03d Mega "
+-			 "Pixel Image\n", pixcount, pixcount_mod);
+-	else {
+-		if (pixcount) {
+-			read = snprintf(buf+offset, cnt, "%d", pixcount);
+-			cnt -= read;
+-			offset += read;
+-		}
++	if (cvt->status) {
++		off += scnprintf(buf + off, size - off,
++				 "Not a CVT standard - %d.%03d Mega Pixel Image\n",
++				 pixcount, pixcount_mod);
++	} else {
++		if (pixcount)
++			off += scnprintf(buf + off, size - off, "%d", pixcount);
+ 
+-		read = snprintf(buf+offset, cnt, ".%03dM", pixcount_mod);
+-		cnt -= read;
+-		offset += read;
++		off += scnprintf(buf + off, size - off, ".%03dM", pixcount_mod);
+ 
+ 		if (cvt->aspect_ratio == 0)
+-			read = snprintf(buf+offset, cnt, "3");
++			off += scnprintf(buf + off, size - off, "3");
+ 		else if (cvt->aspect_ratio == 3)
+-			read = snprintf(buf+offset, cnt, "4");
++			off += scnprintf(buf + off, size - off, "4");
+ 		else if (cvt->aspect_ratio == 1 || cvt->aspect_ratio == 4)
+-			read = snprintf(buf+offset, cnt, "9");
++			off += scnprintf(buf + off, size - off, "9");
+ 		else if (cvt->aspect_ratio == 2)
+-			read = snprintf(buf+offset, cnt, "A");
+-		else
+-			read = 0;
+-		cnt -= read;
+-		offset += read;
+-
+-		if (cvt->flags & FB_CVT_FLAG_REDUCED_BLANK) {
+-			read = snprintf(buf+offset, cnt, "-R");
+-			cnt -= read;
+-			offset += read;
+-		}
++			off += scnprintf(buf + off, size - off, "A");
++
++		if (cvt->flags & FB_CVT_FLAG_REDUCED_BLANK)
++			off += scnprintf(buf + off, size - off, "-R");
+ 	}
+ 
+ 	printk(KERN_INFO "%s\n", buf);
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 0fa7ede94fa61..b585339509b0e 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -25,6 +25,7 @@
+ #include <linux/init.h>
+ #include <linux/linux_logo.h>
+ #include <linux/proc_fs.h>
++#include <linux/platform_device.h>
+ #include <linux/seq_file.h>
+ #include <linux/console.h>
+ #include <linux/kmod.h>
+@@ -1557,18 +1558,36 @@ static void do_remove_conflicting_framebuffers(struct apertures_struct *a,
+ 	/* check all firmware fbs and kick off if the base addr overlaps */
+ 	for_each_registered_fb(i) {
+ 		struct apertures_struct *gen_aper;
++		struct device *device;
+ 
+ 		if (!(registered_fb[i]->flags & FBINFO_MISC_FIRMWARE))
+ 			continue;
+ 
+ 		gen_aper = registered_fb[i]->apertures;
++		device = registered_fb[i]->device;
+ 		if (fb_do_apertures_overlap(gen_aper, a) ||
+ 			(primary && gen_aper && gen_aper->count &&
+ 			 gen_aper->ranges[0].base == VGA_FB_PHYS)) {
+ 
+ 			printk(KERN_INFO "fb%d: switching to %s from %s\n",
+ 			       i, name, registered_fb[i]->fix.id);
+-			do_unregister_framebuffer(registered_fb[i]);
++
++			/*
++			 * If we kick-out a firmware driver, we also want to remove
++			 * the underlying platform device, such as simple-framebuffer,
++			 * VESA, EFI, etc. A native driver will then be able to
++			 * allocate the memory range.
++			 *
++			 * If it's not a platform device, at least print a warning. A
++			 * fix would add code to remove the device from the system.
++			 */
++			if (dev_is_platform(device)) {
++				registered_fb[i]->forced_out = true;
++				platform_device_unregister(to_platform_device(device));
++			} else {
++				pr_warn("fb%d: cannot remove device\n", i);
++				do_unregister_framebuffer(registered_fb[i]);
++			}
+ 		}
+ 	}
+ }
+@@ -1898,9 +1917,13 @@ EXPORT_SYMBOL(register_framebuffer);
+ void
+ unregister_framebuffer(struct fb_info *fb_info)
+ {
+-	mutex_lock(&registration_lock);
++	bool forced_out = fb_info->forced_out;
++
++	if (!forced_out)
++		mutex_lock(&registration_lock);
+ 	do_unregister_framebuffer(fb_info);
+-	mutex_unlock(&registration_lock);
++	if (!forced_out)
++		mutex_unlock(&registration_lock);
+ }
+ EXPORT_SYMBOL(unregister_framebuffer);
+ 
+diff --git a/drivers/video/fbdev/matrox/matroxfb_base.c b/drivers/video/fbdev/matrox/matroxfb_base.c
+index 5c82611e93d99..236521b19daf7 100644
+--- a/drivers/video/fbdev/matrox/matroxfb_base.c
++++ b/drivers/video/fbdev/matrox/matroxfb_base.c
+@@ -1377,7 +1377,7 @@ static struct video_board vbG200 = {
+ 	.lowlevel = &matrox_G100
+ };
+ static struct video_board vbG200eW = {
+-	.maxvram = 0x800000,
++	.maxvram = 0x100000,
+ 	.maxdisplayable = 0x800000,
+ 	.accelID = FB_ACCEL_MATROX_MGAG200,
+ 	.lowlevel = &matrox_G100
+diff --git a/drivers/video/fbdev/nvidia/nv_i2c.c b/drivers/video/fbdev/nvidia/nv_i2c.c
+index d7994a1732459..0b48965a6420c 100644
+--- a/drivers/video/fbdev/nvidia/nv_i2c.c
++++ b/drivers/video/fbdev/nvidia/nv_i2c.c
+@@ -86,7 +86,7 @@ static int nvidia_setup_i2c_bus(struct nvidia_i2c_chan *chan, const char *name,
+ {
+ 	int rc;
+ 
+-	strcpy(chan->adapter.name, name);
++	strscpy(chan->adapter.name, name, sizeof(chan->adapter.name));
+ 	chan->adapter.owner = THIS_MODULE;
+ 	chan->adapter.class = i2c_class;
+ 	chan->adapter.algo_data = &chan->algo;
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c b/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
+index 2fa436475b406..c8ad3ef42bd31 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
+@@ -246,6 +246,7 @@ static int dvic_probe_of(struct platform_device *pdev)
+ 	adapter_node = of_parse_phandle(node, "ddc-i2c-bus", 0);
+ 	if (adapter_node) {
+ 		adapter = of_get_i2c_adapter_by_node(adapter_node);
++		of_node_put(adapter_node);
+ 		if (adapter == NULL) {
+ 			dev_err(&pdev->dev, "failed to parse ddc-i2c-bus\n");
+ 			omap_dss_put_device(ddata->in);
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
+index 4b0793abdd84b..a2c7c5cb15234 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
+@@ -409,7 +409,7 @@ static ssize_t dsicm_num_errors_show(struct device *dev,
+ 	if (r)
+ 		return r;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", errors);
++	return sysfs_emit(buf, "%d\n", errors);
+ }
+ 
+ static ssize_t dsicm_hw_revision_show(struct device *dev,
+@@ -439,7 +439,7 @@ static ssize_t dsicm_hw_revision_show(struct device *dev,
+ 	if (r)
+ 		return r;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%02x.%02x.%02x\n", id1, id2, id3);
++	return sysfs_emit(buf, "%02x.%02x.%02x\n", id1, id2, id3);
+ }
+ 
+ static ssize_t dsicm_store_ulps(struct device *dev,
+@@ -487,7 +487,7 @@ static ssize_t dsicm_show_ulps(struct device *dev,
+ 	t = ddata->ulps_enabled;
+ 	mutex_unlock(&ddata->lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%u\n", t);
++	return sysfs_emit(buf, "%u\n", t);
+ }
+ 
+ static ssize_t dsicm_store_ulps_timeout(struct device *dev,
+@@ -532,7 +532,7 @@ static ssize_t dsicm_show_ulps_timeout(struct device *dev,
+ 	t = ddata->ulps_timeout;
+ 	mutex_unlock(&ddata->lock);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%u\n", t);
++	return sysfs_emit(buf, "%u\n", t);
+ }
+ 
+ static DEVICE_ATTR(num_dsi_errors, S_IRUGO, dsicm_num_errors_show, NULL);
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
+index 8d8b5ff7d43c8..3696eb09b69b4 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
+@@ -476,7 +476,7 @@ static ssize_t show_cabc_available_modes(struct device *dev,
+ 	int i;
+ 
+ 	if (!ddata->has_cabc)
+-		return snprintf(buf, PAGE_SIZE, "%s\n", cabc_modes[0]);
++		return sysfs_emit(buf, "%s\n", cabc_modes[0]);
+ 
+ 	for (i = 0, len = 0;
+ 	     len < PAGE_SIZE && i < ARRAY_SIZE(cabc_modes); i++)
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
+index afac1d9445aa2..57b7d1f490962 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
+@@ -169,7 +169,7 @@ static ssize_t tpo_td043_vmirror_show(struct device *dev,
+ {
+ 	struct panel_drv_data *ddata = dev_get_drvdata(dev);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", ddata->vmirror);
++	return sysfs_emit(buf, "%d\n", ddata->vmirror);
+ }
+ 
+ static ssize_t tpo_td043_vmirror_store(struct device *dev,
+@@ -199,7 +199,7 @@ static ssize_t tpo_td043_mode_show(struct device *dev,
+ {
+ 	struct panel_drv_data *ddata = dev_get_drvdata(dev);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", ddata->mode);
++	return sysfs_emit(buf, "%d\n", ddata->mode);
+ }
+ 
+ static ssize_t tpo_td043_mode_store(struct device *dev,
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 0dbc6bf8268ac..092a1caa1208e 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -1047,7 +1047,7 @@ static ssize_t smtcfb_read(struct fb_info *info, char __user *buf,
+ 	if (count + p > total_size)
+ 		count = total_size - p;
+ 
+-	buffer = kmalloc((count > PAGE_SIZE) ? PAGE_SIZE : count, GFP_KERNEL);
++	buffer = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+@@ -1059,25 +1059,14 @@ static ssize_t smtcfb_read(struct fb_info *info, char __user *buf,
+ 	while (count) {
+ 		c = (count > PAGE_SIZE) ? PAGE_SIZE : count;
+ 		dst = buffer;
+-		for (i = c >> 2; i--;) {
+-			*dst = fb_readl(src++);
+-			*dst = big_swap(*dst);
++		for (i = (c + 3) >> 2; i--;) {
++			u32 val;
++
++			val = fb_readl(src);
++			*dst = big_swap(val);
++			src++;
+ 			dst++;
+ 		}
+-		if (c & 3) {
+-			u8 *dst8 = (u8 *)dst;
+-			u8 __iomem *src8 = (u8 __iomem *)src;
+-
+-			for (i = c & 3; i--;) {
+-				if (i & 1) {
+-					*dst8++ = fb_readb(++src8);
+-				} else {
+-					*dst8++ = fb_readb(--src8);
+-					src8 += 2;
+-				}
+-			}
+-			src = (u32 __iomem *)src8;
+-		}
+ 
+ 		if (copy_to_user(buf, buffer, c)) {
+ 			err = -EFAULT;
+@@ -1130,7 +1119,7 @@ static ssize_t smtcfb_write(struct fb_info *info, const char __user *buf,
+ 		count = total_size - p;
+ 	}
+ 
+-	buffer = kmalloc((count > PAGE_SIZE) ? PAGE_SIZE : count, GFP_KERNEL);
++	buffer = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+@@ -1148,24 +1137,11 @@ static ssize_t smtcfb_write(struct fb_info *info, const char __user *buf,
+ 			break;
+ 		}
+ 
+-		for (i = c >> 2; i--;) {
+-			fb_writel(big_swap(*src), dst++);
++		for (i = (c + 3) >> 2; i--;) {
++			fb_writel(big_swap(*src), dst);
++			dst++;
+ 			src++;
+ 		}
+-		if (c & 3) {
+-			u8 *src8 = (u8 *)src;
+-			u8 __iomem *dst8 = (u8 __iomem *)dst;
+-
+-			for (i = c & 3; i--;) {
+-				if (i & 1) {
+-					fb_writeb(*src8++, ++dst8);
+-				} else {
+-					fb_writeb(*src8++, --dst8);
+-					dst8 += 2;
+-				}
+-			}
+-			dst = (u32 __iomem *)dst8;
+-		}
+ 
+ 		*ppos += c;
+ 		buf += c;
+diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
+index bfac3ee4a6422..28768c272b73d 100644
+--- a/drivers/video/fbdev/smscufx.c
++++ b/drivers/video/fbdev/smscufx.c
+@@ -1656,6 +1656,7 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 	info->par = dev;
+ 	info->pseudo_palette = dev->pseudo_palette;
+ 	info->fbops = &ufx_ops;
++	INIT_LIST_HEAD(&info->modelist);
+ 
+ 	retval = fb_alloc_cmap(&info->cmap, 256, 0);
+ 	if (retval < 0) {
+@@ -1666,8 +1667,6 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ 	INIT_DELAYED_WORK(&dev->free_framebuffer_work,
+ 			  ufx_free_framebuffer_work);
+ 
+-	INIT_LIST_HEAD(&info->modelist);
+-
+ 	retval = ufx_reg_read(dev, 0x3000, &id_rev);
+ 	check_warn_goto_error(retval, "error %d reading 0x3000 register from device", retval);
+ 	dev_dbg(dev->gdev, "ID_REV register value 0x%08x", id_rev);
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index b9cdd02c10009..90f48b71fd8f7 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -1426,7 +1426,7 @@ static ssize_t metrics_bytes_rendered_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->bytes_rendered));
+ }
+ 
+@@ -1434,7 +1434,7 @@ static ssize_t metrics_bytes_identical_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->bytes_identical));
+ }
+ 
+@@ -1442,7 +1442,7 @@ static ssize_t metrics_bytes_sent_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->bytes_sent));
+ }
+ 
+@@ -1450,7 +1450,7 @@ static ssize_t metrics_cpu_kcycles_used_show(struct device *fbdev,
+ 				   struct device_attribute *a, char *buf) {
+ 	struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ 	struct dlfb_data *dlfb = fb_info->par;
+-	return snprintf(buf, PAGE_SIZE, "%u\n",
++	return sysfs_emit(buf, "%u\n",
+ 			atomic_read(&dlfb->cpu_kcycles_used));
+ }
+ 
+diff --git a/drivers/video/fbdev/w100fb.c b/drivers/video/fbdev/w100fb.c
+index d96ab28f8ce4a..4e641a780726e 100644
+--- a/drivers/video/fbdev/w100fb.c
++++ b/drivers/video/fbdev/w100fb.c
+@@ -770,12 +770,18 @@ out:
+ 		fb_dealloc_cmap(&info->cmap);
+ 		kfree(info->pseudo_palette);
+ 	}
+-	if (remapped_fbuf != NULL)
++	if (remapped_fbuf != NULL) {
+ 		iounmap(remapped_fbuf);
+-	if (remapped_regs != NULL)
++		remapped_fbuf = NULL;
++	}
++	if (remapped_regs != NULL) {
+ 		iounmap(remapped_regs);
+-	if (remapped_base != NULL)
++		remapped_regs = NULL;
++	}
++	if (remapped_base != NULL) {
+ 		iounmap(remapped_base);
++		remapped_base = NULL;
++	}
+ 	if (info)
+ 		framebuffer_release(info);
+ 	return err;
+@@ -795,8 +801,11 @@ static int w100fb_remove(struct platform_device *pdev)
+ 	fb_dealloc_cmap(&info->cmap);
+ 
+ 	iounmap(remapped_base);
++	remapped_base = NULL;
+ 	iounmap(remapped_regs);
++	remapped_regs = NULL;
+ 	iounmap(remapped_fbuf);
++	remapped_fbuf = NULL;
+ 
+ 	framebuffer_release(info);
+ 
+diff --git a/drivers/virt/acrn/hsm.c b/drivers/virt/acrn/hsm.c
+index 5419794fccf1e..423ea888d79af 100644
+--- a/drivers/virt/acrn/hsm.c
++++ b/drivers/virt/acrn/hsm.c
+@@ -136,8 +136,10 @@ static long acrn_dev_ioctl(struct file *filp, unsigned int cmd,
+ 		if (IS_ERR(vm_param))
+ 			return PTR_ERR(vm_param);
+ 
+-		if ((vm_param->reserved0 | vm_param->reserved1) != 0)
++		if ((vm_param->reserved0 | vm_param->reserved1) != 0) {
++			kfree(vm_param);
+ 			return -EINVAL;
++		}
+ 
+ 		vm = acrn_vm_create(vm, vm_param);
+ 		if (!vm) {
+@@ -182,21 +184,29 @@ static long acrn_dev_ioctl(struct file *filp, unsigned int cmd,
+ 			return PTR_ERR(cpu_regs);
+ 
+ 		for (i = 0; i < ARRAY_SIZE(cpu_regs->reserved); i++)
+-			if (cpu_regs->reserved[i])
++			if (cpu_regs->reserved[i]) {
++				kfree(cpu_regs);
+ 				return -EINVAL;
++			}
+ 
+ 		for (i = 0; i < ARRAY_SIZE(cpu_regs->vcpu_regs.reserved_32); i++)
+-			if (cpu_regs->vcpu_regs.reserved_32[i])
++			if (cpu_regs->vcpu_regs.reserved_32[i]) {
++				kfree(cpu_regs);
+ 				return -EINVAL;
++			}
+ 
+ 		for (i = 0; i < ARRAY_SIZE(cpu_regs->vcpu_regs.reserved_64); i++)
+-			if (cpu_regs->vcpu_regs.reserved_64[i])
++			if (cpu_regs->vcpu_regs.reserved_64[i]) {
++				kfree(cpu_regs);
+ 				return -EINVAL;
++			}
+ 
+ 		for (i = 0; i < ARRAY_SIZE(cpu_regs->vcpu_regs.gdt.reserved); i++)
+ 			if (cpu_regs->vcpu_regs.gdt.reserved[i] |
+-			    cpu_regs->vcpu_regs.idt.reserved[i])
++			    cpu_regs->vcpu_regs.idt.reserved[i]) {
++				kfree(cpu_regs);
+ 				return -EINVAL;
++			}
+ 
+ 		ret = hcall_set_vcpu_regs(vm->vmid, virt_to_phys(cpu_regs));
+ 		if (ret < 0)
+diff --git a/drivers/virt/acrn/mm.c b/drivers/virt/acrn/mm.c
+index c4f2e15c8a2ba..3b1b1e7a844b4 100644
+--- a/drivers/virt/acrn/mm.c
++++ b/drivers/virt/acrn/mm.c
+@@ -162,10 +162,34 @@ int acrn_vm_ram_map(struct acrn_vm *vm, struct acrn_vm_memmap *memmap)
+ 	void *remap_vaddr;
+ 	int ret, pinned;
+ 	u64 user_vm_pa;
++	unsigned long pfn;
++	struct vm_area_struct *vma;
+ 
+ 	if (!vm || !memmap)
+ 		return -EINVAL;
+ 
++	mmap_read_lock(current->mm);
++	vma = vma_lookup(current->mm, memmap->vma_base);
++	if (vma && ((vma->vm_flags & VM_PFNMAP) != 0)) {
++		if ((memmap->vma_base + memmap->len) > vma->vm_end) {
++			mmap_read_unlock(current->mm);
++			return -EINVAL;
++		}
++
++		ret = follow_pfn(vma, memmap->vma_base, &pfn);
++		mmap_read_unlock(current->mm);
++		if (ret < 0) {
++			dev_dbg(acrn_dev.this_device,
++				"Failed to lookup PFN at VMA:%pK.\n", (void *)memmap->vma_base);
++			return ret;
++		}
++
++		return acrn_mm_region_add(vm, memmap->user_vm_pa,
++			 PFN_PHYS(pfn), memmap->len,
++			 ACRN_MEM_TYPE_WB, memmap->attr);
++	}
++	mmap_read_unlock(current->mm);
++
+ 	/* Get the page number of the map region */
+ 	nr_pages = memmap->len >> PAGE_SHIFT;
+ 	pages = vzalloc(nr_pages * sizeof(struct page *));
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index c2b733ef95b0d..c7dc7a9a6b06a 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -504,8 +504,9 @@ int virtio_device_restore(struct virtio_device *dev)
+ 			goto err;
+ 	}
+ 
+-	/* Finally, tell the device we're all set */
+-	virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
++	/* If restore didn't do it, mark device DRIVER_OK ourselves. */
++	if (!(dev->config->get_status(dev) & VIRTIO_CONFIG_S_DRIVER_OK))
++		virtio_device_ready(dev);
+ 
+ 	virtio_config_enable(dev);
+ 
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index fdbde1db5ec59..d724f676608ba 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -24,46 +24,17 @@ MODULE_PARM_DESC(force_legacy,
+ 		 "Force legacy mode for transitional virtio 1 devices");
+ #endif
+ 
+-/* disable irq handlers */
+-void vp_disable_cbs(struct virtio_device *vdev)
++/* wait for pending irq handlers */
++void vp_synchronize_vectors(struct virtio_device *vdev)
+ {
+ 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+ 	int i;
+ 
+-	if (vp_dev->intx_enabled) {
+-		/*
+-		 * The below synchronize() guarantees that any
+-		 * interrupt for this line arriving after
+-		 * synchronize_irq() has completed is guaranteed to see
+-		 * intx_soft_enabled == false.
+-		 */
+-		WRITE_ONCE(vp_dev->intx_soft_enabled, false);
++	if (vp_dev->intx_enabled)
+ 		synchronize_irq(vp_dev->pci_dev->irq);
+-	}
+-
+-	for (i = 0; i < vp_dev->msix_vectors; ++i)
+-		disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
+-}
+-
+-/* enable irq handlers */
+-void vp_enable_cbs(struct virtio_device *vdev)
+-{
+-	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+-	int i;
+-
+-	if (vp_dev->intx_enabled) {
+-		disable_irq(vp_dev->pci_dev->irq);
+-		/*
+-		 * The above disable_irq() provides TSO ordering and
+-		 * as such promotes the below store to store-release.
+-		 */
+-		WRITE_ONCE(vp_dev->intx_soft_enabled, true);
+-		enable_irq(vp_dev->pci_dev->irq);
+-		return;
+-	}
+ 
+ 	for (i = 0; i < vp_dev->msix_vectors; ++i)
+-		enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
++		synchronize_irq(pci_irq_vector(vp_dev->pci_dev, i));
+ }
+ 
+ /* the notify function used when creating a virt queue */
+@@ -113,9 +84,6 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
+ 	struct virtio_pci_device *vp_dev = opaque;
+ 	u8 isr;
+ 
+-	if (!READ_ONCE(vp_dev->intx_soft_enabled))
+-		return IRQ_NONE;
+-
+ 	/* reading the ISR has the effect of also clearing it so it's very
+ 	 * important to save off the value. */
+ 	isr = ioread8(vp_dev->isr);
+@@ -173,8 +141,7 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
+ 	snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
+ 		 "%s-config", name);
+ 	err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
+-			  vp_config_changed, IRQF_NO_AUTOEN,
+-			  vp_dev->msix_names[v],
++			  vp_config_changed, 0, vp_dev->msix_names[v],
+ 			  vp_dev);
+ 	if (err)
+ 		goto error;
+@@ -193,8 +160,7 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
+ 		snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
+ 			 "%s-virtqueues", name);
+ 		err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
+-				  vp_vring_interrupt, IRQF_NO_AUTOEN,
+-				  vp_dev->msix_names[v],
++				  vp_vring_interrupt, 0, vp_dev->msix_names[v],
+ 				  vp_dev);
+ 		if (err)
+ 			goto error;
+@@ -371,7 +337,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs,
+ 			 "%s-%s",
+ 			 dev_name(&vp_dev->vdev.dev), names[i]);
+ 		err = request_irq(pci_irq_vector(vp_dev->pci_dev, msix_vec),
+-				  vring_interrupt, IRQF_NO_AUTOEN,
++				  vring_interrupt, 0,
+ 				  vp_dev->msix_names[msix_vec],
+ 				  vqs[i]);
+ 		if (err)
+diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
+index 23f6c5c678d5e..eb17a29fc7ef1 100644
+--- a/drivers/virtio/virtio_pci_common.h
++++ b/drivers/virtio/virtio_pci_common.h
+@@ -63,7 +63,6 @@ struct virtio_pci_device {
+ 	/* MSI-X support */
+ 	int msix_enabled;
+ 	int intx_enabled;
+-	bool intx_soft_enabled;
+ 	cpumask_var_t *msix_affinity_masks;
+ 	/* Name strings for interrupts. This size should be enough,
+ 	 * and I'm too lazy to allocate each name separately. */
+@@ -102,10 +101,8 @@ static struct virtio_pci_device *to_vp_device(struct virtio_device *vdev)
+ 	return container_of(vdev, struct virtio_pci_device, vdev);
+ }
+ 
+-/* disable irq handlers */
+-void vp_disable_cbs(struct virtio_device *vdev);
+-/* enable irq handlers */
+-void vp_enable_cbs(struct virtio_device *vdev);
++/* wait for pending irq handlers */
++void vp_synchronize_vectors(struct virtio_device *vdev);
+ /* the notify function used when creating a virt queue */
+ bool vp_notify(struct virtqueue *vq);
+ /* the config->del_vqs() implementation */
+diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
+index b3f8128b7983b..82eb437ad9205 100644
+--- a/drivers/virtio/virtio_pci_legacy.c
++++ b/drivers/virtio/virtio_pci_legacy.c
+@@ -98,8 +98,8 @@ static void vp_reset(struct virtio_device *vdev)
+ 	/* Flush out the status write, and flush in device writes,
+ 	 * including MSi-X interrupts, if any. */
+ 	vp_legacy_get_status(&vp_dev->ldev);
+-	/* Disable VQ/configuration callbacks. */
+-	vp_disable_cbs(vdev);
++	/* Flush pending VQ/configuration callbacks. */
++	vp_synchronize_vectors(vdev);
+ }
+ 
+ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
+@@ -185,7 +185,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
+ }
+ 
+ static const struct virtio_config_ops virtio_pci_config_ops = {
+-	.enable_cbs	= vp_enable_cbs,
+ 	.get		= vp_get,
+ 	.set		= vp_set,
+ 	.get_status	= vp_get_status,
+diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
+index 5455bc041fb69..30654d3a0b41e 100644
+--- a/drivers/virtio/virtio_pci_modern.c
++++ b/drivers/virtio/virtio_pci_modern.c
+@@ -172,8 +172,8 @@ static void vp_reset(struct virtio_device *vdev)
+ 	 */
+ 	while (vp_modern_get_status(mdev))
+ 		msleep(1);
+-	/* Disable VQ/configuration callbacks. */
+-	vp_disable_cbs(vdev);
++	/* Flush pending VQ/configuration callbacks. */
++	vp_synchronize_vectors(vdev);
+ }
+ 
+ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
+@@ -380,7 +380,6 @@ static bool vp_get_shm_region(struct virtio_device *vdev,
+ }
+ 
+ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
+-	.enable_cbs	= vp_enable_cbs,
+ 	.get		= NULL,
+ 	.set		= NULL,
+ 	.generation	= vp_generation,
+@@ -398,7 +397,6 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
+ };
+ 
+ static const struct virtio_config_ops virtio_pci_config_ops = {
+-	.enable_cbs	= vp_enable_cbs,
+ 	.get		= vp_get,
+ 	.set		= vp_set,
+ 	.generation	= vp_generation,
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 117bc2a8eb0a4..db843f8258602 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -228,6 +228,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret) {
+ 		pm_runtime_put_noidle(dev);
++		pm_runtime_disable(&pdev->dev);
+ 		return dev_err_probe(dev, ret, "runtime pm failed\n");
+ 	}
+ 
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index d19762dc90fed..caf58b185a8b9 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -170,8 +170,8 @@ static int padzero(unsigned long elf_bss)
+ 
+ static int
+ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+-		unsigned long load_addr, unsigned long interp_load_addr,
+-		unsigned long e_entry)
++		unsigned long interp_load_addr,
++		unsigned long e_entry, unsigned long phdr_addr)
+ {
+ 	struct mm_struct *mm = current->mm;
+ 	unsigned long p = bprm->p;
+@@ -257,7 +257,7 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+ 	NEW_AUX_ENT(AT_HWCAP, ELF_HWCAP);
+ 	NEW_AUX_ENT(AT_PAGESZ, ELF_EXEC_PAGESIZE);
+ 	NEW_AUX_ENT(AT_CLKTCK, CLOCKS_PER_SEC);
+-	NEW_AUX_ENT(AT_PHDR, load_addr + exec->e_phoff);
++	NEW_AUX_ENT(AT_PHDR, phdr_addr);
+ 	NEW_AUX_ENT(AT_PHENT, sizeof(struct elf_phdr));
+ 	NEW_AUX_ENT(AT_PHNUM, exec->e_phnum);
+ 	NEW_AUX_ENT(AT_BASE, interp_load_addr);
+@@ -823,7 +823,7 @@ static int parse_elf_properties(struct file *f, const struct elf_phdr *phdr,
+ static int load_elf_binary(struct linux_binprm *bprm)
+ {
+ 	struct file *interpreter = NULL; /* to shut gcc up */
+- 	unsigned long load_addr = 0, load_bias = 0;
++	unsigned long load_addr, load_bias = 0, phdr_addr = 0;
+ 	int load_addr_set = 0;
+ 	unsigned long error;
+ 	struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL;
+@@ -1180,6 +1180,17 @@ out_free_interp:
+ 				reloc_func_desc = load_bias;
+ 			}
+ 		}
++
++		/*
++		 * Figure out which segment in the file contains the Program
++		 * Header table, and map to the associated memory address.
++		 */
++		if (elf_ppnt->p_offset <= elf_ex->e_phoff &&
++		    elf_ex->e_phoff < elf_ppnt->p_offset + elf_ppnt->p_filesz) {
++			phdr_addr = elf_ex->e_phoff - elf_ppnt->p_offset +
++				    elf_ppnt->p_vaddr;
++		}
++
+ 		k = elf_ppnt->p_vaddr;
+ 		if ((elf_ppnt->p_flags & PF_X) && k < start_code)
+ 			start_code = k;
+@@ -1215,6 +1226,7 @@ out_free_interp:
+ 	}
+ 
+ 	e_entry = elf_ex->e_entry + load_bias;
++	phdr_addr += load_bias;
+ 	elf_bss += load_bias;
+ 	elf_brk += load_bias;
+ 	start_code += load_bias;
+@@ -1278,8 +1290,8 @@ out_free_interp:
+ 		goto out;
+ #endif /* ARCH_HAS_SETUP_ADDITIONAL_PAGES */
+ 
+-	retval = create_elf_tables(bprm, elf_ex,
+-			  load_addr, interp_load_addr, e_entry);
++	retval = create_elf_tables(bprm, elf_ex, interp_load_addr,
++				   e_entry, phdr_addr);
+ 	if (retval < 0)
+ 		goto out;
+ 
+@@ -1630,17 +1642,16 @@ static void fill_siginfo_note(struct memelfnote *note, user_siginfo_t *csigdata,
+  *   long file_ofs
+  * followed by COUNT filenames in ASCII: "FILE1" NUL "FILE2" NUL...
+  */
+-static int fill_files_note(struct memelfnote *note)
++static int fill_files_note(struct memelfnote *note, struct coredump_params *cprm)
+ {
+-	struct mm_struct *mm = current->mm;
+-	struct vm_area_struct *vma;
+ 	unsigned count, size, names_ofs, remaining, n;
+ 	user_long_t *data;
+ 	user_long_t *start_end_ofs;
+ 	char *name_base, *name_curpos;
++	int i;
+ 
+ 	/* *Estimated* file count and total data size needed */
+-	count = mm->map_count;
++	count = cprm->vma_count;
+ 	if (count > UINT_MAX / 64)
+ 		return -EINVAL;
+ 	size = count * 64;
+@@ -1662,11 +1673,12 @@ static int fill_files_note(struct memelfnote *note)
+ 	name_base = name_curpos = ((char *)data) + names_ofs;
+ 	remaining = size - names_ofs;
+ 	count = 0;
+-	for (vma = mm->mmap; vma != NULL; vma = vma->vm_next) {
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *m = &cprm->vma_meta[i];
+ 		struct file *file;
+ 		const char *filename;
+ 
+-		file = vma->vm_file;
++		file = m->file;
+ 		if (!file)
+ 			continue;
+ 		filename = file_path(file, name_curpos, remaining);
+@@ -1686,9 +1698,9 @@ static int fill_files_note(struct memelfnote *note)
+ 		memmove(name_curpos, filename, n);
+ 		name_curpos += n;
+ 
+-		*start_end_ofs++ = vma->vm_start;
+-		*start_end_ofs++ = vma->vm_end;
+-		*start_end_ofs++ = vma->vm_pgoff;
++		*start_end_ofs++ = m->start;
++		*start_end_ofs++ = m->end;
++		*start_end_ofs++ = m->pgoff;
+ 		count++;
+ 	}
+ 
+@@ -1699,7 +1711,7 @@ static int fill_files_note(struct memelfnote *note)
+ 	 * Count usually is less than mm->map_count,
+ 	 * we need to move filenames down.
+ 	 */
+-	n = mm->map_count - count;
++	n = cprm->vma_count - count;
+ 	if (n != 0) {
+ 		unsigned shift_bytes = n * 3 * sizeof(data[0]);
+ 		memmove(name_base - shift_bytes, name_base,
+@@ -1811,7 +1823,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
+ 
+ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 			  struct elf_note_info *info,
+-			  const kernel_siginfo_t *siginfo, struct pt_regs *regs)
++			  struct coredump_params *cprm)
+ {
+ 	struct task_struct *dump_task = current;
+ 	const struct user_regset_view *view = task_user_regset_view(dump_task);
+@@ -1883,7 +1895,7 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	 * Now fill in each thread's information.
+ 	 */
+ 	for (t = info->thread; t != NULL; t = t->next)
+-		if (!fill_thread_core_info(t, view, siginfo->si_signo, &info->size))
++		if (!fill_thread_core_info(t, view, cprm->siginfo->si_signo, &info->size))
+ 			return 0;
+ 
+ 	/*
+@@ -1892,13 +1904,13 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	fill_psinfo(psinfo, dump_task->group_leader, dump_task->mm);
+ 	info->size += notesize(&info->psinfo);
+ 
+-	fill_siginfo_note(&info->signote, &info->csigdata, siginfo);
++	fill_siginfo_note(&info->signote, &info->csigdata, cprm->siginfo);
+ 	info->size += notesize(&info->signote);
+ 
+ 	fill_auxv_note(&info->auxv, current->mm);
+ 	info->size += notesize(&info->auxv);
+ 
+-	if (fill_files_note(&info->files) == 0)
++	if (fill_files_note(&info->files, cprm) == 0)
+ 		info->size += notesize(&info->files);
+ 
+ 	return 1;
+@@ -2040,7 +2052,7 @@ static int elf_note_info_init(struct elf_note_info *info)
+ 
+ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 			  struct elf_note_info *info,
+-			  const kernel_siginfo_t *siginfo, struct pt_regs *regs)
++			  struct coredump_params *cprm)
+ {
+ 	struct core_thread *ct;
+ 	struct elf_thread_status *ets;
+@@ -2061,13 +2073,13 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	list_for_each_entry(ets, &info->thread_list, list) {
+ 		int sz;
+ 
+-		sz = elf_dump_thread_status(siginfo->si_signo, ets);
++		sz = elf_dump_thread_status(cprm->siginfo->si_signo, ets);
+ 		info->thread_status_size += sz;
+ 	}
+ 	/* now collect the dump for the current */
+ 	memset(info->prstatus, 0, sizeof(*info->prstatus));
+-	fill_prstatus(&info->prstatus->common, current, siginfo->si_signo);
+-	elf_core_copy_regs(&info->prstatus->pr_reg, regs);
++	fill_prstatus(&info->prstatus->common, current, cprm->siginfo->si_signo);
++	elf_core_copy_regs(&info->prstatus->pr_reg, cprm->regs);
+ 
+ 	/* Set up header */
+ 	fill_elf_header(elf, phdrs, ELF_ARCH, ELF_CORE_EFLAGS);
+@@ -2083,18 +2095,18 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ 	fill_note(info->notes + 1, "CORE", NT_PRPSINFO,
+ 		  sizeof(*info->psinfo), info->psinfo);
+ 
+-	fill_siginfo_note(info->notes + 2, &info->csigdata, siginfo);
++	fill_siginfo_note(info->notes + 2, &info->csigdata, cprm->siginfo);
+ 	fill_auxv_note(info->notes + 3, current->mm);
+ 	info->numnote = 4;
+ 
+-	if (fill_files_note(info->notes + info->numnote) == 0) {
++	if (fill_files_note(info->notes + info->numnote, cprm) == 0) {
+ 		info->notes_files = info->notes + info->numnote;
+ 		info->numnote++;
+ 	}
+ 
+ 	/* Try to dump the FPU. */
+-	info->prstatus->pr_fpvalid = elf_core_copy_task_fpregs(current, regs,
+-							       info->fpu);
++	info->prstatus->pr_fpvalid =
++		elf_core_copy_task_fpregs(current, cprm->regs, info->fpu);
+ 	if (info->prstatus->pr_fpvalid)
+ 		fill_note(info->notes + info->numnote++,
+ 			  "CORE", NT_PRFPREG, sizeof(*info->fpu), info->fpu);
+@@ -2180,8 +2192,7 @@ static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
+ static int elf_core_dump(struct coredump_params *cprm)
+ {
+ 	int has_dumped = 0;
+-	int vma_count, segs, i;
+-	size_t vma_data_size;
++	int segs, i;
+ 	struct elfhdr elf;
+ 	loff_t offset = 0, dataoff;
+ 	struct elf_note_info info = { };
+@@ -2189,16 +2200,12 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 	struct elf_shdr *shdr4extnum = NULL;
+ 	Elf_Half e_phnum;
+ 	elf_addr_t e_shoff;
+-	struct core_vma_metadata *vma_meta;
+-
+-	if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, &vma_data_size))
+-		return 0;
+ 
+ 	/*
+ 	 * The number of segs are recored into ELF header as 16bit value.
+ 	 * Please check DEFAULT_MAX_MAP_COUNT definition when you modify here.
+ 	 */
+-	segs = vma_count + elf_core_extra_phdrs();
++	segs = cprm->vma_count + elf_core_extra_phdrs();
+ 
+ 	/* for notes section */
+ 	segs++;
+@@ -2212,7 +2219,7 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 	 * Collect all the non-memory information about the process for the
+ 	 * notes.  This also sets up the file header.
+ 	 */
+-	if (!fill_note_info(&elf, e_phnum, &info, cprm->siginfo, cprm->regs))
++	if (!fill_note_info(&elf, e_phnum, &info, cprm))
+ 		goto end_coredump;
+ 
+ 	has_dumped = 1;
+@@ -2237,7 +2244,7 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 
+ 	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
+ 
+-	offset += vma_data_size;
++	offset += cprm->vma_data_size;
+ 	offset += elf_core_extra_data_size();
+ 	e_shoff = offset;
+ 
+@@ -2257,8 +2264,8 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 		goto end_coredump;
+ 
+ 	/* Write program headers for segments dump */
+-	for (i = 0; i < vma_count; i++) {
+-		struct core_vma_metadata *meta = vma_meta + i;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *meta = cprm->vma_meta + i;
+ 		struct elf_phdr phdr;
+ 
+ 		phdr.p_type = PT_LOAD;
+@@ -2295,8 +2302,8 @@ static int elf_core_dump(struct coredump_params *cprm)
+ 	/* Align to page */
+ 	dump_skip_to(cprm, dataoff);
+ 
+-	for (i = 0; i < vma_count; i++) {
+-		struct core_vma_metadata *meta = vma_meta + i;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *meta = cprm->vma_meta + i;
+ 
+ 		if (!dump_user_range(cprm, meta->start, meta->dump_size))
+ 			goto end_coredump;
+@@ -2313,7 +2320,6 @@ static int elf_core_dump(struct coredump_params *cprm)
+ end_coredump:
+ 	free_note_info(&info);
+ 	kfree(shdr4extnum);
+-	kvfree(vma_meta);
+ 	kfree(phdr4note);
+ 	return has_dumped;
+ }
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index c6f588dc4a9db..1a25536b01201 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -1465,7 +1465,7 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm,
+ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ {
+ 	int has_dumped = 0;
+-	int vma_count, segs;
++	int segs;
+ 	int i;
+ 	struct elfhdr *elf = NULL;
+ 	loff_t offset = 0, dataoff;
+@@ -1480,8 +1480,6 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	elf_addr_t e_shoff;
+ 	struct core_thread *ct;
+ 	struct elf_thread_status *tmp;
+-	struct core_vma_metadata *vma_meta = NULL;
+-	size_t vma_data_size;
+ 
+ 	/* alloc memory for large data structures: too large to be on stack */
+ 	elf = kmalloc(sizeof(*elf), GFP_KERNEL);
+@@ -1491,9 +1489,6 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	if (!psinfo)
+ 		goto end_coredump;
+ 
+-	if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, &vma_data_size))
+-		goto end_coredump;
+-
+ 	for (ct = current->signal->core_state->dumper.next;
+ 					ct; ct = ct->next) {
+ 		tmp = elf_dump_thread_status(cprm->siginfo->si_signo,
+@@ -1513,7 +1508,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	tmp->next = thread_list;
+ 	thread_list = tmp;
+ 
+-	segs = vma_count + elf_core_extra_phdrs();
++	segs = cprm->vma_count + elf_core_extra_phdrs();
+ 
+ 	/* for notes section */
+ 	segs++;
+@@ -1558,7 +1553,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 	/* Page-align dumped data */
+ 	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
+ 
+-	offset += vma_data_size;
++	offset += cprm->vma_data_size;
+ 	offset += elf_core_extra_data_size();
+ 	e_shoff = offset;
+ 
+@@ -1578,8 +1573,8 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 		goto end_coredump;
+ 
+ 	/* write program headers for segments dump */
+-	for (i = 0; i < vma_count; i++) {
+-		struct core_vma_metadata *meta = vma_meta + i;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *meta = cprm->vma_meta + i;
+ 		struct elf_phdr phdr;
+ 		size_t sz;
+ 
+@@ -1628,7 +1623,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ 
+ 	dump_skip_to(cprm, dataoff);
+ 
+-	if (!elf_fdpic_dump_segments(cprm, vma_meta, vma_count))
++	if (!elf_fdpic_dump_segments(cprm, cprm->vma_meta, cprm->vma_count))
+ 		goto end_coredump;
+ 
+ 	if (!elf_core_write_extra_data(cprm))
+@@ -1652,7 +1647,6 @@ end_coredump:
+ 		thread_list = thread_list->next;
+ 		kfree(tmp);
+ 	}
+-	kvfree(vma_meta);
+ 	kfree(phdr4note);
+ 	kfree(elf);
+ 	kfree(psinfo);
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 05c606fb39685..398904be0f413 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1521,8 +1521,12 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 	if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
+ 		return;
+ 
+-	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE))
++	sb_start_write(fs_info->sb);
++
++	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE)) {
++		sb_end_write(fs_info->sb);
+ 		return;
++	}
+ 
+ 	/*
+ 	 * Long running balances can keep us blocked here for eternity, so
+@@ -1530,6 +1534,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 	 */
+ 	if (!mutex_trylock(&fs_info->reclaim_bgs_lock)) {
+ 		btrfs_exclop_finish(fs_info);
++		sb_end_write(fs_info->sb);
+ 		return;
+ 	}
+ 
+@@ -1604,6 +1609,7 @@ next:
+ 	spin_unlock(&fs_info->unused_bgs_lock);
+ 	mutex_unlock(&fs_info->reclaim_bgs_lock);
+ 	btrfs_exclop_finish(fs_info);
++	sb_end_write(fs_info->sb);
+ }
+ 
+ void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info)
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 32da97c3c19db..66d6a414b2cad 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -807,7 +807,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ 	u64 em_len;
+ 	u64 em_start;
+ 	struct extent_map *em;
+-	blk_status_t ret = BLK_STS_RESOURCE;
++	blk_status_t ret;
+ 	int faili = 0;
+ 	u8 *sums;
+ 
+@@ -820,14 +820,18 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ 	read_lock(&em_tree->lock);
+ 	em = lookup_extent_mapping(em_tree, file_offset, fs_info->sectorsize);
+ 	read_unlock(&em_tree->lock);
+-	if (!em)
+-		return BLK_STS_IOERR;
++	if (!em) {
++		ret = BLK_STS_IOERR;
++		goto out;
++	}
+ 
+ 	ASSERT(em->compress_type != BTRFS_COMPRESS_NONE);
+ 	compressed_len = em->block_len;
+ 	cb = kmalloc(compressed_bio_size(fs_info, compressed_len), GFP_NOFS);
+-	if (!cb)
++	if (!cb) {
++		ret = BLK_STS_RESOURCE;
+ 		goto out;
++	}
+ 
+ 	refcount_set(&cb->pending_sectors, compressed_len >> fs_info->sectorsize_bits);
+ 	cb->errors = 0;
+@@ -850,8 +854,10 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ 	nr_pages = DIV_ROUND_UP(compressed_len, PAGE_SIZE);
+ 	cb->compressed_pages = kcalloc(nr_pages, sizeof(struct page *),
+ 				       GFP_NOFS);
+-	if (!cb->compressed_pages)
++	if (!cb->compressed_pages) {
++		ret = BLK_STS_RESOURCE;
+ 		goto fail1;
++	}
+ 
+ 	for (pg_index = 0; pg_index < nr_pages; pg_index++) {
+ 		cb->compressed_pages[pg_index] = alloc_page(GFP_NOFS);
+@@ -937,7 +943,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ 			comp_bio = NULL;
+ 		}
+ 	}
+-	return 0;
++	return BLK_STS_OK;
+ 
+ fail2:
+ 	while (faili >= 0) {
+@@ -950,6 +956,8 @@ fail1:
+ 	kfree(cb);
+ out:
+ 	free_extent_map(em);
++	bio->bi_status = ret;
++	bio_endio(bio);
+ 	return ret;
+ finish_cb:
+ 	if (comp_bio) {
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 0980025331101..aab3855851888 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -441,17 +441,31 @@ static int csum_one_extent_buffer(struct extent_buffer *eb)
+ 	else
+ 		ret = btrfs_check_leaf_full(eb);
+ 
+-	if (ret < 0) {
+-		btrfs_print_tree(eb, 0);
++	if (ret < 0)
++		goto error;
++
++	/*
++	 * Also check the generation, the eb reached here must be newer than
++	 * last committed. Or something seriously wrong happened.
++	 */
++	if (unlikely(btrfs_header_generation(eb) <= fs_info->last_trans_committed)) {
++		ret = -EUCLEAN;
+ 		btrfs_err(fs_info,
+-			"block=%llu write time tree block corruption detected",
+-			eb->start);
+-		WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+-		return ret;
++			"block=%llu bad generation, have %llu expect > %llu",
++			  eb->start, btrfs_header_generation(eb),
++			  fs_info->last_trans_committed);
++		goto error;
+ 	}
+ 	write_extent_buffer(eb, result, 0, fs_info->csum_size);
+ 
+ 	return 0;
++
++error:
++	btrfs_print_tree(eb, 0);
++	btrfs_err(fs_info, "block=%llu write time tree block corruption detected",
++		  eb->start);
++	WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++	return ret;
+ }
+ 
+ /* Checksum all dirty extent buffers in one bio_vec */
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 5512d7028091f..ced0195f3390e 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2640,7 +2640,6 @@ int btrfs_repair_one_sector(struct inode *inode,
+ 	const int icsum = bio_offset >> fs_info->sectorsize_bits;
+ 	struct bio *repair_bio;
+ 	struct btrfs_bio *repair_bbio;
+-	blk_status_t status;
+ 
+ 	btrfs_debug(fs_info,
+ 		   "repair read error: read error at %llu", start);
+@@ -2679,13 +2678,13 @@ int btrfs_repair_one_sector(struct inode *inode,
+ 		    "repair read error: submitting new read to mirror %d",
+ 		    failrec->this_mirror);
+ 
+-	status = submit_bio_hook(inode, repair_bio, failrec->this_mirror,
+-				 failrec->bio_flags);
+-	if (status) {
+-		free_io_failure(failure_tree, tree, failrec);
+-		bio_put(repair_bio);
+-	}
+-	return blk_status_to_errno(status);
++	/*
++	 * At this point we have a bio, so any errors from submit_bio_hook()
++	 * will be handled by the endio on the repair_bio, so we can't return an
++	 * error here.
++	 */
++	submit_bio_hook(inode, repair_bio, failrec->this_mirror, failrec->bio_flags);
++	return BLK_STS_OK;
+ }
+ 
+ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len)
+@@ -4800,11 +4799,12 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
+ 		return ret;
+ 	}
+ 	if (cache) {
+-		/* Impiles write in zoned mode */
+-		btrfs_put_block_group(cache);
+-		/* Mark the last eb in a block group */
++		/*
++		 * Implies write in zoned mode. Mark the last eb in a block group.
++		 */
+ 		if (cache->seq_zone && eb->start + eb->len == cache->zone_capacity)
+ 			set_bit(EXTENT_BUFFER_ZONE_FINISH, &eb->bflags);
++		btrfs_put_block_group(cache);
+ 	}
+ 	ret = write_one_eb(eb, wbc, epd);
+ 	free_extent_buffer(eb);
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index d1cbb64a78f3a..a68d9cab12ead 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -303,7 +303,7 @@ found:
+ 	read_extent_buffer(path->nodes[0], dst, (unsigned long)item,
+ 			ret * csum_size);
+ out:
+-	if (ret == -ENOENT)
++	if (ret == -ENOENT || ret == -EFBIG)
+ 		ret = 0;
+ 	return ret;
+ }
+@@ -366,6 +366,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ {
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ 	struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
++	struct btrfs_bio *bbio = NULL;
+ 	struct btrfs_path *path;
+ 	const u32 sectorsize = fs_info->sectorsize;
+ 	const u32 csum_size = fs_info->csum_size;
+@@ -375,6 +376,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ 	u8 *csum;
+ 	const unsigned int nblocks = orig_len >> fs_info->sectorsize_bits;
+ 	int count = 0;
++	blk_status_t ret = BLK_STS_OK;
+ 
+ 	if (!fs_info->csum_root || (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM))
+ 		return BLK_STS_OK;
+@@ -397,7 +399,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ 		return BLK_STS_RESOURCE;
+ 
+ 	if (!dst) {
+-		struct btrfs_bio *bbio = btrfs_bio(bio);
++		bbio = btrfs_bio(bio);
+ 
+ 		if (nblocks * csum_size > BTRFS_BIO_INLINE_CSUM_SIZE) {
+ 			bbio->csum = kmalloc_array(nblocks, csum_size, GFP_NOFS);
+@@ -453,21 +455,27 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ 
+ 		count = search_csum_tree(fs_info, path, cur_disk_bytenr,
+ 					 search_len, csum_dst);
+-		if (count <= 0) {
+-			/*
+-			 * Either we hit a critical error or we didn't find
+-			 * the csum.
+-			 * Either way, we put zero into the csums dst, and skip
+-			 * to the next sector.
+-			 */
++		if (count < 0) {
++			ret = errno_to_blk_status(count);
++			if (bbio)
++				btrfs_bio_free_csum(bbio);
++			break;
++		}
++
++		/*
++		 * We didn't find a csum for this range.  We need to make sure
++		 * we complain loudly about this, because we are not NODATASUM.
++		 *
++		 * However for the DATA_RELOC inode we could potentially be
++		 * relocating data extents for a NODATASUM inode, so the inode
++		 * itself won't be marked with NODATASUM, but the extent we're
++		 * copying is in fact NODATASUM.  If we don't find a csum we
++		 * assume this is the case.
++		 */
++		if (count == 0) {
+ 			memset(csum_dst, 0, csum_size);
+ 			count = 1;
+ 
+-			/*
+-			 * For data reloc inode, we need to mark the range
+-			 * NODATASUM so that balance won't report false csum
+-			 * error.
+-			 */
+ 			if (BTRFS_I(inode)->root->root_key.objectid ==
+ 			    BTRFS_DATA_RELOC_TREE_OBJECTID) {
+ 				u64 file_offset;
+@@ -488,7 +496,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ 	}
+ 
+ 	btrfs_free_path(path);
+-	return BLK_STS_OK;
++	return ret;
+ }
+ 
+ int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index cc957cce23a1a..68f5a94c82b78 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2536,10 +2536,15 @@ blk_status_t btrfs_submit_data_bio(struct inode *inode, struct bio *bio,
+ 			goto out;
+ 
+ 		if (bio_flags & EXTENT_BIO_COMPRESSED) {
++			/*
++			 * btrfs_submit_compressed_read will handle completing
++			 * the bio if there were any errors, so just return
++			 * here.
++			 */
+ 			ret = btrfs_submit_compressed_read(inode, bio,
+ 							   mirror_num,
+ 							   bio_flags);
+-			goto out;
++			goto out_no_endio;
+ 		} else {
+ 			/*
+ 			 * Lookup bio sums does extra checks around whether we
+@@ -2573,6 +2578,7 @@ out:
+ 		bio->bi_status = ret;
+ 		bio_endio(bio);
+ 	}
++out_no_endio:
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index e0f93b357548f..157d72e330d6f 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -505,8 +505,11 @@ process_slot:
+ 			 */
+ 			ASSERT(key.offset == 0);
+ 			ASSERT(datal <= fs_info->sectorsize);
+-			if (key.offset != 0 || datal > fs_info->sectorsize)
+-				return -EUCLEAN;
++			if (WARN_ON(key.offset != 0) ||
++			    WARN_ON(datal > fs_info->sectorsize)) {
++				ret = -EUCLEAN;
++				goto out;
++			}
+ 
+ 			ret = clone_copy_inline_extent(inode, path, &new_key,
+ 						       drop_start, datal, size,
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 48d77f360a249..b1e6e06f2cbd5 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -1059,7 +1059,6 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+ 			trans_rsv->reserved;
+ 		if (block_rsv_size < space_info->bytes_may_use)
+ 			delalloc_size = space_info->bytes_may_use - block_rsv_size;
+-		spin_unlock(&space_info->lock);
+ 
+ 		/*
+ 		 * We don't want to include the global_rsv in our calculation,
+@@ -1090,6 +1089,8 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+ 			flush = FLUSH_DELAYED_REFS_NR;
+ 		}
+ 
++		spin_unlock(&space_info->lock);
++
+ 		/*
+ 		 * We don't want to reclaim everything, just a portion, so scale
+ 		 * down the to_reclaim by 1/4.  If it takes us down to 0,
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 42391d4aeb119..683822b7b194f 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -530,15 +530,48 @@ error:
+ 	return ret;
+ }
+ 
+-static bool device_path_matched(const char *path, struct btrfs_device *device)
++/*
++ * Check if the device in the path matches the device in the given struct device.
++ *
++ * Returns:
++ *   true  If it is the same device.
++ *   false If it is not the same device or on error.
++ */
++static bool device_matched(const struct btrfs_device *device, const char *path)
+ {
+-	int found;
++	char *device_name;
++	dev_t dev_old;
++	dev_t dev_new;
++	int ret;
++
++	/*
++	 * If we are looking for a device with the matching dev_t, then skip
++	 * device without a name (a missing device).
++	 */
++	if (!device->name)
++		return false;
++
++	device_name = kzalloc(BTRFS_PATH_NAME_MAX, GFP_KERNEL);
++	if (!device_name)
++		return false;
+ 
+ 	rcu_read_lock();
+-	found = strcmp(rcu_str_deref(device->name), path);
++	scnprintf(device_name, BTRFS_PATH_NAME_MAX, "%s", rcu_str_deref(device->name));
+ 	rcu_read_unlock();
+ 
+-	return found == 0;
++	ret = lookup_bdev(device_name, &dev_old);
++	kfree(device_name);
++	if (ret)
++		return false;
++
++	ret = lookup_bdev(path, &dev_new);
++	if (ret)
++		return false;
++
++	if (dev_old == dev_new)
++		return true;
++
++	return false;
+ }
+ 
+ /*
+@@ -571,9 +604,7 @@ static int btrfs_free_stale_devices(const char *path,
+ 					 &fs_devices->devices, dev_list) {
+ 			if (skip_device && skip_device == device)
+ 				continue;
+-			if (path && !device->name)
+-				continue;
+-			if (path && !device_path_matched(path, device))
++			if (path && !device_matched(device, path))
+ 				continue;
+ 			if (fs_devices->opened) {
+ 				/* for an already deleted device return 0 */
+@@ -8263,10 +8294,12 @@ static int relocating_repair_kthread(void *data)
+ 	target = cache->start;
+ 	btrfs_put_block_group(cache);
+ 
++	sb_start_write(fs_info->sb);
+ 	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE)) {
+ 		btrfs_info(fs_info,
+ 			   "zoned: skip relocating block group %llu to repair: EBUSY",
+ 			   target);
++		sb_end_write(fs_info->sb);
+ 		return -EBUSY;
+ 	}
+ 
+@@ -8294,6 +8327,7 @@ out:
+ 		btrfs_put_block_group(cache);
+ 	mutex_unlock(&fs_info->reclaim_bgs_lock);
+ 	btrfs_exclop_finish(fs_info);
++	sb_end_write(fs_info->sb);
+ 
+ 	return ret;
+ }
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 46bc589b7a03c..d959b52f0d1a5 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -1235,16 +1235,18 @@ static void bh_lru_install(struct buffer_head *bh)
+ 	int i;
+ 
+ 	check_irqs_on();
++	bh_lru_lock();
++
+ 	/*
+ 	 * the refcount of buffer_head in bh_lru prevents dropping the
+ 	 * attached page(i.e., try_to_free_buffers) so it could cause
+ 	 * failing page migration.
+ 	 * Skip putting upcoming bh into bh_lru until migration is done.
+ 	 */
+-	if (lru_cache_disabled())
++	if (lru_cache_disabled()) {
++		bh_lru_unlock();
+ 		return;
+-
+-	bh_lru_lock();
++	}
+ 
+ 	b = this_cpu_ptr(&bh_lrus);
+ 	for (i = 0; i < BH_LRU_SIZE; i++) {
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 99c51391a48da..02b762b7e3fda 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -209,6 +209,9 @@ cifs_read_super(struct super_block *sb)
+ 	if (rc)
+ 		goto out_no_root;
+ 	/* tune readahead according to rsize if readahead size not set on mount */
++	if (cifs_sb->ctx->rsize == 0)
++		cifs_sb->ctx->rsize =
++			tcon->ses->server->ops->negotiate_rsize(tcon, cifs_sb->ctx);
+ 	if (cifs_sb->ctx->rasize)
+ 		sb->s_bdi->ra_pages = cifs_sb->ctx->rasize / PAGE_SIZE;
+ 	else
+@@ -253,6 +256,9 @@ static void cifs_kill_sb(struct super_block *sb)
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ 	struct cifs_tcon *tcon;
+ 	struct cached_fid *cfid;
++	struct rb_root *root = &cifs_sb->tlink_tree;
++	struct rb_node *node;
++	struct tcon_link *tlink;
+ 
+ 	/*
+ 	 * We ned to release all dentries for the cached directories
+@@ -262,16 +268,18 @@ static void cifs_kill_sb(struct super_block *sb)
+ 		dput(cifs_sb->root);
+ 		cifs_sb->root = NULL;
+ 	}
+-	tcon = cifs_sb_master_tcon(cifs_sb);
+-	if (tcon) {
++	node = rb_first(root);
++	while (node != NULL) {
++		tlink = rb_entry(node, struct tcon_link, tl_rbnode);
++		tcon = tlink_tcon(tlink);
+ 		cfid = &tcon->crfid;
+ 		mutex_lock(&cfid->fid_mutex);
+ 		if (cfid->dentry) {
+-
+ 			dput(cfid->dentry);
+ 			cfid->dentry = NULL;
+ 		}
+ 		mutex_unlock(&cfid->fid_mutex);
++		node = rb_next(node);
+ 	}
+ 
+ 	kill_anon_super(sb);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index fb69524a992bb..ec9d225e255a2 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3401,6 +3401,9 @@ static int connect_dfs_target(struct mount_ctx *mnt_ctx, const char *full_path,
+ 	struct cifs_sb_info *cifs_sb = mnt_ctx->cifs_sb;
+ 	char *oldmnt = cifs_sb->ctx->mount_options;
+ 
++	cifs_dbg(FYI, "%s: full_path=%s ref_path=%s target=%s\n", __func__, full_path, ref_path,
++		 dfs_cache_get_tgt_name(tit));
++
+ 	rc = dfs_cache_get_tgt_referral(ref_path, tit, &ref);
+ 	if (rc)
+ 		goto out;
+@@ -3499,13 +3502,18 @@ static int __follow_dfs_link(struct mount_ctx *mnt_ctx)
+ 	if (rc)
+ 		goto out;
+ 
+-	/* Try all dfs link targets */
++	/* Try all dfs link targets.  If an I/O fails from currently connected DFS target with an
++	 * error other than STATUS_PATH_NOT_COVERED (-EREMOTE), then retry it from other targets as
++	 * specified in MS-DFSC "3.1.5.2 I/O Operation to Target Fails with an Error Other Than
++	 * STATUS_PATH_NOT_COVERED."
++	 */
+ 	for (rc = -ENOENT, tit = dfs_cache_get_tgt_iterator(&tl);
+ 	     tit; tit = dfs_cache_get_next_tgt(&tl, tit)) {
+ 		rc = connect_dfs_target(mnt_ctx, full_path, mnt_ctx->leaf_fullpath + 1, tit);
+ 		if (!rc) {
+ 			rc = is_path_remote(mnt_ctx);
+-			break;
++			if (!rc || rc == -EREMOTE)
++				break;
+ 		}
+ 	}
+ 
+@@ -3579,7 +3587,7 @@ int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb3_fs_context *ctx)
+ 		goto error;
+ 
+ 	rc = is_path_remote(&mnt_ctx);
+-	if (rc == -EREMOTE)
++	if (rc)
+ 		rc = follow_dfs_link(&mnt_ctx);
+ 	if (rc)
+ 		goto error;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 9fee3af83a736..abadc2f86dea6 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3734,6 +3734,11 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
+ 				break;
+ 		}
+ 
++		if (cifs_sb->ctx->rsize == 0)
++			cifs_sb->ctx->rsize =
++				server->ops->negotiate_rsize(tlink_tcon(open_file->tlink),
++							     cifs_sb->ctx);
++
+ 		rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
+ 						   &rsize, credits);
+ 		if (rc)
+@@ -4512,6 +4517,11 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
+ 				break;
+ 		}
+ 
++		if (cifs_sb->ctx->rsize == 0)
++			cifs_sb->ctx->rsize =
++				server->ops->negotiate_rsize(tlink_tcon(open_file->tlink),
++							     cifs_sb->ctx);
++
+ 		rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
+ 						   &rsize, credits);
+ 		if (rc)
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index c5b1dea54ebcd..d8ca38ea8e223 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1631,6 +1631,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	unsigned int size[2];
+ 	void *data[2];
+ 	int create_options = is_dir ? CREATE_NOT_FILE : CREATE_NOT_DIR;
++	void (*free_req1_func)(struct smb_rqst *r);
+ 
+ 	vars = kzalloc(sizeof(*vars), GFP_ATOMIC);
+ 	if (vars == NULL)
+@@ -1640,27 +1641,29 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 
+ 	resp_buftype[0] = resp_buftype[1] = resp_buftype[2] = CIFS_NO_BUFFER;
+ 
+-	if (copy_from_user(&qi, arg, sizeof(struct smb_query_info)))
+-		goto e_fault;
+-
++	if (copy_from_user(&qi, arg, sizeof(struct smb_query_info))) {
++		rc = -EFAULT;
++		goto free_vars;
++	}
+ 	if (qi.output_buffer_length > 1024) {
+-		kfree(vars);
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto free_vars;
+ 	}
+ 
+ 	if (!ses || !server) {
+-		kfree(vars);
+-		return -EIO;
++		rc = -EIO;
++		goto free_vars;
+ 	}
+ 
+ 	if (smb3_encryption_required(tcon))
+ 		flags |= CIFS_TRANSFORM_REQ;
+ 
+-	buffer = memdup_user(arg + sizeof(struct smb_query_info),
+-			     qi.output_buffer_length);
+-	if (IS_ERR(buffer)) {
+-		kfree(vars);
+-		return PTR_ERR(buffer);
++	if (qi.output_buffer_length) {
++		buffer = memdup_user(arg + sizeof(struct smb_query_info), qi.output_buffer_length);
++		if (IS_ERR(buffer)) {
++			rc = PTR_ERR(buffer);
++			goto free_vars;
++		}
+ 	}
+ 
+ 	/* Open */
+@@ -1698,45 +1701,45 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	rc = SMB2_open_init(tcon, server,
+ 			    &rqst[0], &oplock, &oparms, path);
+ 	if (rc)
+-		goto iqinf_exit;
++		goto free_output_buffer;
+ 	smb2_set_next_command(tcon, &rqst[0]);
+ 
+ 	/* Query */
+ 	if (qi.flags & PASSTHRU_FSCTL) {
+ 		/* Can eventually relax perm check since server enforces too */
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!capable(CAP_SYS_ADMIN)) {
+ 			rc = -EPERM;
+-		else  {
+-			rqst[1].rq_iov = &vars->io_iov[0];
+-			rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
+-
+-			rc = SMB2_ioctl_init(tcon, server,
+-					     &rqst[1],
+-					     COMPOUND_FID, COMPOUND_FID,
+-					     qi.info_type, true, buffer,
+-					     qi.output_buffer_length,
+-					     CIFSMaxBufSize -
+-					     MAX_SMB2_CREATE_RESPONSE_SIZE -
+-					     MAX_SMB2_CLOSE_RESPONSE_SIZE);
++			goto free_open_req;
+ 		}
++		rqst[1].rq_iov = &vars->io_iov[0];
++		rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
++
++		rc = SMB2_ioctl_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
++				     qi.info_type, true, buffer, qi.output_buffer_length,
++				     CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
++				     MAX_SMB2_CLOSE_RESPONSE_SIZE);
++		free_req1_func = SMB2_ioctl_free;
+ 	} else if (qi.flags == PASSTHRU_SET_INFO) {
+ 		/* Can eventually relax perm check since server enforces too */
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!capable(CAP_SYS_ADMIN)) {
+ 			rc = -EPERM;
+-		else  {
+-			rqst[1].rq_iov = &vars->si_iov[0];
+-			rqst[1].rq_nvec = 1;
+-
+-			size[0] = 8;
+-			data[0] = buffer;
+-
+-			rc = SMB2_set_info_init(tcon, server,
+-					&rqst[1],
+-					COMPOUND_FID, COMPOUND_FID,
+-					current->tgid,
+-					FILE_END_OF_FILE_INFORMATION,
+-					SMB2_O_INFO_FILE, 0, data, size);
++			goto free_open_req;
+ 		}
++		if (qi.output_buffer_length < 8) {
++			rc = -EINVAL;
++			goto free_open_req;
++		}
++		rqst[1].rq_iov = &vars->si_iov[0];
++		rqst[1].rq_nvec = 1;
++
++		/* MS-FSCC 2.4.13 FileEndOfFileInformation */
++		size[0] = 8;
++		data[0] = buffer;
++
++		rc = SMB2_set_info_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
++					current->tgid, FILE_END_OF_FILE_INFORMATION,
++					SMB2_O_INFO_FILE, 0, data, size);
++		free_req1_func = SMB2_set_info_free;
+ 	} else if (qi.flags == PASSTHRU_QUERY_INFO) {
+ 		rqst[1].rq_iov = &vars->qi_iov[0];
+ 		rqst[1].rq_nvec = 1;
+@@ -1747,6 +1750,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 				  qi.info_type, qi.additional_information,
+ 				  qi.input_buffer_length,
+ 				  qi.output_buffer_length, buffer);
++		free_req1_func = SMB2_query_info_free;
+ 	} else { /* unknown flags */
+ 		cifs_tcon_dbg(VFS, "Invalid passthru query flags: 0x%x\n",
+ 			      qi.flags);
+@@ -1754,7 +1758,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	}
+ 
+ 	if (rc)
+-		goto iqinf_exit;
++		goto free_open_req;
+ 	smb2_set_next_command(tcon, &rqst[1]);
+ 	smb2_set_related(&rqst[1]);
+ 
+@@ -1765,14 +1769,14 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 	rc = SMB2_close_init(tcon, server,
+ 			     &rqst[2], COMPOUND_FID, COMPOUND_FID, false);
+ 	if (rc)
+-		goto iqinf_exit;
++		goto free_req_1;
+ 	smb2_set_related(&rqst[2]);
+ 
+ 	rc = compound_send_recv(xid, ses, server,
+ 				flags, 3, rqst,
+ 				resp_buftype, rsp_iov);
+ 	if (rc)
+-		goto iqinf_exit;
++		goto out;
+ 
+ 	/* No need to bump num_remote_opens since handle immediately closed */
+ 	if (qi.flags & PASSTHRU_FSCTL) {
+@@ -1782,18 +1786,22 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 			qi.input_buffer_length = le32_to_cpu(io_rsp->OutputCount);
+ 		if (qi.input_buffer_length > 0 &&
+ 		    le32_to_cpu(io_rsp->OutputOffset) + qi.input_buffer_length
+-		    > rsp_iov[1].iov_len)
+-			goto e_fault;
++		    > rsp_iov[1].iov_len) {
++			rc = -EFAULT;
++			goto out;
++		}
+ 
+ 		if (copy_to_user(&pqi->input_buffer_length,
+ 				 &qi.input_buffer_length,
+-				 sizeof(qi.input_buffer_length)))
+-			goto e_fault;
++				 sizeof(qi.input_buffer_length))) {
++			rc = -EFAULT;
++			goto out;
++		}
+ 
+ 		if (copy_to_user((void __user *)pqi + sizeof(struct smb_query_info),
+ 				 (const void *)io_rsp + le32_to_cpu(io_rsp->OutputOffset),
+ 				 qi.input_buffer_length))
+-			goto e_fault;
++			rc = -EFAULT;
+ 	} else {
+ 		pqi = (struct smb_query_info __user *)arg;
+ 		qi_rsp = (struct smb2_query_info_rsp *)rsp_iov[1].iov_base;
+@@ -1801,28 +1809,30 @@ smb2_ioctl_query_info(const unsigned int xid,
+ 			qi.input_buffer_length = le32_to_cpu(qi_rsp->OutputBufferLength);
+ 		if (copy_to_user(&pqi->input_buffer_length,
+ 				 &qi.input_buffer_length,
+-				 sizeof(qi.input_buffer_length)))
+-			goto e_fault;
++				 sizeof(qi.input_buffer_length))) {
++			rc = -EFAULT;
++			goto out;
++		}
+ 
+ 		if (copy_to_user(pqi + 1, qi_rsp->Buffer,
+ 				 qi.input_buffer_length))
+-			goto e_fault;
++			rc = -EFAULT;
+ 	}
+ 
+- iqinf_exit:
+-	cifs_small_buf_release(rqst[0].rq_iov[0].iov_base);
+-	cifs_small_buf_release(rqst[1].rq_iov[0].iov_base);
+-	cifs_small_buf_release(rqst[2].rq_iov[0].iov_base);
++out:
+ 	free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ 	free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+ 	free_rsp_buf(resp_buftype[2], rsp_iov[2].iov_base);
+-	kfree(vars);
++	SMB2_close_free(&rqst[2]);
++free_req_1:
++	free_req1_func(&rqst[1]);
++free_open_req:
++	SMB2_open_free(&rqst[0]);
++free_output_buffer:
+ 	kfree(buffer);
++free_vars:
++	kfree(vars);
+ 	return rc;
+-
+-e_fault:
+-	rc = -EFAULT;
+-	goto iqinf_exit;
+ }
+ 
+ static ssize_t
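The smb2_ioctl_query_info() rework above replaces the single `iqinf_exit` label with a ladder of labels, so a failure at any stage unwinds only the resources that were actually set up. A minimal standalone sketch of that goto-ladder pattern (hypothetical resource names, not the real CIFS APIs):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical two-stage setup: each stage that succeeds gets exactly one
 * matching cleanup label, so an error at any point frees only what exists.
 * fail_stage simulates where the real code would hit an error. */
static int process(size_t len, int fail_stage)
{
	int rc = 0;
	char *vars = NULL, *buffer = NULL;

	vars = calloc(1, 64);
	if (!vars)
		return -1;

	if (fail_stage == 1) {		/* e.g. copy_from_user() failed */
		rc = -1;
		goto free_vars;
	}

	buffer = malloc(len);
	if (!buffer) {
		rc = -1;
		goto free_vars;
	}

	if (fail_stage == 2) {		/* e.g. request init failed */
		rc = -2;
		goto free_buffer;
	}

	memset(buffer, 0, len);		/* the real work */

free_buffer:
	free(buffer);
free_vars:
	free(vars);
	return rc;
}
```

The success path falls straight through the labels, so the happy case and every error case share one cleanup sequence, which is what lets the patch drop the separate `e_fault:` tail.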
+diff --git a/fs/coredump.c b/fs/coredump.c
+index a6b3c196cdef5..f5e22e3e30168 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -41,6 +41,7 @@
+ #include <linux/fs.h>
+ #include <linux/path.h>
+ #include <linux/timekeeping.h>
++#include <linux/elf.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/mmu_context.h>
+@@ -52,6 +53,9 @@
+ 
+ #include <trace/events/sched.h>
+ 
++static bool dump_vma_snapshot(struct coredump_params *cprm);
++static void free_vma_snapshot(struct coredump_params *cprm);
++
+ int core_uses_pid;
+ unsigned int core_pipe_limit;
+ char core_pattern[CORENAME_MAX_SIZE] = "core";
+@@ -534,6 +538,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 		 * by any locks.
+ 		 */
+ 		.mm_flags = mm->flags,
++		.vma_meta = NULL,
+ 	};
+ 
+ 	audit_core_dumps(siginfo->si_signo);
+@@ -748,6 +753,9 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 			pr_info("Core dump to |%s disabled\n", cn.corename);
+ 			goto close_fail;
+ 		}
++		if (!dump_vma_snapshot(&cprm))
++			goto close_fail;
++
+ 		file_start_write(cprm.file);
+ 		core_dumped = binfmt->core_dump(&cprm);
+ 		/*
+@@ -761,6 +769,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 			dump_emit(&cprm, "", 1);
+ 		}
+ 		file_end_write(cprm.file);
++		free_vma_snapshot(&cprm);
+ 	}
+ 	if (ispipe && core_pipe_limit)
+ 		wait_for_dump_helpers(cprm.file);
+@@ -926,6 +935,8 @@ static bool always_dump_vma(struct vm_area_struct *vma)
+ 	return false;
+ }
+ 
++#define DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER 1
++
+ /*
+  * Decide how much of @vma's contents should be included in a core dump.
+  */
+@@ -985,9 +996,20 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
+ 	 * dump the first page to aid in determining what was mapped here.
+ 	 */
+ 	if (FILTER(ELF_HEADERS) &&
+-	    vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ) &&
+-	    (READ_ONCE(file_inode(vma->vm_file)->i_mode) & 0111) != 0)
+-		return PAGE_SIZE;
++	    vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ)) {
++		if ((READ_ONCE(file_inode(vma->vm_file)->i_mode) & 0111) != 0)
++			return PAGE_SIZE;
++
++		/*
++		 * ELF libraries aren't always executable.
++		 * We'll want to check whether the mapping starts with the ELF
++		 * magic, but not now - we're holding the mmap lock,
++		 * so copy_from_user() doesn't work here.
++		 * Use a placeholder instead, and fix it up later in
++		 * dump_vma_snapshot().
++		 */
++		return DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER;
++	}
+ 
+ #undef	FILTER
+ 
+@@ -1024,18 +1046,29 @@ static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
+ 	return gate_vma;
+ }
+ 
++static void free_vma_snapshot(struct coredump_params *cprm)
++{
++	if (cprm->vma_meta) {
++		int i;
++		for (i = 0; i < cprm->vma_count; i++) {
++			struct file *file = cprm->vma_meta[i].file;
++			if (file)
++				fput(file);
++		}
++		kvfree(cprm->vma_meta);
++		cprm->vma_meta = NULL;
++	}
++}
++
+ /*
+  * Under the mmap_lock, take a snapshot of relevant information about the task's
+  * VMAs.
+  */
+-int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+-		      struct core_vma_metadata **vma_meta,
+-		      size_t *vma_data_size_ptr)
++static bool dump_vma_snapshot(struct coredump_params *cprm)
+ {
+ 	struct vm_area_struct *vma, *gate_vma;
+ 	struct mm_struct *mm = current->mm;
+ 	int i;
+-	size_t vma_data_size = 0;
+ 
+ 	/*
+ 	 * Once the stack expansion code is fixed to not change VMA bounds
+@@ -1043,36 +1076,51 @@ int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+ 	 * mmap_lock in read mode.
+ 	 */
+ 	if (mmap_write_lock_killable(mm))
+-		return -EINTR;
++		return false;
+ 
++	cprm->vma_data_size = 0;
+ 	gate_vma = get_gate_vma(mm);
+-	*vma_count = mm->map_count + (gate_vma ? 1 : 0);
++	cprm->vma_count = mm->map_count + (gate_vma ? 1 : 0);
+ 
+-	*vma_meta = kvmalloc_array(*vma_count, sizeof(**vma_meta), GFP_KERNEL);
+-	if (!*vma_meta) {
++	cprm->vma_meta = kvmalloc_array(cprm->vma_count, sizeof(*cprm->vma_meta), GFP_KERNEL);
++	if (!cprm->vma_meta) {
+ 		mmap_write_unlock(mm);
+-		return -ENOMEM;
++		return false;
+ 	}
+ 
+ 	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
+ 			vma = next_vma(vma, gate_vma), i++) {
+-		struct core_vma_metadata *m = (*vma_meta) + i;
++		struct core_vma_metadata *m = cprm->vma_meta + i;
+ 
+ 		m->start = vma->vm_start;
+ 		m->end = vma->vm_end;
+ 		m->flags = vma->vm_flags;
+ 		m->dump_size = vma_dump_size(vma, cprm->mm_flags);
++		m->pgoff = vma->vm_pgoff;
+ 
+-		vma_data_size += m->dump_size;
++		m->file = vma->vm_file;
++		if (m->file)
++			get_file(m->file);
+ 	}
+ 
+ 	mmap_write_unlock(mm);
+ 
+-	if (WARN_ON(i != *vma_count)) {
+-		kvfree(*vma_meta);
+-		return -EFAULT;
++	for (i = 0; i < cprm->vma_count; i++) {
++		struct core_vma_metadata *m = cprm->vma_meta + i;
++
++		if (m->dump_size == DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER) {
++			char elfmag[SELFMAG];
++
++			if (copy_from_user(elfmag, (void __user *)m->start, SELFMAG) ||
++					memcmp(elfmag, ELFMAG, SELFMAG) != 0) {
++				m->dump_size = 0;
++			} else {
++				m->dump_size = PAGE_SIZE;
++			}
++		}
++
++		cprm->vma_data_size += m->dump_size;
+ 	}
+ 
+-	*vma_data_size_ptr = vma_data_size;
+-	return 0;
++	return true;
+ }
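Because vma_dump_size() runs under the mmap lock, it can't call copy_from_user(); the patch returns a placeholder that dump_vma_snapshot() resolves later by checking for the ELF magic. A userspace sketch of just that magic check, using the same `ELFMAG`/`SELFMAG` constants from `<elf.h>` (the memcpy stands in for the kernel's copy_from_user()):

```c
#include <elf.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Decide whether the first bytes of a mapping look like an ELF header;
 * in the patch this runs only after mmap_lock has been dropped. */
static bool looks_like_elf(const unsigned char *start, size_t len)
{
	char magic[SELFMAG];

	if (len < SELFMAG)
		return false;
	memcpy(magic, start, SELFMAG);	/* stands in for copy_from_user() */
	return memcmp(magic, ELFMAG, SELFMAG) == 0;
}
```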
+diff --git a/fs/exec.c b/fs/exec.c
+index 537d92c41105b..690aaab256fdf 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -494,8 +494,14 @@ static int bprm_stack_limits(struct linux_binprm *bprm)
+ 	 * the stack. They aren't stored until much later when we can't
+ 	 * signal to the parent that the child has run out of stack space.
+ 	 * Instead, calculate it here so it's possible to fail gracefully.
++	 *
++	 * In the case of argc = 0, make sure there is space for adding a
++	 * empty string (which will bump argc to 1), to ensure confused
++	 * userspace programs don't start processing from argv[1], thinking
++	 * argc can never be 0, to keep them from walking envp by accident.
++	 * See do_execveat_common().
+ 	 */
+-	ptr_size = (bprm->argc + bprm->envc) * sizeof(void *);
++	ptr_size = (max(bprm->argc, 1) + bprm->envc) * sizeof(void *);
+ 	if (limit <= ptr_size)
+ 		return -E2BIG;
+ 	limit -= ptr_size;
+@@ -1893,6 +1899,9 @@ static int do_execveat_common(int fd, struct filename *filename,
+ 	}
+ 
+ 	retval = count(argv, MAX_ARG_STRINGS);
++	if (retval == 0)
++		pr_warn_once("process '%s' launched '%s' with NULL argv: empty string added\n",
++			     current->comm, bprm->filename);
+ 	if (retval < 0)
+ 		goto out_free;
+ 	bprm->argc = retval;
+@@ -1919,6 +1928,19 @@ static int do_execveat_common(int fd, struct filename *filename,
+ 	if (retval < 0)
+ 		goto out_free;
+ 
++	/*
++	 * When argv is empty, add an empty string ("") as argv[0] to
++	 * ensure confused userspace programs that start processing
++	 * from argv[1] won't end up walking envp. See also
++	 * bprm_stack_limits().
++	 */
++	if (bprm->argc == 0) {
++		retval = copy_string_kernel("", bprm);
++		if (retval < 0)
++			goto out_free;
++		bprm->argc = 1;
++	}
++
+ 	retval = bprm_execve(bprm, fd, filename, flags);
+ out_free:
+ 	free_bprm(bprm);
+@@ -1947,6 +1969,8 @@ int kernel_execve(const char *kernel_filename,
+ 	}
+ 
+ 	retval = count_strings_kernel(argv);
++	if (WARN_ON_ONCE(retval == 0))
++		retval = -EINVAL;
+ 	if (retval < 0)
+ 		goto out_free;
+ 	bprm->argc = retval;
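The exec changes above have two halves that must agree: do_execveat_common() forces an empty string into argv when argc is 0, and bprm_stack_limits() must therefore budget stack space for at least one argv pointer. A reduced sketch of that size calculation (simplified; the real function also accounts for the strings themselves):

```c
#include <stddef.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Bytes reserved on the stack for the argv/envp pointer arrays.
 * argc is clamped to 1 because an empty argv still gets "" inserted
 * as argv[0] later, bumping argc to 1. */
static size_t ptr_array_size(int argc, int envc)
{
	return (size_t)(MAX(argc, 1) + envc) * sizeof(void *);
}
```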
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index d8d580b609baa..3d21279fe2cb5 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -753,8 +753,12 @@ static loff_t ext2_max_size(int bits)
+ 	res += 1LL << (bits-2);
+ 	res += 1LL << (2*(bits-2));
+ 	res += 1LL << (3*(bits-2));
++	/* Compute how many metadata blocks are needed */
++	meta_blocks = 1;
++	meta_blocks += 1 + ppb;
++	meta_blocks += 1 + ppb + ppb * ppb;
+ 	/* Does block tree limit file size? */
+-	if (res < upper_limit)
++	if (res + meta_blocks <= upper_limit)
+ 		goto check_lfs;
+ 
+ 	res = upper_limit;
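The ext2 fix subtracts the indirect-tree metadata from the size limit before deciding whether the block tree caps file size. With `ppb` pointers per block, a fully populated tree costs 1 block for the single-indirect level, 1 + ppb for the double level, and 1 + ppb + ppb² for the triple level. That arithmetic in isolation:

```c
#include <stdint.h>

/* Metadata blocks consumed by a fully populated ext2 indirect tree,
 * where ppb = pointers per block (block size / 4 on ext2). */
static uint64_t ext2_meta_blocks(uint64_t ppb)
{
	uint64_t meta = 1;		/* single indirect block */

	meta += 1 + ppb;		/* double indirect: root + leaves */
	meta += 1 + ppb + ppb * ppb;	/* triple indirect: three levels */
	return meta;
}
```

For a 1024-byte block size, ppb is 256, giving 66051 metadata blocks — the amount the patched `res + meta_blocks <= upper_limit` check now accounts for.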
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index d091133a4b460..46fdb40c3962c 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1788,19 +1788,20 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 	void *inline_pos;
+ 	unsigned int offset;
+ 	struct ext4_dir_entry_2 *de;
+-	bool ret = true;
++	bool ret = false;
+ 
+ 	err = ext4_get_inode_loc(dir, &iloc);
+ 	if (err) {
+ 		EXT4_ERROR_INODE_ERR(dir, -err,
+ 				     "error %d getting inode %lu block",
+ 				     err, dir->i_ino);
+-		return true;
++		return false;
+ 	}
+ 
+ 	down_read(&EXT4_I(dir)->xattr_sem);
+ 	if (!ext4_has_inline_data(dir)) {
+ 		*has_inline_data = 0;
++		ret = true;
+ 		goto out;
+ 	}
+ 
+@@ -1809,7 +1810,6 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 		ext4_warning(dir->i_sb,
+ 			     "bad inline directory (dir #%lu) - no `..'",
+ 			     dir->i_ino);
+-		ret = true;
+ 		goto out;
+ 	}
+ 
+@@ -1828,16 +1828,15 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 				     dir->i_ino, le32_to_cpu(de->inode),
+ 				     le16_to_cpu(de->rec_len), de->name_len,
+ 				     inline_size);
+-			ret = true;
+ 			goto out;
+ 		}
+ 		if (le32_to_cpu(de->inode)) {
+-			ret = false;
+ 			goto out;
+ 		}
+ 		offset += ext4_rec_len_from_disk(de->rec_len, inline_size);
+ 	}
+ 
++	ret = true;
+ out:
+ 	up_read(&EXT4_I(dir)->xattr_sem);
+ 	brelse(iloc.bh);
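empty_inline_dir() above (and ext4_empty_dir() further down) flip their default return from "empty" to "not empty", so an I/O error or corrupt entry can no longer make a directory look deletable. A sketch of that fail-closed shape with a hypothetical validator:

```c
#include <stdbool.h>

/* Fail-closed: ret starts false and flips to true only on the fully
 * validated path; every error or corruption branch falls through to
 * out with the safe answer ("not empty"). */
static bool dir_is_empty(const int *entries, int n, bool io_error)
{
	bool ret = false;
	int i;

	if (io_error)
		goto out;	/* can't prove emptiness: report non-empty */

	for (i = 0; i < n; i++)
		if (entries[i] != 0)
			goto out;	/* live entry found */

	ret = true;
out:
	return ret;
}
```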
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 2f5686dfa30d5..a61d1e4e10265 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1992,6 +1992,15 @@ static int ext4_writepage(struct page *page,
+ 	else
+ 		len = PAGE_SIZE;
+ 
++	/* Should never happen but for bugs in other kernel subsystems */
++	if (!page_has_buffers(page)) {
++		ext4_warning_inode(inode,
++		   "page %lu does not have buffers attached", page->index);
++		ClearPageDirty(page);
++		unlock_page(page);
++		return 0;
++	}
++
+ 	page_bufs = page_buffers(page);
+ 	/*
+ 	 * We cannot do block allocation or other extent handling in this
+@@ -2595,6 +2604,22 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
+ 			wait_on_page_writeback(page);
+ 			BUG_ON(PageWriteback(page));
+ 
++			/*
++			 * Should never happen but for buggy code in
++			 * other subsystems that call
++			 * set_page_dirty() without properly warning
++			 * the file system first.  See [1] for more
++			 * information.
++			 *
++			 * [1] https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz
++			 */
++			if (!page_has_buffers(page)) {
++				ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", page->index);
++				ClearPageDirty(page);
++				unlock_page(page);
++				continue;
++			}
++
+ 			if (mpd->map.m_len == 0)
+ 				mpd->first_page = page->index;
+ 			mpd->next_page = page->index + 1;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index c849fd845d9b8..b53c07c5c7606 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1000,7 +1000,7 @@ static inline int should_optimize_scan(struct ext4_allocation_context *ac)
+ 		return 0;
+ 	if (ac->ac_criteria >= 2)
+ 		return 0;
+-	if (ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS))
++	if (!ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS))
+ 		return 0;
+ 	return 1;
+ }
+@@ -3899,69 +3899,95 @@ void ext4_mb_mark_bb(struct super_block *sb, ext4_fsblk_t block,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	ext4_group_t group;
+ 	ext4_grpblk_t blkoff;
+-	int i, clen, err;
++	int i, err;
+ 	int already;
++	unsigned int clen, clen_changed, thisgrp_len;
+ 
+-	clen = EXT4_B2C(sbi, len);
++	while (len > 0) {
++		ext4_get_group_no_and_offset(sb, block, &group, &blkoff);
+ 
+-	ext4_get_group_no_and_offset(sb, block, &group, &blkoff);
+-	bitmap_bh = ext4_read_block_bitmap(sb, group);
+-	if (IS_ERR(bitmap_bh)) {
+-		err = PTR_ERR(bitmap_bh);
+-		bitmap_bh = NULL;
+-		goto out_err;
+-	}
++		/*
++		 * Check to see if we are freeing blocks across a group
++		 * boundary.
++		 * In case of flex_bg, this can happen that (block, len) may
++		 * span across more than one group. In that case we need to
++		 * get the corresponding group metadata to work with.
++		 * For this we have goto again loop.
++		 */
++		thisgrp_len = min_t(unsigned int, (unsigned int)len,
++			EXT4_BLOCKS_PER_GROUP(sb) - EXT4_C2B(sbi, blkoff));
++		clen = EXT4_NUM_B2C(sbi, thisgrp_len);
+ 
+-	err = -EIO;
+-	gdp = ext4_get_group_desc(sb, group, &gdp_bh);
+-	if (!gdp)
+-		goto out_err;
++		bitmap_bh = ext4_read_block_bitmap(sb, group);
++		if (IS_ERR(bitmap_bh)) {
++			err = PTR_ERR(bitmap_bh);
++			bitmap_bh = NULL;
++			break;
++		}
+ 
+-	ext4_lock_group(sb, group);
+-	already = 0;
+-	for (i = 0; i < clen; i++)
+-		if (!mb_test_bit(blkoff + i, bitmap_bh->b_data) == !state)
+-			already++;
++		err = -EIO;
++		gdp = ext4_get_group_desc(sb, group, &gdp_bh);
++		if (!gdp)
++			break;
+ 
+-	if (state)
+-		ext4_set_bits(bitmap_bh->b_data, blkoff, clen);
+-	else
+-		mb_test_and_clear_bits(bitmap_bh->b_data, blkoff, clen);
+-	if (ext4_has_group_desc_csum(sb) &&
+-	    (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
+-		gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
+-		ext4_free_group_clusters_set(sb, gdp,
+-					     ext4_free_clusters_after_init(sb,
+-						group, gdp));
+-	}
+-	if (state)
+-		clen = ext4_free_group_clusters(sb, gdp) - clen + already;
+-	else
+-		clen = ext4_free_group_clusters(sb, gdp) + clen - already;
++		ext4_lock_group(sb, group);
++		already = 0;
++		for (i = 0; i < clen; i++)
++			if (!mb_test_bit(blkoff + i, bitmap_bh->b_data) ==
++					 !state)
++				already++;
++
++		clen_changed = clen - already;
++		if (state)
++			ext4_set_bits(bitmap_bh->b_data, blkoff, clen);
++		else
++			mb_test_and_clear_bits(bitmap_bh->b_data, blkoff, clen);
++		if (ext4_has_group_desc_csum(sb) &&
++		    (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
++			gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
++			ext4_free_group_clusters_set(sb, gdp,
++			     ext4_free_clusters_after_init(sb, group, gdp));
++		}
++		if (state)
++			clen = ext4_free_group_clusters(sb, gdp) - clen_changed;
++		else
++			clen = ext4_free_group_clusters(sb, gdp) + clen_changed;
+ 
+-	ext4_free_group_clusters_set(sb, gdp, clen);
+-	ext4_block_bitmap_csum_set(sb, group, gdp, bitmap_bh);
+-	ext4_group_desc_csum_set(sb, group, gdp);
++		ext4_free_group_clusters_set(sb, gdp, clen);
++		ext4_block_bitmap_csum_set(sb, group, gdp, bitmap_bh);
++		ext4_group_desc_csum_set(sb, group, gdp);
+ 
+-	ext4_unlock_group(sb, group);
++		ext4_unlock_group(sb, group);
+ 
+-	if (sbi->s_log_groups_per_flex) {
+-		ext4_group_t flex_group = ext4_flex_group(sbi, group);
++		if (sbi->s_log_groups_per_flex) {
++			ext4_group_t flex_group = ext4_flex_group(sbi, group);
++			struct flex_groups *fg = sbi_array_rcu_deref(sbi,
++						   s_flex_groups, flex_group);
+ 
+-		atomic64_sub(len,
+-			     &sbi_array_rcu_deref(sbi, s_flex_groups,
+-						  flex_group)->free_clusters);
++			if (state)
++				atomic64_sub(clen_changed, &fg->free_clusters);
++			else
++				atomic64_add(clen_changed, &fg->free_clusters);
++
++		}
++
++		err = ext4_handle_dirty_metadata(NULL, NULL, bitmap_bh);
++		if (err)
++			break;
++		sync_dirty_buffer(bitmap_bh);
++		err = ext4_handle_dirty_metadata(NULL, NULL, gdp_bh);
++		sync_dirty_buffer(gdp_bh);
++		if (err)
++			break;
++
++		block += thisgrp_len;
++		len -= thisgrp_len;
++		brelse(bitmap_bh);
++		BUG_ON(len < 0);
+ 	}
+ 
+-	err = ext4_handle_dirty_metadata(NULL, NULL, bitmap_bh);
+ 	if (err)
+-		goto out_err;
+-	sync_dirty_buffer(bitmap_bh);
+-	err = ext4_handle_dirty_metadata(NULL, NULL, gdp_bh);
+-	sync_dirty_buffer(gdp_bh);
+-
+-out_err:
+-	brelse(bitmap_bh);
++		brelse(bitmap_bh);
+ }
+ 
+ /*
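The ext4_mb_mark_bb() rework wraps the whole body in a `while (len > 0)` loop because with flex_bg a (block, len) range can span several block groups; each iteration clamps its chunk with min_t so it touches exactly one group's bitmap. The chunking loop in isolation (hedged: group size hardcoded for illustration, not ext4's real geometry):

```c
#include <stdint.h>

#define BLOCKS_PER_GROUP 128u	/* illustrative, not ext4's real geometry */

/* Split [block, block + len) into per-group chunks, recording each
 * chunk length; returns the number of chunks produced. */
static int walk_groups(uint64_t block, unsigned int len,
		       unsigned int *chunks, int max_chunks)
{
	int n = 0;

	while (len > 0 && n < max_chunks) {
		unsigned int off = (unsigned int)(block % BLOCKS_PER_GROUP);
		unsigned int this_len = BLOCKS_PER_GROUP - off;

		if (this_len > len)	/* the min_t() clamp in the patch */
			this_len = len;
		chunks[n++] = this_len;
		block += this_len;
		len -= this_len;
	}
	return n;
}
```

Each chunk here corresponds to one pass of read-bitmap / mark-bits / dirty-metadata in the real function, which is why the error paths switched from `goto out_err` to `break`.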
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 47b9f87dbc6f7..94db928d1c9e0 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2997,14 +2997,14 @@ bool ext4_empty_dir(struct inode *inode)
+ 	if (inode->i_size < ext4_dir_rec_len(1, NULL) +
+ 					ext4_dir_rec_len(2, NULL)) {
+ 		EXT4_ERROR_INODE(inode, "invalid size");
+-		return true;
++		return false;
+ 	}
+ 	/* The first directory block must not be a hole,
+ 	 * so treat it as DIRENT_HTREE
+ 	 */
+ 	bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
+ 	if (IS_ERR(bh))
+-		return true;
++		return false;
+ 
+ 	de = (struct ext4_dir_entry_2 *) bh->b_data;
+ 	if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, bh->b_size,
+@@ -3012,7 +3012,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 	    le32_to_cpu(de->inode) != inode->i_ino || strcmp(".", de->name)) {
+ 		ext4_warning_inode(inode, "directory missing '.'");
+ 		brelse(bh);
+-		return true;
++		return false;
+ 	}
+ 	offset = ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize);
+ 	de = ext4_next_entry(de, sb->s_blocksize);
+@@ -3021,7 +3021,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 	    le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) {
+ 		ext4_warning_inode(inode, "directory missing '..'");
+ 		brelse(bh);
+-		return true;
++		return false;
+ 	}
+ 	offset += ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize);
+ 	while (offset < inode->i_size) {
+@@ -3035,7 +3035,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 				continue;
+ 			}
+ 			if (IS_ERR(bh))
+-				return true;
++				return false;
+ 		}
+ 		de = (struct ext4_dir_entry_2 *) (bh->b_data +
+ 					(offset & (sb->s_blocksize - 1)));
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 2011e97424438..d913cd9feceab 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -864,6 +864,7 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 	struct page *cp_page_1 = NULL, *cp_page_2 = NULL;
+ 	struct f2fs_checkpoint *cp_block = NULL;
+ 	unsigned long long cur_version = 0, pre_version = 0;
++	unsigned int cp_blocks;
+ 	int err;
+ 
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+@@ -871,15 +872,16 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 	if (err)
+ 		return NULL;
+ 
+-	if (le32_to_cpu(cp_block->cp_pack_total_block_count) >
+-					sbi->blocks_per_seg) {
++	cp_blocks = le32_to_cpu(cp_block->cp_pack_total_block_count);
++
++	if (cp_blocks > sbi->blocks_per_seg || cp_blocks <= F2FS_CP_PACKS) {
+ 		f2fs_warn(sbi, "invalid cp_pack_total_block_count:%u",
+ 			  le32_to_cpu(cp_block->cp_pack_total_block_count));
+ 		goto invalid_cp;
+ 	}
+ 	pre_version = *version;
+ 
+-	cp_addr += le32_to_cpu(cp_block->cp_pack_total_block_count) - 1;
++	cp_addr += cp_blocks - 1;
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_2, version);
+ 	if (err)
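validate_checkpoint() now rejects a cp_pack_total_block_count that is too small as well as too large: a pack must hold more than its header blocks yet still fit in a segment. A sketch of that two-sided range check (the constant stands in for F2FS_CP_PACKS; treat the exact bound as an assumption):

```c
#include <stdbool.h>

#define CP_MIN_BLOCKS 2u	/* stands in for F2FS_CP_PACKS */

/* A checkpoint pack count is only sane inside
 * (CP_MIN_BLOCKS, blocks_per_seg]; anything else is treated as
 * on-disk corruption, matching the patched bounds check. */
static bool cp_block_count_valid(unsigned int cp_blocks,
				 unsigned int blocks_per_seg)
{
	return cp_blocks > CP_MIN_BLOCKS && cp_blocks <= blocks_per_seg;
}
```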
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 190a3c4d4c913..ced0fd54658c1 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -313,10 +313,9 @@ static int lz4_decompress_pages(struct decompress_io_ctx *dic)
+ 	}
+ 
+ 	if (ret != PAGE_SIZE << dic->log_cluster_size) {
+-		printk_ratelimited("%sF2FS-fs (%s): lz4 invalid rlen:%zu, "
++		printk_ratelimited("%sF2FS-fs (%s): lz4 invalid ret:%d, "
+ 					"expected:%lu\n", KERN_ERR,
+-					F2FS_I_SB(dic->inode)->sb->s_id,
+-					dic->rlen,
++					F2FS_I_SB(dic->inode)->sb->s_id, ret,
+ 					PAGE_SIZE << dic->log_cluster_size);
+ 		return -EIO;
+ 	}
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 3ba75587a2cda..5b4b261308443 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -3267,8 +3267,12 @@ static int __f2fs_write_data_pages(struct address_space *mapping,
+ 	/* to avoid spliting IOs due to mixed WB_SYNC_ALL and WB_SYNC_NONE */
+ 	if (wbc->sync_mode == WB_SYNC_ALL)
+ 		atomic_inc(&sbi->wb_sync_req[DATA]);
+-	else if (atomic_read(&sbi->wb_sync_req[DATA]))
++	else if (atomic_read(&sbi->wb_sync_req[DATA])) {
++		/* to avoid potential deadlock */
++		if (current->plug)
++			blk_finish_plug(current->plug);
+ 		goto skip_write;
++	}
+ 
+ 	if (__should_serialize_io(inode, wbc)) {
+ 		mutex_lock(&sbi->writepages);
+@@ -3459,7 +3463,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
+ 
+ 		*fsdata = NULL;
+ 
+-		if (len == PAGE_SIZE)
++		if (len == PAGE_SIZE && !(f2fs_is_atomic_file(inode)))
+ 			goto repeat;
+ 
+ 		ret = f2fs_prepare_compress_overwrite(inode, pagep,
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index 8c50518475a99..b449c7a372a4b 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -21,7 +21,7 @@
+ #include "gc.h"
+ 
+ static LIST_HEAD(f2fs_stat_list);
+-static DEFINE_MUTEX(f2fs_stat_mutex);
++static DEFINE_RAW_SPINLOCK(f2fs_stat_lock);
+ #ifdef CONFIG_DEBUG_FS
+ static struct dentry *f2fs_debugfs_root;
+ #endif
+@@ -338,14 +338,16 @@ static char *s_flag[] = {
+ 	[SBI_QUOTA_SKIP_FLUSH]	= " quota_skip_flush",
+ 	[SBI_QUOTA_NEED_REPAIR]	= " quota_need_repair",
+ 	[SBI_IS_RESIZEFS]	= " resizefs",
++	[SBI_IS_FREEZING]	= " freezefs",
+ };
+ 
+ static int stat_show(struct seq_file *s, void *v)
+ {
+ 	struct f2fs_stat_info *si;
+ 	int i = 0, j = 0;
++	unsigned long flags;
+ 
+-	mutex_lock(&f2fs_stat_mutex);
++	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
+ 	list_for_each_entry(si, &f2fs_stat_list, stat_list) {
+ 		update_general_status(si->sbi);
+ 
+@@ -573,7 +575,7 @@ static int stat_show(struct seq_file *s, void *v)
+ 		seq_printf(s, "  - paged : %llu KB\n",
+ 				si->page_mem >> 10);
+ 	}
+-	mutex_unlock(&f2fs_stat_mutex);
++	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
+ 	return 0;
+ }
+ 
+@@ -584,6 +586,7 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ {
+ 	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
+ 	struct f2fs_stat_info *si;
++	unsigned long flags;
+ 	int i;
+ 
+ 	si = f2fs_kzalloc(sbi, sizeof(struct f2fs_stat_info), GFP_KERNEL);
+@@ -619,9 +622,9 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ 	atomic_set(&sbi->max_aw_cnt, 0);
+ 	atomic_set(&sbi->max_vw_cnt, 0);
+ 
+-	mutex_lock(&f2fs_stat_mutex);
++	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
+ 	list_add_tail(&si->stat_list, &f2fs_stat_list);
+-	mutex_unlock(&f2fs_stat_mutex);
++	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -629,10 +632,11 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ void f2fs_destroy_stats(struct f2fs_sb_info *sbi)
+ {
+ 	struct f2fs_stat_info *si = F2FS_STAT(sbi);
++	unsigned long flags;
+ 
+-	mutex_lock(&f2fs_stat_mutex);
++	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
+ 	list_del(&si->stat_list);
+-	mutex_unlock(&f2fs_stat_mutex);
++	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
+ 
+ 	kfree(si);
+ }
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index d753094a4919f..980c0c2b6350a 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1259,6 +1259,7 @@ enum {
+ 	SBI_QUOTA_SKIP_FLUSH,			/* skip flushing quota in current CP */
+ 	SBI_QUOTA_NEED_REPAIR,			/* quota file may be corrupted */
+ 	SBI_IS_RESIZEFS,			/* resizefs is in process */
++	SBI_IS_FREEZING,			/* freezefs is in process */
+ };
+ 
+ enum {
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index b752d1a6e6ff8..21dde125ff77d 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2002,7 +2002,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 
+ 	inode_lock(inode);
+ 
+-	f2fs_disable_compressed_file(inode);
++	if (!f2fs_disable_compressed_file(inode)) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	if (f2fs_is_atomic_file(inode)) {
+ 		if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST))
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index b538cbcba351d..fc9e33859b6c6 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1026,8 +1026,10 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 	}
+ 
+-	if (f2fs_check_nid_range(sbi, dni->ino))
++	if (f2fs_check_nid_range(sbi, dni->ino)) {
++		f2fs_put_page(node_page, 1);
+ 		return false;
++	}
+ 
+ 	*nofs = ofs_of_node(node_page);
+ 	source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 1db7823d5a131..a40e52ba5ec84 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -769,7 +769,8 @@ void f2fs_evict_inode(struct inode *inode)
+ 	f2fs_remove_ino_entry(sbi, inode->i_ino, UPDATE_INO);
+ 	f2fs_remove_ino_entry(sbi, inode->i_ino, FLUSH_INO);
+ 
+-	sb_start_intwrite(inode->i_sb);
++	if (!is_sbi_flag_set(sbi, SBI_IS_FREEZING))
++		sb_start_intwrite(inode->i_sb);
+ 	set_inode_flag(inode, FI_NO_ALLOC);
+ 	i_size_write(inode, 0);
+ retry:
+@@ -800,7 +801,8 @@ retry:
+ 		if (dquot_initialize_needed(inode))
+ 			set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
+ 	}
+-	sb_end_intwrite(inode->i_sb);
++	if (!is_sbi_flag_set(sbi, SBI_IS_FREEZING))
++		sb_end_intwrite(inode->i_sb);
+ no_delete:
+ 	dquot_drop(inode);
+ 
+@@ -876,6 +878,7 @@ void f2fs_handle_failed_inode(struct inode *inode)
+ 	err = f2fs_get_node_info(sbi, inode->i_ino, &ni);
+ 	if (err) {
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		set_inode_flag(inode, FI_FREE_NID);
+ 		f2fs_warn(sbi, "May loss orphan inode, run fsck to fix.");
+ 		goto out;
+ 	}
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 556fcd8457f3f..69c6bcaf5aae8 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2106,8 +2106,12 @@ static int f2fs_write_node_pages(struct address_space *mapping,
+ 
+ 	if (wbc->sync_mode == WB_SYNC_ALL)
+ 		atomic_inc(&sbi->wb_sync_req[NODE]);
+-	else if (atomic_read(&sbi->wb_sync_req[NODE]))
++	else if (atomic_read(&sbi->wb_sync_req[NODE])) {
++		/* to avoid potential deadlock */
++		if (current->plug)
++			blk_finish_plug(current->plug);
+ 		goto skip_write;
++	}
+ 
+ 	trace_f2fs_writepages(mapping->host, wbc, NODE);
+ 
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index df9ed75f0b7a7..5fd33aabd0fc5 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -4792,6 +4792,13 @@ static int sanity_check_curseg(struct f2fs_sb_info *sbi)
+ 
+ 		sanity_check_seg_type(sbi, curseg->seg_type);
+ 
++		if (curseg->alloc_type != LFS && curseg->alloc_type != SSR) {
++			f2fs_err(sbi,
++				 "Current segment has invalid alloc_type:%d",
++				 curseg->alloc_type);
++			return -EFSCORRUPTED;
++		}
++
+ 		if (f2fs_test_bit(blkofs, se->cur_valid_map))
+ 			goto out;
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 08faa787a773b..9402d5b2c7fda 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1668,11 +1668,15 @@ static int f2fs_freeze(struct super_block *sb)
+ 	/* ensure no checkpoint required */
+ 	if (!llist_empty(&F2FS_SB(sb)->cprc_info.issue_list))
+ 		return -EINVAL;
++
++	/* to avoid deadlock on f2fs_evict_inode->SB_FREEZE_FS */
++	set_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
+ 	return 0;
+ }
+ 
+ static int f2fs_unfreeze(struct super_block *sb)
+ {
++	clear_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
+ 	return 0;
+ }
+ 
+@@ -2695,7 +2699,7 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ 	struct quota_info *dqopt = sb_dqopt(sb);
+ 	int cnt;
+-	int ret;
++	int ret = 0;
+ 
+ 	/*
+ 	 * Now when everything is written we can discard the pagecache so
+@@ -2706,8 +2710,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 		if (type != -1 && cnt != type)
+ 			continue;
+ 
+-		if (!sb_has_quota_active(sb, type))
+-			return 0;
++		if (!sb_has_quota_active(sb, cnt))
++			continue;
+ 
+ 		inode_lock(dqopt->files[cnt]);
+ 
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 8cdeac090f81d..8885c73834725 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -473,7 +473,7 @@ out:
+ 		} else if (t == GC_IDLE_AT) {
+ 			if (!sbi->am.atgc_enabled)
+ 				return -EINVAL;
+-			sbi->gc_mode = GC_AT;
++			sbi->gc_mode = GC_IDLE_AT;
+ 		} else {
+ 			sbi->gc_mode = GC_NORMAL;
+ 		}
+diff --git a/fs/file.c b/fs/file.c
+index 97d212a9b8144..ee93173467025 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -87,6 +87,21 @@ static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
+ 	copy_fd_bitmaps(nfdt, ofdt, ofdt->max_fds);
+ }
+ 
++/*
++ * Note how the fdtable bitmap allocations very much have to be a multiple of
++ * BITS_PER_LONG. This is not only because we walk those things in chunks of
++ * 'unsigned long' in some places, but simply because that is how the Linux
++ * kernel bitmaps are defined to work: they are not "bits in an array of bytes",
++ * they are very much "bits in an array of unsigned long".
++ *
++ * The ALIGN(nr, BITS_PER_LONG) here is for clarity: since we just multiplied
++ * by that "1024/sizeof(ptr)" before, we already know there are sufficient
++ * clear low bits. Clang seems to realize that, gcc ends up being confused.
++ *
++ * On a 128-bit machine, the ALIGN() would actually matter. In the meantime,
++ * let's consider it documentation (and maybe a test-case for gcc to improve
++ * its code generation ;)
++ */
+ static struct fdtable * alloc_fdtable(unsigned int nr)
+ {
+ 	struct fdtable *fdt;
+@@ -102,6 +117,7 @@ static struct fdtable * alloc_fdtable(unsigned int nr)
+ 	nr /= (1024 / sizeof(struct file *));
+ 	nr = roundup_pow_of_two(nr + 1);
+ 	nr *= (1024 / sizeof(struct file *));
++	nr = ALIGN(nr, BITS_PER_LONG);
+ 	/*
+ 	 * Note that this can drive nr *below* what we had passed if sysctl_nr_open
+ 	 * had been set lower between the check in expand_files() and here.  Deal
+@@ -269,6 +285,19 @@ static unsigned int count_open_files(struct fdtable *fdt)
+ 	return i;
+ }
+ 
++/*
++ * Note that a sane fdtable size always has to be a multiple of
++ * BITS_PER_LONG, since we have bitmaps that are sized by this.
++ *
++ * 'max_fds' will normally already be properly aligned, but it
++ * turns out that in the close_range() -> __close_range() ->
++ * unshare_fd() -> dup_fd() -> sane_fdtable_size() we can end
++ * up having a 'max_fds' value that isn't already aligned.
++ *
++ * Rather than make close_range() have to worry about this,
++ * just make that BITS_PER_LONG alignment be part of a sane
++ * fdtable size. Because that's really what it is.
++ */
+ static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
+ {
+ 	unsigned int count;
+@@ -276,7 +305,7 @@ static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
+ 	count = count_open_files(fdt);
+ 	if (max_fds < NR_OPEN_DEFAULT)
+ 		max_fds = NR_OPEN_DEFAULT;
+-	return min(count, max_fds);
++	return ALIGN(min(count, max_fds), BITS_PER_LONG);
+ }
+ 
+ /*
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index d67108489148e..fbdb7a30470a3 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -2146,7 +2146,7 @@ int gfs2_setattr_size(struct inode *inode, u64 newsize)
+ 
+ 	ret = do_shrink(inode, newsize);
+ out:
+-	gfs2_rs_delete(ip, NULL);
++	gfs2_rs_delete(ip);
+ 	gfs2_qa_put(ip);
+ 	return ret;
+ }
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index 8c39a8571b1fa..b53ad18e5ccbf 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -706,7 +706,7 @@ static int gfs2_release(struct inode *inode, struct file *file)
+ 
+ 	if (file->f_mode & FMODE_WRITE) {
+ 		if (gfs2_rs_active(&ip->i_res))
+-			gfs2_rs_delete(ip, &inode->i_writecount);
++			gfs2_rs_delete(ip);
+ 		gfs2_qa_put(ip);
+ 	}
+ 	return 0;
+@@ -1083,6 +1083,7 @@ out_uninit:
+ 	gfs2_holder_uninit(gh);
+ 	if (statfs_gh)
+ 		kfree(statfs_gh);
++	from->count = orig_count - read;
+ 	return read ? read : ret;
+ }
+ 
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 89905f4f29bb6..66a123306aecb 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -793,7 +793,7 @@ fail_free_inode:
+ 		if (free_vfs_inode) /* else evict will do the put for us */
+ 			gfs2_glock_put(ip->i_gl);
+ 	}
+-	gfs2_rs_delete(ip, NULL);
++	gfs2_rs_deltree(&ip->i_res);
+ 	gfs2_qa_put(ip);
+ fail_free_acls:
+ 	posix_acl_release(default_acl);
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 0fb3c01bc5577..3b34bb24d0af4 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -680,13 +680,14 @@ void gfs2_rs_deltree(struct gfs2_blkreserv *rs)
+ /**
+  * gfs2_rs_delete - delete a multi-block reservation
+  * @ip: The inode for this reservation
+- * @wcount: The inode's write count, or NULL
+  *
+  */
+-void gfs2_rs_delete(struct gfs2_inode *ip, atomic_t *wcount)
++void gfs2_rs_delete(struct gfs2_inode *ip)
+ {
++	struct inode *inode = &ip->i_inode;
++
+ 	down_write(&ip->i_rw_mutex);
+-	if ((wcount == NULL) || (atomic_read(wcount) <= 1))
++	if (atomic_read(&inode->i_writecount) <= 1)
+ 		gfs2_rs_deltree(&ip->i_res);
+ 	up_write(&ip->i_rw_mutex);
+ }
+@@ -1415,7 +1416,8 @@ int gfs2_fitrim(struct file *filp, void __user *argp)
+ 
+ 	start = r.start >> bs_shift;
+ 	end = start + (r.len >> bs_shift);
+-	minlen = max_t(u64, r.minlen,
++	minlen = max_t(u64, r.minlen, sdp->sd_sb.sb_bsize);
++	minlen = max_t(u64, minlen,
+ 		       q->limits.discard_granularity) >> bs_shift;
+ 
+ 	if (end <= start || minlen > sdp->sd_max_rg_data)
+diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h
+index 3e2ca1fb43056..46dd94e9e085c 100644
+--- a/fs/gfs2/rgrp.h
++++ b/fs/gfs2/rgrp.h
+@@ -45,7 +45,7 @@ extern int gfs2_alloc_blocks(struct gfs2_inode *ip, u64 *bn, unsigned int *n,
+ 			     bool dinode, u64 *generation);
+ 
+ extern void gfs2_rs_deltree(struct gfs2_blkreserv *rs);
+-extern void gfs2_rs_delete(struct gfs2_inode *ip, atomic_t *wcount);
++extern void gfs2_rs_delete(struct gfs2_inode *ip);
+ extern void __gfs2_free_blocks(struct gfs2_inode *ip, struct gfs2_rgrpd *rgd,
+ 			       u64 bstart, u32 blen, int meta);
+ extern void gfs2_free_meta(struct gfs2_inode *ip, struct gfs2_rgrpd *rgd,
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 0f93e8beca4d9..313a8be610ef0 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1398,7 +1398,7 @@ out:
+ 	truncate_inode_pages_final(&inode->i_data);
+ 	if (ip->i_qadata)
+ 		gfs2_assert_warn(sdp, ip->i_qadata->qa_ref == 0);
+-	gfs2_rs_delete(ip, NULL);
++	gfs2_rs_deltree(&ip->i_res);
+ 	gfs2_ordered_del_inode(ip);
+ 	clear_inode(inode);
+ 	gfs2_dir_hash_inval(ip);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index db724482cd117..49cafa0a8b8fb 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2720,8 +2720,12 @@ static bool io_rw_should_reissue(struct io_kiocb *req)
+ 
+ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
+ {
+-	if (req->rw.kiocb.ki_flags & IOCB_WRITE)
++	if (req->rw.kiocb.ki_flags & IOCB_WRITE) {
+ 		kiocb_end_write(req);
++		fsnotify_modify(req->file);
++	} else {
++		fsnotify_access(req->file);
++	}
+ 	if (unlikely(res != req->result)) {
+ 		if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
+ 		    io_rw_should_reissue(req)) {
+@@ -3346,13 +3350,15 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ 				ret = nr;
+ 			break;
+ 		}
++		ret += nr;
+ 		if (!iov_iter_is_bvec(iter)) {
+ 			iov_iter_advance(iter, nr);
+ 		} else {
+-			req->rw.len -= nr;
+ 			req->rw.addr += nr;
++			req->rw.len -= nr;
++			if (!req->rw.len)
++				break;
+ 		}
+-		ret += nr;
+ 		if (nr != iovec.iov_len)
+ 			break;
+ 	}
+@@ -4211,6 +4217,8 @@ static int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
+ 				req->sync.len);
+ 	if (ret < 0)
+ 		req_set_fail(req);
++	else
++		fsnotify_modify(req->file);
+ 	io_req_complete(req, ret);
+ 	return 0;
+ }
+@@ -5172,8 +5180,7 @@ static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	accept->nofile = rlimit(RLIMIT_NOFILE);
+ 
+ 	accept->file_slot = READ_ONCE(sqe->file_index);
+-	if (accept->file_slot && ((req->open.how.flags & O_CLOEXEC) ||
+-				  (accept->flags & SOCK_CLOEXEC)))
++	if (accept->file_slot && (accept->flags & SOCK_CLOEXEC))
+ 		return -EINVAL;
+ 	if (accept->flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
+ 		return -EINVAL;
+@@ -8173,6 +8180,7 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ 			fput(fpl->fp[i]);
+ 	} else {
+ 		kfree_skb(skb);
++		free_uid(fpl->user);
+ 		kfree(fpl);
+ 	}
+ 
+diff --git a/fs/jffs2/build.c b/fs/jffs2/build.c
+index b288c8ae1236b..837cd55fd4c5e 100644
+--- a/fs/jffs2/build.c
++++ b/fs/jffs2/build.c
+@@ -415,13 +415,15 @@ int jffs2_do_mount_fs(struct jffs2_sb_info *c)
+ 		jffs2_free_ino_caches(c);
+ 		jffs2_free_raw_node_refs(c);
+ 		ret = -EIO;
+-		goto out_free;
++		goto out_sum_exit;
+ 	}
+ 
+ 	jffs2_calc_trigger_levels(c);
+ 
+ 	return 0;
+ 
++ out_sum_exit:
++	jffs2_sum_exit(c);
+  out_free:
+ 	kvfree(c->blocks);
+ 
+diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c
+index 2ac410477c4f4..71f03a5d36ed2 100644
+--- a/fs/jffs2/fs.c
++++ b/fs/jffs2/fs.c
+@@ -603,8 +603,8 @@ out_root:
+ 	jffs2_free_ino_caches(c);
+ 	jffs2_free_raw_node_refs(c);
+ 	kvfree(c->blocks);
+- out_inohash:
+ 	jffs2_clear_xattr_subsystem(c);
++ out_inohash:
+ 	kfree(c->inocache_list);
+  out_wbuf:
+ 	jffs2_flash_cleanup(c);
+diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
+index b676056826beb..29671e33a1714 100644
+--- a/fs/jffs2/scan.c
++++ b/fs/jffs2/scan.c
+@@ -136,7 +136,7 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
+ 		if (!s) {
+ 			JFFS2_WARNING("Can't allocate memory for summary\n");
+ 			ret = -ENOMEM;
+-			goto out;
++			goto out_buf;
+ 		}
+ 	}
+ 
+@@ -275,13 +275,15 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
+ 	}
+ 	ret = 0;
+  out:
++	jffs2_sum_reset_collected(s);
++	kfree(s);
++ out_buf:
+ 	if (buf_size)
+ 		kfree(flashbuf);
+ #ifndef __ECOS
+ 	else
+ 		mtd_unpoint(c->mtd, 0, c->mtd->size);
+ #endif
+-	kfree(s);
+ 	return ret;
+ }
+ 
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 91f4ec93dab1f..d8502f4989d9d 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -148,6 +148,7 @@ static const s8 budtab[256] = {
+  *	0	- success
+  *	-ENOMEM	- insufficient memory
+  *	-EIO	- i/o error
++ *	-EINVAL - wrong bmap data
+  */
+ int dbMount(struct inode *ipbmap)
+ {
+@@ -179,6 +180,12 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_nfree = le64_to_cpu(dbmp_le->dn_nfree);
+ 	bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
+ 	bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
++	if (!bmp->db_numag) {
++		release_metapage(mp);
++		kfree(bmp);
++		return -EINVAL;
++	}
++
+ 	bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
+ 	bmp->db_maxag = le32_to_cpu(dbmp_le->dn_maxag);
+ 	bmp->db_agpref = le32_to_cpu(dbmp_le->dn_agpref);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index c343666d9a428..6464dde03705c 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -358,12 +358,11 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ 				  struct cb_process_state *cps)
+ {
+ 	struct cb_devicenotifyargs *args = argp;
++	const struct pnfs_layoutdriver_type *ld = NULL;
+ 	uint32_t i;
+ 	__be32 res = 0;
+-	struct nfs_client *clp = cps->clp;
+-	struct nfs_server *server = NULL;
+ 
+-	if (!clp) {
++	if (!cps->clp) {
+ 		res = cpu_to_be32(NFS4ERR_OP_NOT_IN_SESSION);
+ 		goto out;
+ 	}
+@@ -371,23 +370,15 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ 	for (i = 0; i < args->ndevs; i++) {
+ 		struct cb_devicenotifyitem *dev = &args->devs[i];
+ 
+-		if (!server ||
+-		    server->pnfs_curr_ld->id != dev->cbd_layout_type) {
+-			rcu_read_lock();
+-			list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link)
+-				if (server->pnfs_curr_ld &&
+-				    server->pnfs_curr_ld->id == dev->cbd_layout_type) {
+-					rcu_read_unlock();
+-					goto found;
+-				}
+-			rcu_read_unlock();
+-			continue;
++		if (!ld || ld->id != dev->cbd_layout_type) {
++			pnfs_put_layoutdriver(ld);
++			ld = pnfs_find_layoutdriver(dev->cbd_layout_type);
++			if (!ld)
++				continue;
+ 		}
+-
+-	found:
+-		nfs4_delete_deviceid(server->pnfs_curr_ld, clp, &dev->cbd_dev_id);
++		nfs4_delete_deviceid(ld, cps->clp, &dev->cbd_dev_id);
+ 	}
+-
++	pnfs_put_layoutdriver(ld);
+ out:
+ 	kfree(args->devs);
+ 	return res;
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index f90de8043b0f9..8dcb08e1a885d 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -271,10 +271,6 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ 	n = ntohl(*p++);
+ 	if (n == 0)
+ 		goto out;
+-	if (n > ULONG_MAX / sizeof(*args->devs)) {
+-		status = htonl(NFS4ERR_BADXDR);
+-		goto out;
+-	}
+ 
+ 	args->devs = kmalloc_array(n, sizeof(*args->devs), GFP_KERNEL);
+ 	if (!args->devs) {
+diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
+index 7fba7711e6b3a..3d5ba43f44bb6 100644
+--- a/fs/nfs/nfs2xdr.c
++++ b/fs/nfs/nfs2xdr.c
+@@ -949,7 +949,7 @@ int nfs2_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_filename_inline(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	/*
+ 	 * The type (size and byte order) of nfscookie isn't defined in
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index 9274c9c5efea6..7ab60ad98776f 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -1967,7 +1967,6 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 		       bool plus)
+ {
+ 	struct user_namespace *userns = rpc_userns(entry->server->client);
+-	struct nfs_entry old = *entry;
+ 	__be32 *p;
+ 	int error;
+ 	u64 new_cookie;
+@@ -1987,15 +1986,15 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_fileid3(xdr, &entry->ino);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	error = decode_inline_filename3(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	error = decode_cookie3(xdr, &new_cookie);
+ 	if (unlikely(error))
+-		return error;
++		return -EAGAIN;
+ 
+ 	entry->d_type = DT_UNKNOWN;
+ 
+@@ -2003,7 +2002,7 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 		entry->fattr->valid = 0;
+ 		error = decode_post_op_attr(xdr, entry->fattr, userns);
+ 		if (unlikely(error))
+-			return error;
++			return -EAGAIN;
+ 		if (entry->fattr->valid & NFS_ATTR_FATTR_V3)
+ 			entry->d_type = nfs_umode_to_dtype(entry->fattr->mode);
+ 
+@@ -2018,11 +2017,8 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 			return -EAGAIN;
+ 		if (*p != xdr_zero) {
+ 			error = decode_nfs_fh3(xdr, entry->fh);
+-			if (unlikely(error)) {
+-				if (error == -E2BIG)
+-					goto out_truncated;
+-				return error;
+-			}
++			if (unlikely(error))
++				return -EAGAIN;
+ 		} else
+ 			zero_nfs_fh3(entry->fh);
+ 	}
+@@ -2031,11 +2027,6 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 	entry->cookie = new_cookie;
+ 
+ 	return 0;
+-
+-out_truncated:
+-	dprintk("NFS: directory entry contains invalid file handle\n");
+-	*entry = old;
+-	return -EAGAIN;
+ }
+ 
+ /*
+@@ -2228,6 +2219,7 @@ static int decode_fsinfo3resok(struct xdr_stream *xdr,
+ 	/* ignore properties */
+ 	result->lease_time = 0;
+ 	result->change_attr_type = NFS4_CHANGE_TYPE_IS_UNDEFINED;
++	result->xattr_support = 0;
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 0abbbf5d2bdf1..5298c3a278c59 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -8275,6 +8275,7 @@ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
+ 	case -NFS4ERR_DEADSESSION:
+ 		nfs4_schedule_session_recovery(clp->cl_session,
+ 				task->tk_status);
++		return;
+ 	}
+ 	if (args->dir == NFS4_CDFC4_FORE_OR_BOTH &&
+ 			res->dir != NFS4_CDFS4_BOTH) {
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index ad7f83dc9a2df..815d630802451 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1218,6 +1218,7 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+ 
+ 	do {
+ 		list_splice_init(&mirror->pg_list, &head);
++		mirror->pg_recoalesce = 0;
+ 
+ 		while (!list_empty(&head)) {
+ 			struct nfs_page *req;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 7c9090a28e5c3..7ddd003ab8b1a 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -92,6 +92,17 @@ find_pnfs_driver(u32 id)
+ 	return local;
+ }
+ 
++const struct pnfs_layoutdriver_type *pnfs_find_layoutdriver(u32 id)
++{
++	return find_pnfs_driver(id);
++}
++
++void pnfs_put_layoutdriver(const struct pnfs_layoutdriver_type *ld)
++{
++	if (ld)
++		module_put(ld->owner);
++}
++
+ void
+ unset_pnfs_layoutdriver(struct nfs_server *nfss)
+ {
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index f4d7548d67b24..07f11489e4e9f 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -234,6 +234,8 @@ struct pnfs_devicelist {
+ 
+ extern int pnfs_register_layoutdriver(struct pnfs_layoutdriver_type *);
+ extern void pnfs_unregister_layoutdriver(struct pnfs_layoutdriver_type *);
++extern const struct pnfs_layoutdriver_type *pnfs_find_layoutdriver(u32 id);
++extern void pnfs_put_layoutdriver(const struct pnfs_layoutdriver_type *ld);
+ 
+ /* nfs4proc.c */
+ extern size_t max_response_pages(struct nfs_server *server);
+diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c
+index 73dcaa99fa9ba..e3570c656b0f9 100644
+--- a/fs/nfs/proc.c
++++ b/fs/nfs/proc.c
+@@ -92,6 +92,7 @@ nfs_proc_get_root(struct nfs_server *server, struct nfs_fh *fhandle,
+ 	info->maxfilesize = 0x7FFFFFFF;
+ 	info->lease_time = 0;
+ 	info->change_attr_type = NFS4_CHANGE_TYPE_IS_UNDEFINED;
++	info->xattr_support = 0;
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 9b7619ce17a77..e86aff429993e 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -315,7 +315,10 @@ static void nfs_mapping_set_error(struct page *page, int error)
+ 	struct address_space *mapping = page_file_mapping(page);
+ 
+ 	SetPageError(page);
+-	mapping_set_error(mapping, error);
++	filemap_set_wb_err(mapping, error);
++	if (mapping->host)
++		errseq_set(&mapping->host->i_sb->s_wb_err,
++			   error == -ENOSPC ? -ENOSPC : -EIO);
+ 	nfs_set_pageerror(mapping);
+ }
+ 
+@@ -1408,6 +1411,8 @@ static void nfs_initiate_write(struct nfs_pgio_header *hdr,
+ {
+ 	int priority = flush_task_priority(how);
+ 
++	if (IS_SWAPFILE(hdr->inode))
++		task_setup_data->flags |= RPC_TASK_SWAPPER;
+ 	task_setup_data->priority = priority;
+ 	rpc_ops->write_setup(hdr, msg, &task_setup_data->rpc_client);
+ 	trace_nfs_initiate_write(hdr);
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index fdf89fcf1a0ca..4cffe05e04775 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -644,7 +644,7 @@ nfsd_file_cache_init(void)
+ 	if (!nfsd_filecache_wq)
+ 		goto out;
+ 
+-	nfsd_file_hashtbl = kcalloc(NFSD_FILE_HASH_SIZE,
++	nfsd_file_hashtbl = kvcalloc(NFSD_FILE_HASH_SIZE,
+ 				sizeof(*nfsd_file_hashtbl), GFP_KERNEL);
+ 	if (!nfsd_file_hashtbl) {
+ 		pr_err("nfsd: unable to allocate nfsd_file_hashtbl\n");
+@@ -712,7 +712,7 @@ out_err:
+ 	nfsd_file_slab = NULL;
+ 	kmem_cache_destroy(nfsd_file_mark_slab);
+ 	nfsd_file_mark_slab = NULL;
+-	kfree(nfsd_file_hashtbl);
++	kvfree(nfsd_file_hashtbl);
+ 	nfsd_file_hashtbl = NULL;
+ 	destroy_workqueue(nfsd_filecache_wq);
+ 	nfsd_filecache_wq = NULL;
+@@ -858,7 +858,7 @@ nfsd_file_cache_shutdown(void)
+ 	fsnotify_wait_marks_destroyed();
+ 	kmem_cache_destroy(nfsd_file_mark_slab);
+ 	nfsd_file_mark_slab = NULL;
+-	kfree(nfsd_file_hashtbl);
++	kvfree(nfsd_file_hashtbl);
+ 	nfsd_file_hashtbl = NULL;
+ 	destroy_workqueue(nfsd_filecache_wq);
+ 	nfsd_filecache_wq = NULL;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index ac0ddde8beef2..6075a328d756e 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4693,6 +4693,14 @@ nfsd_break_deleg_cb(struct file_lock *fl)
+ 	return ret;
+ }
+ 
++/**
++ * nfsd_breaker_owns_lease - Check if lease conflict was resolved
++ * @fl: Lock state to check
++ *
++ * Return values:
++ *   %true: Lease conflict was resolved
++ *   %false: Lease conflict was not resolved.
++ */
+ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+ {
+ 	struct nfs4_delegation *dl = fl->fl_owner;
+@@ -4700,11 +4708,11 @@ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+ 	struct nfs4_client *clp;
+ 
+ 	if (!i_am_nfsd())
+-		return NULL;
++		return false;
+ 	rqst = kthread_data(current);
+ 	/* Note rq_prog == NFS_ACL_PROGRAM is also possible: */
+ 	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
+-		return NULL;
++		return false;
+ 	clp = *(rqst->rq_lease_breaker);
+ 	return dl->dl_stid.sc_client == clp;
+ }
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index 312fd289be583..9700ad433b498 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -230,7 +230,7 @@ nfsd_proc_write(struct svc_rqst *rqstp)
+ 	unsigned long cnt = argp->len;
+ 	unsigned int nvecs;
+ 
+-	dprintk("nfsd: WRITE    %s %d bytes at %d\n",
++	dprintk("nfsd: WRITE    %s %u bytes at %d\n",
+ 		SVCFH_fmt(&argp->fh),
+ 		argp->len, argp->offset);
+ 
+diff --git a/fs/nfsd/xdr.h b/fs/nfsd/xdr.h
+index 528fb299430e6..852f71580bd06 100644
+--- a/fs/nfsd/xdr.h
++++ b/fs/nfsd/xdr.h
+@@ -32,7 +32,7 @@ struct nfsd_readargs {
+ struct nfsd_writeargs {
+ 	svc_fh			fh;
+ 	__u32			offset;
+-	int			len;
++	__u32			len;
+ 	struct xdr_buf		payload;
+ };
+ 
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index 4474adb393ca8..517b71c73aa96 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -1881,6 +1881,10 @@ int ntfs_read_inode_mount(struct inode *vi)
+ 		}
+ 		/* Now allocate memory for the attribute list. */
+ 		ni->attr_list_size = (u32)ntfs_attr_size(a);
++		if (!ni->attr_list_size) {
++			ntfs_error(sb, "Attr_list_size is zero");
++			goto put_err_out;
++		}
+ 		ni->attr_list = ntfs_malloc_nofs(ni->attr_list_size);
+ 		if (!ni->attr_list) {
+ 			ntfs_error(sb, "Not enough memory to allocate buffer "
+diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
+index f033de733adb3..effe92c7d6937 100644
+--- a/fs/ocfs2/quota_global.c
++++ b/fs/ocfs2/quota_global.c
+@@ -337,7 +337,6 @@ void ocfs2_unlock_global_qf(struct ocfs2_mem_dqinfo *oinfo, int ex)
+ /* Read information header from global quota file */
+ int ocfs2_global_read_info(struct super_block *sb, int type)
+ {
+-	struct inode *gqinode = NULL;
+ 	unsigned int ino[OCFS2_MAXQUOTAS] = { USER_QUOTA_SYSTEM_INODE,
+ 					      GROUP_QUOTA_SYSTEM_INODE };
+ 	struct ocfs2_global_disk_dqinfo dinfo;
+@@ -346,29 +345,31 @@ int ocfs2_global_read_info(struct super_block *sb, int type)
+ 	u64 pcount;
+ 	int status;
+ 
++	oinfo->dqi_gi.dqi_sb = sb;
++	oinfo->dqi_gi.dqi_type = type;
++	ocfs2_qinfo_lock_res_init(&oinfo->dqi_gqlock, oinfo);
++	oinfo->dqi_gi.dqi_entry_size = sizeof(struct ocfs2_global_disk_dqblk);
++	oinfo->dqi_gi.dqi_ops = &ocfs2_global_ops;
++	oinfo->dqi_gqi_bh = NULL;
++	oinfo->dqi_gqi_count = 0;
++
+ 	/* Read global header */
+-	gqinode = ocfs2_get_system_file_inode(OCFS2_SB(sb), ino[type],
++	oinfo->dqi_gqinode = ocfs2_get_system_file_inode(OCFS2_SB(sb), ino[type],
+ 			OCFS2_INVALID_SLOT);
+-	if (!gqinode) {
++	if (!oinfo->dqi_gqinode) {
+ 		mlog(ML_ERROR, "failed to get global quota inode (type=%d)\n",
+ 			type);
+ 		status = -EINVAL;
+ 		goto out_err;
+ 	}
+-	oinfo->dqi_gi.dqi_sb = sb;
+-	oinfo->dqi_gi.dqi_type = type;
+-	oinfo->dqi_gi.dqi_entry_size = sizeof(struct ocfs2_global_disk_dqblk);
+-	oinfo->dqi_gi.dqi_ops = &ocfs2_global_ops;
+-	oinfo->dqi_gqi_bh = NULL;
+-	oinfo->dqi_gqi_count = 0;
+-	oinfo->dqi_gqinode = gqinode;
++
+ 	status = ocfs2_lock_global_qf(oinfo, 0);
+ 	if (status < 0) {
+ 		mlog_errno(status);
+ 		goto out_err;
+ 	}
+ 
+-	status = ocfs2_extent_map_get_blocks(gqinode, 0, &oinfo->dqi_giblk,
++	status = ocfs2_extent_map_get_blocks(oinfo->dqi_gqinode, 0, &oinfo->dqi_giblk,
+ 					     &pcount, NULL);
+ 	if (status < 0)
+ 		goto out_unlock;
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index 0e4b16d4c037f..b1a8b046f4c22 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -702,8 +702,6 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ 	info->dqi_priv = oinfo;
+ 	oinfo->dqi_type = type;
+ 	INIT_LIST_HEAD(&oinfo->dqi_chunk);
+-	oinfo->dqi_gqinode = NULL;
+-	ocfs2_qinfo_lock_res_init(&oinfo->dqi_gqlock, oinfo);
+ 	oinfo->dqi_rec = NULL;
+ 	oinfo->dqi_lqi_bh = NULL;
+ 	oinfo->dqi_libh = NULL;
+diff --git a/fs/proc/bootconfig.c b/fs/proc/bootconfig.c
+index 6d8d4bf208377..2e244ada1f970 100644
+--- a/fs/proc/bootconfig.c
++++ b/fs/proc/bootconfig.c
+@@ -32,6 +32,8 @@ static int __init copy_xbc_key_value_list(char *dst, size_t size)
+ 	int ret = 0;
+ 
+ 	key = kzalloc(XBC_KEYLEN_MAX, GFP_KERNEL);
++	if (!key)
++		return -ENOMEM;
+ 
+ 	xbc_for_each_key_value(leaf, val) {
+ 		ret = xbc_node_compose_key(leaf, key, XBC_KEYLEN_MAX);
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index f243cb5e6a4fb..e26162f102ffe 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -143,21 +143,22 @@ static void pstore_timer_kick(void)
+ 	mod_timer(&pstore_timer, jiffies + msecs_to_jiffies(pstore_update_ms));
+ }
+ 
+-/*
+- * Should pstore_dump() wait for a concurrent pstore_dump()? If
+- * not, the current pstore_dump() will report a failure to dump
+- * and return.
+- */
+-static bool pstore_cannot_wait(enum kmsg_dump_reason reason)
++static bool pstore_cannot_block_path(enum kmsg_dump_reason reason)
+ {
+-	/* In NMI path, pstore shouldn't block regardless of reason. */
++	/*
++	 * In case of NMI path, pstore shouldn't be blocked
++	 * regardless of reason.
++	 */
+ 	if (in_nmi())
+ 		return true;
+ 
+ 	switch (reason) {
+ 	/* In panic case, other cpus are stopped by smp_send_stop(). */
+ 	case KMSG_DUMP_PANIC:
+-	/* Emergency restart shouldn't be blocked. */
++	/*
++	 * Emergency restart shouldn't be blocked by spinning on
++	 * pstore_info::buf_lock.
++	 */
+ 	case KMSG_DUMP_EMERG:
+ 		return true;
+ 	default:
+@@ -389,21 +390,19 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ 	unsigned long	total = 0;
+ 	const char	*why;
+ 	unsigned int	part = 1;
++	unsigned long	flags = 0;
+ 	int		ret;
+ 
+ 	why = kmsg_dump_reason_str(reason);
+ 
+-	if (down_trylock(&psinfo->buf_lock)) {
+-		/* Failed to acquire lock: give up if we cannot wait. */
+-		if (pstore_cannot_wait(reason)) {
+-			pr_err("dump skipped in %s path: may corrupt error record\n",
+-				in_nmi() ? "NMI" : why);
+-			return;
+-		}
+-		if (down_interruptible(&psinfo->buf_lock)) {
+-			pr_err("could not grab semaphore?!\n");
++	if (pstore_cannot_block_path(reason)) {
++		if (!spin_trylock_irqsave(&psinfo->buf_lock, flags)) {
++			pr_err("dump skipped in %s path because of concurrent dump\n",
++					in_nmi() ? "NMI" : why);
+ 			return;
+ 		}
++	} else {
++		spin_lock_irqsave(&psinfo->buf_lock, flags);
+ 	}
+ 
+ 	kmsg_dump_rewind(&iter);
+@@ -467,8 +466,7 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ 		total += record.size;
+ 		part++;
+ 	}
+-
+-	up(&psinfo->buf_lock);
++	spin_unlock_irqrestore(&psinfo->buf_lock, flags);
+ }
+ 
+ static struct kmsg_dumper pstore_dumper = {
+@@ -594,7 +592,7 @@ int pstore_register(struct pstore_info *psi)
+ 		psi->write_user = pstore_write_user_compat;
+ 	psinfo = psi;
+ 	mutex_init(&psinfo->read_mutex);
+-	sema_init(&psinfo->buf_lock, 1);
++	spin_lock_init(&psinfo->buf_lock);
+ 
+ 	if (psi->flags & PSTORE_FLAGS_DMESG)
+ 		allocate_buf_for_compression();
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 7c61d0ec0159e..79e371bc15e1e 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -349,20 +349,97 @@ out_budg:
+ 	return err;
+ }
+ 
+-static int do_tmpfile(struct inode *dir, struct dentry *dentry,
+-		      umode_t mode, struct inode **whiteout)
++static struct inode *create_whiteout(struct inode *dir, struct dentry *dentry)
++{
++	int err;
++	umode_t mode = S_IFCHR | WHITEOUT_MODE;
++	struct inode *inode;
++	struct ubifs_info *c = dir->i_sb->s_fs_info;
++	struct fscrypt_name nm;
++
++	/*
++	 * Create an inode('nlink = 1') for whiteout without updating journal,
++	 * let ubifs_jnl_rename() store it on flash to complete rename whiteout
++	 * atomically.
++	 */
++
++	dbg_gen("dent '%pd', mode %#hx in dir ino %lu",
++		dentry, mode, dir->i_ino);
++
++	err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &nm);
++	if (err)
++		return ERR_PTR(err);
++
++	inode = ubifs_new_inode(c, dir, mode);
++	if (IS_ERR(inode)) {
++		err = PTR_ERR(inode);
++		goto out_free;
++	}
++
++	init_special_inode(inode, inode->i_mode, WHITEOUT_DEV);
++	ubifs_assert(c, inode->i_op == &ubifs_file_inode_operations);
++
++	err = ubifs_init_security(dir, inode, &dentry->d_name);
++	if (err)
++		goto out_inode;
++
++	/* The dir size is updated by do_rename. */
++	insert_inode_hash(inode);
++
++	return inode;
++
++out_inode:
++	make_bad_inode(inode);
++	iput(inode);
++out_free:
++	fscrypt_free_filename(&nm);
++	ubifs_err(c, "cannot create whiteout file, error %d", err);
++	return ERR_PTR(err);
++}
++
++/**
++ * lock_2_inodes - a wrapper for locking two UBIFS inodes.
++ * @inode1: first inode
++ * @inode2: second inode
++ *
++ * We do not implement any tricks to guarantee strict lock ordering, because
++ * VFS has already done it for us on the @i_mutex. So this is just a simple
++ * wrapper function.
++ */
++static void lock_2_inodes(struct inode *inode1, struct inode *inode2)
++{
++	mutex_lock_nested(&ubifs_inode(inode1)->ui_mutex, WB_MUTEX_1);
++	mutex_lock_nested(&ubifs_inode(inode2)->ui_mutex, WB_MUTEX_2);
++}
++
++/**
++ * unlock_2_inodes - a wrapper for unlocking two UBIFS inodes.
++ * @inode1: first inode
++ * @inode2: second inode
++ */
++static void unlock_2_inodes(struct inode *inode1, struct inode *inode2)
++{
++	mutex_unlock(&ubifs_inode(inode2)->ui_mutex);
++	mutex_unlock(&ubifs_inode(inode1)->ui_mutex);
++}
++
++static int ubifs_tmpfile(struct user_namespace *mnt_userns, struct inode *dir,
++			 struct dentry *dentry, umode_t mode)
+ {
+ 	struct inode *inode;
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+-	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1};
++	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
++					.dirtied_ino = 1};
+ 	struct ubifs_budget_req ino_req = { .dirtied_ino = 1 };
+-	struct ubifs_inode *ui, *dir_ui = ubifs_inode(dir);
++	struct ubifs_inode *ui;
+ 	int err, instantiated = 0;
+ 	struct fscrypt_name nm;
+ 
+ 	/*
+-	 * Budget request settings: new dirty inode, new direntry,
+-	 * budget for dirtied inode will be released via writeback.
++	 * Budget request settings: new inode, new direntry, changing the
++	 * parent directory inode.
++	 * Allocate budget separately for new dirtied inode, the budget will
++	 * be released via writeback.
+ 	 */
+ 
+ 	dbg_gen("dent '%pd', mode %#hx in dir ino %lu",
+@@ -392,42 +469,30 @@ static int do_tmpfile(struct inode *dir, struct dentry *dentry,
+ 	}
+ 	ui = ubifs_inode(inode);
+ 
+-	if (whiteout) {
+-		init_special_inode(inode, inode->i_mode, WHITEOUT_DEV);
+-		ubifs_assert(c, inode->i_op == &ubifs_file_inode_operations);
+-	}
+-
+ 	err = ubifs_init_security(dir, inode, &dentry->d_name);
+ 	if (err)
+ 		goto out_inode;
+ 
+ 	mutex_lock(&ui->ui_mutex);
+ 	insert_inode_hash(inode);
+-
+-	if (whiteout) {
+-		mark_inode_dirty(inode);
+-		drop_nlink(inode);
+-		*whiteout = inode;
+-	} else {
+-		d_tmpfile(dentry, inode);
+-	}
++	d_tmpfile(dentry, inode);
+ 	ubifs_assert(c, ui->dirty);
+ 
+ 	instantiated = 1;
+ 	mutex_unlock(&ui->ui_mutex);
+ 
+-	mutex_lock(&dir_ui->ui_mutex);
++	lock_2_inodes(dir, inode);
+ 	err = ubifs_jnl_update(c, dir, &nm, inode, 1, 0);
+ 	if (err)
+ 		goto out_cancel;
+-	mutex_unlock(&dir_ui->ui_mutex);
++	unlock_2_inodes(dir, inode);
+ 
+ 	ubifs_release_budget(c, &req);
+ 
+ 	return 0;
+ 
+ out_cancel:
+-	mutex_unlock(&dir_ui->ui_mutex);
++	unlock_2_inodes(dir, inode);
+ out_inode:
+ 	make_bad_inode(inode);
+ 	if (!instantiated)
+@@ -441,12 +506,6 @@ out_budg:
+ 	return err;
+ }
+ 
+-static int ubifs_tmpfile(struct user_namespace *mnt_userns, struct inode *dir,
+-			 struct dentry *dentry, umode_t mode)
+-{
+-	return do_tmpfile(dir, dentry, mode, NULL);
+-}
+-
+ /**
+  * vfs_dent_type - get VFS directory entry type.
+  * @type: UBIFS directory entry type
+@@ -660,32 +719,6 @@ static int ubifs_dir_release(struct inode *dir, struct file *file)
+ 	return 0;
+ }
+ 
+-/**
+- * lock_2_inodes - a wrapper for locking two UBIFS inodes.
+- * @inode1: first inode
+- * @inode2: second inode
+- *
+- * We do not implement any tricks to guarantee strict lock ordering, because
+- * VFS has already done it for us on the @i_mutex. So this is just a simple
+- * wrapper function.
+- */
+-static void lock_2_inodes(struct inode *inode1, struct inode *inode2)
+-{
+-	mutex_lock_nested(&ubifs_inode(inode1)->ui_mutex, WB_MUTEX_1);
+-	mutex_lock_nested(&ubifs_inode(inode2)->ui_mutex, WB_MUTEX_2);
+-}
+-
+-/**
+- * unlock_2_inodes - a wrapper for unlocking two UBIFS inodes.
+- * @inode1: first inode
+- * @inode2: second inode
+- */
+-static void unlock_2_inodes(struct inode *inode1, struct inode *inode2)
+-{
+-	mutex_unlock(&ubifs_inode(inode2)->ui_mutex);
+-	mutex_unlock(&ubifs_inode(inode1)->ui_mutex);
+-}
+-
+ static int ubifs_link(struct dentry *old_dentry, struct inode *dir,
+ 		      struct dentry *dentry)
+ {
+@@ -949,7 +982,8 @@ static int ubifs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
+ 	struct ubifs_inode *dir_ui = ubifs_inode(dir);
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+ 	int err, sz_change;
+-	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1 };
++	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
++					.dirtied_ino = 1};
+ 	struct fscrypt_name nm;
+ 
+ 	/*
+@@ -1264,17 +1298,19 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 					.dirtied_ino = 3 };
+ 	struct ubifs_budget_req ino_req = { .dirtied_ino = 1,
+ 			.dirtied_ino_d = ALIGN(old_inode_ui->data_len, 8) };
++	struct ubifs_budget_req wht_req;
+ 	struct timespec64 time;
+ 	unsigned int saved_nlink;
+ 	struct fscrypt_name old_nm, new_nm;
+ 
+ 	/*
+-	 * Budget request settings: deletion direntry, new direntry, removing
+-	 * the old inode, and changing old and new parent directory inodes.
++	 * Budget request settings:
++	 *   req: deletion direntry, new direntry, removing the old inode,
++	 *   and changing old and new parent directory inodes.
++	 *
++	 *   wht_req: new whiteout inode for RENAME_WHITEOUT.
+ 	 *
+-	 * However, this operation also marks the target inode as dirty and
+-	 * does not write it, so we allocate budget for the target inode
+-	 * separately.
++	 *   ino_req: marks the target inode as dirty and does not write it.
+ 	 */
+ 
+ 	dbg_gen("dent '%pd' ino %lu in dir ino %lu to dent '%pd' in dir ino %lu flags 0x%x",
+@@ -1331,20 +1367,44 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			goto out_release;
+ 		}
+ 
+-		err = do_tmpfile(old_dir, old_dentry, S_IFCHR | WHITEOUT_MODE, &whiteout);
+-		if (err) {
++		/*
++		 * The whiteout inode without dentry is pinned in memory,
++		 * umount won't happen during rename process because we
++		 * got parent dentry.
++		 */
++		whiteout = create_whiteout(old_dir, old_dentry);
++		if (IS_ERR(whiteout)) {
++			err = PTR_ERR(whiteout);
+ 			kfree(dev);
+ 			goto out_release;
+ 		}
+ 
+-		spin_lock(&whiteout->i_lock);
+-		whiteout->i_state |= I_LINKABLE;
+-		spin_unlock(&whiteout->i_lock);
+-
+ 		whiteout_ui = ubifs_inode(whiteout);
+ 		whiteout_ui->data = dev;
+ 		whiteout_ui->data_len = ubifs_encode_dev(dev, MKDEV(0, 0));
+ 		ubifs_assert(c, !whiteout_ui->dirty);
++
++		memset(&wht_req, 0, sizeof(struct ubifs_budget_req));
++		wht_req.new_ino = 1;
++		wht_req.new_ino_d = ALIGN(whiteout_ui->data_len, 8);
++		/*
++		 * To avoid deadlock between space budget (holds ui_mutex and
++		 * waits wb work) and writeback work(waits ui_mutex), do space
++		 * budget before ubifs inodes locked.
++		 */
++		err = ubifs_budget_space(c, &wht_req);
++		if (err) {
++			/*
++			 * Whiteout inode can not be written on flash by
++			 * ubifs_jnl_write_inode(), because it's neither
++			 * dirty nor zero-nlink.
++			 */
++			iput(whiteout);
++			goto out_release;
++		}
++
++		/* Add the old_dentry size to the old_dir size. */
++		old_sz -= CALC_DENT_SIZE(fname_len(&old_nm));
+ 	}
+ 
+ 	lock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+@@ -1416,29 +1476,11 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		sync = IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir);
+ 		if (unlink && IS_SYNC(new_inode))
+ 			sync = 1;
+-	}
+-
+-	if (whiteout) {
+-		struct ubifs_budget_req wht_req = { .dirtied_ino = 1,
+-				.dirtied_ino_d = \
+-				ALIGN(ubifs_inode(whiteout)->data_len, 8) };
+-
+-		err = ubifs_budget_space(c, &wht_req);
+-		if (err) {
+-			kfree(whiteout_ui->data);
+-			whiteout_ui->data_len = 0;
+-			iput(whiteout);
+-			goto out_release;
+-		}
+-
+-		inc_nlink(whiteout);
+-		mark_inode_dirty(whiteout);
+-
+-		spin_lock(&whiteout->i_lock);
+-		whiteout->i_state &= ~I_LINKABLE;
+-		spin_unlock(&whiteout->i_lock);
+-
+-		iput(whiteout);
++		/*
++		 * S_SYNC flag of whiteout inherits from the old_dir, and we
++		 * have already checked the old dir inode. So there is no need
++		 * to check whiteout.
++		 */
+ 	}
+ 
+ 	err = ubifs_jnl_rename(c, old_dir, old_inode, &old_nm, new_dir,
+@@ -1449,6 +1491,11 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	unlock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+ 	ubifs_release_budget(c, &req);
+ 
++	if (whiteout) {
++		ubifs_release_budget(c, &wht_req);
++		iput(whiteout);
++	}
++
+ 	mutex_lock(&old_inode_ui->ui_mutex);
+ 	release = old_inode_ui->dirty;
+ 	mark_inode_dirty_sync(old_inode);
+@@ -1457,11 +1504,16 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	if (release)
+ 		ubifs_release_budget(c, &ino_req);
+ 	if (IS_SYNC(old_inode))
+-		err = old_inode->i_sb->s_op->write_inode(old_inode, NULL);
++		/*
++		 * Rename finished here. Although old inode cannot be updated
++		 * on flash, old ctime is not a big problem, don't return err
++		 * code to userspace.
++		 */
++		old_inode->i_sb->s_op->write_inode(old_inode, NULL);
+ 
+ 	fscrypt_free_filename(&old_nm);
+ 	fscrypt_free_filename(&new_nm);
+-	return err;
++	return 0;
+ 
+ out_cancel:
+ 	if (unlink) {
+@@ -1482,11 +1534,11 @@ out_cancel:
+ 				inc_nlink(old_dir);
+ 		}
+ 	}
++	unlock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+ 	if (whiteout) {
+-		drop_nlink(whiteout);
++		ubifs_release_budget(c, &wht_req);
+ 		iput(whiteout);
+ 	}
+-	unlock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+ out_release:
+ 	ubifs_release_budget(c, &ino_req);
+ 	ubifs_release_budget(c, &req);
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 5cfa28cd00cdc..6b45a037a0471 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -570,7 +570,7 @@ static int ubifs_write_end(struct file *file, struct address_space *mapping,
+ 	}
+ 
+ 	if (!PagePrivate(page)) {
+-		SetPagePrivate(page);
++		attach_page_private(page, (void *)1);
+ 		atomic_long_inc(&c->dirty_pg_cnt);
+ 		__set_page_dirty_nobuffers(page);
+ 	}
+@@ -947,7 +947,7 @@ static int do_writepage(struct page *page, int len)
+ 		release_existing_page_budget(c);
+ 
+ 	atomic_long_dec(&c->dirty_pg_cnt);
+-	ClearPagePrivate(page);
++	detach_page_private(page);
+ 	ClearPageChecked(page);
+ 
+ 	kunmap(page);
+@@ -1304,7 +1304,7 @@ static void ubifs_invalidatepage(struct page *page, unsigned int offset,
+ 		release_existing_page_budget(c);
+ 
+ 	atomic_long_dec(&c->dirty_pg_cnt);
+-	ClearPagePrivate(page);
++	detach_page_private(page);
+ 	ClearPageChecked(page);
+ }
+ 
+@@ -1471,8 +1471,8 @@ static int ubifs_migrate_page(struct address_space *mapping,
+ 		return rc;
+ 
+ 	if (PagePrivate(page)) {
+-		ClearPagePrivate(page);
+-		SetPagePrivate(newpage);
++		detach_page_private(page);
++		attach_page_private(newpage, (void *)1);
+ 	}
+ 
+ 	if (mode != MIGRATE_SYNC_NO_COPY)
+@@ -1496,7 +1496,7 @@ static int ubifs_releasepage(struct page *page, gfp_t unused_gfp_flags)
+ 		return 0;
+ 	ubifs_assert(c, PagePrivate(page));
+ 	ubifs_assert(c, 0);
+-	ClearPagePrivate(page);
++	detach_page_private(page);
+ 	ClearPageChecked(page);
+ 	return 1;
+ }
+@@ -1567,7 +1567,7 @@ static vm_fault_t ubifs_vm_page_mkwrite(struct vm_fault *vmf)
+ 	else {
+ 		if (!PageChecked(page))
+ 			ubifs_convert_page_budget(c);
+-		SetPagePrivate(page);
++		attach_page_private(page, (void *)1);
+ 		atomic_long_inc(&c->dirty_pg_cnt);
+ 		__set_page_dirty_nobuffers(page);
+ 	}
+diff --git a/fs/ubifs/io.c b/fs/ubifs/io.c
+index 00b61dba62b70..b019dd6f7fa06 100644
+--- a/fs/ubifs/io.c
++++ b/fs/ubifs/io.c
+@@ -833,16 +833,42 @@ int ubifs_wbuf_write_nolock(struct ubifs_wbuf *wbuf, void *buf, int len)
+ 	 */
+ 	n = aligned_len >> c->max_write_shift;
+ 	if (n) {
+-		n <<= c->max_write_shift;
++		int m = n - 1;
++
+ 		dbg_io("write %d bytes to LEB %d:%d", n, wbuf->lnum,
+ 		       wbuf->offs);
+-		err = ubifs_leb_write(c, wbuf->lnum, buf + written,
+-				      wbuf->offs, n);
++
++		if (m) {
++			/* '(n-1)<<c->max_write_shift < len' is always true. */
++			m <<= c->max_write_shift;
++			err = ubifs_leb_write(c, wbuf->lnum, buf + written,
++					      wbuf->offs, m);
++			if (err)
++				goto out;
++			wbuf->offs += m;
++			aligned_len -= m;
++			len -= m;
++			written += m;
++		}
++
++		/*
++		 * The non-written len of buf may be less than 'n' because
++		 * parameter 'len' is not 8 bytes aligned, so here we read
++		 * min(len, n) bytes from buf.
++		 */
++		n = 1 << c->max_write_shift;
++		memcpy(wbuf->buf, buf + written, min(len, n));
++		if (n > len) {
++			ubifs_assert(c, n - len < 8);
++			ubifs_pad(c, wbuf->buf + len, n - len);
++		}
++
++		err = ubifs_leb_write(c, wbuf->lnum, wbuf->buf, wbuf->offs, n);
+ 		if (err)
+ 			goto out;
+ 		wbuf->offs += n;
+ 		aligned_len -= n;
+-		len -= n;
++		len -= min(len, n);
+ 		written += n;
+ 	}
+ 
+diff --git a/fs/ubifs/ioctl.c b/fs/ubifs/ioctl.c
+index c6a8634877803..71bcebe45f9c5 100644
+--- a/fs/ubifs/ioctl.c
++++ b/fs/ubifs/ioctl.c
+@@ -108,7 +108,7 @@ static int setflags(struct inode *inode, int flags)
+ 	struct ubifs_inode *ui = ubifs_inode(inode);
+ 	struct ubifs_info *c = inode->i_sb->s_fs_info;
+ 	struct ubifs_budget_req req = { .dirtied_ino = 1,
+-					.dirtied_ino_d = ui->data_len };
++			.dirtied_ino_d = ALIGN(ui->data_len, 8) };
+ 
+ 	err = ubifs_budget_space(c, &req);
+ 	if (err)
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 8ea680dba61e3..75dab0ae3939d 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -1207,9 +1207,9 @@ out_free:
+  * @sync: non-zero if the write-buffer has to be synchronized
+  *
+  * This function implements the re-name operation which may involve writing up
+- * to 4 inodes and 2 directory entries. It marks the written inodes as clean
+- * and returns zero on success. In case of failure, a negative error code is
+- * returned.
++ * to 4 inodes(new inode, whiteout inode, old and new parent directory inodes)
++ * and 2 directory entries. It marks the written inodes as clean and returns
++ * zero on success. In case of failure, a negative error code is returned.
+  */
+ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 		     const struct inode *old_inode,
+@@ -1222,14 +1222,15 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 	void *p;
+ 	union ubifs_key key;
+ 	struct ubifs_dent_node *dent, *dent2;
+-	int err, dlen1, dlen2, ilen, lnum, offs, len, orphan_added = 0;
++	int err, dlen1, dlen2, ilen, wlen, lnum, offs, len, orphan_added = 0;
+ 	int aligned_dlen1, aligned_dlen2, plen = UBIFS_INO_NODE_SZ;
+ 	int last_reference = !!(new_inode && new_inode->i_nlink == 0);
+ 	int move = (old_dir != new_dir);
+-	struct ubifs_inode *new_ui;
++	struct ubifs_inode *new_ui, *whiteout_ui;
+ 	u8 hash_old_dir[UBIFS_HASH_ARR_SZ];
+ 	u8 hash_new_dir[UBIFS_HASH_ARR_SZ];
+ 	u8 hash_new_inode[UBIFS_HASH_ARR_SZ];
++	u8 hash_whiteout_inode[UBIFS_HASH_ARR_SZ];
+ 	u8 hash_dent1[UBIFS_HASH_ARR_SZ];
+ 	u8 hash_dent2[UBIFS_HASH_ARR_SZ];
+ 
+@@ -1249,9 +1250,20 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 	} else
+ 		ilen = 0;
+ 
++	if (whiteout) {
++		whiteout_ui = ubifs_inode(whiteout);
++		ubifs_assert(c, mutex_is_locked(&whiteout_ui->ui_mutex));
++		ubifs_assert(c, whiteout->i_nlink == 1);
++		ubifs_assert(c, !whiteout_ui->dirty);
++		wlen = UBIFS_INO_NODE_SZ;
++		wlen += whiteout_ui->data_len;
++	} else
++		wlen = 0;
++
+ 	aligned_dlen1 = ALIGN(dlen1, 8);
+ 	aligned_dlen2 = ALIGN(dlen2, 8);
+-	len = aligned_dlen1 + aligned_dlen2 + ALIGN(ilen, 8) + ALIGN(plen, 8);
++	len = aligned_dlen1 + aligned_dlen2 + ALIGN(ilen, 8) +
++	      ALIGN(wlen, 8) + ALIGN(plen, 8);
+ 	if (move)
+ 		len += plen;
+ 
+@@ -1313,6 +1325,15 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 		p += ALIGN(ilen, 8);
+ 	}
+ 
++	if (whiteout) {
++		pack_inode(c, p, whiteout, 0);
++		err = ubifs_node_calc_hash(c, p, hash_whiteout_inode);
++		if (err)
++			goto out_release;
++
++		p += ALIGN(wlen, 8);
++	}
++
+ 	if (!move) {
+ 		pack_inode(c, p, old_dir, 1);
+ 		err = ubifs_node_calc_hash(c, p, hash_old_dir);
+@@ -1352,6 +1373,9 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 		if (new_inode)
+ 			ubifs_wbuf_add_ino_nolock(&c->jheads[BASEHD].wbuf,
+ 						  new_inode->i_ino);
++		if (whiteout)
++			ubifs_wbuf_add_ino_nolock(&c->jheads[BASEHD].wbuf,
++						  whiteout->i_ino);
+ 	}
+ 	release_head(c, BASEHD);
+ 
+@@ -1368,8 +1392,6 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 		err = ubifs_tnc_add_nm(c, &key, lnum, offs, dlen2, hash_dent2, old_nm);
+ 		if (err)
+ 			goto out_ro;
+-
+-		ubifs_delete_orphan(c, whiteout->i_ino);
+ 	} else {
+ 		err = ubifs_add_dirt(c, lnum, dlen2);
+ 		if (err)
+@@ -1390,6 +1412,15 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 		offs += ALIGN(ilen, 8);
+ 	}
+ 
++	if (whiteout) {
++		ino_key_init(c, &key, whiteout->i_ino);
++		err = ubifs_tnc_add(c, &key, lnum, offs, wlen,
++				    hash_whiteout_inode);
++		if (err)
++			goto out_ro;
++		offs += ALIGN(wlen, 8);
++	}
++
+ 	ino_key_init(c, &key, old_dir->i_ino);
+ 	err = ubifs_tnc_add(c, &key, lnum, offs, plen, hash_old_dir);
+ 	if (err)
+@@ -1410,6 +1441,11 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ 		new_ui->synced_i_size = new_ui->ui_size;
+ 		spin_unlock(&new_ui->ui_lock);
+ 	}
++	/*
++	 * No need to mark whiteout inode clean.
++	 * Whiteout doesn't have non-zero size, no need to update
++	 * synced_i_size for whiteout_ui.
++	 */
+ 	mark_inode_clean(c, ubifs_inode(old_dir));
+ 	if (move)
+ 		mark_inode_clean(c, ubifs_inode(new_dir));
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index 379746d3266f0..4b45a7d4cb537 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -566,10 +566,16 @@ struct drm_display_info {
+ 	bool rgb_quant_range_selectable;
+ 
+ 	/**
+-	 * @edid_hdmi_dc_modes: Mask of supported hdmi deep color modes. Even
+-	 * more stuff redundant with @bus_formats.
++	 * @edid_hdmi_rgb444_dc_modes: Mask of supported hdmi deep color modes
++	 * in RGB 4:4:4. Even more stuff redundant with @bus_formats.
+ 	 */
+-	u8 edid_hdmi_dc_modes;
++	u8 edid_hdmi_rgb444_dc_modes;
++
++	/**
++	 * @edid_hdmi_ycbcr444_dc_modes: Mask of supported hdmi deep color
++	 * modes in YCbCr 4:4:4. Even more stuff redundant with @bus_formats.
++	 */
++	u8 edid_hdmi_ycbcr444_dc_modes;
+ 
+ 	/**
+ 	 * @cea_rev: CEA revision of the HDMI sink.
+diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
+index b52df4db3e8fe..99838bdd33507 100644
+--- a/include/drm/drm_dp_helper.h
++++ b/include/drm/drm_dp_helper.h
+@@ -456,7 +456,7 @@ struct drm_panel;
+ #define DP_FEC_CAPABILITY_1			0x091   /* 2.0 */
+ 
+ /* DP-HDMI2.1 PCON DSC ENCODER SUPPORT */
+-#define DP_PCON_DSC_ENCODER_CAP_SIZE        0xC	/* 0x9E - 0x92 */
++#define DP_PCON_DSC_ENCODER_CAP_SIZE        0xD	/* 0x92 through 0x9E */
+ #define DP_PCON_DSC_ENCODER                 0x092
+ # define DP_PCON_DSC_ENCODER_SUPPORTED      (1 << 0)
+ # define DP_PCON_DSC_PPS_ENC_OVERRIDE       (1 << 1)
+diff --git a/include/drm/drm_modeset_lock.h b/include/drm/drm_modeset_lock.h
+index b84693fbd2b50..ec4f543c3d950 100644
+--- a/include/drm/drm_modeset_lock.h
++++ b/include/drm/drm_modeset_lock.h
+@@ -34,6 +34,7 @@ struct drm_modeset_lock;
+  * struct drm_modeset_acquire_ctx - locking context (see ww_acquire_ctx)
+  * @ww_ctx: base acquire ctx
+  * @contended: used internally for -EDEADLK handling
++ * @stack_depot: used internally for contention debugging
+  * @locked: list of held locks
+  * @trylock_only: trylock mode used in atomic contexts/panic notifiers
+  * @interruptible: whether interruptible locking should be used.
+diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
+index a3dba31df01e9..6db58d1808665 100644
+--- a/include/linux/atomic/atomic-arch-fallback.h
++++ b/include/linux/atomic/atomic-arch-fallback.h
+@@ -151,7 +151,16 @@
+ static __always_inline int
+ arch_atomic_read_acquire(const atomic_t *v)
+ {
+-	return smp_load_acquire(&(v)->counter);
++	int ret;
++
++	if (__native_word(atomic_t)) {
++		ret = smp_load_acquire(&(v)->counter);
++	} else {
++		ret = arch_atomic_read(v);
++		__atomic_acquire_fence();
++	}
++
++	return ret;
+ }
+ #define arch_atomic_read_acquire arch_atomic_read_acquire
+ #endif
+@@ -160,7 +169,12 @@ arch_atomic_read_acquire(const atomic_t *v)
+ static __always_inline void
+ arch_atomic_set_release(atomic_t *v, int i)
+ {
+-	smp_store_release(&(v)->counter, i);
++	if (__native_word(atomic_t)) {
++		smp_store_release(&(v)->counter, i);
++	} else {
++		__atomic_release_fence();
++		arch_atomic_set(v, i);
++	}
+ }
+ #define arch_atomic_set_release arch_atomic_set_release
+ #endif
+@@ -1258,7 +1272,16 @@ arch_atomic_dec_if_positive(atomic_t *v)
+ static __always_inline s64
+ arch_atomic64_read_acquire(const atomic64_t *v)
+ {
+-	return smp_load_acquire(&(v)->counter);
++	s64 ret;
++
++	if (__native_word(atomic64_t)) {
++		ret = smp_load_acquire(&(v)->counter);
++	} else {
++		ret = arch_atomic64_read(v);
++		__atomic_acquire_fence();
++	}
++
++	return ret;
+ }
+ #define arch_atomic64_read_acquire arch_atomic64_read_acquire
+ #endif
+@@ -1267,7 +1290,12 @@ arch_atomic64_read_acquire(const atomic64_t *v)
+ static __always_inline void
+ arch_atomic64_set_release(atomic64_t *v, s64 i)
+ {
+-	smp_store_release(&(v)->counter, i);
++	if (__native_word(atomic64_t)) {
++		smp_store_release(&(v)->counter, i);
++	} else {
++		__atomic_release_fence();
++		arch_atomic64_set(v, i);
++	}
+ }
+ #define arch_atomic64_set_release arch_atomic64_set_release
+ #endif
+@@ -2358,4 +2386,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
+ #endif
+ 
+ #endif /* _LINUX_ATOMIC_FALLBACK_H */
+-// cca554917d7ea73d5e3e7397dd70c484cad9b2c4
++// 8e2cc06bc0d2c0967d2f8424762bd48555ee40ae
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index 049cf9421d831..f821b72433613 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -87,6 +87,9 @@ struct coredump_params {
+ 	loff_t written;
+ 	loff_t pos;
+ 	loff_t to_skip;
++	int vma_count;
++	size_t vma_data_size;
++	struct core_vma_metadata *vma_meta;
+ };
+ 
+ /*
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index b4de2010fba55..bc5c04d711bbc 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -24,6 +24,7 @@
+ #include <linux/atomic.h>
+ #include <linux/kthread.h>
+ #include <linux/fs.h>
++#include <linux/blk-mq.h>
+ 
+ /* percpu_counter batch for blkg_[rw]stats, per-cpu drift doesn't matter */
+ #define BLKG_STAT_CPU_BATCH	(INT_MAX / 2)
+@@ -604,6 +605,21 @@ static inline void blkcg_clear_delay(struct blkcg_gq *blkg)
+ 		atomic_dec(&blkg->blkcg->css.cgroup->congestion_count);
+ }
+ 
++/**
++ * blk_cgroup_mergeable - Determine whether to allow or disallow merges
++ * @rq: request to merge into
++ * @bio: bio to merge
++ *
++ * @bio and @rq should belong to the same cgroup and their issue_as_root should
++ * match. The latter is necessary as we don't want to throttle e.g. a metadata
++ * update because it happens to be next to a regular IO.
++ */
++static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio)
++{
++	return rq->bio->bi_blkg == bio->bi_blkg &&
++		bio_issue_as_root_blkg(rq->bio) == bio_issue_as_root_blkg(bio);
++}
++
+ void blk_cgroup_bio_start(struct bio *bio);
+ void blkcg_add_delay(struct blkcg_gq *blkg, u64 now, u64 delta);
+ void blkcg_schedule_throttle(struct request_queue *q, bool use_memdelay);
+@@ -659,6 +675,7 @@ static inline void blkg_put(struct blkcg_gq *blkg) { }
+ static inline bool blkcg_punt_bio_submit(struct bio *bio) { return false; }
+ static inline void blkcg_bio_issue_init(struct bio *bio) { }
+ static inline void blk_cgroup_bio_start(struct bio *bio) { }
++static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio) { return true; }
+ 
+ #define blk_queue_for_each_rl(rl, q)	\
+ 	for ((rl) = &(q)->root_rl; (rl); (rl) = NULL)
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index fe065c394fff6..86c0f85df8bb4 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -317,7 +317,8 @@ enum {
+ 	BIO_TRACE_COMPLETION,	/* bio_endio() should trace the final completion
+ 				 * of this bio. */
+ 	BIO_CGROUP_ACCT,	/* has been accounted to a cgroup */
+-	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
++	BIO_QOS_THROTTLED,	/* bio went through rq_qos throttle path */
++	BIO_QOS_MERGED,		/* but went through rq_qos merge path */
+ 	BIO_REMAPPED,
+ 	BIO_ZONE_WRITE_LOCKED,	/* Owns a zoned device zone write lock */
+ 	BIO_PERCPU_CACHE,	/* can participate in per-cpu alloc cache */
+diff --git a/include/linux/coredump.h b/include/linux/coredump.h
+index 78fcd776b185a..4b95e46d215f1 100644
+--- a/include/linux/coredump.h
++++ b/include/linux/coredump.h
+@@ -12,6 +12,8 @@ struct core_vma_metadata {
+ 	unsigned long start, end;
+ 	unsigned long flags;
+ 	unsigned long dump_size;
++	unsigned long pgoff;
++	struct file   *file;
+ };
+ 
+ extern int core_uses_pid;
+@@ -29,9 +31,6 @@ extern int dump_emit(struct coredump_params *cprm, const void *addr, int nr);
+ extern int dump_align(struct coredump_params *cprm, int align);
+ int dump_user_range(struct coredump_params *cprm, unsigned long start,
+ 		    unsigned long len);
+-int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+-		      struct core_vma_metadata **vma_meta,
+-		      size_t *vma_data_size_ptr);
+ extern void do_coredump(const kernel_siginfo_t *siginfo);
+ #else
+ static inline void do_coredump(const kernel_siginfo_t *siginfo) {}
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index 02f362c661c80..3d7306c9a7065 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -502,6 +502,7 @@ struct fb_info {
+ 	} *apertures;
+ 
+ 	bool skip_vt_switch; /* no VT switch on suspend/resume required */
++	bool forced_out; /* set when being removed by another driver */
+ };
+ 
+ static inline struct apertures_struct *alloc_apertures(unsigned int max_num) {
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index f0c7b352340a4..2e078f40195d2 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -335,6 +335,8 @@ LSM_HOOK(int, 0, sctp_bind_connect, struct sock *sk, int optname,
+ 	 struct sockaddr *address, int addrlen)
+ LSM_HOOK(void, LSM_RET_VOID, sctp_sk_clone, struct sctp_association *asoc,
+ 	 struct sock *sk, struct sock *newsk)
++LSM_HOOK(int, 0, sctp_assoc_established, struct sctp_association *asoc,
++	 struct sk_buff *skb)
+ #endif /* CONFIG_SECURITY_NETWORK */
+ 
+ #ifdef CONFIG_SECURITY_INFINIBAND
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index d45b6f6e27fda..d6823214d5c1e 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -1050,6 +1050,11 @@
+  *	@asoc pointer to current sctp association structure.
+  *	@sk pointer to current sock structure.
+  *	@newsk pointer to new sock structure.
++ * @sctp_assoc_established:
++ *	Passes the @asoc and @chunk->skb of the association COOKIE_ACK packet
++ *	to the security module.
++ *	@asoc pointer to sctp association structure.
++ *	@skb pointer to skbuff of association packet.
+  *
+  * Security hooks for Infiniband
+  *
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 73210623bda7a..1e553c12841eb 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1840,20 +1840,6 @@ struct zap_details {
+ 	struct page *single_page;		/* Locked page to be unmapped */
+ };
+ 
+-/*
+- * We set details->zap_mappings when we want to unmap shared but keep private
+- * pages. Return true if skip zapping this page, false otherwise.
+- */
+-static inline bool
+-zap_skip_check_mapping(struct zap_details *details, struct page *page)
+-{
+-	if (!details || !page)
+-		return false;
+-
+-	return details->zap_mapping &&
+-	    (details->zap_mapping != page_rmapping(page));
+-}
+-
+ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+ 			     pte_t pte);
+ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index 5b88cd51fadb5..dcf90144d70b7 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -1240,6 +1240,7 @@ struct nand_secure_region {
+  * @lock: Lock protecting the suspended field. Also used to serialize accesses
+  *        to the NAND device
+  * @suspended: Set to 1 when the device is suspended, 0 when it's not
++ * @resume_wq: wait queue to sleep if rawnand is in suspended state.
+  * @cur_cs: Currently selected target. -1 means no target selected, otherwise we
+  *          should always have cur_cs >= 0 && cur_cs < nanddev_ntargets().
+  *          NAND Controller drivers should not modify this value, but they're
+@@ -1294,6 +1295,7 @@ struct nand_chip {
+ 	/* Internals */
+ 	struct mutex lock;
+ 	unsigned int suspended : 1;
++	wait_queue_head_t resume_wq;
+ 	int cur_cs;
+ 	int read_retries;
+ 	struct nand_secure_region *secure_regions;
+diff --git a/include/linux/netfilter_netdev.h b/include/linux/netfilter_netdev.h
+index e6487a6911360..8676316547cc4 100644
+--- a/include/linux/netfilter_netdev.h
++++ b/include/linux/netfilter_netdev.h
+@@ -99,7 +99,7 @@ static inline struct sk_buff *nf_hook_egress(struct sk_buff *skb, int *rc,
+ 		return skb;
+ 
+ 	nf_hook_state_init(&state, NF_NETDEV_EGRESS,
+-			   NFPROTO_NETDEV, dev, NULL, NULL,
++			   NFPROTO_NETDEV, NULL, dev, NULL,
+ 			   dev_net(dev), NULL);
+ 
+ 	/* nf assumes rcu_read_lock, not just read_lock_bh */
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index 855dd9b3e84be..a662435c9b6f1 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -337,6 +337,7 @@ enum {
+ 	NVME_CTRL_ONCS_TIMESTAMP		= 1 << 6,
+ 	NVME_CTRL_VWC_PRESENT			= 1 << 0,
+ 	NVME_CTRL_OACS_SEC_SUPP                 = 1 << 0,
++	NVME_CTRL_OACS_NS_MNGT_SUPP		= 1 << 3,
+ 	NVME_CTRL_OACS_DIRECTIVES		= 1 << 5,
+ 	NVME_CTRL_OACS_DBBUF_SUPP		= 1 << 8,
+ 	NVME_CTRL_LPA_CMD_EFFECTS_LOG		= 1 << 1,
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 18a75c8e615cd..2d6118937d070 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -656,6 +656,7 @@ struct pci_bus {
+ 	struct bin_attribute	*legacy_io;	/* Legacy I/O for this bus */
+ 	struct bin_attribute	*legacy_mem;	/* Legacy mem */
+ 	unsigned int		is_added:1;
++	unsigned int		unsafe_warn:1;	/* warned about RW1C config write */
+ };
+ 
+ #define to_pci_bus(n)	container_of(n, struct pci_bus, dev)
+diff --git a/include/linux/pstore.h b/include/linux/pstore.h
+index eb93a54cff31f..e97a8188f0fd8 100644
+--- a/include/linux/pstore.h
++++ b/include/linux/pstore.h
+@@ -14,7 +14,7 @@
+ #include <linux/errno.h>
+ #include <linux/kmsg_dump.h>
+ #include <linux/mutex.h>
+-#include <linux/semaphore.h>
++#include <linux/spinlock.h>
+ #include <linux/time.h>
+ #include <linux/types.h>
+ 
+@@ -87,7 +87,7 @@ struct pstore_record {
+  * @owner:	module which is responsible for this backend driver
+  * @name:	name of the backend driver
+  *
+- * @buf_lock:	semaphore to serialize access to @buf
++ * @buf_lock:	spinlock to serialize access to @buf
+  * @buf:	preallocated crash dump buffer
+  * @bufsize:	size of @buf available for crash dump bytes (must match
+  *		smallest number of bytes available for writing to a
+@@ -178,7 +178,7 @@ struct pstore_info {
+ 	struct module	*owner;
+ 	const char	*name;
+ 
+-	struct semaphore buf_lock;
++	spinlock_t	buf_lock;
+ 	char		*buf;
+ 	size_t		bufsize;
+ 
+diff --git a/include/linux/randomize_kstack.h b/include/linux/randomize_kstack.h
+index bebc911161b6f..d373f1bcbf7ca 100644
+--- a/include/linux/randomize_kstack.h
++++ b/include/linux/randomize_kstack.h
+@@ -16,8 +16,20 @@ DECLARE_PER_CPU(u32, kstack_offset);
+  * alignment. Also, since this use is being explicitly masked to a max of
+  * 10 bits, stack-clash style attacks are unlikely. For more details see
+  * "VLAs" in Documentation/process/deprecated.rst
++ *
++ * The normal __builtin_alloca() is initialized with INIT_STACK_ALL (currently
++ * only with Clang and not GCC). Initializing the unused area on each syscall
++ * entry is expensive, and generating an implicit call to memset() may also be
++ * problematic (such as in noinstr functions). Therefore, if the compiler
++ * supports it (which it should if it initializes allocas), always use the
++ * "uninitialized" variant of the builtin.
+  */
+-void *__builtin_alloca(size_t size);
++#if __has_builtin(__builtin_alloca_uninitialized)
++#define __kstack_alloca __builtin_alloca_uninitialized
++#else
++#define __kstack_alloca __builtin_alloca
++#endif
++
+ /*
+  * Use, at most, 10 bits of entropy. We explicitly cap this to keep the
+  * "VLA" from being unbounded (see above). 10 bits leaves enough room for
+@@ -36,7 +48,7 @@ void *__builtin_alloca(size_t size);
+ 	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
+ 				&randomize_kstack_offset)) {		\
+ 		u32 offset = raw_cpu_read(kstack_offset);		\
+-		u8 *ptr = __builtin_alloca(KSTACK_OFFSET_MAX(offset));	\
++		u8 *ptr = __kstack_alloca(KSTACK_OFFSET_MAX(offset));	\
+ 		/* Keep allocation even after "ptr" loses scope. */	\
+ 		asm volatile("" :: "r"(ptr) : "memory");		\
+ 	}								\
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index ee5ed88219631..3375bb533f9d9 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1622,6 +1622,14 @@ static inline unsigned int task_state_index(struct task_struct *tsk)
+ 	if (tsk_state == TASK_IDLE)
+ 		state = TASK_REPORT_IDLE;
+ 
++	/*
++	 * We're lying here, but rather than expose a completely new task state
++	 * to userspace, we can make this appear as if the task has gone through
++	 * a regular rt_mutex_lock() call.
++	 */
++	if (tsk_state == TASK_RTLOCK_WAIT)
++		state = TASK_UNINTERRUPTIBLE;
++
+ 	return fls(state);
+ }
+ 
+diff --git a/include/linux/security.h b/include/linux/security.h
+index bbf44a4668326..bd059e7b47664 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -1430,6 +1430,8 @@ int security_sctp_bind_connect(struct sock *sk, int optname,
+ 			       struct sockaddr *address, int addrlen);
+ void security_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
+ 			    struct sock *newsk);
++int security_sctp_assoc_established(struct sctp_association *asoc,
++				    struct sk_buff *skb);
+ 
+ #else	/* CONFIG_SECURITY_NETWORK */
+ static inline int security_unix_stream_connect(struct sock *sock,
+@@ -1649,6 +1651,12 @@ static inline void security_sctp_sk_clone(struct sctp_association *asoc,
+ 					  struct sock *newsk)
+ {
+ }
++
++static inline int security_sctp_assoc_established(struct sctp_association *asoc,
++						  struct sk_buff *skb)
++{
++	return 0;
++}
+ #endif	/* CONFIG_SECURITY_NETWORK */
+ 
+ #ifdef CONFIG_SECURITY_INFINIBAND
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index c58cc142d23f4..8c32935e1059d 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -458,6 +458,8 @@ extern void uart_handle_cts_change(struct uart_port *uport,
+ extern void uart_insert_char(struct uart_port *port, unsigned int status,
+ 		 unsigned int overrun, unsigned int ch, unsigned int flag);
+ 
++void uart_xchar_out(struct uart_port *uport, int offset);
++
+ #ifdef CONFIG_MAGIC_SYSRQ_SERIAL
+ #define SYSRQ_TIMEOUT	(HZ * 5)
+ 
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 60ab0c2fe5674..5078e1a505c31 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1446,6 +1446,11 @@ static inline unsigned int skb_end_offset(const struct sk_buff *skb)
+ {
+ 	return skb->end;
+ }
++
++static inline void skb_set_end_offset(struct sk_buff *skb, unsigned int offset)
++{
++	skb->end = offset;
++}
+ #else
+ static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
+ {
+@@ -1456,6 +1461,11 @@ static inline unsigned int skb_end_offset(const struct sk_buff *skb)
+ {
+ 	return skb->end - skb->head;
+ }
++
++static inline void skb_set_end_offset(struct sk_buff *skb, unsigned int offset)
++{
++	skb->end = skb->head + offset;
++}
+ #endif
+ 
+ /* Internal */
+@@ -1695,19 +1705,19 @@ static inline int skb_unclone(struct sk_buff *skb, gfp_t pri)
+ 	return 0;
+ }
+ 
+-/* This variant of skb_unclone() makes sure skb->truesize is not changed */
++/* This variant of skb_unclone() makes sure skb->truesize
++ * and skb_end_offset() are not changed, whenever a new skb->head is needed.
++ *
++ * Indeed there is no guarantee that ksize(kmalloc(X)) == ksize(kmalloc(X))
++ * when various debugging features are in place.
++ */
++int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri);
+ static inline int skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
+ {
+ 	might_sleep_if(gfpflags_allow_blocking(pri));
+ 
+-	if (skb_cloned(skb)) {
+-		unsigned int save = skb->truesize;
+-		int res;
+-
+-		res = pskb_expand_head(skb, 0, 0, pri);
+-		skb->truesize = save;
+-		return res;
+-	}
++	if (skb_cloned(skb))
++		return __skb_unclone_keeptruesize(skb, pri);
+ 	return 0;
+ }
+ 
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 18a717fe62eb0..7f32dd59e7513 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -310,21 +310,16 @@ static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
+ 	kfree_skb(skb);
+ }
+ 
+-static inline void drop_sk_msg(struct sk_psock *psock, struct sk_msg *msg)
+-{
+-	if (msg->skb)
+-		sock_drop(psock->sk, msg->skb);
+-	kfree(msg);
+-}
+-
+ static inline void sk_psock_queue_msg(struct sk_psock *psock,
+ 				      struct sk_msg *msg)
+ {
+ 	spin_lock_bh(&psock->ingress_lock);
+ 	if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+ 		list_add_tail(&msg->list, &psock->ingress_msg);
+-	else
+-		drop_sk_msg(psock, msg);
++	else {
++		sk_msg_free(psock->sk, msg);
++		kfree(msg);
++	}
+ 	spin_unlock_bh(&psock->ingress_lock);
+ }
+ 
+diff --git a/include/linux/soc/ti/ti_sci_protocol.h b/include/linux/soc/ti/ti_sci_protocol.h
+index 0aad7009b50e6..bd0d11af76c5e 100644
+--- a/include/linux/soc/ti/ti_sci_protocol.h
++++ b/include/linux/soc/ti/ti_sci_protocol.h
+@@ -645,7 +645,7 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
+ 
+ static inline struct ti_sci_resource *
+ devm_ti_sci_get_resource(const struct ti_sci_handle *handle, struct device *dev,
+-			 u32 dev_id, u32 sub_type);
++			 u32 dev_id, u32 sub_type)
+ {
+ 	return ERR_PTR(-EINVAL);
+ }
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index b519609af1d02..4417f667c757e 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -731,6 +731,8 @@ xdr_stream_decode_uint32_array(struct xdr_stream *xdr,
+ 
+ 	if (unlikely(xdr_stream_decode_u32(xdr, &len) < 0))
+ 		return -EBADMSG;
++	if (len > SIZE_MAX / sizeof(*p))
++		return -EBADMSG;
+ 	p = xdr_inline_decode(xdr, len * sizeof(*p));
+ 	if (unlikely(!p))
+ 		return -EBADMSG;
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 955ea4d7af0b2..eef5e87c03b43 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -139,6 +139,9 @@ struct rpc_xprt_ops {
+ 	void		(*rpcbind)(struct rpc_task *task);
+ 	void		(*set_port)(struct rpc_xprt *xprt, unsigned short port);
+ 	void		(*connect)(struct rpc_xprt *xprt, struct rpc_task *task);
++	int		(*get_srcaddr)(struct rpc_xprt *xprt, char *buf,
++				       size_t buflen);
++	unsigned short	(*get_srcport)(struct rpc_xprt *xprt);
+ 	int		(*buf_alloc)(struct rpc_task *task);
+ 	void		(*buf_free)(struct rpc_task *task);
+ 	void		(*prepare_request)(struct rpc_rqst *req);
+diff --git a/include/linux/sunrpc/xprtsock.h b/include/linux/sunrpc/xprtsock.h
+index 8c2a712cb2420..fed813ffe7db1 100644
+--- a/include/linux/sunrpc/xprtsock.h
++++ b/include/linux/sunrpc/xprtsock.h
+@@ -10,7 +10,6 @@
+ 
+ int		init_socket_xprt(void);
+ void		cleanup_socket_xprt(void);
+-unsigned short	get_srcport(struct rpc_xprt *);
+ 
+ #define RPC_MIN_RESVPORT	(1U)
+ #define RPC_MAX_RESVPORT	(65535U)
+@@ -89,5 +88,6 @@ struct sock_xprt {
+ #define XPRT_SOCK_WAKE_WRITE	(5)
+ #define XPRT_SOCK_WAKE_PENDING	(6)
+ #define XPRT_SOCK_WAKE_DISCONNECT	(7)
++#define XPRT_SOCK_CONNECT_SENT	(8)
+ 
+ #endif /* _LINUX_SUNRPC_XPRTSOCK_H */
+diff --git a/include/net/netfilter/nf_conntrack_helper.h b/include/net/netfilter/nf_conntrack_helper.h
+index 37f0fbefb060f..9939c366f720d 100644
+--- a/include/net/netfilter/nf_conntrack_helper.h
++++ b/include/net/netfilter/nf_conntrack_helper.h
+@@ -177,4 +177,5 @@ void nf_nat_helper_unregister(struct nf_conntrack_nat_helper *nat);
+ int nf_nat_helper_try_module_get(const char *name, u16 l3num,
+ 				 u8 protonum);
+ void nf_nat_helper_put(struct nf_conntrack_helper *helper);
++void nf_ct_set_auto_assign_helper_warned(struct net *net);
+ #endif /*_NF_CONNTRACK_HELPER_H*/
+diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
+index a3647fadf1ccb..9f927c44087de 100644
+--- a/include/net/netfilter/nf_flow_table.h
++++ b/include/net/netfilter/nf_flow_table.h
+@@ -10,6 +10,8 @@
+ #include <linux/netfilter/nf_conntrack_tuple_common.h>
+ #include <net/flow_offload.h>
+ #include <net/dst.h>
++#include <linux/if_pppox.h>
++#include <linux/ppp_defs.h>
+ 
+ struct nf_flowtable;
+ struct nf_flow_rule;
+@@ -313,4 +315,20 @@ int nf_flow_rule_route_ipv6(struct net *net, const struct flow_offload *flow,
+ int nf_flow_table_offload_init(void);
+ void nf_flow_table_offload_exit(void);
+ 
++static inline __be16 nf_flow_pppoe_proto(const struct sk_buff *skb)
++{
++	__be16 proto;
++
++	proto = *((__be16 *)(skb_mac_header(skb) + ETH_HLEN +
++			     sizeof(struct pppoe_hdr)));
++	switch (proto) {
++	case htons(PPP_IP):
++		return htons(ETH_P_IP);
++	case htons(PPP_IPV6):
++		return htons(ETH_P_IPV6);
++	}
++
++	return 0;
++}
++
+ #endif /* _NF_FLOW_TABLE_H */
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index d1c6fc83b1e38..5a54bb7fc6569 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -206,6 +206,7 @@ struct scsi_device {
+ 	unsigned rpm_autosuspend:1;	/* Enable runtime autosuspend at device
+ 					 * creation time */
+ 	unsigned ignore_media_change:1; /* Ignore MEDIA CHANGE on resume */
++	unsigned silence_suspend:1;	/* Do not print runtime PM related messages */
+ 
+ 	unsigned int queue_stopped;	/* request queue is quiesced */
+ 	bool offline_already;		/* Device offline message logged */
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 2018b7512d3d5..e08bf475d02d4 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -399,6 +399,7 @@ struct snd_pcm_runtime {
+ 	struct fasync_struct *fasync;
+ 	bool stop_operating;		/* sync_stop will be called */
+ 	struct mutex buffer_mutex;	/* protect for buffer changes */
++	atomic_t buffer_accessing;	/* >0: in r/w operation, <0: blocked */
+ 
+ 	/* -- private section -- */
+ 	void *private_data;
+diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
+index 0ea36b2b0662a..61a64d1b2bb68 100644
+--- a/include/trace/events/ext4.h
++++ b/include/trace/events/ext4.h
+@@ -95,6 +95,17 @@ TRACE_DEFINE_ENUM(ES_REFERENCED_B);
+ 	{ FALLOC_FL_COLLAPSE_RANGE,	"COLLAPSE_RANGE"},	\
+ 	{ FALLOC_FL_ZERO_RANGE,		"ZERO_RANGE"})
+ 
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_XATTR);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_CROSS_RENAME);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_NOMEM);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_SWAP_BOOT);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_RESIZE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_RENAME_DIR);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_FALLOC_RANGE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_INODE_JOURNAL_DATA);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);
++
+ #define show_fc_reason(reason)						\
+ 	__print_symbolic(reason,					\
+ 		{ EXT4_FC_REASON_XATTR,		"XATTR"},		\
+@@ -2723,41 +2734,50 @@ TRACE_EVENT(ext4_fc_commit_stop,
+ 
+ #define FC_REASON_NAME_STAT(reason)					\
+ 	show_fc_reason(reason),						\
+-	__entry->sbi->s_fc_stats.fc_ineligible_reason_count[reason]
++	__entry->fc_ineligible_rc[reason]
+ 
+ TRACE_EVENT(ext4_fc_stats,
+-	    TP_PROTO(struct super_block *sb),
+-
+-	    TP_ARGS(sb),
++	TP_PROTO(struct super_block *sb),
+ 
+-	    TP_STRUCT__entry(
+-		    __field(dev_t, dev)
+-		    __field(struct ext4_sb_info *, sbi)
+-		    __field(int, count)
+-		    ),
++	TP_ARGS(sb),
+ 
+-	    TP_fast_assign(
+-		    __entry->dev = sb->s_dev;
+-		    __entry->sbi = EXT4_SB(sb);
+-		    ),
++	TP_STRUCT__entry(
++		__field(dev_t, dev)
++		__array(unsigned int, fc_ineligible_rc, EXT4_FC_REASON_MAX)
++		__field(unsigned long, fc_commits)
++		__field(unsigned long, fc_ineligible_commits)
++		__field(unsigned long, fc_numblks)
++	),
+ 
+-	    TP_printk("dev %d:%d fc ineligible reasons:\n"
+-		      "%s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d; "
+-		      "num_commits:%ld, ineligible: %ld, numblks: %ld",
+-		      MAJOR(__entry->dev), MINOR(__entry->dev),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_XATTR),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_CROSS_RENAME),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_NOMEM),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_SWAP_BOOT),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_RESIZE),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_RENAME_DIR),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_FALLOC_RANGE),
+-		      FC_REASON_NAME_STAT(EXT4_FC_REASON_INODE_JOURNAL_DATA),
+-		      __entry->sbi->s_fc_stats.fc_num_commits,
+-		      __entry->sbi->s_fc_stats.fc_ineligible_commits,
+-		      __entry->sbi->s_fc_stats.fc_numblks)
++	TP_fast_assign(
++		int i;
+ 
++		__entry->dev = sb->s_dev;
++		for (i = 0; i < EXT4_FC_REASON_MAX; i++) {
++			__entry->fc_ineligible_rc[i] =
++				EXT4_SB(sb)->s_fc_stats.fc_ineligible_reason_count[i];
++		}
++		__entry->fc_commits = EXT4_SB(sb)->s_fc_stats.fc_num_commits;
++		__entry->fc_ineligible_commits =
++			EXT4_SB(sb)->s_fc_stats.fc_ineligible_commits;
++		__entry->fc_numblks = EXT4_SB(sb)->s_fc_stats.fc_numblks;
++	),
++
++	TP_printk("dev %d,%d fc ineligible reasons:\n"
++		  "%s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u "
++		  "num_commits:%lu, ineligible: %lu, numblks: %lu",
++		  MAJOR(__entry->dev), MINOR(__entry->dev),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_XATTR),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_CROSS_RENAME),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_NOMEM),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_SWAP_BOOT),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_RESIZE),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_RENAME_DIR),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_FALLOC_RANGE),
++		  FC_REASON_NAME_STAT(EXT4_FC_REASON_INODE_JOURNAL_DATA),
++		  __entry->fc_commits, __entry->fc_ineligible_commits,
++		  __entry->fc_numblks)
+ );
+ 
+ #define DEFINE_TRACE_DENTRY_EVENT(__type)				\
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index e70c90116edae..4a3ab0ed6e062 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -83,12 +83,15 @@ enum rxrpc_call_trace {
+ 	rxrpc_call_error,
+ 	rxrpc_call_got,
+ 	rxrpc_call_got_kernel,
++	rxrpc_call_got_timer,
+ 	rxrpc_call_got_userid,
+ 	rxrpc_call_new_client,
+ 	rxrpc_call_new_service,
+ 	rxrpc_call_put,
+ 	rxrpc_call_put_kernel,
+ 	rxrpc_call_put_noqueue,
++	rxrpc_call_put_notimer,
++	rxrpc_call_put_timer,
+ 	rxrpc_call_put_userid,
+ 	rxrpc_call_queued,
+ 	rxrpc_call_queued_ref,
+@@ -278,12 +281,15 @@ enum rxrpc_tx_point {
+ 	EM(rxrpc_call_error,			"*E*") \
+ 	EM(rxrpc_call_got,			"GOT") \
+ 	EM(rxrpc_call_got_kernel,		"Gke") \
++	EM(rxrpc_call_got_timer,		"GTM") \
+ 	EM(rxrpc_call_got_userid,		"Gus") \
+ 	EM(rxrpc_call_new_client,		"NWc") \
+ 	EM(rxrpc_call_new_service,		"NWs") \
+ 	EM(rxrpc_call_put,			"PUT") \
+ 	EM(rxrpc_call_put_kernel,		"Pke") \
+-	EM(rxrpc_call_put_noqueue,		"PNQ") \
++	EM(rxrpc_call_put_noqueue,		"PnQ") \
++	EM(rxrpc_call_put_notimer,		"PnT") \
++	EM(rxrpc_call_put_timer,		"PTM") \
+ 	EM(rxrpc_call_put_userid,		"Pus") \
+ 	EM(rxrpc_call_queued,			"QUE") \
+ 	EM(rxrpc_call_queued_ref,		"QUR") \
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index b12cfceddb6e9..2b3c3f83076c8 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -2284,8 +2284,8 @@ union bpf_attr {
+  * 	Return
+  * 		The return value depends on the result of the test, and can be:
+  *
+- *		* 0, if current task belongs to the cgroup2.
+- *		* 1, if current task does not belong to the cgroup2.
++ *		* 1, if current task belongs to the cgroup2.
++ *		* 0, if current task does not belong to the cgroup2.
+  * 		* A negative error code, if an error occurred.
+  *
+  * long bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+@@ -2973,8 +2973,8 @@ union bpf_attr {
+  *
+  * 			# sysctl kernel.perf_event_max_stack=<new value>
+  * 	Return
+- * 		A non-negative value equal to or less than *size* on success,
+- * 		or a negative error in case of failure.
++ * 		The non-negative copied *buf* length equal to or less than
++ * 		*size* on success, or a negative error in case of failure.
+  *
+  * long bpf_skb_load_bytes_relative(const void *skb, u32 offset, void *to, u32 len, u32 start_header)
+  * 	Description
+@@ -4277,8 +4277,8 @@ union bpf_attr {
+  *
+  *			# sysctl kernel.perf_event_max_stack=<new value>
+  *	Return
+- *		A non-negative value equal to or less than *size* on success,
+- *		or a negative error in case of failure.
++ * 		The non-negative copied *buf* length equal to or less than
++ * 		*size* on success, or a negative error in case of failure.
+  *
+  * long bpf_load_hdr_opt(struct bpf_sock_ops *skops, void *searchby_res, u32 len, u64 flags)
+  *	Description
+diff --git a/include/uapi/linux/loop.h b/include/uapi/linux/loop.h
+index 24a1c45bd1ae2..98e60801195e2 100644
+--- a/include/uapi/linux/loop.h
++++ b/include/uapi/linux/loop.h
+@@ -45,7 +45,7 @@ struct loop_info {
+ 	unsigned long	   lo_inode; 		/* ioctl r/o */
+ 	__kernel_old_dev_t lo_rdevice; 		/* ioctl r/o */
+ 	int		   lo_offset;
+-	int		   lo_encrypt_type;
++	int		   lo_encrypt_type;		/* obsolete, ignored */
+ 	int		   lo_encrypt_key_size; 	/* ioctl w/o */
+ 	int		   lo_flags;
+ 	char		   lo_name[LO_NAME_SIZE];
+@@ -61,7 +61,7 @@ struct loop_info64 {
+ 	__u64		   lo_offset;
+ 	__u64		   lo_sizelimit;/* bytes, 0 == max available */
+ 	__u32		   lo_number;			/* ioctl r/o */
+-	__u32		   lo_encrypt_type;
++	__u32		   lo_encrypt_type;		/* obsolete, ignored */
+ 	__u32		   lo_encrypt_key_size;		/* ioctl w/o */
+ 	__u32		   lo_flags;
+ 	__u8		   lo_file_name[LO_NAME_SIZE];
+diff --git a/include/uapi/linux/omap3isp.h b/include/uapi/linux/omap3isp.h
+index 87b55755f4ffe..d9db7ad438908 100644
+--- a/include/uapi/linux/omap3isp.h
++++ b/include/uapi/linux/omap3isp.h
+@@ -162,6 +162,7 @@ struct omap3isp_h3a_aewb_config {
+  * struct omap3isp_stat_data - Statistic data sent to or received from user
+  * @ts: Timestamp of returned framestats.
+  * @buf: Pointer to pass to user.
++ * @buf_size: Size of buffer.
+  * @frame_number: Frame number of requested stats.
+  * @cur_frame: Current frame number being processed.
+  * @config_counter: Number of the configuration associated with the data.
+@@ -176,10 +177,12 @@ struct omap3isp_stat_data {
+ 	struct timeval ts;
+ #endif
+ 	void __user *buf;
+-	__u32 buf_size;
+-	__u16 frame_number;
+-	__u16 cur_frame;
+-	__u16 config_counter;
++	__struct_group(/* no tag */, frame, /* no attrs */,
++		__u32 buf_size;
++		__u16 frame_number;
++		__u16 cur_frame;
++		__u16 config_counter;
++	);
+ };
+ 
+ #ifdef __KERNEL__
+@@ -189,10 +192,12 @@ struct omap3isp_stat_data_time32 {
+ 		__s32	tv_usec;
+ 	} ts;
+ 	__u32 buf;
+-	__u32 buf_size;
+-	__u16 frame_number;
+-	__u16 cur_frame;
+-	__u16 config_counter;
++	__struct_group(/* no tag */, frame, /* no attrs */,
++		__u32 buf_size;
++		__u16 frame_number;
++		__u16 cur_frame;
++		__u16 config_counter;
++	);
+ };
+ #endif
+ 
+diff --git a/include/uapi/linux/rfkill.h b/include/uapi/linux/rfkill.h
+index 9b77cfc42efa3..283c5a7b3f2c8 100644
+--- a/include/uapi/linux/rfkill.h
++++ b/include/uapi/linux/rfkill.h
+@@ -159,8 +159,16 @@ struct rfkill_event_ext {
+  * old behaviour for all userspace, unless it explicitly opts in to the
+  * rules outlined here by using the new &struct rfkill_event_ext.
+  *
+- * Userspace using &struct rfkill_event_ext must adhere to the following
+- * rules
++ * Additionally, some other userspace (bluez, g-s-d) was reading with a
++ * large size but as streaming reads rather than message-based, or with
++ * too strict checks for the returned size. So eventually, we completely
++ * reverted this, and extended messages need to be opted in to by using
++ * an ioctl:
++ *
++ *  ioctl(fd, RFKILL_IOCTL_MAX_SIZE, sizeof(struct rfkill_event_ext));
++ *
++ * Userspace using &struct rfkill_event_ext and the ioctl must adhere to
++ * the following rules:
+  *
+  * 1. accept short writes, optionally using them to detect that it's
+  *    running on an older kernel;
+@@ -175,6 +183,8 @@ struct rfkill_event_ext {
+ #define RFKILL_IOC_MAGIC	'R'
+ #define RFKILL_IOC_NOINPUT	1
+ #define RFKILL_IOCTL_NOINPUT	_IO(RFKILL_IOC_MAGIC, RFKILL_IOC_NOINPUT)
++#define RFKILL_IOC_MAX_SIZE	2
++#define RFKILL_IOCTL_MAX_SIZE	_IOW(RFKILL_IOC_MAGIC, RFKILL_IOC_EXT_SIZE, __u32)
+ 
+ /* and that's all userspace gets */
+ 
+diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
+index 9a402fdb60e97..77ee207623a9b 100644
+--- a/include/uapi/linux/rseq.h
++++ b/include/uapi/linux/rseq.h
+@@ -105,23 +105,11 @@ struct rseq {
+ 	 * Read and set by the kernel. Set by user-space with single-copy
+ 	 * atomicity semantics. This field should only be updated by the
+ 	 * thread which registered this data structure. Aligned on 64-bit.
++	 *
++	 * 32-bit architectures should update the low order bits of the
++	 * rseq_cs field, leaving the high order bits initialized to 0.
+ 	 */
+-	union {
+-		__u64 ptr64;
+-#ifdef __LP64__
+-		__u64 ptr;
+-#else
+-		struct {
+-#if (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || defined(__BIG_ENDIAN)
+-			__u32 padding;		/* Initialized to zero. */
+-			__u32 ptr32;
+-#else /* LITTLE */
+-			__u32 ptr32;
+-			__u32 padding;		/* Initialized to zero. */
+-#endif /* ENDIAN */
+-		} ptr;
+-#endif
+-	} rseq_cs;
++	__u64 rseq_cs;
+ 
+ 	/*
+ 	 * Restartable sequences flags field.
+diff --git a/include/uapi/linux/serial_core.h b/include/uapi/linux/serial_core.h
+index c4042dcfdc0c3..8885e69178bd7 100644
+--- a/include/uapi/linux/serial_core.h
++++ b/include/uapi/linux/serial_core.h
+@@ -68,6 +68,9 @@
+ /* NVIDIA Tegra Combined UART */
+ #define PORT_TEGRA_TCU	41
+ 
++/* ASPEED AST2x00 virtual UART */
++#define PORT_ASPEED_VUART	42
++
+ /* Intel EG20 */
+ #define PORT_PCH_8LINE	44
+ #define PORT_PCH_2LINE	45
+diff --git a/kernel/audit.h b/kernel/audit.h
+index c4498090a5bd6..58b66543b4d57 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -201,6 +201,10 @@ struct audit_context {
+ 		struct {
+ 			char			*name;
+ 		} module;
++		struct {
++			struct audit_ntp_data	ntp_data;
++			struct timespec64	tk_injoffset;
++		} time;
+ 	};
+ 	int fds[2];
+ 	struct audit_proctitle proctitle;
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 2dc94a0e34474..321904af2311e 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1331,6 +1331,53 @@ static void audit_log_fcaps(struct audit_buffer *ab, struct audit_names *name)
+ 			 from_kuid(&init_user_ns, name->fcap.rootid));
+ }
+ 
++static void audit_log_time(struct audit_context *context, struct audit_buffer **ab)
++{
++	const struct audit_ntp_data *ntp = &context->time.ntp_data;
++	const struct timespec64 *tk = &context->time.tk_injoffset;
++	static const char * const ntp_name[] = {
++		"offset",
++		"freq",
++		"status",
++		"tai",
++		"tick",
++		"adjust",
++	};
++	int type;
++
++	if (context->type == AUDIT_TIME_ADJNTPVAL) {
++		for (type = 0; type < AUDIT_NTP_NVALS; type++) {
++			if (ntp->vals[type].newval != ntp->vals[type].oldval) {
++				if (!*ab) {
++					*ab = audit_log_start(context,
++							GFP_KERNEL,
++							AUDIT_TIME_ADJNTPVAL);
++					if (!*ab)
++						return;
++				}
++				audit_log_format(*ab, "op=%s old=%lli new=%lli",
++						 ntp_name[type],
++						 ntp->vals[type].oldval,
++						 ntp->vals[type].newval);
++				audit_log_end(*ab);
++				*ab = NULL;
++			}
++		}
++	}
++	if (tk->tv_sec != 0 || tk->tv_nsec != 0) {
++		if (!*ab) {
++			*ab = audit_log_start(context, GFP_KERNEL,
++					      AUDIT_TIME_INJOFFSET);
++			if (!*ab)
++				return;
++		}
++		audit_log_format(*ab, "sec=%lli nsec=%li",
++				 (long long)tk->tv_sec, tk->tv_nsec);
++		audit_log_end(*ab);
++		*ab = NULL;
++	}
++}
++
+ static void show_special(struct audit_context *context, int *call_panic)
+ {
+ 	struct audit_buffer *ab;
+@@ -1445,6 +1492,11 @@ static void show_special(struct audit_context *context, int *call_panic)
+ 			audit_log_format(ab, "(null)");
+ 
+ 		break;
++	case AUDIT_TIME_ADJNTPVAL:
++	case AUDIT_TIME_INJOFFSET:
++		/* this call deviates from the rest, eating the buffer */
++		audit_log_time(context, &ab);
++		break;
+ 	}
+ 	audit_log_end(ab);
+ }
+@@ -2840,31 +2892,26 @@ void __audit_fanotify(unsigned int response)
+ 
+ void __audit_tk_injoffset(struct timespec64 offset)
+ {
+-	audit_log(audit_context(), GFP_KERNEL, AUDIT_TIME_INJOFFSET,
+-		  "sec=%lli nsec=%li",
+-		  (long long)offset.tv_sec, offset.tv_nsec);
+-}
+-
+-static void audit_log_ntp_val(const struct audit_ntp_data *ad,
+-			      const char *op, enum audit_ntp_type type)
+-{
+-	const struct audit_ntp_val *val = &ad->vals[type];
+-
+-	if (val->newval == val->oldval)
+-		return;
++	struct audit_context *context = audit_context();
+ 
+-	audit_log(audit_context(), GFP_KERNEL, AUDIT_TIME_ADJNTPVAL,
+-		  "op=%s old=%lli new=%lli", op, val->oldval, val->newval);
++	/* only set type if not already set by NTP */
++	if (!context->type)
++		context->type = AUDIT_TIME_INJOFFSET;
++	memcpy(&context->time.tk_injoffset, &offset, sizeof(offset));
+ }
+ 
+ void __audit_ntp_log(const struct audit_ntp_data *ad)
+ {
+-	audit_log_ntp_val(ad, "offset",	AUDIT_NTP_OFFSET);
+-	audit_log_ntp_val(ad, "freq",	AUDIT_NTP_FREQ);
+-	audit_log_ntp_val(ad, "status",	AUDIT_NTP_STATUS);
+-	audit_log_ntp_val(ad, "tai",	AUDIT_NTP_TAI);
+-	audit_log_ntp_val(ad, "tick",	AUDIT_NTP_TICK);
+-	audit_log_ntp_val(ad, "adjust",	AUDIT_NTP_ADJUST);
++	struct audit_context *context = audit_context();
++	int type;
++
++	for (type = 0; type < AUDIT_NTP_NVALS; type++)
++		if (ad->vals[type].newval != ad->vals[type].oldval) {
++			/* unconditionally set type, overwriting TK */
++			context->type = AUDIT_TIME_ADJNTPVAL;
++			memcpy(&context->time.ntp_data, ad, sizeof(*ad));
++			break;
++		}
+ }
+ 
+ void __audit_log_nfcfg(const char *name, u8 af, unsigned int nentries,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index c9da250fee38c..ebbb2f4d3b9cc 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -401,6 +401,9 @@ static struct btf_type btf_void;
+ static int btf_resolve(struct btf_verifier_env *env,
+ 		       const struct btf_type *t, u32 type_id);
+ 
++static int btf_func_check(struct btf_verifier_env *env,
++			  const struct btf_type *t);
++
+ static bool btf_type_is_modifier(const struct btf_type *t)
+ {
+ 	/* Some of them is not strictly a C modifier
+@@ -576,6 +579,7 @@ static bool btf_type_needs_resolve(const struct btf_type *t)
+ 	       btf_type_is_struct(t) ||
+ 	       btf_type_is_array(t) ||
+ 	       btf_type_is_var(t) ||
++	       btf_type_is_func(t) ||
+ 	       btf_type_is_decl_tag(t) ||
+ 	       btf_type_is_datasec(t);
+ }
+@@ -3521,9 +3525,24 @@ static s32 btf_func_check_meta(struct btf_verifier_env *env,
+ 	return 0;
+ }
+ 
++static int btf_func_resolve(struct btf_verifier_env *env,
++			    const struct resolve_vertex *v)
++{
++	const struct btf_type *t = v->t;
++	u32 next_type_id = t->type;
++	int err;
++
++	err = btf_func_check(env, t);
++	if (err)
++		return err;
++
++	env_stack_pop_resolved(env, next_type_id, 0);
++	return 0;
++}
++
+ static struct btf_kind_operations func_ops = {
+ 	.check_meta = btf_func_check_meta,
+-	.resolve = btf_df_resolve,
++	.resolve = btf_func_resolve,
+ 	.check_member = btf_df_check_member,
+ 	.check_kflag_member = btf_df_check_kflag_member,
+ 	.log_details = btf_ref_type_log,
+@@ -4143,7 +4162,7 @@ static bool btf_resolve_valid(struct btf_verifier_env *env,
+ 		return !btf_resolved_type_id(btf, type_id) &&
+ 		       !btf_resolved_type_size(btf, type_id);
+ 
+-	if (btf_type_is_decl_tag(t))
++	if (btf_type_is_decl_tag(t) || btf_type_is_func(t))
+ 		return btf_resolved_type_id(btf, type_id) &&
+ 		       !btf_resolved_type_size(btf, type_id);
+ 
+@@ -4233,12 +4252,6 @@ static int btf_check_all_types(struct btf_verifier_env *env)
+ 			if (err)
+ 				return err;
+ 		}
+-
+-		if (btf_type_is_func(t)) {
+-			err = btf_func_check(env, t);
+-			if (err)
+-				return err;
+-		}
+ 	}
+ 
+ 	return 0;
+@@ -6189,12 +6202,17 @@ bool btf_id_set_contains(const struct btf_id_set *set, u32 id)
+ 	return bsearch(&id, set->ids, set->cnt, sizeof(u32), btf_id_cmp_func) != NULL;
+ }
+ 
++enum {
++	BTF_MODULE_F_LIVE = (1 << 0),
++};
++
+ #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
+ struct btf_module {
+ 	struct list_head list;
+ 	struct module *module;
+ 	struct btf *btf;
+ 	struct bin_attribute *sysfs_attr;
++	int flags;
+ };
+ 
+ static LIST_HEAD(btf_modules);
+@@ -6220,7 +6238,8 @@ static int btf_module_notify(struct notifier_block *nb, unsigned long op,
+ 	int err = 0;
+ 
+ 	if (mod->btf_data_size == 0 ||
+-	    (op != MODULE_STATE_COMING && op != MODULE_STATE_GOING))
++	    (op != MODULE_STATE_COMING && op != MODULE_STATE_LIVE &&
++	     op != MODULE_STATE_GOING))
+ 		goto out;
+ 
+ 	switch (op) {
+@@ -6277,6 +6296,17 @@ static int btf_module_notify(struct notifier_block *nb, unsigned long op,
+ 			btf_mod->sysfs_attr = attr;
+ 		}
+ 
++		break;
++	case MODULE_STATE_LIVE:
++		mutex_lock(&btf_module_mutex);
++		list_for_each_entry_safe(btf_mod, tmp, &btf_modules, list) {
++			if (btf_mod->module != module)
++				continue;
++
++			btf_mod->flags |= BTF_MODULE_F_LIVE;
++			break;
++		}
++		mutex_unlock(&btf_module_mutex);
+ 		break;
+ 	case MODULE_STATE_GOING:
+ 		mutex_lock(&btf_module_mutex);
+@@ -6323,7 +6353,12 @@ struct module *btf_try_get_module(const struct btf *btf)
+ 		if (btf_mod->btf != btf)
+ 			continue;
+ 
+-		if (try_module_get(btf_mod->module))
++		/* We must only consider module whose __init routine has
++		 * finished, hence we must check for BTF_MODULE_F_LIVE flag,
++		 * which is set from the notifier callback for
++		 * MODULE_STATE_LIVE.
++		 */
++		if ((btf_mod->flags & BTF_MODULE_F_LIVE) && try_module_get(btf_mod->module))
+ 			res = btf_mod->module;
+ 
+ 		break;
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 0dcaed4d3f4ce..fc0f77f91224b 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -219,7 +219,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
+ }
+ 
+ static struct perf_callchain_entry *
+-get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
++get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
+ {
+ #ifdef CONFIG_STACKTRACE
+ 	struct perf_callchain_entry *entry;
+@@ -230,9 +230,8 @@ get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
+ 	if (!entry)
+ 		return NULL;
+ 
+-	entry->nr = init_nr +
+-		stack_trace_save_tsk(task, (unsigned long *)(entry->ip + init_nr),
+-				     sysctl_perf_event_max_stack - init_nr, 0);
++	entry->nr = stack_trace_save_tsk(task, (unsigned long *)entry->ip,
++					 max_depth, 0);
+ 
+ 	/* stack_trace_save_tsk() works on unsigned long array, while
+ 	 * perf_callchain_entry uses u64 array. For 32-bit systems, it is
+@@ -244,7 +243,7 @@ get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
+ 		int i;
+ 
+ 		/* copy data from the end to avoid using extra buffer */
+-		for (i = entry->nr - 1; i >= (int)init_nr; i--)
++		for (i = entry->nr - 1; i >= 0; i--)
+ 			to[i] = (u64)(from[i]);
+ 	}
+ 
+@@ -261,27 +260,19 @@ static long __bpf_get_stackid(struct bpf_map *map,
+ {
+ 	struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
+ 	struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
+-	u32 max_depth = map->value_size / stack_map_data_size(map);
+-	/* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
+-	u32 init_nr = sysctl_perf_event_max_stack - max_depth;
+ 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	u32 hash, id, trace_nr, trace_len;
+ 	bool user = flags & BPF_F_USER_STACK;
+ 	u64 *ips;
+ 	bool hash_matches;
+ 
+-	/* get_perf_callchain() guarantees that trace->nr >= init_nr
+-	 * and trace-nr <= sysctl_perf_event_max_stack, so trace_nr <= max_depth
+-	 */
+-	trace_nr = trace->nr - init_nr;
+-
+-	if (trace_nr <= skip)
++	if (trace->nr <= skip)
+ 		/* skipping more than usable stack trace */
+ 		return -EFAULT;
+ 
+-	trace_nr -= skip;
++	trace_nr = trace->nr - skip;
+ 	trace_len = trace_nr * sizeof(u64);
+-	ips = trace->ip + skip + init_nr;
++	ips = trace->ip + skip;
+ 	hash = jhash2((u32 *)ips, trace_len / sizeof(u32), 0);
+ 	id = hash & (smap->n_buckets - 1);
+ 	bucket = READ_ONCE(smap->buckets[id]);
+@@ -338,8 +329,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
+ 	   u64, flags)
+ {
+ 	u32 max_depth = map->value_size / stack_map_data_size(map);
+-	/* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
+-	u32 init_nr = sysctl_perf_event_max_stack - max_depth;
++	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	bool user = flags & BPF_F_USER_STACK;
+ 	struct perf_callchain_entry *trace;
+ 	bool kernel = !user;
+@@ -348,8 +338,12 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
+ 			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
+ 		return -EINVAL;
+ 
+-	trace = get_perf_callchain(regs, init_nr, kernel, user,
+-				   sysctl_perf_event_max_stack, false, false);
++	max_depth += skip;
++	if (max_depth > sysctl_perf_event_max_stack)
++		max_depth = sysctl_perf_event_max_stack;
++
++	trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
++				   false, false);
+ 
+ 	if (unlikely(!trace))
+ 		/* couldn't fetch the stack trace */
+@@ -440,7 +434,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 			    struct perf_callchain_entry *trace_in,
+ 			    void *buf, u32 size, u64 flags)
+ {
+-	u32 init_nr, trace_nr, copy_len, elem_size, num_elem;
++	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
+ 	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
+ 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	bool user = flags & BPF_F_USER_STACK;
+@@ -465,30 +459,28 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 		goto err_fault;
+ 
+ 	num_elem = size / elem_size;
+-	if (sysctl_perf_event_max_stack < num_elem)
+-		init_nr = 0;
+-	else
+-		init_nr = sysctl_perf_event_max_stack - num_elem;
++	max_depth = num_elem + skip;
++	if (sysctl_perf_event_max_stack < max_depth)
++		max_depth = sysctl_perf_event_max_stack;
+ 
+ 	if (trace_in)
+ 		trace = trace_in;
+ 	else if (kernel && task)
+-		trace = get_callchain_entry_for_task(task, init_nr);
++		trace = get_callchain_entry_for_task(task, max_depth);
+ 	else
+-		trace = get_perf_callchain(regs, init_nr, kernel, user,
+-					   sysctl_perf_event_max_stack,
++		trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
+ 					   false, false);
+ 	if (unlikely(!trace))
+ 		goto err_fault;
+ 
+-	trace_nr = trace->nr - init_nr;
+-	if (trace_nr < skip)
++	if (trace->nr < skip)
+ 		goto err_fault;
+ 
+-	trace_nr -= skip;
++	trace_nr = trace->nr - skip;
+ 	trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
+ 	copy_len = trace_nr * elem_size;
+-	ips = trace->ip + skip + init_nr;
++
++	ips = trace->ip + skip;
+ 	if (user && user_build_id)
+ 		stack_map_get_build_id_offset(buf, ips, trace_nr, user);
+ 	else
+diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
+index df2bface866ef..85cb51c4a17e6 100644
+--- a/kernel/debug/kdb/kdb_support.c
++++ b/kernel/debug/kdb/kdb_support.c
+@@ -291,7 +291,7 @@ int kdb_getarea_size(void *res, unsigned long addr, size_t size)
+  */
+ int kdb_putarea_size(unsigned long addr, void *res, size_t size)
+ {
+-	int ret = copy_from_kernel_nofault((char *)addr, (char *)res, size);
++	int ret = copy_to_kernel_nofault((char *)addr, (char *)res, size);
+ 	if (ret) {
+ 		if (!KDB_STATE(SUPPRESS)) {
+ 			kdb_func_printf("Bad address 0x%lx\n", addr);
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 7a14ca29c3778..f8ff598596b85 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -927,7 +927,7 @@ static __init int dma_debug_cmdline(char *str)
+ 		global_disable = true;
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static __init int dma_debug_entries_cmdline(char *str)
+@@ -936,7 +936,7 @@ static __init int dma_debug_entries_cmdline(char *str)
+ 		return -EINVAL;
+ 	if (!get_option(&str, &nr_prealloc_entries))
+ 		nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("dma_debug=", dma_debug_cmdline);
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index e7f138c7636cb..a1bbe0b914f14 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -655,13 +655,10 @@ void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+ void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+ 		size_t size, enum dma_data_direction dir)
+ {
+-	/*
+-	 * Unconditional bounce is necessary to avoid corruption on
+-	 * sync_*_for_cpu or dma_ummap_* when the device didn't overwrite
+-	 * the whole lengt of the bounce buffer.
+-	 */
+-	swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+-	BUG_ON(!valid_dma_direction(dir));
++	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
++		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
++	else
++		BUG_ON(dir != DMA_FROM_DEVICE);
+ }
+ 
+ void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e4a43d475ba6a..a0e196f973b95 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10560,8 +10560,11 @@ perf_event_parse_addr_filter(struct perf_event *event, char *fstr,
+ 			}
+ 
+ 			/* ready to consume more filters */
++			kfree(filename);
++			filename = NULL;
+ 			state = IF_STATE_ACTION;
+ 			filter = NULL;
++			kernel = 0;
+ 		}
+ 	}
+ 
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index 335d988bd8111..c0789383807b9 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -190,7 +190,7 @@ static int klp_find_object_symbol(const char *objname, const char *name,
+ 	return -EINVAL;
+ }
+ 
+-static int klp_resolve_symbols(Elf64_Shdr *sechdrs, const char *strtab,
++static int klp_resolve_symbols(Elf_Shdr *sechdrs, const char *strtab,
+ 			       unsigned int symndx, Elf_Shdr *relasec,
+ 			       const char *sec_objname)
+ {
+@@ -218,7 +218,7 @@ static int klp_resolve_symbols(Elf64_Shdr *sechdrs, const char *strtab,
+ 	relas = (Elf_Rela *) relasec->sh_addr;
+ 	/* For each rela in this klp relocation section */
+ 	for (i = 0; i < relasec->sh_size / sizeof(Elf_Rela); i++) {
+-		sym = (Elf64_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
++		sym = (Elf_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
+ 		if (sym->st_shndx != SHN_LIVEPATCH) {
+ 			pr_err("symbol %s is not marked as a livepatch symbol\n",
+ 			       strtab + sym->st_name);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index d48cd608376ae..55570f52de5a6 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -183,11 +183,9 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
+ static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
+ unsigned long nr_lock_classes;
+ unsigned long nr_zapped_classes;
+-#ifndef CONFIG_DEBUG_LOCKDEP
+-static
+-#endif
++unsigned long max_lock_class_idx;
+ struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+-static DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
++DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
+ 
+ static inline struct lock_class *hlock_class(struct held_lock *hlock)
+ {
+@@ -338,7 +336,7 @@ static inline void lock_release_holdtime(struct held_lock *hlock)
+  * elements. These elements are linked together by the lock_entry member in
+  * struct lock_class.
+  */
+-LIST_HEAD(all_lock_classes);
++static LIST_HEAD(all_lock_classes);
+ static LIST_HEAD(free_lock_classes);
+ 
+ /**
+@@ -1252,6 +1250,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ 	struct lockdep_subclass_key *key;
+ 	struct hlist_head *hash_head;
+ 	struct lock_class *class;
++	int idx;
+ 
+ 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+ 
+@@ -1317,6 +1316,9 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ 	 * of classes.
+ 	 */
+ 	list_move_tail(&class->lock_entry, &all_lock_classes);
++	idx = class - lock_classes;
++	if (idx > max_lock_class_idx)
++		max_lock_class_idx = idx;
+ 
+ 	if (verbose(class)) {
+ 		graph_unlock();
+@@ -5998,6 +6000,8 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
+ 		WRITE_ONCE(class->name, NULL);
+ 		nr_lock_classes--;
+ 		__clear_bit(class - lock_classes, lock_classes_in_use);
++		if (class - lock_classes == max_lock_class_idx)
++			max_lock_class_idx--;
+ 	} else {
+ 		WARN_ONCE(true, "%s() failed for class %s\n", __func__,
+ 			  class->name);
+@@ -6288,7 +6292,13 @@ void lockdep_reset_lock(struct lockdep_map *lock)
+ 		lockdep_reset_lock_reg(lock);
+ }
+ 
+-/* Unregister a dynamically allocated key. */
++/*
++ * Unregister a dynamically allocated key.
++ *
++ * Unlike lockdep_register_key(), a search is always done to find a matching
++ * key irrespective of debug_locks to avoid potential invalid access to freed
++ * memory in lock_class entry.
++ */
+ void lockdep_unregister_key(struct lock_class_key *key)
+ {
+ 	struct hlist_head *hash_head = keyhashentry(key);
+@@ -6303,10 +6313,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ 		return;
+ 
+ 	raw_local_irq_save(flags);
+-	if (!graph_lock())
+-		goto out_irq;
++	lockdep_lock();
+ 
+-	pf = get_pending_free();
+ 	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+ 		if (k == key) {
+ 			hlist_del_rcu(&k->hash_entry);
+@@ -6314,11 +6322,13 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ 			break;
+ 		}
+ 	}
+-	WARN_ON_ONCE(!found);
+-	__lockdep_free_key_range(pf, key, 1);
+-	call_rcu_zapped(pf);
+-	graph_unlock();
+-out_irq:
++	WARN_ON_ONCE(!found && debug_locks);
++	if (found) {
++		pf = get_pending_free();
++		__lockdep_free_key_range(pf, key, 1);
++		call_rcu_zapped(pf);
++	}
++	lockdep_unlock();
+ 	raw_local_irq_restore(flags);
+ 
+ 	/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
+diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
+index ecb8662e7a4ed..bbe9000260d02 100644
+--- a/kernel/locking/lockdep_internals.h
++++ b/kernel/locking/lockdep_internals.h
+@@ -121,7 +121,6 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ =
+ 
+ #define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS*5)
+ 
+-extern struct list_head all_lock_classes;
+ extern struct lock_chain lock_chains[];
+ 
+ #define LOCK_USAGE_CHARS (2*XXX_LOCK_USAGE_STATES + 1)
+@@ -151,6 +150,10 @@ extern unsigned int nr_large_chain_blocks;
+ 
+ extern unsigned int max_lockdep_depth;
+ extern unsigned int max_bfs_queue_depth;
++extern unsigned long max_lock_class_idx;
++
++extern struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
++extern unsigned long lock_classes_in_use[];
+ 
+ #ifdef CONFIG_PROVE_LOCKING
+ extern unsigned long lockdep_count_forward_deps(struct lock_class *);
+@@ -205,7 +208,6 @@ struct lockdep_stats {
+ };
+ 
+ DECLARE_PER_CPU(struct lockdep_stats, lockdep_stats);
+-extern struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+ 
+ #define __debug_atomic_inc(ptr)					\
+ 	this_cpu_inc(lockdep_stats.ptr);
+diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
+index b8d9a050c337a..15fdc7fa5c688 100644
+--- a/kernel/locking/lockdep_proc.c
++++ b/kernel/locking/lockdep_proc.c
+@@ -24,14 +24,33 @@
+ 
+ #include "lockdep_internals.h"
+ 
++/*
++ * Since iteration of lock_classes is done without holding the lockdep lock,
++ * it is not safe to iterate all_lock_classes list directly as the iteration
++ * may branch off to free_lock_classes or the zapped list. Iteration is done
++ * directly on the lock_classes array by checking the lock_classes_in_use
++ * bitmap and max_lock_class_idx.
++ */
++#define iterate_lock_classes(idx, class)				\
++	for (idx = 0, class = lock_classes; idx <= max_lock_class_idx;	\
++	     idx++, class++)
++
+ static void *l_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	return seq_list_next(v, &all_lock_classes, pos);
++	struct lock_class *class = v;
++
++	++class;
++	*pos = class - lock_classes;
++	return (*pos > max_lock_class_idx) ? NULL : class;
+ }
+ 
+ static void *l_start(struct seq_file *m, loff_t *pos)
+ {
+-	return seq_list_start_head(&all_lock_classes, *pos);
++	unsigned long idx = *pos;
++
++	if (idx > max_lock_class_idx)
++		return NULL;
++	return lock_classes + idx;
+ }
+ 
+ static void l_stop(struct seq_file *m, void *v)
+@@ -57,14 +76,16 @@ static void print_name(struct seq_file *m, struct lock_class *class)
+ 
+ static int l_show(struct seq_file *m, void *v)
+ {
+-	struct lock_class *class = list_entry(v, struct lock_class, lock_entry);
++	struct lock_class *class = v;
+ 	struct lock_list *entry;
+ 	char usage[LOCK_USAGE_CHARS];
++	int idx = class - lock_classes;
+ 
+-	if (v == &all_lock_classes) {
++	if (v == lock_classes)
+ 		seq_printf(m, "all lock classes:\n");
++
++	if (!test_bit(idx, lock_classes_in_use))
+ 		return 0;
+-	}
+ 
+ 	seq_printf(m, "%p", class->key);
+ #ifdef CONFIG_DEBUG_LOCKDEP
+@@ -220,8 +241,11 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+ 
+ #ifdef CONFIG_PROVE_LOCKING
+ 	struct lock_class *class;
++	unsigned long idx;
+ 
+-	list_for_each_entry(class, &all_lock_classes, lock_entry) {
++	iterate_lock_classes(idx, class) {
++		if (!test_bit(idx, lock_classes_in_use))
++			continue;
+ 
+ 		if (class->usage_mask == 0)
+ 			nr_unused++;
+@@ -254,6 +278,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+ 
+ 		sum_forward_deps += lockdep_count_forward_deps(class);
+ 	}
++
+ #ifdef CONFIG_DEBUG_LOCKDEP
+ 	DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused);
+ #endif
+@@ -345,6 +370,8 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+ 	seq_printf(m, " max bfs queue depth:           %11u\n",
+ 			max_bfs_queue_depth);
+ #endif
++	seq_printf(m, " max lock class index:          %11lu\n",
++			max_lock_class_idx);
+ 	lockdep_stats_debug_show(m);
+ 	seq_printf(m, " debug_locks:                   %11u\n",
+ 			debug_locks);
+@@ -622,12 +649,16 @@ static int lock_stat_open(struct inode *inode, struct file *file)
+ 	if (!res) {
+ 		struct lock_stat_data *iter = data->stats;
+ 		struct seq_file *m = file->private_data;
++		unsigned long idx;
+ 
+-		list_for_each_entry(class, &all_lock_classes, lock_entry) {
++		iterate_lock_classes(idx, class) {
++			if (!test_bit(idx, lock_classes_in_use))
++				continue;
+ 			iter->class = class;
+ 			iter->stats = lock_stats(class);
+ 			iter++;
+ 		}
++
+ 		data->iter_end = iter;
+ 
+ 		sort(data->stats, data->iter_end - data->stats,
+@@ -645,6 +676,7 @@ static ssize_t lock_stat_write(struct file *file, const char __user *buf,
+ 			       size_t count, loff_t *ppos)
+ {
+ 	struct lock_class *class;
++	unsigned long idx;
+ 	char c;
+ 
+ 	if (count) {
+@@ -654,8 +686,11 @@ static ssize_t lock_stat_write(struct file *file, const char __user *buf,
+ 		if (c != '0')
+ 			return count;
+ 
+-		list_for_each_entry(class, &all_lock_classes, lock_entry)
++		iterate_lock_classes(idx, class) {
++			if (!test_bit(idx, lock_classes_in_use))
++				continue;
+ 			clear_lock_stats(class);
++		}
+ 	}
+ 	return count;
+ }
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index e6af502c2fd77..08780a466fdf7 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -1328,7 +1328,7 @@ static int __init resumedelay_setup(char *str)
+ 	int rc = kstrtouint(str, 0, &resume_delay);
+ 
+ 	if (rc)
+-		return rc;
++		pr_warn("resumedelay: bad option string '%s'\n", str);
+ 	return 1;
+ }
+ 
+diff --git a/kernel/power/suspend_test.c b/kernel/power/suspend_test.c
+index d20526c5be15b..b663a97f5867a 100644
+--- a/kernel/power/suspend_test.c
++++ b/kernel/power/suspend_test.c
+@@ -157,22 +157,22 @@ static int __init setup_test_suspend(char *value)
+ 	value++;
+ 	suspend_type = strsep(&value, ",");
+ 	if (!suspend_type)
+-		return 0;
++		return 1;
+ 
+ 	repeat = strsep(&value, ",");
+ 	if (repeat) {
+ 		if (kstrtou32(repeat, 0, &test_repeat_count_max))
+-			return 0;
++			return 1;
+ 	}
+ 
+ 	for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
+ 		if (!strcmp(pm_labels[i], suspend_type)) {
+ 			test_state_label = pm_labels[i];
+-			return 0;
++			return 1;
+ 		}
+ 
+ 	printk(warn_bad_state, suspend_type);
+-	return 0;
++	return 1;
+ }
+ __setup("test_suspend", setup_test_suspend);
+ 
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 57b132b658e15..4187946a28f9e 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -146,8 +146,10 @@ static int __control_devkmsg(char *str)
+ 
+ static int __init control_devkmsg(char *str)
+ {
+-	if (__control_devkmsg(str) < 0)
++	if (__control_devkmsg(str) < 0) {
++		pr_warn("printk.devkmsg: bad option string '%s'\n", str);
+ 		return 1;
++	}
+ 
+ 	/*
+ 	 * Set sysctl string accordingly:
+@@ -166,7 +168,7 @@ static int __init control_devkmsg(char *str)
+ 	 */
+ 	devkmsg_log |= DEVKMSG_LOG_MASK_LOCK;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("printk.devkmsg=", control_devkmsg);
+ 
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index f8589bf8d7dce..516ad5e65849f 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -371,6 +371,26 @@ bool ptrace_may_access(struct task_struct *task, unsigned int mode)
+ 	return !err;
+ }
+ 
++static int check_ptrace_options(unsigned long data)
++{
++	if (data & ~(unsigned long)PTRACE_O_MASK)
++		return -EINVAL;
++
++	if (unlikely(data & PTRACE_O_SUSPEND_SECCOMP)) {
++		if (!IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) ||
++		    !IS_ENABLED(CONFIG_SECCOMP))
++			return -EINVAL;
++
++		if (!capable(CAP_SYS_ADMIN))
++			return -EPERM;
++
++		if (seccomp_mode(&current->seccomp) != SECCOMP_MODE_DISABLED ||
++		    current->ptrace & PT_SUSPEND_SECCOMP)
++			return -EPERM;
++	}
++	return 0;
++}
++
+ static int ptrace_attach(struct task_struct *task, long request,
+ 			 unsigned long addr,
+ 			 unsigned long flags)
+@@ -382,8 +402,16 @@ static int ptrace_attach(struct task_struct *task, long request,
+ 	if (seize) {
+ 		if (addr != 0)
+ 			goto out;
++		/*
++		 * This duplicates the check in check_ptrace_options() because
++		 * ptrace_attach() and ptrace_setoptions() have historically
++		 * used different error codes for unknown ptrace options.
++		 */
+ 		if (flags & ~(unsigned long)PTRACE_O_MASK)
+ 			goto out;
++		retval = check_ptrace_options(flags);
++		if (retval)
++			return retval;
+ 		flags = PT_PTRACED | PT_SEIZED | (flags << PT_OPT_FLAG_SHIFT);
+ 	} else {
+ 		flags = PT_PTRACED;
+@@ -656,22 +684,11 @@ int ptrace_writedata(struct task_struct *tsk, char __user *src, unsigned long ds
+ static int ptrace_setoptions(struct task_struct *child, unsigned long data)
+ {
+ 	unsigned flags;
++	int ret;
+ 
+-	if (data & ~(unsigned long)PTRACE_O_MASK)
+-		return -EINVAL;
+-
+-	if (unlikely(data & PTRACE_O_SUSPEND_SECCOMP)) {
+-		if (!IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) ||
+-		    !IS_ENABLED(CONFIG_SECCOMP))
+-			return -EINVAL;
+-
+-		if (!capable(CAP_SYS_ADMIN))
+-			return -EPERM;
+-
+-		if (seccomp_mode(&current->seccomp) != SECCOMP_MODE_DISABLED ||
+-		    current->ptrace & PT_SUSPEND_SECCOMP)
+-			return -EPERM;
+-	}
++	ret = check_ptrace_options(data);
++	if (ret)
++		return ret;
+ 
+ 	/* Avoid intermediate state when all opts are cleared */
+ 	flags = child->ptrace;
+diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
+index 9a19328ff2514..5d405943823ec 100644
+--- a/kernel/rcu/rcu_segcblist.h
++++ b/kernel/rcu/rcu_segcblist.h
+@@ -56,13 +56,13 @@ static inline long rcu_segcblist_n_cbs(struct rcu_segcblist *rsclp)
+ static inline void rcu_segcblist_set_flags(struct rcu_segcblist *rsclp,
+ 					   int flags)
+ {
+-	rsclp->flags |= flags;
++	WRITE_ONCE(rsclp->flags, rsclp->flags | flags);
+ }
+ 
+ static inline void rcu_segcblist_clear_flags(struct rcu_segcblist *rsclp,
+ 					     int flags)
+ {
+-	rsclp->flags &= ~flags;
++	WRITE_ONCE(rsclp->flags, rsclp->flags & ~flags);
+ }
+ 
+ static inline bool rcu_segcblist_test_flags(struct rcu_segcblist *rsclp,
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 28fd0cef9b1fb..0e70fe0ab6909 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -91,7 +91,7 @@ static struct rcu_state rcu_state = {
+ 	.abbr = RCU_ABBR,
+ 	.exp_mutex = __MUTEX_INITIALIZER(rcu_state.exp_mutex),
+ 	.exp_wake_mutex = __MUTEX_INITIALIZER(rcu_state.exp_wake_mutex),
+-	.ofl_lock = __RAW_SPIN_LOCK_UNLOCKED(rcu_state.ofl_lock),
++	.ofl_lock = __ARCH_SPIN_LOCK_UNLOCKED,
+ };
+ 
+ /* Dump rcu_node combining tree at boot to verify correct setup. */
+@@ -1168,7 +1168,15 @@ bool rcu_lockdep_current_cpu_online(void)
+ 	preempt_disable_notrace();
+ 	rdp = this_cpu_ptr(&rcu_data);
+ 	rnp = rdp->mynode;
+-	if (rdp->grpmask & rcu_rnp_online_cpus(rnp) || READ_ONCE(rnp->ofl_seq) & 0x1)
++	/*
++	 * Strictly, we care here about the case where the current CPU is
++	 * in rcu_cpu_starting() and thus has an excuse for rdp->grpmask
++	 * not being up to date. So arch_spin_is_locked() might have a
++	 * false positive if it's held by some *other* CPU, but that's
++	 * OK because that just means a false *negative* on the warning.
++	 */
++	if (rdp->grpmask & rcu_rnp_online_cpus(rnp) ||
++	    arch_spin_is_locked(&rcu_state.ofl_lock))
+ 		ret = true;
+ 	preempt_enable_notrace();
+ 	return ret;
+@@ -1732,7 +1740,6 @@ static void rcu_strict_gp_boundary(void *unused)
+  */
+ static noinline_for_stack bool rcu_gp_init(void)
+ {
+-	unsigned long firstseq;
+ 	unsigned long flags;
+ 	unsigned long oldmask;
+ 	unsigned long mask;
+@@ -1775,22 +1782,17 @@ static noinline_for_stack bool rcu_gp_init(void)
+ 	 * of RCU's Requirements documentation.
+ 	 */
+ 	WRITE_ONCE(rcu_state.gp_state, RCU_GP_ONOFF);
++	/* Exclude CPU hotplug operations. */
+ 	rcu_for_each_leaf_node(rnp) {
+-		// Wait for CPU-hotplug operations that might have
+-		// started before this grace period did.
+-		smp_mb(); // Pair with barriers used when updating ->ofl_seq to odd values.
+-		firstseq = READ_ONCE(rnp->ofl_seq);
+-		if (firstseq & 0x1)
+-			while (firstseq == READ_ONCE(rnp->ofl_seq))
+-				schedule_timeout_idle(1);  // Can't wake unless RCU is watching.
+-		smp_mb(); // Pair with barriers used when updating ->ofl_seq to even values.
+-		raw_spin_lock(&rcu_state.ofl_lock);
+-		raw_spin_lock_irq_rcu_node(rnp);
++		local_irq_save(flags);
++		arch_spin_lock(&rcu_state.ofl_lock);
++		raw_spin_lock_rcu_node(rnp);
+ 		if (rnp->qsmaskinit == rnp->qsmaskinitnext &&
+ 		    !rnp->wait_blkd_tasks) {
+ 			/* Nothing to do on this leaf rcu_node structure. */
+-			raw_spin_unlock_irq_rcu_node(rnp);
+-			raw_spin_unlock(&rcu_state.ofl_lock);
++			raw_spin_unlock_rcu_node(rnp);
++			arch_spin_unlock(&rcu_state.ofl_lock);
++			local_irq_restore(flags);
+ 			continue;
+ 		}
+ 
+@@ -1825,8 +1827,9 @@ static noinline_for_stack bool rcu_gp_init(void)
+ 				rcu_cleanup_dead_rnp(rnp);
+ 		}
+ 
+-		raw_spin_unlock_irq_rcu_node(rnp);
+-		raw_spin_unlock(&rcu_state.ofl_lock);
++		raw_spin_unlock_rcu_node(rnp);
++		arch_spin_unlock(&rcu_state.ofl_lock);
++		local_irq_restore(flags);
+ 	}
+ 	rcu_gp_slow(gp_preinit_delay); /* Races with CPU hotplug. */
+ 
+@@ -4247,11 +4250,10 @@ void rcu_cpu_starting(unsigned int cpu)
+ 
+ 	rnp = rdp->mynode;
+ 	mask = rdp->grpmask;
+-	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+-	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
++	local_irq_save(flags);
++	arch_spin_lock(&rcu_state.ofl_lock);
+ 	rcu_dynticks_eqs_online();
+-	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+-	raw_spin_lock_irqsave_rcu_node(rnp, flags);
++	raw_spin_lock_rcu_node(rnp);
+ 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
+ 	newcpu = !(rnp->expmaskinitnext & mask);
+ 	rnp->expmaskinitnext |= mask;
+@@ -4264,15 +4266,18 @@ void rcu_cpu_starting(unsigned int cpu)
+ 
+ 	/* An incoming CPU should never be blocking a grace period. */
+ 	if (WARN_ON_ONCE(rnp->qsmask & mask)) { /* RCU waiting on incoming CPU? */
++		/* rcu_report_qs_rnp() *really* wants some flags to restore */
++		unsigned long flags2;
++
++		local_irq_save(flags2);
+ 		rcu_disable_urgency_upon_qs(rdp);
+ 		/* Report QS -after- changing ->qsmaskinitnext! */
+-		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
++		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags2);
+ 	} else {
+-		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
++		raw_spin_unlock_rcu_node(rnp);
+ 	}
+-	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+-	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+-	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
++	arch_spin_unlock(&rcu_state.ofl_lock);
++	local_irq_restore(flags);
+ 	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
+ }
+ 
+@@ -4286,7 +4291,7 @@ void rcu_cpu_starting(unsigned int cpu)
+  */
+ void rcu_report_dead(unsigned int cpu)
+ {
+-	unsigned long flags;
++	unsigned long flags, seq_flags;
+ 	unsigned long mask;
+ 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+ 	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
+@@ -4300,10 +4305,8 @@ void rcu_report_dead(unsigned int cpu)
+ 
+ 	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
+ 	mask = rdp->grpmask;
+-	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+-	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
+-	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+-	raw_spin_lock(&rcu_state.ofl_lock);
++	local_irq_save(seq_flags);
++	arch_spin_lock(&rcu_state.ofl_lock);
+ 	raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Enforce GP memory-order guarantee. */
+ 	rdp->rcu_ofl_gp_seq = READ_ONCE(rcu_state.gp_seq);
+ 	rdp->rcu_ofl_gp_flags = READ_ONCE(rcu_state.gp_flags);
+@@ -4314,10 +4317,8 @@ void rcu_report_dead(unsigned int cpu)
+ 	}
+ 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext & ~mask);
+ 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+-	raw_spin_unlock(&rcu_state.ofl_lock);
+-	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+-	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+-	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
++	arch_spin_unlock(&rcu_state.ofl_lock);
++	local_irq_restore(seq_flags);
+ 
+ 	rdp->cpu_started = false;
+ }
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index 305cf6aeb4086..aff4cc9303fb4 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -56,8 +56,6 @@ struct rcu_node {
+ 				/*  Initialized from ->qsmaskinitnext at the */
+ 				/*  beginning of each grace period. */
+ 	unsigned long qsmaskinitnext;
+-	unsigned long ofl_seq;	/* CPU-hotplug operation sequence count. */
+-				/* Online CPUs for next grace period. */
+ 	unsigned long expmask;	/* CPUs or groups that need to check in */
+ 				/*  to allow the current expedited GP */
+ 				/*  to complete. */
+@@ -358,7 +356,7 @@ struct rcu_state {
+ 	const char *name;			/* Name of structure. */
+ 	char abbr;				/* Abbreviated name. */
+ 
+-	raw_spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
++	arch_spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
+ 						/* Synchronize offline with */
+ 						/*  GP pre-initialization. */
+ };
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 5ad3eba619ba7..092a6154371bd 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -56,14 +56,6 @@ struct resource_constraint {
+ 
+ static DEFINE_RWLOCK(resource_lock);
+ 
+-/*
+- * For memory hotplug, there is no way to free resource entries allocated
+- * by boot mem after the system is up. So for reusing the resource entry
+- * we need to remember the resource.
+- */
+-static struct resource *bootmem_resource_free;
+-static DEFINE_SPINLOCK(bootmem_resource_lock);
+-
+ static struct resource *next_resource(struct resource *p)
+ {
+ 	if (p->child)
+@@ -160,36 +152,19 @@ __initcall(ioresources_init);
+ 
+ static void free_resource(struct resource *res)
+ {
+-	if (!res)
+-		return;
+-
+-	if (!PageSlab(virt_to_head_page(res))) {
+-		spin_lock(&bootmem_resource_lock);
+-		res->sibling = bootmem_resource_free;
+-		bootmem_resource_free = res;
+-		spin_unlock(&bootmem_resource_lock);
+-	} else {
++	/**
++	 * If the resource was allocated using memblock early during boot
++	 * we'll leak it here: we can only return full pages back to the
++	 * buddy and trying to be smart and reusing them eventually in
++	 * alloc_resource() overcomplicates resource handling.
++	 */
++	if (res && PageSlab(virt_to_head_page(res)))
+ 		kfree(res);
+-	}
+ }
+ 
+ static struct resource *alloc_resource(gfp_t flags)
+ {
+-	struct resource *res = NULL;
+-
+-	spin_lock(&bootmem_resource_lock);
+-	if (bootmem_resource_free) {
+-		res = bootmem_resource_free;
+-		bootmem_resource_free = res->sibling;
+-	}
+-	spin_unlock(&bootmem_resource_lock);
+-
+-	if (res)
+-		memset(res, 0, sizeof(struct resource));
+-	else
+-		res = kzalloc(sizeof(struct resource), flags);
+-
+-	return res;
++	return kzalloc(sizeof(struct resource), flags);
+ }
+ 
+ /* Return the conflict entry if you can't request it */
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index 6d45ac3dae7fb..97ac20b4f7387 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -128,10 +128,10 @@ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
+ 	int ret;
+ 
+ #ifdef CONFIG_64BIT
+-	if (get_user(ptr, &t->rseq->rseq_cs.ptr64))
++	if (get_user(ptr, &t->rseq->rseq_cs))
+ 		return -EFAULT;
+ #else
+-	if (copy_from_user(&ptr, &t->rseq->rseq_cs.ptr64, sizeof(ptr)))
++	if (copy_from_user(&ptr, &t->rseq->rseq_cs, sizeof(ptr)))
+ 		return -EFAULT;
+ #endif
+ 	if (!ptr) {
+@@ -217,9 +217,9 @@ static int clear_rseq_cs(struct task_struct *t)
+ 	 * Set rseq_cs to NULL.
+ 	 */
+ #ifdef CONFIG_64BIT
+-	return put_user(0UL, &t->rseq->rseq_cs.ptr64);
++	return put_user(0UL, &t->rseq->rseq_cs);
+ #else
+-	if (clear_user(&t->rseq->rseq_cs.ptr64, sizeof(t->rseq->rseq_cs.ptr64)))
++	if (clear_user(&t->rseq->rseq_cs, sizeof(t->rseq->rseq_cs)))
+ 		return -EFAULT;
+ 	return 0;
+ #endif
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 65082cab29069..0e4b8214af31c 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -36,6 +36,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_rt_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_dl_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_se_tp);
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_thermal_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_cpu_capacity_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_overutilized_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_util_est_cfs_tp);
+diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
+index ab67d97a84428..cacc2076ad214 100644
+--- a/kernel/sched/cpuacct.c
++++ b/kernel/sched/cpuacct.c
+@@ -328,12 +328,13 @@ static struct cftype files[] = {
+  */
+ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
+ {
++	unsigned int cpu = task_cpu(tsk);
+ 	struct cpuacct *ca;
+ 
+ 	rcu_read_lock();
+ 
+ 	for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
+-		__this_cpu_add(*ca->cpuusage, cputime);
++		*per_cpu_ptr(ca->cpuusage, cpu) += cputime;
+ 
+ 	rcu_read_unlock();
+ }
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index e7af18857371e..7f6bb37d3a2f7 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -289,6 +289,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time)
+ 	 * into the same scale so we can compare.
+ 	 */
+ 	boost = (sg_cpu->iowait_boost * sg_cpu->max) >> SCHED_CAPACITY_SHIFT;
++	boost = uclamp_rq_util_with(cpu_rq(sg_cpu->cpu), boost, NULL);
+ 	if (sg_cpu->util < boost)
+ 		sg_cpu->util = boost;
+ }
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index d2c072b0ef01f..62f0cf8422775 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2240,12 +2240,6 @@ static int push_dl_task(struct rq *rq)
+ 		return 0;
+ 
+ retry:
+-	if (is_migration_disabled(next_task))
+-		return 0;
+-
+-	if (WARN_ON(next_task == rq->curr))
+-		return 0;
+-
+ 	/*
+ 	 * If next_task preempts rq->curr, and rq->curr
+ 	 * can move away, it makes sense to just reschedule
+@@ -2258,6 +2252,12 @@ retry:
+ 		return 0;
+ 	}
+ 
++	if (is_migration_disabled(next_task))
++		return 0;
++
++	if (WARN_ON(next_task == rq->curr))
++		return 0;
++
+ 	/* We might release rq lock */
+ 	get_task_struct(next_task);
+ 
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 7dcbaa31c5d91..50e05c8d0d617 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -931,25 +931,15 @@ void print_numa_stats(struct seq_file *m, int node, unsigned long tsf,
+ static void sched_show_numa(struct task_struct *p, struct seq_file *m)
+ {
+ #ifdef CONFIG_NUMA_BALANCING
+-	struct mempolicy *pol;
+-
+ 	if (p->mm)
+ 		P(mm->numa_scan_seq);
+ 
+-	task_lock(p);
+-	pol = p->mempolicy;
+-	if (pol && !(pol->flags & MPOL_F_MORON))
+-		pol = NULL;
+-	mpol_get(pol);
+-	task_unlock(p);
+-
+ 	P(numa_pages_migrated);
+ 	P(numa_preferred_nid);
+ 	P(total_numa_faults);
+ 	SEQ_printf(m, "current_node=%d, numa_group_id=%d\n",
+ 			task_node(p), task_numa_group_id(p));
+ 	show_numa_stats(p, m);
+-	mpol_put(pol);
+ #endif
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 069e01772d922..9637766e220da 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -9062,9 +9062,10 @@ static bool update_pick_idlest(struct sched_group *idlest,
+  * This is an approximation as the number of running tasks may not be
+  * related to the number of busy CPUs due to sched_setaffinity.
+  */
+-static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
++static inline bool
++allow_numa_imbalance(unsigned int running, unsigned int weight)
+ {
+-	return (dst_running < (dst_weight >> 2));
++	return (running < (weight >> 2));
+ }
+ 
+ /*
+@@ -9198,12 +9199,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
+ 				return idlest;
+ #endif
+ 			/*
+-			 * Otherwise, keep the task on this node to stay close
+-			 * its wakeup source and improve locality. If there is
+-			 * a real need of migration, periodic load balance will
+-			 * take care of it.
++			 * Otherwise, keep the task close to the wakeup source
++			 * and improve locality if the number of running tasks
++			 * would remain below the threshold where an imbalance is
++			 * allowed. If there is a real need for migration,
++			 * periodic load balance will take care of it.
+ 			 */
+-			if (allow_numa_imbalance(local_sgs.sum_nr_running, sd->span_weight))
++			if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, local_sgs.group_weight))
+ 				return NULL;
+ 		}
+ 
+@@ -9409,7 +9411,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
+ 		/* Consider allowing a small imbalance between NUMA groups */
+ 		if (env->sd->flags & SD_NUMA) {
+ 			env->imbalance = adjust_numa_imbalance(env->imbalance,
+-				busiest->sum_nr_running, busiest->group_weight);
++				local->sum_nr_running + 1, local->group_weight);
+ 		}
+ 
+ 		return;
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 7b4f4fbbb4048..14f273c295183 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2026,6 +2026,16 @@ static int push_rt_task(struct rq *rq, bool pull)
+ 		return 0;
+ 
+ retry:
++	/*
++	 * It's possible that the next_task slipped in with a
++	 * higher priority than current. If that's the case
++	 * just reschedule current.
++	 */
++	if (unlikely(next_task->prio < rq->curr->prio)) {
++		resched_curr(rq);
++		return 0;
++	}
++
+ 	if (is_migration_disabled(next_task)) {
+ 		struct task_struct *push_task = NULL;
+ 		int cpu;
+@@ -2033,6 +2043,18 @@ retry:
+ 		if (!pull || rq->push_busy)
+ 			return 0;
+ 
++		/*
++		 * Invoking find_lowest_rq() on anything but an RT task doesn't
++		 * make sense. Per the above priority check, curr has to
++		 * be of higher priority than next_task, so no need to
++		 * reschedule when bailing out.
++		 *
++		 * Note that the stoppers are masqueraded as SCHED_FIFO
++		 * (cf. sched_set_stop_task()), so we can't rely on rt_task().
++		 */
++		if (rq->curr->sched_class != &rt_sched_class)
++			return 0;
++
+ 		cpu = find_lowest_rq(rq->curr);
+ 		if (cpu == -1 || cpu == rq->cpu)
+ 			return 0;
+@@ -2057,16 +2079,6 @@ retry:
+ 	if (WARN_ON(next_task == rq->curr))
+ 		return 0;
+ 
+-	/*
+-	 * It's possible that the next_task slipped in of
+-	 * higher priority than current. If that's the case
+-	 * just reschedule current.
+-	 */
+-	if (unlikely(next_task->prio < rq->curr->prio)) {
+-		resched_curr(rq);
+-		return 0;
+-	}
+-
+ 	/* We might release rq lock */
+ 	get_task_struct(next_task);
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 5816ad79cce85..1e6befe0d78c3 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3654,12 +3654,17 @@ static char *trace_iter_expand_format(struct trace_iterator *iter)
+ }
+ 
+ /* Returns true if the string is safe to dereference from an event */
+-static bool trace_safe_str(struct trace_iterator *iter, const char *str)
++static bool trace_safe_str(struct trace_iterator *iter, const char *str,
++			   bool star, int len)
+ {
+ 	unsigned long addr = (unsigned long)str;
+ 	struct trace_event *trace_event;
+ 	struct trace_event_call *event;
+ 
++	/* Ignore strings with no length */
++	if (star && !len)
++		return true;
++
+ 	/* OK if part of the event data */
+ 	if ((addr >= (unsigned long)iter->ent) &&
+ 	    (addr < (unsigned long)iter->ent + iter->ent_size))
+@@ -3845,7 +3850,7 @@ void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+ 		 * instead. See samples/trace_events/trace-events-sample.h
+ 		 * for reference.
+ 		 */
+-		if (WARN_ONCE(!trace_safe_str(iter, str),
++		if (WARN_ONCE(!trace_safe_str(iter, str, star, len),
+ 			      "fmt: '%s' current_buffer: '%s'",
+ 			      fmt, show_buffer(&iter->seq))) {
+ 			int ret;
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 92be9cb1d7d4b..99f4bae375ffb 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -40,6 +40,14 @@ static LIST_HEAD(ftrace_generic_fields);
+ static LIST_HEAD(ftrace_common_fields);
+ static bool eventdir_initialized;
+ 
++static LIST_HEAD(module_strings);
++
++struct module_string {
++	struct list_head	next;
++	struct module		*module;
++	char			*str;
++};
++
+ #define GFP_TRACE (GFP_KERNEL | __GFP_ZERO)
+ 
+ static struct kmem_cache *field_cachep;
+@@ -2633,6 +2641,76 @@ static void update_event_printk(struct trace_event_call *call,
+ 	}
+ }
+ 
++static void add_str_to_module(struct module *module, char *str)
++{
++	struct module_string *modstr;
++
++	modstr = kmalloc(sizeof(*modstr), GFP_KERNEL);
++
++	/*
++	 * If we failed to allocate memory here, then we'll just
++	 * let the str memory leak when the module is removed.
++	 * If this fails to allocate, there are worse problems than
++	 * a leaked string on module removal.
++	 */
++	if (WARN_ON_ONCE(!modstr))
++		return;
++
++	modstr->module = module;
++	modstr->str = str;
++
++	list_add(&modstr->next, &module_strings);
++}
++
++static void update_event_fields(struct trace_event_call *call,
++				struct trace_eval_map *map)
++{
++	struct ftrace_event_field *field;
++	struct list_head *head;
++	char *ptr;
++	char *str;
++	int len = strlen(map->eval_string);
++
++	/* Dynamic events should never have field maps */
++	if (WARN_ON_ONCE(call->flags & TRACE_EVENT_FL_DYNAMIC))
++		return;
++
++	head = trace_get_fields(call);
++	list_for_each_entry(field, head, link) {
++		ptr = strchr(field->type, '[');
++		if (!ptr)
++			continue;
++		ptr++;
++
++		if (!isalpha(*ptr) && *ptr != '_')
++			continue;
++
++		if (strncmp(map->eval_string, ptr, len) != 0)
++			continue;
++
++		str = kstrdup(field->type, GFP_KERNEL);
++		if (WARN_ON_ONCE(!str))
++			return;
++		ptr = str + (ptr - field->type);
++		ptr = eval_replace(ptr, map, len);
++		/* enum/sizeof string smaller than value */
++		if (WARN_ON_ONCE(!ptr)) {
++			kfree(str);
++			continue;
++		}
++
++		/*
++		 * If the event is part of a module, then we need to free the string
++		 * when the module is removed. Otherwise, it will stay allocated
++		 * until a reboot.
++		 */
++		if (call->module)
++			add_str_to_module(call->module, str);
++
++		field->type = str;
++	}
++}
++
+ void trace_event_eval_update(struct trace_eval_map **map, int len)
+ {
+ 	struct trace_event_call *call, *p;
+@@ -2668,6 +2746,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
+ 					first = false;
+ 				}
+ 				update_event_printk(call, map[i]);
++				update_event_fields(call, map[i]);
+ 			}
+ 		}
+ 	}
+@@ -2853,6 +2932,7 @@ static void trace_module_add_events(struct module *mod)
+ static void trace_module_remove_events(struct module *mod)
+ {
+ 	struct trace_event_call *call, *p;
++	struct module_string *modstr, *m;
+ 
+ 	down_write(&trace_event_sem);
+ 	list_for_each_entry_safe(call, p, &ftrace_events, list) {
+@@ -2861,6 +2941,14 @@ static void trace_module_remove_events(struct module *mod)
+ 		if (call->module == mod)
+ 			__trace_remove_event_call(call);
+ 	}
++	/* Check for any strings allocated for this module */
++	list_for_each_entry_safe(modstr, m, &module_strings, next) {
++		if (modstr->module != mod)
++			continue;
++		list_del(&modstr->next);
++		kfree(modstr->str);
++		kfree(modstr);
++	}
+ 	up_write(&trace_event_sem);
+ 
+ 	/*
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index 055bc20ecdda5..b1ae7c9c3b473 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -274,7 +274,7 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ 	return 0;
+ 
+ error_p:
+-	for (i = 0; i < nr_pages; i++)
++	while (--i >= 0)
+ 		__free_page(pages[i]);
+ 	kfree(pages);
+ error:
+@@ -373,6 +373,7 @@ static void __put_watch_queue(struct kref *kref)
+ 
+ 	for (i = 0; i < wqueue->nr_pages; i++)
+ 		__free_page(wqueue->notes[i]);
++	kfree(wqueue->notes);
+ 	bitmap_free(wqueue->notes_bitmap);
+ 
+ 	wfilter = rcu_access_pointer(wqueue->filter);
+@@ -398,6 +399,7 @@ static void free_watch(struct rcu_head *rcu)
+ 	put_watch_queue(rcu_access_pointer(watch->queue));
+ 	atomic_dec(&watch->cred->user->nr_watches);
+ 	put_cred(watch->cred);
++	kfree(watch);
+ }
+ 
+ static void __put_watch(struct kref *kref)
+diff --git a/lib/kunit/try-catch.c b/lib/kunit/try-catch.c
+index 0dd434e40487c..71e5c58530996 100644
+--- a/lib/kunit/try-catch.c
++++ b/lib/kunit/try-catch.c
+@@ -52,7 +52,7 @@ static unsigned long kunit_test_timeout(void)
+ 	 * If tests timeout due to exceeding sysctl_hung_task_timeout_secs,
+ 	 * the task will be killed and an oops generated.
+ 	 */
+-	return 300 * MSEC_PER_SEC; /* 5 min */
++	return 300 * msecs_to_jiffies(MSEC_PER_SEC); /* 5 min */
+ }
+ 
+ void kunit_try_catch_run(struct kunit_try_catch *try_catch, void *context)
+diff --git a/lib/raid6/test/Makefile b/lib/raid6/test/Makefile
+index a4c7cd74cff58..4fb7700a741bd 100644
+--- a/lib/raid6/test/Makefile
++++ b/lib/raid6/test/Makefile
+@@ -4,6 +4,8 @@
+ # from userspace.
+ #
+ 
++pound := \#
++
+ CC	 = gcc
+ OPTFLAGS = -O2			# Adjust as desired
+ CFLAGS	 = -I.. -I ../../../include -g $(OPTFLAGS)
+@@ -42,7 +44,7 @@ else ifeq ($(HAS_NEON),yes)
+         OBJS   += neon.o neon1.o neon2.o neon4.o neon8.o recov_neon.o recov_neon_inner.o
+         CFLAGS += -DCONFIG_KERNEL_MODE_NEON=1
+ else
+-        HAS_ALTIVEC := $(shell printf '\#include <altivec.h>\nvector int a;\n' |\
++        HAS_ALTIVEC := $(shell printf '$(pound)include <altivec.h>\nvector int a;\n' |\
+                          gcc -c -x c - >/dev/null && rm ./-.o && echo yes)
+         ifeq ($(HAS_ALTIVEC),yes)
+                 CFLAGS += -I../../../arch/powerpc/include
+diff --git a/lib/raid6/test/test.c b/lib/raid6/test/test.c
+index a3cf071941ab4..841a55242abaa 100644
+--- a/lib/raid6/test/test.c
++++ b/lib/raid6/test/test.c
+@@ -19,7 +19,6 @@
+ #define NDISKS		16	/* Including P and Q */
+ 
+ const char raid6_empty_zero_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+-struct raid6_calls raid6_call;
+ 
+ char *dataptrs[NDISKS];
+ char data[NDISKS][PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+diff --git a/lib/test_kmod.c b/lib/test_kmod.c
+index ce15893914131..cb800b1d0d99c 100644
+--- a/lib/test_kmod.c
++++ b/lib/test_kmod.c
+@@ -1149,6 +1149,7 @@ static struct kmod_test_device *register_test_dev_kmod(void)
+ 	if (ret) {
+ 		pr_err("could not register misc device: %d\n", ret);
+ 		free_test_dev_kmod(test_dev);
++		test_dev = NULL;
+ 		goto out;
+ 	}
+ 
+diff --git a/lib/test_lockup.c b/lib/test_lockup.c
+index 906b598740a7b..c3fd87d6c2dd0 100644
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -417,9 +417,14 @@ static bool test_kernel_ptr(unsigned long addr, int size)
+ 		return false;
+ 
+ 	/* should be at least readable kernel address */
+-	if (access_ok(ptr, 1) ||
+-	    access_ok(ptr + size - 1, 1) ||
+-	    get_kernel_nofault(buf, ptr) ||
++	if (!IS_ENABLED(CONFIG_ALTERNATE_USER_ADDRESS_SPACE) &&
++	    (access_ok((void __user *)ptr, 1) ||
++	     access_ok((void __user *)ptr + size - 1, 1))) {
++		pr_err("user space ptr invalid in kernel: %#lx\n", addr);
++		return true;
++	}
++
++	if (get_kernel_nofault(buf, ptr) ||
+ 	    get_kernel_nofault(buf, ptr + size - 1)) {
+ 		pr_err("invalid kernel ptr: %#lx\n", addr);
+ 		return true;
+diff --git a/lib/test_xarray.c b/lib/test_xarray.c
+index 8b1c318189ce8..e77d4856442c3 100644
+--- a/lib/test_xarray.c
++++ b/lib/test_xarray.c
+@@ -1463,6 +1463,25 @@ unlock:
+ 	XA_BUG_ON(xa, !xa_empty(xa));
+ }
+ 
++static noinline void check_create_range_5(struct xarray *xa,
++		unsigned long index, unsigned int order)
++{
++	XA_STATE_ORDER(xas, xa, index, order);
++	unsigned int i;
++
++	xa_store_order(xa, index, order, xa_mk_index(index), GFP_KERNEL);
++
++	for (i = 0; i < order + 10; i++) {
++		do {
++			xas_lock(&xas);
++			xas_create_range(&xas);
++			xas_unlock(&xas);
++		} while (xas_nomem(&xas, GFP_KERNEL));
++	}
++
++	xa_destroy(xa);
++}
++
+ static noinline void check_create_range(struct xarray *xa)
+ {
+ 	unsigned int order;
+@@ -1490,6 +1509,9 @@ static noinline void check_create_range(struct xarray *xa)
+ 		check_create_range_4(xa, (3U << order) + 1, order);
+ 		check_create_range_4(xa, (3U << order) - 1, order);
+ 		check_create_range_4(xa, (1U << 24) + 1, order);
++
++		check_create_range_5(xa, 0, order);
++		check_create_range_5(xa, (1U << order), order);
+ 	}
+ 
+ 	check_create_range_3();
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 58d5e567f8368..937dd643008df 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -49,10 +49,15 @@
+ 
+ #include <asm/page.h>		/* for PAGE_SIZE */
+ #include <asm/byteorder.h>	/* cpu_to_le16 */
++#include <asm/unaligned.h>
+ 
+ #include <linux/string_helpers.h>
+ #include "kstrtox.h"
+ 
++/* Disable pointer hashing if requested */
++bool no_hash_pointers __ro_after_init;
++EXPORT_SYMBOL_GPL(no_hash_pointers);
++
+ static noinline unsigned long long simple_strntoull(const char *startp, size_t max_chars, char **endp, unsigned int base)
+ {
+ 	const char *cp;
+@@ -848,6 +853,19 @@ static char *ptr_to_id(char *buf, char *end, const void *ptr,
+ 	return pointer_string(buf, end, (const void *)hashval, spec);
+ }
+ 
++static char *default_pointer(char *buf, char *end, const void *ptr,
++			     struct printf_spec spec)
++{
++	/*
++	 * default is to _not_ leak addresses, so hash before printing,
++	 * unless no_hash_pointers is specified on the command line.
++	 */
++	if (unlikely(no_hash_pointers))
++		return pointer_string(buf, end, ptr, spec);
++
++	return ptr_to_id(buf, end, ptr, spec);
++}
++
+ int kptr_restrict __read_mostly;
+ 
+ static noinline_for_stack
+@@ -857,7 +875,7 @@ char *restricted_pointer(char *buf, char *end, const void *ptr,
+ 	switch (kptr_restrict) {
+ 	case 0:
+ 		/* Handle as %p, hash and do _not_ leak addresses. */
+-		return ptr_to_id(buf, end, ptr, spec);
++		return default_pointer(buf, end, ptr, spec);
+ 	case 1: {
+ 		const struct cred *cred;
+ 
+@@ -1771,7 +1789,7 @@ char *fourcc_string(char *buf, char *end, const u32 *fourcc,
+ 	char output[sizeof("0123 little-endian (0x01234567)")];
+ 	char *p = output;
+ 	unsigned int i;
+-	u32 val;
++	u32 orig, val;
+ 
+ 	if (fmt[1] != 'c' || fmt[2] != 'c')
+ 		return error_string(buf, end, "(%p4?)", spec);
+@@ -1779,21 +1797,22 @@ char *fourcc_string(char *buf, char *end, const u32 *fourcc,
+ 	if (check_pointer(&buf, end, fourcc, spec))
+ 		return buf;
+ 
+-	val = *fourcc & ~BIT(31);
++	orig = get_unaligned(fourcc);
++	val = orig & ~BIT(31);
+ 
+-	for (i = 0; i < sizeof(*fourcc); i++) {
++	for (i = 0; i < sizeof(u32); i++) {
+ 		unsigned char c = val >> (i * 8);
+ 
+ 		/* Print non-control ASCII characters as-is, dot otherwise */
+ 		*p++ = isascii(c) && isprint(c) ? c : '.';
+ 	}
+ 
+-	strcpy(p, *fourcc & BIT(31) ? " big-endian" : " little-endian");
++	strcpy(p, orig & BIT(31) ? " big-endian" : " little-endian");
+ 	p += strlen(p);
+ 
+ 	*p++ = ' ';
+ 	*p++ = '(';
+-	p = special_hex_number(p, output + sizeof(output) - 2, *fourcc, sizeof(u32));
++	p = special_hex_number(p, output + sizeof(output) - 2, orig, sizeof(u32));
+ 	*p++ = ')';
+ 	*p = '\0';
+ 
+@@ -2233,10 +2252,6 @@ char *fwnode_string(char *buf, char *end, struct fwnode_handle *fwnode,
+ 	return widen_string(buf, buf - buf_start, end, spec);
+ }
+ 
+-/* Disable pointer hashing if requested */
+-bool no_hash_pointers __ro_after_init;
+-EXPORT_SYMBOL_GPL(no_hash_pointers);
+-
+ int __init no_hash_pointers_enable(char *str)
+ {
+ 	if (no_hash_pointers)
+@@ -2465,7 +2480,7 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ 	case 'e':
+ 		/* %pe with a non-ERR_PTR gets treated as plain %p */
+ 		if (!IS_ERR(ptr))
+-			break;
++			return default_pointer(buf, end, ptr, spec);
+ 		return err_ptr(buf, end, ptr, spec);
+ 	case 'u':
+ 	case 'k':
+@@ -2475,16 +2490,9 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ 		default:
+ 			return error_string(buf, end, "(einval)", spec);
+ 		}
++	default:
++		return default_pointer(buf, end, ptr, spec);
+ 	}
+-
+-	/*
+-	 * default is to _not_ leak addresses, so hash before printing,
+-	 * unless no_hash_pointers is specified on the command line.
+-	 */
+-	if (unlikely(no_hash_pointers))
+-		return pointer_string(buf, end, ptr, spec);
+-	else
+-		return ptr_to_id(buf, end, ptr, spec);
+ }
+ 
+ /*
+diff --git a/lib/xarray.c b/lib/xarray.c
+index f5d8f54907b4f..96e2d7748e5aa 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -722,6 +722,8 @@ void xas_create_range(struct xa_state *xas)
+ 
+ 		for (;;) {
+ 			struct xa_node *node = xas->xa_node;
++			if (node->shift >= shift)
++				break;
+ 			xas->xa_node = xa_parent_locked(xas->xa, node);
+ 			xas->xa_offset = node->offset - 1;
+ 			if (node->offset != 0)
+@@ -1079,6 +1081,7 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
+ 					xa_mk_node(child));
+ 			if (xa_is_value(curr))
+ 				values--;
++			xas_update(xas, child);
+ 		} else {
+ 			unsigned int canon = offset - xas->xa_sibs;
+ 
+@@ -1093,6 +1096,7 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
+ 	} while (offset-- > xas->xa_offset);
+ 
+ 	node->nr_values += values;
++	xas_update(xas, node);
+ }
+ EXPORT_SYMBOL_GPL(xas_split);
+ #endif
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index adbe5aa011848..b78861b8e0139 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -789,6 +789,8 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ 	unsigned long flags;
+ 	struct kmemleak_object *object;
+ 	struct kmemleak_scan_area *area = NULL;
++	unsigned long untagged_ptr;
++	unsigned long untagged_objp;
+ 
+ 	object = find_and_get_object(ptr, 1);
+ 	if (!object) {
+@@ -797,6 +799,9 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ 		return;
+ 	}
+ 
++	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
++	untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
++
+ 	if (scan_area_cache)
+ 		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
+ 
+@@ -808,8 +813,8 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ 		goto out_unlock;
+ 	}
+ 	if (size == SIZE_MAX) {
+-		size = object->pointer + object->size - ptr;
+-	} else if (ptr + size > object->pointer + object->size) {
++		size = untagged_objp + object->size - untagged_ptr;
++	} else if (untagged_ptr + size > untagged_objp + object->size) {
+ 		kmemleak_warn("Scan area larger than object 0x%08lx\n", ptr);
+ 		dump_object_info(object);
+ 		kmem_cache_free(scan_area_cache, area);
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 8c927202bbe61..7a2679a7307fe 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -1287,8 +1287,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
+ 		iov_iter_advance(&iter, iovec.iov_len);
+ 	}
+ 
+-	if (ret == 0)
+-		ret = total_len - iov_iter_count(&iter);
++	ret = (total_len - iov_iter_count(&iter)) ? : ret;
+ 
+ release_mm:
+ 	mmput(mm);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index e57b14f966151..317035d97c0cc 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -7038,7 +7038,7 @@ static int __init cgroup_memory(char *s)
+ 		if (!strcmp(token, "nokmem"))
+ 			cgroup_memory_nokmem = true;
+ 	}
+-	return 0;
++	return 1;
+ }
+ __setup("cgroup.memory=", cgroup_memory);
+ 
+diff --git a/mm/memory.c b/mm/memory.c
+index 8f1de811a1dcb..10bf23234fccc 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1304,6 +1304,35 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ 	return ret;
+ }
+ 
++/* Whether we should zap all COWed (private) pages too */
++static inline bool should_zap_cows(struct zap_details *details)
++{
++	/* By default, zap all pages */
++	if (!details)
++		return true;
++
++	/* Or, we zap COWed pages only if the caller wants to */
++	return !details->zap_mapping;
++}
++
++/*
++ * We set details->zap_mapping when we want to unmap shared but keep private
++ * pages. Return true if we should skip zapping this page, false otherwise.
++ */
++static inline bool
++zap_skip_check_mapping(struct zap_details *details, struct page *page)
++{
++	/* If we can make a decision without *page.. */
++	if (should_zap_cows(details))
++		return false;
++
++	/* E.g. the caller passes NULL for the case of a zero page */
++	if (!page)
++		return false;
++
++	return details->zap_mapping != page_rmapping(page);
++}
++
+ static unsigned long zap_pte_range(struct mmu_gather *tlb,
+ 				struct vm_area_struct *vma, pmd_t *pmd,
+ 				unsigned long addr, unsigned long end,
+@@ -1382,17 +1411,24 @@ again:
+ 			continue;
+ 		}
+ 
+-		/* If details->check_mapping, we leave swap entries. */
+-		if (unlikely(details))
+-			continue;
+-
+-		if (!non_swap_entry(entry))
++		if (!non_swap_entry(entry)) {
++			/* Genuine swap entry, hence a private anon page */
++			if (!should_zap_cows(details))
++				continue;
+ 			rss[MM_SWAPENTS]--;
+-		else if (is_migration_entry(entry)) {
++		} else if (is_migration_entry(entry)) {
+ 			struct page *page;
+ 
+ 			page = pfn_swap_entry_to_page(entry);
++			if (zap_skip_check_mapping(details, page))
++				continue;
+ 			rss[mm_counter(page)]--;
++		} else if (is_hwpoison_entry(entry)) {
++			if (!should_zap_cows(details))
++				continue;
++		} else {
++			/* We should have covered all the swap entry types */
++			WARN_ON_ONCE(1);
+ 		}
+ 		if (unlikely(!free_swap_and_cache(entry)))
+ 			print_bad_pte(vma, addr, ptent, NULL);
+@@ -3852,11 +3888,20 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
+ 		return ret;
+ 
+ 	if (unlikely(PageHWPoison(vmf->page))) {
+-		if (ret & VM_FAULT_LOCKED)
+-			unlock_page(vmf->page);
+-		put_page(vmf->page);
++		struct page *page = vmf->page;
++		vm_fault_t poisonret = VM_FAULT_HWPOISON;
++		if (ret & VM_FAULT_LOCKED) {
++			if (page_mapped(page))
++				unmap_mapping_pages(page_mapping(page),
++						    page->index, 1, false);
++			/* Retry if a clean page was removed from the cache. */
++			if (invalidate_inode_page(page))
++				poisonret = VM_FAULT_NOPAGE;
++			unlock_page(page);
++		}
++		put_page(page);
+ 		vmf->page = NULL;
+-		return VM_FAULT_HWPOISON;
++		return poisonret;
+ 	}
+ 
+ 	if (unlikely(!(ret & VM_FAULT_LOCKED)))
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index f6248affaf38c..dd5daddb6257a 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -783,7 +783,6 @@ static int vma_replace_policy(struct vm_area_struct *vma,
+ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ 		       unsigned long end, struct mempolicy *new_pol)
+ {
+-	struct vm_area_struct *next;
+ 	struct vm_area_struct *prev;
+ 	struct vm_area_struct *vma;
+ 	int err = 0;
+@@ -798,8 +797,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ 	if (start > vma->vm_start)
+ 		prev = vma;
+ 
+-	for (; vma && vma->vm_start < end; prev = vma, vma = next) {
+-		next = vma->vm_next;
++	for (; vma && vma->vm_start < end; prev = vma, vma = vma->vm_next) {
+ 		vmstart = max(start, vma->vm_start);
+ 		vmend   = min(end, vma->vm_end);
+ 
+@@ -813,10 +811,6 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ 				 new_pol, vma->vm_userfaultfd_ctx);
+ 		if (prev) {
+ 			vma = prev;
+-			next = vma->vm_next;
+-			if (mpol_equal(vma_policy(vma), new_pol))
+-				continue;
+-			/* vma_merge() joined vma && vma->next, case 8 */
+ 			goto replace;
+ 		}
+ 		if (vma->vm_start != vmstart) {
+diff --git a/mm/mlock.c b/mm/mlock.c
+index e263d62ae2d09..c79d09f5e5064 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -827,13 +827,12 @@ int user_shm_lock(size_t size, struct ucounts *ucounts)
+ 
+ 	locked = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 	lock_limit = rlimit(RLIMIT_MEMLOCK);
+-	if (lock_limit == RLIM_INFINITY)
+-		allowed = 1;
+-	lock_limit >>= PAGE_SHIFT;
++	if (lock_limit != RLIM_INFINITY)
++		lock_limit >>= PAGE_SHIFT;
+ 	spin_lock(&shmlock_user_lock);
+ 	memlock = inc_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_MEMLOCK, locked);
+ 
+-	if (!allowed && (memlock == LONG_MAX || memlock > lock_limit) && !capable(CAP_IPC_LOCK)) {
++	if ((memlock == LONG_MAX || memlock > lock_limit) && !capable(CAP_IPC_LOCK)) {
+ 		dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_MEMLOCK, locked);
+ 		goto out;
+ 	}
+diff --git a/mm/mmap.c b/mm/mmap.c
+index bfb0ea164a90a..22f2469bb2458 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2550,7 +2550,7 @@ static int __init cmdline_parse_stack_guard_gap(char *p)
+ 	if (!*endptr)
+ 		stack_guard_gap = val << PAGE_SHIFT;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index d9492eaa47138..18365f284d8c6 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7910,10 +7910,17 @@ restart:
+ 
+ out2:
+ 	/* Align start of ZONE_MOVABLE on all nids to MAX_ORDER_NR_PAGES */
+-	for (nid = 0; nid < MAX_NUMNODES; nid++)
++	for (nid = 0; nid < MAX_NUMNODES; nid++) {
++		unsigned long start_pfn, end_pfn;
++
+ 		zone_movable_pfn[nid] =
+ 			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
+ 
++		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
++		if (zone_movable_pfn[nid] >= end_pfn)
++			zone_movable_pfn[nid] = 0;
++	}
++
+ out:
+ 	/* restore the node_state */
+ 	node_states[N_MEMORY] = saved_node_state;
+diff --git a/mm/slab.c b/mm/slab.c
+index ca4822f6b2b6b..6c2013c214a79 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -3429,6 +3429,7 @@ static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
+ 
+ 	if (is_kfence_address(objp)) {
+ 		kmemleak_free_recursive(objp, cachep->flags);
++		memcg_slab_free_hook(cachep, &objp, 1);
+ 		__kfence_free(objp);
+ 		return;
+ 	}
+diff --git a/mm/usercopy.c b/mm/usercopy.c
+index b3de3c4eefba7..540968b481e7e 100644
+--- a/mm/usercopy.c
++++ b/mm/usercopy.c
+@@ -294,7 +294,10 @@ static bool enable_checks __initdata = true;
+ 
+ static int __init parse_hardened_usercopy(char *str)
+ {
+-	return strtobool(str, &enable_checks);
++	if (strtobool(str, &enable_checks))
++		pr_warn("Invalid option string for hardened_usercopy: '%s'\n",
++			str);
++	return 1;
+ }
+ 
+ __setup("hardened_usercopy=", parse_hardened_usercopy);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index bd669c95b9a7e..55900496ac33b 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -669,7 +669,9 @@ static void le_conn_timeout(struct work_struct *work)
+ 	if (conn->role == HCI_ROLE_SLAVE) {
+ 		/* Disable LE Advertising */
+ 		le_disable_advertising(hdev);
++		hci_dev_lock(hdev);
+ 		hci_le_conn_failed(conn, HCI_ERROR_ADVERTISING_TIMEOUT);
++		hci_dev_unlock(hdev);
+ 		return;
+ 	}
+ 
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index d2a430b6a13bd..a95d171b3a64b 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1005,26 +1005,29 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct sk_buff *skb;
+-	int err = 0;
+-	int noblock;
++	struct isotp_sock *so = isotp_sk(sk);
++	int noblock = flags & MSG_DONTWAIT;
++	int ret = 0;
+ 
+-	noblock = flags & MSG_DONTWAIT;
+-	flags &= ~MSG_DONTWAIT;
++	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK))
++		return -EINVAL;
+ 
+-	skb = skb_recv_datagram(sk, flags, noblock, &err);
++	if (!so->bound)
++		return -EADDRNOTAVAIL;
++
++	flags &= ~MSG_DONTWAIT;
++	skb = skb_recv_datagram(sk, flags, noblock, &ret);
+ 	if (!skb)
+-		return err;
++		return ret;
+ 
+ 	if (size < skb->len)
+ 		msg->msg_flags |= MSG_TRUNC;
+ 	else
+ 		size = skb->len;
+ 
+-	err = memcpy_to_msg(msg, skb->data, size);
+-	if (err < 0) {
+-		skb_free_datagram(sk, skb);
+-		return err;
+-	}
++	ret = memcpy_to_msg(msg, skb->data, size);
++	if (ret < 0)
++		goto out_err;
+ 
+ 	sock_recv_timestamp(msg, sk, skb);
+ 
+@@ -1034,9 +1037,13 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
+ 	}
+ 
++	/* set length of return value */
++	ret = (flags & MSG_TRUNC) ? skb->len : size;
++
++out_err:
+ 	skb_free_datagram(sk, skb);
+ 
+-	return size;
++	return ret;
+ }
+ 
+ static int isotp_release(struct socket *sock)
+@@ -1104,6 +1111,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	struct net *net = sock_net(sk);
+ 	int ifindex;
+ 	struct net_device *dev;
++	canid_t tx_id, rx_id;
+ 	int err = 0;
+ 	int notify_enetdown = 0;
+ 	int do_rx_reg = 1;
+@@ -1111,8 +1119,18 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	if (len < ISOTP_MIN_NAMELEN)
+ 		return -EINVAL;
+ 
+-	if (addr->can_addr.tp.tx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
+-		return -EADDRNOTAVAIL;
++	/* sanitize tx/rx CAN identifiers */
++	tx_id = addr->can_addr.tp.tx_id;
++	if (tx_id & CAN_EFF_FLAG)
++		tx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
++	else
++		tx_id &= CAN_SFF_MASK;
++
++	rx_id = addr->can_addr.tp.rx_id;
++	if (rx_id & CAN_EFF_FLAG)
++		rx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
++	else
++		rx_id &= CAN_SFF_MASK;
+ 
+ 	if (!addr->can_ifindex)
+ 		return -ENODEV;
+@@ -1124,21 +1142,13 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 		do_rx_reg = 0;
+ 
+ 	/* do not validate rx address for functional addressing */
+-	if (do_rx_reg) {
+-		if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id) {
+-			err = -EADDRNOTAVAIL;
+-			goto out;
+-		}
+-
+-		if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG)) {
+-			err = -EADDRNOTAVAIL;
+-			goto out;
+-		}
++	if (do_rx_reg && rx_id == tx_id) {
++		err = -EADDRNOTAVAIL;
++		goto out;
+ 	}
+ 
+ 	if (so->bound && addr->can_ifindex == so->ifindex &&
+-	    addr->can_addr.tp.rx_id == so->rxid &&
+-	    addr->can_addr.tp.tx_id == so->txid)
++	    rx_id == so->rxid && tx_id == so->txid)
+ 		goto out;
+ 
+ 	dev = dev_get_by_index(net, addr->can_ifindex);
+@@ -1162,8 +1172,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	ifindex = dev->ifindex;
+ 
+ 	if (do_rx_reg)
+-		can_rx_register(net, dev, addr->can_addr.tp.rx_id,
+-				SINGLE_MASK(addr->can_addr.tp.rx_id),
++		can_rx_register(net, dev, rx_id, SINGLE_MASK(rx_id),
+ 				isotp_rcv, sk, "isotp", sk);
+ 
+ 	dev_put(dev);
+@@ -1183,8 +1192,8 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 
+ 	/* switch to new settings */
+ 	so->ifindex = ifindex;
+-	so->rxid = addr->can_addr.tp.rx_id;
+-	so->txid = addr->can_addr.tp.tx_id;
++	so->rxid = rx_id;
++	so->txid = tx_id;
+ 	so->bound = 1;
+ 
+ out:
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index f1e3d70e89870..fdd8041206001 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -201,7 +201,7 @@ static void __build_skb_around(struct sk_buff *skb, void *data,
+ 	skb->head = data;
+ 	skb->data = data;
+ 	skb_reset_tail_pointer(skb);
+-	skb->end = skb->tail + size;
++	skb_set_end_offset(skb, size);
+ 	skb->mac_header = (typeof(skb->mac_header))~0U;
+ 	skb->transport_header = (typeof(skb->transport_header))~0U;
+ 
+@@ -1738,11 +1738,10 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
+ 	skb->head     = data;
+ 	skb->head_frag = 0;
+ 	skb->data    += off;
++
++	skb_set_end_offset(skb, size);
+ #ifdef NET_SKBUFF_DATA_USES_OFFSET
+-	skb->end      = size;
+ 	off           = nhead;
+-#else
+-	skb->end      = skb->head + size;
+ #endif
+ 	skb->tail	      += off;
+ 	skb_headers_offset_update(skb, nhead);
+@@ -1790,6 +1789,38 @@ struct sk_buff *skb_realloc_headroom(struct sk_buff *skb, unsigned int headroom)
+ }
+ EXPORT_SYMBOL(skb_realloc_headroom);
+ 
++int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
++{
++	unsigned int saved_end_offset, saved_truesize;
++	struct skb_shared_info *shinfo;
++	int res;
++
++	saved_end_offset = skb_end_offset(skb);
++	saved_truesize = skb->truesize;
++
++	res = pskb_expand_head(skb, 0, 0, pri);
++	if (res)
++		return res;
++
++	skb->truesize = saved_truesize;
++
++	if (likely(skb_end_offset(skb) == saved_end_offset))
++		return 0;
++
++	shinfo = skb_shinfo(skb);
++
++	/* We are about to change back skb->end,
++	 * we need to move skb_shinfo() to its new location.
++	 */
++	memmove(skb->head + saved_end_offset,
++		shinfo,
++		offsetof(struct skb_shared_info, frags[shinfo->nr_frags]));
++
++	skb_set_end_offset(skb, saved_end_offset);
++
++	return 0;
++}
++
+ /**
+  *	skb_expand_head - reallocate header of &sk_buff
+  *	@skb: buffer to reallocate
+@@ -6138,11 +6169,7 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
+ 	skb->head = data;
+ 	skb->data = data;
+ 	skb->head_frag = 0;
+-#ifdef NET_SKBUFF_DATA_USES_OFFSET
+-	skb->end = size;
+-#else
+-	skb->end = skb->head + size;
+-#endif
++	skb_set_end_offset(skb, size);
+ 	skb_set_tail_pointer(skb, skb_headlen(skb));
+ 	skb_headers_offset_update(skb, 0);
+ 	skb->cloned = 0;
+@@ -6280,11 +6307,7 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
+ 	skb->head = data;
+ 	skb->head_frag = 0;
+ 	skb->data = data;
+-#ifdef NET_SKBUFF_DATA_USES_OFFSET
+-	skb->end = size;
+-#else
+-	skb->end = skb->head + size;
+-#endif
++	skb_set_end_offset(skb, size);
+ 	skb_reset_tail_pointer(skb);
+ 	skb_headers_offset_update(skb, 0);
+ 	skb->cloned   = 0;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 929a2b096b04e..cc381165ea080 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -27,6 +27,7 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ 		 int elem_first_coalesce)
+ {
+ 	struct page_frag *pfrag = sk_page_frag(sk);
++	u32 osize = msg->sg.size;
+ 	int ret = 0;
+ 
+ 	len -= msg->sg.size;
+@@ -35,13 +36,17 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ 		u32 orig_offset;
+ 		int use, i;
+ 
+-		if (!sk_page_frag_refill(sk, pfrag))
+-			return -ENOMEM;
++		if (!sk_page_frag_refill(sk, pfrag)) {
++			ret = -ENOMEM;
++			goto msg_trim;
++		}
+ 
+ 		orig_offset = pfrag->offset;
+ 		use = min_t(int, len, pfrag->size - orig_offset);
+-		if (!sk_wmem_schedule(sk, use))
+-			return -ENOMEM;
++		if (!sk_wmem_schedule(sk, use)) {
++			ret = -ENOMEM;
++			goto msg_trim;
++		}
+ 
+ 		i = msg->sg.end;
+ 		sk_msg_iter_var_prev(i);
+@@ -71,6 +76,10 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ 	}
+ 
+ 	return ret;
++
++msg_trim:
++	sk_msg_trim(sk, msg, osize);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(sk_msg_alloc);
+ 
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index e64ef196a984a..ff29a5a3e4c1f 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -1613,6 +1613,10 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
+ 	struct dsa_port *dp;
+ 
+ 	mutex_lock(&dsa2_mutex);
++
++	if (!ds->setup)
++		goto out;
++
+ 	rtnl_lock();
+ 
+ 	dsa_switch_for_each_user_port(dp, ds) {
+@@ -1629,6 +1633,7 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
+ 		dp->master->dsa_ptr = NULL;
+ 
+ 	rtnl_unlock();
++out:
+ 	mutex_unlock(&dsa2_mutex);
+ }
+ EXPORT_SYMBOL_GPL(dsa_switch_shutdown);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 2c30c599cc161..750e229b7c491 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -498,6 +498,15 @@ void __ip_select_ident(struct net *net, struct iphdr *iph, int segs)
+ }
+ EXPORT_SYMBOL(__ip_select_ident);
+ 
++static void ip_rt_fix_tos(struct flowi4 *fl4)
++{
++	__u8 tos = RT_FL_TOS(fl4);
++
++	fl4->flowi4_tos = tos & IPTOS_RT_MASK;
++	fl4->flowi4_scope = tos & RTO_ONLINK ?
++			    RT_SCOPE_LINK : RT_SCOPE_UNIVERSE;
++}
++
+ static void __build_flow_key(const struct net *net, struct flowi4 *fl4,
+ 			     const struct sock *sk,
+ 			     const struct iphdr *iph,
+@@ -823,6 +832,7 @@ static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_buf
+ 	rt = (struct rtable *) dst;
+ 
+ 	__build_flow_key(net, &fl4, sk, iph, oif, tos, prot, mark, 0);
++	ip_rt_fix_tos(&fl4);
+ 	__ip_do_redirect(rt, skb, &fl4, true);
+ }
+ 
+@@ -1047,6 +1057,7 @@ static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
+ 	struct flowi4 fl4;
+ 
+ 	ip_rt_build_flow_key(&fl4, sk, skb);
++	ip_rt_fix_tos(&fl4);
+ 
+ 	/* Don't make lookup fail for bridged encapsulations */
+ 	if (skb && netif_is_any_bridge_port(skb->dev))
+@@ -1121,6 +1132,8 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+ 			goto out;
+ 
+ 		new = true;
++	} else {
++		ip_rt_fix_tos(&fl4);
+ 	}
+ 
+ 	__ip_rt_update_pmtu((struct rtable *)xfrm_dst_path(&rt->dst), &fl4, mtu);
+@@ -2601,7 +2614,6 @@ add:
+ struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
+ 					const struct sk_buff *skb)
+ {
+-	__u8 tos = RT_FL_TOS(fl4);
+ 	struct fib_result res = {
+ 		.type		= RTN_UNSPEC,
+ 		.fi		= NULL,
+@@ -2611,9 +2623,7 @@ struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
+ 	struct rtable *rth;
+ 
+ 	fl4->flowi4_iif = LOOPBACK_IFINDEX;
+-	fl4->flowi4_tos = tos & IPTOS_RT_MASK;
+-	fl4->flowi4_scope = ((tos & RTO_ONLINK) ?
+-			 RT_SCOPE_LINK : RT_SCOPE_UNIVERSE);
++	ip_rt_fix_tos(fl4);
+ 
+ 	rcu_read_lock();
+ 	rth = ip_route_output_key_hash_rcu(net, fl4, &res, skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 9b9b02052fd36..1cdcb4df0eb7e 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -138,10 +138,9 @@ int tcp_bpf_sendmsg_redir(struct sock *sk, struct sk_msg *msg,
+ 	struct sk_psock *psock = sk_psock_get(sk);
+ 	int ret;
+ 
+-	if (unlikely(!psock)) {
+-		sk_msg_free(sk, msg);
+-		return 0;
+-	}
++	if (unlikely(!psock))
++		return -EPIPE;
++
+ 	ret = ingress ? bpf_tcp_ingress(sk, psock, msg, bytes, flags) :
+ 			tcp_bpf_push_locked(sk, msg, bytes, flags, false);
+ 	sk_psock_put(sk, psock);
+@@ -335,7 +334,7 @@ more_data:
+ 			cork = true;
+ 			psock->cork = NULL;
+ 		}
+-		sk_msg_return(sk, msg, tosend);
++		sk_msg_return(sk, msg, msg->sg.size);
+ 		release_sock(sk);
+ 
+ 		ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
+@@ -375,8 +374,11 @@ more_data:
+ 		}
+ 		if (msg &&
+ 		    msg->sg.data[msg->sg.start].page_link &&
+-		    msg->sg.data[msg->sg.start].length)
++		    msg->sg.data[msg->sg.start].length) {
++			if (eval == __SK_REDIRECT)
++				sk_mem_charge(sk, msg->sg.size);
+ 			goto more_data;
++		}
+ 	}
+ 	return ret;
+ }
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 2e6e5a70168eb..3a8126dfcd4db 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3719,6 +3719,7 @@ static void tcp_connect_queue_skb(struct sock *sk, struct sk_buff *skb)
+  */
+ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ {
++	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct tcp_fastopen_request *fo = tp->fastopen_req;
+ 	int space, err = 0;
+@@ -3733,8 +3734,10 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ 	 * private TCP options. The cost is reduced data space in SYN :(
+ 	 */
+ 	tp->rx_opt.mss_clamp = tcp_mss_clamp(tp, tp->rx_opt.mss_clamp);
++	/* Sync mss_cache after updating the mss_clamp */
++	tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
+ 
+-	space = __tcp_mtu_to_mss(sk, inet_csk(sk)->icsk_pmtu_cookie) -
++	space = __tcp_mtu_to_mss(sk, icsk->icsk_pmtu_cookie) -
+ 		MAX_TCP_OPTION_SPACE;
+ 
+ 	space = min_t(size_t, space, fo->size);
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index d0d280077721b..ad07904642cad 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -45,6 +45,19 @@ static int __xfrm6_output_finish(struct net *net, struct sock *sk, struct sk_buf
+ 	return xfrm_output(sk, skb);
+ }
+ 
++static int xfrm6_noneed_fragment(struct sk_buff *skb)
++{
++	struct frag_hdr *fh;
++	u8 prevhdr = ipv6_hdr(skb)->nexthdr;
++
++	if (prevhdr != NEXTHDR_FRAGMENT)
++		return 0;
++	fh = (struct frag_hdr *)(skb->data + sizeof(struct ipv6hdr));
++	if (fh->nexthdr == NEXTHDR_ESP || fh->nexthdr == NEXTHDR_AUTH)
++		return 1;
++	return 0;
++}
++
+ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct dst_entry *dst = skb_dst(skb);
+@@ -73,6 +86,9 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 		xfrm6_local_rxpmtu(skb, mtu);
+ 		kfree_skb(skb);
+ 		return -EMSGSIZE;
++	} else if (toobig && xfrm6_noneed_fragment(skb)) {
++		skb->ignore_df = 1;
++		goto skip_frag;
+ 	} else if (!skb->ignore_df && toobig && skb->sk) {
+ 		xfrm_local_error(skb, mtu);
+ 		kfree_skb(skb);
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 9bf52a09b5ff3..fd51db3be91c4 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1699,7 +1699,7 @@ static int pfkey_register(struct sock *sk, struct sk_buff *skb, const struct sad
+ 
+ 	xfrm_probe_algs();
+ 
+-	supp_skb = compose_sadb_supported(hdr, GFP_KERNEL);
++	supp_skb = compose_sadb_supported(hdr, GFP_KERNEL | __GFP_ZERO);
+ 	if (!supp_skb) {
+ 		if (hdr->sadb_msg_satype != SADB_SATYPE_UNSPEC)
+ 			pfk->registered &= ~(1<<hdr->sadb_msg_satype);
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 3354a3b905b8f..541723babcf03 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2380,7 +2380,7 @@ u8 *ieee80211_ie_build_vht_cap(u8 *pos, struct ieee80211_sta_vht_cap *vht_cap,
+ u8 *ieee80211_ie_build_vht_oper(u8 *pos, struct ieee80211_sta_vht_cap *vht_cap,
+ 				const struct cfg80211_chan_def *chandef);
+ u8 ieee80211_ie_len_he_cap(struct ieee80211_sub_if_data *sdata, u8 iftype);
+-u8 *ieee80211_ie_build_he_cap(u8 *pos,
++u8 *ieee80211_ie_build_he_cap(u32 disable_flags, u8 *pos,
+ 			      const struct ieee80211_sta_he_cap *he_cap,
+ 			      u8 *end);
+ void ieee80211_ie_build_he_6ghz_cap(struct ieee80211_sub_if_data *sdata,
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 45fb517591ee9..5311c3cd3050d 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1131,17 +1131,14 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 		local->scan_ies_len +=
+ 			2 + sizeof(struct ieee80211_vht_cap);
+ 
+-	/* HE cap element is variable in size - set len to allow max size */
+ 	/*
+-	 * TODO: 1 is added at the end of the calculation to accommodate for
+-	 *	the temporary placing of the HE capabilities IE under EXT.
+-	 *	Remove it once it is placed in the final place.
+-	 */
+-	if (supp_he)
++	 * HE cap element is variable in size - set len to allow max size */
++	if (supp_he) {
+ 		local->scan_ies_len +=
+-			2 + sizeof(struct ieee80211_he_cap_elem) +
++			3 + sizeof(struct ieee80211_he_cap_elem) +
+ 			sizeof(struct ieee80211_he_mcs_nss_supp) +
+-			IEEE80211_HE_PPE_THRES_MAX_LEN + 1;
++			IEEE80211_HE_PPE_THRES_MAX_LEN;
++	}
+ 
+ 	if (!local->ops->hw_scan) {
+ 		/* For hw_scan, driver needs to set these up. */
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index 15ac08d111ea1..6847fdf934392 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -580,7 +580,7 @@ int mesh_add_he_cap_ie(struct ieee80211_sub_if_data *sdata,
+ 		return -ENOMEM;
+ 
+ 	pos = skb_put(skb, ie_len);
+-	ieee80211_ie_build_he_cap(pos, he_cap, pos + ie_len);
++	ieee80211_ie_build_he_cap(0, pos, he_cap, pos + ie_len);
+ 
+ 	return 0;
+ }
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 404b846501618..ce56f23a4a1f0 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -630,7 +630,7 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata,
+ 				struct sk_buff *skb,
+ 				struct ieee80211_supported_band *sband)
+ {
+-	u8 *pos;
++	u8 *pos, *pre_he_pos;
+ 	const struct ieee80211_sta_he_cap *he_cap = NULL;
+ 	struct ieee80211_chanctx_conf *chanctx_conf;
+ 	u8 he_cap_size;
+@@ -647,20 +647,21 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata,
+ 
+ 	he_cap = ieee80211_get_he_iftype_cap(sband,
+ 					     ieee80211_vif_type_p2p(&sdata->vif));
+-	if (!he_cap || !reg_cap)
++	if (!he_cap || !chanctx_conf || !reg_cap)
+ 		return;
+ 
+-	/*
+-	 * TODO: the 1 added is because this temporarily is under the EXTENSION
+-	 * IE. Get rid of it when it moves.
+-	 */
++	/* get a max size estimate */
+ 	he_cap_size =
+ 		2 + 1 + sizeof(he_cap->he_cap_elem) +
+ 		ieee80211_he_mcs_nss_size(&he_cap->he_cap_elem) +
+ 		ieee80211_he_ppe_size(he_cap->ppe_thres[0],
+ 				      he_cap->he_cap_elem.phy_cap_info);
+ 	pos = skb_put(skb, he_cap_size);
+-	ieee80211_ie_build_he_cap(pos, he_cap, pos + he_cap_size);
++	pre_he_pos = pos;
++	pos = ieee80211_ie_build_he_cap(sdata->u.mgd.flags,
++					pos, he_cap, pos + he_cap_size);
++	/* trim excess if any */
++	skb_trim(skb, skb->len - (pre_he_pos + he_cap_size - pos));
+ 
+ 	ieee80211_ie_build_he_6ghz_cap(sdata, skb);
+ }
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 0e4e1956bcea1..49a9bbb12538f 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1961,7 +1961,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
+ 	if (he_cap &&
+ 	    cfg80211_any_usable_channels(local->hw.wiphy, BIT(sband->band),
+ 					 IEEE80211_CHAN_NO_HE)) {
+-		pos = ieee80211_ie_build_he_cap(pos, he_cap, end);
++		pos = ieee80211_ie_build_he_cap(0, pos, he_cap, end);
+ 		if (!pos)
+ 			goto out_err;
+ 	}
+@@ -2905,10 +2905,11 @@ u8 ieee80211_ie_len_he_cap(struct ieee80211_sub_if_data *sdata, u8 iftype)
+ 				     he_cap->he_cap_elem.phy_cap_info);
+ }
+ 
+-u8 *ieee80211_ie_build_he_cap(u8 *pos,
++u8 *ieee80211_ie_build_he_cap(u32 disable_flags, u8 *pos,
+ 			      const struct ieee80211_sta_he_cap *he_cap,
+ 			      u8 *end)
+ {
++	struct ieee80211_he_cap_elem elem;
+ 	u8 n;
+ 	u8 ie_len;
+ 	u8 *orig_pos = pos;
+@@ -2921,7 +2922,23 @@ u8 *ieee80211_ie_build_he_cap(u8 *pos,
+ 	if (!he_cap)
+ 		return orig_pos;
+ 
+-	n = ieee80211_he_mcs_nss_size(&he_cap->he_cap_elem);
++	/* modify on stack first to calculate 'n' and 'ie_len' correctly */
++	elem = he_cap->he_cap_elem;
++
++	if (disable_flags & IEEE80211_STA_DISABLE_40MHZ)
++		elem.phy_cap_info[0] &=
++			~(IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G |
++			  IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G);
++
++	if (disable_flags & IEEE80211_STA_DISABLE_160MHZ)
++		elem.phy_cap_info[0] &=
++			~IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
++
++	if (disable_flags & IEEE80211_STA_DISABLE_80P80MHZ)
++		elem.phy_cap_info[0] &=
++			~IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G;
++
++	n = ieee80211_he_mcs_nss_size(&elem);
+ 	ie_len = 2 + 1 +
+ 		 sizeof(he_cap->he_cap_elem) + n +
+ 		 ieee80211_he_ppe_size(he_cap->ppe_thres[0],
+@@ -2935,8 +2952,8 @@ u8 *ieee80211_ie_build_he_cap(u8 *pos,
+ 	*pos++ = WLAN_EID_EXT_HE_CAPABILITY;
+ 
+ 	/* Fixed data */
+-	memcpy(pos, &he_cap->he_cap_elem, sizeof(he_cap->he_cap_elem));
+-	pos += sizeof(he_cap->he_cap_elem);
++	memcpy(pos, &elem, sizeof(elem));
++	pos += sizeof(elem);
+ 
+ 	memcpy(pos, &he_cap->he_mcs_nss_supp, n);
+ 	pos += n;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 7566c9dc66812..1784e528704a6 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1202,6 +1202,7 @@ static struct sk_buff *__mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, g
+ 		tcp_skb_entail(ssk, skb);
+ 		return skb;
+ 	}
++	tcp_skb_tsorted_anchor_cleanup(skb);
+ 	kfree_skb(skb);
+ 	return NULL;
+ }
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 7f79974607643..917e708a45611 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -989,7 +989,7 @@ static int __nf_ct_resolve_clash(struct sk_buff *skb,
+ 
+ 		nf_ct_acct_merge(ct, ctinfo, loser_ct);
+ 		nf_ct_add_to_dying_list(loser_ct);
+-		nf_conntrack_put(&loser_ct->ct_general);
++		nf_ct_put(loser_ct);
+ 		nf_ct_set(skb, ct, ctinfo);
+ 
+ 		NF_CT_STAT_INC(net, clash_resolve);
+@@ -1920,7 +1920,7 @@ repeat:
+ 		/* Invalid: inverse of the return code tells
+ 		 * the netfilter core what to do */
+ 		pr_debug("nf_conntrack_in: Can't track with proto module\n");
+-		nf_conntrack_put(&ct->ct_general);
++		nf_ct_put(ct);
+ 		skb->_nfct = 0;
+ 		/* Special case: TCP tracker reports an attempt to reopen a
+ 		 * closed/aborted connection. We have to go back and create a
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index ae4488a13c70c..ceb38a7b37cb7 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -556,6 +556,12 @@ static const struct nf_ct_ext_type helper_extend = {
+ 	.id	= NF_CT_EXT_HELPER,
+ };
+ 
++void nf_ct_set_auto_assign_helper_warned(struct net *net)
++{
++	nf_ct_pernet(net)->auto_assign_helper_warned = true;
++}
++EXPORT_SYMBOL_GPL(nf_ct_set_auto_assign_helper_warned);
++
+ void nf_conntrack_helper_pernet_init(struct net *net)
+ {
+ 	struct nf_conntrack_net *cnet = nf_ct_pernet(net);
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index af5115e127cfd..3cee5d8ee7027 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -341,8 +341,8 @@ static void tcp_options(const struct sk_buff *skb,
+ 	if (!ptr)
+ 		return;
+ 
+-	state->td_scale =
+-	state->flags = 0;
++	state->td_scale = 0;
++	state->flags &= IP_CT_TCP_FLAG_BE_LIBERAL;
+ 
+ 	while (length > 0) {
+ 		int opcode=*ptr++;
+@@ -839,6 +839,16 @@ static bool tcp_can_early_drop(const struct nf_conn *ct)
+ 	return false;
+ }
+ 
++static void nf_ct_tcp_state_reset(struct ip_ct_tcp_state *state)
++{
++	state->td_end		= 0;
++	state->td_maxend	= 0;
++	state->td_maxwin	= 0;
++	state->td_maxack	= 0;
++	state->td_scale		= 0;
++	state->flags		&= IP_CT_TCP_FLAG_BE_LIBERAL;
++}
++
+ /* Returns verdict for packet, or -1 for invalid. */
+ int nf_conntrack_tcp_packet(struct nf_conn *ct,
+ 			    struct sk_buff *skb,
+@@ -945,8 +955,7 @@ int nf_conntrack_tcp_packet(struct nf_conn *ct,
+ 			ct->proto.tcp.last_flags &= ~IP_CT_EXP_CHALLENGE_ACK;
+ 			ct->proto.tcp.seen[ct->proto.tcp.last_dir].flags =
+ 				ct->proto.tcp.last_flags;
+-			memset(&ct->proto.tcp.seen[dir], 0,
+-			       sizeof(struct ip_ct_tcp_state));
++			nf_ct_tcp_state_reset(&ct->proto.tcp.seen[dir]);
+ 			break;
+ 		}
+ 		ct->proto.tcp.last_index = index;
+diff --git a/net/netfilter/nf_flow_table_inet.c b/net/netfilter/nf_flow_table_inet.c
+index bc4126d8ef65f..280fdd32965f6 100644
+--- a/net/netfilter/nf_flow_table_inet.c
++++ b/net/netfilter/nf_flow_table_inet.c
+@@ -6,12 +6,29 @@
+ #include <linux/rhashtable.h>
+ #include <net/netfilter/nf_flow_table.h>
+ #include <net/netfilter/nf_tables.h>
++#include <linux/if_vlan.h>
+ 
+ static unsigned int
+ nf_flow_offload_inet_hook(void *priv, struct sk_buff *skb,
+ 			  const struct nf_hook_state *state)
+ {
++	struct vlan_ethhdr *veth;
++	__be16 proto;
++
+ 	switch (skb->protocol) {
++	case htons(ETH_P_8021Q):
++		veth = (struct vlan_ethhdr *)skb_mac_header(skb);
++		proto = veth->h_vlan_encapsulated_proto;
++		break;
++	case htons(ETH_P_PPP_SES):
++		proto = nf_flow_pppoe_proto(skb);
++		break;
++	default:
++		proto = skb->protocol;
++		break;
++	}
++
++	switch (proto) {
+ 	case htons(ETH_P_IP):
+ 		return nf_flow_offload_ip_hook(priv, skb, state);
+ 	case htons(ETH_P_IPV6):
+diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
+index 889cf88d3dba6..6257d87c3a56d 100644
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -8,8 +8,6 @@
+ #include <linux/ipv6.h>
+ #include <linux/netdevice.h>
+ #include <linux/if_ether.h>
+-#include <linux/if_pppox.h>
+-#include <linux/ppp_defs.h>
+ #include <net/ip.h>
+ #include <net/ipv6.h>
+ #include <net/ip6_route.h>
+@@ -239,22 +237,6 @@ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
+ 	return NF_STOLEN;
+ }
+ 
+-static inline __be16 nf_flow_pppoe_proto(const struct sk_buff *skb)
+-{
+-	__be16 proto;
+-
+-	proto = *((__be16 *)(skb_mac_header(skb) + ETH_HLEN +
+-			     sizeof(struct pppoe_hdr)));
+-	switch (proto) {
+-	case htons(PPP_IP):
+-		return htons(ETH_P_IP);
+-	case htons(PPP_IPV6):
+-		return htons(ETH_P_IPV6);
+-	}
+-
+-	return 0;
+-}
+-
+ static bool nf_flow_skb_encap_protocol(const struct sk_buff *skb, __be16 proto,
+ 				       u32 *offset)
+ {
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 99b1de14ff7ee..54ecb9fbf2de6 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -1040,6 +1040,9 @@ static int nft_ct_helper_obj_init(const struct nft_ctx *ctx,
+ 	if (err < 0)
+ 		goto err_put_helper;
+ 
++	/* Avoid the bogus warning, helper will be assigned after CT init */
++	nf_ct_set_auto_assign_helper_warned(ctx->net);
++
+ 	return 0;
+ 
+ err_put_helper:
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 9eba2e6483851..6fbc3ea735e5f 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -157,6 +157,8 @@ EXPORT_SYMBOL(do_trace_netlink_extack);
+ 
+ static inline u32 netlink_group_mask(u32 group)
+ {
++	if (group > 32)
++		return 0;
+ 	return group ? 1 << (group - 1) : 0;
+ }
+ 
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 1b5eae57bc900..f2b64cab9af70 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -574,7 +574,7 @@ ovs_ct_expect_find(struct net *net, const struct nf_conntrack_zone *zone,
+ 			struct nf_conn *ct = nf_ct_tuplehash_to_ctrack(h);
+ 
+ 			nf_ct_delete(ct, 0, 0);
+-			nf_conntrack_put(&ct->ct_general);
++			nf_ct_put(ct);
+ 		}
+ 	}
+ 
+@@ -723,7 +723,7 @@ static bool skb_nfct_cached(struct net *net,
+ 		if (nf_ct_is_confirmed(ct))
+ 			nf_ct_delete(ct, 0, 0);
+ 
+-		nf_conntrack_put(&ct->ct_general);
++		nf_ct_put(ct);
+ 		nf_ct_set(skb, NULL, 0);
+ 		return false;
+ 	}
+@@ -732,6 +732,57 @@ static bool skb_nfct_cached(struct net *net,
+ }
+ 
+ #if IS_ENABLED(CONFIG_NF_NAT)
++static void ovs_nat_update_key(struct sw_flow_key *key,
++			       const struct sk_buff *skb,
++			       enum nf_nat_manip_type maniptype)
++{
++	if (maniptype == NF_NAT_MANIP_SRC) {
++		__be16 src;
++
++		key->ct_state |= OVS_CS_F_SRC_NAT;
++		if (key->eth.type == htons(ETH_P_IP))
++			key->ipv4.addr.src = ip_hdr(skb)->saddr;
++		else if (key->eth.type == htons(ETH_P_IPV6))
++			memcpy(&key->ipv6.addr.src, &ipv6_hdr(skb)->saddr,
++			       sizeof(key->ipv6.addr.src));
++		else
++			return;
++
++		if (key->ip.proto == IPPROTO_UDP)
++			src = udp_hdr(skb)->source;
++		else if (key->ip.proto == IPPROTO_TCP)
++			src = tcp_hdr(skb)->source;
++		else if (key->ip.proto == IPPROTO_SCTP)
++			src = sctp_hdr(skb)->source;
++		else
++			return;
++
++		key->tp.src = src;
++	} else {
++		__be16 dst;
++
++		key->ct_state |= OVS_CS_F_DST_NAT;
++		if (key->eth.type == htons(ETH_P_IP))
++			key->ipv4.addr.dst = ip_hdr(skb)->daddr;
++		else if (key->eth.type == htons(ETH_P_IPV6))
++			memcpy(&key->ipv6.addr.dst, &ipv6_hdr(skb)->daddr,
++			       sizeof(key->ipv6.addr.dst));
++		else
++			return;
++
++		if (key->ip.proto == IPPROTO_UDP)
++			dst = udp_hdr(skb)->dest;
++		else if (key->ip.proto == IPPROTO_TCP)
++			dst = tcp_hdr(skb)->dest;
++		else if (key->ip.proto == IPPROTO_SCTP)
++			dst = sctp_hdr(skb)->dest;
++		else
++			return;
++
++		key->tp.dst = dst;
++	}
++}
++
+ /* Modelled after nf_nat_ipv[46]_fn().
+  * range is only used for new, uninitialized NAT state.
+  * Returns either NF_ACCEPT or NF_DROP.
+@@ -739,7 +790,7 @@ static bool skb_nfct_cached(struct net *net,
+ static int ovs_ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ 			      enum ip_conntrack_info ctinfo,
+ 			      const struct nf_nat_range2 *range,
+-			      enum nf_nat_manip_type maniptype)
++			      enum nf_nat_manip_type maniptype, struct sw_flow_key *key)
+ {
+ 	int hooknum, nh_off, err = NF_ACCEPT;
+ 
+@@ -811,58 +862,11 @@ static int ovs_ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ push:
+ 	skb_push_rcsum(skb, nh_off);
+ 
+-	return err;
+-}
+-
+-static void ovs_nat_update_key(struct sw_flow_key *key,
+-			       const struct sk_buff *skb,
+-			       enum nf_nat_manip_type maniptype)
+-{
+-	if (maniptype == NF_NAT_MANIP_SRC) {
+-		__be16 src;
+-
+-		key->ct_state |= OVS_CS_F_SRC_NAT;
+-		if (key->eth.type == htons(ETH_P_IP))
+-			key->ipv4.addr.src = ip_hdr(skb)->saddr;
+-		else if (key->eth.type == htons(ETH_P_IPV6))
+-			memcpy(&key->ipv6.addr.src, &ipv6_hdr(skb)->saddr,
+-			       sizeof(key->ipv6.addr.src));
+-		else
+-			return;
+-
+-		if (key->ip.proto == IPPROTO_UDP)
+-			src = udp_hdr(skb)->source;
+-		else if (key->ip.proto == IPPROTO_TCP)
+-			src = tcp_hdr(skb)->source;
+-		else if (key->ip.proto == IPPROTO_SCTP)
+-			src = sctp_hdr(skb)->source;
+-		else
+-			return;
+-
+-		key->tp.src = src;
+-	} else {
+-		__be16 dst;
+-
+-		key->ct_state |= OVS_CS_F_DST_NAT;
+-		if (key->eth.type == htons(ETH_P_IP))
+-			key->ipv4.addr.dst = ip_hdr(skb)->daddr;
+-		else if (key->eth.type == htons(ETH_P_IPV6))
+-			memcpy(&key->ipv6.addr.dst, &ipv6_hdr(skb)->daddr,
+-			       sizeof(key->ipv6.addr.dst));
+-		else
+-			return;
+-
+-		if (key->ip.proto == IPPROTO_UDP)
+-			dst = udp_hdr(skb)->dest;
+-		else if (key->ip.proto == IPPROTO_TCP)
+-			dst = tcp_hdr(skb)->dest;
+-		else if (key->ip.proto == IPPROTO_SCTP)
+-			dst = sctp_hdr(skb)->dest;
+-		else
+-			return;
++	/* Update the flow key if NAT successful. */
++	if (err == NF_ACCEPT)
++		ovs_nat_update_key(key, skb, maniptype);
+ 
+-		key->tp.dst = dst;
+-	}
++	return err;
+ }
+ 
+ /* Returns NF_DROP if the packet should be dropped, NF_ACCEPT otherwise. */
+@@ -904,7 +908,7 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ 	} else {
+ 		return NF_ACCEPT; /* Connection is not NATed. */
+ 	}
+-	err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype);
++	err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype, key);
+ 
+ 	if (err == NF_ACCEPT && ct->status & IPS_DST_NAT) {
+ 		if (ct->status & IPS_SRC_NAT) {
+@@ -914,17 +918,13 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ 				maniptype = NF_NAT_MANIP_SRC;
+ 
+ 			err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range,
+-						 maniptype);
++						 maniptype, key);
+ 		} else if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) {
+ 			err = ovs_ct_nat_execute(skb, ct, ctinfo, NULL,
+-						 NF_NAT_MANIP_SRC);
++						 NF_NAT_MANIP_SRC, key);
+ 		}
+ 	}
+ 
+-	/* Mark NAT done if successful and update the flow key. */
+-	if (err == NF_ACCEPT)
+-		ovs_nat_update_key(key, skb, maniptype);
+-
+ 	return err;
+ }
+ #else /* !CONFIG_NF_NAT */
+@@ -967,7 +967,8 @@ static int __ovs_ct_lookup(struct net *net, struct sw_flow_key *key,
+ 
+ 		/* Associate skb with specified zone. */
+ 		if (tmpl) {
+-			nf_conntrack_put(skb_nfct(skb));
++			ct = nf_ct_get(skb, &ctinfo);
++			nf_ct_put(ct);
+ 			nf_conntrack_get(&tmpl->ct_general);
+ 			nf_ct_set(skb, tmpl, IP_CT_NEW);
+ 		}
+@@ -1328,7 +1329,12 @@ int ovs_ct_execute(struct net *net, struct sk_buff *skb,
+ 
+ int ovs_ct_clear(struct sk_buff *skb, struct sw_flow_key *key)
+ {
+-	nf_conntrack_put(skb_nfct(skb));
++	enum ip_conntrack_info ctinfo;
++	struct nf_conn *ct;
++
++	ct = nf_ct_get(skb, &ctinfo);
++
++	nf_ct_put(ct);
+ 	nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
+ 	ovs_ct_fill_key(skb, key, false);
+ 
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index fd1f809e9bc1b..0d677c9c2c805 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2201,8 +2201,8 @@ static int __ovs_nla_put_key(const struct sw_flow_key *swkey,
+ 			icmpv6_key->icmpv6_type = ntohs(output->tp.src);
+ 			icmpv6_key->icmpv6_code = ntohs(output->tp.dst);
+ 
+-			if (icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_SOLICITATION ||
+-			    icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_ADVERTISEMENT) {
++			if (swkey->tp.src == htons(NDISC_NEIGHBOUR_SOLICITATION) ||
++			    swkey->tp.src == htons(NDISC_NEIGHBOUR_ADVERTISEMENT)) {
+ 				struct ovs_key_nd *nd_key;
+ 
+ 				nla = nla_reserve(skb, OVS_KEY_ATTR_ND, sizeof(*nd_key));
+diff --git a/net/rfkill/core.c b/net/rfkill/core.c
+index ac15a944573f7..068c7bcd30c94 100644
+--- a/net/rfkill/core.c
++++ b/net/rfkill/core.c
+@@ -78,6 +78,7 @@ struct rfkill_data {
+ 	struct mutex		mtx;
+ 	wait_queue_head_t	read_wait;
+ 	bool			input_handler;
++	u8			max_size;
+ };
+ 
+ 
+@@ -1141,6 +1142,8 @@ static int rfkill_fop_open(struct inode *inode, struct file *file)
+ 	if (!data)
+ 		return -ENOMEM;
+ 
++	data->max_size = RFKILL_EVENT_SIZE_V1;
++
+ 	INIT_LIST_HEAD(&data->events);
+ 	mutex_init(&data->mtx);
+ 	init_waitqueue_head(&data->read_wait);
+@@ -1223,6 +1226,7 @@ static ssize_t rfkill_fop_read(struct file *file, char __user *buf,
+ 				list);
+ 
+ 	sz = min_t(unsigned long, sizeof(ev->ev), count);
++	sz = min_t(unsigned long, sz, data->max_size);
+ 	ret = sz;
+ 	if (copy_to_user(buf, &ev->ev, sz))
+ 		ret = -EFAULT;
+@@ -1237,6 +1241,7 @@ static ssize_t rfkill_fop_read(struct file *file, char __user *buf,
+ static ssize_t rfkill_fop_write(struct file *file, const char __user *buf,
+ 				size_t count, loff_t *pos)
+ {
++	struct rfkill_data *data = file->private_data;
+ 	struct rfkill *rfkill;
+ 	struct rfkill_event_ext ev;
+ 	int ret;
+@@ -1251,6 +1256,7 @@ static ssize_t rfkill_fop_write(struct file *file, const char __user *buf,
+ 	 * our API version even in a write() call, if it cares.
+ 	 */
+ 	count = min(count, sizeof(ev));
++	count = min_t(size_t, count, data->max_size);
+ 	if (copy_from_user(&ev, buf, count))
+ 		return -EFAULT;
+ 
+@@ -1310,31 +1316,47 @@ static int rfkill_fop_release(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_RFKILL_INPUT
+ static long rfkill_fop_ioctl(struct file *file, unsigned int cmd,
+ 			     unsigned long arg)
+ {
+ 	struct rfkill_data *data = file->private_data;
++	int ret = -ENOSYS;
++	u32 size;
+ 
+ 	if (_IOC_TYPE(cmd) != RFKILL_IOC_MAGIC)
+ 		return -ENOSYS;
+ 
+-	if (_IOC_NR(cmd) != RFKILL_IOC_NOINPUT)
+-		return -ENOSYS;
+-
+ 	mutex_lock(&data->mtx);
+-
+-	if (!data->input_handler) {
+-		if (atomic_inc_return(&rfkill_input_disabled) == 1)
+-			printk(KERN_DEBUG "rfkill: input handler disabled\n");
+-		data->input_handler = true;
++	switch (_IOC_NR(cmd)) {
++#ifdef CONFIG_RFKILL_INPUT
++	case RFKILL_IOC_NOINPUT:
++		if (!data->input_handler) {
++			if (atomic_inc_return(&rfkill_input_disabled) == 1)
++				printk(KERN_DEBUG "rfkill: input handler disabled\n");
++			data->input_handler = true;
++		}
++		ret = 0;
++		break;
++#endif
++	case RFKILL_IOC_MAX_SIZE:
++		if (get_user(size, (__u32 __user *)arg)) {
++			ret = -EFAULT;
++			break;
++		}
++		if (size < RFKILL_EVENT_SIZE_V1 || size > U8_MAX) {
++			ret = -EINVAL;
++			break;
++		}
++		data->max_size = size;
++		ret = 0;
++		break;
++	default:
++		break;
+ 	}
+-
+ 	mutex_unlock(&data->mtx);
+ 
+-	return 0;
++	return ret;
+ }
+-#endif
+ 
+ static const struct file_operations rfkill_fops = {
+ 	.owner		= THIS_MODULE,
+@@ -1343,10 +1365,8 @@ static const struct file_operations rfkill_fops = {
+ 	.write		= rfkill_fop_write,
+ 	.poll		= rfkill_fop_poll,
+ 	.release	= rfkill_fop_release,
+-#ifdef CONFIG_RFKILL_INPUT
+ 	.unlocked_ioctl	= rfkill_fop_ioctl,
+ 	.compat_ioctl	= compat_ptr_ioctl,
+-#endif
+ 	.llseek		= no_llseek,
+ };
+ 
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 7bd6f8a66a3ef..969e532f77a90 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -777,14 +777,12 @@ void rxrpc_propose_ACK(struct rxrpc_call *, u8, u32, bool, bool,
+ 		       enum rxrpc_propose_ack_trace);
+ void rxrpc_process_call(struct work_struct *);
+ 
+-static inline void rxrpc_reduce_call_timer(struct rxrpc_call *call,
+-					   unsigned long expire_at,
+-					   unsigned long now,
+-					   enum rxrpc_timer_trace why)
+-{
+-	trace_rxrpc_timer(call, why, now);
+-	timer_reduce(&call->timer, expire_at);
+-}
++void rxrpc_reduce_call_timer(struct rxrpc_call *call,
++			     unsigned long expire_at,
++			     unsigned long now,
++			     enum rxrpc_timer_trace why);
++
++void rxrpc_delete_call_timer(struct rxrpc_call *call);
+ 
+ /*
+  * call_object.c
+@@ -808,6 +806,7 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *);
+ bool __rxrpc_queue_call(struct rxrpc_call *);
+ bool rxrpc_queue_call(struct rxrpc_call *);
+ void rxrpc_see_call(struct rxrpc_call *);
++bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op);
+ void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
+ void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace);
+ void rxrpc_cleanup_call(struct rxrpc_call *);
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index df864e6922679..22e05de5d1ca9 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -310,7 +310,7 @@ recheck_state:
+ 	}
+ 
+ 	if (call->state == RXRPC_CALL_COMPLETE) {
+-		del_timer_sync(&call->timer);
++		rxrpc_delete_call_timer(call);
+ 		goto out_put;
+ 	}
+ 
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 4eb91d958a48d..043508fd8d8a5 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -53,10 +53,30 @@ static void rxrpc_call_timer_expired(struct timer_list *t)
+ 
+ 	if (call->state < RXRPC_CALL_COMPLETE) {
+ 		trace_rxrpc_timer(call, rxrpc_timer_expired, jiffies);
+-		rxrpc_queue_call(call);
++		__rxrpc_queue_call(call);
++	} else {
++		rxrpc_put_call(call, rxrpc_call_put);
++	}
++}
++
++void rxrpc_reduce_call_timer(struct rxrpc_call *call,
++			     unsigned long expire_at,
++			     unsigned long now,
++			     enum rxrpc_timer_trace why)
++{
++	if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) {
++		trace_rxrpc_timer(call, why, now);
++		if (timer_reduce(&call->timer, expire_at))
++			rxrpc_put_call(call, rxrpc_call_put_notimer);
+ 	}
+ }
+ 
++void rxrpc_delete_call_timer(struct rxrpc_call *call)
++{
++	if (del_timer_sync(&call->timer))
++		rxrpc_put_call(call, rxrpc_call_put_timer);
++}
++
+ static struct lock_class_key rxrpc_call_user_mutex_lock_class_key;
+ 
+ /*
+@@ -463,6 +483,17 @@ void rxrpc_see_call(struct rxrpc_call *call)
+ 	}
+ }
+ 
++bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
++{
++	const void *here = __builtin_return_address(0);
++	int n = atomic_fetch_add_unless(&call->usage, 1, 0);
++
++	if (n == 0)
++		return false;
++	trace_rxrpc_call(call->debug_id, op, n, here, NULL);
++	return true;
++}
++
+ /*
+  * Note the addition of a ref on a call.
+  */
+@@ -510,8 +541,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
+ 	spin_unlock_bh(&call->lock);
+ 
+ 	rxrpc_put_call_slot(call);
+-
+-	del_timer_sync(&call->timer);
++	rxrpc_delete_call_timer(call);
+ 
+ 	/* Make sure we don't get any more notifications */
+ 	write_lock_bh(&rx->recvmsg_lock);
+@@ -618,6 +648,8 @@ static void rxrpc_destroy_call(struct work_struct *work)
+ 	struct rxrpc_call *call = container_of(work, struct rxrpc_call, processor);
+ 	struct rxrpc_net *rxnet = call->rxnet;
+ 
++	rxrpc_delete_call_timer(call);
++
+ 	rxrpc_put_connection(call->conn);
+ 	rxrpc_put_peer(call->peer);
+ 	kfree(call->rxtx_buffer);
+@@ -652,8 +684,6 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
+ 
+ 	memset(&call->sock_node, 0xcd, sizeof(call->sock_node));
+ 
+-	del_timer_sync(&call->timer);
+-
+ 	ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
+ 	ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
+ 
+diff --git a/net/rxrpc/server_key.c b/net/rxrpc/server_key.c
+index ead3471307ee5..ee269e0e6ee87 100644
+--- a/net/rxrpc/server_key.c
++++ b/net/rxrpc/server_key.c
+@@ -84,6 +84,9 @@ static int rxrpc_preparse_s(struct key_preparsed_payload *prep)
+ 
+ 	prep->payload.data[1] = (struct rxrpc_security *)sec;
+ 
++	if (!sec->preparse_server_key)
++		return -EINVAL;
++
+ 	return sec->preparse_server_key(prep);
+ }
+ 
+@@ -91,7 +94,7 @@ static void rxrpc_free_preparse_s(struct key_preparsed_payload *prep)
+ {
+ 	const struct rxrpc_security *sec = prep->payload.data[1];
+ 
+-	if (sec)
++	if (sec && sec->free_preparse_server_key)
+ 		sec->free_preparse_server_key(prep);
+ }
+ 
+@@ -99,7 +102,7 @@ static void rxrpc_destroy_s(struct key *key)
+ {
+ 	const struct rxrpc_security *sec = key->payload.data[1];
+ 
+-	if (sec)
++	if (sec && sec->destroy_server_key)
+ 		sec->destroy_server_key(key);
+ }
+ 
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 4ffea1290ce1c..553bf41671a65 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -583,22 +583,25 @@ static bool tcf_ct_skb_nfct_cached(struct net *net, struct sk_buff *skb,
+ 	if (!ct)
+ 		return false;
+ 	if (!net_eq(net, read_pnet(&ct->ct_net)))
+-		return false;
++		goto drop_ct;
+ 	if (nf_ct_zone(ct)->id != zone_id)
+-		return false;
++		goto drop_ct;
+ 
+ 	/* Force conntrack entry direction. */
+ 	if (force && CTINFO2DIR(ctinfo) != IP_CT_DIR_ORIGINAL) {
+ 		if (nf_ct_is_confirmed(ct))
+ 			nf_ct_kill(ct);
+ 
+-		nf_conntrack_put(&ct->ct_general);
+-		nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
+-
+-		return false;
++		goto drop_ct;
+ 	}
+ 
+ 	return true;
++
++drop_ct:
++	nf_ct_put(ct);
++	nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
++
++	return false;
+ }
+ 
+ /* Trim the skb to the length specified by the IP/IPv6 header,
+@@ -757,7 +760,7 @@ static void tcf_ct_params_free(struct rcu_head *head)
+ 	tcf_ct_flow_table_put(params);
+ 
+ 	if (params->tmpl)
+-		nf_conntrack_put(&params->tmpl->ct_general);
++		nf_ct_put(params->tmpl);
+ 	kfree(params);
+ }
+ 
+@@ -967,7 +970,7 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ 		tc_skb_cb(skb)->post_ct = false;
+ 		ct = nf_ct_get(skb, &ctinfo);
+ 		if (ct) {
+-			nf_conntrack_put(&ct->ct_general);
++			nf_ct_put(ct);
+ 			nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
+ 		}
+ 
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 354c1c4de19bd..5633fbb1ba3c6 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -930,6 +930,11 @@ enum sctp_disposition sctp_sf_do_5_1E_ca(struct net *net,
+ 	if (!sctp_vtag_verify(chunk, asoc))
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
++	/* Set peer label for connection. */
++	if (security_sctp_assoc_established((struct sctp_association *)asoc,
++					    chunk->skb))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* Verify that the chunk length for the COOKIE-ACK is OK.
+ 	 * If we don't do this, any bundled chunks may be junked.
+ 	 */
+@@ -945,9 +950,6 @@ enum sctp_disposition sctp_sf_do_5_1E_ca(struct net *net,
+ 	 */
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_INIT_COUNTER_RESET, SCTP_NULL());
+ 
+-	/* Set peer label for connection. */
+-	security_inet_conn_established(ep->base.sk, chunk->skb);
+-
+ 	/* RFC 2960 5.1 Normal Establishment of an Association
+ 	 *
+ 	 * E) Upon reception of the COOKIE ACK, endpoint "A" will move
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index c83fe618767c4..b36d235d2d6d9 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1065,7 +1065,9 @@ rpc_task_get_next_xprt(struct rpc_clnt *clnt)
+ static
+ void rpc_task_set_transport(struct rpc_task *task, struct rpc_clnt *clnt)
+ {
+-	if (task->tk_xprt)
++	if (task->tk_xprt &&
++			!(test_bit(XPRT_OFFLINE, &task->tk_xprt->state) &&
++			(task->tk_flags & RPC_TASK_MOVEABLE)))
+ 		return;
+ 	if (task->tk_flags & RPC_TASK_NO_ROUND_ROBIN)
+ 		task->tk_xprt = rpc_task_get_first_xprt(clnt);
+@@ -1085,8 +1087,6 @@ void rpc_task_set_client(struct rpc_task *task, struct rpc_clnt *clnt)
+ 		task->tk_flags |= RPC_TASK_TIMEOUT;
+ 	if (clnt->cl_noretranstimeo)
+ 		task->tk_flags |= RPC_TASK_NO_RETRANS_TIMEOUT;
+-	if (atomic_read(&clnt->cl_swapper))
+-		task->tk_flags |= RPC_TASK_SWAPPER;
+ 	/* Add to the client's list of all tasks */
+ 	spin_lock(&clnt->cl_lock);
+ 	list_add_tail(&task->tk_task, &clnt->cl_tasks);
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index e2c835482791e..ae295844ac55a 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -876,6 +876,15 @@ void rpc_release_calldata(const struct rpc_call_ops *ops, void *calldata)
+ 		ops->rpc_release(calldata);
+ }
+ 
++static bool xprt_needs_memalloc(struct rpc_xprt *xprt, struct rpc_task *tk)
++{
++	if (!xprt)
++		return false;
++	if (!atomic_read(&xprt->swapper))
++		return false;
++	return test_bit(XPRT_LOCKED, &xprt->state) && xprt->snd_task == tk;
++}
++
+ /*
+  * This is the RPC `scheduler' (or rather, the finite state machine).
+  */
+@@ -884,6 +893,7 @@ static void __rpc_execute(struct rpc_task *task)
+ 	struct rpc_wait_queue *queue;
+ 	int task_is_async = RPC_IS_ASYNC(task);
+ 	int status = 0;
++	unsigned long pflags = current->flags;
+ 
+ 	WARN_ON_ONCE(RPC_IS_QUEUED(task));
+ 	if (RPC_IS_QUEUED(task))
+@@ -906,6 +916,10 @@ static void __rpc_execute(struct rpc_task *task)
+ 		}
+ 		if (!do_action)
+ 			break;
++		if (RPC_IS_SWAPPER(task) ||
++		    xprt_needs_memalloc(task->tk_xprt, task))
++			current->flags |= PF_MEMALLOC;
++
+ 		trace_rpc_task_run_action(task, do_action);
+ 		do_action(task);
+ 
+@@ -943,7 +957,7 @@ static void __rpc_execute(struct rpc_task *task)
+ 		rpc_clear_running(task);
+ 		spin_unlock(&queue->lock);
+ 		if (task_is_async)
+-			return;
++			goto out;
+ 
+ 		/* sync task: sleep here */
+ 		trace_rpc_task_sync_sleep(task, task->tk_action);
+@@ -967,6 +981,8 @@ static void __rpc_execute(struct rpc_task *task)
+ 
+ 	/* Release all resources associated with the task */
+ 	rpc_release_task(task);
++out:
++	current_restore_flags(pflags, PF_MEMALLOC);
+ }
+ 
+ /*
+@@ -1023,8 +1039,8 @@ int rpc_malloc(struct rpc_task *task)
+ 	struct rpc_buffer *buf;
+ 	gfp_t gfp = GFP_NOFS;
+ 
+-	if (RPC_IS_SWAPPER(task))
+-		gfp = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
++	if (RPC_IS_ASYNC(task))
++		gfp = GFP_NOWAIT | __GFP_NOWARN;
+ 
+ 	size += sizeof(struct rpc_buffer);
+ 	if (size <= RPC_BUFFER_MAXSIZE)
+diff --git a/net/sunrpc/sysfs.c b/net/sunrpc/sysfs.c
+index 0c28280dd3bcb..5d39bb2b01f12 100644
+--- a/net/sunrpc/sysfs.c
++++ b/net/sunrpc/sysfs.c
+@@ -97,7 +97,7 @@ static ssize_t rpc_sysfs_xprt_dstaddr_show(struct kobject *kobj,
+ 		return 0;
+ 	ret = sprintf(buf, "%s\n", xprt->address_strings[RPC_DISPLAY_ADDR]);
+ 	xprt_put(xprt);
+-	return ret + 1;
++	return ret;
+ }
+ 
+ static ssize_t rpc_sysfs_xprt_srcaddr_show(struct kobject *kobj,
+@@ -105,33 +105,31 @@ static ssize_t rpc_sysfs_xprt_srcaddr_show(struct kobject *kobj,
+ 					   char *buf)
+ {
+ 	struct rpc_xprt *xprt = rpc_sysfs_xprt_kobj_get_xprt(kobj);
+-	struct sockaddr_storage saddr;
+-	struct sock_xprt *sock;
+-	ssize_t ret = -1;
++	size_t buflen = PAGE_SIZE;
++	ssize_t ret = -ENOTSOCK;
+ 
+ 	if (!xprt || !xprt_connected(xprt)) {
+-		xprt_put(xprt);
+-		return -ENOTCONN;
++		ret = -ENOTCONN;
++	} else if (xprt->ops->get_srcaddr) {
++		ret = xprt->ops->get_srcaddr(xprt, buf, buflen);
++		if (ret > 0) {
++			if (ret < buflen - 1) {
++				buf[ret] = '\n';
++				ret++;
++				buf[ret] = '\0';
++			}
++		}
+ 	}
+-
+-	sock = container_of(xprt, struct sock_xprt, xprt);
+-	mutex_lock(&sock->recv_mutex);
+-	if (sock->sock == NULL ||
+-	    kernel_getsockname(sock->sock, (struct sockaddr *)&saddr) < 0)
+-		goto out;
+-
+-	ret = sprintf(buf, "%pISc\n", &saddr);
+-out:
+-	mutex_unlock(&sock->recv_mutex);
+ 	xprt_put(xprt);
+-	return ret + 1;
++	return ret;
+ }
+ 
+ static ssize_t rpc_sysfs_xprt_info_show(struct kobject *kobj,
+-					struct kobj_attribute *attr,
+-					char *buf)
++					struct kobj_attribute *attr, char *buf)
+ {
+ 	struct rpc_xprt *xprt = rpc_sysfs_xprt_kobj_get_xprt(kobj);
++	unsigned short srcport = 0;
++	size_t buflen = PAGE_SIZE;
+ 	ssize_t ret;
+ 
+ 	if (!xprt || !xprt_connected(xprt)) {
+@@ -139,7 +137,11 @@ static ssize_t rpc_sysfs_xprt_info_show(struct kobject *kobj,
+ 		return -ENOTCONN;
+ 	}
+ 
+-	ret = sprintf(buf, "last_used=%lu\ncur_cong=%lu\ncong_win=%lu\n"
++	if (xprt->ops->get_srcport)
++		srcport = xprt->ops->get_srcport(xprt);
++
++	ret = snprintf(buf, buflen,
++		       "last_used=%lu\ncur_cong=%lu\ncong_win=%lu\n"
+ 		       "max_num_slots=%u\nmin_num_slots=%u\nnum_reqs=%u\n"
+ 		       "binding_q_len=%u\nsending_q_len=%u\npending_q_len=%u\n"
+ 		       "backlog_q_len=%u\nmain_xprt=%d\nsrc_port=%u\n"
+@@ -147,14 +149,11 @@ static ssize_t rpc_sysfs_xprt_info_show(struct kobject *kobj,
+ 		       xprt->last_used, xprt->cong, xprt->cwnd, xprt->max_reqs,
+ 		       xprt->min_reqs, xprt->num_reqs, xprt->binding.qlen,
+ 		       xprt->sending.qlen, xprt->pending.qlen,
+-		       xprt->backlog.qlen, xprt->main,
+-		       (xprt->xprt_class->ident == XPRT_TRANSPORT_TCP) ?
+-		       get_srcport(xprt) : 0,
++		       xprt->backlog.qlen, xprt->main, srcport,
+ 		       atomic_long_read(&xprt->queuelen),
+-		       (xprt->xprt_class->ident == XPRT_TRANSPORT_TCP) ?
+-				xprt->address_strings[RPC_DISPLAY_PORT] : "0");
++		       xprt->address_strings[RPC_DISPLAY_PORT]);
+ 	xprt_put(xprt);
+-	return ret + 1;
++	return ret;
+ }
+ 
+ static ssize_t rpc_sysfs_xprt_state_show(struct kobject *kobj,
+@@ -201,7 +200,7 @@ static ssize_t rpc_sysfs_xprt_state_show(struct kobject *kobj,
+ 	}
+ 
+ 	xprt_put(xprt);
+-	return ret + 1;
++	return ret;
+ }
+ 
+ static ssize_t rpc_sysfs_xprt_switch_info_show(struct kobject *kobj,
+@@ -220,7 +219,7 @@ static ssize_t rpc_sysfs_xprt_switch_info_show(struct kobject *kobj,
+ 		      xprt_switch->xps_nunique_destaddr_xprts,
+ 		      atomic_long_read(&xprt_switch->xps_queuelen));
+ 	xprt_switch_put(xprt_switch);
+-	return ret + 1;
++	return ret;
+ }
+ 
+ static ssize_t rpc_sysfs_xprt_dstaddr_store(struct kobject *kobj,
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index a02de2bddb28b..396a74974f60f 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1503,6 +1503,9 @@ bool xprt_prepare_transmit(struct rpc_task *task)
+ 		return false;
+ 
+ 	}
++	if (atomic_read(&xprt->swapper))
++		/* This will be clear in __rpc_execute */
++		current->flags |= PF_MEMALLOC;
+ 	return true;
+ }
+ 
+@@ -2112,7 +2115,14 @@ static void xprt_destroy(struct rpc_xprt *xprt)
+ 	 */
+ 	wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
+ 
++	/*
++	 * xprt_schedule_autodisconnect() can run after XPRT_LOCKED
++	 * is cleared.  We use ->transport_lock to ensure the mod_timer()
++	 * can only run *before* del_timer_sync(), never after.
++	 */
++	spin_lock(&xprt->transport_lock);
+ 	del_timer_sync(&xprt->timer);
++	spin_unlock(&xprt->transport_lock);
+ 
+ 	/*
+ 	 * Destroy sockets etc from the system workqueue so they can
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 16e5696314a4f..6268af7e03101 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -239,8 +239,11 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ 	struct rpcrdma_xprt *r_xprt = container_of(work, struct rpcrdma_xprt,
+ 						   rx_connect_worker.work);
+ 	struct rpc_xprt *xprt = &r_xprt->rx_xprt;
++	unsigned int pflags = current->flags;
+ 	int rc;
+ 
++	if (atomic_read(&xprt->swapper))
++		current->flags |= PF_MEMALLOC;
+ 	rc = rpcrdma_xprt_connect(r_xprt);
+ 	xprt_clear_connecting(xprt);
+ 	if (!rc) {
+@@ -254,6 +257,7 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ 		rpcrdma_xprt_disconnect(r_xprt);
+ 	xprt_unlock_connect(xprt, r_xprt);
+ 	xprt_wake_pending_tasks(xprt, rc);
++	current_restore_flags(pflags, PF_MEMALLOC);
+ }
+ 
+ /**
+@@ -574,8 +578,8 @@ xprt_rdma_allocate(struct rpc_task *task)
+ 	gfp_t flags;
+ 
+ 	flags = RPCRDMA_DEF_GFP;
+-	if (RPC_IS_SWAPPER(task))
+-		flags = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
++	if (RPC_IS_ASYNC(task))
++		flags = GFP_NOWAIT | __GFP_NOWARN;
+ 
+ 	if (!rpcrdma_check_regbuf(r_xprt, req->rl_sendbuf, rqst->rq_callsize,
+ 				  flags))
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 03770e56df361..37d961c1a5c9b 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1638,7 +1638,7 @@ static int xs_get_srcport(struct sock_xprt *transport)
+ 	return port;
+ }
+ 
+-unsigned short get_srcport(struct rpc_xprt *xprt)
++static unsigned short xs_sock_srcport(struct rpc_xprt *xprt)
+ {
+ 	struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
+ 	unsigned short ret = 0;
+@@ -1648,7 +1648,25 @@ unsigned short get_srcport(struct rpc_xprt *xprt)
+ 	mutex_unlock(&sock->recv_mutex);
+ 	return ret;
+ }
+-EXPORT_SYMBOL(get_srcport);
++
++static int xs_sock_srcaddr(struct rpc_xprt *xprt, char *buf, size_t buflen)
++{
++	struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
++	union {
++		struct sockaddr sa;
++		struct sockaddr_storage st;
++	} saddr;
++	int ret = -ENOTCONN;
++
++	mutex_lock(&sock->recv_mutex);
++	if (sock->sock) {
++		ret = kernel_getsockname(sock->sock, &saddr.sa);
++		if (ret >= 0)
++			ret = snprintf(buf, buflen, "%pISc", &saddr.sa);
++	}
++	mutex_unlock(&sock->recv_mutex);
++	return ret;
++}
+ 
+ static unsigned short xs_next_srcport(struct sock_xprt *transport, unsigned short port)
+ {
+@@ -2052,7 +2070,10 @@ static void xs_udp_setup_socket(struct work_struct *work)
+ 	struct rpc_xprt *xprt = &transport->xprt;
+ 	struct socket *sock;
+ 	int status = -EIO;
++	unsigned int pflags = current->flags;
+ 
++	if (atomic_read(&xprt->swapper))
++		current->flags |= PF_MEMALLOC;
+ 	sock = xs_create_sock(xprt, transport,
+ 			xs_addr(xprt)->sa_family, SOCK_DGRAM,
+ 			IPPROTO_UDP, false);
+@@ -2072,6 +2093,7 @@ out:
+ 	xprt_clear_connecting(xprt);
+ 	xprt_unlock_connect(xprt, transport);
+ 	xprt_wake_pending_tasks(xprt, status);
++	current_restore_flags(pflags, PF_MEMALLOC);
+ }
+ 
+ /**
+@@ -2231,11 +2253,19 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ 	struct socket *sock = transport->sock;
+ 	struct rpc_xprt *xprt = &transport->xprt;
+ 	int status;
++	unsigned int pflags = current->flags;
++
++	if (atomic_read(&xprt->swapper))
++		current->flags |= PF_MEMALLOC;
+ 
+-	if (!sock) {
+-		sock = xs_create_sock(xprt, transport,
+-				xs_addr(xprt)->sa_family, SOCK_STREAM,
+-				IPPROTO_TCP, true);
++	if (xprt_connected(xprt))
++		goto out;
++	if (test_and_clear_bit(XPRT_SOCK_CONNECT_SENT,
++			       &transport->sock_state) ||
++	    !sock) {
++		xs_reset_transport(transport);
++		sock = xs_create_sock(xprt, transport, xs_addr(xprt)->sa_family,
++				      SOCK_STREAM, IPPROTO_TCP, true);
+ 		if (IS_ERR(sock)) {
+ 			xprt_wake_pending_tasks(xprt, PTR_ERR(sock));
+ 			goto out;
+@@ -2259,6 +2289,7 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ 		fallthrough;
+ 	case -EINPROGRESS:
+ 		/* SYN_SENT! */
++		set_bit(XPRT_SOCK_CONNECT_SENT, &transport->sock_state);
+ 		if (xprt->reestablish_timeout < XS_TCP_INIT_REEST_TO)
+ 			xprt->reestablish_timeout = XS_TCP_INIT_REEST_TO;
+ 		fallthrough;
+@@ -2296,6 +2327,7 @@ out:
+ 	xprt_clear_connecting(xprt);
+ out_unlock:
+ 	xprt_unlock_connect(xprt, transport);
++	current_restore_flags(pflags, PF_MEMALLOC);
+ }
+ 
+ /**
+@@ -2319,13 +2351,9 @@ static void xs_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+ 
+ 	WARN_ON_ONCE(!xprt_lock_connect(xprt, task, transport));
+ 
+-	if (transport->sock != NULL && !xprt_connecting(xprt)) {
++	if (transport->sock != NULL) {
+ 		dprintk("RPC:       xs_connect delayed xprt %p for %lu "
+-				"seconds\n",
+-				xprt, xprt->reestablish_timeout / HZ);
+-
+-		/* Start by resetting any existing state */
+-		xs_reset_transport(transport);
++			"seconds\n", xprt, xprt->reestablish_timeout / HZ);
+ 
+ 		delay = xprt_reconnect_delay(xprt);
+ 		xprt_reconnect_backoff(xprt, XS_TCP_INIT_REEST_TO);
+@@ -2621,6 +2649,8 @@ static const struct rpc_xprt_ops xs_udp_ops = {
+ 	.rpcbind		= rpcb_getport_async,
+ 	.set_port		= xs_set_port,
+ 	.connect		= xs_connect,
++	.get_srcaddr		= xs_sock_srcaddr,
++	.get_srcport		= xs_sock_srcport,
+ 	.buf_alloc		= rpc_malloc,
+ 	.buf_free		= rpc_free,
+ 	.send_request		= xs_udp_send_request,
+@@ -2643,6 +2673,8 @@ static const struct rpc_xprt_ops xs_tcp_ops = {
+ 	.rpcbind		= rpcb_getport_async,
+ 	.set_port		= xs_set_port,
+ 	.connect		= xs_connect,
++	.get_srcaddr		= xs_sock_srcaddr,
++	.get_srcport		= xs_sock_srcport,
+ 	.buf_alloc		= rpc_malloc,
+ 	.buf_free		= rpc_free,
+ 	.prepare_request	= xs_stream_prepare_request,
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 7545321c3440b..17f8c523e33b0 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2852,7 +2852,8 @@ static void tipc_sk_retry_connect(struct sock *sk, struct sk_buff_head *list)
+ 
+ 	/* Try again later if dest link is congested */
+ 	if (tsk->cong_link_cnt) {
+-		sk_reset_timer(sk, &sk->sk_timer, msecs_to_jiffies(100));
++		sk_reset_timer(sk, &sk->sk_timer,
++			       jiffies + msecs_to_jiffies(100));
+ 		return;
+ 	}
+ 	/* Prepare SYN for retransmit */
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index b0bfc78e421ce..62f47821d783f 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1996,7 +1996,7 @@ static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other
+ 	if (ousk->oob_skb)
+ 		consume_skb(ousk->oob_skb);
+ 
+-	ousk->oob_skb = skb;
++	WRITE_ONCE(ousk->oob_skb, skb);
+ 
+ 	scm_stat_add(other, skb);
+ 	skb_queue_tail(&other->sk_receive_queue, skb);
+@@ -2514,9 +2514,8 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+ 
+ 	oob_skb = u->oob_skb;
+ 
+-	if (!(state->flags & MSG_PEEK)) {
+-		u->oob_skb = NULL;
+-	}
++	if (!(state->flags & MSG_PEEK))
++		WRITE_ONCE(u->oob_skb, NULL);
+ 
+ 	unix_state_unlock(sk);
+ 
+@@ -2551,7 +2550,7 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
+ 				skb = NULL;
+ 			} else if (sock_flag(sk, SOCK_URGINLINE)) {
+ 				if (!(flags & MSG_PEEK)) {
+-					u->oob_skb = NULL;
++					WRITE_ONCE(u->oob_skb, NULL);
+ 					consume_skb(skb);
+ 				}
+ 			} else if (!(flags & MSG_PEEK)) {
+@@ -3006,11 +3005,10 @@ static int unix_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 	case SIOCATMARK:
+ 		{
+ 			struct sk_buff *skb;
+-			struct unix_sock *u = unix_sk(sk);
+ 			int answ = 0;
+ 
+ 			skb = skb_peek(&sk->sk_receive_queue);
+-			if (skb && skb == u->oob_skb)
++			if (skb && skb == READ_ONCE(unix_sk(sk)->oob_skb))
+ 				answ = 1;
+ 			err = put_user(answ, (int __user *)arg);
+ 		}
+@@ -3051,6 +3049,10 @@ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wa
+ 		mask |= EPOLLIN | EPOLLRDNORM;
+ 	if (sk_is_readable(sk))
+ 		mask |= EPOLLIN | EPOLLRDNORM;
++#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
++	if (READ_ONCE(unix_sk(sk)->oob_skb))
++		mask |= EPOLLPRI;
++#endif
+ 
+ 	/* Connection-based need to check for termination and startup */
+ 	if ((sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) &&
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index dad9ca65f4f9d..c5f936fbf876d 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -622,6 +622,13 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
+ 	INIT_WORK(&vsock->event_work, virtio_transport_event_work);
+ 	INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
+ 
++	if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_SEQPACKET))
++		vsock->seqpacket_allow = true;
++
++	vdev->priv = vsock;
++
++	virtio_device_ready(vdev);
++
+ 	mutex_lock(&vsock->tx_lock);
+ 	vsock->tx_run = true;
+ 	mutex_unlock(&vsock->tx_lock);
+@@ -636,10 +643,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
+ 	vsock->event_run = true;
+ 	mutex_unlock(&vsock->event_lock);
+ 
+-	if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_SEQPACKET))
+-		vsock->seqpacket_allow = true;
+-
+-	vdev->priv = vsock;
+ 	rcu_assign_pointer(the_virtio_vsock, vsock);
+ 
+ 	mutex_unlock(&the_virtio_vsock_mutex);
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index 3583354a7d7fe..3a171828638b1 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -1765,10 +1765,15 @@ void x25_kill_by_neigh(struct x25_neigh *nb)
+ 
+ 	write_lock_bh(&x25_list_lock);
+ 
+-	sk_for_each(s, &x25_list)
+-		if (x25_sk(s)->neighbour == nb)
++	sk_for_each(s, &x25_list) {
++		if (x25_sk(s)->neighbour == nb) {
++			write_unlock_bh(&x25_list_lock);
++			lock_sock(s);
+ 			x25_disconnect(s, ENETUNREACH, 0, 0);
+-
++			release_sock(s);
++			write_lock_bh(&x25_list_lock);
++		}
++	}
+ 	write_unlock_bh(&x25_list_lock);
+ 
+ 	/* Remove any related forwards */
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index f16074eb53c72..468e9b7e3edfb 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -403,18 +403,8 @@ EXPORT_SYMBOL(xsk_tx_peek_release_desc_batch);
+ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
+ {
+ 	struct net_device *dev = xs->dev;
+-	int err;
+-
+-	rcu_read_lock();
+-	err = dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
+-	rcu_read_unlock();
+-
+-	return err;
+-}
+ 
+-static int xsk_zc_xmit(struct xdp_sock *xs)
+-{
+-	return xsk_wakeup(xs, XDP_WAKEUP_TX);
++	return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
+ }
+ 
+ static void xsk_destruct_skb(struct sk_buff *skb)
+@@ -533,6 +523,12 @@ static int xsk_generic_xmit(struct sock *sk)
+ 
+ 	mutex_lock(&xs->mutex);
+ 
++	/* Since we dropped the RCU read lock, the socket state might have changed. */
++	if (unlikely(!xsk_is_bound(xs))) {
++		err = -ENXIO;
++		goto out;
++	}
++
+ 	if (xs->queue_id >= xs->dev->real_num_tx_queues)
+ 		goto out;
+ 
+@@ -596,16 +592,26 @@ out:
+ 	return err;
+ }
+ 
+-static int __xsk_sendmsg(struct sock *sk)
++static int xsk_xmit(struct sock *sk)
+ {
+ 	struct xdp_sock *xs = xdp_sk(sk);
++	int ret;
+ 
+ 	if (unlikely(!(xs->dev->flags & IFF_UP)))
+ 		return -ENETDOWN;
+ 	if (unlikely(!xs->tx))
+ 		return -ENOBUFS;
+ 
+-	return xs->zc ? xsk_zc_xmit(xs) : xsk_generic_xmit(sk);
++	if (xs->zc)
++		return xsk_wakeup(xs, XDP_WAKEUP_TX);
++
++	/* Drop the RCU lock since the SKB path might sleep. */
++	rcu_read_unlock();
++	ret = xsk_generic_xmit(sk);
++	/* Reacquire RCU lock before going into common code. */
++	rcu_read_lock();
++
++	return ret;
+ }
+ 
+ static bool xsk_no_wakeup(struct sock *sk)
+@@ -619,7 +625,7 @@ static bool xsk_no_wakeup(struct sock *sk)
+ #endif
+ }
+ 
+-static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
++static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+ {
+ 	bool need_wait = !(m->msg_flags & MSG_DONTWAIT);
+ 	struct sock *sk = sock->sk;
+@@ -639,11 +645,22 @@ static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+ 
+ 	pool = xs->pool;
+ 	if (pool->cached_need_wakeup & XDP_WAKEUP_TX)
+-		return __xsk_sendmsg(sk);
++		return xsk_xmit(sk);
+ 	return 0;
+ }
+ 
+-static int xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int flags)
++static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
++{
++	int ret;
++
++	rcu_read_lock();
++	ret = __xsk_sendmsg(sock, m, total_len);
++	rcu_read_unlock();
++
++	return ret;
++}
++
++static int __xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int flags)
+ {
+ 	bool need_wait = !(flags & MSG_DONTWAIT);
+ 	struct sock *sk = sock->sk;
+@@ -669,6 +686,17 @@ static int xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int fl
+ 	return 0;
+ }
+ 
++static int xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int flags)
++{
++	int ret;
++
++	rcu_read_lock();
++	ret = __xsk_recvmsg(sock, m, len, flags);
++	rcu_read_unlock();
++
++	return ret;
++}
++
+ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ 			     struct poll_table_struct *wait)
+ {
+@@ -679,8 +707,11 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ 
+ 	sock_poll_wait(file, sock, wait);
+ 
+-	if (unlikely(!xsk_is_bound(xs)))
++	rcu_read_lock();
++	if (unlikely(!xsk_is_bound(xs))) {
++		rcu_read_unlock();
+ 		return mask;
++	}
+ 
+ 	pool = xs->pool;
+ 
+@@ -689,7 +720,7 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ 			xsk_wakeup(xs, pool->cached_need_wakeup);
+ 		else
+ 			/* Poll needs to drive Tx also in copy mode */
+-			__xsk_sendmsg(sk);
++			xsk_xmit(sk);
+ 	}
+ 
+ 	if (xs->rx && !xskq_prod_is_empty(xs->rx))
+@@ -697,6 +728,7 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ 	if (xs->tx && xsk_tx_writeable(xs))
+ 		mask |= EPOLLOUT | EPOLLWRNORM;
+ 
++	rcu_read_unlock();
+ 	return mask;
+ }
+ 
+@@ -728,7 +760,6 @@ static void xsk_unbind_dev(struct xdp_sock *xs)
+ 
+ 	/* Wait for driver to stop using the xdp socket. */
+ 	xp_del_xsk(xs->pool, xs);
+-	xs->dev = NULL;
+ 	synchronize_net();
+ 	dev_put(dev);
+ }
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index fd39bb660ebcd..0202a90b65e3a 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -584,9 +584,13 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+ 	u32 nb_entries1 = 0, nb_entries2;
+ 
+ 	if (unlikely(pool->dma_need_sync)) {
++		struct xdp_buff *buff;
++
+ 		/* Slow path */
+-		*xdp = xp_alloc(pool);
+-		return !!*xdp;
++		buff = xp_alloc(pool);
++		if (buff)
++			*xdp = buff;
++		return !!buff;
+ 	}
+ 
+ 	if (unlikely(pool->free_list_cnt)) {
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index 4e3c62d1ad9e9..1e8b26eecb3f8 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -304,7 +304,10 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 			if (mtu < IPV6_MIN_MTU)
+ 				mtu = IPV6_MIN_MTU;
+ 
+-			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			if (skb->len > 1280)
++				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++			else
++				goto xmit;
+ 		} else {
+ 			if (!(ip_hdr(skb)->frag_off & htons(IP_DF)))
+ 				goto xmit;
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index 49d7a6ad7e397..1fb79b3ecdd51 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -1673,14 +1673,15 @@ int main(int argc, char **argv)
+ 
+ 	setlocale(LC_ALL, "");
+ 
++	prev_time = get_nsecs();
++	start_time = prev_time;
++
+ 	if (!opt_quiet) {
+ 		ret = pthread_create(&pt, NULL, poller, NULL);
+ 		if (ret)
+ 			exit_with_error(ret);
+ 	}
+ 
+-	prev_time = get_nsecs();
+-	start_time = prev_time;
+ 
+ 	if (opt_bench == BENCH_RXDROP)
+ 		rx_drop_all();
+diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
+index 7a15910d21718..8859fc1935428 100644
+--- a/samples/landlock/sandboxer.c
++++ b/samples/landlock/sandboxer.c
+@@ -134,6 +134,7 @@ static int populate_ruleset(
+ 	ret = 0;
+ 
+ out_free_name:
++	free(path_list);
+ 	free(env_path_name);
+ 	return ret;
+ }
+diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
+index 803ba75610766..a0ea1d26e6b2e 100755
+--- a/scripts/atomic/fallbacks/read_acquire
++++ b/scripts/atomic/fallbacks/read_acquire
+@@ -2,6 +2,15 @@ cat <<EOF
+ static __always_inline ${ret}
+ arch_${atomic}_read_acquire(const ${atomic}_t *v)
+ {
+-	return smp_load_acquire(&(v)->counter);
++	${int} ret;
++
++	if (__native_word(${atomic}_t)) {
++		ret = smp_load_acquire(&(v)->counter);
++	} else {
++		ret = arch_${atomic}_read(v);
++		__atomic_acquire_fence();
++	}
++
++	return ret;
+ }
+ EOF
+diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
+index 86ede759f24ea..05cdb7f42477a 100755
+--- a/scripts/atomic/fallbacks/set_release
++++ b/scripts/atomic/fallbacks/set_release
+@@ -2,6 +2,11 @@ cat <<EOF
+ static __always_inline void
+ arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
+ {
+-	smp_store_release(&(v)->counter, i);
++	if (__native_word(${atomic}_t)) {
++		smp_store_release(&(v)->counter, i);
++	} else {
++		__atomic_release_fence();
++		arch_${atomic}_set(v, i);
++	}
+ }
+ EOF
+diff --git a/scripts/dtc/Makefile b/scripts/dtc/Makefile
+index 95aaf7431bffa..1cba78e1dce68 100644
+--- a/scripts/dtc/Makefile
++++ b/scripts/dtc/Makefile
+@@ -29,7 +29,7 @@ dtc-objs	+= yamltree.o
+ # To include <yaml.h> installed in a non-default path
+ HOSTCFLAGS_yamltree.o := $(shell pkg-config --cflags yaml-0.1)
+ # To link libyaml installed in a non-default path
+-HOSTLDLIBS_dtc	:= $(shell pkg-config yaml-0.1 --libs)
++HOSTLDLIBS_dtc	:= $(shell pkg-config --libs yaml-0.1)
+ endif
+ 
+ # Generated files need one more search path to include headers in source tree
+diff --git a/scripts/gcc-plugins/stackleak_plugin.c b/scripts/gcc-plugins/stackleak_plugin.c
+index e9db7dcb3e5f4..b04aa8e91a41f 100644
+--- a/scripts/gcc-plugins/stackleak_plugin.c
++++ b/scripts/gcc-plugins/stackleak_plugin.c
+@@ -429,6 +429,23 @@ static unsigned int stackleak_cleanup_execute(void)
+ 	return 0;
+ }
+ 
++/*
++ * STRING_CST may or may not be NUL terminated:
++ * https://gcc.gnu.org/onlinedocs/gccint/Constant-expressions.html
++ */
++static inline bool string_equal(tree node, const char *string, int length)
++{
++	if (TREE_STRING_LENGTH(node) < length)
++		return false;
++	if (TREE_STRING_LENGTH(node) > length + 1)
++		return false;
++	if (TREE_STRING_LENGTH(node) == length + 1 &&
++	    TREE_STRING_POINTER(node)[length] != '\0')
++		return false;
++	return !memcmp(TREE_STRING_POINTER(node), string, length);
++}
++#define STRING_EQUAL(node, str)	string_equal(node, str, strlen(str))
++
+ static bool stackleak_gate(void)
+ {
+ 	tree section;
+@@ -438,13 +455,13 @@ static bool stackleak_gate(void)
+ 	if (section && TREE_VALUE(section)) {
+ 		section = TREE_VALUE(TREE_VALUE(section));
+ 
+-		if (!strncmp(TREE_STRING_POINTER(section), ".init.text", 10))
++		if (STRING_EQUAL(section, ".init.text"))
+ 			return false;
+-		if (!strncmp(TREE_STRING_POINTER(section), ".devinit.text", 13))
++		if (STRING_EQUAL(section, ".devinit.text"))
+ 			return false;
+-		if (!strncmp(TREE_STRING_POINTER(section), ".cpuinit.text", 13))
++		if (STRING_EQUAL(section, ".cpuinit.text"))
+ 			return false;
+-		if (!strncmp(TREE_STRING_POINTER(section), ".meminit.text", 13))
++		if (STRING_EQUAL(section, ".meminit.text"))
+ 			return false;
+ 	}
+ 
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index cb8ab7d91d307..ca491aa2b3762 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -669,7 +669,7 @@ static void handle_modversion(const struct module *mod,
+ 	unsigned int crc;
+ 
+ 	if (sym->st_shndx == SHN_UNDEF) {
+-		warn("EXPORT symbol \"%s\" [%s%s] version ...\n"
++		warn("EXPORT symbol \"%s\" [%s%s] version generation failed, symbol will not be versioned.\n"
+ 		     "Is \"%s\" prototyped in <asm/asm-prototypes.h>?\n",
+ 		     symname, mod->name, mod->is_vmlinux ? "" : ".ko",
+ 		     symname);
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 08f907382c618..7d87772f0ce68 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -86,7 +86,7 @@ static int __init evm_set_fixmode(char *str)
+ 	else
+ 		pr_err("invalid \"%s\" mode", str);
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("evm=", evm_set_fixmode);
+ 
+diff --git a/security/keys/keyctl_pkey.c b/security/keys/keyctl_pkey.c
+index 5de0d599a2748..97bc27bbf0797 100644
+--- a/security/keys/keyctl_pkey.c
++++ b/security/keys/keyctl_pkey.c
+@@ -135,15 +135,23 @@ static int keyctl_pkey_params_get_2(const struct keyctl_pkey_params __user *_par
+ 
+ 	switch (op) {
+ 	case KEYCTL_PKEY_ENCRYPT:
++		if (uparams.in_len  > info.max_dec_size ||
++		    uparams.out_len > info.max_enc_size)
++			return -EINVAL;
++		break;
+ 	case KEYCTL_PKEY_DECRYPT:
+ 		if (uparams.in_len  > info.max_enc_size ||
+ 		    uparams.out_len > info.max_dec_size)
+ 			return -EINVAL;
+ 		break;
+ 	case KEYCTL_PKEY_SIGN:
++		if (uparams.in_len  > info.max_data_size ||
++		    uparams.out_len > info.max_sig_size)
++			return -EINVAL;
++		break;
+ 	case KEYCTL_PKEY_VERIFY:
+-		if (uparams.in_len  > info.max_sig_size ||
+-		    uparams.out_len > info.max_data_size)
++		if (uparams.in_len  > info.max_data_size ||
++		    uparams.in2_len > info.max_sig_size)
+ 			return -EINVAL;
+ 		break;
+ 	default:
+@@ -151,7 +159,7 @@ static int keyctl_pkey_params_get_2(const struct keyctl_pkey_params __user *_par
+ 	}
+ 
+ 	params->in_len  = uparams.in_len;
+-	params->out_len = uparams.out_len;
++	params->out_len = uparams.out_len; /* Note: same as in2_len */
+ 	return 0;
+ }
+ 
+diff --git a/security/keys/trusted-keys/trusted_core.c b/security/keys/trusted-keys/trusted_core.c
+index d5c891d8d3534..9b9d3ef79cbe3 100644
+--- a/security/keys/trusted-keys/trusted_core.c
++++ b/security/keys/trusted-keys/trusted_core.c
+@@ -27,10 +27,10 @@ module_param_named(source, trusted_key_source, charp, 0);
+ MODULE_PARM_DESC(source, "Select trusted keys source (tpm or tee)");
+ 
+ static const struct trusted_key_source trusted_key_sources[] = {
+-#if defined(CONFIG_TCG_TPM)
++#if IS_REACHABLE(CONFIG_TCG_TPM)
+ 	{ "tpm", &trusted_key_tpm_ops },
+ #endif
+-#if defined(CONFIG_TEE)
++#if IS_REACHABLE(CONFIG_TEE)
+ 	{ "tee", &trusted_key_tee_ops },
+ #endif
+ };
+@@ -351,7 +351,7 @@ static int __init init_trusted(void)
+ 
+ static void __exit cleanup_trusted(void)
+ {
+-	static_call(trusted_key_exit)();
++	static_call_cond(trusted_key_exit)();
+ }
+ 
+ late_initcall(init_trusted);
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index 32396962f04d6..7e27ce394020d 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -192,7 +192,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ 		return PTR_ERR(ruleset);
+ 
+ 	/* Creates anonymous FD referring to the ruleset. */
+-	ruleset_fd = anon_inode_getfd("landlock-ruleset", &ruleset_fops,
++	ruleset_fd = anon_inode_getfd("[landlock-ruleset]", &ruleset_fops,
+ 			ruleset, O_RDWR | O_CLOEXEC);
+ 	if (ruleset_fd < 0)
+ 		landlock_put_ruleset(ruleset);
+diff --git a/security/security.c b/security/security.c
+index 64abdfb20bc2c..745a8bf66d91d 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -884,9 +884,22 @@ int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc)
+ 	return call_int_hook(fs_context_dup, 0, fc, src_fc);
+ }
+ 
+-int security_fs_context_parse_param(struct fs_context *fc, struct fs_parameter *param)
++int security_fs_context_parse_param(struct fs_context *fc,
++				    struct fs_parameter *param)
+ {
+-	return call_int_hook(fs_context_parse_param, -ENOPARAM, fc, param);
++	struct security_hook_list *hp;
++	int trc;
++	int rc = -ENOPARAM;
++
++	hlist_for_each_entry(hp, &security_hook_heads.fs_context_parse_param,
++			     list) {
++		trc = hp->hook.fs_context_parse_param(fc, param);
++		if (trc == 0)
++			rc = 0;
++		else if (trc != -ENOPARAM)
++			return trc;
++	}
++	return rc;
+ }
+ 
+ int security_sb_alloc(struct super_block *sb)
+@@ -2399,6 +2412,13 @@ void security_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
+ }
+ EXPORT_SYMBOL(security_sctp_sk_clone);
+ 
++int security_sctp_assoc_established(struct sctp_association *asoc,
++				    struct sk_buff *skb)
++{
++	return call_int_hook(sctp_assoc_established, 0, asoc, skb);
++}
++EXPORT_SYMBOL(security_sctp_assoc_established);
++
+ #endif	/* CONFIG_SECURITY_NETWORK */
+ 
+ #ifdef CONFIG_SECURITY_INFINIBAND
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 49b4f59db35e7..94ef617de9d0e 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -355,6 +355,10 @@ static void inode_free_security(struct inode *inode)
+ 
+ struct selinux_mnt_opts {
+ 	const char *fscontext, *context, *rootcontext, *defcontext;
++	u32 fscontext_sid;
++	u32 context_sid;
++	u32 rootcontext_sid;
++	u32 defcontext_sid;
+ };
+ 
+ static void selinux_free_mnt_opts(void *mnt_opts)
+@@ -492,7 +496,7 @@ static int selinux_is_sblabel_mnt(struct super_block *sb)
+ 
+ static int sb_check_xattr_support(struct super_block *sb)
+ {
+-	struct superblock_security_struct *sbsec = sb->s_security;
++	struct superblock_security_struct *sbsec = selinux_superblock(sb);
+ 	struct dentry *root = sb->s_root;
+ 	struct inode *root_inode = d_backing_inode(root);
+ 	u32 sid;
+@@ -611,15 +615,14 @@ static int bad_option(struct superblock_security_struct *sbsec, char flag,
+ 	return 0;
+ }
+ 
+-static int parse_sid(struct super_block *sb, const char *s, u32 *sid,
+-		     gfp_t gfp)
++static int parse_sid(struct super_block *sb, const char *s, u32 *sid)
+ {
+ 	int rc = security_context_str_to_sid(&selinux_state, s,
+-					     sid, gfp);
++					     sid, GFP_KERNEL);
+ 	if (rc)
+ 		pr_warn("SELinux: security_context_str_to_sid"
+ 		       "(%s) failed for (dev %s, type %s) errno=%d\n",
+-		       s, sb->s_id, sb->s_type->name, rc);
++		       s, sb ? sb->s_id : "?", sb ? sb->s_type->name : "?", rc);
+ 	return rc;
+ }
+ 
+@@ -686,8 +689,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ 	 */
+ 	if (opts) {
+ 		if (opts->fscontext) {
+-			rc = parse_sid(sb, opts->fscontext, &fscontext_sid,
+-					GFP_KERNEL);
++			rc = parse_sid(sb, opts->fscontext, &fscontext_sid);
+ 			if (rc)
+ 				goto out;
+ 			if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid,
+@@ -696,8 +698,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ 			sbsec->flags |= FSCONTEXT_MNT;
+ 		}
+ 		if (opts->context) {
+-			rc = parse_sid(sb, opts->context, &context_sid,
+-					GFP_KERNEL);
++			rc = parse_sid(sb, opts->context, &context_sid);
+ 			if (rc)
+ 				goto out;
+ 			if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid,
+@@ -706,8 +707,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ 			sbsec->flags |= CONTEXT_MNT;
+ 		}
+ 		if (opts->rootcontext) {
+-			rc = parse_sid(sb, opts->rootcontext, &rootcontext_sid,
+-					GFP_KERNEL);
++			rc = parse_sid(sb, opts->rootcontext, &rootcontext_sid);
+ 			if (rc)
+ 				goto out;
+ 			if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid,
+@@ -716,8 +716,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ 			sbsec->flags |= ROOTCONTEXT_MNT;
+ 		}
+ 		if (opts->defcontext) {
+-			rc = parse_sid(sb, opts->defcontext, &defcontext_sid,
+-					GFP_KERNEL);
++			rc = parse_sid(sb, opts->defcontext, &defcontext_sid);
+ 			if (rc)
+ 				goto out;
+ 			if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid,
+@@ -1009,21 +1008,29 @@ static int selinux_add_opt(int token, const char *s, void **mnt_opts)
+ 		if (opts->context || opts->defcontext)
+ 			goto Einval;
+ 		opts->context = s;
++		if (selinux_initialized(&selinux_state))
++			parse_sid(NULL, s, &opts->context_sid);
+ 		break;
+ 	case Opt_fscontext:
+ 		if (opts->fscontext)
+ 			goto Einval;
+ 		opts->fscontext = s;
++		if (selinux_initialized(&selinux_state))
++			parse_sid(NULL, s, &opts->fscontext_sid);
+ 		break;
+ 	case Opt_rootcontext:
+ 		if (opts->rootcontext)
+ 			goto Einval;
+ 		opts->rootcontext = s;
++		if (selinux_initialized(&selinux_state))
++			parse_sid(NULL, s, &opts->rootcontext_sid);
+ 		break;
+ 	case Opt_defcontext:
+ 		if (opts->context || opts->defcontext)
+ 			goto Einval;
+ 		opts->defcontext = s;
++		if (selinux_initialized(&selinux_state))
++			parse_sid(NULL, s, &opts->defcontext_sid);
+ 		break;
+ 	}
+ 	return 0;
+@@ -2696,9 +2703,7 @@ free_opt:
+ static int selinux_sb_mnt_opts_compat(struct super_block *sb, void *mnt_opts)
+ {
+ 	struct selinux_mnt_opts *opts = mnt_opts;
+-	struct superblock_security_struct *sbsec = sb->s_security;
+-	u32 sid;
+-	int rc;
++	struct superblock_security_struct *sbsec = selinux_superblock(sb);
+ 
+ 	/*
+ 	 * Superblock not initialized (i.e. no options) - reject if any
+@@ -2715,34 +2720,36 @@ static int selinux_sb_mnt_opts_compat(struct super_block *sb, void *mnt_opts)
+ 		return (sbsec->flags & SE_MNTMASK) ? 1 : 0;
+ 
+ 	if (opts->fscontext) {
+-		rc = parse_sid(sb, opts->fscontext, &sid, GFP_NOWAIT);
+-		if (rc)
++		if (opts->fscontext_sid == SECSID_NULL)
+ 			return 1;
+-		if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid, sid))
++		else if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid,
++				       opts->fscontext_sid))
+ 			return 1;
+ 	}
+ 	if (opts->context) {
+-		rc = parse_sid(sb, opts->context, &sid, GFP_NOWAIT);
+-		if (rc)
++		if (opts->context_sid == SECSID_NULL)
+ 			return 1;
+-		if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid, sid))
++		else if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid,
++				       opts->context_sid))
+ 			return 1;
+ 	}
+ 	if (opts->rootcontext) {
+-		struct inode_security_struct *root_isec;
+-
+-		root_isec = backing_inode_security(sb->s_root);
+-		rc = parse_sid(sb, opts->rootcontext, &sid, GFP_NOWAIT);
+-		if (rc)
+-			return 1;
+-		if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid, sid))
++		if (opts->rootcontext_sid == SECSID_NULL)
+ 			return 1;
++		else {
++			struct inode_security_struct *root_isec;
++
++			root_isec = backing_inode_security(sb->s_root);
++			if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid,
++				       opts->rootcontext_sid))
++				return 1;
++		}
+ 	}
+ 	if (opts->defcontext) {
+-		rc = parse_sid(sb, opts->defcontext, &sid, GFP_NOWAIT);
+-		if (rc)
++		if (opts->defcontext_sid == SECSID_NULL)
+ 			return 1;
+-		if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid, sid))
++		else if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid,
++				       opts->defcontext_sid))
+ 			return 1;
+ 	}
+ 	return 0;
+@@ -2762,14 +2769,14 @@ static int selinux_sb_remount(struct super_block *sb, void *mnt_opts)
+ 		return 0;
+ 
+ 	if (opts->fscontext) {
+-		rc = parse_sid(sb, opts->fscontext, &sid, GFP_KERNEL);
++		rc = parse_sid(sb, opts->fscontext, &sid);
+ 		if (rc)
+ 			return rc;
+ 		if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid, sid))
+ 			goto out_bad_option;
+ 	}
+ 	if (opts->context) {
+-		rc = parse_sid(sb, opts->context, &sid, GFP_KERNEL);
++		rc = parse_sid(sb, opts->context, &sid);
+ 		if (rc)
+ 			return rc;
+ 		if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid, sid))
+@@ -2778,14 +2785,14 @@ static int selinux_sb_remount(struct super_block *sb, void *mnt_opts)
+ 	if (opts->rootcontext) {
+ 		struct inode_security_struct *root_isec;
+ 		root_isec = backing_inode_security(sb->s_root);
+-		rc = parse_sid(sb, opts->rootcontext, &sid, GFP_KERNEL);
++		rc = parse_sid(sb, opts->rootcontext, &sid);
+ 		if (rc)
+ 			return rc;
+ 		if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid, sid))
+ 			goto out_bad_option;
+ 	}
+ 	if (opts->defcontext) {
+-		rc = parse_sid(sb, opts->defcontext, &sid, GFP_KERNEL);
++		rc = parse_sid(sb, opts->defcontext, &sid);
+ 		if (rc)
+ 			return rc;
+ 		if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid, sid))
+@@ -2909,10 +2916,9 @@ static int selinux_fs_context_parse_param(struct fs_context *fc,
+ 		return opt;
+ 
+ 	rc = selinux_add_opt(opt, param->string, &fc->security);
+-	if (!rc) {
++	if (!rc)
+ 		param->string = NULL;
+-		rc = 1;
+-	}
++
+ 	return rc;
+ }
+ 
+@@ -3794,6 +3800,12 @@ static int selinux_file_ioctl(struct file *file, unsigned int cmd,
+ 					    CAP_OPT_NONE, true);
+ 		break;
+ 
++	case FIOCLEX:
++	case FIONCLEX:
++		if (!selinux_policycap_ioctl_skip_cloexec())
++			error = ioctl_has_perm(cred, file, FILE__IOCTL, (u16) cmd);
++		break;
++
+ 	/* default case assumes that the command will go
+ 	 * to the file's ioctl() function.
+ 	 */
+@@ -5348,37 +5360,38 @@ static void selinux_sock_graft(struct sock *sk, struct socket *parent)
+ 	sksec->sclass = isec->sclass;
+ }
+ 
+-/* Called whenever SCTP receives an INIT chunk. This happens when an incoming
+- * connect(2), sctp_connectx(3) or sctp_sendmsg(3) (with no association
+- * already present).
++/*
++ * Determines peer_secid for the asoc and updates socket's peer label
++ * if it's the first association on the socket.
+  */
+-static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+-				      struct sk_buff *skb)
++static int selinux_sctp_process_new_assoc(struct sctp_association *asoc,
++					  struct sk_buff *skb)
+ {
+-	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++	struct sock *sk = asoc->base.sk;
++	u16 family = sk->sk_family;
++	struct sk_security_struct *sksec = sk->sk_security;
+ 	struct common_audit_data ad;
+ 	struct lsm_network_audit net = {0,};
+-	u8 peerlbl_active;
+-	u32 peer_sid = SECINITSID_UNLABELED;
+-	u32 conn_sid;
+-	int err = 0;
++	int err;
+ 
+-	if (!selinux_policycap_extsockclass())
+-		return 0;
++	/* handle mapped IPv4 packets arriving via IPv6 sockets */
++	if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
++		family = PF_INET;
+ 
+-	peerlbl_active = selinux_peerlbl_enabled();
++	if (selinux_peerlbl_enabled()) {
++		asoc->peer_secid = SECSID_NULL;
+ 
+-	if (peerlbl_active) {
+ 		/* This will return peer_sid = SECSID_NULL if there are
+ 		 * no peer labels, see security_net_peersid_resolve().
+ 		 */
+-		err = selinux_skb_peerlbl_sid(skb, asoc->base.sk->sk_family,
+-					      &peer_sid);
++		err = selinux_skb_peerlbl_sid(skb, family, &asoc->peer_secid);
+ 		if (err)
+ 			return err;
+ 
+-		if (peer_sid == SECSID_NULL)
+-			peer_sid = SECINITSID_UNLABELED;
++		if (asoc->peer_secid == SECSID_NULL)
++			asoc->peer_secid = SECINITSID_UNLABELED;
++	} else {
++		asoc->peer_secid = SECINITSID_UNLABELED;
+ 	}
+ 
+ 	if (sksec->sctp_assoc_state == SCTP_ASSOC_UNSET) {
+@@ -5389,8 +5402,8 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ 		 * then it is approved by policy and used as the primary
+ 		 * peer SID for getpeercon(3).
+ 		 */
+-		sksec->peer_sid = peer_sid;
+-	} else if  (sksec->peer_sid != peer_sid) {
++		sksec->peer_sid = asoc->peer_secid;
++	} else if (sksec->peer_sid != asoc->peer_secid) {
+ 		/* Other association peer SIDs are checked to enforce
+ 		 * consistency among the peer SIDs.
+ 		 */
+@@ -5398,11 +5411,32 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ 		ad.u.net = &net;
+ 		ad.u.net->sk = asoc->base.sk;
+ 		err = avc_has_perm(&selinux_state,
+-				   sksec->peer_sid, peer_sid, sksec->sclass,
+-				   SCTP_SOCKET__ASSOCIATION, &ad);
++				   sksec->peer_sid, asoc->peer_secid,
++				   sksec->sclass, SCTP_SOCKET__ASSOCIATION,
++				   &ad);
+ 		if (err)
+ 			return err;
+ 	}
++	return 0;
++}
++
++/* Called whenever SCTP receives an INIT or COOKIE ECHO chunk. This
++ * happens on an incoming connect(2), sctp_connectx(3) or
++ * sctp_sendmsg(3) (with no association already present).
++ */
++static int selinux_sctp_assoc_request(struct sctp_association *asoc,
++				      struct sk_buff *skb)
++{
++	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++	u32 conn_sid;
++	int err;
++
++	if (!selinux_policycap_extsockclass())
++		return 0;
++
++	err = selinux_sctp_process_new_assoc(asoc, skb);
++	if (err)
++		return err;
+ 
+ 	/* Compute the MLS component for the connection and store
+ 	 * the information in asoc. This will be used by SCTP TCP type
+@@ -5410,17 +5444,36 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ 	 * socket to be generated. selinux_sctp_sk_clone() will then
+ 	 * plug this into the new socket.
+ 	 */
+-	err = selinux_conn_sid(sksec->sid, peer_sid, &conn_sid);
++	err = selinux_conn_sid(sksec->sid, asoc->peer_secid, &conn_sid);
+ 	if (err)
+ 		return err;
+ 
+ 	asoc->secid = conn_sid;
+-	asoc->peer_secid = peer_sid;
+ 
+ 	/* Set any NetLabel labels including CIPSO/CALIPSO options. */
+ 	return selinux_netlbl_sctp_assoc_request(asoc, skb);
+ }
+ 
++/* Called when SCTP receives a COOKIE ACK chunk as the final
++ * response to an association request (initited by us).
++ */
++static int selinux_sctp_assoc_established(struct sctp_association *asoc,
++					  struct sk_buff *skb)
++{
++	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++
++	if (!selinux_policycap_extsockclass())
++		return 0;
++
++	/* Inherit secid from the parent socket - this will be picked up
++	 * by selinux_sctp_sk_clone() if the association gets peeled off
++	 * into a new socket.
++	 */
++	asoc->secid = sksec->sid;
++
++	return selinux_sctp_process_new_assoc(asoc, skb);
++}
++
+ /* Check if sctp IPv4/IPv6 addresses are valid for binding or connecting
+  * based on their @optname.
+  */
+@@ -7241,6 +7294,7 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = {
+ 	LSM_HOOK_INIT(sctp_assoc_request, selinux_sctp_assoc_request),
+ 	LSM_HOOK_INIT(sctp_sk_clone, selinux_sctp_sk_clone),
+ 	LSM_HOOK_INIT(sctp_bind_connect, selinux_sctp_bind_connect),
++	LSM_HOOK_INIT(sctp_assoc_established, selinux_sctp_assoc_established),
+ 	LSM_HOOK_INIT(inet_conn_request, selinux_inet_conn_request),
+ 	LSM_HOOK_INIT(inet_csk_clone, selinux_inet_csk_clone),
+ 	LSM_HOOK_INIT(inet_conn_established, selinux_inet_conn_established),
+diff --git a/security/selinux/include/policycap.h b/security/selinux/include/policycap.h
+index 2ec038efbb03c..a9e572ca4fd96 100644
+--- a/security/selinux/include/policycap.h
++++ b/security/selinux/include/policycap.h
+@@ -11,6 +11,7 @@ enum {
+ 	POLICYDB_CAPABILITY_CGROUPSECLABEL,
+ 	POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION,
+ 	POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS,
++	POLICYDB_CAPABILITY_IOCTL_SKIP_CLOEXEC,
+ 	__POLICYDB_CAPABILITY_MAX
+ };
+ #define POLICYDB_CAPABILITY_MAX (__POLICYDB_CAPABILITY_MAX - 1)
+diff --git a/security/selinux/include/policycap_names.h b/security/selinux/include/policycap_names.h
+index b89289f092c93..ebd64afe1defd 100644
+--- a/security/selinux/include/policycap_names.h
++++ b/security/selinux/include/policycap_names.h
+@@ -12,7 +12,8 @@ const char *selinux_policycap_names[__POLICYDB_CAPABILITY_MAX] = {
+ 	"always_check_network",
+ 	"cgroup_seclabel",
+ 	"nnp_nosuid_transition",
+-	"genfs_seclabel_symlinks"
++	"genfs_seclabel_symlinks",
++	"ioctl_skip_cloexec"
+ };
+ 
+ #endif /* _SELINUX_POLICYCAP_NAMES_H_ */
+diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
+index ac0ece01305a6..c0d966020ebdd 100644
+--- a/security/selinux/include/security.h
++++ b/security/selinux/include/security.h
+@@ -219,6 +219,13 @@ static inline bool selinux_policycap_genfs_seclabel_symlinks(void)
+ 	return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS]);
+ }
+ 
++static inline bool selinux_policycap_ioctl_skip_cloexec(void)
++{
++	struct selinux_state *state = &selinux_state;
++
++	return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_IOCTL_SKIP_CLOEXEC]);
++}
++
+ struct selinux_policy_convert_data;
+ 
+ struct selinux_load_state {
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index e4cd7cb856f37..f2f6203e0fff5 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -2127,6 +2127,8 @@ static int sel_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	}
+ 
+ 	ret = sel_make_avc_files(dentry);
++	if (ret)
++		goto err;
+ 
+ 	dentry = sel_make_dir(sb->s_root, "ss", &fsi->last_ino);
+ 	if (IS_ERR(dentry)) {
+diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c
+index be83e5ce4469c..debe15207d2bf 100644
+--- a/security/selinux/xfrm.c
++++ b/security/selinux/xfrm.c
+@@ -347,7 +347,7 @@ int selinux_xfrm_state_alloc_acquire(struct xfrm_state *x,
+ 	int rc;
+ 	struct xfrm_sec_ctx *ctx;
+ 	char *ctx_str = NULL;
+-	int str_len;
++	u32 str_len;
+ 
+ 	if (!polsec)
+ 		return 0;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index efd35b07c7f88..95a15b77f42fb 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -2511,7 +2511,7 @@ static int smk_ipv6_check(struct smack_known *subject,
+ #ifdef CONFIG_AUDIT
+ 	smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+ 	ad.a.u.net->family = PF_INET6;
+-	ad.a.u.net->dport = ntohs(address->sin6_port);
++	ad.a.u.net->dport = address->sin6_port;
+ 	if (act == SMK_RECEIVING)
+ 		ad.a.u.net->v6info.saddr = address->sin6_addr;
+ 	else
+diff --git a/security/tomoyo/load_policy.c b/security/tomoyo/load_policy.c
+index 3445ae6fd4794..363b65be87ab7 100644
+--- a/security/tomoyo/load_policy.c
++++ b/security/tomoyo/load_policy.c
+@@ -24,7 +24,7 @@ static const char *tomoyo_loader;
+ static int __init tomoyo_loader_setup(char *str)
+ {
+ 	tomoyo_loader = str;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("TOMOYO_loader=", tomoyo_loader_setup);
+@@ -64,7 +64,7 @@ static const char *tomoyo_trigger;
+ static int __init tomoyo_trigger_setup(char *str)
+ {
+ 	tomoyo_trigger = str;
+-	return 0;
++	return 1;
+ }
+ 
+ __setup("TOMOYO_trigger=", tomoyo_trigger_setup);
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index edd9849210f2d..977d54320a5ca 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -970,6 +970,7 @@ int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream,
+ 
+ 	runtime->status->state = SNDRV_PCM_STATE_OPEN;
+ 	mutex_init(&runtime->buffer_mutex);
++	atomic_set(&runtime->buffer_accessing, 0);
+ 
+ 	substream->runtime = runtime;
+ 	substream->private_data = pcm->private_data;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 27ee0a0bee04a..c363cd3fe1edd 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1906,11 +1906,9 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ 		if (avail >= runtime->twake)
+ 			break;
+ 		snd_pcm_stream_unlock_irq(substream);
+-		mutex_unlock(&runtime->buffer_mutex);
+ 
+ 		tout = schedule_timeout(wait_time);
+ 
+-		mutex_lock(&runtime->buffer_mutex);
+ 		snd_pcm_stream_lock_irq(substream);
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		switch (runtime->status->state) {
+@@ -2204,7 +2202,6 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 
+ 	nonblock = !!(substream->f_flags & O_NONBLOCK);
+ 
+-	mutex_lock(&runtime->buffer_mutex);
+ 	snd_pcm_stream_lock_irq(substream);
+ 	err = pcm_accessible_state(runtime);
+ 	if (err < 0)
+@@ -2259,6 +2256,10 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 			err = -EINVAL;
+ 			goto _end_unlock;
+ 		}
++		if (!atomic_inc_unless_negative(&runtime->buffer_accessing)) {
++			err = -EBUSY;
++			goto _end_unlock;
++		}
+ 		snd_pcm_stream_unlock_irq(substream);
+ 		if (!is_playback)
+ 			snd_pcm_dma_buffer_sync(substream, SNDRV_DMA_SYNC_CPU);
+@@ -2267,6 +2268,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 		if (is_playback)
+ 			snd_pcm_dma_buffer_sync(substream, SNDRV_DMA_SYNC_DEVICE);
+ 		snd_pcm_stream_lock_irq(substream);
++		atomic_dec(&runtime->buffer_accessing);
+ 		if (err < 0)
+ 			goto _end_unlock;
+ 		err = pcm_accessible_state(runtime);
+@@ -2296,7 +2298,6 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ 	if (xfer > 0 && err >= 0)
+ 		snd_pcm_update_state(substream, runtime);
+ 	snd_pcm_stream_unlock_irq(substream);
+-	mutex_unlock(&runtime->buffer_mutex);
+ 	return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
+ }
+ EXPORT_SYMBOL(__snd_pcm_lib_xfer);
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 3cfae44c6b722..d954d167db3c0 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -672,6 +672,24 @@ static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
+ 	return 0;
+ }
+ 
++/* acquire buffer_mutex; if it's in r/w operation, return -EBUSY, otherwise
++ * block the further r/w operations
++ */
++static int snd_pcm_buffer_access_lock(struct snd_pcm_runtime *runtime)
++{
++	if (!atomic_dec_unless_positive(&runtime->buffer_accessing))
++		return -EBUSY;
++	mutex_lock(&runtime->buffer_mutex);
++	return 0; /* keep buffer_mutex, unlocked by below */
++}
++
++/* release buffer_mutex and clear r/w access flag */
++static void snd_pcm_buffer_access_unlock(struct snd_pcm_runtime *runtime)
++{
++	mutex_unlock(&runtime->buffer_mutex);
++	atomic_inc(&runtime->buffer_accessing);
++}
++
+ #if IS_ENABLED(CONFIG_SND_PCM_OSS)
+ #define is_oss_stream(substream)	((substream)->oss.oss)
+ #else
+@@ -682,14 +700,16 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 			     struct snd_pcm_hw_params *params)
+ {
+ 	struct snd_pcm_runtime *runtime;
+-	int err = 0, usecs;
++	int err, usecs;
+ 	unsigned int bits;
+ 	snd_pcm_uframes_t frames;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
+-	mutex_lock(&runtime->buffer_mutex);
++	err = snd_pcm_buffer_access_lock(runtime);
++	if (err < 0)
++		return err;
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_OPEN:
+@@ -807,7 +827,7 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ 			snd_pcm_lib_free_pages(substream);
+ 	}
+  unlock:
+-	mutex_unlock(&runtime->buffer_mutex);
++	snd_pcm_buffer_access_unlock(runtime);
+ 	return err;
+ }
+ 
+@@ -852,7 +872,9 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ 	if (PCM_RUNTIME_CHECK(substream))
+ 		return -ENXIO;
+ 	runtime = substream->runtime;
+-	mutex_lock(&runtime->buffer_mutex);
++	result = snd_pcm_buffer_access_lock(runtime);
++	if (result < 0)
++		return result;
+ 	snd_pcm_stream_lock_irq(substream);
+ 	switch (runtime->status->state) {
+ 	case SNDRV_PCM_STATE_SETUP:
+@@ -871,7 +893,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+ 	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
+  unlock:
+-	mutex_unlock(&runtime->buffer_mutex);
++	snd_pcm_buffer_access_unlock(runtime);
+ 	return result;
+ }
+ 
+@@ -1356,12 +1378,15 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
+ 
+ 	/* Guarantee the group members won't change during non-atomic action */
+ 	down_read(&snd_pcm_link_rwsem);
+-	mutex_lock(&substream->runtime->buffer_mutex);
++	res = snd_pcm_buffer_access_lock(substream->runtime);
++	if (res < 0)
++		goto unlock;
+ 	if (snd_pcm_stream_linked(substream))
+ 		res = snd_pcm_action_group(ops, substream, state, false);
+ 	else
+ 		res = snd_pcm_action_single(ops, substream, state);
+-	mutex_unlock(&substream->runtime->buffer_mutex);
++	snd_pcm_buffer_access_unlock(substream->runtime);
++ unlock:
+ 	up_read(&snd_pcm_link_rwsem);
+ 	return res;
+ }
+diff --git a/sound/firewire/fcp.c b/sound/firewire/fcp.c
+index bbfbebf4affbc..df44dd5dc4b22 100644
+--- a/sound/firewire/fcp.c
++++ b/sound/firewire/fcp.c
+@@ -240,9 +240,7 @@ int fcp_avc_transaction(struct fw_unit *unit,
+ 	t.response_match_bytes = response_match_bytes;
+ 	t.state = STATE_PENDING;
+ 	init_waitqueue_head(&t.wait);
+-
+-	if (*(const u8 *)command == 0x00 || *(const u8 *)command == 0x03)
+-		t.deferrable = true;
++	t.deferrable = (*(const u8 *)command == 0x00 || *(const u8 *)command == 0x03);
+ 
+ 	spin_lock_irq(&transactions_lock);
+ 	list_add_tail(&t.list, &transactions);
+diff --git a/sound/isa/cs423x/cs4236.c b/sound/isa/cs423x/cs4236.c
+index b6bdebd9ef275..10112e1bb25dc 100644
+--- a/sound/isa/cs423x/cs4236.c
++++ b/sound/isa/cs423x/cs4236.c
+@@ -494,7 +494,7 @@ static int snd_cs423x_pnpbios_detect(struct pnp_dev *pdev,
+ 	static int dev;
+ 	int err;
+ 	struct snd_card *card;
+-	struct pnp_dev *cdev;
++	struct pnp_dev *cdev, *iter;
+ 	char cid[PNP_ID_LEN];
+ 
+ 	if (pnp_device_is_isapnp(pdev))
+@@ -510,9 +510,11 @@ static int snd_cs423x_pnpbios_detect(struct pnp_dev *pdev,
+ 	strcpy(cid, pdev->id[0].id);
+ 	cid[5] = '1';
+ 	cdev = NULL;
+-	list_for_each_entry(cdev, &(pdev->protocol->devices), protocol_list) {
+-		if (!strcmp(cdev->id[0].id, cid))
++	list_for_each_entry(iter, &(pdev->protocol->devices), protocol_list) {
++		if (!strcmp(iter->id[0].id, cid)) {
++			cdev = iter;
+ 			break;
++		}
+ 	}
+ 	err = snd_cs423x_card_new(&pdev->dev, dev, &card);
+ 	if (err < 0)
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 3b6f2aacda459..1ffd96fbf2309 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2061,14 +2061,16 @@ static const struct hda_controller_ops pci_hda_ops = {
+ 	.position_check = azx_position_check,
+ };
+ 
++static DECLARE_BITMAP(probed_devs, SNDRV_CARDS);
++
+ static int azx_probe(struct pci_dev *pci,
+ 		     const struct pci_device_id *pci_id)
+ {
+-	static int dev;
+ 	struct snd_card *card;
+ 	struct hda_intel *hda;
+ 	struct azx *chip;
+ 	bool schedule_probe;
++	int dev;
+ 	int err;
+ 
+ 	if (pci_match_id(driver_denylist, pci)) {
+@@ -2076,10 +2078,11 @@ static int azx_probe(struct pci_dev *pci,
+ 		return -ENODEV;
+ 	}
+ 
++	dev = find_first_zero_bit(probed_devs, SNDRV_CARDS);
+ 	if (dev >= SNDRV_CARDS)
+ 		return -ENODEV;
+ 	if (!enable[dev]) {
+-		dev++;
++		set_bit(dev, probed_devs);
+ 		return -ENOENT;
+ 	}
+ 
+@@ -2146,7 +2149,7 @@ static int azx_probe(struct pci_dev *pci,
+ 	if (schedule_probe)
+ 		schedule_delayed_work(&hda->probe_work, 0);
+ 
+-	dev++;
++	set_bit(dev, probed_devs);
+ 	if (chip->disabled)
+ 		complete_all(&hda->probe_wait);
+ 	return 0;
+@@ -2369,6 +2372,7 @@ static void azx_remove(struct pci_dev *pci)
+ 		cancel_delayed_work_sync(&hda->probe_work);
+ 		device_lock(&pci->dev);
+ 
++		clear_bit(chip->dev_index, probed_devs);
+ 		pci_set_drvdata(pci, NULL);
+ 		snd_card_free(card);
+ 	}
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index ffcde7409d2a5..472d81679a27d 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1617,6 +1617,7 @@ static void hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ 	struct hda_codec *codec = per_pin->codec;
+ 	struct hdmi_spec *spec = codec->spec;
+ 	struct hdmi_eld *eld = &spec->temp_eld;
++	struct device *dev = hda_codec_dev(codec);
+ 	hda_nid_t pin_nid = per_pin->pin_nid;
+ 	int dev_id = per_pin->dev_id;
+ 	/*
+@@ -1630,8 +1631,13 @@ static void hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ 	int present;
+ 	int ret;
+ 
++#ifdef	CONFIG_PM
++	if (dev->power.runtime_status == RPM_SUSPENDING)
++		return;
++#endif
++
+ 	ret = snd_hda_power_up_pm(codec);
+-	if (ret < 0 && pm_runtime_suspended(hda_codec_dev(codec)))
++	if (ret < 0 && pm_runtime_suspended(dev))
+ 		goto out;
+ 
+ 	present = snd_hda_jack_pin_sense(codec, pin_nid, dev_id);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 08bf8a77a3e4d..f6e5ed34dd094 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3612,8 +3612,8 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ 	 * when booting with headset plugged. So skip setting it for the codec alc257
+ 	 */
+-	if (spec->codec_variant != ALC269_TYPE_ALC257 &&
+-	    spec->codec_variant != ALC269_TYPE_ALC256)
++	if (codec->core.vendor_id != 0x10ec0236 &&
++	    codec->core.vendor_id != 0x10ec0257)
+ 		alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (!spec->no_shutup_pins)
+@@ -6816,6 +6816,7 @@ enum {
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ 	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
++	ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
+ 	ALC269VC_FIXUP_ACER_HEADSET_MIC,
+@@ -8138,6 +8139,14 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ }
+ 		},
+ 	},
++	[ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x08},
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2fcf},
++			{ }
++		},
++	},
+ 	[ALC295_FIXUP_ASUS_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -8900,6 +8909,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+@@ -9242,6 +9252,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ 	{.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+ 	{.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
++	{.id = ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc256-samsung-headphone"},
+ 	{.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ 	{.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
+ 	{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+diff --git a/sound/soc/amd/acp/acp-mach-common.c b/sound/soc/amd/acp/acp-mach-common.c
+index 7785f12aa0065..55ad3c70f0efb 100644
+--- a/sound/soc/amd/acp/acp-mach-common.c
++++ b/sound/soc/amd/acp/acp-mach-common.c
+@@ -531,6 +531,8 @@ int acp_legacy_dai_links_create(struct snd_soc_card *card)
+ 		num_links++;
+ 
+ 	links = devm_kzalloc(dev, sizeof(struct snd_soc_dai_link) * num_links, GFP_KERNEL);
++	if (!links)
++		return -ENOMEM;
+ 
+ 	if (drv_data->hs_cpu_id == I2S_SP) {
+ 		links[i].name = "acp-headset-codec";
+diff --git a/sound/soc/amd/vangogh/acp5x-mach.c b/sound/soc/amd/vangogh/acp5x-mach.c
+index 14cf325e4b237..5d7a17755fa7f 100644
+--- a/sound/soc/amd/vangogh/acp5x-mach.c
++++ b/sound/soc/amd/vangogh/acp5x-mach.c
+@@ -165,6 +165,7 @@ static int acp5x_cs35l41_hw_params(struct snd_pcm_substream *substream,
+ 	unsigned int num_codecs = rtd->num_codecs;
+ 	unsigned int bclk_val;
+ 
++	ret = 0;
+ 	for (i = 0; i < num_codecs; i++) {
+ 		codec_dai = asoc_rtd_to_codec(rtd, i);
+ 		if ((strcmp(codec_dai->name, "spi-VLV1776:00") == 0) ||
+diff --git a/sound/soc/amd/vangogh/acp5x-pcm-dma.c b/sound/soc/amd/vangogh/acp5x-pcm-dma.c
+index f10de38976cb5..bfca4cf423cf1 100644
+--- a/sound/soc/amd/vangogh/acp5x-pcm-dma.c
++++ b/sound/soc/amd/vangogh/acp5x-pcm-dma.c
+@@ -281,7 +281,7 @@ static int acp5x_dma_hw_params(struct snd_soc_component *component,
+ 		return -EINVAL;
+ 	}
+ 	size = params_buffer_bytes(params);
+-	rtd->dma_addr = substream->dma_buffer.addr;
++	rtd->dma_addr = substream->runtime->dma_addr;
+ 	rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT);
+ 	config_acp5x_dma(rtd, substream->stream);
+ 	return 0;
+@@ -426,51 +426,51 @@ static int acp5x_audio_remove(struct platform_device *pdev)
+ static int __maybe_unused acp5x_pcm_resume(struct device *dev)
+ {
+ 	struct i2s_dev_data *adata;
+-	u32 val, reg_val, frmt_val;
++	struct i2s_stream_instance *rtd;
++	u32 val;
+ 
+-	reg_val = 0;
+-	frmt_val = 0;
+ 	adata = dev_get_drvdata(dev);
+ 
+ 	if (adata->play_stream && adata->play_stream->runtime) {
+-		struct i2s_stream_instance *rtd =
+-			adata->play_stream->runtime->private_data;
++		rtd = adata->play_stream->runtime->private_data;
+ 		config_acp5x_dma(rtd, SNDRV_PCM_STREAM_PLAYBACK);
+-		switch (rtd->i2s_instance) {
+-		case I2S_HS_INSTANCE:
+-			reg_val = ACP_HSTDM_ITER;
+-			frmt_val = ACP_HSTDM_TXFRMT;
+-			break;
+-		case I2S_SP_INSTANCE:
+-		default:
+-			reg_val = ACP_I2STDM_ITER;
+-			frmt_val = ACP_I2STDM_TXFRMT;
++		acp_writel((rtd->xfer_resolution  << 3), rtd->acp5x_base + ACP_HSTDM_ITER);
++		if (adata->tdm_mode == TDM_ENABLE) {
++			acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_HSTDM_TXFRMT);
++			val = acp_readl(adata->acp5x_base + ACP_HSTDM_ITER);
++			acp_writel(val | 0x2, adata->acp5x_base + ACP_HSTDM_ITER);
++		}
++	}
++	if (adata->i2ssp_play_stream && adata->i2ssp_play_stream->runtime) {
++		rtd = adata->i2ssp_play_stream->runtime->private_data;
++		config_acp5x_dma(rtd, SNDRV_PCM_STREAM_PLAYBACK);
++		acp_writel((rtd->xfer_resolution  << 3), rtd->acp5x_base + ACP_I2STDM_ITER);
++		if (adata->tdm_mode == TDM_ENABLE) {
++			acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_I2STDM_TXFRMT);
++			val = acp_readl(adata->acp5x_base + ACP_I2STDM_ITER);
++			acp_writel(val | 0x2, adata->acp5x_base + ACP_I2STDM_ITER);
+ 		}
+-		acp_writel((rtd->xfer_resolution  << 3),
+-			   rtd->acp5x_base + reg_val);
+ 	}
+ 
+ 	if (adata->capture_stream && adata->capture_stream->runtime) {
+-		struct i2s_stream_instance *rtd =
+-			adata->capture_stream->runtime->private_data;
++		rtd = adata->capture_stream->runtime->private_data;
+ 		config_acp5x_dma(rtd, SNDRV_PCM_STREAM_CAPTURE);
+-		switch (rtd->i2s_instance) {
+-		case I2S_HS_INSTANCE:
+-			reg_val = ACP_HSTDM_IRER;
+-			frmt_val = ACP_HSTDM_RXFRMT;
+-			break;
+-		case I2S_SP_INSTANCE:
+-		default:
+-			reg_val = ACP_I2STDM_IRER;
+-			frmt_val = ACP_I2STDM_RXFRMT;
++		acp_writel((rtd->xfer_resolution  << 3), rtd->acp5x_base + ACP_HSTDM_IRER);
++		if (adata->tdm_mode == TDM_ENABLE) {
++			acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_HSTDM_RXFRMT);
++			val = acp_readl(adata->acp5x_base + ACP_HSTDM_IRER);
++			acp_writel(val | 0x2, adata->acp5x_base + ACP_HSTDM_IRER);
+ 		}
+-		acp_writel((rtd->xfer_resolution  << 3),
+-			   rtd->acp5x_base + reg_val);
+ 	}
+-	if (adata->tdm_mode == TDM_ENABLE) {
+-		acp_writel(adata->tdm_fmt, adata->acp5x_base + frmt_val);
+-		val = acp_readl(adata->acp5x_base + reg_val);
+-		acp_writel(val | 0x2, adata->acp5x_base + reg_val);
++	if (adata->i2ssp_capture_stream && adata->i2ssp_capture_stream->runtime) {
++		rtd = adata->i2ssp_capture_stream->runtime->private_data;
++		config_acp5x_dma(rtd, SNDRV_PCM_STREAM_CAPTURE);
++		acp_writel((rtd->xfer_resolution  << 3), rtd->acp5x_base + ACP_I2STDM_IRER);
++		if (adata->tdm_mode == TDM_ENABLE) {
++			acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_I2STDM_RXFRMT);
++			val = acp_readl(adata->acp5x_base + ACP_I2STDM_IRER);
++			acp_writel(val | 0x2, adata->acp5x_base + ACP_I2STDM_IRER);
++		}
+ 	}
+ 	acp_writel(1, adata->acp5x_base + ACP_EXTERNAL_INTR_ENB);
+ 	return 0;
+diff --git a/sound/soc/atmel/atmel_ssc_dai.c b/sound/soc/atmel/atmel_ssc_dai.c
+index 26e2bc690d86e..c1dea8d624164 100644
+--- a/sound/soc/atmel/atmel_ssc_dai.c
++++ b/sound/soc/atmel/atmel_ssc_dai.c
+@@ -280,7 +280,10 @@ static int atmel_ssc_startup(struct snd_pcm_substream *substream,
+ 
+ 	/* Enable PMC peripheral clock for this SSC */
+ 	pr_debug("atmel_ssc_dai: Starting clock\n");
+-	clk_enable(ssc_p->ssc->clk);
++	ret = clk_enable(ssc_p->ssc->clk);
++	if (ret)
++		return ret;
++
+ 	ssc_p->mck_rate = clk_get_rate(ssc_p->ssc->clk);
+ 
+ 	/* Reset the SSC unless initialized to keep it in a clean state */
+diff --git a/sound/soc/atmel/mikroe-proto.c b/sound/soc/atmel/mikroe-proto.c
+index f9331f7e80fe4..5b513ff7fe2d4 100644
+--- a/sound/soc/atmel/mikroe-proto.c
++++ b/sound/soc/atmel/mikroe-proto.c
+@@ -115,7 +115,8 @@ static int snd_proto_probe(struct platform_device *pdev)
+ 	cpu_np = of_parse_phandle(np, "i2s-controller", 0);
+ 	if (!cpu_np) {
+ 		dev_err(&pdev->dev, "i2s-controller missing\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_codec_node;
+ 	}
+ 	dai->cpus->of_node = cpu_np;
+ 	dai->platforms->of_node = cpu_np;
+@@ -125,7 +126,8 @@ static int snd_proto_probe(struct platform_device *pdev)
+ 						       &bitclkmaster, &framemaster);
+ 	if (bitclkmaster != framemaster) {
+ 		dev_err(&pdev->dev, "Must be the same bitclock and frame master\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_cpu_node;
+ 	}
+ 	if (bitclkmaster) {
+ 		if (codec_np == bitclkmaster)
+@@ -136,18 +138,20 @@ static int snd_proto_probe(struct platform_device *pdev)
+ 		dai_fmt |= snd_soc_daifmt_parse_clock_provider_as_flag(np, NULL);
+ 	}
+ 
+-	of_node_put(bitclkmaster);
+-	of_node_put(framemaster);
+-	dai->dai_fmt = dai_fmt;
+-
+-	of_node_put(codec_np);
+-	of_node_put(cpu_np);
+ 
++	dai->dai_fmt = dai_fmt;
+ 	ret = snd_soc_register_card(&snd_proto);
+ 	if (ret && ret != -EPROBE_DEFER)
+ 		dev_err(&pdev->dev,
+ 			"snd_soc_register_card() failed: %d\n", ret);
+ 
++
++put_cpu_node:
++	of_node_put(bitclkmaster);
++	of_node_put(framemaster);
++	of_node_put(cpu_np);
++put_codec_node:
++	of_node_put(codec_np);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/atmel/sam9g20_wm8731.c b/sound/soc/atmel/sam9g20_wm8731.c
+index 915da92e1ec82..33e43013ff770 100644
+--- a/sound/soc/atmel/sam9g20_wm8731.c
++++ b/sound/soc/atmel/sam9g20_wm8731.c
+@@ -214,6 +214,7 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ 	cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0);
+ 	if (!cpu_np) {
+ 		dev_err(&pdev->dev, "dai and pcm info missing\n");
++		of_node_put(codec_np);
+ 		return -EINVAL;
+ 	}
+ 	at91sam9g20ek_dai.cpus->of_node = cpu_np;
+diff --git a/sound/soc/atmel/sam9x5_wm8731.c b/sound/soc/atmel/sam9x5_wm8731.c
+index 7c45dc4f8c1bb..99310e40e7a62 100644
+--- a/sound/soc/atmel/sam9x5_wm8731.c
++++ b/sound/soc/atmel/sam9x5_wm8731.c
+@@ -142,7 +142,7 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ 	if (!cpu_np) {
+ 		dev_err(&pdev->dev, "atmel,ssc-controller node missing\n");
+ 		ret = -EINVAL;
+-		goto out;
++		goto out_put_codec_np;
+ 	}
+ 	dai->cpus->of_node = cpu_np;
+ 	dai->platforms->of_node = cpu_np;
+@@ -153,12 +153,9 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ 	if (ret != 0) {
+ 		dev_err(&pdev->dev, "Failed to set SSC %d for audio: %d\n",
+ 			ret, priv->ssc_id);
+-		goto out;
++		goto out_put_cpu_np;
+ 	}
+ 
+-	of_node_put(codec_np);
+-	of_node_put(cpu_np);
+-
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Platform device allocation failed\n");
+@@ -167,10 +164,14 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ 
+ 	dev_dbg(&pdev->dev, "%s ok\n", __func__);
+ 
+-	return ret;
++	goto out_put_cpu_np;
+ 
+ out_put_audio:
+ 	atmel_ssc_put_audio(priv->ssc_id);
++out_put_cpu_np:
++	of_node_put(cpu_np);
++out_put_codec_np:
++	of_node_put(codec_np);
+ out:
+ 	return ret;
+ }
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 3a610ba183ffb..0d4e1fb9befce 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -707,6 +707,7 @@ config SND_SOC_CS4349
+ 
+ config SND_SOC_CS47L15
+ 	tristate
++	depends on MFD_CS47L15
+ 
+ config SND_SOC_CS47L24
+ 	tristate
+@@ -714,15 +715,19 @@ config SND_SOC_CS47L24
+ 
+ config SND_SOC_CS47L35
+ 	tristate
++	depends on MFD_CS47L35
+ 
+ config SND_SOC_CS47L85
+ 	tristate
++	depends on MFD_CS47L85
+ 
+ config SND_SOC_CS47L90
+ 	tristate
++	depends on MFD_CS47L90
+ 
+ config SND_SOC_CS47L92
+ 	tristate
++	depends on MFD_CS47L92
+ 
+ # Cirrus Logic Quad-Channel ADC
+ config SND_SOC_CS53L30
+diff --git a/sound/soc/codecs/cs35l41.c b/sound/soc/codecs/cs35l41.c
+index 9c4d481f7614c..8a2d0f9dd9a61 100644
+--- a/sound/soc/codecs/cs35l41.c
++++ b/sound/soc/codecs/cs35l41.c
+@@ -1071,8 +1071,8 @@ static int cs35l41_irq_gpio_config(struct cs35l41_private *cs35l41)
+ 
+ 	regmap_update_bits(cs35l41->regmap, CS35L41_GPIO2_CTRL1,
+ 			   CS35L41_GPIO_POL_MASK | CS35L41_GPIO_DIR_MASK,
+-			   irq_gpio_cfg1->irq_pol_inv << CS35L41_GPIO_POL_SHIFT |
+-			   !irq_gpio_cfg1->irq_out_en << CS35L41_GPIO_DIR_SHIFT);
++			   irq_gpio_cfg2->irq_pol_inv << CS35L41_GPIO_POL_SHIFT |
++			   !irq_gpio_cfg2->irq_out_en << CS35L41_GPIO_DIR_SHIFT);
+ 
+ 	regmap_update_bits(cs35l41->regmap, CS35L41_GPIO_PAD_CONTROL,
+ 			   CS35L41_GPIO1_CTRL_MASK | CS35L41_GPIO2_CTRL_MASK,
+@@ -1113,7 +1113,7 @@ static struct snd_soc_dai_driver cs35l41_dai[] = {
+ 		.capture = {
+ 			.stream_name = "AMP Capture",
+ 			.channels_min = 1,
+-			.channels_max = 8,
++			.channels_max = 4,
+ 			.rates = SNDRV_PCM_RATE_KNOT,
+ 			.formats = CS35L41_TX_FORMATS,
+ 		},
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index a63fba4e6c9c2..eb170d106396e 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -1630,7 +1630,11 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+ 
+ 	mutex_lock(&cs42l42->jack_detect_mutex);
+ 
+-	/* Check auto-detect status */
++	/*
++	 * Check auto-detect status. Don't assume a previous unplug event has
++	 * cleared the flags. If the jack is unplugged and plugged during
++	 * system suspend there won't have been an unplug event.
++	 */
+ 	if ((~masks[5]) & irq_params_table[5].mask) {
+ 		if (stickies[5] & CS42L42_HSDET_AUTO_DONE_MASK) {
+ 			cs42l42_process_hs_type_detect(cs42l42);
+@@ -1638,11 +1642,15 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+ 			case CS42L42_PLUG_CTIA:
+ 			case CS42L42_PLUG_OMTP:
+ 				snd_soc_jack_report(cs42l42->jack, SND_JACK_HEADSET,
+-						    SND_JACK_HEADSET);
++						    SND_JACK_HEADSET |
++						    SND_JACK_BTN_0 | SND_JACK_BTN_1 |
++						    SND_JACK_BTN_2 | SND_JACK_BTN_3);
+ 				break;
+ 			case CS42L42_PLUG_HEADPHONE:
+ 				snd_soc_jack_report(cs42l42->jack, SND_JACK_HEADPHONE,
+-						    SND_JACK_HEADPHONE);
++						    SND_JACK_HEADSET |
++						    SND_JACK_BTN_0 | SND_JACK_BTN_1 |
++						    SND_JACK_BTN_2 | SND_JACK_BTN_3);
+ 				break;
+ 			default:
+ 				break;
+diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c
+index 6ffe88345de5f..3a3dc0539d921 100644
+--- a/sound/soc/codecs/lpass-rx-macro.c
++++ b/sound/soc/codecs/lpass-rx-macro.c
+@@ -2039,6 +2039,10 @@ static int rx_macro_load_compander_coeff(struct snd_soc_component *component,
+ 	int i;
+ 	int hph_pwr_mode;
+ 
++	/* AUX does not have compander */
++	if (comp == INTERP_AUX)
++		return 0;
++
+ 	if (!rx->comp_enabled[comp])
+ 		return 0;
+ 
+@@ -2268,7 +2272,7 @@ static int rx_macro_mux_get(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *component = snd_soc_dapm_to_component(widget->dapm);
+ 	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+ 
+-	ucontrol->value.integer.value[0] =
++	ucontrol->value.enumerated.item[0] =
+ 			rx->rx_port_value[widget->shift];
+ 	return 0;
+ }
+@@ -2280,7 +2284,7 @@ static int rx_macro_mux_put(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *component = snd_soc_dapm_to_component(widget->dapm);
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	struct snd_soc_dapm_update *update = NULL;
+-	u32 rx_port_value = ucontrol->value.integer.value[0];
++	u32 rx_port_value = ucontrol->value.enumerated.item[0];
+ 	u32 aif_rst;
+ 	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+ 
+@@ -2392,7 +2396,7 @@ static int rx_macro_get_hph_pwr_mode(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+ 	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+ 
+-	ucontrol->value.integer.value[0] = rx->hph_pwr_mode;
++	ucontrol->value.enumerated.item[0] = rx->hph_pwr_mode;
+ 	return 0;
+ }
+ 
+@@ -2402,7 +2406,7 @@ static int rx_macro_put_hph_pwr_mode(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+ 	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+ 
+-	rx->hph_pwr_mode = ucontrol->value.integer.value[0];
++	rx->hph_pwr_mode = ucontrol->value.enumerated.item[0];
+ 	return 0;
+ }
+ 
+@@ -3542,6 +3546,8 @@ static int rx_macro_probe(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	rx->regmap = devm_regmap_init_mmio(dev, base, &rx_regmap_config);
++	if (IS_ERR(rx->regmap))
++		return PTR_ERR(rx->regmap);
+ 
+ 	dev_set_drvdata(dev, rx);
+ 
+diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c
+index a4c0a155af565..9c96ab1bf84f9 100644
+--- a/sound/soc/codecs/lpass-tx-macro.c
++++ b/sound/soc/codecs/lpass-tx-macro.c
+@@ -1821,6 +1821,8 @@ static int tx_macro_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	tx->regmap = devm_regmap_init_mmio(dev, base, &tx_regmap_config);
++	if (IS_ERR(tx->regmap))
++		return PTR_ERR(tx->regmap);
+ 
+ 	dev_set_drvdata(dev, tx);
+ 
+diff --git a/sound/soc/codecs/lpass-va-macro.c b/sound/soc/codecs/lpass-va-macro.c
+index 11147e35689b2..e14c277e6a8b6 100644
+--- a/sound/soc/codecs/lpass-va-macro.c
++++ b/sound/soc/codecs/lpass-va-macro.c
+@@ -780,7 +780,7 @@ static int va_macro_dec_mode_get(struct snd_kcontrol *kcontrol,
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	int path = e->shift_l;
+ 
+-	ucontrol->value.integer.value[0] = va->dec_mode[path];
++	ucontrol->value.enumerated.item[0] = va->dec_mode[path];
+ 
+ 	return 0;
+ }
+@@ -789,7 +789,7 @@ static int va_macro_dec_mode_put(struct snd_kcontrol *kcontrol,
+ 				 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+-	int value = ucontrol->value.integer.value[0];
++	int value = ucontrol->value.enumerated.item[0];
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	int path = e->shift_l;
+ 	struct va_macro *va = snd_soc_component_get_drvdata(comp);
+diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c
+index 75baf8eb70299..69d2915f40d88 100644
+--- a/sound/soc/codecs/lpass-wsa-macro.c
++++ b/sound/soc/codecs/lpass-wsa-macro.c
+@@ -2405,6 +2405,8 @@ static int wsa_macro_probe(struct platform_device *pdev)
+ 		return PTR_ERR(base);
+ 
+ 	wsa->regmap = devm_regmap_init_mmio(dev, base, &wsa_regmap_config);
++	if (IS_ERR(wsa->regmap))
++		return PTR_ERR(wsa->regmap);
+ 
+ 	dev_set_drvdata(dev, wsa);
+ 
+diff --git a/sound/soc/codecs/max98927.c b/sound/soc/codecs/max98927.c
+index 5ba5f876eab87..fd84780bf689f 100644
+--- a/sound/soc/codecs/max98927.c
++++ b/sound/soc/codecs/max98927.c
+@@ -16,6 +16,7 @@
+ #include <sound/pcm_params.h>
+ #include <sound/soc.h>
+ #include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/of_gpio.h>
+ #include <sound/tlv.h>
+ #include "max98927.h"
+diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c
+index 3ddd822240e3a..971b8360b5b1b 100644
+--- a/sound/soc/codecs/msm8916-wcd-analog.c
++++ b/sound/soc/codecs/msm8916-wcd-analog.c
+@@ -1221,8 +1221,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq_byname(pdev, "mbhc_switch_int");
+-	if (irq < 0)
+-		return irq;
++	if (irq < 0) {
++		ret = irq;
++		goto err_disable_clk;
++	}
+ 
+ 	ret = devm_request_threaded_irq(dev, irq, NULL,
+ 			       pm8916_mbhc_switch_irq_handler,
+@@ -1234,8 +1236,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 
+ 	if (priv->mbhc_btn_enabled) {
+ 		irq = platform_get_irq_byname(pdev, "mbhc_but_press_det");
+-		if (irq < 0)
+-			return irq;
++		if (irq < 0) {
++			ret = irq;
++			goto err_disable_clk;
++		}
+ 
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				       mbhc_btn_press_irq_handler,
+@@ -1246,8 +1250,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 			dev_err(dev, "cannot request mbhc button press irq\n");
+ 
+ 		irq = platform_get_irq_byname(pdev, "mbhc_but_rel_det");
+-		if (irq < 0)
+-			return irq;
++		if (irq < 0) {
++			ret = irq;
++			goto err_disable_clk;
++		}
+ 
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				       mbhc_btn_release_irq_handler,
+@@ -1264,6 +1270,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 	return devm_snd_soc_register_component(dev, &pm8916_wcd_analog,
+ 				      pm8916_wcd_analog_dai,
+ 				      ARRAY_SIZE(pm8916_wcd_analog_dai));
++
++err_disable_clk:
++	clk_disable_unprepare(priv->mclk);
++	return ret;
+ }
+ 
+ static int pm8916_wcd_analog_spmi_remove(struct platform_device *pdev)
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index fcc10c8bc6259..9ad7fc0baf072 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -1201,7 +1201,7 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(priv->mclk);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to enable mclk %d\n", ret);
+-		return ret;
++		goto err_clk;
+ 	}
+ 
+ 	dev_set_drvdata(dev, priv);
+@@ -1209,6 +1209,9 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+ 	return devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
+ 				      msm8916_wcd_digital_dai,
+ 				      ARRAY_SIZE(msm8916_wcd_digital_dai));
++err_clk:
++	clk_disable_unprepare(priv->ahbclk);
++	return ret;
+ }
+ 
+ static int msm8916_wcd_digital_remove(struct platform_device *pdev)
+diff --git a/sound/soc/codecs/mt6358.c b/sound/soc/codecs/mt6358.c
+index 9b263a9a669dc..4c7b5d940799b 100644
+--- a/sound/soc/codecs/mt6358.c
++++ b/sound/soc/codecs/mt6358.c
+@@ -107,6 +107,7 @@ int mt6358_set_mtkaif_protocol(struct snd_soc_component *cmpnt,
+ 	priv->mtkaif_protocol = mtkaif_protocol;
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_set_mtkaif_protocol);
+ 
+ static void playback_gpio_set(struct mt6358_priv *priv)
+ {
+@@ -273,6 +274,7 @@ int mt6358_mtkaif_calibration_enable(struct snd_soc_component *cmpnt)
+ 			   1 << RG_AUD_PAD_TOP_DAT_MISO_LOOPBACK_SFT);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_mtkaif_calibration_enable);
+ 
+ int mt6358_mtkaif_calibration_disable(struct snd_soc_component *cmpnt)
+ {
+@@ -296,6 +298,7 @@ int mt6358_mtkaif_calibration_disable(struct snd_soc_component *cmpnt)
+ 	capture_gpio_reset(priv);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_mtkaif_calibration_disable);
+ 
+ int mt6358_set_mtkaif_calibration_phase(struct snd_soc_component *cmpnt,
+ 					int phase_1, int phase_2)
+@@ -310,6 +313,7 @@ int mt6358_set_mtkaif_calibration_phase(struct snd_soc_component *cmpnt,
+ 			   phase_2 << RG_AUD_PAD_TOP_PHASE_MODE2_SFT);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_set_mtkaif_calibration_phase);
+ 
+ /* dl pga gain */
+ enum {
+diff --git a/sound/soc/codecs/rk817_codec.c b/sound/soc/codecs/rk817_codec.c
+index 03f24edfe4f64..8fffe378618d0 100644
+--- a/sound/soc/codecs/rk817_codec.c
++++ b/sound/soc/codecs/rk817_codec.c
+@@ -508,12 +508,14 @@ static int rk817_platform_probe(struct platform_device *pdev)
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "%s() register codec error %d\n",
+ 			__func__, ret);
+-		goto err_;
++		goto err_clk;
+ 	}
+ 
+ 	return 0;
+-err_:
+ 
++err_clk:
++	clk_disable_unprepare(rk817_codec_data->mclk);
++err_:
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c
+index 2138f62e6af5d..3a8fba101b20f 100644
+--- a/sound/soc/codecs/rt5663.c
++++ b/sound/soc/codecs/rt5663.c
+@@ -3478,6 +3478,8 @@ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
+ 		table_size = sizeof(struct impedance_mapping_table) *
+ 			rt5663->pdata.impedance_sensing_num;
+ 		rt5663->imp_table = devm_kzalloc(dev, table_size, GFP_KERNEL);
++		if (!rt5663->imp_table)
++			return -ENOMEM;
+ 		ret = device_property_read_u32_array(dev,
+ 			"realtek,impedance_sensing_table",
+ 			(u32 *)rt5663->imp_table, table_size);
+diff --git a/sound/soc/codecs/rt5682s.c b/sound/soc/codecs/rt5682s.c
+index d79b548d23fa4..f2a2f3d60925c 100644
+--- a/sound/soc/codecs/rt5682s.c
++++ b/sound/soc/codecs/rt5682s.c
+@@ -822,6 +822,7 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ {
+ 	struct rt5682s_priv *rt5682s =
+ 		container_of(work, struct rt5682s_priv, jack_detect_work.work);
++	struct snd_soc_dapm_context *dapm;
+ 	int val, btn_type;
+ 
+ 	if (!rt5682s->component || !rt5682s->component->card ||
+@@ -832,7 +833,9 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ 		return;
+ 	}
+ 
+-	mutex_lock(&rt5682s->jdet_mutex);
++	dapm = snd_soc_component_get_dapm(rt5682s->component);
++
++	snd_soc_dapm_mutex_lock(dapm);
+ 	mutex_lock(&rt5682s->calibrate_mutex);
+ 
+ 	val = snd_soc_component_read(rt5682s->component, RT5682S_AJD1_CTRL)
+@@ -889,6 +892,9 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ 		rt5682s->irq_work_delay_time = 50;
+ 	}
+ 
++	mutex_unlock(&rt5682s->calibrate_mutex);
++	snd_soc_dapm_mutex_unlock(dapm);
++
+ 	snd_soc_jack_report(rt5682s->hs_jack, rt5682s->jack_type,
+ 		SND_JACK_HEADSET | SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+ 		SND_JACK_BTN_2 | SND_JACK_BTN_3);
+@@ -898,9 +904,6 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ 		schedule_delayed_work(&rt5682s->jd_check_work, 0);
+ 	else
+ 		cancel_delayed_work_sync(&rt5682s->jd_check_work);
+-
+-	mutex_unlock(&rt5682s->calibrate_mutex);
+-	mutex_unlock(&rt5682s->jdet_mutex);
+ }
+ 
+ static void rt5682s_jd_check_handler(struct work_struct *work)
+@@ -908,14 +911,9 @@ static void rt5682s_jd_check_handler(struct work_struct *work)
+ 	struct rt5682s_priv *rt5682s =
+ 		container_of(work, struct rt5682s_priv, jd_check_work.work);
+ 
+-	if (snd_soc_component_read(rt5682s->component, RT5682S_AJD1_CTRL)
+-		& RT5682S_JDH_RS_MASK) {
++	if (snd_soc_component_read(rt5682s->component, RT5682S_AJD1_CTRL) & RT5682S_JDH_RS_MASK) {
+ 		/* jack out */
+-		rt5682s->jack_type = rt5682s_headset_detect(rt5682s->component, 0);
+-
+-		snd_soc_jack_report(rt5682s->hs_jack, rt5682s->jack_type,
+-			SND_JACK_HEADSET | SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+-			SND_JACK_BTN_2 | SND_JACK_BTN_3);
++		schedule_delayed_work(&rt5682s->jack_detect_work, 0);
+ 	} else {
+ 		schedule_delayed_work(&rt5682s->jd_check_work, 500);
+ 	}
+@@ -1323,7 +1321,6 @@ static int rt5682s_hp_amp_event(struct snd_soc_dapm_widget *w,
+ 		struct snd_kcontrol *kcontrol, int event)
+ {
+ 	struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+-	struct rt5682s_priv *rt5682s = snd_soc_component_get_drvdata(component);
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_POST_PMU:
+@@ -1339,8 +1336,6 @@ static int rt5682s_hp_amp_event(struct snd_soc_dapm_widget *w,
+ 		snd_soc_component_write(component, RT5682S_BIAS_CUR_CTRL_11, 0x6666);
+ 		snd_soc_component_write(component, RT5682S_BIAS_CUR_CTRL_12, 0xa82a);
+ 
+-		mutex_lock(&rt5682s->jdet_mutex);
+-
+ 		snd_soc_component_update_bits(component, RT5682S_HP_CTRL_2,
+ 			RT5682S_HPO_L_PATH_MASK | RT5682S_HPO_R_PATH_MASK |
+ 			RT5682S_HPO_SEL_IP_EN_SW, RT5682S_HPO_L_PATH_EN |
+@@ -1348,8 +1343,6 @@ static int rt5682s_hp_amp_event(struct snd_soc_dapm_widget *w,
+ 		usleep_range(5000, 10000);
+ 		snd_soc_component_update_bits(component, RT5682S_HP_AMP_DET_CTL_1,
+ 			RT5682S_CP_SW_SIZE_MASK, RT5682S_CP_SW_SIZE_L | RT5682S_CP_SW_SIZE_S);
+-
+-		mutex_unlock(&rt5682s->jdet_mutex);
+ 		break;
+ 
+ 	case SND_SOC_DAPM_POST_PMD:
+@@ -3075,7 +3068,6 @@ static int rt5682s_i2c_probe(struct i2c_client *i2c,
+ 
+ 	mutex_init(&rt5682s->calibrate_mutex);
+ 	mutex_init(&rt5682s->sar_mutex);
+-	mutex_init(&rt5682s->jdet_mutex);
+ 	rt5682s_calibrate(rt5682s);
+ 
+ 	regmap_update_bits(rt5682s->regmap, RT5682S_MICBIAS_2,
+diff --git a/sound/soc/codecs/rt5682s.h b/sound/soc/codecs/rt5682s.h
+index 1bf2ef7ce5784..397a2531b6f68 100644
+--- a/sound/soc/codecs/rt5682s.h
++++ b/sound/soc/codecs/rt5682s.h
+@@ -1446,7 +1446,6 @@ struct rt5682s_priv {
+ 	struct delayed_work jd_check_work;
+ 	struct mutex calibrate_mutex;
+ 	struct mutex sar_mutex;
+-	struct mutex jdet_mutex;
+ 
+ #ifdef CONFIG_COMMON_CLK
+ 	struct clk_hw dai_clks_hw[RT5682S_DAI_NUM_CLKS];
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index e63c6b723d76c..7b99318070cfa 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -3023,14 +3023,14 @@ static int wcd934x_hph_impedance_get(struct snd_kcontrol *kcontrol,
+ 	return 0;
+ }
+ static const struct snd_kcontrol_new hph_type_detect_controls[] = {
+-	SOC_SINGLE_EXT("HPH Type", 0, 0, UINT_MAX, 0,
++	SOC_SINGLE_EXT("HPH Type", 0, 0, WCD_MBHC_HPH_STEREO, 0,
+ 		       wcd934x_get_hph_type, NULL),
+ };
+ 
+ static const struct snd_kcontrol_new impedance_detect_controls[] = {
+-	SOC_SINGLE_EXT("HPHL Impedance", 0, 0, UINT_MAX, 0,
++	SOC_SINGLE_EXT("HPHL Impedance", 0, 0, INT_MAX, 0,
+ 		       wcd934x_hph_impedance_get, NULL),
+-	SOC_SINGLE_EXT("HPHR Impedance", 0, 1, UINT_MAX, 0,
++	SOC_SINGLE_EXT("HPHR Impedance", 0, 1, INT_MAX, 0,
+ 		       wcd934x_hph_impedance_get, NULL),
+ };
+ 
+@@ -3308,13 +3308,16 @@ static int wcd934x_rx_hph_mode_put(struct snd_kcontrol *kc,
+ 
+ 	mode_val = ucontrol->value.enumerated.item[0];
+ 
++	if (mode_val == wcd->hph_mode)
++		return 0;
++
+ 	if (mode_val == 0) {
+ 		dev_err(wcd->dev, "Invalid HPH Mode, default to ClSH HiFi\n");
+ 		mode_val = CLS_H_LOHIFI;
+ 	}
+ 	wcd->hph_mode = mode_val;
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static int slim_rx_mux_get(struct snd_kcontrol *kc,
+@@ -5885,6 +5888,7 @@ static int wcd934x_codec_parse_data(struct wcd934x_codec *wcd)
+ 	}
+ 
+ 	wcd->sidev = of_slim_get_device(wcd->sdev->ctrl, ifc_dev_np);
++	of_node_put(ifc_dev_np);
+ 	if (!wcd->sidev) {
+ 		dev_err(dev, "Unable to get SLIM Interface device\n");
+ 		return -EINVAL;
+diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
+index bbc261ab2025b..4480c118ed5d9 100644
+--- a/sound/soc/codecs/wcd938x.c
++++ b/sound/soc/codecs/wcd938x.c
+@@ -2504,7 +2504,7 @@ static int wcd938x_tx_mode_get(struct snd_kcontrol *kcontrol,
+ 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ 	int path = e->shift_l;
+ 
+-	ucontrol->value.integer.value[0] = wcd938x->tx_mode[path];
++	ucontrol->value.enumerated.item[0] = wcd938x->tx_mode[path];
+ 
+ 	return 0;
+ }
+@@ -2528,7 +2528,7 @@ static int wcd938x_rx_hph_mode_get(struct snd_kcontrol *kcontrol,
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+ 	struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
+ 
+-	ucontrol->value.integer.value[0] = wcd938x->hph_mode;
++	ucontrol->value.enumerated.item[0] = wcd938x->hph_mode;
+ 
+ 	return 0;
+ }
+@@ -3577,14 +3577,14 @@ static int wcd938x_hph_impedance_get(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static const struct snd_kcontrol_new hph_type_detect_controls[] = {
+-	SOC_SINGLE_EXT("HPH Type", 0, 0, UINT_MAX, 0,
++	SOC_SINGLE_EXT("HPH Type", 0, 0, WCD_MBHC_HPH_STEREO, 0,
+ 		       wcd938x_get_hph_type, NULL),
+ };
+ 
+ static const struct snd_kcontrol_new impedance_detect_controls[] = {
+-	SOC_SINGLE_EXT("HPHL Impedance", 0, 0, UINT_MAX, 0,
++	SOC_SINGLE_EXT("HPHL Impedance", 0, 0, INT_MAX, 0,
+ 		       wcd938x_hph_impedance_get, NULL),
+-	SOC_SINGLE_EXT("HPHR Impedance", 0, 1, UINT_MAX, 0,
++	SOC_SINGLE_EXT("HPHR Impedance", 0, 1, INT_MAX, 0,
+ 		       wcd938x_hph_impedance_get, NULL),
+ };
+ 
+diff --git a/sound/soc/codecs/wm8350.c b/sound/soc/codecs/wm8350.c
+index 15d42ce3b21d6..41504ce2a682f 100644
+--- a/sound/soc/codecs/wm8350.c
++++ b/sound/soc/codecs/wm8350.c
+@@ -1537,18 +1537,38 @@ static  int wm8350_component_probe(struct snd_soc_component *component)
+ 	wm8350_clear_bits(wm8350, WM8350_JACK_DETECT,
+ 			  WM8350_JDL_ENA | WM8350_JDR_ENA);
+ 
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L,
+ 			    wm8350_hpl_jack_handler, 0, "Left jack detect",
+ 			    priv);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R,
++	if (ret != 0)
++		goto err;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R,
+ 			    wm8350_hpr_jack_handler, 0, "Right jack detect",
+ 			    priv);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICSCD,
++	if (ret != 0)
++		goto free_jck_det_l;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICSCD,
+ 			    wm8350_mic_handler, 0, "Microphone short", priv);
+-	wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICD,
++	if (ret != 0)
++		goto free_jck_det_r;
++
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICD,
+ 			    wm8350_mic_handler, 0, "Microphone detect", priv);
++	if (ret != 0)
++		goto free_micscd;
+ 
+ 	return 0;
++
++free_micscd:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_MICSCD, priv);
++free_jck_det_r:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R, priv);
++free_jck_det_l:
++	wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L, priv);
++err:
++	return ret;
+ }
+ 
+ static void wm8350_component_remove(struct snd_soc_component *component)
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 5cb58929090d4..1edac3e10f345 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -403,9 +403,13 @@ static int dw_i2s_runtime_suspend(struct device *dev)
+ static int dw_i2s_runtime_resume(struct device *dev)
+ {
+ 	struct dw_i2s_dev *dw_dev = dev_get_drvdata(dev);
++	int ret;
+ 
+-	if (dw_dev->capability & DW_I2S_MASTER)
+-		clk_enable(dw_dev->clk);
++	if (dw_dev->capability & DW_I2S_MASTER) {
++		ret = clk_enable(dw_dev->clk);
++		if (ret)
++			return ret;
++	}
+ 	return 0;
+ }
+ 
+@@ -422,10 +426,13 @@ static int dw_i2s_resume(struct snd_soc_component *component)
+ {
+ 	struct dw_i2s_dev *dev = snd_soc_component_get_drvdata(component);
+ 	struct snd_soc_dai *dai;
+-	int stream;
++	int stream, ret;
+ 
+-	if (dev->capability & DW_I2S_MASTER)
+-		clk_enable(dev->clk);
++	if (dev->capability & DW_I2S_MASTER) {
++		ret = clk_enable(dev->clk);
++		if (ret)
++			return ret;
++	}
+ 
+ 	for_each_component_dais(component, dai) {
+ 		for_each_pcm_streams(stream)
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index d178b479c8bd4..06d4a014f296d 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -610,6 +610,8 @@ static void fsl_spdif_shutdown(struct snd_pcm_substream *substream,
+ 		mask = SCR_TXFIFO_AUTOSYNC_MASK | SCR_TXFIFO_CTRL_MASK |
+ 			SCR_TXSEL_MASK | SCR_USRC_SEL_MASK |
+ 			SCR_TXFIFO_FSEL_MASK;
++		/* Disable TX clock */
++		regmap_update_bits(regmap, REG_SPDIF_STC, STC_TXCLK_ALL_EN_MASK, 0);
+ 	} else {
+ 		scr = SCR_RXFIFO_OFF | SCR_RXFIFO_CTL_ZERO;
+ 		mask = SCR_RXFIFO_FSEL_MASK | SCR_RXFIFO_AUTOSYNC_MASK|
+diff --git a/sound/soc/fsl/imx-es8328.c b/sound/soc/fsl/imx-es8328.c
+index 09c674ee79f1a..168973035e35f 100644
+--- a/sound/soc/fsl/imx-es8328.c
++++ b/sound/soc/fsl/imx-es8328.c
+@@ -87,6 +87,7 @@ static int imx_es8328_probe(struct platform_device *pdev)
+ 	if (int_port > MUX_PORT_MAX || int_port == 0) {
+ 		dev_err(dev, "mux-int-port: hardware only has %d mux ports\n",
+ 			MUX_PORT_MAX);
++		ret = -EINVAL;
+ 		goto fail;
+ 	}
+ 
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index 850e968677f10..7d8c9843ad53b 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -275,6 +275,7 @@ int asoc_simple_hw_params(struct snd_pcm_substream *substream,
+ 		mclk_fs = props->mclk_fs;
+ 
+ 	if (mclk_fs) {
++		struct snd_soc_component *component;
+ 		mclk = params_rate(params) * mclk_fs;
+ 
+ 		for_each_prop_dai_codec(props, i, pdai) {
+@@ -282,16 +283,30 @@ int asoc_simple_hw_params(struct snd_pcm_substream *substream,
+ 			if (ret < 0)
+ 				return ret;
+ 		}
++
+ 		for_each_prop_dai_cpu(props, i, pdai) {
+ 			ret = asoc_simple_set_clk_rate(pdai, mclk);
+ 			if (ret < 0)
+ 				return ret;
+ 		}
++
++		/* Ensure sysclk is set on all components in case any
++		 * (such as platform components) are missed by calls to
++		 * snd_soc_dai_set_sysclk.
++		 */
++		for_each_rtd_components(rtd, i, component) {
++			ret = snd_soc_component_set_sysclk(component, 0, 0,
++							   mclk, SND_SOC_CLOCK_IN);
++			if (ret && ret != -ENOTSUPP)
++				return ret;
++		}
++
+ 		for_each_rtd_codec_dais(rtd, i, sdai) {
+ 			ret = snd_soc_dai_set_sysclk(sdai, 0, mclk, SND_SOC_CLOCK_IN);
+ 			if (ret && ret != -ENOTSUPP)
+ 				return ret;
+ 		}
++
+ 		for_each_rtd_cpu_dais(rtd, i, sdai) {
+ 			ret = snd_soc_dai_set_sysclk(sdai, 0, mclk, SND_SOC_CLOCK_OUT);
+ 			if (ret && ret != -ENOTSUPP)
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index 20d577eaab6d7..28d7670b8f8f8 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -63,7 +63,12 @@ static const struct acpi_gpio_mapping *gpio_mapping = acpi_es8336_gpios;
+ 
+ static void log_quirks(struct device *dev)
+ {
+-	dev_info(dev, "quirk SSP%ld",  SOF_ES8336_SSP_CODEC(quirk));
++	dev_info(dev, "quirk mask %#lx\n", quirk);
++	dev_info(dev, "quirk SSP%ld\n",  SOF_ES8336_SSP_CODEC(quirk));
++	if (quirk & SOF_ES8336_ENABLE_DMIC)
++		dev_info(dev, "quirk DMIC enabled\n");
++	if (quirk & SOF_ES8336_TGL_GPIO_QUIRK)
++		dev_info(dev, "quirk TGL GPIO enabled\n");
+ }
+ 
+ static int sof_es8316_speaker_power_event(struct snd_soc_dapm_widget *w,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 54eefaff62a7e..182e23bc2f343 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -184,7 +184,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		.callback = sof_sdw_quirk_cb,
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Convertible"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Conv"),
+ 		},
+ 		.driver_data = (void *)(SOF_SDW_TGL_HDMI |
+ 					SOF_SDW_PCH_DMIC |
+diff --git a/sound/soc/intel/common/soc-acpi-intel-bxt-match.c b/sound/soc/intel/common/soc-acpi-intel-bxt-match.c
+index 342d340522045..04a92e74d99bc 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-bxt-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-bxt-match.c
+@@ -41,6 +41,11 @@ static struct snd_soc_acpi_mach *apl_quirk(void *arg)
+ 	return mach;
+ }
+ 
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs bxt_codecs = {
+ 	.num_codecs = 1,
+ 	.codecs = {"MX98357A"}
+@@ -83,7 +88,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_bxt_machines[] = {
+ 		.sof_tplg_filename = "sof-apl-tdf8532.tplg",
+ 	},
+ 	{
+-		.id = "ESSX8336",
++		.comp_ids = &essx_83x6,
+ 		.drv_name = "sof-essx8336",
+ 		.sof_fw_filename = "sof-apl.ri",
+ 		.sof_tplg_filename = "sof-apl-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-cml-match.c b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+index 4eebc79d4b486..14395833d89e8 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-cml-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+@@ -9,6 +9,11 @@
+ #include <sound/soc-acpi.h>
+ #include <sound/soc-acpi-intel-match.h>
+ 
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs rt1011_spk_codecs = {
+ 	.num_codecs = 1,
+ 	.codecs = {"10EC1011"}
+@@ -82,7 +87,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_cml_machines[] = {
+ 		.sof_tplg_filename = "sof-cml-da7219-max98390.tplg",
+ 	},
+ 	{
+-		.id = "ESSX8336",
++		.comp_ids = &essx_83x6,
+ 		.drv_name = "sof-essx8336",
+ 		.sof_fw_filename = "sof-cml.ri",
+ 		.sof_tplg_filename = "sof-cml-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-glk-match.c b/sound/soc/intel/common/soc-acpi-intel-glk-match.c
+index 8492b7e2a9450..7aa6a870d5a5c 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-glk-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-glk-match.c
+@@ -9,6 +9,11 @@
+ #include <sound/soc-acpi.h>
+ #include <sound/soc-acpi-intel-match.h>
+ 
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs glk_codecs = {
+ 	.num_codecs = 1,
+ 	.codecs = {"MX98357A"}
+@@ -58,7 +63,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_glk_machines[] = {
+ 		.sof_tplg_filename = "sof-glk-cs42l42.tplg",
+ 	},
+ 	{
+-		.id = "ESSX8336",
++		.comp_ids = &essx_83x6,
+ 		.drv_name = "sof-essx8336",
+ 		.sof_fw_filename = "sof-glk.ri",
+ 		.sof_tplg_filename = "sof-glk-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-jsl-match.c b/sound/soc/intel/common/soc-acpi-intel-jsl-match.c
+index 278ec196da7bf..9d0d0e1437a4b 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-jsl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-jsl-match.c
+@@ -9,6 +9,11 @@
+ #include <sound/soc-acpi.h>
+ #include <sound/soc-acpi-intel-match.h>
+ 
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs jsl_7219_98373_codecs = {
+ 	.num_codecs = 1,
+ 	.codecs = {"MX98373"}
+@@ -87,7 +92,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_jsl_machines[] = {
+ 		.sof_tplg_filename = "sof-jsl-cs42l42-mx98360a.tplg",
+ 	},
+ 	{
+-		.id = "ESSX8336",
++		.comp_ids = &essx_83x6,
+ 		.drv_name = "sof-essx8336",
+ 		.sof_fw_filename = "sof-jsl.ri",
+ 		.sof_tplg_filename = "sof-jsl-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+index da31bb3cca17c..e2658bca69318 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+@@ -10,6 +10,11 @@
+ #include <sound/soc-acpi-intel-match.h>
+ #include "soc-acpi-intel-sdw-mockup-match.h"
+ 
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs tgl_codecs = {
+ 	.num_codecs = 1,
+ 	.codecs = {"MX98357A"}
+@@ -389,7 +394,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_tgl_machines[] = {
+ 		.sof_tplg_filename = "sof-tgl-rt1011-rt5682.tplg",
+ 	},
+ 	{
+-		.id = "ESSX8336",
++		.comp_ids = &essx_83x6,
+ 		.drv_name = "sof-essx8336",
+ 		.sof_fw_filename = "sof-tgl.ri",
+ 		.sof_tplg_filename = "sof-tgl-es8336.tplg",
+diff --git a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+index bda103211e0bd..0ab8b050b305f 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+@@ -685,7 +685,6 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ 	struct snd_soc_dai_link *dai_link;
+ 	struct mt8183_da7219_max98357_priv *priv;
+ 	struct pinctrl *pinctrl;
+-	const struct of_device_id *match;
+ 	int ret, i;
+ 
+ 	platform_node = of_parse_phandle(pdev->dev.of_node,
+@@ -695,11 +694,9 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	match = of_match_device(pdev->dev.driver->of_match_table, &pdev->dev);
+-	if (!match || !match->data)
++	card = (struct snd_soc_card *)of_device_get_match_data(&pdev->dev);
++	if (!card)
+ 		return -EINVAL;
+-
+-	card = (struct snd_soc_card *)match->data;
+ 	card->dev = &pdev->dev;
+ 
+ 	hdmi_codec = of_parse_phandle(pdev->dev.of_node,
+diff --git a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+index 9f0bf15fe465e..8c81c5528f6e1 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
+@@ -637,7 +637,6 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
+ 	struct device_node *platform_node, *ec_codec, *hdmi_codec;
+ 	struct snd_soc_dai_link *dai_link;
+ 	struct mt8183_mt6358_ts3a227_max98357_priv *priv;
+-	const struct of_device_id *match;
+ 	int ret, i;
+ 
+ 	platform_node = of_parse_phandle(pdev->dev.of_node,
+@@ -647,11 +646,9 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	match = of_match_device(pdev->dev.driver->of_match_table, &pdev->dev);
+-	if (!match || !match->data)
++	card = (struct snd_soc_card *)of_device_get_match_data(&pdev->dev);
++	if (!card)
+ 		return -EINVAL;
+-
+-	card = (struct snd_soc_card *)match->data;
+ 	card->dev = &pdev->dev;
+ 
+ 	ec_codec = of_parse_phandle(pdev->dev.of_node, "mediatek,ec-codec", 0);
+diff --git a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+index 24a5d0adec1ba..c1d225b498513 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
++++ b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+@@ -1106,7 +1106,6 @@ static int mt8192_mt6359_dev_probe(struct platform_device *pdev)
+ 	struct device_node *platform_node, *hdmi_codec;
+ 	int ret, i;
+ 	struct snd_soc_dai_link *dai_link;
+-	const struct of_device_id *match;
+ 	struct mt8192_mt6359_priv *priv;
+ 
+ 	platform_node = of_parse_phandle(pdev->dev.of_node,
+@@ -1116,11 +1115,11 @@ static int mt8192_mt6359_dev_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	match = of_match_device(pdev->dev.driver->of_match_table, &pdev->dev);
+-	if (!match || !match->data)
+-		return -EINVAL;
+-
+-	card = (struct snd_soc_card *)match->data;
++	card = (struct snd_soc_card *)of_device_get_match_data(&pdev->dev);
++	if (!card) {
++		ret = -EINVAL;
++		goto put_platform_node;
++	}
+ 	card->dev = &pdev->dev;
+ 
+ 	hdmi_codec = of_parse_phandle(pdev->dev.of_node,
+@@ -1162,20 +1161,24 @@ static int mt8192_mt6359_dev_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+-	if (!priv)
+-		return -ENOMEM;
++	if (!priv) {
++		ret = -ENOMEM;
++		goto put_hdmi_codec;
++	}
+ 	snd_soc_card_set_drvdata(card, priv);
+ 
+ 	ret = mt8192_afe_gpio_init(&pdev->dev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "init gpio error %d\n", ret);
+-		return ret;
++		goto put_hdmi_codec;
+ 	}
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+ 
+-	of_node_put(platform_node);
++put_hdmi_codec:
+ 	of_node_put(hdmi_codec);
++put_platform_node:
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
+index 6a2d24d489647..879c1221a809b 100644
+--- a/sound/soc/mxs/mxs-saif.c
++++ b/sound/soc/mxs/mxs-saif.c
+@@ -455,7 +455,10 @@ static int mxs_saif_hw_params(struct snd_pcm_substream *substream,
+ 		* basic clock which should be fast enough for the internal
+ 		* logic.
+ 		*/
+-		clk_enable(saif->clk);
++		ret = clk_enable(saif->clk);
++		if (ret)
++			return ret;
++
+ 		ret = clk_set_rate(saif->clk, 24000000);
+ 		clk_disable(saif->clk);
+ 		if (ret)
+diff --git a/sound/soc/mxs/mxs-sgtl5000.c b/sound/soc/mxs/mxs-sgtl5000.c
+index a6407f4388de7..fb721bc499496 100644
+--- a/sound/soc/mxs/mxs-sgtl5000.c
++++ b/sound/soc/mxs/mxs-sgtl5000.c
+@@ -118,6 +118,9 @@ static int mxs_sgtl5000_probe(struct platform_device *pdev)
+ 	codec_np = of_parse_phandle(np, "audio-codec", 0);
+ 	if (!saif_np[0] || !saif_np[1] || !codec_np) {
+ 		dev_err(&pdev->dev, "phandle missing or invalid\n");
++		of_node_put(codec_np);
++		of_node_put(saif_np[0]);
++		of_node_put(saif_np[1]);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index a6d7656c206e5..4ce5d25793875 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -716,19 +716,23 @@ static int rockchip_i2s_probe(struct platform_device *pdev)
+ 	i2s->mclk = devm_clk_get(&pdev->dev, "i2s_clk");
+ 	if (IS_ERR(i2s->mclk)) {
+ 		dev_err(&pdev->dev, "Can't retrieve i2s master clock\n");
+-		return PTR_ERR(i2s->mclk);
++		ret = PTR_ERR(i2s->mclk);
++		goto err_clk;
+ 	}
+ 
+ 	regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+-	if (IS_ERR(regs))
+-		return PTR_ERR(regs);
++	if (IS_ERR(regs)) {
++		ret = PTR_ERR(regs);
++		goto err_clk;
++	}
+ 
+ 	i2s->regmap = devm_regmap_init_mmio(&pdev->dev, regs,
+ 					    &rockchip_i2s_regmap_config);
+ 	if (IS_ERR(i2s->regmap)) {
+ 		dev_err(&pdev->dev,
+ 			"Failed to initialise managed register map\n");
+-		return PTR_ERR(i2s->regmap);
++		ret = PTR_ERR(i2s->regmap);
++		goto err_clk;
+ 	}
+ 
+ 	i2s->bclk_ratio = 64;
+@@ -768,7 +772,8 @@ err_suspend:
+ 		i2s_runtime_suspend(&pdev->dev);
+ err_pm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+-
++err_clk:
++	clk_disable_unprepare(i2s->hclk);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index 5f9cb5c4c7f09..98700e75b82a1 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -469,14 +469,14 @@ static int rockchip_i2s_tdm_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		txcr_val = I2S_TXCR_IBM_NORMAL;
+ 		rxcr_val = I2S_RXCR_IBM_NORMAL;
+ 		break;
+-	case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+-		txcr_val = I2S_TXCR_TFS_PCM;
+-		rxcr_val = I2S_RXCR_TFS_PCM;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++	case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 mode */
+ 		txcr_val = I2S_TXCR_TFS_PCM | I2S_TXCR_PBM_MODE(1);
+ 		rxcr_val = I2S_RXCR_TFS_PCM | I2S_RXCR_PBM_MODE(1);
+ 		break;
++	case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++		txcr_val = I2S_TXCR_TFS_PCM;
++		rxcr_val = I2S_RXCR_TFS_PCM;
++		break;
+ 	default:
+ 		ret = -EINVAL;
+ 		goto err_pm_put;
+@@ -1738,7 +1738,7 @@ static int __maybe_unused rockchip_i2s_tdm_resume(struct device *dev)
+ 	struct rk_i2s_tdm_dev *i2s_tdm = dev_get_drvdata(dev);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 	ret = regcache_sync(i2s_tdm->regmap);
+diff --git a/sound/soc/sh/fsi.c b/sound/soc/sh/fsi.c
+index cdf3b7f69ba70..e9a1eb6bdf66a 100644
+--- a/sound/soc/sh/fsi.c
++++ b/sound/soc/sh/fsi.c
+@@ -816,14 +816,27 @@ static int fsi_clk_enable(struct device *dev,
+ 			return ret;
+ 		}
+ 
+-		clk_enable(clock->xck);
+-		clk_enable(clock->ick);
+-		clk_enable(clock->div);
++		ret = clk_enable(clock->xck);
++		if (ret)
++			goto err;
++		ret = clk_enable(clock->ick);
++		if (ret)
++			goto disable_xck;
++		ret = clk_enable(clock->div);
++		if (ret)
++			goto disable_ick;
+ 
+ 		clock->count++;
+ 	}
+ 
+ 	return ret;
++
++disable_ick:
++	clk_disable(clock->ick);
++disable_xck:
++	clk_disable(clock->xck);
++err:
++	return ret;
+ }
+ 
+ static int fsi_clk_disable(struct device *dev,
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index fa0cc08f70ec4..16de2633a8736 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -411,54 +411,56 @@ static int rz_ssi_pio_recv(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
+ {
+ 	struct snd_pcm_substream *substream = strm->substream;
+ 	struct snd_pcm_runtime *runtime;
++	bool done = false;
+ 	u16 *buf;
+ 	int fifo_samples;
+ 	int frames_left;
+-	int samples = 0;
++	int samples;
+ 	int i;
+ 
+ 	if (!rz_ssi_stream_is_valid(ssi, strm))
+ 		return -EINVAL;
+ 
+ 	runtime = substream->runtime;
+-	/* frames left in this period */
+-	frames_left = runtime->period_size - (strm->buffer_pos %
+-					      runtime->period_size);
+-	if (frames_left == 0)
+-		frames_left = runtime->period_size;
+ 
+-	/* Samples in RX FIFO */
+-	fifo_samples = (rz_ssi_reg_readl(ssi, SSIFSR) >>
+-			SSIFSR_RDC_SHIFT) & SSIFSR_RDC_MASK;
++	while (!done) {
++		/* frames left in this period */
++		frames_left = runtime->period_size -
++			      (strm->buffer_pos % runtime->period_size);
++		if (!frames_left)
++			frames_left = runtime->period_size;
++
++		/* Samples in RX FIFO */
++		fifo_samples = (rz_ssi_reg_readl(ssi, SSIFSR) >>
++				SSIFSR_RDC_SHIFT) & SSIFSR_RDC_MASK;
++
++		/* Only read full frames at a time */
++		samples = 0;
++		while (frames_left && (fifo_samples >= runtime->channels)) {
++			samples += runtime->channels;
++			fifo_samples -= runtime->channels;
++			frames_left--;
++		}
+ 
+-	/* Only read full frames at a time */
+-	while (frames_left && (fifo_samples >= runtime->channels)) {
+-		samples += runtime->channels;
+-		fifo_samples -= runtime->channels;
+-		frames_left--;
+-	}
++		/* not enough samples yet */
++		if (!samples)
++			break;
+ 
+-	/* not enough samples yet */
+-	if (samples == 0)
+-		return 0;
++		/* calculate new buffer index */
++		buf = (u16 *)(runtime->dma_area);
++		buf += strm->buffer_pos * runtime->channels;
+ 
+-	/* calculate new buffer index */
+-	buf = (u16 *)(runtime->dma_area);
+-	buf += strm->buffer_pos * runtime->channels;
+-
+-	/* Note, only supports 16-bit samples */
+-	for (i = 0; i < samples; i++)
+-		*buf++ = (u16)(rz_ssi_reg_readl(ssi, SSIFRDR) >> 16);
++		/* Note, only supports 16-bit samples */
++		for (i = 0; i < samples; i++)
++			*buf++ = (u16)(rz_ssi_reg_readl(ssi, SSIFRDR) >> 16);
+ 
+-	rz_ssi_reg_mask_setl(ssi, SSIFSR, SSIFSR_RDF, 0);
+-	rz_ssi_pointer_update(strm, samples / runtime->channels);
++		rz_ssi_reg_mask_setl(ssi, SSIFSR, SSIFSR_RDF, 0);
++		rz_ssi_pointer_update(strm, samples / runtime->channels);
+ 
+-	/*
+-	 * If we finished this period, but there are more samples in
+-	 * the RX FIFO, call this function again
+-	 */
+-	if (frames_left == 0 && fifo_samples >= runtime->channels)
+-		rz_ssi_pio_recv(ssi, strm);
++		/* check if there are no more samples in the RX FIFO */
++		if (!(!frames_left && fifo_samples >= runtime->channels))
++			done = true;
++	}
+ 
+ 	return 0;
+ }
+@@ -975,6 +977,9 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ 	ssi->playback.priv = ssi;
+ 	ssi->capture.priv = ssi;
+ 
++	spin_lock_init(&ssi->lock);
++	dev_set_drvdata(&pdev->dev, ssi);
++
+ 	/* Error Interrupt */
+ 	ssi->irq_int = platform_get_irq_byname(pdev, "int_req");
+ 	if (ssi->irq_int < 0)
+@@ -1022,8 +1027,6 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 	pm_runtime_resume_and_get(&pdev->dev);
+ 
+-	spin_lock_init(&ssi->lock);
+-	dev_set_drvdata(&pdev->dev, ssi);
+ 	ret = devm_snd_soc_register_component(&pdev->dev, &rz_ssi_soc_component,
+ 					      rz_ssi_soc_dai,
+ 					      ARRAY_SIZE(rz_ssi_soc_dai));
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 8e2494a9f3a7f..e9dd25894dc0f 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -567,6 +567,11 @@ int snd_soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ 		return -EINVAL;
+ 	}
+ 
++	if (!codec_dai) {
++		dev_err(rtd->card->dev, "Missing codec\n");
++		return -EINVAL;
++	}
++
+ 	/* check client and interface hw capabilities */
+ 	if (snd_soc_dai_stream_valid(codec_dai, SNDRV_PCM_STREAM_PLAYBACK) &&
+ 	    snd_soc_dai_stream_valid(cpu_dai,   SNDRV_PCM_STREAM_PLAYBACK))
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index dcf6be4c4aaad..0dc89f5684076 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -3184,7 +3184,7 @@ int snd_soc_get_dai_name(const struct of_phandle_args *args,
+ 	for_each_component(pos) {
+ 		struct device_node *component_of_node = soc_component_to_node(pos);
+ 
+-		if (component_of_node != args->np)
++		if (component_of_node != args->np || !pos->num_dai)
+ 			continue;
+ 
+ 		ret = snd_soc_component_of_xlate_dai_name(pos, args, dai_name);
+diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
+index c54c8ca8d7156..359987bf76d1b 100644
+--- a/sound/soc/soc-generic-dmaengine-pcm.c
++++ b/sound/soc/soc-generic-dmaengine-pcm.c
+@@ -86,10 +86,10 @@ static int dmaengine_pcm_hw_params(struct snd_soc_component *component,
+ 
+ 	memset(&slave_config, 0, sizeof(slave_config));
+ 
+-	if (!pcm->config)
+-		prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
+-	else
++	if (pcm->config && pcm->config->prepare_slave_config)
+ 		prepare_slave_config = pcm->config->prepare_slave_config;
++	else
++		prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
+ 
+ 	if (prepare_slave_config) {
+ 		int ret = prepare_slave_config(substream, params, &slave_config);
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index f5b9e66ac3b82..3bd0bba91600c 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -512,7 +512,8 @@ static int soc_tplg_kcontrol_bind_io(struct snd_soc_tplg_ctl_hdr *hdr,
+ 
+ 	if (le32_to_cpu(hdr->ops.info) == SND_SOC_TPLG_CTL_BYTES
+ 		&& k->iface & SNDRV_CTL_ELEM_IFACE_MIXER
+-		&& k->access & SNDRV_CTL_ELEM_ACCESS_TLV_READWRITE
++		&& (k->access & SNDRV_CTL_ELEM_ACCESS_TLV_READ
++		    || k->access & SNDRV_CTL_ELEM_ACCESS_TLV_WRITE)
+ 		&& k->access & SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK) {
+ 		struct soc_bytes_ext *sbe;
+ 		struct snd_soc_tplg_bytes_control *be;
+diff --git a/sound/soc/sof/imx/imx8m.c b/sound/soc/sof/imx/imx8m.c
+index e4618980cf8bc..1b20205ff0d14 100644
+--- a/sound/soc/sof/imx/imx8m.c
++++ b/sound/soc/sof/imx/imx8m.c
+@@ -192,6 +192,7 @@ static int imx8m_probe(struct snd_sof_dev *sdev)
+ 	}
+ 
+ 	ret = of_address_to_resource(res_node, 0, &res);
++	of_node_put(res_node);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to get reserved region address\n");
+ 		goto exit_pdev_unregister;
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index 88b6176af021c..d83e1a36707af 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -84,6 +84,7 @@ if SND_SOC_SOF_PCI
+ config SND_SOC_SOF_MERRIFIELD
+ 	tristate "SOF support for Tangier/Merrifield"
+ 	default SND_SOC_SOF_PCI
++	select SND_SOC_SOF_PCI_DEV
+ 	select SND_SOC_SOF_INTEL_ATOM_HIFI_EP
+ 	help
+ 	  This adds support for Sound Open Firmware for Intel(R) platforms
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index abad6d0ceb837..6354963061360 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -48,7 +48,7 @@ static struct hdac_ext_stream *cl_stream_prepare(struct snd_sof_dev *sdev, unsig
+ 	ret = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG, &pci->dev, size, dmab);
+ 	if (ret < 0) {
+ 		dev_err(sdev->dev, "error: memory alloc failed: %d\n", ret);
+-		goto error;
++		goto out_put;
+ 	}
+ 
+ 	hstream->period_bytes = 0;/* initialize period_bytes */
+@@ -59,22 +59,23 @@ static struct hdac_ext_stream *cl_stream_prepare(struct snd_sof_dev *sdev, unsig
+ 		ret = hda_dsp_iccmax_stream_hw_params(sdev, dsp_stream, dmab, NULL);
+ 		if (ret < 0) {
+ 			dev_err(sdev->dev, "error: iccmax stream prepare failed: %d\n", ret);
+-			goto error;
++			goto out_free;
+ 		}
+ 	} else {
+ 		ret = hda_dsp_stream_hw_params(sdev, dsp_stream, dmab, NULL);
+ 		if (ret < 0) {
+ 			dev_err(sdev->dev, "error: hdac prepare failed: %d\n", ret);
+-			goto error;
++			goto out_free;
+ 		}
+ 		hda_dsp_stream_spib_config(sdev, dsp_stream, HDA_DSP_SPIB_ENABLE, size);
+ 	}
+ 
+ 	return dsp_stream;
+ 
+-error:
+-	hda_dsp_stream_put(sdev, direction, hstream->stream_tag);
++out_free:
+ 	snd_dma_free_pages(dmab);
++out_put:
++	hda_dsp_stream_put(sdev, direction, hstream->stream_tag);
+ 	return ERR_PTR(ret);
+ }
+ 
+diff --git a/sound/soc/sof/intel/hda-pcm.c b/sound/soc/sof/intel/hda-pcm.c
+index 41cb60955f5c1..68834325d8fb1 100644
+--- a/sound/soc/sof/intel/hda-pcm.c
++++ b/sound/soc/sof/intel/hda-pcm.c
+@@ -278,6 +278,7 @@ int hda_dsp_pcm_open(struct snd_sof_dev *sdev,
+ 		runtime->hw.info &= ~SNDRV_PCM_INFO_PAUSE;
+ 
+ 	if (hda_always_enable_dmi_l1 ||
++	    direction == SNDRV_PCM_STREAM_PLAYBACK ||
+ 	    spcm->stream[substream->stream].d0i3_compatible)
+ 		flags |= SOF_HDA_STREAM_DMI_L1_COMPATIBLE;
+ 
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 25200a0e1dc9d..fc88296ab8987 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -1200,7 +1200,7 @@ static bool link_slaves_found(struct snd_sof_dev *sdev,
+ 	struct hdac_bus *bus = sof_to_bus(sdev);
+ 	struct sdw_intel_slave_id *ids = sdw->ids;
+ 	int num_slaves = sdw->num_slaves;
+-	unsigned int part_id, link_id, unique_id, mfg_id;
++	unsigned int part_id, link_id, unique_id, mfg_id, version;
+ 	int i, j, k;
+ 
+ 	for (i = 0; i < link->num_adr; i++) {
+@@ -1210,12 +1210,14 @@ static bool link_slaves_found(struct snd_sof_dev *sdev,
+ 		mfg_id = SDW_MFG_ID(adr);
+ 		part_id = SDW_PART_ID(adr);
+ 		link_id = SDW_DISCO_LINK_ID(adr);
++		version = SDW_VERSION(adr);
+ 
+ 		for (j = 0; j < num_slaves; j++) {
+ 			/* find out how many identical parts were reported on that link */
+ 			if (ids[j].link_id == link_id &&
+ 			    ids[j].id.part_id == part_id &&
+-			    ids[j].id.mfg_id == mfg_id)
++			    ids[j].id.mfg_id == mfg_id &&
++			    ids[j].id.sdw_version == version)
+ 				reported_part_count++;
+ 		}
+ 
+@@ -1224,21 +1226,24 @@ static bool link_slaves_found(struct snd_sof_dev *sdev,
+ 
+ 			if (ids[j].link_id != link_id ||
+ 			    ids[j].id.part_id != part_id ||
+-			    ids[j].id.mfg_id != mfg_id)
++			    ids[j].id.mfg_id != mfg_id ||
++			    ids[j].id.sdw_version != version)
+ 				continue;
+ 
+ 			/* find out how many identical parts are expected */
+ 			for (k = 0; k < link->num_adr; k++) {
+ 				u64 adr2 = link->adr_d[k].adr;
+-				unsigned int part_id2, link_id2, mfg_id2;
++				unsigned int part_id2, link_id2, mfg_id2, version2;
+ 
+ 				mfg_id2 = SDW_MFG_ID(adr2);
+ 				part_id2 = SDW_PART_ID(adr2);
+ 				link_id2 = SDW_DISCO_LINK_ID(adr2);
++				version2 = SDW_VERSION(adr2);
+ 
+ 				if (link_id2 == link_id &&
+ 				    part_id2 == part_id &&
+-				    mfg_id2 == mfg_id)
++				    mfg_id2 == mfg_id &&
++				    version2 == version)
+ 					expected_part_count++;
+ 			}
+ 
+diff --git a/sound/soc/ti/davinci-i2s.c b/sound/soc/ti/davinci-i2s.c
+index 6dca51862dd76..0363a088d2e00 100644
+--- a/sound/soc/ti/davinci-i2s.c
++++ b/sound/soc/ti/davinci-i2s.c
+@@ -708,7 +708,9 @@ static int davinci_i2s_probe(struct platform_device *pdev)
+ 	dev->clk = clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(dev->clk))
+ 		return -ENODEV;
+-	clk_enable(dev->clk);
++	ret = clk_enable(dev->clk);
++	if (ret)
++		goto err_put_clk;
+ 
+ 	dev->dev = &pdev->dev;
+ 	dev_set_drvdata(&pdev->dev, dev);
+@@ -730,6 +732,7 @@ err_unregister_component:
+ 	snd_soc_unregister_component(&pdev->dev);
+ err_release_clk:
+ 	clk_disable(dev->clk);
++err_put_clk:
+ 	clk_put(dev->clk);
+ 	return ret;
+ }
+diff --git a/sound/soc/xilinx/xlnx_formatter_pcm.c b/sound/soc/xilinx/xlnx_formatter_pcm.c
+index ce19a6058b279..5c4158069a5a8 100644
+--- a/sound/soc/xilinx/xlnx_formatter_pcm.c
++++ b/sound/soc/xilinx/xlnx_formatter_pcm.c
+@@ -84,6 +84,7 @@ struct xlnx_pcm_drv_data {
+ 	struct snd_pcm_substream *play_stream;
+ 	struct snd_pcm_substream *capture_stream;
+ 	struct clk *axi_clk;
++	unsigned int sysclk;
+ };
+ 
+ /*
+@@ -314,6 +315,15 @@ static irqreturn_t xlnx_s2mm_irq_handler(int irq, void *arg)
+ 	return IRQ_NONE;
+ }
+ 
++static int xlnx_formatter_set_sysclk(struct snd_soc_component *component,
++				     int clk_id, int source, unsigned int freq, int dir)
++{
++	struct xlnx_pcm_drv_data *adata = dev_get_drvdata(component->dev);
++
++	adata->sysclk = freq;
++	return 0;
++}
++
+ static int xlnx_formatter_pcm_open(struct snd_soc_component *component,
+ 				   struct snd_pcm_substream *substream)
+ {
+@@ -450,11 +460,25 @@ static int xlnx_formatter_pcm_hw_params(struct snd_soc_component *component,
+ 	u64 size;
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct xlnx_pcm_stream_param *stream_data = runtime->private_data;
++	struct xlnx_pcm_drv_data *adata = dev_get_drvdata(component->dev);
+ 
+ 	active_ch = params_channels(params);
+ 	if (active_ch > stream_data->ch_limit)
+ 		return -EINVAL;
+ 
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
++	    adata->sysclk) {
++		unsigned int mclk_fs = adata->sysclk / params_rate(params);
++
++		if (adata->sysclk % params_rate(params) != 0) {
++			dev_warn(component->dev, "sysclk %u not divisible by rate %u\n",
++				 adata->sysclk, params_rate(params));
++			return -EINVAL;
++		}
++
++		writel(mclk_fs, stream_data->mmio + XLNX_AUD_FS_MULTIPLIER);
++	}
++
+ 	if (substream->stream == SNDRV_PCM_STREAM_CAPTURE &&
+ 	    stream_data->xfer_mode == AES_TO_PCM) {
+ 		val = readl(stream_data->mmio + XLNX_AUD_STS);
+@@ -552,6 +576,7 @@ static int xlnx_formatter_pcm_new(struct snd_soc_component *component,
+ 
+ static const struct snd_soc_component_driver xlnx_asoc_component = {
+ 	.name		= DRV_NAME,
++	.set_sysclk	= xlnx_formatter_set_sysclk,
+ 	.open		= xlnx_formatter_pcm_open,
+ 	.close		= xlnx_formatter_pcm_close,
+ 	.hw_params	= xlnx_formatter_pcm_hw_params,
+diff --git a/sound/spi/at73c213.c b/sound/spi/at73c213.c
+index 76c0e37a838cf..8a2da6b1012eb 100644
+--- a/sound/spi/at73c213.c
++++ b/sound/spi/at73c213.c
+@@ -218,7 +218,9 @@ static int snd_at73c213_pcm_open(struct snd_pcm_substream *substream)
+ 	runtime->hw = snd_at73c213_playback_hw;
+ 	chip->substream = substream;
+ 
+-	clk_enable(chip->ssc->clk);
++	err = clk_enable(chip->ssc->clk);
++	if (err)
++		return err;
+ 
+ 	return 0;
+ }
+@@ -776,7 +778,9 @@ static int snd_at73c213_chip_init(struct snd_at73c213 *chip)
+ 		goto out;
+ 
+ 	/* Enable DAC master clock. */
+-	clk_enable(chip->board->dac_clk);
++	retval = clk_enable(chip->board->dac_clk);
++	if (retval)
++		goto out;
+ 
+ 	/* Initialize at73c213 on SPI bus. */
+ 	retval = snd_at73c213_write_reg(chip, DAC_RST, 0x04);
+@@ -889,7 +893,9 @@ static int snd_at73c213_dev_init(struct snd_card *card,
+ 	chip->card = card;
+ 	chip->irq = -1;
+ 
+-	clk_enable(chip->ssc->clk);
++	retval = clk_enable(chip->ssc->clk);
++	if (retval)
++		return retval;
+ 
+ 	retval = request_irq(irq, snd_at73c213_interrupt, 0, "at73c213", chip);
+ 	if (retval) {
+@@ -1008,7 +1014,9 @@ static int snd_at73c213_remove(struct spi_device *spi)
+ 	int retval;
+ 
+ 	/* Stop playback. */
+-	clk_enable(chip->ssc->clk);
++	retval = clk_enable(chip->ssc->clk);
++	if (retval)
++		goto out;
+ 	ssc_writel(chip->ssc->regs, CR, SSC_BIT(CR_TXDIS));
+ 	clk_disable(chip->ssc->clk);
+ 
+@@ -1088,9 +1096,16 @@ static int snd_at73c213_resume(struct device *dev)
+ {
+ 	struct snd_card *card = dev_get_drvdata(dev);
+ 	struct snd_at73c213 *chip = card->private_data;
++	int retval;
+ 
+-	clk_enable(chip->board->dac_clk);
+-	clk_enable(chip->ssc->clk);
++	retval = clk_enable(chip->board->dac_clk);
++	if (retval)
++		return retval;
++	retval = clk_enable(chip->ssc->clk);
++	if (retval) {
++		clk_disable(chip->board->dac_clk);
++		return retval;
++	}
+ 	ssc_writel(chip->ssc->regs, CR, SSC_BIT(CR_TXEN));
+ 
+ 	return 0;
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index 015d2758f8264..4a561ec848c0e 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -899,7 +899,7 @@ static int do_show(int argc, char **argv)
+ 				      equal_fn_for_key_as_id, NULL);
+ 	btf_map_table = hashmap__new(hash_fn_for_key_as_id,
+ 				     equal_fn_for_key_as_id, NULL);
+-	if (!btf_prog_table || !btf_map_table) {
++	if (IS_ERR(btf_prog_table) || IS_ERR(btf_map_table)) {
+ 		hashmap__free(btf_prog_table);
+ 		hashmap__free(btf_map_table);
+ 		if (fd >= 0)
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index 5c18351290f00..587027050f431 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -928,7 +928,6 @@ static int do_skeleton(int argc, char **argv)
+ 			s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\
+ 			if (!s)						    \n\
+ 				goto err;				    \n\
+-			obj->skeleton = s;				    \n\
+ 									    \n\
+ 			s->sz = sizeof(*s);				    \n\
+ 			s->name = \"%1$s\";				    \n\
+@@ -1001,6 +1000,7 @@ static int do_skeleton(int argc, char **argv)
+ 									    \n\
+ 			s->data = (void *)%2$s__elf_bytes(&s->data_sz);	    \n\
+ 									    \n\
++			obj->skeleton = s;				    \n\
+ 			return 0;					    \n\
+ 		err:							    \n\
+ 			bpf_object__destroy_skeleton(s);		    \n\
+diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
+index 2c258db0d3521..97dec81950e5d 100644
+--- a/tools/bpf/bpftool/link.c
++++ b/tools/bpf/bpftool/link.c
+@@ -2,6 +2,7 @@
+ /* Copyright (C) 2020 Facebook */
+ 
+ #include <errno.h>
++#include <linux/err.h>
+ #include <net/if.h>
+ #include <stdio.h>
+ #include <unistd.h>
+@@ -306,7 +307,7 @@ static int do_show(int argc, char **argv)
+ 	if (show_pinned) {
+ 		link_table = hashmap__new(hash_fn_for_key_as_id,
+ 					  equal_fn_for_key_as_id, NULL);
+-		if (!link_table) {
++		if (IS_ERR(link_table)) {
+ 			p_err("failed to create hashmap for pinned paths");
+ 			return -1;
+ 		}
+diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
+index cae1f11192969..59d8e72851daf 100644
+--- a/tools/bpf/bpftool/map.c
++++ b/tools/bpf/bpftool/map.c
+@@ -619,17 +619,14 @@ static int show_map_close_plain(int fd, struct bpf_map_info *info)
+ 					    u32_as_hash_field(info->id))
+ 			printf("\n\tpinned %s", (char *)entry->value);
+ 	}
+-	printf("\n");
+ 
+ 	if (frozen_str) {
+ 		frozen = atoi(frozen_str);
+ 		free(frozen_str);
+ 	}
+ 
+-	if (!info->btf_id && !frozen)
+-		return 0;
+-
+-	printf("\t");
++	if (info->btf_id || frozen)
++		printf("\n\t");
+ 
+ 	if (info->btf_id)
+ 		printf("btf_id %d", info->btf_id);
+@@ -698,7 +695,7 @@ static int do_show(int argc, char **argv)
+ 	if (show_pinned) {
+ 		map_table = hashmap__new(hash_fn_for_key_as_id,
+ 					 equal_fn_for_key_as_id, NULL);
+-		if (!map_table) {
++		if (IS_ERR(map_table)) {
+ 			p_err("failed to create hashmap for pinned paths");
+ 			return -1;
+ 		}
+@@ -1053,11 +1050,9 @@ static void print_key_value(struct bpf_map_info *info, void *key,
+ 	json_writer_t *btf_wtr;
+ 	struct btf *btf;
+ 
+-	btf = btf__load_from_kernel_by_id(info->btf_id);
+-	if (libbpf_get_error(btf)) {
+-		p_err("failed to get btf");
++	btf = get_map_kv_btf(info);
++	if (libbpf_get_error(btf))
+ 		return;
+-	}
+ 
+ 	if (json_output) {
+ 		print_entry_json(info, key, value, btf);
+diff --git a/tools/bpf/bpftool/pids.c b/tools/bpf/bpftool/pids.c
+index 56b598eee043a..7c384d10e95f8 100644
+--- a/tools/bpf/bpftool/pids.c
++++ b/tools/bpf/bpftool/pids.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ /* Copyright (C) 2020 Facebook */
+ #include <errno.h>
++#include <linux/err.h>
+ #include <stdbool.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+@@ -101,7 +102,7 @@ int build_obj_refs_table(struct hashmap **map, enum bpf_obj_type type)
+ 	libbpf_print_fn_t default_print;
+ 
+ 	*map = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL);
+-	if (!*map) {
++	if (IS_ERR(*map)) {
+ 		p_err("failed to create hashmap for PID references");
+ 		return -1;
+ 	}
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 6ccd17b8eb560..41ed566e8c73e 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -571,7 +571,7 @@ static int do_show(int argc, char **argv)
+ 	if (show_pinned) {
+ 		prog_table = hashmap__new(hash_fn_for_key_as_id,
+ 					  equal_fn_for_key_as_id, NULL);
+-		if (!prog_table) {
++		if (IS_ERR(prog_table)) {
+ 			p_err("failed to create hashmap for pinned paths");
+ 			return -1;
+ 		}
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index b12cfceddb6e9..7faad2d5d05a8 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -2284,8 +2284,8 @@ union bpf_attr {
+  * 	Return
+  * 		The return value depends on the result of the test, and can be:
+  *
+- *		* 0, if current task belongs to the cgroup2.
+- *		* 1, if current task does not belong to the cgroup2.
++ *		* 1, if current task belongs to the cgroup2.
++ *		* 0, if current task does not belong to the cgroup2.
+  * 		* A negative error code, if an error occurred.
+  *
+  * long bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
+index 17c0a46d8cd22..fc393c4542e86 100644
+--- a/tools/lib/bpf/btf.h
++++ b/tools/lib/bpf/btf.h
+@@ -313,8 +313,28 @@ btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
+ 			 const struct btf_dump_type_data_opts *opts);
+ 
+ /*
+- * A set of helpers for easier BTF types handling
++ * A set of helpers for easier BTF types handling.
++ *
++ * The inline functions below rely on constants from the kernel headers which
++ * may not be available for applications including this header file. To avoid
++ * compilation errors, we define all the constants here that were added after
++ * the initial introduction of the BTF_KIND* constants.
+  */
++#ifndef BTF_KIND_FUNC
++#define BTF_KIND_FUNC		12	/* Function	*/
++#define BTF_KIND_FUNC_PROTO	13	/* Function Proto	*/
++#endif
++#ifndef BTF_KIND_VAR
++#define BTF_KIND_VAR		14	/* Variable	*/
++#define BTF_KIND_DATASEC	15	/* Section	*/
++#endif
++#ifndef BTF_KIND_FLOAT
++#define BTF_KIND_FLOAT		16	/* Floating point	*/
++#endif
++/* The kernel header switched to enums, so these two were never #defined */
++#define BTF_KIND_DECL_TAG	17	/* Decl Tag */
++#define BTF_KIND_TYPE_TAG	18	/* Type Tag */
++
+ static inline __u16 btf_kind(const struct btf_type *t)
+ {
+ 	return BTF_INFO_KIND(t->info);
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 5cae71600631b..96af8c4ecf789 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1483,6 +1483,11 @@ static const char *btf_dump_resolve_name(struct btf_dump *d, __u32 id,
+ 	if (s->name_resolved)
+ 		return *cached_name ? *cached_name : orig_name;
+ 
++	if (btf_is_fwd(t) || (btf_is_enum(t) && btf_vlen(t) == 0)) {
++		s->name_resolved = 1;
++		return orig_name;
++	}
++
+ 	dup_cnt = btf_dump_name_dups(d, name_map, orig_name);
+ 	if (dup_cnt > 1) {
+ 		const size_t max_len = 256;
+@@ -1839,14 +1844,16 @@ static int btf_dump_array_data(struct btf_dump *d,
+ {
+ 	const struct btf_array *array = btf_array(t);
+ 	const struct btf_type *elem_type;
+-	__u32 i, elem_size = 0, elem_type_id;
++	__u32 i, elem_type_id;
++	__s64 elem_size;
+ 	bool is_array_member;
+ 
+ 	elem_type_id = array->type;
+ 	elem_type = skip_mods_and_typedefs(d->btf, elem_type_id, NULL);
+ 	elem_size = btf__resolve_size(d->btf, elem_type_id);
+ 	if (elem_size <= 0) {
+-		pr_warn("unexpected elem size %d for array type [%u]\n", elem_size, id);
++		pr_warn("unexpected elem size %zd for array type [%u]\n",
++			(ssize_t)elem_size, id);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index c7ba5e6ed9cfe..23edb41637d43 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -11497,6 +11497,9 @@ void bpf_object__detach_skeleton(struct bpf_object_skeleton *s)
+ 
+ void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
+ {
++	if (!s)
++		return;
++
+ 	if (s->progs)
+ 		bpf_object__detach_skeleton(s);
+ 	if (s->obj)
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index 39f25e09b51e2..fadde7d80a51c 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -87,29 +87,75 @@ enum {
+ 	NL_DONE,
+ };
+ 
++static int netlink_recvmsg(int sock, struct msghdr *mhdr, int flags)
++{
++	int len;
++
++	do {
++		len = recvmsg(sock, mhdr, flags);
++	} while (len < 0 && (errno == EINTR || errno == EAGAIN));
++
++	if (len < 0)
++		return -errno;
++	return len;
++}
++
++static int alloc_iov(struct iovec *iov, int len)
++{
++	void *nbuf;
++
++	nbuf = realloc(iov->iov_base, len);
++	if (!nbuf)
++		return -ENOMEM;
++
++	iov->iov_base = nbuf;
++	iov->iov_len = len;
++	return 0;
++}
++
+ static int libbpf_netlink_recv(int sock, __u32 nl_pid, int seq,
+ 			       __dump_nlmsg_t _fn, libbpf_dump_nlmsg_t fn,
+ 			       void *cookie)
+ {
++	struct iovec iov = {};
++	struct msghdr mhdr = {
++		.msg_iov = &iov,
++		.msg_iovlen = 1,
++	};
+ 	bool multipart = true;
+ 	struct nlmsgerr *err;
+ 	struct nlmsghdr *nh;
+-	char buf[4096];
+ 	int len, ret;
+ 
++	ret = alloc_iov(&iov, 4096);
++	if (ret)
++		goto done;
++
+ 	while (multipart) {
+ start:
+ 		multipart = false;
+-		len = recv(sock, buf, sizeof(buf), 0);
++		len = netlink_recvmsg(sock, &mhdr, MSG_PEEK | MSG_TRUNC);
++		if (len < 0) {
++			ret = len;
++			goto done;
++		}
++
++		if (len > iov.iov_len) {
++			ret = alloc_iov(&iov, len);
++			if (ret)
++				goto done;
++		}
++
++		len = netlink_recvmsg(sock, &mhdr, 0);
+ 		if (len < 0) {
+-			ret = -errno;
++			ret = len;
+ 			goto done;
+ 		}
+ 
+ 		if (len == 0)
+ 			break;
+ 
+-		for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
++		for (nh = (struct nlmsghdr *)iov.iov_base; NLMSG_OK(nh, len);
+ 		     nh = NLMSG_NEXT(nh, len)) {
+ 			if (nh->nlmsg_pid != nl_pid) {
+ 				ret = -LIBBPF_ERRNO__WRNGPID;
+@@ -130,7 +176,8 @@ start:
+ 				libbpf_nla_dump_errormsg(nh);
+ 				goto done;
+ 			case NLMSG_DONE:
+-				return 0;
++				ret = 0;
++				goto done;
+ 			default:
+ 				break;
+ 			}
+@@ -142,15 +189,17 @@ start:
+ 				case NL_NEXT:
+ 					goto start;
+ 				case NL_DONE:
+-					return 0;
++					ret = 0;
++					goto done;
+ 				default:
+-					return ret;
++					goto done;
+ 				}
+ 			}
+ 		}
+ 	}
+ 	ret = 0;
+ done:
++	free(iov.iov_base);
+ 	return ret;
+ }
+ 
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index 81f8fbc85e70d..c43b4d94346d5 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -1210,12 +1210,23 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+ 
+ int xsk_umem__delete(struct xsk_umem *umem)
+ {
++	struct xdp_mmap_offsets off;
++	int err;
++
+ 	if (!umem)
+ 		return 0;
+ 
+ 	if (umem->refcount)
+ 		return -EBUSY;
+ 
++	err = xsk_get_mmap_offsets(umem->fd, &off);
++	if (!err && umem->fill_save && umem->comp_save) {
++		munmap(umem->fill_save->ring - off.fr.desc,
++		       off.fr.desc + umem->config.fill_size * sizeof(__u64));
++		munmap(umem->comp_save->ring - off.cr.desc,
++		       off.cr.desc + umem->config.comp_size * sizeof(__u64));
++	}
++
+ 	close(umem->fd);
+ 	free(umem);
+ 
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 7974933dbc77f..08c024038ca73 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -956,10 +956,10 @@ try_again_reset:
+ 	 * Enable counters and exec the command:
+ 	 */
+ 	if (forks) {
+-		evlist__start_workload(evsel_list);
+ 		err = enable_counters();
+ 		if (err)
+ 			return -1;
++		evlist__start_workload(evsel_list);
+ 
+ 		t0 = rdclock();
+ 		clock_gettime(CLOCK_MONOTONIC, &ref_time);
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/cache.json b/tools/perf/pmu-events/arch/x86/skylakex/cache.json
+index 9ff67206ade4e..821d2f2a8f251 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/cache.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/cache.json
+@@ -314,6 +314,19 @@
+         "SampleAfterValue": "2000003",
+         "UMask": "0x82"
+     },
++    {
++        "BriefDescription": "All retired memory instructions.",
++        "Counter": "0,1,2,3",
++        "CounterHTOff": "0,1,2,3",
++        "Data_LA": "1",
++        "EventCode": "0xD0",
++        "EventName": "MEM_INST_RETIRED.ANY",
++        "L1_Hit_Indication": "1",
++        "PEBS": "1",
++        "PublicDescription": "Counts all retired memory instructions - loads and stores.",
++        "SampleAfterValue": "2000003",
++        "UMask": "0x83"
++    },
+     {
+         "BriefDescription": "Retired load instructions with locked access.",
+         "Counter": "0,1,2,3",
+@@ -358,6 +371,7 @@
+         "EventCode": "0xD0",
+         "EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
+         "PEBS": "1",
++        "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB).",
+         "SampleAfterValue": "100003",
+         "UMask": "0x11"
+     },
+@@ -370,6 +384,7 @@
+         "EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
+         "L1_Hit_Indication": "1",
+         "PEBS": "1",
++        "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB).",
+         "SampleAfterValue": "100003",
+         "UMask": "0x12"
+     },
+@@ -733,7 +748,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010491",
++        "MSRValue": "0x10491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -772,7 +787,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0491",
++        "MSRValue": "0x4003C0491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -785,7 +800,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0491",
++        "MSRValue": "0x1003C0491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -798,7 +813,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0491",
++        "MSRValue": "0x8003C0491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -811,7 +826,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010490",
++        "MSRValue": "0x10490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -850,7 +865,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0490",
++        "MSRValue": "0x4003C0490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -863,7 +878,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0490",
++        "MSRValue": "0x1003C0490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -876,7 +891,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0490",
++        "MSRValue": "0x8003C0490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -889,7 +904,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010120",
++        "MSRValue": "0x10120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -928,7 +943,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0120",
++        "MSRValue": "0x4003C0120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -941,7 +956,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0120",
++        "MSRValue": "0x1003C0120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -954,7 +969,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0120",
++        "MSRValue": "0x8003C0120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -967,7 +982,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010122",
++        "MSRValue": "0x10122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1006,7 +1021,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0122",
++        "MSRValue": "0x4003C0122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1019,7 +1034,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0122",
++        "MSRValue": "0x1003C0122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1032,7 +1047,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0122",
++        "MSRValue": "0x8003C0122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1045,7 +1060,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010004",
++        "MSRValue": "0x10004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1084,7 +1099,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0004",
++        "MSRValue": "0x4003C0004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1097,7 +1112,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0004",
++        "MSRValue": "0x1003C0004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1110,7 +1125,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0004",
++        "MSRValue": "0x8003C0004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1123,7 +1138,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010001",
++        "MSRValue": "0x10001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1162,7 +1177,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0001",
++        "MSRValue": "0x4003C0001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1175,7 +1190,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0001",
++        "MSRValue": "0x1003C0001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1188,7 +1203,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0001",
++        "MSRValue": "0x8003C0001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1201,7 +1216,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010002",
++        "MSRValue": "0x10002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1240,7 +1255,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0002",
++        "MSRValue": "0x4003C0002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1253,7 +1268,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0002",
++        "MSRValue": "0x1003C0002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1266,7 +1281,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0002",
++        "MSRValue": "0x8003C0002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1279,7 +1294,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010400",
++        "MSRValue": "0x10400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1318,7 +1333,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0400",
++        "MSRValue": "0x4003C0400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1331,7 +1346,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0400",
++        "MSRValue": "0x1003C0400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1344,7 +1359,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0400",
++        "MSRValue": "0x8003C0400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1357,7 +1372,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010010",
++        "MSRValue": "0x10010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1396,7 +1411,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0010",
++        "MSRValue": "0x4003C0010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1409,7 +1424,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0010",
++        "MSRValue": "0x1003C0010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1422,7 +1437,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0010",
++        "MSRValue": "0x8003C0010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1435,7 +1450,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010020",
++        "MSRValue": "0x10020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1474,7 +1489,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0020",
++        "MSRValue": "0x4003C0020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1487,7 +1502,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0020",
++        "MSRValue": "0x1003C0020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1500,7 +1515,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0020",
++        "MSRValue": "0x8003C0020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1513,7 +1528,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010080",
++        "MSRValue": "0x10080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1552,7 +1567,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0080",
++        "MSRValue": "0x4003C0080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1565,7 +1580,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0080",
++        "MSRValue": "0x1003C0080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1578,7 +1593,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0080",
++        "MSRValue": "0x8003C0080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1591,7 +1606,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.ANY_RESPONSE",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0000010100",
++        "MSRValue": "0x10100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1630,7 +1645,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x04003C0100",
++        "MSRValue": "0x4003C0100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1643,7 +1658,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x01003C0100",
++        "MSRValue": "0x1003C0100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1656,7 +1671,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x08003C0100",
++        "MSRValue": "0x8003C0100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json b/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json
+index 503737ed3a83c..9e873ab224502 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json
+@@ -1,73 +1,81 @@
+ [
+     {
+-        "BriefDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 2 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT14 RCP14 DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++        "BriefDescription": "Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
++        "PublicDescription": "Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 2 computation operations, one for each element.  Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x4"
+     },
+     {
+-        "BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 4 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++        "BriefDescription": "Counts once for most SIMD 128-bit packed computational single precision floating-point instruction retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
++        "PublicDescription": "Counts once for most SIMD 128-bit packed computational single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 4 computation operations, one for each element.  Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x8"
+     },
+     {
+-        "BriefDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 4 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++        "BriefDescription": "Counts once for most SIMD 256-bit packed double computational precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
++        "PublicDescription": "Counts once for most SIMD 256-bit packed double computational precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 4 computation operations, one for each element.  Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB.  FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x10"
+     },
+     {
+-        "BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++        "BriefDescription": "Counts once for most SIMD 256-bit packed single computational precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
++        "PublicDescription": "Counts once for most SIMD 256-bit packed single computational precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x20"
+     },
+     {
+-        "BriefDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 8 calculations per element.",
++        "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
++        "PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x40"
+     },
+     {
+-        "BriefDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 16 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 16 calculations per element.",
++        "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 16 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
++        "PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 16 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x80"
+     },
+     {
+-        "BriefDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 1 computation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++        "BriefDescription": "Counts once for most SIMD scalar computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
++        "PublicDescription": "Counts once for most SIMD scalar computational double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 1 computational operation. Applies to SIMD scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB.  FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x1"
+     },
+     {
+-        "BriefDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 1 computation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++        "BriefDescription": "Counts once for most SIMD scalar computational single precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3,4,5,6,7",
+         "EventCode": "0xC7",
+         "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
++        "PublicDescription": "Counts once for most SIMD scalar computational single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 1 computational operation. Applies to SIMD scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB.  FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+         "SampleAfterValue": "2000003",
+         "UMask": "0x2"
+     },
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/frontend.json b/tools/perf/pmu-events/arch/x86/skylakex/frontend.json
+index 078706a500919..ecce4273ae52c 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/frontend.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/frontend.json
+@@ -30,7 +30,21 @@
+         "UMask": "0x2"
+     },
+     {
+-        "BriefDescription": "Retired Instructions who experienced decode stream buffer (DSB - the decoded instruction-cache) miss.",
++        "BriefDescription": "Retired Instructions who experienced DSB miss.",
++        "Counter": "0,1,2,3",
++        "CounterHTOff": "0,1,2,3",
++        "EventCode": "0xC6",
++        "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
++        "MSRIndex": "0x3F7",
++        "MSRValue": "0x1",
++        "PEBS": "1",
++        "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
++        "SampleAfterValue": "100007",
++        "TakenAlone": "1",
++        "UMask": "0x1"
++    },
++    {
++        "BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+         "Counter": "0,1,2,3",
+         "CounterHTOff": "0,1,2,3",
+         "EventCode": "0xC6",
+@@ -38,7 +52,7 @@
+         "MSRIndex": "0x3F7",
+         "MSRValue": "0x11",
+         "PEBS": "1",
+-        "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
++        "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss.",
+         "SampleAfterValue": "100007",
+         "TakenAlone": "1",
+         "UMask": "0x1"
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/memory.json b/tools/perf/pmu-events/arch/x86/skylakex/memory.json
+index 6f29b02fa320c..60c286b4fe54c 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/memory.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/memory.json
+@@ -299,7 +299,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00491",
++        "MSRValue": "0x83FC00491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -312,7 +312,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00491",
++        "MSRValue": "0x63FC00491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -325,7 +325,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000491",
++        "MSRValue": "0x604000491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -338,7 +338,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800491",
++        "MSRValue": "0x63B800491",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -377,7 +377,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00490",
++        "MSRValue": "0x83FC00490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -390,7 +390,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00490",
++        "MSRValue": "0x63FC00490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -403,7 +403,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000490",
++        "MSRValue": "0x604000490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -416,7 +416,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800490",
++        "MSRValue": "0x63B800490",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -455,7 +455,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00120",
++        "MSRValue": "0x83FC00120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -468,7 +468,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00120",
++        "MSRValue": "0x63FC00120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -481,7 +481,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000120",
++        "MSRValue": "0x604000120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -494,7 +494,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800120",
++        "MSRValue": "0x63B800120",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -533,7 +533,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00122",
++        "MSRValue": "0x83FC00122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -546,7 +546,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00122",
++        "MSRValue": "0x63FC00122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -559,7 +559,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000122",
++        "MSRValue": "0x604000122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -572,7 +572,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800122",
++        "MSRValue": "0x63B800122",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -611,7 +611,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00004",
++        "MSRValue": "0x83FC00004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -624,7 +624,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00004",
++        "MSRValue": "0x63FC00004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -637,7 +637,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000004",
++        "MSRValue": "0x604000004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -650,7 +650,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800004",
++        "MSRValue": "0x63B800004",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -689,7 +689,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00001",
++        "MSRValue": "0x83FC00001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -702,7 +702,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00001",
++        "MSRValue": "0x63FC00001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -715,7 +715,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000001",
++        "MSRValue": "0x604000001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -728,7 +728,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800001",
++        "MSRValue": "0x63B800001",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -767,7 +767,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00002",
++        "MSRValue": "0x83FC00002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -780,7 +780,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00002",
++        "MSRValue": "0x63FC00002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -793,7 +793,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000002",
++        "MSRValue": "0x604000002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -806,7 +806,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800002",
++        "MSRValue": "0x63B800002",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -845,7 +845,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00400",
++        "MSRValue": "0x83FC00400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -858,7 +858,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00400",
++        "MSRValue": "0x63FC00400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -871,7 +871,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000400",
++        "MSRValue": "0x604000400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -884,7 +884,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800400",
++        "MSRValue": "0x63B800400",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -923,7 +923,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00010",
++        "MSRValue": "0x83FC00010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -936,7 +936,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00010",
++        "MSRValue": "0x63FC00010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -949,7 +949,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000010",
++        "MSRValue": "0x604000010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -962,7 +962,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800010",
++        "MSRValue": "0x63B800010",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1001,7 +1001,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00020",
++        "MSRValue": "0x83FC00020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1014,7 +1014,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00020",
++        "MSRValue": "0x63FC00020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1027,7 +1027,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000020",
++        "MSRValue": "0x604000020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1040,7 +1040,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800020",
++        "MSRValue": "0x63B800020",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1079,7 +1079,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00080",
++        "MSRValue": "0x83FC00080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1092,7 +1092,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00080",
++        "MSRValue": "0x63FC00080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1105,7 +1105,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000080",
++        "MSRValue": "0x604000080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1118,7 +1118,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800080",
++        "MSRValue": "0x63B800080",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1157,7 +1157,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x083FC00100",
++        "MSRValue": "0x83FC00100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1170,7 +1170,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063FC00100",
++        "MSRValue": "0x63FC00100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1183,7 +1183,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x0604000100",
++        "MSRValue": "0x604000100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+@@ -1196,7 +1196,7 @@
+         "EventCode": "0xB7, 0xBB",
+         "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+         "MSRIndex": "0x1a6,0x1a7",
+-        "MSRValue": "0x063B800100",
++        "MSRValue": "0x63B800100",
+         "Offcore": "1",
+         "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+         "SampleAfterValue": "100003",
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json b/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json
+index ca57481206660..12eabae3e2242 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json
+@@ -435,6 +435,17 @@
+         "PublicDescription": "Counts the number of instructions (EOMs) retired. Counting covers macro-fused instructions individually (that is, increments by two).",
+         "SampleAfterValue": "2000003"
+     },
++    {
++        "BriefDescription": "Number of all retired NOP instructions.",
++        "Counter": "0,1,2,3",
++        "CounterHTOff": "0,1,2,3,4,5,6,7",
++        "Errata": "SKL091, SKL044",
++        "EventCode": "0xC0",
++        "EventName": "INST_RETIRED.NOP",
++        "PEBS": "1",
++        "SampleAfterValue": "2000003",
++        "UMask": "0x2"
++    },
+     {
+         "BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+         "Counter": "1",
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
+index 863c9e103969e..b016f7d1ff3de 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
+@@ -1,26 +1,167 @@
+ [
++    {
++        "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
++        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)",
++        "MetricGroup": "TopdownL1",
++        "MetricName": "Frontend_Bound",
++        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound."
++    },
++    {
++        "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++        "MetricGroup": "TopdownL1_SMT",
++        "MetricName": "Frontend_Bound_SMT",
++        "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
++    {
++        "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
++        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)",
++        "MetricGroup": "TopdownL1",
++        "MetricName": "Bad_Speculation",
++        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example."
++    },
++    {
++        "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations. SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++        "MetricGroup": "TopdownL1_SMT",
++        "MetricName": "Bad_Speculation_SMT",
++        "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
++    {
++        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
++        "MetricConstraint": "NO_NMI_WATCHDOG",
++        "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)",
++        "MetricGroup": "TopdownL1",
++        "MetricName": "Backend_Bound",
++        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound."
++    },
++    {
++        "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++        "MetricGroup": "TopdownL1_SMT",
++        "MetricName": "Backend_Bound_SMT",
++        "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
++    {
++        "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
++        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)",
++        "MetricGroup": "TopdownL1",
++        "MetricName": "Retiring",
++        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. "
++    },
++    {
++        "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++        "MetricGroup": "TopdownL1_SMT",
++        "MetricName": "Retiring_SMT",
++        "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
++    {
++        "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
++        "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) )",
++        "MetricGroup": "Bad;BadSpec;BrMispredicts",
++        "MetricName": "Mispredictions"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
++        "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) )",
++        "MetricGroup": "Bad;BadSpec;BrMispredicts_SMT",
++        "MetricName": "Mispredictions_SMT"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
++        "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREAD) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (OFFCORE_REQUESTS_BUFFER.SQ_FULL / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( ((L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / CPU_CLK_UNHALTED.THREAD) / #(max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) ",
++        "MetricGroup": "Mem;MemoryBW;Offcore",
++        "MetricName": "Memory_Bandwidth"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
++        "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREAD) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( ((L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / CPU_CLK_UNHALTED.THREAD) / #(max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) ",
++        "MetricGroup": "Mem;MemoryBW;Offcore_SMT",
++        "MetricName": "Memory_Bandwidth_SMT"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
++        "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREAD)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (( (20.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) - (3.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) ) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) + ( (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) )",
++        "MetricGroup": "Mem;MemoryLat;Offcore",
++        "MetricName": "Memory_Latency"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
++        "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREAD)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( (20.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) - (3.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) ) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) + ( (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) )",
++        "MetricGroup": "Mem;MemoryLat;Offcore_SMT",
++        "MetricName": "Memory_Latency_SMT"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
++        "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( 9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 ) ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE ) / CPU_CLK_UNHALTED.THREAD) / #(EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) ) ",
++        "MetricGroup": "Mem;MemoryTLB",
++        "MetricName": "Memory_Data_TLBs"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
++        "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * ( ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min( 9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 ) ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) ) ",
++        "MetricGroup": "Mem;MemoryTLB;_SMT",
++        "MetricName": "Memory_Data_TLBs_SMT"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
++        "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 * CPU_CLK_UNHALTED.THREAD))",
++        "MetricGroup": "Ret",
++        "MetricName": "Branching_Overhead"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
++        "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))",
++        "MetricGroup": "Ret_SMT",
++        "MetricName": "Branching_Overhead_SMT"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
++        "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))",
++        "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB",
++        "MetricName": "Big_Code"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
++        "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))",
++        "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB_SMT",
++        "MetricName": "Big_Code_SMT"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
++        "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)))",
++        "MetricGroup": "Fed;FetchBW;Frontend",
++        "MetricName": "Instruction_Fetch_BW"
++    },
++    {
++        "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
++        "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))",
++        "MetricGroup": "Fed;FetchBW;Frontend_SMT",
++        "MetricName": "Instruction_Fetch_BW_SMT"
++    },
+     {
+         "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
+         "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
+-        "MetricGroup": "Summary",
++        "MetricGroup": "Ret;Summary",
+         "MetricName": "IPC"
+     },
+     {
+         "BriefDescription": "Uops Per Instruction",
+         "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY",
+-        "MetricGroup": "Pipeline;Retire",
++        "MetricGroup": "Pipeline;Ret;Retire",
+         "MetricName": "UPI"
+     },
+     {
+         "BriefDescription": "Instruction per taken branch",
+-        "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
+-        "MetricGroup": "Branches;FetchBW;PGO",
+-        "MetricName": "IpTB"
++        "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
++        "MetricGroup": "Branches;Fed;FetchBW",
++        "MetricName": "UpTB"
+     },
+     {
+         "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
+         "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)",
+-        "MetricGroup": "Pipeline",
++        "MetricGroup": "Pipeline;Mem",
+         "MetricName": "CPI"
+     },
+     {
+@@ -30,39 +171,84 @@
+         "MetricName": "CLKS"
+     },
+     {
+-        "BriefDescription": "Instructions Per Cycle (per physical core)",
++        "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
++        "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD",
++        "MetricGroup": "TmaL1",
++        "MetricName": "SLOTS"
++    },
++    {
++        "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
++        "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++        "MetricGroup": "TmaL1_SMT",
++        "MetricName": "SLOTS_SMT"
++    },
++    {
++        "BriefDescription": "The ratio of Executed- by Issued-Uops",
++        "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY",
++        "MetricGroup": "Cor;Pipeline",
++        "MetricName": "Execute_per_Issue",
++        "PublicDescription": "The ratio of Executed- by Issued-Uops. Ratio > 1 suggests high rate of uop micro-fusions. Ratio < 1 suggest high rate of \"execute\" at rename stage."
++    },
++    {
++        "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
+         "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
+-        "MetricGroup": "SMT;TmaL1",
++        "MetricGroup": "Ret;SMT;TmaL1",
+         "MetricName": "CoreIPC"
+     },
+     {
+-        "BriefDescription": "Instructions Per Cycle (per physical core)",
++        "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
+         "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
+-        "MetricGroup": "SMT;TmaL1",
++        "MetricGroup": "Ret;SMT;TmaL1_SMT",
+         "MetricName": "CoreIPC_SMT"
+     },
+     {
+         "BriefDescription": "Floating Point Operations Per Cycle",
+         "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD",
+-        "MetricGroup": "Flops",
++        "MetricGroup": "Ret;Flops",
+         "MetricName": "FLOPc"
+     },
+     {
+         "BriefDescription": "Floating Point Operations Per Cycle",
+         "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
+-        "MetricGroup": "Flops_SMT",
++        "MetricGroup": "Ret;Flops_SMT",
+         "MetricName": "FLOPc_SMT"
+     },
++    {
++        "BriefDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width)",
++        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
++        "MetricGroup": "Cor;Flops;HPC",
++        "MetricName": "FP_Arith_Utilization",
++        "PublicDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width). Values > 1 are possible due to Fused-Multiply Add (FMA) counting."
++    },
++    {
++        "BriefDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width). SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
++        "MetricGroup": "Cor;Flops;HPC_SMT",
++        "MetricName": "FP_Arith_Utilization_SMT",
++        "PublicDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width). Values > 1 are possible due to Fused-Multiply Add (FMA) counting. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
+     {
+         "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)",
+         "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
+-        "MetricGroup": "Pipeline;PortsUtil",
++        "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
+         "MetricName": "ILP"
+     },
++    {
++        "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
++        "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_MISP_RETIRED.ALL_BRANCHES",
++        "MetricGroup": "Bad;BrMispredicts",
++        "MetricName": "Branch_Misprediction_Cost"
++    },
++    {
++        "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
++        "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) * (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCHES",
++        "MetricGroup": "Bad;BrMispredicts_SMT",
++        "MetricName": "Branch_Misprediction_Cost_SMT"
++    },
+     {
+         "BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear)",
+         "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
+-        "MetricGroup": "BrMispredicts",
++        "MetricGroup": "Bad;BadSpec;BrMispredicts",
+         "MetricName": "IpMispredict"
+     },
+     {
+@@ -86,122 +272,249 @@
+     {
+         "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+         "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
+-        "MetricGroup": "Branches;InsType",
++        "MetricGroup": "Branches;Fed;InsType",
+         "MetricName": "IpBranch"
+     },
+     {
+         "BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
+         "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_CALL",
+-        "MetricGroup": "Branches",
++        "MetricGroup": "Branches;Fed;PGO",
+         "MetricName": "IpCall"
+     },
++    {
++        "BriefDescription": "Instruction per taken branch",
++        "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
++        "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO",
++        "MetricName": "IpTB"
++    },
+     {
+         "BriefDescription": "Branch instructions per taken branch. ",
+         "MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
+-        "MetricGroup": "Branches;PGO",
++        "MetricGroup": "Branches;Fed;PGO",
+         "MetricName": "BpTkBranch"
+     },
+     {
+         "BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
+         "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )",
+-        "MetricGroup": "Flops;FpArith;InsType",
++        "MetricGroup": "Flops;InsType",
+         "MetricName": "IpFLOP"
+     },
++    {
++        "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
++        "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) )",
++        "MetricGroup": "Flops;InsType",
++        "MetricName": "IpArith",
++        "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
++    },
++    {
++        "BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
++        "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
++        "MetricGroup": "Flops;FpScalar;InsType",
++        "MetricName": "IpArith_Scalar_SP",
++        "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++    },
++    {
++        "BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
++        "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
++        "MetricGroup": "Flops;FpScalar;InsType",
++        "MetricName": "IpArith_Scalar_DP",
++        "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++    },
++    {
++        "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
++        "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )",
++        "MetricGroup": "Flops;FpVector;InsType",
++        "MetricName": "IpArith_AVX128",
++        "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++    },
++    {
++        "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
++        "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )",
++        "MetricGroup": "Flops;FpVector;InsType",
++        "MetricName": "IpArith_AVX256",
++        "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++    },
++    {
++        "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate)",
++        "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )",
++        "MetricGroup": "Flops;FpVector;InsType",
++        "MetricName": "IpArith_AVX512",
++        "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++    },
+     {
+         "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+         "MetricExpr": "INST_RETIRED.ANY",
+         "MetricGroup": "Summary;TmaL1",
+         "MetricName": "Instructions"
+     },
++    {
++        "BriefDescription": "Average number of Uops issued by front-end when it issued something",
++        "MetricExpr": "UOPS_ISSUED.ANY / cpu@UOPS_ISSUED.ANY\\,cmask\\=1@",
++        "MetricGroup": "Fed;FetchBW",
++        "MetricName": "Fetch_UpC"
++    },
+     {
+         "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
+         "MetricExpr": "IDQ.DSB_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS)",
+-        "MetricGroup": "DSB;FetchBW",
++        "MetricGroup": "DSB;Fed;FetchBW",
+         "MetricName": "DSB_Coverage"
+     },
+     {
+-        "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads (in core cycles)",
++        "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset/see of/the Instruction_Fetch_BW Bottleneck.",
++        "MetricExpr": "(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + ((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / CPU_CLK_UNHALTED.THREAD / 2) / #((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)))",
++        "MetricGroup": "DSBmiss;Fed",
++        "MetricName": "DSB_Misses_Cost"
++    },
++    {
++        "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset/see of/the Instruction_Fetch_BW Bottleneck.",
++        "MetricExpr": "(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + ((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) / 2) / #((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED
 .THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))",
++        "MetricGroup": "DSBmiss;Fed_SMT",
++        "MetricName": "DSB_Misses_Cost_SMT"
++    },
++    {
++        "BriefDescription": "Number of Instructions per non-speculative DSB miss",
++        "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS",
++        "MetricGroup": "DSBmiss;Fed",
++        "MetricName": "IpDSB_Miss_Ret"
++    },
++    {
++        "BriefDescription": "Fraction of branches that are non-taken conditionals",
++        "MetricExpr": "BR_INST_RETIRED.NOT_TAKEN / BR_INST_RETIRED.ALL_BRANCHES",
++        "MetricGroup": "Bad;Branches;CodeGen;PGO",
++        "MetricName": "Cond_NT"
++    },
++    {
++        "BriefDescription": "Fraction of branches that are taken conditionals",
++        "MetricExpr": "( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN )  / BR_INST_RETIRED.ALL_BRANCHES",
++        "MetricGroup": "Bad;Branches;CodeGen;PGO",
++        "MetricName": "Cond_TK"
++    },
++    {
++        "BriefDescription": "Fraction of branches that are CALL or RET",
++        "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST_RETIRED.ALL_BRANCHES",
++        "MetricGroup": "Bad;Branches",
++        "MetricName": "CallRet"
++    },
++    {
++        "BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
++        "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
++        "MetricGroup": "Bad;Branches",
++        "MetricName": "Jump"
++    },
++    {
++        "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load instructions (in core cycles)",
+         "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )",
+-        "MetricGroup": "MemoryBound;MemoryLat",
+-        "MetricName": "Load_Miss_Real_Latency"
++        "MetricGroup": "Mem;MemoryBound;MemoryLat",
++        "MetricName": "Load_Miss_Real_Latency",
++        "PublicDescription": "Actual Average Latency for L1 data-cache miss demand load instructions (in core cycles). Latency may be overestimated for multi-load instructions - e.g. repeat strings."
+     },
+     {
+         "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
+         "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+-        "MetricGroup": "MemoryBound;MemoryBW",
++        "MetricGroup": "Mem;MemoryBound;MemoryBW",
+         "MetricName": "MLP"
+     },
+-    {
+-        "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
+-        "MetricConstraint": "NO_NMI_WATCHDOG",
+-        "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * CORE_CLKS )",
+-        "MetricGroup": "MemoryTLB",
+-        "MetricName": "Page_Walks_Utilization"
+-    },
+     {
+         "BriefDescription": "Average data fill bandwidth to the L1 data cache [GB / sec]",
+         "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time",
+-        "MetricGroup": "MemoryBW",
++        "MetricGroup": "Mem;MemoryBW",
+         "MetricName": "L1D_Cache_Fill_BW"
+     },
+     {
+         "BriefDescription": "Average data fill bandwidth to the L2 cache [GB / sec]",
+         "MetricExpr": "64 * L2_LINES_IN.ALL / 1000000000 / duration_time",
+-        "MetricGroup": "MemoryBW",
++        "MetricGroup": "Mem;MemoryBW",
+         "MetricName": "L2_Cache_Fill_BW"
+     },
+     {
+         "BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
+         "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duration_time",
+-        "MetricGroup": "MemoryBW",
++        "MetricGroup": "Mem;MemoryBW",
+         "MetricName": "L3_Cache_Fill_BW"
+     },
+     {
+         "BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
+         "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / duration_time",
+-        "MetricGroup": "MemoryBW;Offcore",
++        "MetricGroup": "Mem;MemoryBW;Offcore",
+         "MetricName": "L3_Cache_Access_BW"
+     },
+     {
+         "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
+         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
+-        "MetricGroup": "CacheMisses",
++        "MetricGroup": "Mem;CacheMisses",
+         "MetricName": "L1MPKI"
+     },
++    {
++        "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
++        "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
++        "MetricGroup": "Mem;CacheMisses",
++        "MetricName": "L1MPKI_Load"
++    },
+     {
+         "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
+         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
+-        "MetricGroup": "CacheMisses",
++        "MetricGroup": "Mem;Backend;CacheMisses",
+         "MetricName": "L2MPKI"
+     },
+     {
+         "BriefDescription": "L2 cache misses per kilo instruction for all request types (including speculative)",
+         "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY",
+-        "MetricGroup": "CacheMisses;Offcore",
++        "MetricGroup": "Mem;CacheMisses;Offcore",
+         "MetricName": "L2MPKI_All"
+     },
++    {
++        "BriefDescription": "L2 cache misses per kilo instruction for all demand loads  (including speculative)",
++        "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
++        "MetricGroup": "Mem;CacheMisses",
++        "MetricName": "L2MPKI_Load"
++    },
+     {
+         "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
+         "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / INST_RETIRED.ANY",
+-        "MetricGroup": "CacheMisses",
++        "MetricGroup": "Mem;CacheMisses",
+         "MetricName": "L2HPKI_All"
+     },
++    {
++        "BriefDescription": "L2 cache hits per kilo instruction for all demand loads  (including speculative)",
++        "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
++        "MetricGroup": "Mem;CacheMisses",
++        "MetricName": "L2HPKI_Load"
++    },
+     {
+         "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+         "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
+-        "MetricGroup": "CacheMisses",
++        "MetricGroup": "Mem;CacheMisses",
+         "MetricName": "L3MPKI"
+     },
++    {
++        "BriefDescription": "Fill Buffer (FB) true hits per kilo instructions for retired demand loads",
++        "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
++        "MetricGroup": "Mem;CacheMisses",
++        "MetricName": "FB_HPKI"
++    },
++    {
++        "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
++        "MetricConstraint": "NO_NMI_WATCHDOG",
++        "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
++        "MetricGroup": "Mem;MemoryTLB",
++        "MetricName": "Page_Walks_Utilization"
++    },
++    {
++        "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
++        "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
++        "MetricGroup": "Mem;MemoryTLB_SMT",
++        "MetricName": "Page_Walks_Utilization_SMT"
++    },
+     {
+         "BriefDescription": "Rate of silent evictions from the L2 cache per Kilo instruction where the evicted lines are dropped (no writeback to L3 or memory)",
+         "MetricExpr": "1000 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY",
+-        "MetricGroup": "L2Evicts;Server",
++        "MetricGroup": "L2Evicts;Mem;Server",
+         "MetricName": "L2_Evictions_Silent_PKI"
+     },
+     {
+         "BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
+         "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY",
+-        "MetricGroup": "L2Evicts;Server",
++        "MetricGroup": "L2Evicts;Mem;Server",
+         "MetricName": "L2_Evictions_NonSilent_PKI"
+     },
+     {
+@@ -219,7 +532,7 @@
+     {
+         "BriefDescription": "Giga Floating Point Operations Per Second",
+         "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / 1000000000 ) / duration_time",
+-        "MetricGroup": "Flops;HPC",
++        "MetricGroup": "Cor;Flops;HPC",
+         "MetricName": "GFLOPs"
+     },
+     {
+@@ -228,6 +541,48 @@
+         "MetricGroup": "Power",
+         "MetricName": "Turbo_Utilization"
+     },
++    {
++        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0",
++        "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
++        "MetricGroup": "Power",
++        "MetricName": "Power_License0_Utilization",
++        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0.  This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes."
++    },
++    {
++        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++        "MetricGroup": "Power_SMT",
++        "MetricName": "Power_License0_Utilization_SMT",
++        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0.  This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
++    {
++        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1",
++        "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
++        "MetricGroup": "Power",
++        "MetricName": "Power_License1_Utilization",
++        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1.  This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions."
++    },
++    {
++        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1. SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++        "MetricGroup": "Power_SMT",
++        "MetricName": "Power_License1_Utilization_SMT",
++        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1.  This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
++    {
++        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX)",
++        "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
++        "MetricGroup": "Power",
++        "MetricName": "Power_License2_Utilization",
++        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX).  This includes high current AVX 512-bit instructions."
++    },
++    {
++        "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). SMT version; use when SMT is enabled and measuring per logical CPU.",
++        "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++        "MetricGroup": "Power_SMT",
++        "MetricName": "Power_License2_Utilization_SMT",
++        "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX).  This includes high current AVX 512-bit instructions. SMT version; use when SMT is enabled and measuring per logical CPU."
++    },
+     {
+         "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
+         "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0",
+@@ -240,34 +595,46 @@
+         "MetricGroup": "OS",
+         "MetricName": "Kernel_Utilization"
+     },
++    {
++        "BriefDescription": "Cycles Per Instruction for the Operating System (OS) Kernel mode",
++        "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / INST_RETIRED.ANY_P:k",
++        "MetricGroup": "OS",
++        "MetricName": "Kernel_CPI"
++    },
+     {
+         "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
+         "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) / 1000000000 ) / duration_time",
+-        "MetricGroup": "HPC;MemoryBW;SoC",
++        "MetricGroup": "HPC;Mem;MemoryBW;SoC",
+         "MetricName": "DRAM_BW_Use"
+     },
+     {
+         "BriefDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches",
+         "MetricExpr": "1000000000 * ( cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433@ / cha@event\\=0x35\\,umask\\=0x21\\,config\\=0x40433@ ) / ( cha_0@event\\=0x0@ / duration_time )",
+-        "MetricGroup": "MemoryLat;SoC",
++        "MetricGroup": "Mem;MemoryLat;SoC",
+         "MetricName": "MEM_Read_Latency"
+     },
+     {
+         "BriefDescription": "Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches",
+         "MetricExpr": "cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433@ / cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433\\,thresh\\=1@",
+-        "MetricGroup": "MemoryBW;SoC",
++        "MetricGroup": "Mem;MemoryBW;SoC",
+         "MetricName": "MEM_Parallel_Reads"
+     },
++    {
++        "BriefDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches",
++        "MetricExpr": "1000000000 * ( UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSERTS ) / imc_0@event\\=0x0@",
++        "MetricGroup": "Mem;MemoryLat;SoC;Server",
++        "MetricName": "MEM_DRAM_Read_Latency"
++    },
+     {
+         "BriefDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]",
+         "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 ) * 4 / 1000000000 / duration_time",
+-        "MetricGroup": "IoBW;SoC;Server",
++        "MetricGroup": "IoBW;Mem;SoC;Server",
+         "MetricName": "IO_Write_BW"
+     },
+     {
+         "BriefDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]",
+         "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3 ) * 4 / 1000000000 / duration_time",
+-        "MetricGroup": "IoBW;SoC;Server",
++        "MetricGroup": "IoBW;Mem;SoC;Server",
+         "MetricName": "IO_Read_BW"
+     },
+     {
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json b/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
+index 6ed92bc5c129b..06c5ca26ca3f3 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
+@@ -537,6 +537,18 @@
+         "PublicDescription": "Counts clockticks of the 1GHz trafiic controller clock in the IIO unit.",
+         "Unit": "IIO"
+     },
++    {
++        "BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0-3",
++        "Counter": "0,1,2,3",
++        "EventCode": "0xC2",
++        "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS",
++        "FCMask": "0x4",
++        "PerPkg": "1",
++        "PortMask": "0x0f",
++        "PublicDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0-3",
++        "UMask": "0x03",
++        "Unit": "IIO"
++    },
+     {
+         "BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0",
+         "Counter": "0,1,2,3",
+@@ -585,6 +597,17 @@
+         "UMask": "0x03",
+         "Unit": "IIO"
+     },
++    {
++        "BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0-3",
++        "Counter": "2,3",
++        "EventCode": "0xD5",
++        "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
++        "FCMask": "0x04",
++        "PerPkg": "1",
++        "PublicDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0-3",
++        "UMask": "0x0f",
++        "Unit": "IIO"
++    },
+     {
+         "BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0",
+         "Counter": "2,3",
+diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
+index cb32f9e27d5d9..5a39b3fdcac42 100644
+--- a/tools/testing/cxl/test/cxl.c
++++ b/tools/testing/cxl/test/cxl.c
+@@ -488,7 +488,7 @@ static __init int cxl_test_init(void)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(cxl_root_port); i++) {
+ 		struct platform_device *bridge =
+-			cxl_host_bridge[i / NR_CXL_ROOT_PORTS];
++			cxl_host_bridge[i % ARRAY_SIZE(cxl_host_bridge)];
+ 		struct platform_device *pdev;
+ 
+ 		pdev = platform_device_alloc("cxl_root_port", i);
+diff --git a/tools/testing/selftests/bpf/prog_tests/bind_perm.c b/tools/testing/selftests/bpf/prog_tests/bind_perm.c
+index d0f06e40c16d0..eac71fbb24ce2 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bind_perm.c
++++ b/tools/testing/selftests/bpf/prog_tests/bind_perm.c
+@@ -1,13 +1,24 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <test_progs.h>
+-#include "bind_perm.skel.h"
+-
++#define _GNU_SOURCE
++#include <sched.h>
++#include <stdlib.h>
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <sys/capability.h>
+ 
++#include "test_progs.h"
++#include "bind_perm.skel.h"
++
+ static int duration;
+ 
++static int create_netns(void)
++{
++	if (!ASSERT_OK(unshare(CLONE_NEWNET), "create netns"))
++		return -1;
++
++	return 0;
++}
++
+ void try_bind(int family, int port, int expected_errno)
+ {
+ 	struct sockaddr_storage addr = {};
+@@ -75,6 +86,9 @@ void test_bind_perm(void)
+ 	struct bind_perm *skel;
+ 	int cgroup_fd;
+ 
++	if (create_netns())
++		return;
++
+ 	cgroup_fd = test__join_cgroup("/bind_perm");
+ 	if (CHECK(cgroup_fd < 0, "cg-join", "errno %d", errno))
+ 		return;
+diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
+new file mode 100644
+index 0000000000000..5bb11fe595a43
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __BPF_MISC_H__
++#define __BPF_MISC_H__
++
++#if defined(__TARGET_ARCH_x86)
++#define SYSCALL_WRAPPER 1
++#define SYS_PREFIX "__x64_"
++#elif defined(__TARGET_ARCH_s390)
++#define SYSCALL_WRAPPER 1
++#define SYS_PREFIX "__s390x_"
++#elif defined(__TARGET_ARCH_arm64)
++#define SYSCALL_WRAPPER 1
++#define SYS_PREFIX "__arm64_"
++#else
++#define SYSCALL_WRAPPER 0
++#define SYS_PREFIX "__se_"
++#endif
++
++#endif
+diff --git a/tools/testing/selftests/bpf/progs/test_probe_user.c b/tools/testing/selftests/bpf/progs/test_probe_user.c
+index 8812a90da4eb8..702578a5e496d 100644
+--- a/tools/testing/selftests/bpf/progs/test_probe_user.c
++++ b/tools/testing/selftests/bpf/progs/test_probe_user.c
+@@ -7,20 +7,7 @@
+ 
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
+-
+-#if defined(__TARGET_ARCH_x86)
+-#define SYSCALL_WRAPPER 1
+-#define SYS_PREFIX "__x64_"
+-#elif defined(__TARGET_ARCH_s390)
+-#define SYSCALL_WRAPPER 1
+-#define SYS_PREFIX "__s390x_"
+-#elif defined(__TARGET_ARCH_arm64)
+-#define SYSCALL_WRAPPER 1
+-#define SYS_PREFIX "__arm64_"
+-#else
+-#define SYSCALL_WRAPPER 0
+-#define SYS_PREFIX ""
+-#endif
++#include "bpf_misc.h"
+ 
+ static struct sockaddr_in old;
+ 
+diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+index 81b57b9aaaeae..7967348b11af6 100644
+--- a/tools/testing/selftests/bpf/progs/test_sock_fields.c
++++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+@@ -113,7 +113,7 @@ static void tpcpy(struct bpf_tcp_sock *dst,
+ 
+ #define RET_LOG() ({						\
+ 	linum = __LINE__;					\
+-	bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_NOEXIST);	\
++	bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_ANY);	\
+ 	return CG_OK;						\
+ })
+ 
+diff --git a/tools/testing/selftests/bpf/test_lirc_mode2.sh b/tools/testing/selftests/bpf/test_lirc_mode2.sh
+index ec4e15948e406..5252b91f48a18 100755
+--- a/tools/testing/selftests/bpf/test_lirc_mode2.sh
++++ b/tools/testing/selftests/bpf/test_lirc_mode2.sh
+@@ -3,6 +3,7 @@
+ 
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
++ret=$ksft_skip
+ 
+ msg="skip all tests:"
+ if [ $UID != 0 ]; then
+@@ -25,7 +26,7 @@ do
+ 	fi
+ done
+ 
+-if [ -n $LIRCDEV ];
++if [ -n "$LIRCDEV" ];
+ then
+ 	TYPE=lirc_mode2
+ 	./test_lirc_mode2_user $LIRCDEV $INPUTDEV
+@@ -36,3 +37,5 @@ then
+ 		echo -e ${GREEN}"PASS: $TYPE"${NC}
+ 	fi
+ fi
++
++exit $ret
+diff --git a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+index b497bb85b667f..6c69c42b1d607 100755
+--- a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
++++ b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+@@ -120,6 +120,14 @@ setup()
+ 	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ 	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ 
++	# disable IPv6 DAD because it sometimes takes too long and fails tests
++	ip netns exec ${NS1} sysctl -wq net.ipv6.conf.all.accept_dad=0
++	ip netns exec ${NS2} sysctl -wq net.ipv6.conf.all.accept_dad=0
++	ip netns exec ${NS3} sysctl -wq net.ipv6.conf.all.accept_dad=0
++	ip netns exec ${NS1} sysctl -wq net.ipv6.conf.default.accept_dad=0
++	ip netns exec ${NS2} sysctl -wq net.ipv6.conf.default.accept_dad=0
++	ip netns exec ${NS3} sysctl -wq net.ipv6.conf.default.accept_dad=0
++
+ 	ip link add veth1 type veth peer name veth2
+ 	ip link add veth3 type veth peer name veth4
+ 	ip link add veth5 type veth peer name veth6
+@@ -289,7 +297,7 @@ test_ping()
+ 		ip netns exec ${NS1} ping  -c 1 -W 1 -I veth1 ${IPv4_DST} 2>&1 > /dev/null
+ 		RET=$?
+ 	elif [ "${PROTO}" == "IPv6" ] ; then
+-		ip netns exec ${NS1} ping6 -c 1 -W 6 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
++		ip netns exec ${NS1} ping6 -c 1 -W 1 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
+ 		RET=$?
+ 	else
+ 		echo "    test_ping: unknown PROTO: ${PROTO}"
+diff --git a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
+index 05f8727409997..cc57cb87e65f6 100755
+--- a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
++++ b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
+@@ -32,6 +32,11 @@ DRV_MODE="xdpgeneric xdpdrv xdpegress"
+ PASS=0
+ FAIL=0
+ LOG_DIR=$(mktemp -d)
++declare -a NS
++NS[0]="ns0-$(mktemp -u XXXXXX)"
++NS[1]="ns1-$(mktemp -u XXXXXX)"
++NS[2]="ns2-$(mktemp -u XXXXXX)"
++NS[3]="ns3-$(mktemp -u XXXXXX)"
+ 
+ test_pass()
+ {
+@@ -47,11 +52,9 @@ test_fail()
+ 
+ clean_up()
+ {
+-	for i in $(seq $NUM); do
+-		ip link del veth$i 2> /dev/null
+-		ip netns del ns$i 2> /dev/null
++	for i in $(seq 0 $NUM); do
++		ip netns del ${NS[$i]} 2> /dev/null
+ 	done
+-	ip netns del ns0 2> /dev/null
+ }
+ 
+ # Kselftest framework requirement - SKIP code is 4.
+@@ -79,23 +82,22 @@ setup_ns()
+ 		mode="xdpdrv"
+ 	fi
+ 
+-	ip netns add ns0
++	ip netns add ${NS[0]}
+ 	for i in $(seq $NUM); do
+-	        ip netns add ns$i
+-		ip -n ns$i link add veth0 index 2 type veth \
+-			peer name veth$i netns ns0 index $((1 + $i))
+-		ip -n ns0 link set veth$i up
+-		ip -n ns$i link set veth0 up
+-
+-		ip -n ns$i addr add 192.0.2.$i/24 dev veth0
+-		ip -n ns$i addr add 2001:db8::$i/64 dev veth0
++	        ip netns add ${NS[$i]}
++		ip -n ${NS[$i]} link add veth0 type veth peer name veth$i netns ${NS[0]}
++		ip -n ${NS[$i]} link set veth0 up
++		ip -n ${NS[0]} link set veth$i up
++
++		ip -n ${NS[$i]} addr add 192.0.2.$i/24 dev veth0
++		ip -n ${NS[$i]} addr add 2001:db8::$i/64 dev veth0
+ 		# Add a neigh entry for IPv4 ping test
+-		ip -n ns$i neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
+-		ip -n ns$i link set veth0 $mode obj \
++		ip -n ${NS[$i]} neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
++		ip -n ${NS[$i]} link set veth0 $mode obj \
+ 			xdp_dummy.o sec xdp &> /dev/null || \
+ 			{ test_fail "Unable to load dummy xdp" && exit 1; }
+ 		IFACES="$IFACES veth$i"
+-		veth_mac[$i]=$(ip -n ns0 link show veth$i | awk '/link\/ether/ {print $2}')
++		veth_mac[$i]=$(ip -n ${NS[0]} link show veth$i | awk '/link\/ether/ {print $2}')
+ 	done
+ }
+ 
+@@ -104,10 +106,10 @@ do_egress_tests()
+ 	local mode=$1
+ 
+ 	# mac test
+-	ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
+-	ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
++	ip netns exec ${NS[2]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
++	ip netns exec ${NS[3]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
+ 	sleep 0.5
+-	ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
++	ip netns exec ${NS[1]} ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
+ 	sleep 0.5
+ 	pkill tcpdump
+ 
+@@ -123,18 +125,18 @@ do_ping_tests()
+ 	local mode=$1
+ 
+ 	# ping6 test: echo request should be redirect back to itself, not others
+-	ip netns exec ns1 ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
++	ip netns exec ${NS[1]} ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
+ 
+-	ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
+-	ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
+-	ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
++	ip netns exec ${NS[1]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
++	ip netns exec ${NS[2]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
++	ip netns exec ${NS[3]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
+ 	sleep 0.5
+ 	# ARP test
+-	ip netns exec ns1 arping -q -c 2 -I veth0 192.0.2.254
++	ip netns exec ${NS[1]} arping -q -c 2 -I veth0 192.0.2.254
+ 	# IPv4 test
+-	ip netns exec ns1 ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
++	ip netns exec ${NS[1]} ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
+ 	# IPv6 test
+-	ip netns exec ns1 ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
++	ip netns exec ${NS[1]} ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
+ 	sleep 0.5
+ 	pkill tcpdump
+ 
+@@ -180,7 +182,7 @@ do_tests()
+ 		xdpgeneric) drv_p="-S";;
+ 	esac
+ 
+-	ip netns exec ns0 ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
++	ip netns exec ${NS[0]} ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
+ 	xdp_pid=$!
+ 	sleep 1
+ 	if ! ps -p $xdp_pid > /dev/null; then
+@@ -197,10 +199,10 @@ do_tests()
+ 	kill $xdp_pid
+ }
+ 
+-trap clean_up EXIT
+-
+ check_env
+ 
++trap clean_up EXIT
++
+ for mode in ${DRV_MODE}; do
+ 	setup_ns $mode
+ 	do_tests $mode
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
+index 621342ec30c48..37d4873d9a2ed 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.c
++++ b/tools/testing/selftests/bpf/xdpxceiver.c
+@@ -902,7 +902,10 @@ static bool rx_stats_are_valid(struct ifobject *ifobject)
+ 			return true;
+ 		case STAT_TEST_RX_FULL:
+ 			xsk_stat = stats.rx_ring_full;
+-			expected_stat -= RX_FULL_RXQSIZE;
++			if (ifobject->umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
++				expected_stat = ifobject->umem->num_frames - RX_FULL_RXQSIZE;
++			else
++				expected_stat = XSK_RING_PROD__DEFAULT_NUM_DESCS - RX_FULL_RXQSIZE;
+ 			break;
+ 		case STAT_TEST_RX_FILL_EMPTY:
+ 			xsk_stat = stats.rx_fill_ring_empty_descs;
+diff --git a/tools/testing/selftests/lkdtm/config b/tools/testing/selftests/lkdtm/config
+index a26a3fa9e9255..8bd847f0463cd 100644
+--- a/tools/testing/selftests/lkdtm/config
++++ b/tools/testing/selftests/lkdtm/config
+@@ -6,6 +6,7 @@ CONFIG_HARDENED_USERCOPY=y
+ # CONFIG_HARDENED_USERCOPY_FALLBACK is not set
+ CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
+ CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
++CONFIG_UBSAN=y
+ CONFIG_UBSAN_BOUNDS=y
+ CONFIG_UBSAN_TRAP=y
+ CONFIG_STACKPROTECTOR_STRONG=y
+diff --git a/tools/testing/selftests/net/af_unix/test_unix_oob.c b/tools/testing/selftests/net/af_unix/test_unix_oob.c
+index 3dece8b292536..b57e91e1c3f28 100644
+--- a/tools/testing/selftests/net/af_unix/test_unix_oob.c
++++ b/tools/testing/selftests/net/af_unix/test_unix_oob.c
+@@ -218,10 +218,10 @@ main(int argc, char **argv)
+ 
+ 	/* Test 1:
+ 	 * veriyf that SIGURG is
+-	 * delivered and 63 bytes are
+-	 * read and oob is '@'
++	 * delivered, 63 bytes are
++	 * read, oob is '@', and POLLPRI works.
+ 	 */
+-	wait_for_data(pfd, POLLIN | POLLPRI);
++	wait_for_data(pfd, POLLPRI);
+ 	read_oob(pfd, &oob);
+ 	len = read_data(pfd, buf, 1024);
+ 	if (!signal_recvd || len != 63 || oob != '@') {
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index 559173a8e387b..d75fa97609c15 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -445,6 +445,8 @@ do_transfer()
+ 	local stat_ackrx_last_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
+ 	local stat_cookietx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesSent")
+ 	local stat_cookierx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesRecv")
++	local stat_csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
++	local stat_csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
+ 
+ 	timeout ${timeout_test} \
+ 		ip netns exec ${listener_ns} \
+@@ -537,6 +539,23 @@ do_transfer()
+ 		fi
+ 	fi
+ 
++	if $checksum; then
++		local csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
++		local csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
++
++		local csum_err_s_nr=$((csum_err_s - stat_csum_err_s))
++		if [ $csum_err_s_nr -gt 0 ]; then
++			printf "[ FAIL ]\nserver got $csum_err_s_nr data checksum error[s]"
++			rets=1
++		fi
++
++		local csum_err_c_nr=$((csum_err_c - stat_csum_err_c))
++		if [ $csum_err_c_nr -gt 0 ]; then
++			printf "[ FAIL ]\nclient got $csum_err_c_nr data checksum error[s]"
++			retc=1
++		fi
++	fi
++
+ 	if [ $retc -eq 0 ] && [ $rets -eq 0 ]; then
+ 		printf "[ OK ]"
+ 	fi
+diff --git a/tools/testing/selftests/net/test_vxlan_under_vrf.sh b/tools/testing/selftests/net/test_vxlan_under_vrf.sh
+index ea5a7a808f120..1fd1250ebc667 100755
+--- a/tools/testing/selftests/net/test_vxlan_under_vrf.sh
++++ b/tools/testing/selftests/net/test_vxlan_under_vrf.sh
+@@ -120,11 +120,11 @@ echo "[ OK ]"
+ 
+ # Move the underlay to a non-default VRF
+ ip -netns hv-1 link set veth0 vrf vrf-underlay
+-ip -netns hv-1 link set veth0 down
+-ip -netns hv-1 link set veth0 up
++ip -netns hv-1 link set vxlan0 down
++ip -netns hv-1 link set vxlan0 up
+ ip -netns hv-2 link set veth0 vrf vrf-underlay
+-ip -netns hv-2 link set veth0 down
+-ip -netns hv-2 link set veth0 up
++ip -netns hv-2 link set vxlan0 down
++ip -netns hv-2 link set vxlan0 up
+ 
+ echo -n "Check VM connectivity through VXLAN (underlay in a VRF)            "
+ ip netns exec vm-1 ping -c 1 -W 1 10.0.0.2 &> /dev/null || (echo "[FAIL]"; false)
+diff --git a/tools/testing/selftests/net/timestamping.c b/tools/testing/selftests/net/timestamping.c
+index aee631c5284eb..044bc0e9ed81a 100644
+--- a/tools/testing/selftests/net/timestamping.c
++++ b/tools/testing/selftests/net/timestamping.c
+@@ -325,8 +325,8 @@ int main(int argc, char **argv)
+ 	struct ifreq device;
+ 	struct ifreq hwtstamp;
+ 	struct hwtstamp_config hwconfig, hwconfig_requested;
+-	struct so_timestamping so_timestamping_get = { 0, -1 };
+-	struct so_timestamping so_timestamping = { 0, -1 };
++	struct so_timestamping so_timestamping_get = { 0, 0 };
++	struct so_timestamping so_timestamping = { 0, 0 };
+ 	struct sockaddr_in addr;
+ 	struct ip_mreq imr;
+ 	struct in_addr iaddr;
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 6e468e0f42f78..5d70b04c482c9 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -683,6 +683,9 @@ TEST_F(tls, splice_cmsg_to_pipe)
+ 	char buf[10];
+ 	int p[2];
+ 
++	if (self->notls)
++		SKIP(return, "no TLS support");
++
+ 	ASSERT_GE(pipe(p), 0);
+ 	EXPECT_EQ(tls_send_cmsg(self->fd, 100, test_str, send_len, 0), 10);
+ 	EXPECT_EQ(splice(self->cfd, NULL, p[1], NULL, send_len, 0), -1);
+@@ -703,6 +706,9 @@ TEST_F(tls, splice_dec_cmsg_to_pipe)
+ 	char buf[10];
+ 	int p[2];
+ 
++	if (self->notls)
++		SKIP(return, "no TLS support");
++
+ 	ASSERT_GE(pipe(p), 0);
+ 	EXPECT_EQ(tls_send_cmsg(self->fd, 100, test_str, send_len, 0), 10);
+ 	EXPECT_EQ(recv(self->cfd, buf, send_len, 0), -1);
+diff --git a/tools/testing/selftests/rcutorture/bin/torture.sh b/tools/testing/selftests/rcutorture/bin/torture.sh
+index eae88aacca2aa..b2929fb15f7e6 100755
+--- a/tools/testing/selftests/rcutorture/bin/torture.sh
++++ b/tools/testing/selftests/rcutorture/bin/torture.sh
+@@ -71,8 +71,8 @@ usage () {
+ 	echo "       --configs-rcutorture \"config-file list w/ repeat factor (3*TINY01)\""
+ 	echo "       --configs-locktorture \"config-file list w/ repeat factor (10*LOCK01)\""
+ 	echo "       --configs-scftorture \"config-file list w/ repeat factor (2*CFLIST)\""
+-	echo "       --doall"
+-	echo "       --doallmodconfig / --do-no-allmodconfig"
++	echo "       --do-all"
++	echo "       --do-allmodconfig / --do-no-allmodconfig"
+ 	echo "       --do-clocksourcewd / --do-no-clocksourcewd"
+ 	echo "       --do-kasan / --do-no-kasan"
+ 	echo "       --do-kcsan / --do-no-kcsan"
+diff --git a/tools/testing/selftests/sgx/Makefile b/tools/testing/selftests/sgx/Makefile
+index 7f12d55b97f86..472b27ccd7dcb 100644
+--- a/tools/testing/selftests/sgx/Makefile
++++ b/tools/testing/selftests/sgx/Makefile
+@@ -4,7 +4,7 @@ include ../lib.mk
+ 
+ .PHONY: all clean
+ 
+-CAN_BUILD_X86_64 := $(shell ../x86/check_cc.sh $(CC) \
++CAN_BUILD_X86_64 := $(shell ../x86/check_cc.sh "$(CC)" \
+ 			    ../x86/trivial_64bit_program.c)
+ 
+ ifndef OBJCOPY
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index 1607322a112c9..1530c3e0242ef 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -1,6 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for vm selftests
+ 
++LOCAL_HDRS += $(selfdir)/vm/local_config.h $(top_srcdir)/mm/gup_test.h
++
+ include local_config.mk
+ 
+ uname_M := $(shell uname -m 2>/dev/null || echo not)
+@@ -49,9 +51,9 @@ TEST_GEN_FILES += split_huge_page_test
+ TEST_GEN_FILES += ksm_tests
+ 
+ ifeq ($(MACHINE),x86_64)
+-CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_32bit_program.c -m32)
+-CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_64bit_program.c)
+-CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_program.c -no-pie)
++CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_32bit_program.c -m32)
++CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_64bit_program.c)
++CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_program.c -no-pie)
+ 
+ TARGETS := protection_keys
+ BINARIES_32 := $(TARGETS:%=%_32)
+@@ -140,10 +142,6 @@ endif
+ 
+ $(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap
+ 
+-$(OUTPUT)/gup_test: ../../../../mm/gup_test.h
+-
+-$(OUTPUT)/hmm-tests: local_config.h
+-
+ # HMM_EXTRA_LIBS may get set in local_config.mk, or it may be left empty.
+ $(OUTPUT)/hmm-tests: LDLIBS += $(HMM_EXTRA_LIBS)
+ 
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index 8a1f62ab3c8e6..53df7d3893d31 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -6,9 +6,9 @@ include ../lib.mk
+ .PHONY: all all_32 all_64 warn_32bit_failure clean
+ 
+ UNAME_M := $(shell uname -m)
+-CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
+-CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
+-CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh $(CC) trivial_program.c -no-pie)
++CAN_BUILD_I386 := $(shell ./check_cc.sh "$(CC)" trivial_32bit_program.c -m32)
++CAN_BUILD_X86_64 := $(shell ./check_cc.sh "$(CC)" trivial_64bit_program.c)
++CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh "$(CC)" trivial_program.c -no-pie)
+ 
+ TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
+ 			check_initial_reg_state sigreturn iopl ioperm \
+diff --git a/tools/testing/selftests/x86/check_cc.sh b/tools/testing/selftests/x86/check_cc.sh
+index 3e2089c8cf549..8c669c0d662ee 100755
+--- a/tools/testing/selftests/x86/check_cc.sh
++++ b/tools/testing/selftests/x86/check_cc.sh
+@@ -7,7 +7,7 @@ CC="$1"
+ TESTPROG="$2"
+ shift 2
+ 
+-if "$CC" -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then
++if [ -n "$CC" ] && $CC -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then
+     echo 1
+ else
+     echo 0
+diff --git a/tools/virtio/virtio_test.c b/tools/virtio/virtio_test.c
+index cb3f29c09aff3..23f142af544ad 100644
+--- a/tools/virtio/virtio_test.c
++++ b/tools/virtio/virtio_test.c
+@@ -130,6 +130,7 @@ static void vdev_info_init(struct vdev_info* dev, unsigned long long features)
+ 	memset(dev, 0, sizeof *dev);
+ 	dev->vdev.features = features;
+ 	INIT_LIST_HEAD(&dev->vdev.vqs);
++	spin_lock_init(&dev->vdev.vqs_list_lock);
+ 	dev->buf_size = 1024;
+ 	dev->buf = malloc(dev->buf_size);
+ 	assert(dev->buf);
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 6ae9e04d0585e..74798e50e3f97 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -117,6 +117,8 @@ EXPORT_SYMBOL_GPL(kvm_debugfs_dir);
+ 
+ static const struct file_operations stat_fops_per_vm;
+ 
++static struct file_operations kvm_chardev_ops;
++
+ static long kvm_vcpu_ioctl(struct file *file, unsigned int ioctl,
+ 			   unsigned long arg);
+ #ifdef CONFIG_KVM_COMPAT
+@@ -1107,6 +1109,16 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ 	preempt_notifier_inc();
+ 	kvm_init_pm_notifier(kvm);
+ 
++	/*
++	 * When the fd passed to this ioctl() is opened it pins the module,
++	 * but try_module_get() also prevents getting a reference if the module
++	 * is in MODULE_STATE_GOING (e.g. if someone ran "rmmod --wait").
++	 */
++	if (!try_module_get(kvm_chardev_ops.owner)) {
++		r = -ENODEV;
++		goto out_err;
++	}
++
+ 	return kvm;
+ 
+ out_err:
+@@ -1196,6 +1208,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
+ 	preempt_notifier_dec();
+ 	hardware_disable_all();
+ 	mmdrop(mm);
++	module_put(kvm_chardev_ops.owner);
+ }
+ 
+ void kvm_get_kvm(struct kvm *kvm)

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-04-08 13:04 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-04-08 13:04 UTC (permalink / raw
  To: gentoo-commits

commit:     2ecd03ea8107b713b3340c42a51774c7e8559129
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr  8 13:03:59 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr  8 13:03:59 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2ecd03ea

Remove redundant patch

Removed:
2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 -
 ...rework-fix-info-leak-with-dma_from_device.patch | 187 ---------------------
 2 files changed, 191 deletions(-)

diff --git a/0000_README b/0000_README
index 2a86553a..2aa81fbc 100644
--- a/0000_README
+++ b/0000_README
@@ -135,10 +135,6 @@ Patch:  2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
 From:   https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
 Desc:   mt76: mt7921e: fix possible probe failure after reboot
 
-Patch:  2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-Desc:   Revert swiotlb: rework fix info leak with DMA_FROM_DEVICE
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch b/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
deleted file mode 100644
index 69476ab1..00000000
--- a/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
+++ /dev/null
@@ -1,187 +0,0 @@
-From bddac7c1e02ba47f0570e494c9289acea3062cc1 Mon Sep 17 00:00:00 2001
-From: Linus Torvalds <torvalds@linux-foundation.org>
-Date: Sat, 26 Mar 2022 10:42:04 -0700
-Subject: Revert "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-From: Linus Torvalds <torvalds@linux-foundation.org>
-
-commit bddac7c1e02ba47f0570e494c9289acea3062cc1 upstream.
-
-This reverts commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13.
-
-It turns out this breaks at least the ath9k wireless driver, and
-possibly others.
-
-What the ath9k driver does on packet receive is to set up the DMA
-transfer with:
-
-  int ath_rx_init(..)
-  ..
-                bf->bf_buf_addr = dma_map_single(sc->dev, skb->data,
-                                                 common->rx_bufsize,
-                                                 DMA_FROM_DEVICE);
-
-and then the receive logic (through ath_rx_tasklet()) will fetch
-incoming packets
-
-  static bool ath_edma_get_buffers(..)
-  ..
-        dma_sync_single_for_cpu(sc->dev, bf->bf_buf_addr,
-                                common->rx_bufsize, DMA_FROM_DEVICE);
-
-        ret = ath9k_hw_process_rxdesc_edma(ah, rs, skb->data);
-        if (ret == -EINPROGRESS) {
-                /*let device gain the buffer again*/
-                dma_sync_single_for_device(sc->dev, bf->bf_buf_addr,
-                                common->rx_bufsize, DMA_FROM_DEVICE);
-                return false;
-        }
-
-and it's worth noting how that first DMA sync:
-
-    dma_sync_single_for_cpu(..DMA_FROM_DEVICE);
-
-is there to make sure the CPU can read the DMA buffer (possibly by
-copying it from the bounce buffer area, or by doing some cache flush).
-The iommu correctly turns that into a "copy from bounce bufer" so that
-the driver can look at the state of the packets.
-
-In the meantime, the device may continue to write to the DMA buffer, but
-we at least have a snapshot of the state due to that first DMA sync.
-
-But that _second_ DMA sync:
-
-    dma_sync_single_for_device(..DMA_FROM_DEVICE);
-
-is telling the DMA mapping that the CPU wasn't interested in the area
-because the packet wasn't there.  In the case of a DMA bounce buffer,
-that is a no-op.
-
-Note how it's not a sync for the CPU (the "for_device()" part), and it's
-not a sync for data written by the CPU (the "DMA_FROM_DEVICE" part).
-
-Or rather, it _should_ be a no-op.  That's what commit aa6f8dcbab47
-broke: it made the code bounce the buffer unconditionally, and changed
-the DMA_FROM_DEVICE to just unconditionally and illogically be
-DMA_TO_DEVICE.
-
-[ Side note: purely within the confines of the swiotlb driver it wasn't
-  entirely illogical: The reason it did that odd DMA_FROM_DEVICE ->
-  DMA_TO_DEVICE conversion thing is because inside the swiotlb driver,
-  it uses just a swiotlb_bounce() helper that doesn't care about the
-  whole distinction of who the sync is for - only which direction to
-  bounce.
-
-  So it took the "sync for device" to mean that the CPU must have been
-  the one writing, and thought it meant DMA_TO_DEVICE. ]
-
-Also note how the commentary in that commit was wrong, probably due to
-that whole confusion, claiming that the commit makes the swiotlb code
-
-                                  "bounce unconditionally (that is, also
-    when dir == DMA_TO_DEVICE) in order do avoid synchronising back stale
-    data from the swiotlb buffer"
-
-which is nonsensical for two reasons:
-
- - that "also when dir == DMA_TO_DEVICE" is nonsensical, as that was
-   exactly when it always did - and should do - the bounce.
-
- - since this is a sync for the device (not for the CPU), we're clearly
-   fundamentally not coping back stale data from the bounce buffers at
-   all, because we'd be copying *to* the bounce buffers.
-
-So that commit was just very confused.  It confused the direction of the
-synchronization (to the device, not the cpu) with the direction of the
-DMA (from the device).
-
-Reported-and-bisected-by: Oleksandr Natalenko <oleksandr@natalenko.name>
-Reported-by: Olha Cherevyk <olha.cherevyk@gmail.com>
-Cc: Halil Pasic <pasic@linux.ibm.com>
-Cc: Christoph Hellwig <hch@lst.de>
-Cc: Kalle Valo <kvalo@kernel.org>
-Cc: Robin Murphy <robin.murphy@arm.com>
-Cc: Toke Høiland-Jørgensen <toke@toke.dk>
-Cc: Maxime Bizon <mbizon@freebox.fr>
-Cc: Johannes Berg <johannes@sipsolutions.net>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- Documentation/core-api/dma-attributes.rst |    8 ++++++++
- include/linux/dma-mapping.h               |    8 ++++++++
- kernel/dma/swiotlb.c                      |   23 ++++++++---------------
- 3 files changed, 24 insertions(+), 15 deletions(-)
-
---- a/Documentation/core-api/dma-attributes.rst
-+++ b/Documentation/core-api/dma-attributes.rst
-@@ -130,3 +130,11 @@ accesses to DMA buffers in both privileg
- subsystem that the buffer is fully accessible at the elevated privilege
- level (and ideally inaccessible or at least read-only at the
- lesser-privileged levels).
-+
-+DMA_ATTR_OVERWRITE
-+------------------
-+
-+This is a hint to the DMA-mapping subsystem that the device is expected to
-+overwrite the entire mapped size, thus the caller does not require any of the
-+previous buffer contents to be preserved. This allows bounce-buffering
-+implementations to optimise DMA_FROM_DEVICE transfers.
---- a/include/linux/dma-mapping.h
-+++ b/include/linux/dma-mapping.h
-@@ -62,6 +62,14 @@
- #define DMA_ATTR_PRIVILEGED		(1UL << 9)
- 
- /*
-+ * This is a hint to the DMA-mapping subsystem that the device is expected
-+ * to overwrite the entire mapped size, thus the caller does not require any
-+ * of the previous buffer contents to be preserved. This allows
-+ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
-+ */
-+#define DMA_ATTR_OVERWRITE		(1UL << 10)
-+
-+/*
-  * A dma_addr_t can hold any valid DMA or bus address for the platform.  It can
-  * be given to a device to use as a DMA source or target.  It is specific to a
-  * given device and there may be a translation between the CPU physical address
---- a/kernel/dma/swiotlb.c
-+++ b/kernel/dma/swiotlb.c
-@@ -627,14 +627,10 @@ phys_addr_t swiotlb_tbl_map_single(struc
- 	for (i = 0; i < nr_slots(alloc_size + offset); i++)
- 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
- 	tlb_addr = slot_addr(mem->start, index) + offset;
--	/*
--	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
--	 * to the tlb buffer, if we knew for sure the device will
--	 * overwirte the entire current content. But we don't. Thus
--	 * unconditional bounce may prevent leaking swiotlb content (i.e.
--	 * kernel memory) to user-space.
--	 */
--	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
-+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-+	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-+	    dir == DMA_BIDIRECTIONAL))
-+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
- 	return tlb_addr;
- }
- 
-@@ -701,13 +697,10 @@ void swiotlb_tbl_unmap_single(struct dev
- void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
- 		size_t size, enum dma_data_direction dir)
- {
--	/*
--	 * Unconditional bounce is necessary to avoid corruption on
--	 * sync_*_for_cpu or dma_ummap_* when the device didn't overwrite
--	 * the whole lengt of the bounce buffer.
--	 */
--	swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
--	BUG_ON(!valid_dma_direction(dir));
-+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-+		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
-+	else
-+		BUG_ON(dir != DMA_FROM_DEVICE);
- }
- 
- void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,


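[Editor's note: the quoted commit above argues that a sync-for-device with DMA_FROM_DEVICE should be a no-op, while TO_DEVICE and BIDIRECTIONAL syncs must bounce CPU-written data out. A minimal stand-alone sketch of that direction check follows; the enum values and function name are hypothetical stand-ins, not the kernel's actual definitions.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's dma_data_direction values. */
enum dma_data_direction {
	DMA_BIDIRECTIONAL = 0,
	DMA_TO_DEVICE     = 1,
	DMA_FROM_DEVICE   = 2,
};

/* Models the fixed swiotlb_sync_single_for_device() hunk above:
 * bounce data out to the device only when the CPU may actually have
 * written something (TO_DEVICE or BIDIRECTIONAL).  A FROM_DEVICE
 * sync-for-device means "the packet wasn't there, device keeps the
 * buffer", so no copy happens at all. */
static bool sync_for_device_bounces(enum dma_data_direction dir)
{
	return dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL;
}
```

Under this model, the second DMA sync in the ath9k receive path (`dma_sync_single_for_device(.., DMA_FROM_DEVICE)`) correctly becomes a no-op, which is exactly the behavior commit aa6f8dcbab47 broke.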

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-04-08 13:06 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-04-08 13:06 UTC (permalink / raw
  To: gentoo-commits

commit:     0412262fd3d4c3eaf5548c0394307ba532b6ab8f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr  8 13:05:44 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr  8 13:05:44 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0412262f

Removed redundant patch

Removed:
2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/0000_README b/0000_README
index 2aa81fbc..e871f18a 100644
--- a/0000_README
+++ b/0000_README
@@ -131,10 +131,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
-From:   https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
-Desc:   mt76: mt7921e: fix possible probe failure after reboot
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-04-08 13:07 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-04-08 13:07 UTC (permalink / raw
  To: gentoo-commits

commit:     5efc135f497de18be94cffe6ca700a998ab5986b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr  8 13:06:46 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr  8 13:06:46 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5efc135f

Remove redundant patch

2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 ...e-fix-possible-probe-failure-after-reboot.patch | 436 ---------------------
 1 file changed, 436 deletions(-)

diff --git a/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch b/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
deleted file mode 100644
index 4440e910..00000000
--- a/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
+++ /dev/null
@@ -1,436 +0,0 @@
-From patchwork Fri Jan  7 07:30:03 2022
-Content-Type: text/plain; charset="utf-8"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-X-Patchwork-Submitter: Sean Wang <sean.wang@mediatek.com>
-X-Patchwork-Id: 12706336
-X-Patchwork-Delegate: nbd@nbd.name
-Return-Path: <linux-wireless-owner@kernel.org>
-X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
-	aws-us-west-2-korg-lkml-1.web.codeaurora.org
-Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
-	by smtp.lore.kernel.org (Postfix) with ESMTP id 21BAFC433F5
-	for <linux-wireless@archiver.kernel.org>;
- Fri,  7 Jan 2022 07:30:14 +0000 (UTC)
-Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
-        id S234590AbiAGHaN (ORCPT
-        <rfc822;linux-wireless@archiver.kernel.org>);
-        Fri, 7 Jan 2022 02:30:13 -0500
-Received: from mailgw01.mediatek.com ([60.244.123.138]:50902 "EHLO
-        mailgw01.mediatek.com" rhost-flags-OK-FAIL-OK-FAIL) by vger.kernel.org
-        with ESMTP id S234589AbiAGHaM (ORCPT
-        <rfc822;linux-wireless@vger.kernel.org>);
-        Fri, 7 Jan 2022 02:30:12 -0500
-X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
-X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
-Received: from mtkcas10.mediatek.inc [(172.21.101.39)] by
- mailgw01.mediatek.com
-        (envelope-from <sean.wang@mediatek.com>)
-        (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-SHA384 256/256)
-        with ESMTP id 982418171; Fri, 07 Jan 2022 15:30:06 +0800
-Received: from mtkcas10.mediatek.inc (172.21.101.39) by
- mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server
- (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.792.3;
- Fri, 7 Jan 2022 15:30:05 +0800
-Received: from mtkswgap22.mediatek.inc (172.21.77.33) by mtkcas10.mediatek.inc
- (172.21.101.73) with Microsoft SMTP Server id 15.0.1497.2 via Frontend
- Transport; Fri, 7 Jan 2022 15:30:04 +0800
-From: <sean.wang@mediatek.com>
-To: <nbd@nbd.name>, <lorenzo.bianconi@redhat.com>
-CC: <sean.wang@mediatek.com>, <Soul.Huang@mediatek.com>,
-        <YN.Chen@mediatek.com>, <Leon.Yen@mediatek.com>,
-        <Eric-SY.Chang@mediatek.com>, <Mark-YW.Chen@mediatek.com>,
-        <Deren.Wu@mediatek.com>, <km.lin@mediatek.com>,
-        <jenhao.yang@mediatek.com>, <robin.chiu@mediatek.com>,
-        <Eddie.Chen@mediatek.com>, <ch.yeh@mediatek.com>,
-        <posh.sun@mediatek.com>, <ted.huang@mediatek.com>,
-        <Eric.Liang@mediatek.com>, <Stella.Chang@mediatek.com>,
-        <Tom.Chou@mediatek.com>, <steve.lee@mediatek.com>,
-        <jsiuda@google.com>, <frankgor@google.com>, <jemele@google.com>,
-        <abhishekpandit@google.com>, <shawnku@google.com>,
-        <linux-wireless@vger.kernel.org>,
-        <linux-mediatek@lists.infradead.org>,
-        "Deren Wu" <deren.wu@mediatek.com>
-Subject: [PATCH] mt76: mt7921e: fix possible probe failure after reboot
-Date: Fri, 7 Jan 2022 15:30:03 +0800
-Message-ID: 
- <70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com>
-X-Mailer: git-send-email 1.7.9.5
-MIME-Version: 1.0
-X-MTK: N
-Precedence: bulk
-List-ID: <linux-wireless.vger.kernel.org>
-X-Mailing-List: linux-wireless@vger.kernel.org
-
-From: Sean Wang <sean.wang@mediatek.com>
-
-It doesn't guarantee the mt7921e gets started with ASPM L0 after each
-machine reboot on every platform.
-
-If mt7921e gets started with not ASPM L0, it would be possible that the
-driver encounters time to time failure in mt7921_pci_probe, like a
-weird chip identifier is read
-
-[  215.514503] mt7921e 0000:05:00.0: ASIC revision: feed0000
-[  216.604741] mt7921e: probe of 0000:05:00.0 failed with error -110
-
-or failing to init hardware because the driver is not allowed to access the
-register until the device is in ASPM L0 state. So, we call
-__mt7921e_mcu_drv_pmctrl in early mt7921_pci_probe to force the device
-to bring back to the L0 state for we can safely access registers in any
-case.
-
-In the patch, we move all functions from dma.c to pci.c and register mt76
-bus operation earilier, that is the __mt7921e_mcu_drv_pmctrl depends on.
-
-Fixes: bf3747ae2e25 ("mt76: mt7921: enable aspm by default")
-Reported-by: Kai-Chuan Hsieh <kaichuan.hsieh@canonical.com>
-Co-developed-by: Deren Wu <deren.wu@mediatek.com>
-Signed-off-by: Deren Wu <deren.wu@mediatek.com>
-Signed-off-by: Sean Wang <sean.wang@mediatek.com>
----
- .../net/wireless/mediatek/mt76/mt7921/dma.c   | 119 -----------------
- .../wireless/mediatek/mt76/mt7921/mt7921.h    |   1 +
- .../net/wireless/mediatek/mt76/mt7921/pci.c   | 124 ++++++++++++++++++
- .../wireless/mediatek/mt76/mt7921/pci_mcu.c   |  18 ++-
- 4 files changed, 139 insertions(+), 123 deletions(-)
-
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
-index cdff1fd52d93..39d6ce4ecddd 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
-@@ -78,110 +78,6 @@ static void mt7921_dma_prefetch(struct mt7921_dev *dev)
- 	mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
- }
- 
--static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
--{
--	static const struct {
--		u32 phys;
--		u32 mapped;
--		u32 size;
--	} fixed_map[] = {
--		{ 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
--		{ 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
--		{ 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
--		{ 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
--		{ 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
--		{ 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
--		{ 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
--		{ 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
--		{ 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
--		{ 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
--		{ 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
--		{ 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
--		{ 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
--		{ 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
--		{ 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
--		{ 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
--		{ 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
--		{ 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
--		{ 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
--		{ 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
--		{ 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
--		{ 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
--		{ 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
--		{ 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
--		{ 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
--		{ 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
--		{ 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
--		{ 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
--		{ 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
--		{ 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
--		{ 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
--		{ 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
--		{ 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
--		{ 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
--		{ 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
--		{ 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
--		{ 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
--		{ 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
--		{ 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
--		{ 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
--		{ 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
--		{ 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
--		{ 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
--	};
--	int i;
--
--	if (addr < 0x100000)
--		return addr;
--
--	for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
--		u32 ofs;
--
--		if (addr < fixed_map[i].phys)
--			continue;
--
--		ofs = addr - fixed_map[i].phys;
--		if (ofs > fixed_map[i].size)
--			continue;
--
--		return fixed_map[i].mapped + ofs;
--	}
--
--	if ((addr >= 0x18000000 && addr < 0x18c00000) ||
--	    (addr >= 0x70000000 && addr < 0x78000000) ||
--	    (addr >= 0x7c000000 && addr < 0x7c400000))
--		return mt7921_reg_map_l1(dev, addr);
--
--	dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
--		addr);
--
--	return 0;
--}
--
--static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
--{
--	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
--	u32 addr = __mt7921_reg_addr(dev, offset);
--
--	return dev->bus_ops->rr(mdev, addr);
--}
--
--static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
--{
--	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
--	u32 addr = __mt7921_reg_addr(dev, offset);
--
--	dev->bus_ops->wr(mdev, addr, val);
--}
--
--static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
--{
--	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
--	u32 addr = __mt7921_reg_addr(dev, offset);
--
--	return dev->bus_ops->rmw(mdev, addr, mask, val);
--}
--
- static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
- {
- 	if (force) {
-@@ -341,23 +237,8 @@ int mt7921_wpdma_reinit_cond(struct mt7921_dev *dev)
- 
- int mt7921_dma_init(struct mt7921_dev *dev)
- {
--	struct mt76_bus_ops *bus_ops;
- 	int ret;
- 
--	dev->phy.dev = dev;
--	dev->phy.mt76 = &dev->mt76.phy;
--	dev->mt76.phy.priv = &dev->phy;
--	dev->bus_ops = dev->mt76.bus;
--	bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
--			       GFP_KERNEL);
--	if (!bus_ops)
--		return -ENOMEM;
--
--	bus_ops->rr = mt7921_rr;
--	bus_ops->wr = mt7921_wr;
--	bus_ops->rmw = mt7921_rmw;
--	dev->mt76.bus = bus_ops;
--
- 	mt76_dma_attach(&dev->mt76);
- 
- 	ret = mt7921_dma_disable(dev, true);
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
-index 8b674e042568..63e3c7ef5e89 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
-@@ -443,6 +443,7 @@ int mt7921e_mcu_init(struct mt7921_dev *dev);
- int mt7921s_wfsys_reset(struct mt7921_dev *dev);
- int mt7921s_mac_reset(struct mt7921_dev *dev);
- int mt7921s_init_reset(struct mt7921_dev *dev);
-+int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
- int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
- int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev);
- 
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
-index 1ae0d5826ca7..a0c82d19c4d9 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
-@@ -121,6 +121,110 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
- 	mt76_free_device(&dev->mt76);
- }
- 
-+static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
-+{
-+	static const struct {
-+		u32 phys;
-+		u32 mapped;
-+		u32 size;
-+	} fixed_map[] = {
-+		{ 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
-+		{ 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
-+		{ 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
-+		{ 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
-+		{ 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
-+		{ 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
-+		{ 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
-+		{ 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
-+		{ 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
-+		{ 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
-+		{ 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
-+		{ 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
-+		{ 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
-+		{ 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
-+		{ 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
-+		{ 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
-+		{ 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
-+		{ 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
-+		{ 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
-+		{ 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
-+		{ 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
-+		{ 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
-+		{ 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
-+		{ 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
-+		{ 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
-+		{ 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
-+		{ 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
-+		{ 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
-+		{ 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
-+		{ 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
-+		{ 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
-+		{ 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
-+		{ 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
-+		{ 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
-+		{ 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
-+		{ 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
-+		{ 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
-+		{ 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
-+		{ 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
-+		{ 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
-+		{ 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
-+		{ 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
-+		{ 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
-+	};
-+	int i;
-+
-+	if (addr < 0x100000)
-+		return addr;
-+
-+	for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
-+		u32 ofs;
-+
-+		if (addr < fixed_map[i].phys)
-+			continue;
-+
-+		ofs = addr - fixed_map[i].phys;
-+		if (ofs > fixed_map[i].size)
-+			continue;
-+
-+		return fixed_map[i].mapped + ofs;
-+	}
-+
-+	if ((addr >= 0x18000000 && addr < 0x18c00000) ||
-+	    (addr >= 0x70000000 && addr < 0x78000000) ||
-+	    (addr >= 0x7c000000 && addr < 0x7c400000))
-+		return mt7921_reg_map_l1(dev, addr);
-+
-+	dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
-+		addr);
-+
-+	return 0;
-+}
-+
-+static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
-+{
-+	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-+	u32 addr = __mt7921_reg_addr(dev, offset);
-+
-+	return dev->bus_ops->rr(mdev, addr);
-+}
-+
-+static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
-+{
-+	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-+	u32 addr = __mt7921_reg_addr(dev, offset);
-+
-+	dev->bus_ops->wr(mdev, addr, val);
-+}
-+
-+static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
-+{
-+	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-+	u32 addr = __mt7921_reg_addr(dev, offset);
-+
-+	return dev->bus_ops->rmw(mdev, addr, mask, val);
-+}
-+
- static int mt7921_pci_probe(struct pci_dev *pdev,
- 			    const struct pci_device_id *id)
- {
-@@ -152,6 +256,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
- 		.fw_own = mt7921e_mcu_fw_pmctrl,
- 	};
- 
-+	struct mt76_bus_ops *bus_ops;
- 	struct mt7921_dev *dev;
- 	struct mt76_dev *mdev;
- 	int ret;
-@@ -189,6 +294,25 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
- 
- 	mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
- 	tasklet_init(&dev->irq_tasklet, mt7921_irq_tasklet, (unsigned long)dev);
-+
-+	dev->phy.dev = dev;
-+	dev->phy.mt76 = &dev->mt76.phy;
-+	dev->mt76.phy.priv = &dev->phy;
-+	dev->bus_ops = dev->mt76.bus;
-+	bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
-+			       GFP_KERNEL);
-+	if (!bus_ops)
-+		return -ENOMEM;
-+
-+	bus_ops->rr = mt7921_rr;
-+	bus_ops->wr = mt7921_wr;
-+	bus_ops->rmw = mt7921_rmw;
-+	dev->mt76.bus = bus_ops;
-+
-+	ret = __mt7921e_mcu_drv_pmctrl(dev);
-+	if (ret)
-+		return ret;
-+
- 	mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
- 		    (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
- 	dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
-index f9e350b67fdc..36669e5aeef3 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
-@@ -59,10 +59,8 @@ int mt7921e_mcu_init(struct mt7921_dev *dev)
- 	return err;
- }
- 
--int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
-+int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
- {
--	struct mt76_phy *mphy = &dev->mt76.phy;
--	struct mt76_connac_pm *pm = &dev->pm;
- 	int i, err = 0;
- 
- 	for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
-@@ -75,9 +73,21 @@ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
- 	if (i == MT7921_DRV_OWN_RETRY_COUNT) {
- 		dev_err(dev->mt76.dev, "driver own failed\n");
- 		err = -EIO;
--		goto out;
- 	}
- 
-+	return err;
-+}
-+
-+int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
-+{
-+	struct mt76_phy *mphy = &dev->mt76.phy;
-+	struct mt76_connac_pm *pm = &dev->pm;
-+	int err;
-+
-+	err = __mt7921e_mcu_drv_pmctrl(dev);
-+	if (err < 0)
-+		goto out;
-+
- 	mt7921_wpdma_reinit_cond(dev);
- 	clear_bit(MT76_STATE_PM, &mphy->state);
- 


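[Editor's note: the `__mt7921_reg_addr()` function moved by the quoted patch translates a chip physical address into a mapped BAR offset via a fixed window table. A reduced, self-contained sketch of that lookup follows; only three table entries are kept, and the standalone types are illustrative, not the driver's. The boundary comparison (`ofs > size`) mirrors the driver's code as posted.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reduced model of the fixed_map[] window table from the patch above. */
struct map_entry { uint32_t phys, mapped, size; };

static const struct map_entry fixed_map[] = {
	{ 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
	{ 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
	{ 0x54000000, 0x02000, 0x1000 },  /* WFDMA PCIE0 MCU DMA0 */
};

/* Same lookup logic as the driver: addresses below 0x100000 pass
 * through unchanged; otherwise find the window containing addr and
 * rebase it onto the mapped region.  Returns 0 when no window matches
 * (the real driver falls back to L1 remapping or logs an error). */
static uint32_t reg_addr(uint32_t addr)
{
	if (addr < 0x100000)
		return addr;

	for (size_t i = 0; i < sizeof(fixed_map) / sizeof(fixed_map[0]); i++) {
		if (addr < fixed_map[i].phys)
			continue;

		uint32_t ofs = addr - fixed_map[i].phys;
		if (ofs > fixed_map[i].size)
			continue;

		return fixed_map[i].mapped + ofs;
	}

	return 0;
}
```

Registering the remapping bus ops built on this lookup before the first register read is what lets the probe path call `__mt7921e_mcu_drv_pmctrl()` early and safely read the chip ID.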

* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-04-12 17:49 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-04-12 17:49 UTC (permalink / raw
  To: gentoo-commits

commit:     678cd9fb011788da6f318021ec4cf8b0691a635e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 12 17:38:27 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 12 17:48:57 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=678cd9fb

Select AUTOFS_FS when GENTOO_LINUX_INIT_SYSTEMD selected

Bug: https://bugs.gentoo.org/838082

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 3712fa96..9eefdc31 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig	2021-06-04 19:03:33.646823432 -0400
-+++ b/Kconfig	2021-06-04 19:03:40.508892817 -0400
+--- a/Kconfig	2022-04-12 13:11:48.403113171 -0400
++++ b/Kconfig	2022-04-12 13:12:36.530084675 -0400
 @@ -30,3 +30,5 @@ source "lib/Kconfig"
  source "lib/Kconfig.debug"
  
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2022-01-29 13:28:12.679255142 -0500
-+++ b/distro/Kconfig	2022-01-29 15:29:29.800465617 -0500
-@@ -0,0 +1,285 @@
+--- /dev/null	2022-04-12 05:39:54.696333295 -0400
++++ b/distro/Kconfig	2022-04-12 13:21:04.666379519 -0400
+@@ -0,0 +1,286 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -122,6 +122,7 @@
 +	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
 +
 +	select AUTOFS4_FS
++	select AUTOFS_FS
 +	select BLK_DEV_BSG
 +	select BPF_SYSCALL
 +	select CGROUP_BPF



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-04-12 19:14 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-04-12 19:14 UTC (permalink / raw
  To: gentoo-commits

commit:     39441839807148c21766d10727dbd2b07bc16e16
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 12 19:14:41 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 12 19:14:41 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=39441839

Remove deprecated select AUTOFS4_FS

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 9eefdc31..9843c3e2 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -8,7 +8,7 @@
 +source "distro/Kconfig"
 --- /dev/null	2022-04-12 05:39:54.696333295 -0400
 +++ b/distro/Kconfig	2022-04-12 13:21:04.666379519 -0400
-@@ -0,0 +1,286 @@
+@@ -0,0 +1,285 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -121,7 +121,6 @@
 +
 +	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
 +
-+	select AUTOFS4_FS
 +	select AUTOFS_FS
 +	select BLK_DEV_BSG
 +	select BPF_SYSCALL



* [gentoo-commits] proj/linux-patches:5.16 commit in: /
@ 2022-04-13 18:33 Mike Pagano
  0 siblings, 0 replies; 35+ messages in thread
From: Mike Pagano @ 2022-04-13 18:33 UTC (permalink / raw
  To: gentoo-commits

commit:     75a4ab5f071f956d3c17efd019ee4e5bfb430134
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 13 18:33:41 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 13 18:33:41 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=75a4ab5f

Linux patch 5.16.20

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1019_linux-5.16.20.patch | 12366 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12370 insertions(+)

diff --git a/0000_README b/0000_README
index e871f18a..fbdb093a 100644
--- a/0000_README
+++ b/0000_README
@@ -119,6 +119,10 @@ Patch:  1018_linux-5.16.19.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.16.19
 
+Patch:  1019_linux-5.16.20.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.16.20
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1019_linux-5.16.20.patch b/1019_linux-5.16.20.patch
new file mode 100644
index 00000000..01f8abd8
--- /dev/null
+++ b/1019_linux-5.16.20.patch
@@ -0,0 +1,12366 @@
+diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
+index 60a29972d3f1b..d063aaee5bb73 100644
+--- a/Documentation/virt/kvm/devices/vcpu.rst
++++ b/Documentation/virt/kvm/devices/vcpu.rst
+@@ -70,7 +70,7 @@ irqchip.
+ 	 -ENODEV  PMUv3 not supported or GIC not initialized
+ 	 -ENXIO   PMUv3 not properly configured or in-kernel irqchip not
+ 	 	  configured as required prior to calling this attribute
+-	 -EBUSY   PMUv3 already initialized
++	 -EBUSY   PMUv3 already initialized or a VCPU has already run
+ 	 -EINVAL  Invalid filter range
+ 	 =======  ======================================================
+ 
+diff --git a/Makefile b/Makefile
+index c16a8a2ffd73e..1e8d249c996b1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 16
+-SUBLEVEL = 19
++SUBLEVEL = 20
+ EXTRAVERSION =
+ NAME = Gobble Gobble
+ 
+diff --git a/arch/alpha/kernel/rtc.c b/arch/alpha/kernel/rtc.c
+index ce3077946e1d9..fb3025396ac96 100644
+--- a/arch/alpha/kernel/rtc.c
++++ b/arch/alpha/kernel/rtc.c
+@@ -80,7 +80,12 @@ init_rtc_epoch(void)
+ static int
+ alpha_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ {
+-	mc146818_get_time(tm);
++	int ret = mc146818_get_time(tm);
++
++	if (ret < 0) {
++		dev_err_ratelimited(dev, "unable to read current time\n");
++		return ret;
++	}
+ 
+ 	/* Adjust for non-default epochs.  It's easier to depend on the
+ 	   generic __get_rtc_time and adjust the epoch here than create
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index bfbf0c4c7c5e5..39f5c1672f480 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -75,6 +75,7 @@
+ #define ARM_CPU_PART_CORTEX_A77		0xD0D
+ #define ARM_CPU_PART_NEOVERSE_V1	0xD40
+ #define ARM_CPU_PART_CORTEX_A78		0xD41
++#define ARM_CPU_PART_CORTEX_A78AE	0xD42
+ #define ARM_CPU_PART_CORTEX_X1		0xD44
+ #define ARM_CPU_PART_CORTEX_A510	0xD46
+ #define ARM_CPU_PART_CORTEX_A710	0xD47
+@@ -123,6 +124,7 @@
+ #define MIDR_CORTEX_A77	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
+ #define MIDR_NEOVERSE_V1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
+ #define MIDR_CORTEX_A78	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
++#define MIDR_CORTEX_A78AE	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
+ #define MIDR_CORTEX_X1	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
+ #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index d016f27af6da9..8c7ba346d713e 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -137,6 +137,7 @@ struct kvm_arch {
+ 
+ 	/* Memory Tagging Extension enabled for the guest */
+ 	bool mte_enabled;
++	bool ran_once;
+ };
+ 
+ struct kvm_vcpu_fault_info {
+diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
+index 771f543464e06..33e0fabc0b79b 100644
+--- a/arch/arm64/kernel/patching.c
++++ b/arch/arm64/kernel/patching.c
+@@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
+ 	int i, ret = 0;
+ 	struct aarch64_insn_patch *pp = arg;
+ 
+-	/* The first CPU becomes master */
+-	if (atomic_inc_return(&pp->cpu_count) == 1) {
++	/* The last CPU becomes master */
++	if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
+ 		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
+ 			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
+ 							     pp->new_insns[i]);
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 5777929d35bf4..40be3a7c2c531 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -853,6 +853,7 @@ u8 spectre_bhb_loop_affected(int scope)
+ 	if (scope == SCOPE_LOCAL_CPU) {
+ 		static const struct midr_range spectre_bhb_k32_list[] = {
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+ 			MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 27df5c1e6baad..3b46041f2b978 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -234,6 +234,7 @@ asmlinkage notrace void secondary_start_kernel(void)
+ 	 * Log the CPU info before it is marked online and might get read.
+ 	 */
+ 	cpuinfo_store_cpu();
++	store_cpu_topology(cpu);
+ 
+ 	/*
+ 	 * Enable GIC and timers.
+@@ -242,7 +243,6 @@ asmlinkage notrace void secondary_start_kernel(void)
+ 
+ 	ipi_setup(cpu);
+ 
+-	store_cpu_topology(cpu);
+ 	numa_add_cpu(cpu);
+ 
+ 	/*
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 1eadf9088880a..6f3be4d44abe0 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -629,6 +629,10 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
+ 	if (kvm_vm_is_protected(kvm))
+ 		kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
+ 
++	mutex_lock(&kvm->lock);
++	kvm->arch.ran_once = true;
++	mutex_unlock(&kvm->lock);
++
+ 	return ret;
+ }
+ 
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index a5e4bbf5e68f9..c996fc562da4a 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -921,6 +921,8 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
+ 
+ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ {
++	struct kvm *kvm = vcpu->kvm;
++
+ 	if (!kvm_vcpu_has_pmu(vcpu))
+ 		return -ENODEV;
+ 
+@@ -938,7 +940,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ 		int __user *uaddr = (int __user *)(long)attr->addr;
+ 		int irq;
+ 
+-		if (!irqchip_in_kernel(vcpu->kvm))
++		if (!irqchip_in_kernel(kvm))
+ 			return -EINVAL;
+ 
+ 		if (get_user(irq, uaddr))
+@@ -948,7 +950,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ 		if (!(irq_is_ppi(irq) || irq_is_spi(irq)))
+ 			return -EINVAL;
+ 
+-		if (!pmu_irq_is_valid(vcpu->kvm, irq))
++		if (!pmu_irq_is_valid(kvm, irq))
+ 			return -EINVAL;
+ 
+ 		if (kvm_arm_pmu_irq_initialized(vcpu))
+@@ -963,7 +965,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ 		struct kvm_pmu_event_filter filter;
+ 		int nr_events;
+ 
+-		nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
++		nr_events = kvm_pmu_event_mask(kvm) + 1;
+ 
+ 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
+ 
+@@ -975,12 +977,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ 		     filter.action != KVM_PMU_EVENT_DENY))
+ 			return -EINVAL;
+ 
+-		mutex_lock(&vcpu->kvm->lock);
++		mutex_lock(&kvm->lock);
++
++		if (kvm->arch.ran_once) {
++			mutex_unlock(&kvm->lock);
++			return -EBUSY;
++		}
+ 
+-		if (!vcpu->kvm->arch.pmu_filter) {
+-			vcpu->kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
+-			if (!vcpu->kvm->arch.pmu_filter) {
+-				mutex_unlock(&vcpu->kvm->lock);
++		if (!kvm->arch.pmu_filter) {
++			kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
++			if (!kvm->arch.pmu_filter) {
++				mutex_unlock(&kvm->lock);
+ 				return -ENOMEM;
+ 			}
+ 
+@@ -991,17 +998,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ 			 * events, the default is to allow.
+ 			 */
+ 			if (filter.action == KVM_PMU_EVENT_ALLOW)
+-				bitmap_zero(vcpu->kvm->arch.pmu_filter, nr_events);
++				bitmap_zero(kvm->arch.pmu_filter, nr_events);
+ 			else
+-				bitmap_fill(vcpu->kvm->arch.pmu_filter, nr_events);
++				bitmap_fill(kvm->arch.pmu_filter, nr_events);
+ 		}
+ 
+ 		if (filter.action == KVM_PMU_EVENT_ALLOW)
+-			bitmap_set(vcpu->kvm->arch.pmu_filter, filter.base_event, filter.nevents);
++			bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
+ 		else
+-			bitmap_clear(vcpu->kvm->arch.pmu_filter, filter.base_event, filter.nevents);
++			bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
+ 
+-		mutex_unlock(&vcpu->kvm->lock);
++		mutex_unlock(&kvm->lock);
+ 
+ 		return 0;
+ 	}
+diff --git a/arch/mips/boot/dts/ingenic/jz4780.dtsi b/arch/mips/boot/dts/ingenic/jz4780.dtsi
+index b0a4e2e019c36..af6cd32c706c7 100644
+--- a/arch/mips/boot/dts/ingenic/jz4780.dtsi
++++ b/arch/mips/boot/dts/ingenic/jz4780.dtsi
+@@ -470,7 +470,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+-			eth0_addr: eth-mac-addr@0x22 {
++			eth0_addr: eth-mac-addr@22 {
+ 				reg = <0x22 0x6>;
+ 			};
+ 		};
+diff --git a/arch/mips/include/asm/setup.h b/arch/mips/include/asm/setup.h
+index bb36a400203df..8c56b862fd9c2 100644
+--- a/arch/mips/include/asm/setup.h
++++ b/arch/mips/include/asm/setup.h
+@@ -16,7 +16,7 @@ static inline void setup_8250_early_printk_port(unsigned long base,
+ 	unsigned int reg_shift, unsigned int timeout) {}
+ #endif
+ 
+-extern void set_handler(unsigned long offset, void *addr, unsigned long len);
++void set_handler(unsigned long offset, const void *addr, unsigned long len);
+ extern void set_uncached_handler(unsigned long offset, void *addr, unsigned long len);
+ 
+ typedef void (*vi_handler_t)(void);
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index d26b0fb8ea067..b9b31b13062d6 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -2091,19 +2091,19 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
+ 		 * If no shadow set is selected then use the default handler
+ 		 * that does normal register saving and standard interrupt exit
+ 		 */
+-		extern char except_vec_vi, except_vec_vi_lui;
+-		extern char except_vec_vi_ori, except_vec_vi_end;
+-		extern char rollback_except_vec_vi;
+-		char *vec_start = using_rollback_handler() ?
+-			&rollback_except_vec_vi : &except_vec_vi;
++		extern const u8 except_vec_vi[], except_vec_vi_lui[];
++		extern const u8 except_vec_vi_ori[], except_vec_vi_end[];
++		extern const u8 rollback_except_vec_vi[];
++		const u8 *vec_start = using_rollback_handler() ?
++				      rollback_except_vec_vi : except_vec_vi;
+ #if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_BIG_ENDIAN)
+-		const int lui_offset = &except_vec_vi_lui - vec_start + 2;
+-		const int ori_offset = &except_vec_vi_ori - vec_start + 2;
++		const int lui_offset = except_vec_vi_lui - vec_start + 2;
++		const int ori_offset = except_vec_vi_ori - vec_start + 2;
+ #else
+-		const int lui_offset = &except_vec_vi_lui - vec_start;
+-		const int ori_offset = &except_vec_vi_ori - vec_start;
++		const int lui_offset = except_vec_vi_lui - vec_start;
++		const int ori_offset = except_vec_vi_ori - vec_start;
+ #endif
+-		const int handler_len = &except_vec_vi_end - vec_start;
++		const int handler_len = except_vec_vi_end - vec_start;
+ 
+ 		if (handler_len > VECTORSPACING) {
+ 			/*
+@@ -2311,7 +2311,7 @@ void per_cpu_trap_init(bool is_boot_cpu)
+ }
+ 
+ /* Install CPU exception handler */
+-void set_handler(unsigned long offset, void *addr, unsigned long size)
++void set_handler(unsigned long offset, const void *addr, unsigned long size)
+ {
+ #ifdef CONFIG_CPU_MICROMIPS
+ 	memcpy((void *)(ebase + offset), ((unsigned char *)addr - 1), size);
+diff --git a/arch/mips/ralink/ill_acc.c b/arch/mips/ralink/ill_acc.c
+index bdf53807d7c2b..bea857c9da8b7 100644
+--- a/arch/mips/ralink/ill_acc.c
++++ b/arch/mips/ralink/ill_acc.c
+@@ -61,6 +61,7 @@ static int __init ill_acc_of_setup(void)
+ 	pdev = of_find_device_by_node(np);
+ 	if (!pdev) {
+ 		pr_err("%pOFn: failed to lookup pdev\n", np);
++		of_node_put(np);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/parisc/kernel/patch.c b/arch/parisc/kernel/patch.c
+index 80a0ab372802d..e59574f65e641 100644
+--- a/arch/parisc/kernel/patch.c
++++ b/arch/parisc/kernel/patch.c
+@@ -40,10 +40,7 @@ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags,
+ 
+ 	*need_unmap = 1;
+ 	set_fixmap(fixmap, page_to_phys(page));
+-	if (flags)
+-		raw_spin_lock_irqsave(&patch_lock, *flags);
+-	else
+-		__acquire(&patch_lock);
++	raw_spin_lock_irqsave(&patch_lock, *flags);
+ 
+ 	return (void *) (__fix_to_virt(fixmap) + (uintaddr & ~PAGE_MASK));
+ }
+@@ -52,10 +49,7 @@ static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
+ {
+ 	clear_fixmap(fixmap);
+ 
+-	if (flags)
+-		raw_spin_unlock_irqrestore(&patch_lock, *flags);
+-	else
+-		__release(&patch_lock);
++	raw_spin_unlock_irqrestore(&patch_lock, *flags);
+ }
+ 
+ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+@@ -67,8 +61,9 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ 	int mapped;
+ 
+ 	/* Make sure we don't have any aliases in cache */
+-	flush_kernel_vmap_range(addr, len);
+-	flush_icache_range(start, end);
++	flush_kernel_dcache_range_asm(start, end);
++	flush_kernel_icache_range_asm(start, end);
++	flush_tlb_kernel_range(start, end);
+ 
+ 	p = fixmap = patch_map(addr, FIX_TEXT_POKE0, &flags, &mapped);
+ 
+@@ -81,8 +76,10 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ 			 * We're crossing a page boundary, so
+ 			 * need to remap
+ 			 */
+-			flush_kernel_vmap_range((void *)fixmap,
+-						(p-fixmap) * sizeof(*p));
++			flush_kernel_dcache_range_asm((unsigned long)fixmap,
++						      (unsigned long)p);
++			flush_tlb_kernel_range((unsigned long)fixmap,
++					       (unsigned long)p);
+ 			if (mapped)
+ 				patch_unmap(FIX_TEXT_POKE0, &flags);
+ 			p = fixmap = patch_map(addr, FIX_TEXT_POKE0, &flags,
+@@ -90,10 +87,10 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ 		}
+ 	}
+ 
+-	flush_kernel_vmap_range((void *)fixmap, (p-fixmap) * sizeof(*p));
++	flush_kernel_dcache_range_asm((unsigned long)fixmap, (unsigned long)p);
++	flush_tlb_kernel_range((unsigned long)fixmap, (unsigned long)p);
+ 	if (mapped)
+ 		patch_unmap(FIX_TEXT_POKE0, &flags);
+-	flush_icache_range(start, end);
+ }
+ 
+ void __kprobes __patch_text(void *addr, u32 insn)
+diff --git a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
+index 099a598c74c00..bfe1ed5be3374 100644
+--- a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
++++ b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
+@@ -139,12 +139,12 @@
+ 		fman@400000 {
+ 			ethernet@e6000 {
+ 				phy-handle = <&phy_rgmii_0>;
+-				phy-connection-type = "rgmii";
++				phy-connection-type = "rgmii-id";
+ 			};
+ 
+ 			ethernet@e8000 {
+ 				phy-handle = <&phy_rgmii_1>;
+-				phy-connection-type = "rgmii";
++				phy-connection-type = "rgmii-id";
+ 			};
+ 
+ 			mdio0: mdio@fc000 {
+diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
+index a1d238255f077..a07960066b5fa 100644
+--- a/arch/powerpc/include/asm/interrupt.h
++++ b/arch/powerpc/include/asm/interrupt.h
+@@ -567,7 +567,7 @@ DECLARE_INTERRUPT_HANDLER_RAW(do_slb_fault);
+ DECLARE_INTERRUPT_HANDLER(do_bad_slb_fault);
+ 
+ /* hash_utils.c */
+-DECLARE_INTERRUPT_HANDLER_RAW(do_hash_fault);
++DECLARE_INTERRUPT_HANDLER(do_hash_fault);
+ 
+ /* fault.c */
+ DECLARE_INTERRUPT_HANDLER(do_page_fault);
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index 254687258f42b..f2c5c26869f1a 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -132,7 +132,11 @@ static inline bool pfn_valid(unsigned long pfn)
+ #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
+ #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
+ 
+-#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
++#define virt_addr_valid(vaddr)	({					\
++	unsigned long _addr = (unsigned long)vaddr;			\
++	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
++	pfn_valid(virt_to_pfn(_addr));					\
++})
+ 
+ /*
+  * On Book-E parts we need __va to parse the device tree and we can't
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index ff80bbad22a58..e18a725a8e5d3 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -1235,6 +1235,12 @@ int __init early_init_dt_scan_rtas(unsigned long node,
+ 	entryp = of_get_flat_dt_prop(node, "linux,rtas-entry", NULL);
+ 	sizep  = of_get_flat_dt_prop(node, "rtas-size", NULL);
+ 
++#ifdef CONFIG_PPC64
++	/* need this feature to decide the crashkernel offset */
++	if (of_get_flat_dt_prop(node, "ibm,hypertas-functions", NULL))
++		powerpc_firmware_features |= FW_FEATURE_LPAR;
++#endif
++
+ 	if (basep && entryp && sizep) {
+ 		rtas.base = *basep;
+ 		rtas.entry = *entryp;
+diff --git a/arch/powerpc/kernel/secvar-sysfs.c b/arch/powerpc/kernel/secvar-sysfs.c
+index a0a78aba2083e..1ee4640a26413 100644
+--- a/arch/powerpc/kernel/secvar-sysfs.c
++++ b/arch/powerpc/kernel/secvar-sysfs.c
+@@ -26,15 +26,18 @@ static ssize_t format_show(struct kobject *kobj, struct kobj_attribute *attr,
+ 	const char *format;
+ 
+ 	node = of_find_compatible_node(NULL, NULL, "ibm,secvar-backend");
+-	if (!of_device_is_available(node))
+-		return -ENODEV;
++	if (!of_device_is_available(node)) {
++		rc = -ENODEV;
++		goto out;
++	}
+ 
+ 	rc = of_property_read_string(node, "format", &format);
+ 	if (rc)
+-		return rc;
++		goto out;
+ 
+ 	rc = sprintf(buf, "%s\n", format);
+ 
++out:
+ 	of_node_put(node);
+ 
+ 	return rc;
+diff --git a/arch/powerpc/kexec/core.c b/arch/powerpc/kexec/core.c
+index a2242017e55f6..11b327fa135c8 100644
+--- a/arch/powerpc/kexec/core.c
++++ b/arch/powerpc/kexec/core.c
+@@ -134,11 +134,18 @@ void __init reserve_crashkernel(void)
+ 	if (!crashk_res.start) {
+ #ifdef CONFIG_PPC64
+ 		/*
+-		 * On 64bit we split the RMO in half but cap it at half of
+-		 * a small SLB (128MB) since the crash kernel needs to place
+-		 * itself and some stacks to be in the first segment.
++		 * On the LPAR platform place the crash kernel to mid of
++		 * RMA size (512MB or more) to ensure the crash kernel
++		 * gets enough space to place itself and some stack to be
++		 * in the first segment. At the same time normal kernel
++		 * also get enough space to allocate memory for essential
++		 * system resource in the first segment. Keep the crash
++		 * kernel starts at 128MB offset on other platforms.
+ 		 */
+-		crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
++		if (firmware_has_feature(FW_FEATURE_LPAR))
++			crashk_res.start = ppc64_rma_size / 2;
++		else
++			crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
+ #else
+ 		crashk_res.start = KDUMP_KERNELBASE;
+ #endif
+diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S
+index 983b8c18bc31e..a644003603da1 100644
+--- a/arch/powerpc/kvm/book3s_64_entry.S
++++ b/arch/powerpc/kvm/book3s_64_entry.S
+@@ -407,10 +407,16 @@ END_FTR_SECTION_IFSET(CPU_FTR_DAWR1)
+ 	 */
+ 	ld	r10,HSTATE_SCRATCH0(r13)
+ 	cmpwi	r10,BOOK3S_INTERRUPT_MACHINE_CHECK
+-	beq	machine_check_common
++	beq	.Lcall_machine_check_common
+ 
+ 	cmpwi	r10,BOOK3S_INTERRUPT_SYSTEM_RESET
+-	beq	system_reset_common
++	beq	.Lcall_system_reset_common
+ 
+ 	b	.
++
++.Lcall_machine_check_common:
++	b	machine_check_common
++
++.Lcall_system_reset_common:
++	b	system_reset_common
+ #endif
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index cfd45245d0093..f77fd4428db31 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -1522,8 +1522,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
+ }
+ EXPORT_SYMBOL_GPL(hash_page);
+ 
+-DECLARE_INTERRUPT_HANDLER(__do_hash_fault);
+-DEFINE_INTERRUPT_HANDLER(__do_hash_fault)
++DEFINE_INTERRUPT_HANDLER(do_hash_fault)
+ {
+ 	unsigned long ea = regs->dar;
+ 	unsigned long dsisr = regs->dsisr;
+@@ -1582,35 +1581,6 @@ DEFINE_INTERRUPT_HANDLER(__do_hash_fault)
+ 	}
+ }
+ 
+-/*
+- * The _RAW interrupt entry checks for the in_nmi() case before
+- * running the full handler.
+- */
+-DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
+-{
+-	/*
+-	 * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
+-	 * don't call hash_page, just fail the fault. This is required to
+-	 * prevent re-entrancy problems in the hash code, namely perf
+-	 * interrupts hitting while something holds H_PAGE_BUSY, and taking a
+-	 * hash fault. See the comment in hash_preload().
+-	 *
+-	 * We come here as a result of a DSI at a point where we don't want
+-	 * to call hash_page, such as when we are accessing memory (possibly
+-	 * user memory) inside a PMU interrupt that occurred while interrupts
+-	 * were soft-disabled.  We want to invoke the exception handler for
+-	 * the access, or panic if there isn't a handler.
+-	 */
+-	if (unlikely(in_nmi())) {
+-		do_bad_page_fault_segv(regs);
+-		return 0;
+-	}
+-
+-	__do_hash_fault(regs);
+-
+-	return 0;
+-}
+-
+ #ifdef CONFIG_PPC_MM_SLICES
+ static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
+ {
+@@ -1677,26 +1647,18 @@ static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea,
+ #endif /* CONFIG_PPC_64K_PAGES */
+ 
+ 	/*
+-	 * __hash_page_* must run with interrupts off, as it sets the
+-	 * H_PAGE_BUSY bit. It's possible for perf interrupts to hit at any
+-	 * time and may take a hash fault reading the user stack, see
+-	 * read_user_stack_slow() in the powerpc/perf code.
+-	 *
+-	 * If that takes a hash fault on the same page as we lock here, it
+-	 * will bail out when seeing H_PAGE_BUSY set, and retry the access
+-	 * leading to an infinite loop.
++	 * __hash_page_* must run with interrupts off, including PMI interrupts
++	 * off, as it sets the H_PAGE_BUSY bit.
+ 	 *
+-	 * Disabling interrupts here does not prevent perf interrupts, but it
+-	 * will prevent them taking hash faults (see the NMI test in
+-	 * do_hash_page), then read_user_stack's copy_from_user_nofault will
+-	 * fail and perf will fall back to read_user_stack_slow(), which
+-	 * walks the Linux page tables.
++	 * It's otherwise possible for perf interrupts to hit at any time and
++	 * may take a hash fault reading the user stack, which could take a
++	 * hash miss and deadlock on the same H_PAGE_BUSY bit.
+ 	 *
+ 	 * Interrupts must also be off for the duration of the
+ 	 * mm_is_thread_local test and update, to prevent preempt running the
+ 	 * mm on another CPU (XXX: this may be racy vs kthread_use_mm).
+ 	 */
+-	local_irq_save(flags);
++	powerpc_local_irq_pmu_save(flags);
+ 
+ 	/* Is that local to this CPU ? */
+ 	if (mm_is_thread_local(mm))
+@@ -1721,7 +1683,7 @@ static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea,
+ 				   mm_ctx_user_psize(&mm->context),
+ 				   pte_val(*ptep));
+ 
+-	local_irq_restore(flags);
++	powerpc_local_irq_pmu_restore(flags);
+ }
+ 
+ /*
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index bd5d91a31183b..05b9c3f31456c 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -256,7 +256,7 @@ void __init mem_init(void)
+ #endif
+ 
+ 	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
+-	set_max_mapnr(max_low_pfn);
++	set_max_mapnr(max_pfn);
+ 
+ 	kasan_late_init();
+ 
+diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
+index 3bb9d168e3b31..85753e32a4de9 100644
+--- a/arch/powerpc/mm/pageattr.c
++++ b/arch/powerpc/mm/pageattr.c
+@@ -15,12 +15,14 @@
+ #include <asm/pgtable.h>
+ 
+ 
++static pte_basic_t pte_update_delta(pte_t *ptep, unsigned long addr,
++				    unsigned long old, unsigned long new)
++{
++	return pte_update(&init_mm, addr, ptep, old & ~new, new & ~old, 0);
++}
++
+ /*
+- * Updates the attributes of a page in three steps:
+- *
+- * 1. take the page_table_lock
+- * 2. install the new entry with the updated attributes
+- * 3. flush the TLB
++ * Updates the attributes of a page atomically.
+  *
+  * This sequence is safe against concurrent updates, and also allows updating the
+  * attributes of a page currently being executed or accessed.
+@@ -28,25 +30,21 @@
+ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+ {
+ 	long action = (long)data;
+-	pte_t pte;
+ 
+-	spin_lock(&init_mm.page_table_lock);
+-
+-	pte = ptep_get(ptep);
+-
+-	/* modify the PTE bits as desired, then apply */
++	/* modify the PTE bits as desired */
+ 	switch (action) {
+ 	case SET_MEMORY_RO:
+-		pte = pte_wrprotect(pte);
++		/* Don't clear DIRTY bit */
++		pte_update_delta(ptep, addr, _PAGE_KERNEL_RW & ~_PAGE_DIRTY, _PAGE_KERNEL_RO);
+ 		break;
+ 	case SET_MEMORY_RW:
+-		pte = pte_mkwrite(pte_mkdirty(pte));
++		pte_update_delta(ptep, addr, _PAGE_KERNEL_RO, _PAGE_KERNEL_RW);
+ 		break;
+ 	case SET_MEMORY_NX:
+-		pte = pte_exprotect(pte);
++		pte_update_delta(ptep, addr, _PAGE_KERNEL_ROX, _PAGE_KERNEL_RO);
+ 		break;
+ 	case SET_MEMORY_X:
+-		pte = pte_mkexec(pte);
++		pte_update_delta(ptep, addr, _PAGE_KERNEL_RO, _PAGE_KERNEL_ROX);
+ 		break;
+ 	case SET_MEMORY_NP:
+ 		pte_update(&init_mm, addr, ptep, _PAGE_PRESENT, 0, 0);
+@@ -59,16 +57,12 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+ 		break;
+ 	}
+ 
+-	pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);
+-
+ 	/* See ptesync comment in radix__set_pte_at() */
+ 	if (radix_enabled())
+ 		asm volatile("ptesync": : :"memory");
+ 
+ 	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ 
+-	spin_unlock(&init_mm.page_table_lock);
+-
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h
+index d6fa6e25234f4..19a8d051ddf10 100644
+--- a/arch/powerpc/perf/callchain.h
++++ b/arch/powerpc/perf/callchain.h
+@@ -2,7 +2,6 @@
+ #ifndef _POWERPC_PERF_CALLCHAIN_H
+ #define _POWERPC_PERF_CALLCHAIN_H
+ 
+-int read_user_stack_slow(const void __user *ptr, void *buf, int nb);
+ void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+ 			    struct pt_regs *regs);
+ void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+@@ -26,17 +25,11 @@ static inline int __read_user_stack(const void __user *ptr, void *ret,
+ 				    size_t size)
+ {
+ 	unsigned long addr = (unsigned long)ptr;
+-	int rc;
+ 
+ 	if (addr > TASK_SIZE - size || (addr & (size - 1)))
+ 		return -EFAULT;
+ 
+-	rc = copy_from_user_nofault(ret, ptr, size);
+-
+-	if (IS_ENABLED(CONFIG_PPC64) && !radix_enabled() && rc)
+-		return read_user_stack_slow(ptr, ret, size);
+-
+-	return rc;
++	return copy_from_user_nofault(ret, ptr, size);
+ }
+ 
+ #endif /* _POWERPC_PERF_CALLCHAIN_H */
+diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
+index 8d0df4226328d..488e8a21a11ea 100644
+--- a/arch/powerpc/perf/callchain_64.c
++++ b/arch/powerpc/perf/callchain_64.c
+@@ -18,33 +18,6 @@
+ 
+ #include "callchain.h"
+ 
+-/*
+- * On 64-bit we don't want to invoke hash_page on user addresses from
+- * interrupt context, so if the access faults, we read the page tables
+- * to find which page (if any) is mapped and access it directly. Radix
+- * has no need for this so it doesn't use read_user_stack_slow.
+- */
+-int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
+-{
+-
+-	unsigned long addr = (unsigned long) ptr;
+-	unsigned long offset;
+-	struct page *page;
+-	void *kaddr;
+-
+-	if (get_user_page_fast_only(addr, FOLL_WRITE, &page)) {
+-		kaddr = page_address(page);
+-
+-		/* align address to page boundary */
+-		offset = addr & ~PAGE_MASK;
+-
+-		memcpy(buf, kaddr + offset, nb);
+-		put_page(page);
+-		return 0;
+-	}
+-	return -EFAULT;
+-}
+-
+ static int read_user_stack_64(const unsigned long __user *ptr, unsigned long *ret)
+ {
+ 	return __read_user_stack(ptr, ret, sizeof(*ret));
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index a208997ade88b..87a95cbff2f3f 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -111,6 +111,7 @@ config PPC_BOOK3S_64
+ 
+ config PPC_BOOK3E_64
+ 	bool "Embedded processors"
++	select PPC_FSL_BOOK3E
+ 	select PPC_FPU # Make it a choice ?
+ 	select PPC_SMP_MUXED_IPI
+ 	select PPC_DOORBELL
+@@ -287,7 +288,7 @@ config FSL_BOOKE
+ config PPC_FSL_BOOK3E
+ 	bool
+ 	select ARCH_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64
+-	select FSL_EMB_PERFMON
++	imply FSL_EMB_PERFMON
+ 	select PPC_SMP_MUXED_IPI
+ 	select PPC_DOORBELL
+ 	default y if FSL_BOOKE
+diff --git a/arch/riscv/lib/memmove.S b/arch/riscv/lib/memmove.S
+index 07d1d2152ba5c..e0609e1f0864d 100644
+--- a/arch/riscv/lib/memmove.S
++++ b/arch/riscv/lib/memmove.S
+@@ -1,64 +1,316 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (C) 2022 Michael T. Kloos <michael@michaelkloos.com>
++ */
+ 
+ #include <linux/linkage.h>
+ #include <asm/asm.h>
+ 
+-ENTRY(__memmove)
+-WEAK(memmove)
+-        move    t0, a0
+-        move    t1, a1
+-
+-        beq     a0, a1, exit_memcpy
+-        beqz    a2, exit_memcpy
+-        srli    t2, a2, 0x2
+-
+-        slt     t3, a0, a1
+-        beqz    t3, do_reverse
+-
+-        andi    a2, a2, 0x3
+-        li      t4, 1
+-        beqz    t2, byte_copy
+-
+-word_copy:
+-        lw      t3, 0(a1)
+-        addi    t2, t2, -1
+-        addi    a1, a1, 4
+-        sw      t3, 0(a0)
+-        addi    a0, a0, 4
+-        bnez    t2, word_copy
+-        beqz    a2, exit_memcpy
+-        j       byte_copy
+-
+-do_reverse:
+-        add     a0, a0, a2
+-        add     a1, a1, a2
+-        andi    a2, a2, 0x3
+-        li      t4, -1
+-        beqz    t2, reverse_byte_copy
+-
+-reverse_word_copy:
+-        addi    a1, a1, -4
+-        addi    t2, t2, -1
+-        lw      t3, 0(a1)
+-        addi    a0, a0, -4
+-        sw      t3, 0(a0)
+-        bnez    t2, reverse_word_copy
+-        beqz    a2, exit_memcpy
+-
+-reverse_byte_copy:
+-        addi    a0, a0, -1
+-        addi    a1, a1, -1
++SYM_FUNC_START(__memmove)
++SYM_FUNC_START_WEAK(memmove)
++	/*
++	 * Returns
++	 *   a0 - dest
++	 *
++	 * Parameters
++	 *   a0 - Inclusive first byte of dest
++	 *   a1 - Inclusive first byte of src
++	 *   a2 - Length of copy n
++	 *
++	 * Because the return matches the parameter register a0,
++	 * we will not clobber or modify that register.
++	 *
++	 * Note: This currently only works on little-endian.
++	 * To port to big-endian, reverse the direction of shifts
++	 * in the 2 misaligned fixup copy loops.
++	 */
+ 
++	/* Return if nothing to do */
++	beq a0, a1, return_from_memmove
++	beqz a2, return_from_memmove
++
++	/*
++	 * Register Uses
++	 *      Forward Copy: a1 - Index counter of src
++	 *      Reverse Copy: a4 - Index counter of src
++	 *      Forward Copy: t3 - Index counter of dest
++	 *      Reverse Copy: t4 - Index counter of dest
++	 *   Both Copy Modes: t5 - Inclusive first multibyte/aligned of dest
++	 *   Both Copy Modes: t6 - Non-Inclusive last multibyte/aligned of dest
++	 *   Both Copy Modes: t0 - Link / Temporary for load-store
++	 *   Both Copy Modes: t1 - Temporary for load-store
++	 *   Both Copy Modes: t2 - Temporary for load-store
++	 *   Both Copy Modes: a5 - dest to src alignment offset
++	 *   Both Copy Modes: a6 - Shift ammount
++	 *   Both Copy Modes: a7 - Inverse Shift ammount
++	 *   Both Copy Modes: a2 - Alternate breakpoint for unrolled loops
++	 */
++
++	/*
++	 * Solve for some register values now.
++	 * Byte copy does not need t5 or t6.
++	 */
++	mv   t3, a0
++	add  t4, a0, a2
++	add  a4, a1, a2
++
++	/*
++	 * Byte copy if copying less than (2 * SZREG) bytes. This can
++	 * cause problems with the bulk copy implementation and is
++	 * small enough not to bother.
++	 */
++	andi t0, a2, -(2 * SZREG)
++	beqz t0, byte_copy
++
++	/*
++	 * Now solve for t5 and t6.
++	 */
++	andi t5, t3, -SZREG
++	andi t6, t4, -SZREG
++	/*
++	 * If dest(Register t3) rounded down to the nearest naturally
++	 * aligned SZREG address, does not equal dest, then add SZREG
++	 * to find the low-bound of SZREG alignment in the dest memory
++	 * region.  Note that this could overshoot the dest memory
++	 * region if n is less than SZREG.  This is one reason why
++	 * we always byte copy if n is less than SZREG.
++	 * Otherwise, dest is already naturally aligned to SZREG.
++	 */
++	beq  t5, t3, 1f
++		addi t5, t5, SZREG
++	1:
++
++	/*
++	 * If the dest and src are co-aligned to SZREG, then there is
++	 * no need for the full rigmarole of a full misaligned fixup copy.
++	 * Instead, do a simpler co-aligned copy.
++	 */
++	xor  t0, a0, a1
++	andi t1, t0, (SZREG - 1)
++	beqz t1, coaligned_copy
++	/* Fall through to misaligned fixup copy */
++
++misaligned_fixup_copy:
++	bltu a1, a0, misaligned_fixup_copy_reverse
++
++misaligned_fixup_copy_forward:
++	jal  t0, byte_copy_until_aligned_forward
++
++	andi a5, a1, (SZREG - 1) /* Find the alignment offset of src (a1) */
++	slli a6, a5, 3 /* Multiply by 8 to convert that to bits to shift */
++	sub  a5, a1, t3 /* Find the difference between src and dest */
++	andi a1, a1, -SZREG /* Align the src pointer */
++	addi a2, t6, SZREG /* The other breakpoint for the unrolled loop*/
++
++	/*
++	 * Compute The Inverse Shift
++	 * a7 = XLEN - a6 = XLEN + -a6
++	 * 2s complement negation to find the negative: -a6 = ~a6 + 1
++	 * Add that to XLEN.  XLEN = SZREG * 8.
++	 */
++	not  a7, a6
++	addi a7, a7, (SZREG * 8 + 1)
++
++	/*
++	 * Fix Misalignment Copy Loop - Forward
++	 * load_val0 = load_ptr[0];
++	 * do {
++	 * 	load_val1 = load_ptr[1];
++	 * 	store_ptr += 2;
++	 * 	store_ptr[0 - 2] = (load_val0 >> {a6}) | (load_val1 << {a7});
++	 *
++	 * 	if (store_ptr == {a2})
++	 * 		break;
++	 *
++	 * 	load_val0 = load_ptr[2];
++	 * 	load_ptr += 2;
++	 * 	store_ptr[1 - 2] = (load_val1 >> {a6}) | (load_val0 << {a7});
++	 *
++	 * } while (store_ptr != store_ptr_end);
++	 * store_ptr = store_ptr_end;
++	 */
++
++	REG_L t0, (0 * SZREG)(a1)
++	1:
++	REG_L t1, (1 * SZREG)(a1)
++	addi  t3, t3, (2 * SZREG)
++	srl   t0, t0, a6
++	sll   t2, t1, a7
++	or    t2, t0, t2
++	REG_S t2, ((0 * SZREG) - (2 * SZREG))(t3)
++
++	beq   t3, a2, 2f
++
++	REG_L t0, (2 * SZREG)(a1)
++	addi  a1, a1, (2 * SZREG)
++	srl   t1, t1, a6
++	sll   t2, t0, a7
++	or    t2, t1, t2
++	REG_S t2, ((1 * SZREG) - (2 * SZREG))(t3)
++
++	bne   t3, t6, 1b
++	2:
++	mv    t3, t6 /* Fix the dest pointer in case the loop was broken */
++
++	add  a1, t3, a5 /* Restore the src pointer */
++	j byte_copy_forward /* Copy any remaining bytes */
++
++misaligned_fixup_copy_reverse:
++	jal  t0, byte_copy_until_aligned_reverse
++
++	andi a5, a4, (SZREG - 1) /* Find the alignment offset of src (a4) */
++	slli a6, a5, 3 /* Multiply by 8 to convert that to bits to shift */
++	sub  a5, a4, t4 /* Find the difference between src and dest */
++	andi a4, a4, -SZREG /* Align the src pointer */
++	addi a2, t5, -SZREG /* The other breakpoint for the unrolled loop*/
++
++	/*
++	 * Compute The Inverse Shift
++	 * a7 = XLEN - a6 = XLEN + -a6
++	 * 2s complement negation to find the negative: -a6 = ~a6 + 1
++	 * Add that to XLEN.  XLEN = SZREG * 8.
++	 */
++	not  a7, a6
++	addi a7, a7, (SZREG * 8 + 1)
++
++	/*
++	 * Fix Misalignment Copy Loop - Reverse
++	 * load_val1 = load_ptr[0];
++	 * do {
++	 * 	load_val0 = load_ptr[-1];
++	 * 	store_ptr -= 2;
++	 * 	store_ptr[1] = (load_val0 >> {a6}) | (load_val1 << {a7});
++	 *
++	 * 	if (store_ptr == {a2})
++	 * 		break;
++	 *
++	 * 	load_val1 = load_ptr[-2];
++	 * 	load_ptr -= 2;
++	 * 	store_ptr[0] = (load_val1 >> {a6}) | (load_val0 << {a7});
++	 *
++	 * } while (store_ptr != store_ptr_end);
++	 * store_ptr = store_ptr_end;
++	 */
++
++	REG_L t1, ( 0 * SZREG)(a4)
++	1:
++	REG_L t0, (-1 * SZREG)(a4)
++	addi  t4, t4, (-2 * SZREG)
++	sll   t1, t1, a7
++	srl   t2, t0, a6
++	or    t2, t1, t2
++	REG_S t2, ( 1 * SZREG)(t4)
++
++	beq   t4, a2, 2f
++
++	REG_L t1, (-2 * SZREG)(a4)
++	addi  a4, a4, (-2 * SZREG)
++	sll   t0, t0, a7
++	srl   t2, t1, a6
++	or    t2, t0, t2
++	REG_S t2, ( 0 * SZREG)(t4)
++
++	bne   t4, t5, 1b
++	2:
++	mv    t4, t5 /* Fix the dest pointer in case the loop was broken */
++
++	add  a4, t4, a5 /* Restore the src pointer */
++	j byte_copy_reverse /* Copy any remaining bytes */
++
++/*
++ * Simple copy loops for SZREG co-aligned memory locations.
++ * These also make calls to do byte copies for any unaligned
++ * data at their terminations.
++ */
++coaligned_copy:
++	bltu a1, a0, coaligned_copy_reverse
++
++coaligned_copy_forward:
++	jal t0, byte_copy_until_aligned_forward
++
++	1:
++	REG_L t1, ( 0 * SZREG)(a1)
++	addi  a1, a1, SZREG
++	addi  t3, t3, SZREG
++	REG_S t1, (-1 * SZREG)(t3)
++	bne   t3, t6, 1b
++
++	j byte_copy_forward /* Copy any remaining bytes */
++
++coaligned_copy_reverse:
++	jal t0, byte_copy_until_aligned_reverse
++
++	1:
++	REG_L t1, (-1 * SZREG)(a4)
++	addi  a4, a4, -SZREG
++	addi  t4, t4, -SZREG
++	REG_S t1, ( 0 * SZREG)(t4)
++	bne   t4, t5, 1b
++
++	j byte_copy_reverse /* Copy any remaining bytes */
++
++/*
++ * These are basically sub-functions within the function.  They
++ * are used to byte copy until the dest pointer is in alignment.
++ * At which point, a bulk copy method can be used by the
++ * calling code.  These work on the same registers as the bulk
++ * copy loops.  Therefore, the register values can be picked
++ * up from where they were left and we avoid code duplication
++ * without any overhead except the call in and return jumps.
++ */
++byte_copy_until_aligned_forward:
++	beq  t3, t5, 2f
++	1:
++	lb   t1,  0(a1)
++	addi a1, a1, 1
++	addi t3, t3, 1
++	sb   t1, -1(t3)
++	bne  t3, t5, 1b
++	2:
++	jalr zero, 0x0(t0) /* Return to multibyte copy loop */
++
++byte_copy_until_aligned_reverse:
++	beq  t4, t6, 2f
++	1:
++	lb   t1, -1(a4)
++	addi a4, a4, -1
++	addi t4, t4, -1
++	sb   t1,  0(t4)
++	bne  t4, t6, 1b
++	2:
++	jalr zero, 0x0(t0) /* Return to multibyte copy loop */
++
++/*
++ * Simple byte copy loops.
++ * These will byte copy until they reach the end of data to copy.
++ * At that point, they will call to return from memmove.
++ */
+ byte_copy:
+-        lb      t3, 0(a1)
+-        addi    a2, a2, -1
+-        sb      t3, 0(a0)
+-        add     a1, a1, t4
+-        add     a0, a0, t4
+-        bnez    a2, byte_copy
+-
+-exit_memcpy:
+-        move a0, t0
+-        move a1, t1
+-        ret
+-END(__memmove)
++	bltu a1, a0, byte_copy_reverse
++
++byte_copy_forward:
++	beq  t3, t4, 2f
++	1:
++	lb   t1,  0(a1)
++	addi a1, a1, 1
++	addi t3, t3, 1
++	sb   t1, -1(t3)
++	bne  t3, t4, 1b
++	2:
++	ret
++
++byte_copy_reverse:
++	beq  t4, t3, 2f
++	1:
++	lb   t1, -1(a4)
++	addi a4, a4, -1
++	addi t4, t4, -1
++	sb   t1,  0(t4)
++	bne  t4, t3, 1b
++	2:
++
++return_from_memmove:
++	ret
++
++SYM_FUNC_END(memmove)
++SYM_FUNC_END(__memmove)
+diff --git a/arch/um/include/asm/xor.h b/arch/um/include/asm/xor.h
+index f512704a9ec7b..22b39de73c246 100644
+--- a/arch/um/include/asm/xor.h
++++ b/arch/um/include/asm/xor.h
+@@ -4,8 +4,10 @@
+ 
+ #ifdef CONFIG_64BIT
+ #undef CONFIG_X86_32
++#define TT_CPU_INF_XOR_DEFAULT (AVX_SELECT(&xor_block_sse_pf64))
+ #else
+ #define CONFIG_X86_32 1
++#define TT_CPU_INF_XOR_DEFAULT (AVX_SELECT(&xor_block_8regs))
+ #endif
+ 
+ #include <asm/cpufeature.h>
+@@ -16,7 +18,7 @@
+ #undef XOR_SELECT_TEMPLATE
+ /* pick an arbitrary one - measuring isn't possible with inf-cpu */
+ #define XOR_SELECT_TEMPLATE(x)	\
+-	(time_travel_mode == TT_MODE_INFCPU ? &xor_block_8regs : NULL)
++	(time_travel_mode == TT_MODE_INFCPU ? TT_CPU_INF_XOR_DEFAULT : x))
+ #endif
+ 
+ #endif
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 5c2ccb85f2efb..d377d8af5a9f0 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2827,6 +2827,11 @@ config IA32_AOUT
+ config X86_X32
+ 	bool "x32 ABI for 64-bit mode"
+ 	depends on X86_64
++	# llvm-objcopy does not convert x86_64 .note.gnu.property or
++	# compressed debug sections to x86_x32 properly:
++	# https://github.com/ClangBuiltLinux/linux/issues/514
++	# https://github.com/ClangBuiltLinux/linux/issues/1141
++	depends on $(success,$(OBJCOPY) --version | head -n1 | grep -qv llvm)
+ 	help
+ 	  Include code to run binaries for the x32 native 32-bit ABI
+ 	  for 64-bit processors.  An x32 process gets access to the
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 5434030e851e4..aa2daa1f36248 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -281,7 +281,7 @@ static struct extra_reg intel_spr_extra_regs[] __read_mostly = {
+ 	INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0),
+ 	INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1),
+ 	INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
+-	INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff17, FE),
++	INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE),
+ 	INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0x7, FE),
+ 	INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
+ 	EVENT_EXTRA_END
+@@ -5521,7 +5521,11 @@ static void intel_pmu_check_event_constraints(struct event_constraint *event_con
+ 			/* Disabled fixed counters which are not in CPUID */
+ 			c->idxmsk64 &= intel_ctrl;
+ 
+-			if (c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES)
++			/*
++			 * Don't extend the pseudo-encoding to the
++			 * generic counters
++			 */
++			if (!use_fixed_pseudo_encoding(c->code))
+ 				c->idxmsk64 |= (1ULL << num_counters) - 1;
+ 		}
+ 		c->idxmsk64 &=
+diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
+index bab883c0b6fee..66570e95af398 100644
+--- a/arch/x86/include/asm/bug.h
++++ b/arch/x86/include/asm/bug.h
+@@ -77,9 +77,9 @@ do {								\
+  */
+ #define __WARN_FLAGS(flags)					\
+ do {								\
+-	__auto_type f = BUGFLAG_WARNING|(flags);		\
++	__auto_type __flags = BUGFLAG_WARNING|(flags);		\
+ 	instrumentation_begin();				\
+-	_BUG_FLAGS(ASM_UD2, f, ASM_REACHABLE);			\
++	_BUG_FLAGS(ASM_UD2, __flags, ASM_REACHABLE);		\
+ 	instrumentation_end();					\
+ } while (0)
+ 
+diff --git a/arch/x86/include/asm/irq_stack.h b/arch/x86/include/asm/irq_stack.h
+index ae9d40f6c7066..05af249d6bec2 100644
+--- a/arch/x86/include/asm/irq_stack.h
++++ b/arch/x86/include/asm/irq_stack.h
+@@ -99,7 +99,8 @@
+ }
+ 
+ #define ASM_CALL_ARG0							\
+-	"call %P[__func]				\n"
++	"call %P[__func]				\n"		\
++	ASM_REACHABLE
+ 
+ #define ASM_CALL_ARG1							\
+ 	"movq	%[arg1], %%rdi				\n"		\
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 5d3645b325e27..92c119831ed41 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -503,6 +503,7 @@ struct kvm_pmu {
+ 	u64 global_ctrl_mask;
+ 	u64 global_ovf_ctrl_mask;
+ 	u64 reserved_bits;
++	u64 raw_event_mask;
+ 	u8 version;
+ 	struct kvm_pmc gp_counters[INTEL_PMC_MAX_GENERIC];
+ 	struct kvm_pmc fixed_counters[INTEL_PMC_MAX_FIXED];
+diff --git a/arch/x86/include/asm/msi.h b/arch/x86/include/asm/msi.h
+index b85147d75626e..d71c7e8b738d2 100644
+--- a/arch/x86/include/asm/msi.h
++++ b/arch/x86/include/asm/msi.h
+@@ -12,14 +12,17 @@ int pci_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec,
+ /* Structs and defines for the X86 specific MSI message format */
+ 
+ typedef struct x86_msi_data {
+-	u32	vector			:  8,
+-		delivery_mode		:  3,
+-		dest_mode_logical	:  1,
+-		reserved		:  2,
+-		active_low		:  1,
+-		is_level		:  1;
+-
+-	u32	dmar_subhandle;
++	union {
++		struct {
++			u32	vector			:  8,
++				delivery_mode		:  3,
++				dest_mode_logical	:  1,
++				reserved		:  2,
++				active_low		:  1,
++				is_level		:  1;
++		};
++		u32	dmar_subhandle;
++	};
+ } __attribute__ ((packed)) arch_msi_msg_data_t;
+ #define arch_msi_msg_data	x86_msi_data
+ 
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 8fc1b5003713f..a2b6626c681f5 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -241,6 +241,11 @@ struct x86_pmu_capability {
+ #define INTEL_PMC_IDX_FIXED_SLOTS	(INTEL_PMC_IDX_FIXED + 3)
+ #define INTEL_PMC_MSK_FIXED_SLOTS	(1ULL << INTEL_PMC_IDX_FIXED_SLOTS)
+ 
++static inline bool use_fixed_pseudo_encoding(u64 code)
++{
++	return !(code & 0xff);
++}
++
+ /*
+  * We model BTS tracing as another fixed-mode PMC.
+  *
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index d28829403ed08..fc1ab0116f4ee 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -1626,7 +1626,7 @@ static int __xstate_request_perm(u64 permitted, u64 requested)
+ 		return ret;
+ 
+ 	/* Pairs with the READ_ONCE() in xstate_get_group_perm() */
+-	WRITE_ONCE(fpu->perm.__state_perm, requested);
++	WRITE_ONCE(fpu->perm.__state_perm, mask);
+ 	/* Protected by sighand lock */
+ 	fpu->perm.__state_size = ksize;
+ 	fpu->perm.__user_state_size = usize;
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 882213df37130..71f336425e58a 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -1435,8 +1435,12 @@ irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
+ 	hpet_rtc_timer_reinit();
+ 	memset(&curr_time, 0, sizeof(struct rtc_time));
+ 
+-	if (hpet_rtc_flags & (RTC_UIE | RTC_AIE))
+-		mc146818_get_time(&curr_time);
++	if (hpet_rtc_flags & (RTC_UIE | RTC_AIE)) {
++		if (unlikely(mc146818_get_time(&curr_time) < 0)) {
++			pr_err_ratelimited("unable to read current time from RTC\n");
++			return IRQ_HANDLED;
++		}
++	}
+ 
+ 	if (hpet_rtc_flags & RTC_UIE &&
+ 	    curr_time.tm_sec != hpet_prev_update_sec) {
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 9c407a33a7741..af58a5bc17cdc 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -12,10 +12,9 @@ enum insn_type {
+ };
+ 
+ /*
+- * data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
+- * The REX.W cancels the effect of any data16.
++ * cs cs cs xorl %eax, %eax - a single 5 byte instruction that clears %[er]ax
+  */
+-static const u8 xor5rax[] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };
++static const u8 xor5rax[] = { 0x2e, 0x2e, 0x2e, 0x31, 0xc0 };
+ 
+ static void __ref __static_call_transform(void *insn, enum insn_type type, void *func)
+ {
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 5705446c12134..769ba8ffd7002 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -3514,8 +3514,10 @@ static int em_rdpid(struct x86_emulate_ctxt *ctxt)
+ {
+ 	u64 tsc_aux = 0;
+ 
+-	if (ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux))
++	if (!ctxt->ops->guest_has_rdpid(ctxt))
+ 		return emulate_ud(ctxt);
++
++	ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux);
+ 	ctxt->dst.val = tsc_aux;
+ 	return X86EMUL_CONTINUE;
+ }
+diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
+index 68b420289d7ed..fb09cd22cb7f5 100644
+--- a/arch/x86/kvm/kvm_emulate.h
++++ b/arch/x86/kvm/kvm_emulate.h
+@@ -226,6 +226,7 @@ struct x86_emulate_ops {
+ 	bool (*guest_has_long_mode)(struct x86_emulate_ctxt *ctxt);
+ 	bool (*guest_has_movbe)(struct x86_emulate_ctxt *ctxt);
+ 	bool (*guest_has_fxsr)(struct x86_emulate_ctxt *ctxt);
++	bool (*guest_has_rdpid)(struct x86_emulate_ctxt *ctxt);
+ 
+ 	void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);
+ 
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index de955ca58d17c..227b06dd1fea6 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -96,8 +96,7 @@ static void kvm_perf_overflow_intr(struct perf_event *perf_event,
+ 
+ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
+ 				  u64 config, bool exclude_user,
+-				  bool exclude_kernel, bool intr,
+-				  bool in_tx, bool in_tx_cp)
++				  bool exclude_kernel, bool intr)
+ {
+ 	struct perf_event *event;
+ 	struct perf_event_attr attr = {
+@@ -113,16 +112,14 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
+ 
+ 	attr.sample_period = get_sample_period(pmc, pmc->counter);
+ 
+-	if (in_tx)
+-		attr.config |= HSW_IN_TX;
+-	if (in_tx_cp) {
++	if ((attr.config & HSW_IN_TX_CHECKPOINTED) &&
++	    guest_cpuid_is_intel(pmc->vcpu)) {
+ 		/*
+ 		 * HSW_IN_TX_CHECKPOINTED is not supported with nonzero
+ 		 * period. Just clear the sample period so at least
+ 		 * allocating the counter doesn't fail.
+ 		 */
+ 		attr.sample_period = 0;
+-		attr.config |= HSW_IN_TX_CHECKPOINTED;
+ 	}
+ 
+ 	event = perf_event_create_kernel_counter(&attr, -1, current,
+@@ -178,6 +175,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ 	struct kvm *kvm = pmc->vcpu->kvm;
+ 	struct kvm_pmu_event_filter *filter;
+ 	int i;
++	struct kvm_pmu *pmu = vcpu_to_pmu(pmc->vcpu);
+ 	bool allow_event = true;
+ 
+ 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
+@@ -217,7 +215,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ 	}
+ 
+ 	if (type == PERF_TYPE_RAW)
+-		config = eventsel & AMD64_RAW_EVENT_MASK;
++		config = eventsel & pmu->raw_event_mask;
+ 
+ 	if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
+ 		return;
+@@ -228,9 +226,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ 	pmc_reprogram_counter(pmc, type, config,
+ 			      !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
+ 			      !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
+-			      eventsel & ARCH_PERFMON_EVENTSEL_INT,
+-			      (eventsel & HSW_IN_TX),
+-			      (eventsel & HSW_IN_TX_CHECKPOINTED));
++			      eventsel & ARCH_PERFMON_EVENTSEL_INT);
+ }
+ EXPORT_SYMBOL_GPL(reprogram_gp_counter);
+ 
+@@ -266,7 +262,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
+ 			      kvm_x86_ops.pmu_ops->find_fixed_event(idx),
+ 			      !(en_field & 0x2), /* exclude user */
+ 			      !(en_field & 0x1), /* exclude kernel */
+-			      pmi, false, false);
++			      pmi);
+ }
+ EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
+ 
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index 72713324f3bf9..9399550a461cb 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -949,15 +949,10 @@ out:
+ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+ 	u64 entry;
+-	/* ID = 0xff (broadcast), ID > 0xff (reserved) */
+ 	int h_physical_id = kvm_cpu_get_apicid(cpu);
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 
+-	/*
+-	 * Since the host physical APIC id is 8 bits,
+-	 * we can support host APIC ID upto 255.
+-	 */
+-	if (WARN_ON(h_physical_id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
++	if (WARN_ON(h_physical_id & ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
+ 		return;
+ 
+ 	entry = READ_ONCE(*(svm->avic_physical_id_cache));
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 7fadfe3c67e73..39ee0e271d9ca 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -260,12 +260,10 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	/* MSR_EVNTSELn */
+ 	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
+ 	if (pmc) {
+-		if (data == pmc->eventsel)
+-			return 0;
+-		if (!(data & pmu->reserved_bits)) {
++		data &= ~pmu->reserved_bits;
++		if (data != pmc->eventsel)
+ 			reprogram_gp_counter(pmc, data);
+-			return 0;
+-		}
++		return 0;
+ 	}
+ 
+ 	return 1;
+@@ -282,6 +280,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
+ 
+ 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
+ 	pmu->reserved_bits = 0xfffffff000280000ull;
++	pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
+ 	pmu->version = 1;
+ 	/* not applicable to AMD; but clean them to prevent any fall out */
+ 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 1b460e539926b..f7caba158cc74 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -22,6 +22,8 @@
+ #include <asm/svm.h>
+ #include <asm/sev-common.h>
+ 
++#include "kvm_cache_regs.h"
++
+ #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
+ 
+ #define	IOPM_SIZE PAGE_SIZE * 3
+@@ -508,7 +510,7 @@ extern struct kvm_x86_nested_ops svm_nested_ops;
+ #define AVIC_LOGICAL_ID_ENTRY_VALID_BIT			31
+ #define AVIC_LOGICAL_ID_ENTRY_VALID_MASK		(1 << 31)
+ 
+-#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK	(0xFFULL)
++#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK	GENMASK_ULL(11, 0)
+ #define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK	(0xFFFFFFFFFFULL << 12)
+ #define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK		(1ULL << 62)
+ #define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK		(1ULL << 63)
+diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
+index 98aa981c04ec5..8cdc62c74a964 100644
+--- a/arch/x86/kvm/svm/svm_onhyperv.c
++++ b/arch/x86/kvm/svm/svm_onhyperv.c
+@@ -4,7 +4,6 @@
+  */
+ 
+ #include <linux/kvm_host.h>
+-#include "kvm_cache_regs.h"
+ 
+ #include <asm/mshyperv.h>
+ 
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 60563a45f3eb8..ee2452215e93c 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -395,6 +395,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	struct kvm_pmc *pmc;
+ 	u32 msr = msr_info->index;
+ 	u64 data = msr_info->data;
++	u64 reserved_bits;
+ 
+ 	switch (msr) {
+ 	case MSR_CORE_PERF_FIXED_CTR_CTRL:
+@@ -449,7 +450,11 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
+ 			if (data == pmc->eventsel)
+ 				return 0;
+-			if (!(data & pmu->reserved_bits)) {
++			reserved_bits = pmu->reserved_bits;
++			if ((pmc->idx == 2) &&
++			    (pmu->raw_event_mask & HSW_IN_TX_CHECKPOINTED))
++				reserved_bits ^= HSW_IN_TX_CHECKPOINTED;
++			if (!(data & reserved_bits)) {
+ 				reprogram_gp_counter(pmc, data);
+ 				return 0;
+ 			}
+@@ -476,6 +481,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+ 	pmu->version = 0;
+ 	pmu->reserved_bits = 0xffffffff00200000ull;
++	pmu->raw_event_mask = X86_RAW_EVENT_MASK;
+ 
+ 	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
+ 	if (!entry)
+@@ -522,8 +528,10 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ 	entry = kvm_find_cpuid_entry(vcpu, 7, 0);
+ 	if (entry &&
+ 	    (boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
+-	    (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM)))
+-		pmu->reserved_bits ^= HSW_IN_TX|HSW_IN_TX_CHECKPOINTED;
++	    (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM))) {
++		pmu->reserved_bits ^= HSW_IN_TX;
++		pmu->raw_event_mask |= (HSW_IN_TX|HSW_IN_TX_CHECKPOINTED);
++	}
+ 
+ 	bitmap_set(pmu->all_valid_pmc_idx,
+ 		0, pmu->nr_arch_gp_counters);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index ff181a6c1005f..08522041df593 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7466,6 +7466,11 @@ static bool emulator_guest_has_fxsr(struct x86_emulate_ctxt *ctxt)
+ 	return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_FXSR);
+ }
+ 
++static bool emulator_guest_has_rdpid(struct x86_emulate_ctxt *ctxt)
++{
++	return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_RDPID);
++}
++
+ static ulong emulator_read_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg)
+ {
+ 	return kvm_register_read_raw(emul_to_vcpu(ctxt), reg);
+@@ -7548,6 +7553,7 @@ static const struct x86_emulate_ops emulate_ops = {
+ 	.guest_has_long_mode = emulator_guest_has_long_mode,
+ 	.guest_has_movbe     = emulator_guest_has_movbe,
+ 	.guest_has_fxsr      = emulator_guest_has_fxsr,
++	.guest_has_rdpid     = emulator_guest_has_rdpid,
+ 	.set_nmi_mask        = emulator_set_nmi_mask,
+ 	.get_hflags          = emulator_get_hflags,
+ 	.exiting_smm         = emulator_exiting_smm,
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 59ba2968af1b3..511172d70825c 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -854,13 +854,11 @@ done:
+ 			nr_invalidate);
+ }
+ 
+-static bool tlb_is_not_lazy(int cpu)
++static bool tlb_is_not_lazy(int cpu, void *data)
+ {
+ 	return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
+ }
+ 
+-static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
+-
+ DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
+ EXPORT_PER_CPU_SYMBOL(cpu_tlbstate_shared);
+ 
+@@ -889,36 +887,11 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
+ 	 * up on the new contents of what used to be page tables, while
+ 	 * doing a speculative memory access.
+ 	 */
+-	if (info->freed_tables) {
++	if (info->freed_tables)
+ 		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
+-	} else {
+-		/*
+-		 * Although we could have used on_each_cpu_cond_mask(),
+-		 * open-coding it has performance advantages, as it eliminates
+-		 * the need for indirect calls or retpolines. In addition, it
+-		 * allows to use a designated cpumask for evaluating the
+-		 * condition, instead of allocating one.
+-		 *
+-		 * This code works under the assumption that there are no nested
+-		 * TLB flushes, an assumption that is already made in
+-		 * flush_tlb_mm_range().
+-		 *
+-		 * cond_cpumask is logically a stack-local variable, but it is
+-		 * more efficient to have it off the stack and not to allocate
+-		 * it on demand. Preemption is disabled and this code is
+-		 * non-reentrant.
+-		 */
+-		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
+-		int cpu;
+-
+-		cpumask_clear(cond_cpumask);
+-
+-		for_each_cpu(cpu, cpumask) {
+-			if (tlb_is_not_lazy(cpu))
+-				__cpumask_set_cpu(cpu, cond_cpumask);
+-		}
+-		on_each_cpu_mask(cond_cpumask, flush_tlb_func, (void *)info, true);
+-	}
++	else
++		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
++				(void *)info, 1, cpumask);
+ }
+ 
+ void flush_tlb_multi(const struct cpumask *cpumask,
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index 9f2b251e83c56..3822666fb73d5 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -40,7 +40,8 @@ static void msr_save_context(struct saved_context *ctxt)
+ 	struct saved_msr *end = msr + ctxt->saved_msrs.num;
+ 
+ 	while (msr < end) {
+-		msr->valid = !rdmsrl_safe(msr->info.msr_no, &msr->info.reg.q);
++		if (msr->valid)
++			rdmsrl(msr->info.msr_no, msr->info.reg.q);
+ 		msr++;
+ 	}
+ }
+@@ -424,8 +425,10 @@ static int msr_build_context(const u32 *msr_id, const int num)
+ 	}
+ 
+ 	for (i = saved_msrs->num, j = 0; i < total_num; i++, j++) {
++		u64 dummy;
++
+ 		msr_array[i].info.msr_no	= msr_id[j];
+-		msr_array[i].valid		= false;
++		msr_array[i].valid		= !rdmsrl_safe(msr_id[j], &dummy);
+ 		msr_array[i].info.reg.q		= 0;
+ 	}
+ 	saved_msrs->num   = total_num;
+@@ -500,10 +503,24 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
+ 	return ret;
+ }
+ 
++static void pm_save_spec_msr(void)
++{
++	u32 spec_msr_id[] = {
++		MSR_IA32_SPEC_CTRL,
++		MSR_IA32_TSX_CTRL,
++		MSR_TSX_FORCE_ABORT,
++		MSR_IA32_MCU_OPT_CTRL,
++		MSR_AMD64_LS_CFG,
++	};
++
++	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
++}
++
+ static int pm_check_save_msr(void)
+ {
+ 	dmi_check_system(msr_save_dmi_table);
+ 	pm_cpu_check(msr_save_cpu_table);
++	pm_save_spec_msr();
+ 
+ 	return 0;
+ }
+diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
+index 6ff3c887e0b99..b70afdff419ca 100644
+--- a/arch/x86/xen/smp_hvm.c
++++ b/arch/x86/xen/smp_hvm.c
+@@ -19,6 +19,12 @@ static void __init xen_hvm_smp_prepare_boot_cpu(void)
+ 	 */
+ 	xen_vcpu_setup(0);
+ 
++	/*
++	 * Called again in case the kernel boots on vcpu >= MAX_VIRT_CPUS.
++	 * Refer to comments in xen_hvm_init_time_ops().
++	 */
++	xen_hvm_init_time_ops();
++
+ 	/*
+ 	 * The alternative logic (which patches the unlock/lock) runs before
+ 	 * the smp bootup up code is activated. Hence we need to set this up
+diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
+index d9c945ee11008..9ef0a5cca96ee 100644
+--- a/arch/x86/xen/time.c
++++ b/arch/x86/xen/time.c
+@@ -558,6 +558,11 @@ static void xen_hvm_setup_cpu_clockevents(void)
+ 
+ void __init xen_hvm_init_time_ops(void)
+ {
++	static bool hvm_time_initialized;
++
++	if (hvm_time_initialized)
++		return;
++
+ 	/*
+ 	 * vector callback is needed otherwise we cannot receive interrupts
+ 	 * on cpu > 0 and at this point we don't know how many cpus are
+@@ -567,7 +572,22 @@ void __init xen_hvm_init_time_ops(void)
+ 		return;
+ 
+ 	if (!xen_feature(XENFEAT_hvm_safe_pvclock)) {
+-		pr_info("Xen doesn't support pvclock on HVM, disable pv timer");
++		pr_info_once("Xen doesn't support pvclock on HVM, disable pv timer");
++		return;
++	}
++
++	/*
++	 * Only MAX_VIRT_CPUS 'vcpu_info' are embedded inside 'shared_info'.
++	 * The __this_cpu_read(xen_vcpu) is still NULL when Xen HVM guest
++	 * boots on vcpu >= MAX_VIRT_CPUS (e.g., kexec), To access
++	 * __this_cpu_read(xen_vcpu) via xen_clocksource_read() will panic.
++	 *
++	 * The xen_hvm_init_time_ops() should be called again later after
++	 * __this_cpu_read(xen_vcpu) is available.
++	 */
++	if (!__this_cpu_read(xen_vcpu)) {
++		pr_info("Delay xen_init_time_common() as kernel is running on vcpu=%d\n",
++			xen_vcpu_nr(0));
+ 		return;
+ 	}
+ 
+@@ -577,6 +597,8 @@ void __init xen_hvm_init_time_ops(void)
+ 	x86_cpuinit.setup_percpu_clockev = xen_hvm_setup_cpu_clockevents;
+ 
+ 	x86_platform.set_wallclock = xen_set_wallclock;
++
++	hvm_time_initialized = true;
+ }
+ #endif
+ 
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
+index 9bf8bad1dd18a..c33932568aa73 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
+@@ -8,19 +8,19 @@
+ 			reg = <0x00000000 0x08000000>;
+ 			bank-width = <2>;
+ 			device-width = <2>;
+-			partition@0x0 {
++			partition@0 {
+ 				label = "data";
+ 				reg = <0x00000000 0x06000000>;
+ 			};
+-			partition@0x6000000 {
++			partition@6000000 {
+ 				label = "boot loader area";
+ 				reg = <0x06000000 0x00800000>;
+ 			};
+-			partition@0x6800000 {
++			partition@6800000 {
+ 				label = "kernel image";
+ 				reg = <0x06800000 0x017e0000>;
+ 			};
+-			partition@0x7fe0000 {
++			partition@7fe0000 {
+ 				label = "boot environment";
+ 				reg = <0x07fe0000 0x00020000>;
+ 			};
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
+index 40c2f81f7cb66..7bde2ab2d6fb5 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
+@@ -8,19 +8,19 @@
+ 			reg = <0x08000000 0x01000000>;
+ 			bank-width = <2>;
+ 			device-width = <2>;
+-			partition@0x0 {
++			partition@0 {
+ 				label = "boot loader area";
+ 				reg = <0x00000000 0x00400000>;
+ 			};
+-			partition@0x400000 {
++			partition@400000 {
+ 				label = "kernel image";
+ 				reg = <0x00400000 0x00600000>;
+ 			};
+-			partition@0xa00000 {
++			partition@a00000 {
+ 				label = "data";
+ 				reg = <0x00a00000 0x005e0000>;
+ 			};
+-			partition@0xfe0000 {
++			partition@fe0000 {
+ 				label = "boot environment";
+ 				reg = <0x00fe0000 0x00020000>;
+ 			};
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
+index fb8d3a9f33c23..0655b868749a4 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
+@@ -8,11 +8,11 @@
+ 			reg = <0x08000000 0x00400000>;
+ 			bank-width = <2>;
+ 			device-width = <2>;
+-			partition@0x0 {
++			partition@0 {
+ 				label = "boot loader area";
+ 				reg = <0x00000000 0x003f0000>;
+ 			};
+-			partition@0x3f0000 {
++			partition@3f0000 {
+ 				label = "boot environment";
+ 				reg = <0x003f0000 0x00010000>;
+ 			};
+diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c
+index 338c2e50f7591..29e2b0dfba309 100644
+--- a/drivers/ata/sata_dwc_460ex.c
++++ b/drivers/ata/sata_dwc_460ex.c
+@@ -145,7 +145,11 @@ struct sata_dwc_device {
+ #endif
+ };
+ 
+-#define SATA_DWC_QCMD_MAX	32
++/*
++ * Allow one extra special slot for commands and DMA management
++ * to account for libata internal commands.
++ */
++#define SATA_DWC_QCMD_MAX	(ATA_MAX_QUEUE + 1)
+ 
+ struct sata_dwc_device_port {
+ 	struct sata_dwc_device	*hsdev;
+diff --git a/drivers/base/power/trace.c b/drivers/base/power/trace.c
+index 94665037f4a35..72b7a92337b18 100644
+--- a/drivers/base/power/trace.c
++++ b/drivers/base/power/trace.c
+@@ -120,7 +120,11 @@ static unsigned int read_magic_time(void)
+ 	struct rtc_time time;
+ 	unsigned int val;
+ 
+-	mc146818_get_time(&time);
++	if (mc146818_get_time(&time) < 0) {
++		pr_err("Unable to read current time from RTC\n");
++		return 0;
++	}
++
+ 	pr_info("RTC time: %ptRt, date: %ptRd\n", &time, &time);
+ 	val = time.tm_year;				/* 100 years */
+ 	if (val > 100)
+diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
+index f27d5b0f9a0bb..a98bfcf4a5f02 100644
+--- a/drivers/block/drbd/drbd_int.h
++++ b/drivers/block/drbd/drbd_int.h
+@@ -1642,22 +1642,22 @@ struct sib_info {
+ };
+ void drbd_bcast_event(struct drbd_device *device, const struct sib_info *sib);
+ 
+-extern void notify_resource_state(struct sk_buff *,
++extern int notify_resource_state(struct sk_buff *,
+ 				  unsigned int,
+ 				  struct drbd_resource *,
+ 				  struct resource_info *,
+ 				  enum drbd_notification_type);
+-extern void notify_device_state(struct sk_buff *,
++extern int notify_device_state(struct sk_buff *,
+ 				unsigned int,
+ 				struct drbd_device *,
+ 				struct device_info *,
+ 				enum drbd_notification_type);
+-extern void notify_connection_state(struct sk_buff *,
++extern int notify_connection_state(struct sk_buff *,
+ 				    unsigned int,
+ 				    struct drbd_connection *,
+ 				    struct connection_info *,
+ 				    enum drbd_notification_type);
+-extern void notify_peer_device_state(struct sk_buff *,
++extern int notify_peer_device_state(struct sk_buff *,
+ 				     unsigned int,
+ 				     struct drbd_peer_device *,
+ 				     struct peer_device_info *,
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 53ba2dddba6e6..b1676c6329962 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -2791,12 +2791,12 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+ 
+ 	if (init_submitter(device)) {
+ 		err = ERR_NOMEM;
+-		goto out_idr_remove_vol;
++		goto out_idr_remove_from_resource;
+ 	}
+ 
+ 	err = add_disk(disk);
+ 	if (err)
+-		goto out_idr_remove_vol;
++		goto out_idr_remove_from_resource;
+ 
+ 	/* inherit the connection state */
+ 	device->state.conn = first_connection(resource)->cstate;
+@@ -2810,8 +2810,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+ 	drbd_debugfs_device_add(device);
+ 	return NO_ERROR;
+ 
+-out_idr_remove_vol:
+-	idr_remove(&connection->peer_devices, vnr);
+ out_idr_remove_from_resource:
+ 	for_each_connection(connection, resource) {
+ 		peer_device = idr_remove(&connection->peer_devices, vnr);
+diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
+index 44ccf8b4f4b29..69184cf17b6ad 100644
+--- a/drivers/block/drbd/drbd_nl.c
++++ b/drivers/block/drbd/drbd_nl.c
+@@ -4617,7 +4617,7 @@ static int nla_put_notification_header(struct sk_buff *msg,
+ 	return drbd_notification_header_to_skb(msg, &nh, true);
+ }
+ 
+-void notify_resource_state(struct sk_buff *skb,
++int notify_resource_state(struct sk_buff *skb,
+ 			   unsigned int seq,
+ 			   struct drbd_resource *resource,
+ 			   struct resource_info *resource_info,
+@@ -4659,16 +4659,17 @@ void notify_resource_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(resource, "Error %d while broadcasting event. Event seq:%u\n",
+ 			err, seq);
++	return err;
+ }
+ 
+-void notify_device_state(struct sk_buff *skb,
++int notify_device_state(struct sk_buff *skb,
+ 			 unsigned int seq,
+ 			 struct drbd_device *device,
+ 			 struct device_info *device_info,
+@@ -4708,16 +4709,17 @@ void notify_device_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(device, "Error %d while broadcasting event. Event seq:%u\n",
+ 		 err, seq);
++	return err;
+ }
+ 
+-void notify_connection_state(struct sk_buff *skb,
++int notify_connection_state(struct sk_buff *skb,
+ 			     unsigned int seq,
+ 			     struct drbd_connection *connection,
+ 			     struct connection_info *connection_info,
+@@ -4757,16 +4759,17 @@ void notify_connection_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(connection, "Error %d while broadcasting event. Event seq:%u\n",
+ 		 err, seq);
++	return err;
+ }
+ 
+-void notify_peer_device_state(struct sk_buff *skb,
++int notify_peer_device_state(struct sk_buff *skb,
+ 			      unsigned int seq,
+ 			      struct drbd_peer_device *peer_device,
+ 			      struct peer_device_info *peer_device_info,
+@@ -4807,13 +4810,14 @@ void notify_peer_device_state(struct sk_buff *skb,
+ 		if (err && err != -ESRCH)
+ 			goto failed;
+ 	}
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ failed:
+ 	drbd_err(peer_device, "Error %d while broadcasting event. Event seq:%u\n",
+ 		 err, seq);
++	return err;
+ }
+ 
+ void notify_helper(enum drbd_notification_type type,
+@@ -4864,7 +4868,7 @@ fail:
+ 		 err, seq);
+ }
+ 
+-static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
++static int notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
+ {
+ 	struct drbd_genlmsghdr *dh;
+ 	int err;
+@@ -4878,11 +4882,12 @@ static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
+ 	if (nla_put_notification_header(skb, NOTIFY_EXISTS))
+ 		goto nla_put_failure;
+ 	genlmsg_end(skb, dh);
+-	return;
++	return 0;
+ 
+ nla_put_failure:
+ 	nlmsg_free(skb);
+ 	pr_err("Error %d sending event. Event seq:%u\n", err, seq);
++	return err;
+ }
+ 
+ static void free_state_changes(struct list_head *list)
+@@ -4909,6 +4914,7 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+ 	unsigned int seq = cb->args[2];
+ 	unsigned int n;
+ 	enum drbd_notification_type flags = 0;
++	int err = 0;
+ 
+ 	/* There is no need for taking notification_mutex here: it doesn't
+ 	   matter if the initial state events mix with later state chage
+@@ -4917,32 +4923,32 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	cb->args[5]--;
+ 	if (cb->args[5] == 1) {
+-		notify_initial_state_done(skb, seq);
++		err = notify_initial_state_done(skb, seq);
+ 		goto out;
+ 	}
+ 	n = cb->args[4]++;
+ 	if (cb->args[4] < cb->args[3])
+ 		flags |= NOTIFY_CONTINUES;
+ 	if (n < 1) {
+-		notify_resource_state_change(skb, seq, state_change->resource,
++		err = notify_resource_state_change(skb, seq, state_change->resource,
+ 					     NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+ 	n--;
+ 	if (n < state_change->n_connections) {
+-		notify_connection_state_change(skb, seq, &state_change->connections[n],
++		err = notify_connection_state_change(skb, seq, &state_change->connections[n],
+ 					       NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+ 	n -= state_change->n_connections;
+ 	if (n < state_change->n_devices) {
+-		notify_device_state_change(skb, seq, &state_change->devices[n],
++		err = notify_device_state_change(skb, seq, &state_change->devices[n],
+ 					   NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+ 	n -= state_change->n_devices;
+ 	if (n < state_change->n_devices * state_change->n_connections) {
+-		notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
++		err = notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
+ 						NOTIFY_EXISTS | flags);
+ 		goto next;
+ 	}
+@@ -4957,7 +4963,10 @@ next:
+ 		cb->args[4] = 0;
+ 	}
+ out:
+-	return skb->len;
++	if (err)
++		return err;
++	else
++		return skb->len;
+ }
+ 
+ int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
+index b8a27818ab3f8..4ee11aef6672b 100644
+--- a/drivers/block/drbd/drbd_state.c
++++ b/drivers/block/drbd/drbd_state.c
+@@ -1537,7 +1537,7 @@ int drbd_bitmap_io_from_worker(struct drbd_device *device,
+ 	return rv;
+ }
+ 
+-void notify_resource_state_change(struct sk_buff *skb,
++int notify_resource_state_change(struct sk_buff *skb,
+ 				  unsigned int seq,
+ 				  struct drbd_resource_state_change *resource_state_change,
+ 				  enum drbd_notification_type type)
+@@ -1550,10 +1550,10 @@ void notify_resource_state_change(struct sk_buff *skb,
+ 		.res_susp_fen = resource_state_change->susp_fen[NEW],
+ 	};
+ 
+-	notify_resource_state(skb, seq, resource, &resource_info, type);
++	return notify_resource_state(skb, seq, resource, &resource_info, type);
+ }
+ 
+-void notify_connection_state_change(struct sk_buff *skb,
++int notify_connection_state_change(struct sk_buff *skb,
+ 				    unsigned int seq,
+ 				    struct drbd_connection_state_change *connection_state_change,
+ 				    enum drbd_notification_type type)
+@@ -1564,10 +1564,10 @@ void notify_connection_state_change(struct sk_buff *skb,
+ 		.conn_role = connection_state_change->peer_role[NEW],
+ 	};
+ 
+-	notify_connection_state(skb, seq, connection, &connection_info, type);
++	return notify_connection_state(skb, seq, connection, &connection_info, type);
+ }
+ 
+-void notify_device_state_change(struct sk_buff *skb,
++int notify_device_state_change(struct sk_buff *skb,
+ 				unsigned int seq,
+ 				struct drbd_device_state_change *device_state_change,
+ 				enum drbd_notification_type type)
+@@ -1577,10 +1577,10 @@ void notify_device_state_change(struct sk_buff *skb,
+ 		.dev_disk_state = device_state_change->disk_state[NEW],
+ 	};
+ 
+-	notify_device_state(skb, seq, device, &device_info, type);
++	return notify_device_state(skb, seq, device, &device_info, type);
+ }
+ 
+-void notify_peer_device_state_change(struct sk_buff *skb,
++int notify_peer_device_state_change(struct sk_buff *skb,
+ 				     unsigned int seq,
+ 				     struct drbd_peer_device_state_change *p,
+ 				     enum drbd_notification_type type)
+@@ -1594,7 +1594,7 @@ void notify_peer_device_state_change(struct sk_buff *skb,
+ 		.peer_resync_susp_dependency = p->resync_susp_dependency[NEW],
+ 	};
+ 
+-	notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
++	return notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
+ }
+ 
+ static void broadcast_state_change(struct drbd_state_change *state_change)
+@@ -1602,7 +1602,7 @@ static void broadcast_state_change(struct drbd_state_change *state_change)
+ 	struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
+ 	bool resource_state_has_changed;
+ 	unsigned int n_device, n_connection, n_peer_device, n_peer_devices;
+-	void (*last_func)(struct sk_buff *, unsigned int, void *,
++	int (*last_func)(struct sk_buff *, unsigned int, void *,
+ 			  enum drbd_notification_type) = NULL;
+ 	void *last_arg = NULL;
+ 
+diff --git a/drivers/block/drbd/drbd_state_change.h b/drivers/block/drbd/drbd_state_change.h
+index ba80f612d6abb..d5b0479bc9a66 100644
+--- a/drivers/block/drbd/drbd_state_change.h
++++ b/drivers/block/drbd/drbd_state_change.h
+@@ -44,19 +44,19 @@ extern struct drbd_state_change *remember_old_state(struct drbd_resource *, gfp_
+ extern void copy_old_to_new_state_change(struct drbd_state_change *);
+ extern void forget_state_change(struct drbd_state_change *);
+ 
+-extern void notify_resource_state_change(struct sk_buff *,
++extern int notify_resource_state_change(struct sk_buff *,
+ 					 unsigned int,
+ 					 struct drbd_resource_state_change *,
+ 					 enum drbd_notification_type type);
+-extern void notify_connection_state_change(struct sk_buff *,
++extern int notify_connection_state_change(struct sk_buff *,
+ 					   unsigned int,
+ 					   struct drbd_connection_state_change *,
+ 					   enum drbd_notification_type type);
+-extern void notify_device_state_change(struct sk_buff *,
++extern int notify_device_state_change(struct sk_buff *,
+ 				       unsigned int,
+ 				       struct drbd_device_state_change *,
+ 				       enum drbd_notification_type type);
+-extern void notify_peer_device_state_change(struct sk_buff *,
++extern int notify_peer_device_state_change(struct sk_buff *,
+ 					    unsigned int,
+ 					    struct drbd_peer_device_state_change *,
+ 					    enum drbd_notification_type type);
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index f864b17be7e36..35025f283bf65 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -2245,7 +2245,7 @@ static struct virtio_driver virtio_rproc_serial = {
+ 	.remove =	virtcons_remove,
+ };
+ 
+-static int __init init(void)
++static int __init virtio_console_init(void)
+ {
+ 	int err;
+ 
+@@ -2280,7 +2280,7 @@ free:
+ 	return err;
+ }
+ 
+-static void __exit fini(void)
++static void __exit virtio_console_fini(void)
+ {
+ 	reclaim_dma_bufs();
+ 
+@@ -2290,8 +2290,8 @@ static void __exit fini(void)
+ 	class_destroy(pdrvdata.class);
+ 	debugfs_remove_recursive(pdrvdata.debugfs_dir);
+ }
+-module_init(init);
+-module_exit(fini);
++module_init(virtio_console_init);
++module_exit(virtio_console_fini);
+ 
+ MODULE_DESCRIPTION("Virtio console driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index f7b41366666e5..4de098b6b0d4e 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -798,6 +798,15 @@ static unsigned long si5341_output_clk_recalc_rate(struct clk_hw *hw,
+ 	u32 r_divider;
+ 	u8 r[3];
+ 
++	err = regmap_read(output->data->regmap,
++			SI5341_OUT_CONFIG(output), &val);
++	if (err < 0)
++		return err;
++
++	/* If SI5341_OUT_CFG_RDIV_FORCE2 is set, r_divider is 2 */
++	if (val & SI5341_OUT_CFG_RDIV_FORCE2)
++		return parent_rate / 2;
++
+ 	err = regmap_bulk_read(output->data->regmap,
+ 			SI5341_OUT_R_REG(output), r, 3);
+ 	if (err < 0)
+@@ -814,13 +823,6 @@ static unsigned long si5341_output_clk_recalc_rate(struct clk_hw *hw,
+ 	r_divider += 1;
+ 	r_divider <<= 1;
+ 
+-	err = regmap_read(output->data->regmap,
+-			SI5341_OUT_CONFIG(output), &val);
+-	if (err < 0)
+-		return err;
+-
+-	if (val & SI5341_OUT_CFG_RDIV_FORCE2)
+-		r_divider = 2;
+ 
+ 	return parent_rate / r_divider;
+ }
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 2c5db2df9d423..aaf2793ef6380 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -631,6 +631,24 @@ static void clk_core_get_boundaries(struct clk_core *core,
+ 		*max_rate = min(*max_rate, clk_user->max_rate);
+ }
+ 
++static bool clk_core_check_boundaries(struct clk_core *core,
++				      unsigned long min_rate,
++				      unsigned long max_rate)
++{
++	struct clk *user;
++
++	lockdep_assert_held(&prepare_lock);
++
++	if (min_rate > core->max_rate || max_rate < core->min_rate)
++		return false;
++
++	hlist_for_each_entry(user, &core->clks, clks_node)
++		if (min_rate > user->max_rate || max_rate < user->min_rate)
++			return false;
++
++	return true;
++}
++
+ void clk_hw_set_rate_range(struct clk_hw *hw, unsigned long min_rate,
+ 			   unsigned long max_rate)
+ {
+@@ -2347,6 +2365,11 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+ 	clk->min_rate = min;
+ 	clk->max_rate = max;
+ 
++	if (!clk_core_check_boundaries(clk->core, min, max)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	rate = clk_core_get_rate_nolock(clk->core);
+ 	if (rate < min || rate > max) {
+ 		/*
+@@ -2375,6 +2398,7 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+ 		}
+ 	}
+ 
++out:
+ 	if (clk->exclusive_count)
+ 		clk_core_rate_protect(clk->core);
+ 
+diff --git a/drivers/clk/mediatek/clk-mt8192.c b/drivers/clk/mediatek/clk-mt8192.c
+index cbc7c6dbe0f44..79ddb3cc0b98a 100644
+--- a/drivers/clk/mediatek/clk-mt8192.c
++++ b/drivers/clk/mediatek/clk-mt8192.c
+@@ -1236,9 +1236,17 @@ static int clk_mt8192_infra_probe(struct platform_device *pdev)
+ 
+ 	r = mtk_clk_register_gates(node, infra_clks, ARRAY_SIZE(infra_clks), clk_data);
+ 	if (r)
+-		return r;
++		goto free_clk_data;
++
++	r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++	if (r)
++		goto free_clk_data;
++
++	return r;
+ 
+-	return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++free_clk_data:
++	mtk_free_clk_data(clk_data);
++	return r;
+ }
+ 
+ static int clk_mt8192_peri_probe(struct platform_device *pdev)
+@@ -1253,9 +1261,17 @@ static int clk_mt8192_peri_probe(struct platform_device *pdev)
+ 
+ 	r = mtk_clk_register_gates(node, peri_clks, ARRAY_SIZE(peri_clks), clk_data);
+ 	if (r)
+-		return r;
++		goto free_clk_data;
++
++	r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++	if (r)
++		goto free_clk_data;
+ 
+-	return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++	return r;
++
++free_clk_data:
++	mtk_free_clk_data(clk_data);
++	return r;
+ }
+ 
+ static int clk_mt8192_apmixed_probe(struct platform_device *pdev)
+@@ -1271,9 +1287,17 @@ static int clk_mt8192_apmixed_probe(struct platform_device *pdev)
+ 	mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
+ 	r = mtk_clk_register_gates(node, apmixed_clks, ARRAY_SIZE(apmixed_clks), clk_data);
+ 	if (r)
+-		return r;
++		goto free_clk_data;
+ 
+-	return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++	r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++	if (r)
++		goto free_clk_data;
++
++	return r;
++
++free_clk_data:
++	mtk_free_clk_data(clk_data);
++	return r;
+ }
+ 
+ static const struct of_device_id of_match_clk_mt8192[] = {
+diff --git a/drivers/clk/rockchip/clk-rk3568.c b/drivers/clk/rockchip/clk-rk3568.c
+index 69a9e8069a486..604a367bc498a 100644
+--- a/drivers/clk/rockchip/clk-rk3568.c
++++ b/drivers/clk/rockchip/clk-rk3568.c
+@@ -1038,13 +1038,13 @@ static struct rockchip_clk_branch rk3568_clk_branches[] __initdata = {
+ 			RK3568_CLKGATE_CON(20), 8, GFLAGS),
+ 	GATE(HCLK_VOP, "hclk_vop", "hclk_vo", 0,
+ 			RK3568_CLKGATE_CON(20), 9, GFLAGS),
+-	COMPOSITE(DCLK_VOP0, "dclk_vop0", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_PARENT | CLK_SET_RATE_NO_REPARENT,
++	COMPOSITE(DCLK_VOP0, "dclk_vop0", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_NO_REPARENT,
+ 			RK3568_CLKSEL_CON(39), 10, 2, MFLAGS, 0, 8, DFLAGS,
+ 			RK3568_CLKGATE_CON(20), 10, GFLAGS),
+-	COMPOSITE(DCLK_VOP1, "dclk_vop1", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_PARENT | CLK_SET_RATE_NO_REPARENT,
++	COMPOSITE(DCLK_VOP1, "dclk_vop1", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_NO_REPARENT,
+ 			RK3568_CLKSEL_CON(40), 10, 2, MFLAGS, 0, 8, DFLAGS,
+ 			RK3568_CLKGATE_CON(20), 11, GFLAGS),
+-	COMPOSITE(DCLK_VOP2, "dclk_vop2", hpll_vpll_gpll_cpll_p, 0,
++	COMPOSITE(DCLK_VOP2, "dclk_vop2", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_NO_REPARENT,
+ 			RK3568_CLKSEL_CON(41), 10, 2, MFLAGS, 0, 8, DFLAGS,
+ 			RK3568_CLKGATE_CON(20), 12, GFLAGS),
+ 	GATE(CLK_VOP_PWM, "clk_vop_pwm", "xin24m", 0,
+diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
+index 3da33c786d77c..29eafab4353ef 100644
+--- a/drivers/clk/ti/clk.c
++++ b/drivers/clk/ti/clk.c
+@@ -131,7 +131,7 @@ int ti_clk_setup_ll_ops(struct ti_clk_ll_ops *ops)
+ void __init ti_dt_clocks_register(struct ti_dt_clk oclks[])
+ {
+ 	struct ti_dt_clk *c;
+-	struct device_node *node, *parent;
++	struct device_node *node, *parent, *child;
+ 	struct clk *clk;
+ 	struct of_phandle_args clkspec;
+ 	char buf[64];
+@@ -171,10 +171,13 @@ void __init ti_dt_clocks_register(struct ti_dt_clk oclks[])
+ 		node = of_find_node_by_name(NULL, buf);
+ 		if (num_args && compat_mode) {
+ 			parent = node;
+-			node = of_get_child_by_name(parent, "clock");
+-			if (!node)
+-				node = of_get_child_by_name(parent, "clk");
+-			of_node_put(parent);
++			child = of_get_child_by_name(parent, "clock");
++			if (!child)
++				child = of_get_child_by_name(parent, "clk");
++			if (child) {
++				of_node_put(parent);
++				node = child;
++			}
+ 		}
+ 
+ 		clkspec.np = node;
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index db17196266e4b..82d370ae6a4a5 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -303,52 +303,48 @@ static u64 cppc_get_dmi_max_khz(void)
+ 
+ /*
+  * If CPPC lowest_freq and nominal_freq registers are exposed then we can
+- * use them to convert perf to freq and vice versa
+- *
+- * If the perf/freq point lies between Nominal and Lowest, we can treat
+- * (Low perf, Low freq) and (Nom Perf, Nom freq) as 2D co-ordinates of a line
+- * and extrapolate the rest
+- * For perf/freq > Nominal, we use the ratio perf:freq at Nominal for conversion
++ * use them to convert perf to freq and vice versa. The conversion is
++ * extrapolated as an affine function passing by the 2 points:
++ *  - (Low perf, Low freq)
++ *  - (Nominal perf, Nominal freq)
+  */
+ static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu_data,
+ 					     unsigned int perf)
+ {
+ 	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
++	s64 retval, offset = 0;
+ 	static u64 max_khz;
+ 	u64 mul, div;
+ 
+ 	if (caps->lowest_freq && caps->nominal_freq) {
+-		if (perf >= caps->nominal_perf) {
+-			mul = caps->nominal_freq;
+-			div = caps->nominal_perf;
+-		} else {
+-			mul = caps->nominal_freq - caps->lowest_freq;
+-			div = caps->nominal_perf - caps->lowest_perf;
+-		}
++		mul = caps->nominal_freq - caps->lowest_freq;
++		div = caps->nominal_perf - caps->lowest_perf;
++		offset = caps->nominal_freq - div64_u64(caps->nominal_perf * mul, div);
+ 	} else {
+ 		if (!max_khz)
+ 			max_khz = cppc_get_dmi_max_khz();
+ 		mul = max_khz;
+ 		div = caps->highest_perf;
+ 	}
+-	return (u64)perf * mul / div;
++
++	retval = offset + div64_u64(perf * mul, div);
++	if (retval >= 0)
++		return retval;
++	return 0;
+ }
+ 
+ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu_data,
+ 					     unsigned int freq)
+ {
+ 	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
++	s64 retval, offset = 0;
+ 	static u64 max_khz;
+ 	u64  mul, div;
+ 
+ 	if (caps->lowest_freq && caps->nominal_freq) {
+-		if (freq >= caps->nominal_freq) {
+-			mul = caps->nominal_perf;
+-			div = caps->nominal_freq;
+-		} else {
+-			mul = caps->lowest_perf;
+-			div = caps->lowest_freq;
+-		}
++		mul = caps->nominal_perf - caps->lowest_perf;
++		div = caps->nominal_freq - caps->lowest_freq;
++		offset = caps->nominal_perf - div64_u64(caps->nominal_freq * mul, div);
+ 	} else {
+ 		if (!max_khz)
+ 			max_khz = cppc_get_dmi_max_khz();
+@@ -356,7 +352,10 @@ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu_data,
+ 		div = max_khz;
+ 	}
+ 
+-	return (u64)freq * mul / div;
++	retval = offset + div64_u64(freq * mul, div);
++	if (retval >= 0)
++		return retval;
++	return 0;
+ }
+ 
+ static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
+diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
+index 19ac95c0098f0..7f72b3f4cd1ae 100644
+--- a/drivers/dma/sh/shdma-base.c
++++ b/drivers/dma/sh/shdma-base.c
+@@ -115,10 +115,8 @@ static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
+ 		ret = pm_runtime_get(schan->dev);
+ 
+ 		spin_unlock_irq(&schan->chan_lock);
+-		if (ret < 0) {
++		if (ret < 0)
+ 			dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
+-			pm_runtime_put(schan->dev);
+-		}
+ 
+ 		pm_runtime_barrier(schan->dev);
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index dcb0dca651ac7..7ccdd3820343a 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1368,6 +1368,16 @@ static int gpiochip_to_irq(struct gpio_chip *gc, unsigned int offset)
+ {
+ 	struct irq_domain *domain = gc->irq.domain;
+ 
++#ifdef CONFIG_GPIOLIB_IRQCHIP
++	/*
++	 * Avoid race condition with other code, which tries to lookup
++	 * an IRQ before the irqchip has been properly registered,
++	 * i.e. while gpiochip is still being brought up.
++	 */
++	if (!gc->irq.initialized)
++		return -EPROBE_DEFER;
++#endif
++
+ 	if (!gpiochip_irqchip_irq_valid(gc, offset))
+ 		return -ENXIO;
+ 
+@@ -1557,6 +1567,15 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+ 
+ 	acpi_gpiochip_request_interrupts(gc);
+ 
++	/*
++	 * Using barrier() here to prevent compiler from reordering
++	 * gc->irq.initialized before initialization of above
++	 * GPIO chip irq members.
++	 */
++	barrier();
++
++	gc->irq.initialized = true;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 0311d799a010d..8948697890411 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1510,6 +1510,7 @@ int amdgpu_cs_fence_to_handle_ioctl(struct drm_device *dev, void *data,
+ 		return 0;
+ 
+ 	default:
++		dma_fence_put(fence);
+ 		return -EINVAL;
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 1916ec84dd71f..e7845df6cad22 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -266,7 +266,7 @@ static int amdgpu_gfx_kiq_acquire(struct amdgpu_device *adev,
+ 		    * adev->gfx.mec.num_pipe_per_mec
+ 		    * adev->gfx.mec.num_queue_per_pipe;
+ 
+-	while (queue_bit-- >= 0) {
++	while (--queue_bit >= 0) {
+ 		if (test_bit(queue_bit, adev->gfx.mec.queue_bitmap))
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 4fcfc2313b8ce..ea792b6a1c0f7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1286,7 +1286,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
+ 	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
+ 		return;
+ 
+-	dma_resv_lock(bo->base.resv, NULL);
++	if (WARN_ON_ONCE(!dma_resv_trylock(bo->base.resv)))
++		return;
+ 
+ 	r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence);
+ 	if (!WARN_ON(r)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index da11ceba06981..0ce2a7aa400b1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -569,8 +569,8 @@ static void vcn_v3_0_mc_resume_dpg_mode(struct amdgpu_device *adev, int inst_idx
+ 			AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_fw_shared)), 0, indirect);
+ 
+ 	/* VCN global tiling registers */
+-	WREG32_SOC15_DPG_MODE(0, SOC15_DPG_MODE_OFFSET(
+-		UVD, 0, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
++	WREG32_SOC15_DPG_MODE(inst_idx, SOC15_DPG_MODE_OFFSET(
++		UVD, inst_idx, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
+ }
+ 
+ static void vcn_v3_0_disable_static_power_gating(struct amdgpu_device *adev, int inst)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 24ebd61395d84..3aaf10c778d77 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -1840,13 +1840,9 @@ static int kfd_ioctl_svm(struct file *filep, struct kfd_process *p, void *data)
+ 	if (!args->start_addr || !args->size)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&p->mutex);
+-
+ 	r = svm_ioctl(p, args->op, args->start_addr, args->size, args->nattr,
+ 		      args->attrs);
+ 
+-	mutex_unlock(&p->mutex);
+-
+ 	return r;
+ }
+ #else
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index c33d689f29e8e..e574aa32a111d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1563,7 +1563,7 @@ int kfd_create_crat_image_acpi(void **crat_image, size_t *size)
+ 	/* Fetch the CRAT table from ACPI */
+ 	status = acpi_get_table(CRAT_SIGNATURE, 0, &crat_table);
+ 	if (status == AE_NOT_FOUND) {
+-		pr_warn("CRAT table not found\n");
++		pr_info("CRAT table not found\n");
+ 		return -ENODATA;
+ 	} else if (ACPI_FAILURE(status)) {
+ 		const char *err = acpi_format_exception(status);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index b993011cfa64a..9902287111089 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1150,7 +1150,6 @@ static void kfd_process_notifier_release(struct mmu_notifier *mn,
+ 
+ 	cancel_delayed_work_sync(&p->eviction_work);
+ 	cancel_delayed_work_sync(&p->restore_work);
+-	cancel_delayed_work_sync(&p->svms.restore_work);
+ 
+ 	mutex_lock(&p->mutex);
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+index ed4bc5f844ce7..766b3660c8c86 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+@@ -270,15 +270,6 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
+ 		return ret;
+ 	}
+ 
+-	ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
+-			       O_RDWR);
+-	if (ret < 0) {
+-		kfifo_free(&client->fifo);
+-		kfree(client);
+-		return ret;
+-	}
+-	*fd = ret;
+-
+ 	init_waitqueue_head(&client->wait_queue);
+ 	spin_lock_init(&client->lock);
+ 	client->events = 0;
+@@ -288,5 +279,20 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
+ 	list_add_rcu(&client->list, &dev->smi_clients);
+ 	spin_unlock(&dev->smi_lock);
+ 
++	ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
++			       O_RDWR);
++	if (ret < 0) {
++		spin_lock(&dev->smi_lock);
++		list_del_rcu(&client->list);
++		spin_unlock(&dev->smi_lock);
++
++		synchronize_rcu();
++
++		kfifo_free(&client->fifo);
++		kfree(client);
++		return ret;
++	}
++	*fd = ret;
++
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index c0b8f4ff80b8a..13a4558606260 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1589,13 +1589,14 @@ static void svm_range_restore_work(struct work_struct *work)
+ 
+ 	pr_debug("restore svm ranges\n");
+ 
+-	/* kfd_process_notifier_release destroys this worker thread. So during
+-	 * the lifetime of this thread, kfd_process and mm will be valid.
+-	 */
+ 	p = container_of(svms, struct kfd_process, svms);
+-	mm = p->mm;
+-	if (!mm)
++
++	/* Keep mm reference when svm_range_validate_and_map ranges */
++	mm = get_task_mm(p->lead_thread);
++	if (!mm) {
++		pr_debug("svms 0x%p process mm gone\n", svms);
+ 		return;
++	}
+ 
+ 	svm_range_list_lock_and_flush_work(svms, mm);
+ 	mutex_lock(&svms->lock);
+@@ -1649,6 +1650,7 @@ static void svm_range_restore_work(struct work_struct *work)
+ out_reschedule:
+ 	mutex_unlock(&svms->lock);
+ 	mmap_write_unlock(mm);
++	mmput(mm);
+ 
+ 	/* If validation failed, reschedule another attempt */
+ 	if (evicted_ranges) {
+@@ -1926,10 +1928,9 @@ svm_range_update_notifier_and_interval_tree(struct mm_struct *mm,
+ }
+ 
+ static void
+-svm_range_handle_list_op(struct svm_range_list *svms, struct svm_range *prange)
++svm_range_handle_list_op(struct svm_range_list *svms, struct svm_range *prange,
++			 struct mm_struct *mm)
+ {
+-	struct mm_struct *mm = prange->work_item.mm;
+-
+ 	switch (prange->work_item.op) {
+ 	case SVM_OP_NULL:
+ 		pr_debug("NULL OP 0x%p prange 0x%p [0x%lx 0x%lx]\n",
+@@ -2007,40 +2008,44 @@ static void svm_range_deferred_list_work(struct work_struct *work)
+ 	struct svm_range_list *svms;
+ 	struct svm_range *prange;
+ 	struct mm_struct *mm;
+-	struct kfd_process *p;
+ 
+ 	svms = container_of(work, struct svm_range_list, deferred_list_work);
+ 	pr_debug("enter svms 0x%p\n", svms);
+ 
+-	p = container_of(svms, struct kfd_process, svms);
+-	/* Avoid mm is gone when inserting mmu notifier */
+-	mm = get_task_mm(p->lead_thread);
+-	if (!mm) {
+-		pr_debug("svms 0x%p process mm gone\n", svms);
+-		return;
+-	}
+-retry:
+-	mmap_write_lock(mm);
+-
+-	/* Checking for the need to drain retry faults must be inside
+-	 * mmap write lock to serialize with munmap notifiers.
+-	 */
+-	if (unlikely(atomic_read(&svms->drain_pagefaults))) {
+-		mmap_write_unlock(mm);
+-		svm_range_drain_retry_fault(svms);
+-		goto retry;
+-	}
+-
+ 	spin_lock(&svms->deferred_list_lock);
+ 	while (!list_empty(&svms->deferred_range_list)) {
+ 		prange = list_first_entry(&svms->deferred_range_list,
+ 					  struct svm_range, deferred_list);
+-		list_del_init(&prange->deferred_list);
+ 		spin_unlock(&svms->deferred_list_lock);
+ 
+ 		pr_debug("prange 0x%p [0x%lx 0x%lx] op %d\n", prange,
+ 			 prange->start, prange->last, prange->work_item.op);
+ 
++		mm = prange->work_item.mm;
++retry:
++		mmap_write_lock(mm);
++
++		/* Checking for the need to drain retry faults must be inside
++		 * mmap write lock to serialize with munmap notifiers.
++		 */
++		if (unlikely(atomic_read(&svms->drain_pagefaults))) {
++			mmap_write_unlock(mm);
++			svm_range_drain_retry_fault(svms);
++			goto retry;
++		}
++
++		/* Remove from deferred_list must be inside mmap write lock, for
++		 * two race cases:
++		 * 1. unmap_from_cpu may change work_item.op and add the range
++		 *    to deferred_list again, cause use after free bug.
++		 * 2. svm_range_list_lock_and_flush_work may hold mmap write
++		 *    lock and continue because deferred_list is empty, but
++		 *    deferred_list work is actually waiting for mmap lock.
++		 */
++		spin_lock(&svms->deferred_list_lock);
++		list_del_init(&prange->deferred_list);
++		spin_unlock(&svms->deferred_list_lock);
++
+ 		mutex_lock(&svms->lock);
+ 		mutex_lock(&prange->migrate_mutex);
+ 		while (!list_empty(&prange->child_list)) {
+@@ -2051,19 +2056,20 @@ retry:
+ 			pr_debug("child prange 0x%p op %d\n", pchild,
+ 				 pchild->work_item.op);
+ 			list_del_init(&pchild->child_list);
+-			svm_range_handle_list_op(svms, pchild);
++			svm_range_handle_list_op(svms, pchild, mm);
+ 		}
+ 		mutex_unlock(&prange->migrate_mutex);
+ 
+-		svm_range_handle_list_op(svms, prange);
++		svm_range_handle_list_op(svms, prange, mm);
+ 		mutex_unlock(&svms->lock);
++		mmap_write_unlock(mm);
++
++		/* Pairs with mmget in svm_range_add_list_work */
++		mmput(mm);
+ 
+ 		spin_lock(&svms->deferred_list_lock);
+ 	}
+ 	spin_unlock(&svms->deferred_list_lock);
+-
+-	mmap_write_unlock(mm);
+-	mmput(mm);
+ 	pr_debug("exit svms 0x%p\n", svms);
+ }
+ 
+@@ -2081,6 +2087,9 @@ svm_range_add_list_work(struct svm_range_list *svms, struct svm_range *prange,
+ 			prange->work_item.op = op;
+ 	} else {
+ 		prange->work_item.op = op;
++
++		/* Pairs with mmput in deferred_list_work */
++		mmget(mm);
+ 		prange->work_item.mm = mm;
+ 		list_add_tail(&prange->deferred_list,
+ 			      &prange->svms->deferred_range_list);
+@@ -2769,6 +2778,8 @@ void svm_range_list_fini(struct kfd_process *p)
+ 
+ 	pr_debug("pasid 0x%x svms 0x%p\n", p->pasid, &p->svms);
+ 
++	cancel_delayed_work_sync(&p->svms.restore_work);
++
+ 	/* Ensure list work is finished before process is destroyed */
+ 	flush_work(&p->svms.deferred_list_work);
+ 
+@@ -2779,7 +2790,6 @@ void svm_range_list_fini(struct kfd_process *p)
+ 	atomic_inc(&p->svms.drain_pagefaults);
+ 	svm_range_drain_retry_fault(&p->svms);
+ 
+-
+ 	list_for_each_entry_safe(prange, next, &p->svms.list, list) {
+ 		svm_range_unlink(prange);
+ 		svm_range_remove_notifier(prange);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7cadb9e81d9d5..4cb807b7b0d69 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3924,7 +3924,7 @@ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *cap
+ 				 max - min);
+ }
+ 
+-static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
++static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+ 					 int bl_idx,
+ 					 u32 user_brightness)
+ {
+@@ -3955,7 +3955,8 @@ static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+ 			DRM_DEBUG("DM: Failed to update backlight on eDP[%d]\n", bl_idx);
+ 	}
+ 
+-	return rc ? 0 : 1;
++	if (rc)
++		dm->actual_brightness[bl_idx] = user_brightness;
+ }
+ 
+ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
+@@ -9804,7 +9805,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 	/* restore the backlight level */
+ 	for (i = 0; i < dm->num_of_edps; i++) {
+ 		if (dm->backlight_dev[i] &&
+-		    (amdgpu_dm_backlight_get_level(dm, i) != dm->brightness[i]))
++		    (dm->actual_brightness[i] != dm->brightness[i]))
+ 			amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
+ 	}
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 37e61a88d49e2..db85b7c3d0520 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -540,6 +540,12 @@ struct amdgpu_display_manager {
+ 	 * cached backlight values.
+ 	 */
+ 	u32 brightness[AMDGPU_DM_MAX_NUM_EDP];
++	/**
++	 * @actual_brightness:
++	 *
++	 * last successfully applied backlight values.
++	 */
++	u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
+ };
+ 
+ enum dsc_clock_force_state {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index f4e829ec8e108..ab58bcb116770 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -227,8 +227,10 @@ static ssize_t dp_link_settings_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -389,8 +391,10 @@ static ssize_t dp_phy_settings_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user((*(rd_buf + result)), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -1317,8 +1321,10 @@ static ssize_t dp_dsc_clock_en_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -1334,8 +1340,10 @@ static ssize_t dp_dsc_clock_en_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -1504,8 +1512,10 @@ static ssize_t dp_dsc_slice_width_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -1521,8 +1531,10 @@ static ssize_t dp_dsc_slice_width_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -1689,8 +1701,10 @@ static ssize_t dp_dsc_slice_height_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -1706,8 +1720,10 @@ static ssize_t dp_dsc_slice_height_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -1870,8 +1886,10 @@ static ssize_t dp_dsc_bits_per_pixel_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -1887,8 +1905,10 @@ static ssize_t dp_dsc_bits_per_pixel_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -2046,8 +2066,10 @@ static ssize_t dp_dsc_pic_width_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -2063,8 +2085,10 @@ static ssize_t dp_dsc_pic_width_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -2103,8 +2127,10 @@ static ssize_t dp_dsc_pic_height_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -2120,8 +2146,10 @@ static ssize_t dp_dsc_pic_height_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -2175,8 +2203,10 @@ static ssize_t dp_dsc_chunk_size_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -2192,8 +2222,10 @@ static ssize_t dp_dsc_chunk_size_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -2247,8 +2279,10 @@ static ssize_t dp_dsc_slice_bpg_offset_read(struct file *f, char __user *buf,
+ 				break;
+ 	}
+ 
+-	if (!pipe_ctx)
++	if (!pipe_ctx) {
++		kfree(rd_buf);
+ 		return -ENXIO;
++	}
+ 
+ 	dsc = pipe_ctx->stream_res.dsc;
+ 	if (dsc)
+@@ -2264,8 +2298,10 @@ static ssize_t dp_dsc_slice_bpg_offset_read(struct file *f, char __user *buf,
+ 			break;
+ 
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 
+ 		buf += 1;
+ 		size -= 1;
+@@ -3255,8 +3291,10 @@ static ssize_t dcc_en_bits_read(
+ 	dc->hwss.get_dcc_en_bits(dc, dcc_en_bits);
+ 
+ 	rd_buf = kcalloc(rd_buf_size, sizeof(char), GFP_KERNEL);
+-	if (!rd_buf)
++	if (!rd_buf) {
++		kfree(dcc_en_bits);
+ 		return -ENOMEM;
++	}
+ 
+ 	for (i = 0; i < num_pipes; i++)
+ 		offset += snprintf(rd_buf + offset, rd_buf_size - offset,
+@@ -3269,8 +3307,10 @@ static ssize_t dcc_en_bits_read(
+ 		if (*pos >= rd_buf_size)
+ 			break;
+ 		r = put_user(*(rd_buf + result), buf);
+-		if (r)
++		if (r) {
++			kfree(rd_buf);
+ 			return r; /* r = -EFAULT */
++		}
+ 		buf += 1;
+ 		size -= 1;
+ 		*pos += 1;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index c022e56f94596..90962fb91916f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -74,10 +74,8 @@ bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream)
+ 
+ 	link = stream->link;
+ 
+-	psr_config.psr_version = link->dpcd_caps.psr_caps.psr_version;
+-
+-	if (psr_config.psr_version > 0) {
+-		psr_config.psr_exit_link_training_required = 0x1;
++	if (link->psr_settings.psr_version != DC_PSR_VERSION_UNSUPPORTED) {
++		psr_config.psr_version = link->psr_settings.psr_version;
+ 		psr_config.psr_frame_capture_indication_req = 0;
+ 		psr_config.psr_rfb_setup_time = 0x37;
+ 		psr_config.psr_sdp_transmit_line_num_deadline = 0x20;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 135ea1c422f2c..f46aa7f8c35d0 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -2124,21 +2124,26 @@ static enum link_training_result dp_perform_8b_10b_link_training(
+ 				repeater_id--) {
+ 			status = perform_clock_recovery_sequence(link, lt_settings, repeater_id);
+ 
+-			if (status != LINK_TRAINING_SUCCESS)
++			if (status != LINK_TRAINING_SUCCESS) {
++				repeater_training_done(link, repeater_id);
+ 				break;
++			}
+ 
+ 			status = perform_channel_equalization_sequence(link,
+ 					lt_settings,
+ 					repeater_id);
+ 
++			repeater_training_done(link, repeater_id);
++
+ 			if (status != LINK_TRAINING_SUCCESS)
+ 				break;
+ 
+-			repeater_training_done(link, repeater_id);
++			for (lane = 0; lane < LANE_COUNT_DP_MAX; lane++) {
++				lt_settings->dpcd_lane_settings[lane].raw = 0;
++				lt_settings->hw_lane_settings[lane].VOLTAGE_SWING = 0;
++				lt_settings->hw_lane_settings[lane].PRE_EMPHASIS = 0;
++			}
+ 		}
+-
+-		for (lane = 0; lane < (uint8_t)lt_settings->link_settings.lane_count; lane++)
+-			lt_settings->dpcd_lane_settings[lane].raw = 0;
+ 	}
+ 
+ 	if (status == LINK_TRAINING_SUCCESS) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 6b066ceab4128..3aa2040d2475b 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1640,6 +1640,9 @@ static bool are_stream_backends_same(
+ 	if (is_timing_changed(stream_a, stream_b))
+ 		return false;
+ 
++	if (stream_a->signal != stream_b->signal)
++		return false;
++
+ 	if (stream_a->dpms_off != stream_b->dpms_off)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index 79313d1ab5d95..d452a0d1777ea 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -874,7 +874,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 		.clock_trace = true,
+ 		.disable_pplib_clock_request = true,
+ 		.min_disp_clk_khz = 100000,
+-		.pipe_split_policy = MPC_SPLIT_DYNAMIC,
++		.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
+ 		.force_single_disp_pipe_split = false,
+ 		.disable_dcc = DCC_ENABLE,
+ 		.vsr_support = true,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index f969ff65f802b..d3eb99b5648e2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -2013,7 +2013,9 @@ bool dcn31_validate_bandwidth(struct dc *dc,
+ 
+ 	BW_VAL_TRACE_COUNT();
+ 
++	DC_FP_START();
+ 	out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
++	DC_FP_END();
+ 
+ 	// Disable fast_validate to set min dcfclk in alculate_wm_and_dlg
+ 	if (pipe_cnt == 0)
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+index 08362d506534b..a68496b3f9296 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+@@ -1045,6 +1045,17 @@ bool amdgpu_dpm_is_baco_supported(struct amdgpu_device *adev)
+ 
+ 	if (!pp_funcs || !pp_funcs->get_asic_baco_capability)
+ 		return false;
++	/* Don't use baco for reset in S3.
++	 * This is a workaround for some platforms
++	 * where entering BACO during suspend
++	 * seems to cause reboots or hangs.
++	 * This might be related to the fact that BACO controls
++	 * power to the whole GPU including devices like audio and USB.
++	 * Powering down/up everything may adversely affect these other
++	 * devices.  Needs more investigation.
++	 */
++	if (adev->in_s3)
++		return false;
+ 
+ 	if (pp_funcs->get_asic_baco_capability(pp_handle, &baco_cap))
+ 		return false;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+index 1f406f21b452f..cf74621f94a75 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+@@ -773,13 +773,13 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetHardMinFclkByFreq,
+ 						hwmgr->display_config->num_display > 3 ?
+-						data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk :
++						(data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk / 100) :
+ 						min_mclk,
+ 						NULL);
+ 
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetHardMinSocclkByFreq,
+-						data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk,
++						data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk / 100,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetHardMinVcn,
+@@ -792,11 +792,11 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetSoftMaxFclkByFreq,
+-						data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk,
++						data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk / 100,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetSoftMaxSocclkByFreq,
+-						data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk,
++						data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk / 100,
+ 						NULL);
+ 		smum_send_msg_to_smc_with_parameter(hwmgr,
+ 						PPSMC_MSG_SetSoftMaxVcn,
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index 6e484d836cfe2..691039aba87f4 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -861,18 +861,19 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 	memcpy(&dsi->mode, adjusted_mode, sizeof(dsi->mode));
+ 	drm_mode_debug_printmodeline(adjusted_mode);
+ 
+-	pm_runtime_get_sync(dev);
++	if (pm_runtime_resume_and_get(dev) < 0)
++		return;
+ 
+ 	if (clk_prepare_enable(dsi->lcdif_clk) < 0)
+-		return;
++		goto runtime_put;
+ 	if (clk_prepare_enable(dsi->core_clk) < 0)
+-		return;
++		goto runtime_put;
+ 
+ 	/* Step 1 from DSI reset-out instructions */
+ 	ret = reset_control_deassert(dsi->rst_pclk);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(dev, "Failed to deassert PCLK: %d\n", ret);
+-		return;
++		goto runtime_put;
+ 	}
+ 
+ 	/* Step 2 from DSI reset-out instructions */
+@@ -882,13 +883,18 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 	ret = reset_control_deassert(dsi->rst_esc);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(dev, "Failed to deassert ESC: %d\n", ret);
+-		return;
++		goto runtime_put;
+ 	}
+ 	ret = reset_control_deassert(dsi->rst_byte);
+ 	if (ret < 0) {
+ 		DRM_DEV_ERROR(dev, "Failed to deassert BYTE: %d\n", ret);
+-		return;
++		goto runtime_put;
+ 	}
++
++	return;
++
++runtime_put:
++	pm_runtime_put_sync(dev);
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index b8f5419e514ae..83e5c115e7547 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -212,9 +212,7 @@ static const struct edid_quirk {
+ 
+ 	/* Windows Mixed Reality Headsets */
+ 	EDID_QUIRK('A', 'C', 'R', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('H', 'P', 'N', 0x3515, EDID_QUIRK_NON_DESKTOP),
+ 	EDID_QUIRK('L', 'E', 'N', 0x0408, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('L', 'E', 'N', 0xb800, EDID_QUIRK_NON_DESKTOP),
+ 	EDID_QUIRK('F', 'U', 'J', 0x1970, EDID_QUIRK_NON_DESKTOP),
+ 	EDID_QUIRK('D', 'E', 'L', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+ 	EDID_QUIRK('S', 'E', 'C', 0x144a, EDID_QUIRK_NON_DESKTOP),
+@@ -5327,17 +5325,13 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ 	info->width_mm = edid->width_cm * 10;
+ 	info->height_mm = edid->height_cm * 10;
+ 
+-	info->non_desktop = !!(quirks & EDID_QUIRK_NON_DESKTOP);
+-
+ 	drm_get_monitor_range(connector, edid);
+ 
+-	DRM_DEBUG_KMS("non_desktop set to %d\n", info->non_desktop);
+-
+ 	if (edid->revision < 3)
+-		return quirks;
++		goto out;
+ 
+ 	if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
+-		return quirks;
++		goto out;
+ 
+ 	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
+ 	drm_parse_cea_ext(connector, edid);
+@@ -5358,7 +5352,7 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ 
+ 	/* Only defined for 1.4 with digital displays */
+ 	if (edid->revision < 4)
+-		return quirks;
++		goto out;
+ 
+ 	switch (edid->input & DRM_EDID_DIGITAL_DEPTH_MASK) {
+ 	case DRM_EDID_DIGITAL_DEPTH_6:
+@@ -5395,6 +5389,13 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ 
+ 	drm_update_mso(connector, edid);
+ 
++out:
++	if (quirks & EDID_QUIRK_NON_DESKTOP) {
++		drm_dbg_kms(connector->dev, "Non-desktop display%s\n",
++			    info->non_desktop ? " (redundant quirk)" : "");
++		info->non_desktop = true;
++	}
++
+ 	return quirks;
+ }
+ 
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index b910978d3e480..4e853acfd1e8a 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -180,6 +180,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MicroPC"),
+ 		},
+ 		.driver_data = (void *)&lcd720x1280_rightside_up,
++	}, {	/* GPD Win Max */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "G1619-01"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/*
+ 		 * GPD Pocket, note that the the DMI data is less generic then
+ 		 * it seems, devices with a board-vendor of "AMI Corporation"
+diff --git a/drivers/gpu/drm/imx/dw_hdmi-imx.c b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+index 87428fb23d9ff..a2277a0d6d06f 100644
+--- a/drivers/gpu/drm/imx/dw_hdmi-imx.c
++++ b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+@@ -222,6 +222,7 @@ static int dw_hdmi_imx_probe(struct platform_device *pdev)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	const struct of_device_id *match = of_match_node(dw_hdmi_imx_dt_ids, np);
+ 	struct imx_hdmi *hdmi;
++	int ret;
+ 
+ 	hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL);
+ 	if (!hdmi)
+@@ -243,10 +244,15 @@ static int dw_hdmi_imx_probe(struct platform_device *pdev)
+ 	hdmi->bridge = of_drm_find_bridge(np);
+ 	if (!hdmi->bridge) {
+ 		dev_err(hdmi->dev, "Unable to find bridge\n");
++		dw_hdmi_remove(hdmi->hdmi);
+ 		return -ENODEV;
+ 	}
+ 
+-	return component_add(&pdev->dev, &dw_hdmi_imx_ops);
++	ret = component_add(&pdev->dev, &dw_hdmi_imx_ops);
++	if (ret)
++		dw_hdmi_remove(hdmi->hdmi);
++
++	return ret;
+ }
+ 
+ static int dw_hdmi_imx_remove(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index e5078d03020d9..fb0e951248f68 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -572,6 +572,8 @@ static int imx_ldb_panel_ddc(struct device *dev,
+ 		edidp = of_get_property(child, "edid", &edid_len);
+ 		if (edidp) {
+ 			channel->edid = kmemdup(edidp, edid_len, GFP_KERNEL);
++			if (!channel->edid)
++				return -ENOMEM;
+ 		} else if (!channel->panel) {
+ 			/* fallback to display-timings node */
+ 			ret = of_get_drm_display_mode(child,
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index 06cb1a59b9bcd..63ba2ad846791 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -75,8 +75,10 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
+ 		ret = of_get_drm_display_mode(np, &imxpd->mode,
+ 					      &imxpd->bus_flags,
+ 					      OF_USE_NATIVE_MODE);
+-		if (ret)
++		if (ret) {
++			drm_mode_destroy(connector->dev, mode);
+ 			return ret;
++		}
+ 
+ 		drm_mode_copy(mode, &imxpd->mode);
+ 		mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 0afc3b756f92d..09f8fa111dcc6 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1871,7 +1871,7 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi)
+ 
+ 	/* do not autoenable, will be enabled later */
+ 	ret = devm_request_irq(&pdev->dev, msm_host->irq, dsi_host_irq,
+-			IRQF_TRIGGER_HIGH | IRQF_ONESHOT | IRQF_NO_AUTOEN,
++			IRQF_TRIGGER_HIGH | IRQF_NO_AUTOEN,
+ 			"dsi_isr", msm_host);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to request IRQ%u: %d\n",
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+index e1772211b0a4b..612310d5d4812 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+@@ -216,6 +216,7 @@ gm20b_pmu = {
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+ 	.initmsg = gm20b_pmu_initmsg,
++	.reset = gf100_pmu_reset,
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+index 6bf7fc1bd1e3b..1a6f9c3af5ecd 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+@@ -23,7 +23,7 @@
+  */
+ #include "priv.h"
+ 
+-static void
++void
+ gp102_pmu_reset(struct nvkm_pmu *pmu)
+ {
+ 	struct nvkm_device *device = pmu->subdev.device;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+index ba1583bb618b2..94cfb1791af6e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+@@ -83,6 +83,7 @@ gp10b_pmu = {
+ 	.intr = gt215_pmu_intr,
+ 	.recv = gm20b_pmu_recv,
+ 	.initmsg = gm20b_pmu_initmsg,
++	.reset = gp102_pmu_reset,
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+index bcaade758ff72..21abf31f44420 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+@@ -41,6 +41,7 @@ int gt215_pmu_send(struct nvkm_pmu *, u32[2], u32, u32, u32, u32);
+ 
+ bool gf100_pmu_enabled(struct nvkm_pmu *);
+ void gf100_pmu_reset(struct nvkm_pmu *);
++void gp102_pmu_reset(struct nvkm_pmu *pmu);
+ 
+ void gk110_pmu_pgob(struct nvkm_pmu *, bool);
+ 
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9341.c b/drivers/gpu/drm/panel/panel-ilitek-ili9341.c
+index 2c3378a259b1e..e1542451ef9d0 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9341.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9341.c
+@@ -612,8 +612,10 @@ static int ili9341_dbi_probe(struct spi_device *spi, struct gpio_desc *dc,
+ 	int ret;
+ 
+ 	vcc = devm_regulator_get_optional(dev, "vcc");
+-	if (IS_ERR(vcc))
++	if (IS_ERR(vcc)) {
+ 		dev_err(dev, "get optional vcc failed\n");
++		vcc = NULL;
++	}
+ 
+ 	dbidev = devm_drm_dev_alloc(dev, &ili9341_dbi_driver,
+ 				    struct mipi_dbi_dev, drm);
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index e47ae40a865a8..91bdec3e0ef77 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -798,7 +798,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 
+ 		if (!render->base.perfmon) {
+ 			ret = -ENOENT;
+-			goto fail;
++			goto fail_perfmon;
+ 		}
+ 	}
+ 
+@@ -847,6 +847,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 
+ fail_unreserve:
+ 	mutex_unlock(&v3d->sched_lock);
++fail_perfmon:
+ 	drm_gem_unlock_reservations(last_job->bo,
+ 				    last_job->bo_count, &acquire_ctx);
+ fail:
+@@ -1027,7 +1028,7 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data,
+ 						     args->perfmon_id);
+ 		if (!job->base.perfmon) {
+ 			ret = -ENOENT;
+-			goto fail;
++			goto fail_perfmon;
+ 		}
+ 	}
+ 
+@@ -1056,6 +1057,7 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data,
+ 
+ fail_unreserve:
+ 	mutex_unlock(&v3d->sched_lock);
++fail_perfmon:
+ 	drm_gem_unlock_reservations(clean_job->bo, clean_job->bo_count,
+ 				    &acquire_ctx);
+ fail:
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 2829575fd9b7b..958b002c57ec7 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -380,7 +380,7 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
+ 	 * execute:
+ 	 *
+ 	 *  (a) In the "normal (i.e., not resuming from hibernation)" path,
+-	 *      the full barrier in smp_store_mb() guarantees that the store
++	 *      the full barrier in virt_store_mb() guarantees that the store
+ 	 *      is propagated to all CPUs before the add_channel_work work
+ 	 *      is queued.  In turn, add_channel_work is queued before the
+ 	 *      channel's ring buffer is allocated/initialized and the
+@@ -392,14 +392,14 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
+ 	 *      recv_int_page before retrieving the channel pointer from the
+ 	 *      array of channels.
+ 	 *
+-	 *  (b) In the "resuming from hibernation" path, the smp_store_mb()
++	 *  (b) In the "resuming from hibernation" path, the virt_store_mb()
+ 	 *      guarantees that the store is propagated to all CPUs before
+ 	 *      the VMBus connection is marked as ready for the resume event
+ 	 *      (cf. check_ready_for_resume_event()).  The interrupt handler
+ 	 *      of the VMBus driver and vmbus_chan_sched() can not run before
+ 	 *      vmbus_bus_resume() has completed execution (cf. resume_noirq).
+ 	 */
+-	smp_store_mb(
++	virt_store_mb(
+ 		vmbus_connection.channels[channel->offermsg.child_relid],
+ 		channel);
+ }
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 44bd0b6ff5059..a939ca1a8d544 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2776,10 +2776,15 @@ static void __exit vmbus_exit(void)
+ 	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
+ 		kmsg_dump_unregister(&hv_kmsg_dumper);
+ 		unregister_die_notifier(&hyperv_die_block);
+-		atomic_notifier_chain_unregister(&panic_notifier_list,
+-						 &hyperv_panic_block);
+ 	}
+ 
++	/*
++	 * The panic notifier is always registered, hence we should
++	 * also unconditionally unregister it here as well.
++	 */
++	atomic_notifier_chain_unregister(&panic_notifier_list,
++					 &hyperv_panic_block);
++
+ 	free_page((unsigned long)hv_panic_page);
+ 	unregister_sysctl_table(hv_ctl_table_hdr);
+ 	hv_ctl_table_hdr = NULL;
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 35f0d5e7533d6..1c107d6d03b99 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -2824,6 +2824,7 @@ static int cm_dreq_handler(struct cm_work *work)
+ 	switch (cm_id_priv->id.state) {
+ 	case IB_CM_REP_SENT:
+ 	case IB_CM_DREQ_SENT:
++	case IB_CM_MRA_REP_RCVD:
+ 		ib_cancel_mad(cm_id_priv->msg);
+ 		break;
+ 	case IB_CM_ESTABLISHED:
+@@ -2831,8 +2832,6 @@ static int cm_dreq_handler(struct cm_work *work)
+ 		    cm_id_priv->id.lap_state == IB_CM_MRA_LAP_RCVD)
+ 			ib_cancel_mad(cm_id_priv->msg);
+ 		break;
+-	case IB_CM_MRA_REP_RCVD:
+-		break;
+ 	case IB_CM_TIMEWAIT:
+ 		atomic_long_inc(&work->port->counters[CM_RECV_DUPLICATES]
+ 						     [CM_DREQ_COUNTER]);
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index 876cc78a22cca..7333646021bb8 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -80,6 +80,9 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ 	unsigned long flags;
+ 	struct list_head del_list;
+ 
++	/* Prevent freeing of mm until we are completely finished. */
++	mmgrab(handler->mn.mm);
++
+ 	/* Unregister first so we don't get any more notifications. */
+ 	mmu_notifier_unregister(&handler->mn, handler->mn.mm);
+ 
+@@ -102,6 +105,9 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ 
+ 	do_remove(handler, &del_list);
+ 
++	/* Now the mm may be freed. */
++	mmdrop(handler->mn.mm);
++
+ 	kfree(handler);
+ }
+ 
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 2910d78333130..d40a1460ef971 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -541,8 +541,10 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
+ 		spin_lock_irq(&ent->lock);
+ 		if (ent->disabled)
+ 			goto out;
+-		if (need_delay)
++		if (need_delay) {
+ 			queue_delayed_work(cache->wq, &ent->dwork, 300 * HZ);
++			goto out;
++		}
+ 		remove_cache_mr_locked(ent);
+ 		queue_adjust_cache_locked(ent);
+ 	}
+@@ -630,6 +632,7 @@ static void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ {
+ 	struct mlx5_cache_ent *ent = mr->cache_ent;
+ 
++	WRITE_ONCE(dev->cache.last_add, jiffies);
+ 	spin_lock_irq(&ent->lock);
+ 	list_add_tail(&mr->list, &ent->head);
+ 	ent->available_mrs++;
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index ae50b56e89132..8ef112f883a77 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -3190,7 +3190,11 @@ serr_no_r_lock:
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	rvt_send_complete(sqp, wqe, send_status);
+ 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
+-		int lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
++		int lastwqe;
++
++		spin_lock(&sqp->r_lock);
++		lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
++		spin_unlock(&sqp->r_lock);
+ 
+ 		sqp->s_flags &= ~RVT_S_BUSY;
+ 		spin_unlock_irqrestore(&sqp->s_lock, flags);
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index f5848b351b193..af00714c8056b 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -1558,6 +1558,7 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+ 				dev_info(smmu->dev, "\t0x%016llx\n",
+ 					 (unsigned long long)evt[i]);
+ 
++			cond_resched();
+ 		}
+ 
+ 		/*
+diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
+index 91749654fd490..be60f6f3a265d 100644
+--- a/drivers/iommu/omap-iommu.c
++++ b/drivers/iommu/omap-iommu.c
+@@ -1661,7 +1661,7 @@ static struct iommu_device *omap_iommu_probe_device(struct device *dev)
+ 	num_iommus = of_property_count_elems_of_size(dev->of_node, "iommus",
+ 						     sizeof(phandle));
+ 	if (num_iommus < 0)
+-		return 0;
++		return ERR_PTR(-ENODEV);
+ 
+ 	arch_data = kcalloc(num_iommus + 1, sizeof(*arch_data), GFP_KERNEL);
+ 	if (!arch_data)
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 0cb584d9815b9..fc1bfffc468f3 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3007,18 +3007,12 @@ static int __init allocate_lpi_tables(void)
+ 	return 0;
+ }
+ 
+-static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
++static u64 read_vpend_dirty_clear(void __iomem *vlpi_base)
+ {
+ 	u32 count = 1000000;	/* 1s! */
+ 	bool clean;
+ 	u64 val;
+ 
+-	val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
+-	val &= ~GICR_VPENDBASER_Valid;
+-	val &= ~clr;
+-	val |= set;
+-	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
+-
+ 	do {
+ 		val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
+ 		clean = !(val & GICR_VPENDBASER_Dirty);
+@@ -3029,10 +3023,26 @@ static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
+ 		}
+ 	} while (!clean && count);
+ 
+-	if (unlikely(val & GICR_VPENDBASER_Dirty)) {
++	if (unlikely(!clean))
+ 		pr_err_ratelimited("ITS virtual pending table not cleaning\n");
++
++	return val;
++}
++
++static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
++{
++	u64 val;
++
++	/* Make sure we wait until the RD is done with the initial scan */
++	val = read_vpend_dirty_clear(vlpi_base);
++	val &= ~GICR_VPENDBASER_Valid;
++	val &= ~clr;
++	val |= set;
++	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
++
++	val = read_vpend_dirty_clear(vlpi_base);
++	if (unlikely(val & GICR_VPENDBASER_Dirty))
+ 		val |= GICR_VPENDBASER_PendingLast;
+-	}
+ 
+ 	return val;
+ }
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 86397522e7864..b93dd5b9f83d0 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -206,11 +206,11 @@ static inline void __iomem *gic_dist_base(struct irq_data *d)
+ 	}
+ }
+ 
+-static void gic_do_wait_for_rwp(void __iomem *base)
++static void gic_do_wait_for_rwp(void __iomem *base, u32 bit)
+ {
+ 	u32 count = 1000000;	/* 1s! */
+ 
+-	while (readl_relaxed(base + GICD_CTLR) & GICD_CTLR_RWP) {
++	while (readl_relaxed(base + GICD_CTLR) & bit) {
+ 		count--;
+ 		if (!count) {
+ 			pr_err_ratelimited("RWP timeout, gone fishing\n");
+@@ -224,13 +224,13 @@ static void gic_do_wait_for_rwp(void __iomem *base)
+ /* Wait for completion of a distributor change */
+ static void gic_dist_wait_for_rwp(void)
+ {
+-	gic_do_wait_for_rwp(gic_data.dist_base);
++	gic_do_wait_for_rwp(gic_data.dist_base, GICD_CTLR_RWP);
+ }
+ 
+ /* Wait for completion of a redistributor change */
+ static void gic_redist_wait_for_rwp(void)
+ {
+-	gic_do_wait_for_rwp(gic_data_rdist_rd_base());
++	gic_do_wait_for_rwp(gic_data_rdist_rd_base(), GICR_CTLR_RWP);
+ }
+ 
+ #ifdef CONFIG_ARM64
+@@ -1466,6 +1466,12 @@ static int gic_irq_domain_translate(struct irq_domain *d,
+ 		if(fwspec->param_count != 2)
+ 			return -EINVAL;
+ 
++		if (fwspec->param[0] < 16) {
++			pr_err(FW_BUG "Illegal GSI%d translation request\n",
++			       fwspec->param[0]);
++			return -EINVAL;
++		}
++
+ 		*hwirq = fwspec->param[0];
+ 		*type = fwspec->param[1];
+ 
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index b8bb46c65a97a..3dbac62c932dd 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -1085,6 +1085,12 @@ static int gic_irq_domain_translate(struct irq_domain *d,
+ 		if(fwspec->param_count != 2)
+ 			return -EINVAL;
+ 
++		if (fwspec->param[0] < 16) {
++			pr_err(FW_BUG "Illegal GSI%d translation request\n",
++			       fwspec->param[0]);
++			return -EINVAL;
++		}
++
+ 		*hwirq = fwspec->param[0];
+ 		*type = fwspec->param[1];
+ 
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 21fe8652b095b..901abd6dea419 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -18,6 +18,7 @@
+ #include <linux/dm-ioctl.h>
+ #include <linux/hdreg.h>
+ #include <linux/compat.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/uaccess.h>
+ #include <linux/ima.h>
+@@ -1788,6 +1789,7 @@ static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
+ 	if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
+ 		return NULL;
+ 
++	cmd = array_index_nospec(cmd, ARRAY_SIZE(_ioctls));
+ 	*ioctl_flags = _ioctls[cmd].flags;
+ 	return _ioctls[cmd].fn;
+ }
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 579ab6183d4d8..dffeb47a9efbc 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -499,8 +499,13 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	if (unlikely(!ti)) {
+ 		int srcu_idx;
+-		struct dm_table *map = dm_get_live_table(md, &srcu_idx);
++		struct dm_table *map;
+ 
++		map = dm_get_live_table(md, &srcu_idx);
++		if (unlikely(!map)) {
++			dm_put_live_table(md, srcu_idx);
++			return BLK_STS_RESOURCE;
++		}
+ 		ti = dm_table_find_target(map, 0);
+ 		dm_put_live_table(md, srcu_idx);
+ 	}
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 5fd3660e07b5c..af12c0accb590 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1587,15 +1587,10 @@ static void dm_submit_bio(struct bio *bio)
+ 	struct dm_table *map;
+ 
+ 	map = dm_get_live_table(md, &srcu_idx);
+-	if (unlikely(!map)) {
+-		DMERR_LIMIT("%s: mapping table unavailable, erroring io",
+-			    dm_device_name(md));
+-		bio_io_error(bio);
+-		goto out;
+-	}
+ 
+-	/* If suspended, queue this IO for later */
+-	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
++	/* If suspended, or map not yet available, queue this IO for later */
++	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
++	    unlikely(!map)) {
+ 		if (bio->bi_opf & REQ_NOWAIT)
+ 			bio_wouldblock_error(bio);
+ 		else if (bio->bi_opf & REQ_RAHEAD)
+diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
+index 9bd626a00de30..03416b4ee0b7a 100644
+--- a/drivers/misc/habanalabs/common/memory.c
++++ b/drivers/misc/habanalabs/common/memory.c
+@@ -1973,16 +1973,15 @@ err_dec_exporting_cnt:
+ static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args)
+ {
+ 	struct hl_device *hdev = hpriv->hdev;
+-	struct hl_ctx *ctx = hpriv->ctx;
+ 	u64 block_handle, device_addr = 0;
++	struct hl_ctx *ctx = hpriv->ctx;
+ 	u32 handle = 0, block_size;
+-	int rc, dmabuf_fd = -EBADF;
++	int rc;
+ 
+ 	switch (args->in.op) {
+ 	case HL_MEM_OP_ALLOC:
+ 		if (args->in.alloc.mem_size == 0) {
+-			dev_err(hdev->dev,
+-				"alloc size must be larger than 0\n");
++			dev_err(hdev->dev, "alloc size must be larger than 0\n");
+ 			rc = -EINVAL;
+ 			goto out;
+ 		}
+@@ -2003,15 +2002,14 @@ static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args)
+ 
+ 	case HL_MEM_OP_MAP:
+ 		if (args->in.flags & HL_MEM_USERPTR) {
+-			device_addr = args->in.map_host.host_virt_addr;
+-			rc = 0;
++			dev_err(hdev->dev, "Failed to map host memory when MMU is disabled\n");
++			rc = -EPERM;
+ 		} else {
+-			rc = get_paddr_from_handle(ctx, &args->in,
+-							&device_addr);
++			rc = get_paddr_from_handle(ctx, &args->in, &device_addr);
++			memset(args, 0, sizeof(*args));
++			args->out.device_virt_addr = device_addr;
+ 		}
+ 
+-		memset(args, 0, sizeof(*args));
+-		args->out.device_virt_addr = device_addr;
+ 		break;
+ 
+ 	case HL_MEM_OP_UNMAP:
+@@ -2019,20 +2017,14 @@ static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args)
+ 		break;
+ 
+ 	case HL_MEM_OP_MAP_BLOCK:
+-		rc = map_block(hdev, args->in.map_block.block_addr,
+-				&block_handle, &block_size);
++		rc = map_block(hdev, args->in.map_block.block_addr, &block_handle, &block_size);
+ 		args->out.block_handle = block_handle;
+ 		args->out.block_size = block_size;
+ 		break;
+ 
+ 	case HL_MEM_OP_EXPORT_DMABUF_FD:
+-		rc = export_dmabuf_from_addr(ctx,
+-				args->in.export_dmabuf_fd.handle,
+-				args->in.export_dmabuf_fd.mem_size,
+-				args->in.flags,
+-				&dmabuf_fd);
+-		memset(args, 0, sizeof(*args));
+-		args->out.fd = dmabuf_fd;
++		dev_err(hdev->dev, "Failed to export dma-buf object when MMU is disabled\n");
++		rc = -EPERM;
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/misc/habanalabs/common/mmu/mmu_v1.c b/drivers/misc/habanalabs/common/mmu/mmu_v1.c
+index 0f536f79dd9c9..e68e9f71c546a 100644
+--- a/drivers/misc/habanalabs/common/mmu/mmu_v1.c
++++ b/drivers/misc/habanalabs/common/mmu/mmu_v1.c
+@@ -467,7 +467,7 @@ static void hl_mmu_v1_fini(struct hl_device *hdev)
+ {
+ 	/* MMU H/W fini was already done in device hw_fini() */
+ 
+-	if (!ZERO_OR_NULL_PTR(hdev->mmu_priv.hr.mmu_shadow_hop0)) {
++	if (!ZERO_OR_NULL_PTR(hdev->mmu_priv.dr.mmu_shadow_hop0)) {
+ 		kvfree(hdev->mmu_priv.dr.mmu_shadow_hop0);
+ 		gen_pool_destroy(hdev->mmu_priv.dr.mmu_pgt_pool);
+ 
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 92713cd440651..e347358e5f4a8 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1880,6 +1880,31 @@ static inline bool mmc_blk_rq_error(struct mmc_blk_request *brq)
+ 	       brq->data.error || brq->cmd.resp[0] & CMD_ERRORS;
+ }
+ 
++static int mmc_spi_err_check(struct mmc_card *card)
++{
++	u32 status = 0;
++	int err;
++
++	/*
++	 * SPI does not have a TRAN state we have to wait on, instead the
++	 * card is ready again when it no longer holds the line LOW.
++	 * We still have to ensure two things here before we know the write
++	 * was successful:
++	 * 1. The card has not disconnected during busy and we actually read our
++	 * own pull-up, thinking it was still connected, so ensure it
++	 * still responds.
++	 * 2. Check for any error bits, in particular R1_SPI_IDLE to catch a
++	 * just reconnected card after being disconnected during busy.
++	 */
++	err = __mmc_send_status(card, &status, 0);
++	if (err)
++		return err;
++	/* All R1 and R2 bits of SPI are errors in our case */
++	if (status)
++		return -EIO;
++	return 0;
++}
++
+ static int mmc_blk_busy_cb(void *cb_data, bool *busy)
+ {
+ 	struct mmc_blk_busy_data *data = cb_data;
+@@ -1903,9 +1928,16 @@ static int mmc_blk_card_busy(struct mmc_card *card, struct request *req)
+ 	struct mmc_blk_busy_data cb_data;
+ 	int err;
+ 
+-	if (mmc_host_is_spi(card->host) || rq_data_dir(req) == READ)
++	if (rq_data_dir(req) == READ)
+ 		return 0;
+ 
++	if (mmc_host_is_spi(card->host)) {
++		err = mmc_spi_err_check(card);
++		if (err)
++			mqrq->brq.data.bytes_xfered = 0;
++		return err;
++	}
++
+ 	cb_data.card = card;
+ 	cb_data.status = 0;
+ 	err = __mmc_poll_for_busy(card, MMC_BLK_TIMEOUT_MS, &mmc_blk_busy_cb,
+@@ -2344,6 +2376,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
+ 	struct mmc_blk_data *md;
+ 	int devidx, ret;
+ 	char cap_str[10];
++	bool cache_enabled = false;
++	bool fua_enabled = false;
+ 
+ 	devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
+ 	if (devidx < 0) {
+@@ -2425,13 +2459,17 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
+ 			md->flags |= MMC_BLK_CMD23;
+ 	}
+ 
+-	if (mmc_card_mmc(card) &&
+-	    md->flags & MMC_BLK_CMD23 &&
++	if (md->flags & MMC_BLK_CMD23 &&
+ 	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
+ 	     card->ext_csd.rel_sectors)) {
+ 		md->flags |= MMC_BLK_REL_WR;
+-		blk_queue_write_cache(md->queue.queue, true, true);
++		fua_enabled = true;
++		cache_enabled = true;
+ 	}
++	if (mmc_cache_enabled(card->host))
++		cache_enabled  = true;
++
++	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
+ 
+ 	string_get_size((u64)size, 512, STRING_UNITS_2,
+ 			cap_str, sizeof(cap_str));
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index a75d3dd34d18c..4cceb9bab0361 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -62,8 +62,8 @@ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ 	 * excepted the last element which has no constraint on idmasize
+ 	 */
+ 	for_each_sg(data->sg, sg, data->sg_len - 1, i) {
+-		if (!IS_ALIGNED(data->sg->offset, sizeof(u32)) ||
+-		    !IS_ALIGNED(data->sg->length, SDMMC_IDMA_BURST)) {
++		if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
++		    !IS_ALIGNED(sg->length, SDMMC_IDMA_BURST)) {
+ 			dev_err(mmc_dev(host->mmc),
+ 				"unaligned scatterlist: ofst:%x length:%d\n",
+ 				data->sg->offset, data->sg->length);
+@@ -71,7 +71,7 @@ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ 		}
+ 	}
+ 
+-	if (!IS_ALIGNED(data->sg->offset, sizeof(u32))) {
++	if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+ 		dev_err(mmc_dev(host->mmc),
+ 			"unaligned last scatterlist: ofst:%x length:%d\n",
+ 			data->sg->offset, data->sg->length);
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index f5b2684ad8058..ae689bf54686f 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -382,10 +382,10 @@ static void renesas_sdhi_hs400_complete(struct mmc_host *mmc)
+ 			SH_MOBILE_SDHI_SCC_TMPPORT2_HS400OSEL) |
+ 			sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_TMPPORT2));
+ 
+-	/* Set the sampling clock selection range of HS400 mode */
+ 	sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_DTCNTL,
+ 		       SH_MOBILE_SDHI_SCC_DTCNTL_TAPEN |
+-		       0x4 << SH_MOBILE_SDHI_SCC_DTCNTL_TAPNUM_SHIFT);
++		       sd_scc_read32(host, priv,
++				     SH_MOBILE_SDHI_SCC_DTCNTL));
+ 
+ 	/* Avoid bad TAP */
+ 	if (bad_taps & BIT(priv->tap_set)) {
+diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
+index 666cee4c7f7c6..08e838400b526 100644
+--- a/drivers/mmc/host/sdhci-xenon.c
++++ b/drivers/mmc/host/sdhci-xenon.c
+@@ -241,16 +241,6 @@ static void xenon_voltage_switch(struct sdhci_host *host)
+ {
+ 	/* Wait for 5ms after set 1.8V signal enable bit */
+ 	usleep_range(5000, 5500);
+-
+-	/*
+-	 * For some reason the controller's Host Control2 register reports
+-	 * the bit representing 1.8V signaling as 0 when read after it was
+-	 * written as 1. Subsequent read reports 1.
+-	 *
+-	 * Since this may cause some issues, do an empty read of the Host
+-	 * Control2 register here to circumvent this.
+-	 */
+-	sdhci_readw(host, SDHCI_HOST_CONTROL2);
+ }
+ 
+ static unsigned int xenon_get_max_clock(struct sdhci_host *host)
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_fd.c b/drivers/net/can/usb/etas_es58x/es58x_fd.c
+index 4f0cae29f4d8b..b71d1530638b7 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_fd.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_fd.c
+@@ -171,12 +171,11 @@ static int es58x_fd_rx_event_msg(struct net_device *netdev,
+ 	const struct es58x_fd_rx_event_msg *rx_event_msg;
+ 	int ret;
+ 
++	rx_event_msg = &es58x_fd_urb_cmd->rx_event_msg;
+ 	ret = es58x_check_msg_len(es58x_dev->dev, *rx_event_msg, msg_len);
+ 	if (ret)
+ 		return ret;
+ 
+-	rx_event_msg = &es58x_fd_urb_cmd->rx_event_msg;
+-
+ 	return es58x_rx_err_msg(netdev, rx_event_msg->error_code,
+ 				rx_event_msg->event_code,
+ 				get_unaligned_le64(&rx_event_msg->timestamp));
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index fab8dd73fa84c..fdbcd48d991df 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -3196,6 +3196,7 @@ static int bnxt_alloc_tx_rings(struct bnxt *bp)
+ 		}
+ 		qidx = bp->tc_to_qidx[j];
+ 		ring->queue_id = bp->q_info[qidx].queue_id;
++		spin_lock_init(&txr->xdp_tx_lock);
+ 		if (i < bp->tx_nr_rings_xdp)
+ 			continue;
+ 		if (i % bp->tx_nr_rings_per_tc == (bp->tx_nr_rings_per_tc - 1))
+@@ -10274,6 +10275,12 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ 	if (irq_re_init)
+ 		udp_tunnel_nic_reset_ntf(bp->dev);
+ 
++	if (bp->tx_nr_rings_xdp < num_possible_cpus()) {
++		if (!static_key_enabled(&bnxt_xdp_locking_key))
++			static_branch_enable(&bnxt_xdp_locking_key);
++	} else if (static_key_enabled(&bnxt_xdp_locking_key)) {
++		static_branch_disable(&bnxt_xdp_locking_key);
++	}
+ 	set_bit(BNXT_STATE_OPEN, &bp->state);
+ 	bnxt_enable_int(bp);
+ 	/* Enable TX queues */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 2846d14756671..bbf93310da1f2 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -593,7 +593,8 @@ struct nqe_cn {
+ #define BNXT_MAX_MTU		9500
+ #define BNXT_MAX_PAGE_MODE_MTU	\
+ 	((unsigned int)PAGE_SIZE - VLAN_ETH_HLEN - NET_IP_ALIGN -	\
+-	 XDP_PACKET_HEADROOM)
++	 XDP_PACKET_HEADROOM - \
++	 SKB_DATA_ALIGN((unsigned int)sizeof(struct skb_shared_info)))
+ 
+ #define BNXT_MIN_PKT_SIZE	52
+ 
+@@ -800,6 +801,8 @@ struct bnxt_tx_ring_info {
+ 	u32			dev_state;
+ 
+ 	struct bnxt_ring_struct	tx_ring_struct;
++	/* Synchronize simultaneous xdp_xmit on same ring */
++	spinlock_t		xdp_tx_lock;
+ };
+ 
+ #define BNXT_LEGACY_COAL_CMPL_PARAMS					\
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index f147ad5a65315..a47753a4498e1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2076,9 +2076,7 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ 		}
+ 
+ 		link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+-		if (bp->hwrm_spec_code >= 0x10201)
+-			link_info->req_flow_ctrl =
+-				PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE;
++		link_info->req_flow_ctrl = 0;
+ 	} else {
+ 		/* when transition from auto pause to force pause,
+ 		 * force a link change
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index c8083df5e0ab8..148b58f3468b3 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -20,6 +20,8 @@
+ #include "bnxt.h"
+ #include "bnxt_xdp.h"
+ 
++DEFINE_STATIC_KEY_FALSE(bnxt_xdp_locking_key);
++
+ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
+ 				   struct bnxt_tx_ring_info *txr,
+ 				   dma_addr_t mapping, u32 len)
+@@ -227,11 +229,16 @@ int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
+ 	ring = smp_processor_id() % bp->tx_nr_rings_xdp;
+ 	txr = &bp->tx_ring[ring];
+ 
++	if (READ_ONCE(txr->dev_state) == BNXT_DEV_STATE_CLOSING)
++		return -EINVAL;
++
++	if (static_branch_unlikely(&bnxt_xdp_locking_key))
++		spin_lock(&txr->xdp_tx_lock);
++
+ 	for (i = 0; i < num_frames; i++) {
+ 		struct xdp_frame *xdp = frames[i];
+ 
+-		if (!txr || !bnxt_tx_avail(bp, txr) ||
+-		    !(bp->bnapi[ring]->flags & BNXT_NAPI_FLAG_XDP))
++		if (!bnxt_tx_avail(bp, txr))
+ 			break;
+ 
+ 		mapping = dma_map_single(&pdev->dev, xdp->data, xdp->len,
+@@ -250,6 +257,9 @@ int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
+ 		bnxt_db_write(bp, &txr->tx_db, txr->tx_prod);
+ 	}
+ 
++	if (static_branch_unlikely(&bnxt_xdp_locking_key))
++		spin_unlock(&txr->xdp_tx_lock);
++
+ 	return nxmit;
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+index 0df40c3beb050..067bb5e821f54 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+@@ -10,6 +10,8 @@
+ #ifndef BNXT_XDP_H
+ #define BNXT_XDP_H
+ 
++DECLARE_STATIC_KEY_FALSE(bnxt_xdp_locking_key);
++
+ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
+ 				   struct bnxt_tx_ring_info *txr,
+ 				   dma_addr_t mapping, u32 len);
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
+index 32b5faa87bb8d..208a3459f2e29 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
+@@ -168,7 +168,7 @@ static int dpaa2_ptp_probe(struct fsl_mc_device *mc_dev)
+ 	base = of_iomap(node, 0);
+ 	if (!base) {
+ 		err = -ENOMEM;
+-		goto err_close;
++		goto err_put;
+ 	}
+ 
+ 	err = fsl_mc_allocate_irqs(mc_dev);
+@@ -212,6 +212,8 @@ err_free_mc_irq:
+ 	fsl_mc_free_irqs(mc_dev);
+ err_unmap:
+ 	iounmap(base);
++err_put:
++	of_node_put(node);
+ err_close:
+ 	dprtc_close(mc_dev->mc_io, 0, mc_dev->mc_handle);
+ err_free_mcp:
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index a474715860475..446373623f47d 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -671,7 +671,7 @@ static inline struct ice_pf *ice_netdev_to_pf(struct net_device *netdev)
+ 
+ static inline bool ice_is_xdp_ena_vsi(struct ice_vsi *vsi)
+ {
+-	return !!vsi->xdp_prog;
++	return !!READ_ONCE(vsi->xdp_prog);
+ }
+ 
+ static inline void ice_set_ring_xdp(struct ice_tx_ring *ring)
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index edba96845baf7..a3514a5e067a2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1409,6 +1409,7 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi)
+ 		ring->tx_tstamps = &pf->ptp.port.tx;
+ 		ring->dev = dev;
+ 		ring->count = vsi->num_tx_desc;
++		ring->txq_teid = ICE_INVAL_TEID;
+ 		WRITE_ONCE(vsi->tx_rings[i], ring);
+ 	}
+ 
+@@ -3113,6 +3114,8 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ 		}
+ 	}
+ 
++	if (ice_is_vsi_dflt_vsi(pf->first_sw, vsi))
++		ice_clear_dflt_vsi(pf->first_sw);
+ 	ice_fltr_remove_all(vsi);
+ 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ 	err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 523877c2f8b6b..65742f296b0dc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2528,7 +2528,7 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)
+ 		spin_lock_init(&xdp_ring->tx_lock);
+ 		for (j = 0; j < xdp_ring->count; j++) {
+ 			tx_desc = ICE_TX_DESC(xdp_ring, j);
+-			tx_desc->cmd_type_offset_bsz = cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE);
++			tx_desc->cmd_type_offset_bsz = 0;
+ 		}
+ 	}
+ 
+@@ -2724,8 +2724,10 @@ free_qmap:
+ 
+ 	ice_for_each_xdp_txq(vsi, i)
+ 		if (vsi->xdp_rings[i]) {
+-			if (vsi->xdp_rings[i]->desc)
++			if (vsi->xdp_rings[i]->desc) {
++				synchronize_rcu();
+ 				ice_free_tx_ring(vsi->xdp_rings[i]);
++			}
+ 			kfree_rcu(vsi->xdp_rings[i], rcu);
+ 			vsi->xdp_rings[i] = NULL;
+ 		}
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index e17813fb71a1b..91182a6bc137f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -3383,9 +3383,9 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
+ 				goto error_param;
+ 			}
+ 
+-			/* Skip queue if not enabled */
+ 			if (!test_bit(vf_q_id, vf->txq_ena))
+-				continue;
++				dev_dbg(ice_pf_to_dev(vsi->back), "Queue %u on VSI %u is not enabled, but stopping it anyway\n",
++					vf_q_id, vsi->vsi_num);
+ 
+ 			ice_fill_txq_meta(vsi, ring, &txq_meta);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index ac97cf3c58040..28c4f90ad07f1 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -41,8 +41,10 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
+ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
+ {
+ 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
+-	if (ice_is_xdp_ena_vsi(vsi))
++	if (ice_is_xdp_ena_vsi(vsi)) {
++		synchronize_rcu();
+ 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
++	}
+ 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
+ }
+ 
+@@ -763,7 +765,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
+ 	struct ice_vsi *vsi = np->vsi;
+ 	struct ice_tx_ring *ring;
+ 
+-	if (test_bit(ICE_DOWN, vsi->state))
++	if (test_bit(ICE_VSI_DOWN, vsi->state))
+ 		return -ENETDOWN;
+ 
+ 	if (!ice_is_xdp_ena_vsi(vsi))
+diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
+index 0636783f7bc03..ffa131d2bb52a 100644
+--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
+@@ -2747,7 +2747,7 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+ 	}
+ 
+ 	ret = of_get_mac_address(pnp, ppd.mac_addr);
+-	if (ret)
++	if (ret == -EPROBE_DEFER)
+ 		return ret;
+ 
+ 	mv643xx_eth_property(pnp, "tx-queue-size", ppd.tx_queue_size);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 22de7327c5a82..4730d6c14aebf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5223,6 +5223,7 @@ mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *prof
+ 	}
+ 
+ 	netif_carrier_off(netdev);
++	netif_tx_disable(netdev);
+ 	dev_net_set(netdev, mlx5_core_net(mdev));
+ 
+ 	return netdev;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index 7b16a1188aabb..fd79860de723b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -433,35 +433,12 @@ int mlx5_query_module_eeprom_by_page(struct mlx5_core_dev *dev,
+ 				     struct mlx5_module_eeprom_query_params *params,
+ 				     u8 *data)
+ {
+-	u8 module_id;
+ 	int err;
+ 
+ 	err = mlx5_query_module_num(dev, &params->module_number);
+ 	if (err)
+ 		return err;
+ 
+-	err = mlx5_query_module_id(dev, params->module_number, &module_id);
+-	if (err)
+-		return err;
+-
+-	switch (module_id) {
+-	case MLX5_MODULE_ID_SFP:
+-		if (params->page > 0)
+-			return -EINVAL;
+-		break;
+-	case MLX5_MODULE_ID_QSFP:
+-	case MLX5_MODULE_ID_QSFP28:
+-	case MLX5_MODULE_ID_QSFP_PLUS:
+-		if (params->page > 3)
+-			return -EINVAL;
+-		break;
+-	case MLX5_MODULE_ID_DSFP:
+-		break;
+-	default:
+-		mlx5_core_err(dev, "Module ID not recognized: 0x%x\n", module_id);
+-		return -EINVAL;
+-	}
+-
+ 	if (params->i2c_address != MLX5_I2C_ADDR_HIGH &&
+ 	    params->i2c_address != MLX5_I2C_ADDR_LOW) {
+ 		mlx5_core_err(dev, "I2C address not recognized: 0x%x\n", params->i2c_address);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+index e3edca187ddfa..5250d1d1e49ca 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+@@ -489,7 +489,7 @@ struct split_type_defs {
+ 
+ #define STATIC_DEBUG_LINE_DWORDS	9
+ 
+-#define NUM_COMMON_GLOBAL_PARAMS	11
++#define NUM_COMMON_GLOBAL_PARAMS	10
+ 
+ #define MAX_RECURSION_DEPTH		10
+ 
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+index 999abcfe3310a..17f895250e041 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+@@ -747,6 +747,9 @@ qede_build_skb(struct qede_rx_queue *rxq,
+ 	buf = page_address(bd->data) + bd->page_offset;
+ 	skb = build_skb(buf, rxq->rx_buf_seg_size);
+ 
++	if (unlikely(!skb))
++		return NULL;
++
+ 	skb_reserve(skb, pad);
+ 	skb_put(skb, len);
+ 
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index 3dbea028b325c..1f8cfd8060082 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -763,6 +763,85 @@ void efx_remove_channels(struct efx_nic *efx)
+ 	kfree(efx->xdp_tx_queues);
+ }
+ 
++static int efx_set_xdp_tx_queue(struct efx_nic *efx, int xdp_queue_number,
++				struct efx_tx_queue *tx_queue)
++{
++	if (xdp_queue_number >= efx->xdp_tx_queue_count)
++		return -EINVAL;
++
++	netif_dbg(efx, drv, efx->net_dev,
++		  "Channel %u TXQ %u is XDP %u, HW %u\n",
++		  tx_queue->channel->channel, tx_queue->label,
++		  xdp_queue_number, tx_queue->queue);
++	efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
++	return 0;
++}
++
++static void efx_set_xdp_channels(struct efx_nic *efx)
++{
++	struct efx_tx_queue *tx_queue;
++	struct efx_channel *channel;
++	unsigned int next_queue = 0;
++	int xdp_queue_number = 0;
++	int rc;
++
++	/* We need to mark which channels really have RX and TX
++	 * queues, and adjust the TX queue numbers if we have separate
++	 * RX-only and TX-only channels.
++	 */
++	efx_for_each_channel(channel, efx) {
++		if (channel->channel < efx->tx_channel_offset)
++			continue;
++
++		if (efx_channel_is_xdp_tx(channel)) {
++			efx_for_each_channel_tx_queue(tx_queue, channel) {
++				tx_queue->queue = next_queue++;
++				rc = efx_set_xdp_tx_queue(efx, xdp_queue_number,
++							  tx_queue);
++				if (rc == 0)
++					xdp_queue_number++;
++			}
++		} else {
++			efx_for_each_channel_tx_queue(tx_queue, channel) {
++				tx_queue->queue = next_queue++;
++				netif_dbg(efx, drv, efx->net_dev,
++					  "Channel %u TXQ %u is HW %u\n",
++					  channel->channel, tx_queue->label,
++					  tx_queue->queue);
++			}
++
++			/* If XDP is borrowing queues from net stack, it must
++			 * use the queue with no csum offload, which is the
++			 * first one of the channel
++			 * (note: tx_queue_by_type is not initialized yet)
++			 */
++			if (efx->xdp_txq_queues_mode ==
++			    EFX_XDP_TX_QUEUES_BORROWED) {
++				tx_queue = &channel->tx_queue[0];
++				rc = efx_set_xdp_tx_queue(efx, xdp_queue_number,
++							  tx_queue);
++				if (rc == 0)
++					xdp_queue_number++;
++			}
++		}
++	}
++	WARN_ON(efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_DEDICATED &&
++		xdp_queue_number != efx->xdp_tx_queue_count);
++	WARN_ON(efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED &&
++		xdp_queue_number > efx->xdp_tx_queue_count);
++
++	/* If we have more CPUs than assigned XDP TX queues, assign the already
++	 * existing queues to the exceeding CPUs
++	 */
++	next_queue = 0;
++	while (xdp_queue_number < efx->xdp_tx_queue_count) {
++		tx_queue = efx->xdp_tx_queues[next_queue++];
++		rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
++		if (rc == 0)
++			xdp_queue_number++;
++	}
++}
++
+ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ {
+ 	struct efx_channel *other_channel[EFX_MAX_CHANNELS], *channel;
+@@ -837,6 +916,7 @@ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ 		efx_init_napi_channel(efx->channel[i]);
+ 	}
+ 
++	efx_set_xdp_channels(efx);
+ out:
+ 	/* Destroy unused channel structures */
+ 	for (i = 0; i < efx->n_channels; i++) {
+@@ -872,26 +952,9 @@ rollback:
+ 	goto out;
+ }
+ 
+-static inline int
+-efx_set_xdp_tx_queue(struct efx_nic *efx, int xdp_queue_number,
+-		     struct efx_tx_queue *tx_queue)
+-{
+-	if (xdp_queue_number >= efx->xdp_tx_queue_count)
+-		return -EINVAL;
+-
+-	netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
+-		  tx_queue->channel->channel, tx_queue->label,
+-		  xdp_queue_number, tx_queue->queue);
+-	efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
+-	return 0;
+-}
+-
+ int efx_set_channels(struct efx_nic *efx)
+ {
+-	struct efx_tx_queue *tx_queue;
+ 	struct efx_channel *channel;
+-	unsigned int next_queue = 0;
+-	int xdp_queue_number;
+ 	int rc;
+ 
+ 	efx->tx_channel_offset =
+@@ -909,61 +972,14 @@ int efx_set_channels(struct efx_nic *efx)
+ 			return -ENOMEM;
+ 	}
+ 
+-	/* We need to mark which channels really have RX and TX
+-	 * queues, and adjust the TX queue numbers if we have separate
+-	 * RX-only and TX-only channels.
+-	 */
+-	xdp_queue_number = 0;
+ 	efx_for_each_channel(channel, efx) {
+ 		if (channel->channel < efx->n_rx_channels)
+ 			channel->rx_queue.core_index = channel->channel;
+ 		else
+ 			channel->rx_queue.core_index = -1;
+-
+-		if (channel->channel >= efx->tx_channel_offset) {
+-			if (efx_channel_is_xdp_tx(channel)) {
+-				efx_for_each_channel_tx_queue(tx_queue, channel) {
+-					tx_queue->queue = next_queue++;
+-					rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+-					if (rc == 0)
+-						xdp_queue_number++;
+-				}
+-			} else {
+-				efx_for_each_channel_tx_queue(tx_queue, channel) {
+-					tx_queue->queue = next_queue++;
+-					netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is HW %u\n",
+-						  channel->channel, tx_queue->label,
+-						  tx_queue->queue);
+-				}
+-
+-				/* If XDP is borrowing queues from net stack, it must use the queue
+-				 * with no csum offload, which is the first one of the channel
+-				 * (note: channel->tx_queue_by_type is not initialized yet)
+-				 */
+-				if (efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_BORROWED) {
+-					tx_queue = &channel->tx_queue[0];
+-					rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+-					if (rc == 0)
+-						xdp_queue_number++;
+-				}
+-			}
+-		}
+ 	}
+-	WARN_ON(efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_DEDICATED &&
+-		xdp_queue_number != efx->xdp_tx_queue_count);
+-	WARN_ON(efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED &&
+-		xdp_queue_number > efx->xdp_tx_queue_count);
+ 
+-	/* If we have more CPUs than assigned XDP TX queues, assign the already
+-	 * existing queues to the exceeding CPUs
+-	 */
+-	next_queue = 0;
+-	while (xdp_queue_number < efx->xdp_tx_queue_count) {
+-		tx_queue = efx->xdp_tx_queues[next_queue++];
+-		rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+-		if (rc == 0)
+-			xdp_queue_number++;
+-	}
++	efx_set_xdp_channels(efx);
+ 
+ 	rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels);
+ 	if (rc)
+@@ -1107,7 +1123,7 @@ void efx_start_channels(struct efx_nic *efx)
+ 	struct efx_rx_queue *rx_queue;
+ 	struct efx_channel *channel;
+ 
+-	efx_for_each_channel(channel, efx) {
++	efx_for_each_channel_rev(channel, efx) {
+ 		efx_for_each_channel_tx_queue(tx_queue, channel) {
+ 			efx_init_tx_queue(tx_queue);
+ 			atomic_inc(&efx->active_queues);
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index 633ca77a26fd1..b925de9b43028 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -166,6 +166,9 @@ static void efx_fini_rx_recycle_ring(struct efx_rx_queue *rx_queue)
+ 	struct efx_nic *efx = rx_queue->efx;
+ 	int i;
+ 
++	if (unlikely(!rx_queue->page_ring))
++		return;
++
+ 	/* Unmap and release the pages in the recycle ring. Remove the ring. */
+ 	for (i = 0; i <= rx_queue->page_ptr_mask; i++) {
+ 		struct page *page = rx_queue->page_ring[i];
+diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c
+index d16e031e95f44..6983799e1c05d 100644
+--- a/drivers/net/ethernet/sfc/tx.c
++++ b/drivers/net/ethernet/sfc/tx.c
+@@ -443,6 +443,9 @@ int efx_xdp_tx_buffers(struct efx_nic *efx, int n, struct xdp_frame **xdpfs,
+ 	if (unlikely(!tx_queue))
+ 		return -EINVAL;
+ 
++	if (!tx_queue->initialised)
++		return -EINVAL;
++
+ 	if (efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED)
+ 		HARD_TX_LOCK(efx->net_dev, tx_queue->core_txq, cpu);
+ 
+diff --git a/drivers/net/ethernet/sfc/tx_common.c b/drivers/net/ethernet/sfc/tx_common.c
+index d530cde2b8648..9bc8281b7f5bd 100644
+--- a/drivers/net/ethernet/sfc/tx_common.c
++++ b/drivers/net/ethernet/sfc/tx_common.c
+@@ -101,6 +101,8 @@ void efx_fini_tx_queue(struct efx_tx_queue *tx_queue)
+ 	netif_dbg(tx_queue->efx, drv, tx_queue->efx->net_dev,
+ 		  "shutting down TX queue %d\n", tx_queue->queue);
+ 
++	tx_queue->initialised = false;
++
+ 	if (!tx_queue->buffer)
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 5d29f336315b7..11e1055e8260f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -431,8 +431,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 	plat->phylink_node = np;
+ 
+ 	/* Get max speed of operation from device tree */
+-	if (of_property_read_u32(np, "max-speed", &plat->max_speed))
+-		plat->max_speed = -1;
++	of_property_read_u32(np, "max-speed", &plat->max_speed);
+ 
+ 	plat->bus_id = of_alias_get_id(np, "ethernet");
+ 	if (plat->bus_id < 0)
+diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
+index 6b12902a803f0..cecf8c63096cd 100644
+--- a/drivers/net/macvtap.c
++++ b/drivers/net/macvtap.c
+@@ -133,11 +133,17 @@ static void macvtap_setup(struct net_device *dev)
+ 	dev->tx_queue_len = TUN_READQ_SIZE;
+ }
+ 
++static struct net *macvtap_link_net(const struct net_device *dev)
++{
++	return dev_net(macvlan_dev_real_dev(dev));
++}
++
+ static struct rtnl_link_ops macvtap_link_ops __read_mostly = {
+ 	.kind		= "macvtap",
+ 	.setup		= macvtap_setup,
+ 	.newlink	= macvtap_newlink,
+ 	.dellink	= macvtap_dellink,
++	.get_link_net	= macvtap_link_net,
+ 	.priv_size      = sizeof(struct macvtap_dev),
+ };
+ 
+diff --git a/drivers/net/mdio/mdio-mscc-miim.c b/drivers/net/mdio/mdio-mscc-miim.c
+index 17f98f609ec82..5070ca2f2637a 100644
+--- a/drivers/net/mdio/mdio-mscc-miim.c
++++ b/drivers/net/mdio/mdio-mscc-miim.c
+@@ -76,6 +76,9 @@ static int mscc_miim_read(struct mii_bus *bus, int mii_id, int regnum)
+ 	u32 val;
+ 	int ret;
+ 
++	if (regnum & MII_ADDR_C45)
++		return -EOPNOTSUPP;
++
+ 	ret = mscc_miim_wait_pending(bus);
+ 	if (ret)
+ 		goto out;
+@@ -105,6 +108,9 @@ static int mscc_miim_write(struct mii_bus *bus, int mii_id,
+ 	struct mscc_miim_dev *miim = bus->priv;
+ 	int ret;
+ 
++	if (regnum & MII_ADDR_C45)
++		return -EOPNOTSUPP;
++
+ 	ret = mscc_miim_wait_pending(bus);
+ 	if (ret < 0)
+ 		goto out;
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index c1512c9925a66..15aa5ac1ff49c 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -74,6 +74,12 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 		.vendor = "HUAWEI",
+ 		.part = "MA5671A",
+ 		.modes = sfp_quirk_2500basex,
++	}, {
++		// Lantech 8330-262D-E can operate at 2500base-X, but
++		// incorrectly report 2500MBd NRZ in their EEPROM
++		.vendor = "Lantech",
++		.part = "8330-262D-E",
++		.modes = sfp_quirk_2500basex,
+ 	}, {
+ 		.vendor = "UBNT",
+ 		.part = "UF-INSTANT",
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 8e3a28ba6b282..ba2ef5437e167 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1198,7 +1198,8 @@ static int tap_sendmsg(struct socket *sock, struct msghdr *m,
+ 	struct xdp_buff *xdp;
+ 	int i;
+ 
+-	if (ctl && (ctl->type == TUN_MSG_PTR)) {
++	if (m->msg_controllen == sizeof(struct tun_msg_ctl) &&
++	    ctl && ctl->type == TUN_MSG_PTR) {
+ 		for (i = 0; i < ctl->num; i++) {
+ 			xdp = &((struct xdp_buff *)ctl->ptr)[i];
+ 			tap_get_user_xdp(q, xdp);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 45a67e72a02c6..02de8d998bfa4 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2489,7 +2489,8 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+ 	if (!tun)
+ 		return -EBADFD;
+ 
+-	if (ctl && (ctl->type == TUN_MSG_PTR)) {
++	if (m->msg_controllen == sizeof(struct tun_msg_ctl) &&
++	    ctl && ctl->type == TUN_MSG_PTR) {
+ 		struct tun_page tpage;
+ 		int n = ctl->num;
+ 		int flush = 0;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index b2242a082431c..091dd7caf10cc 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1265,6 +1265,7 @@ static int vrf_prepare_mac_header(struct sk_buff *skb,
+ 	eth = (struct ethhdr *)skb->data;
+ 
+ 	skb_reset_mac_header(skb);
++	skb_reset_mac_len(skb);
+ 
+ 	/* we set the ethernet destination and the source addresses to the
+ 	 * address of the VRF device.
+@@ -1294,9 +1295,9 @@ static int vrf_prepare_mac_header(struct sk_buff *skb,
+  */
+ static int vrf_add_mac_header_if_unset(struct sk_buff *skb,
+ 				       struct net_device *vrf_dev,
+-				       u16 proto)
++				       u16 proto, struct net_device *orig_dev)
+ {
+-	if (skb_mac_header_was_set(skb))
++	if (skb_mac_header_was_set(skb) && dev_has_header(orig_dev))
+ 		return 0;
+ 
+ 	return vrf_prepare_mac_header(skb, vrf_dev, proto);
+@@ -1402,6 +1403,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 
+ 	/* if packet is NDISC then keep the ingress interface */
+ 	if (!is_ndisc) {
++		struct net_device *orig_dev = skb->dev;
++
+ 		vrf_rx_stats(vrf_dev, skb->len);
+ 		skb->dev = vrf_dev;
+ 		skb->skb_iif = vrf_dev->ifindex;
+@@ -1410,7 +1413,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 			int err;
+ 
+ 			err = vrf_add_mac_header_if_unset(skb, vrf_dev,
+-							  ETH_P_IPV6);
++							  ETH_P_IPV6,
++							  orig_dev);
+ 			if (likely(!err)) {
+ 				skb_push(skb, skb->mac_len);
+ 				dev_queue_xmit_nit(skb, vrf_dev);
+@@ -1440,6 +1444,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ 				  struct sk_buff *skb)
+ {
++	struct net_device *orig_dev = skb->dev;
++
+ 	skb->dev = vrf_dev;
+ 	skb->skb_iif = vrf_dev->ifindex;
+ 	IPCB(skb)->flags |= IPSKB_L3SLAVE;
+@@ -1460,7 +1466,8 @@ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ 	if (!list_empty(&vrf_dev->ptype_all)) {
+ 		int err;
+ 
+-		err = vrf_add_mac_header_if_unset(skb, vrf_dev, ETH_P_IP);
++		err = vrf_add_mac_header_if_unset(skb, vrf_dev, ETH_P_IP,
++						  orig_dev);
+ 		if (likely(!err)) {
+ 			skb_push(skb, skb->mac_len);
+ 			dev_queue_xmit_nit(skb, vrf_dev);
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 3fb0aa0008259..24bd0520926bf 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -391,6 +391,8 @@ static void ath11k_ahb_free_ext_irq(struct ath11k_base *ab)
+ 
+ 		for (j = 0; j < irq_grp->num_irq; j++)
+ 			free_irq(ab->irq_num[irq_grp->irqs[j]], irq_grp);
++
++		netif_napi_del(&irq_grp->napi);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c
+index 49c0b1ad40a02..f2149241fb131 100644
+--- a/drivers/net/wireless/ath/ath11k/mhi.c
++++ b/drivers/net/wireless/ath/ath11k/mhi.c
+@@ -519,7 +519,7 @@ static int ath11k_mhi_set_state(struct ath11k_pci *ab_pci,
+ 		ret = 0;
+ 		break;
+ 	case ATH11K_MHI_POWER_ON:
+-		ret = mhi_async_power_up(ab_pci->mhi_ctrl);
++		ret = mhi_sync_power_up(ab_pci->mhi_ctrl);
+ 		break;
+ 	case ATH11K_MHI_POWER_OFF:
+ 		mhi_power_down(ab_pci->mhi_ctrl, true);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index 4c348bacf2cb4..754578f3dcf1c 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -1419,6 +1419,11 @@ static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
+ 	struct ath11k_base *ab = dev_get_drvdata(dev);
+ 	int ret;
+ 
++	if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
++		ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot skipping pci suspend as qmi is not initialised\n");
++		return 0;
++	}
++
+ 	ret = ath11k_core_suspend(ab);
+ 	if (ret)
+ 		ath11k_warn(ab, "failed to suspend core: %d\n", ret);
+@@ -1431,6 +1436,11 @@ static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
+ 	struct ath11k_base *ab = dev_get_drvdata(dev);
+ 	int ret;
+ 
++	if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
++		ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot skipping pci resume as qmi is not initialised\n");
++		return 0;
++	}
++
+ 	ret = ath11k_core_resume(ab);
+ 	if (ret)
+ 		ath11k_warn(ab, "failed to resume core: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath5k/eeprom.c b/drivers/net/wireless/ath/ath5k/eeprom.c
+index 1fbc2c19848f2..d444b3d70ba2e 100644
+--- a/drivers/net/wireless/ath/ath5k/eeprom.c
++++ b/drivers/net/wireless/ath/ath5k/eeprom.c
+@@ -746,6 +746,9 @@ ath5k_eeprom_convert_pcal_info_5111(struct ath5k_hw *ah, int mode,
+ 			}
+ 		}
+ 
++		if (idx == AR5K_EEPROM_N_PD_CURVES)
++			goto err_out;
++
+ 		ee->ee_pd_gains[mode] = 1;
+ 
+ 		pd = &chinfo[pier].pd_curves[idx];
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
+index 3988f5fea33ab..6b2a2828cb83d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2018-2021 Intel Corporation
++ * Copyright (C) 2018-2022 Intel Corporation
+  */
+ #ifndef __iwl_fw_dbg_tlv_h__
+ #define __iwl_fw_dbg_tlv_h__
+@@ -244,11 +244,10 @@ struct iwl_fw_ini_hcmd_tlv {
+ } __packed; /* FW_TLV_DEBUG_HCMD_API_S_VER_1 */
+ 
+ /**
+-* struct iwl_fw_ini_conf_tlv - preset configuration TLV
++* struct iwl_fw_ini_addr_val - Address and value to set it to
+ *
+ * @address: the base address
+ * @value: value to set at address
+-
+ */
+ struct iwl_fw_ini_addr_val {
+ 	__le32 address;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+index 035336a9e755e..6d82725cb87d0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+  * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
+  * Copyright (C) 2017 Intel Deutschland GmbH
+  */
+@@ -295,18 +295,31 @@ void iwl_mvm_phy_ctxt_unref(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt)
+ 	 * otherwise we might not be able to reuse this phy.
+ 	 */
+ 	if (ctxt->ref == 0) {
+-		struct ieee80211_channel *chan;
++		struct ieee80211_channel *chan = NULL;
+ 		struct cfg80211_chan_def chandef;
+-		struct ieee80211_supported_band *sband = NULL;
+-		enum nl80211_band band = NL80211_BAND_2GHZ;
++		struct ieee80211_supported_band *sband;
++		enum nl80211_band band;
++		int channel;
+ 
+-		while (!sband && band < NUM_NL80211_BANDS)
+-			sband = mvm->hw->wiphy->bands[band++];
++		for (band = NL80211_BAND_2GHZ; band < NUM_NL80211_BANDS; band++) {
++			sband = mvm->hw->wiphy->bands[band];
+ 
+-		if (WARN_ON(!sband))
+-			return;
++			if (!sband)
++				continue;
++
++			for (channel = 0; channel < sband->n_channels; channel++)
++				if (!(sband->channels[channel].flags &
++						IEEE80211_CHAN_DISABLED)) {
++					chan = &sband->channels[channel];
++					break;
++				}
+ 
+-		chan = &sband->channels[0];
++			if (chan)
++				break;
++		}
++
++		if (WARN_ON(!chan))
++			return;
+ 
+ 		cfg80211_chandef_create(&chandef, chan, NL80211_CHAN_NO_HT);
+ 		iwl_mvm_phy_ctxt_changed(mvm, ctxt, &chandef, 1, 1);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 960b21719b80d..e3eefc55beaf7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1890,7 +1890,10 @@ static u8 iwl_mvm_scan_umac_chan_flags_v2(struct iwl_mvm *mvm,
+ 			IWL_SCAN_CHANNEL_FLAG_CACHE_ADD;
+ 
+ 	/* set fragmented ebs for fragmented scan on HB channels */
+-	if (iwl_mvm_is_scan_fragmented(params->hb_type))
++	if ((!iwl_mvm_is_cdb_supported(mvm) &&
++	     iwl_mvm_is_scan_fragmented(params->type)) ||
++	    (iwl_mvm_is_cdb_supported(mvm) &&
++	     iwl_mvm_is_scan_fragmented(params->hb_type)))
+ 		flags |= IWL_SCAN_CHANNEL_FLAG_EBS_FRAG;
+ 
+ 	return flags;
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 5e1c1506a4c65..7aecde35cb9a3 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -465,6 +465,7 @@ mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q)
+ 
+ 		qbuf.addr = addr + offset;
+ 		qbuf.len = len - offset;
++		qbuf.skip_unmap = false;
+ 		mt76_dma_add_buf(dev, q, &qbuf, 1, 0, buf, NULL);
+ 		frames++;
+ 	}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index e2da720a91b61..f740a8ba164d4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -19,7 +19,7 @@
+ 
+ #define MT_MCU_RING_SIZE	32
+ #define MT_RX_BUF_SIZE		2048
+-#define MT_SKB_HEAD_LEN		128
++#define MT_SKB_HEAD_LEN		256
+ 
+ #define MT_MAX_NON_AQL_PKT	16
+ #define MT_TXQ_FREE_THR		32
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index ed8f7bc18977d..f8f3c1db488c6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1732,7 +1732,7 @@ mt7615_mac_adjust_sensitivity(struct mt7615_phy *phy,
+ 	struct mt7615_dev *dev = phy->dev;
+ 	int false_cca = ofdm ? phy->false_cca_ofdm : phy->false_cca_cck;
+ 	bool ext_phy = phy != &dev->phy;
+-	u16 def_th = ofdm ? -98 : -110;
++	s16 def_th = ofdm ? -98 : -110;
+ 	bool update = false;
+ 	s8 *sensitivity;
+ 	int signal;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index a1da514ca2564..d1a00cd470734 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -982,6 +982,7 @@ mt7915_mac_write_txwi_80211(struct mt7915_dev *dev, __le32 *txwi,
+ 		val = MT_TXD3_SN_VALID |
+ 		      FIELD_PREP(MT_TXD3_SEQ, IEEE80211_SEQ_TO_SN(seqno));
+ 		txwi[3] |= cpu_to_le32(val);
++		txwi[7] &= ~cpu_to_le32(MT_TXD7_HW_AMSDU);
+ 	}
+ 
+ 	val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 8c55562c1a8d9..b3bd090a13f45 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -260,6 +260,7 @@ static void mt7921_stop(struct ieee80211_hw *hw)
+ 
+ 	cancel_delayed_work_sync(&dev->pm.ps_work);
+ 	cancel_work_sync(&dev->pm.wake_work);
++	cancel_work_sync(&dev->reset_work);
+ 	mt76_connac_free_pending_tx_skbs(&dev->pm, NULL);
+ 
+ 	mt7921_mutex_acquire(dev);
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index d02ec5a735cb1..9d9c0984903f4 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -1501,11 +1501,12 @@ static void rtw89_core_txq_push(struct rtw89_dev *rtwdev,
+ 	unsigned long i;
+ 	int ret;
+ 
++	rcu_read_lock();
+ 	for (i = 0; i < frame_cnt; i++) {
+ 		skb = ieee80211_tx_dequeue_ni(rtwdev->hw, txq);
+ 		if (!skb) {
+ 			rtw89_debug(rtwdev, RTW89_DBG_TXRX, "dequeue a NULL skb\n");
+-			return;
++			goto out;
+ 		}
+ 		rtw89_core_txq_check_agg(rtwdev, rtwtxq, skb);
+ 		ret = rtw89_core_tx_write(rtwdev, vif, sta, skb, NULL);
+@@ -1515,6 +1516,8 @@ static void rtw89_core_txq_push(struct rtw89_dev *rtwdev,
+ 			break;
+ 		}
+ 	}
++out:
++	rcu_read_unlock();
+ }
+ 
+ static u32 rtw89_check_and_reclaim_tx_resource(struct rtw89_dev *rtwdev, u8 tid)
+diff --git a/drivers/opp/debugfs.c b/drivers/opp/debugfs.c
+index 596c185b5dda4..b5f2f9f393926 100644
+--- a/drivers/opp/debugfs.c
++++ b/drivers/opp/debugfs.c
+@@ -10,6 +10,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
++#include <linux/of.h>
+ #include <linux/init.h>
+ #include <linux/limits.h>
+ #include <linux/slab.h>
+@@ -131,9 +132,13 @@ void opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
+ 	debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend);
+ 	debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate);
+ 	debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate);
++	debugfs_create_u32("level", S_IRUGO, d, &opp->level);
+ 	debugfs_create_ulong("clock_latency_ns", S_IRUGO, d,
+ 			     &opp->clock_latency_ns);
+ 
++	opp->of_name = of_node_full_name(opp->np);
++	debugfs_create_str("of_name", S_IRUGO, d, (char **)&opp->of_name);
++
+ 	opp_debug_create_supplies(opp, opp_table, d);
+ 	opp_debug_create_bw(opp, opp_table, d);
+ 
+diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
+index 407c3bfe51d96..45e3a55239a13 100644
+--- a/drivers/opp/opp.h
++++ b/drivers/opp/opp.h
+@@ -96,6 +96,7 @@ struct dev_pm_opp {
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 	struct dentry *dentry;
++	const char *of_name;
+ #endif
+ };
+ 
+diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
+index 952a92504df69..e33036281327d 100644
+--- a/drivers/parisc/dino.c
++++ b/drivers/parisc/dino.c
+@@ -142,9 +142,8 @@ struct dino_device
+ {
+ 	struct pci_hba_data	hba;	/* 'C' inheritance - must be first */
+ 	spinlock_t		dinosaur_pen;
+-	unsigned long		txn_addr; /* EIR addr to generate interrupt */ 
+-	u32			txn_data; /* EIR data assign to each dino */ 
+ 	u32 			imr;	  /* IRQ's which are enabled */ 
++	struct gsc_irq		gsc_irq;
+ 	int			global_irq[DINO_LOCAL_IRQS]; /* map IMR bit to global irq */
+ #ifdef DINO_DEBUG
+ 	unsigned int		dino_irr0; /* save most recent IRQ line stat */
+@@ -339,14 +338,43 @@ static void dino_unmask_irq(struct irq_data *d)
+ 	if (tmp & DINO_MASK_IRQ(local_irq)) {
+ 		DBG(KERN_WARNING "%s(): IRQ asserted! (ILR 0x%x)\n",
+ 				__func__, tmp);
+-		gsc_writel(dino_dev->txn_data, dino_dev->txn_addr);
++		gsc_writel(dino_dev->gsc_irq.txn_data, dino_dev->gsc_irq.txn_addr);
+ 	}
+ }
+ 
++#ifdef CONFIG_SMP
++static int dino_set_affinity_irq(struct irq_data *d, const struct cpumask *dest,
++				bool force)
++{
++	struct dino_device *dino_dev = irq_data_get_irq_chip_data(d);
++	struct cpumask tmask;
++	int cpu_irq;
++	u32 eim;
++
++	if (!cpumask_and(&tmask, dest, cpu_online_mask))
++		return -EINVAL;
++
++	cpu_irq = cpu_check_affinity(d, &tmask);
++	if (cpu_irq < 0)
++		return cpu_irq;
++
++	dino_dev->gsc_irq.txn_addr = txn_affinity_addr(d->irq, cpu_irq);
++	eim = ((u32) dino_dev->gsc_irq.txn_addr) | dino_dev->gsc_irq.txn_data;
++	__raw_writel(eim, dino_dev->hba.base_addr+DINO_IAR0);
++
++	irq_data_update_effective_affinity(d, &tmask);
++
++	return IRQ_SET_MASK_OK;
++}
++#endif
++
+ static struct irq_chip dino_interrupt_type = {
+ 	.name		= "GSC-PCI",
+ 	.irq_unmask	= dino_unmask_irq,
+ 	.irq_mask	= dino_mask_irq,
++#ifdef CONFIG_SMP
++	.irq_set_affinity = dino_set_affinity_irq,
++#endif
+ };
+ 
+ 
+@@ -806,7 +834,6 @@ static int __init dino_common_init(struct parisc_device *dev,
+ {
+ 	int status;
+ 	u32 eim;
+-	struct gsc_irq gsc_irq;
+ 	struct resource *res;
+ 
+ 	pcibios_register_hba(&dino_dev->hba);
+@@ -821,10 +848,8 @@ static int __init dino_common_init(struct parisc_device *dev,
+ 	**   still only has 11 IRQ input lines - just map some of them
+ 	**   to a different processor.
+ 	*/
+-	dev->irq = gsc_alloc_irq(&gsc_irq);
+-	dino_dev->txn_addr = gsc_irq.txn_addr;
+-	dino_dev->txn_data = gsc_irq.txn_data;
+-	eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++	dev->irq = gsc_alloc_irq(&dino_dev->gsc_irq);
++	eim = ((u32) dino_dev->gsc_irq.txn_addr) | dino_dev->gsc_irq.txn_data;
+ 
+ 	/* 
+ 	** Dino needs a PA "IRQ" to get a processor's attention.
+diff --git a/drivers/parisc/gsc.c b/drivers/parisc/gsc.c
+index ed9371acf37eb..ec175ae998733 100644
+--- a/drivers/parisc/gsc.c
++++ b/drivers/parisc/gsc.c
+@@ -135,10 +135,41 @@ static void gsc_asic_unmask_irq(struct irq_data *d)
+ 	 */
+ }
+ 
++#ifdef CONFIG_SMP
++static int gsc_set_affinity_irq(struct irq_data *d, const struct cpumask *dest,
++				bool force)
++{
++	struct gsc_asic *gsc_dev = irq_data_get_irq_chip_data(d);
++	struct cpumask tmask;
++	int cpu_irq;
++
++	if (!cpumask_and(&tmask, dest, cpu_online_mask))
++		return -EINVAL;
++
++	cpu_irq = cpu_check_affinity(d, &tmask);
++	if (cpu_irq < 0)
++		return cpu_irq;
++
++	gsc_dev->gsc_irq.txn_addr = txn_affinity_addr(d->irq, cpu_irq);
++	gsc_dev->eim = ((u32) gsc_dev->gsc_irq.txn_addr) | gsc_dev->gsc_irq.txn_data;
++
++	/* switch IRQ's for devices below LASI/WAX to other CPU */
++	gsc_writel(gsc_dev->eim, gsc_dev->hpa + OFFSET_IAR);
++
++	irq_data_update_effective_affinity(d, &tmask);
++
++	return IRQ_SET_MASK_OK;
++}
++#endif
++
++
+ static struct irq_chip gsc_asic_interrupt_type = {
+ 	.name		=	"GSC-ASIC",
+ 	.irq_unmask	=	gsc_asic_unmask_irq,
+ 	.irq_mask	=	gsc_asic_mask_irq,
++#ifdef CONFIG_SMP
++	.irq_set_affinity =	gsc_set_affinity_irq,
++#endif
+ };
+ 
+ int gsc_assign_irq(struct irq_chip *type, void *data)
+diff --git a/drivers/parisc/gsc.h b/drivers/parisc/gsc.h
+index 86abad3fa2150..73cbd0bb1975a 100644
+--- a/drivers/parisc/gsc.h
++++ b/drivers/parisc/gsc.h
+@@ -31,6 +31,7 @@ struct gsc_asic {
+ 	int version;
+ 	int type;
+ 	int eim;
++	struct gsc_irq gsc_irq;
+ 	int global_irq[32];
+ };
+ 
+diff --git a/drivers/parisc/lasi.c b/drivers/parisc/lasi.c
+index 4e4fd12c2112e..6ef621adb63a8 100644
+--- a/drivers/parisc/lasi.c
++++ b/drivers/parisc/lasi.c
+@@ -163,7 +163,6 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ {
+ 	extern void (*chassis_power_off)(void);
+ 	struct gsc_asic *lasi;
+-	struct gsc_irq gsc_irq;
+ 	int ret;
+ 
+ 	lasi = kzalloc(sizeof(*lasi), GFP_KERNEL);
+@@ -185,7 +184,7 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ 	lasi_init_irq(lasi);
+ 
+ 	/* the IRQ lasi should use */
+-	dev->irq = gsc_alloc_irq(&gsc_irq);
++	dev->irq = gsc_alloc_irq(&lasi->gsc_irq);
+ 	if (dev->irq < 0) {
+ 		printk(KERN_ERR "%s(): cannot get GSC irq\n",
+ 				__func__);
+@@ -193,9 +192,9 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ 		return -EBUSY;
+ 	}
+ 
+-	lasi->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++	lasi->eim = ((u32) lasi->gsc_irq.txn_addr) | lasi->gsc_irq.txn_data;
+ 
+-	ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "lasi", lasi);
++	ret = request_irq(lasi->gsc_irq.irq, gsc_asic_intr, 0, "lasi", lasi);
+ 	if (ret < 0) {
+ 		kfree(lasi);
+ 		return ret;
+diff --git a/drivers/parisc/wax.c b/drivers/parisc/wax.c
+index 5b6df15162354..73a2b01f8d9ca 100644
+--- a/drivers/parisc/wax.c
++++ b/drivers/parisc/wax.c
+@@ -68,7 +68,6 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ {
+ 	struct gsc_asic *wax;
+ 	struct parisc_device *parent;
+-	struct gsc_irq gsc_irq;
+ 	int ret;
+ 
+ 	wax = kzalloc(sizeof(*wax), GFP_KERNEL);
+@@ -85,7 +84,7 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ 	wax_init_irq(wax);
+ 
+ 	/* the IRQ wax should use */
+-	dev->irq = gsc_claim_irq(&gsc_irq, WAX_GSC_IRQ);
++	dev->irq = gsc_claim_irq(&wax->gsc_irq, WAX_GSC_IRQ);
+ 	if (dev->irq < 0) {
+ 		printk(KERN_ERR "%s(): cannot get GSC irq\n",
+ 				__func__);
+@@ -93,9 +92,9 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ 		return -EBUSY;
+ 	}
+ 
+-	wax->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++	wax->eim = ((u32) wax->gsc_irq.txn_addr) | wax->gsc_irq.txn_data;
+ 
+-	ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "wax", wax);
++	ret = request_irq(wax->gsc_irq.irq, gsc_asic_intr, 0, "wax", wax);
+ 	if (ret < 0) {
+ 		kfree(wax);
+ 		return ret;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index a924564fdbbc3..6277b3f3031a0 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -1179,7 +1179,7 @@ static void advk_msi_irq_compose_msi_msg(struct irq_data *data,
+ 
+ 	msg->address_lo = lower_32_bits(msi_msg);
+ 	msg->address_hi = upper_32_bits(msi_msg);
+-	msg->data = data->irq;
++	msg->data = data->hwirq;
+ }
+ 
+ static int advk_msi_set_affinity(struct irq_data *irq_data,
+@@ -1196,15 +1196,11 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
+ 	int hwirq, i;
+ 
+ 	mutex_lock(&pcie->msi_used_lock);
+-	hwirq = bitmap_find_next_zero_area(pcie->msi_used, MSI_IRQ_NUM,
+-					   0, nr_irqs, 0);
+-	if (hwirq >= MSI_IRQ_NUM) {
+-		mutex_unlock(&pcie->msi_used_lock);
+-		return -ENOSPC;
+-	}
+-
+-	bitmap_set(pcie->msi_used, hwirq, nr_irqs);
++	hwirq = bitmap_find_free_region(pcie->msi_used, MSI_IRQ_NUM,
++					order_base_2(nr_irqs));
+ 	mutex_unlock(&pcie->msi_used_lock);
++	if (hwirq < 0)
++		return -ENOSPC;
+ 
+ 	for (i = 0; i < nr_irqs; i++)
+ 		irq_domain_set_info(domain, virq + i, hwirq + i,
+@@ -1222,7 +1218,7 @@ static void advk_msi_irq_domain_free(struct irq_domain *domain,
+ 	struct advk_pcie *pcie = domain->host_data;
+ 
+ 	mutex_lock(&pcie->msi_used_lock);
+-	bitmap_clear(pcie->msi_used, d->hwirq, nr_irqs);
++	bitmap_release_region(pcie->msi_used, d->hwirq, order_base_2(nr_irqs));
+ 	mutex_unlock(&pcie->msi_used_lock);
+ }
+ 
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 90d84d3bc868f..5b833f00e9800 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -285,7 +285,17 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
+ 		if (ret)
+ 			dev_err(dev, "Data transfer failed\n");
+ 	} else {
+-		memcpy(dst_addr, src_addr, reg->size);
++		void *buf;
++
++		buf = kzalloc(reg->size, GFP_KERNEL);
++		if (!buf) {
++			ret = -ENOMEM;
++			goto err_map_addr;
++		}
++
++		memcpy_fromio(buf, src_addr, reg->size);
++		memcpy_toio(dst_addr, buf, reg->size);
++		kfree(buf);
+ 	}
+ 	ktime_get_ts64(&end);
+ 	pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
+@@ -441,7 +451,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
+ 		if (!epf_test->dma_supported) {
+ 			dev_err(dev, "Cannot transfer data using DMA\n");
+ 			ret = -EINVAL;
+-			goto err_map_addr;
++			goto err_dma_map;
+ 		}
+ 
+ 		src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 0e765e0644c22..60098a701e83a 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -1086,6 +1086,8 @@ static void quirk_cmd_compl(struct pci_dev *pdev)
+ }
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
++DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0110,
++			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
+ 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401,
+diff --git a/drivers/perf/qcom_l2_pmu.c b/drivers/perf/qcom_l2_pmu.c
+index 7640491aab123..30234c261b05c 100644
+--- a/drivers/perf/qcom_l2_pmu.c
++++ b/drivers/perf/qcom_l2_pmu.c
+@@ -736,7 +736,7 @@ static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
+ {
+ 	u64 mpidr;
+ 	int cpu_cluster_id;
+-	struct cluster_pmu *cluster = NULL;
++	struct cluster_pmu *cluster;
+ 
+ 	/*
+ 	 * This assumes that the cluster_id is in MPIDR[aff1] for
+@@ -758,10 +758,10 @@ static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
+ 			 cluster->cluster_id);
+ 		cpumask_set_cpu(cpu, &cluster->cluster_cpus);
+ 		*per_cpu_ptr(l2cache_pmu->pmu_cluster, cpu) = cluster;
+-		break;
++		return cluster;
+ 	}
+ 
+-	return cluster;
++	return NULL;
+ }
+ 
+ static int l2cache_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
+diff --git a/drivers/phy/amlogic/phy-meson-gxl-usb2.c b/drivers/phy/amlogic/phy-meson-gxl-usb2.c
+index 2b3c0d730f20f..db17c3448bfed 100644
+--- a/drivers/phy/amlogic/phy-meson-gxl-usb2.c
++++ b/drivers/phy/amlogic/phy-meson-gxl-usb2.c
+@@ -114,8 +114,10 @@ static int phy_meson_gxl_usb2_init(struct phy *phy)
+ 		return ret;
+ 
+ 	ret = clk_prepare_enable(priv->clk);
+-	if (ret)
++	if (ret) {
++		reset_control_rearm(priv->reset);
+ 		return ret;
++	}
+ 
+ 	return 0;
+ }
+@@ -125,6 +127,7 @@ static int phy_meson_gxl_usb2_exit(struct phy *phy)
+ 	struct phy_meson_gxl_usb2_priv *priv = phy_get_drvdata(phy);
+ 
+ 	clk_disable_unprepare(priv->clk);
++	reset_control_rearm(priv->reset);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/phy/amlogic/phy-meson8b-usb2.c b/drivers/phy/amlogic/phy-meson8b-usb2.c
+index cf10bed40528a..dd96763911b8b 100644
+--- a/drivers/phy/amlogic/phy-meson8b-usb2.c
++++ b/drivers/phy/amlogic/phy-meson8b-usb2.c
+@@ -154,6 +154,7 @@ static int phy_meson8b_usb2_power_on(struct phy *phy)
+ 	ret = clk_prepare_enable(priv->clk_usb_general);
+ 	if (ret) {
+ 		dev_err(&phy->dev, "Failed to enable USB general clock\n");
++		reset_control_rearm(priv->reset);
+ 		return ret;
+ 	}
+ 
+@@ -161,6 +162,7 @@ static int phy_meson8b_usb2_power_on(struct phy *phy)
+ 	if (ret) {
+ 		dev_err(&phy->dev, "Failed to enable USB DDR clock\n");
+ 		clk_disable_unprepare(priv->clk_usb_general);
++		reset_control_rearm(priv->reset);
+ 		return ret;
+ 	}
+ 
+@@ -199,6 +201,7 @@ static int phy_meson8b_usb2_power_on(struct phy *phy)
+ 				dev_warn(&phy->dev, "USB ID detect failed!\n");
+ 				clk_disable_unprepare(priv->clk_usb);
+ 				clk_disable_unprepare(priv->clk_usb_general);
++				reset_control_rearm(priv->reset);
+ 				return -EINVAL;
+ 			}
+ 		}
+@@ -218,6 +221,7 @@ static int phy_meson8b_usb2_power_off(struct phy *phy)
+ 
+ 	clk_disable_unprepare(priv->clk_usb);
+ 	clk_disable_unprepare(priv->clk_usb_general);
++	reset_control_rearm(priv->reset);
+ 
+ 	/* power off the PHY by putting it into reset mode */
+ 	regmap_update_bits(priv->regmap, REG_CTRL, REG_CTRL_POWER_ON_RESET,
+@@ -265,8 +269,9 @@ static int phy_meson8b_usb2_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->clk_usb);
+ 
+ 	priv->reset = devm_reset_control_get_optional_shared(&pdev->dev, NULL);
+-	if (PTR_ERR(priv->reset) == -EPROBE_DEFER)
+-		return PTR_ERR(priv->reset);
++	if (IS_ERR(priv->reset))
++		return dev_err_probe(&pdev->dev, PTR_ERR(priv->reset),
++				     "Failed to get the reset line");
+ 
+ 	priv->dr_mode = of_usb_get_dr_mode_by_phy(pdev->dev.of_node, -1);
+ 	if (priv->dr_mode == USB_DR_MODE_UNKNOWN) {
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index 48a46466f0862..88f0bfd6ecf1a 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -35,10 +35,6 @@ MODULE_LICENSE("GPL");
+ MODULE_ALIAS("wmi:95F24279-4D7B-4334-9387-ACCDC67EF61C");
+ MODULE_ALIAS("wmi:5FB7F034-2C63-45e9-BE91-3D44E2C707E4");
+ 
+-static int enable_tablet_mode_sw = -1;
+-module_param(enable_tablet_mode_sw, int, 0444);
+-MODULE_PARM_DESC(enable_tablet_mode_sw, "Enable SW_TABLET_MODE reporting (-1=auto, 0=no, 1=yes)");
+-
+ #define HPWMI_EVENT_GUID "95F24279-4D7B-4334-9387-ACCDC67EF61C"
+ #define HPWMI_BIOS_GUID "5FB7F034-2C63-45e9-BE91-3D44E2C707E4"
+ #define HP_OMEN_EC_THERMAL_PROFILE_OFFSET 0x95
+@@ -107,6 +103,7 @@ enum hp_wmi_commandtype {
+ 	HPWMI_FEATURE2_QUERY		= 0x0d,
+ 	HPWMI_WIRELESS2_QUERY		= 0x1b,
+ 	HPWMI_POSTCODEERROR_QUERY	= 0x2a,
++	HPWMI_SYSTEM_DEVICE_MODE	= 0x40,
+ 	HPWMI_THERMAL_PROFILE_QUERY	= 0x4c,
+ };
+ 
+@@ -217,6 +214,19 @@ struct rfkill2_device {
+ static int rfkill2_count;
+ static struct rfkill2_device rfkill2[HPWMI_MAX_RFKILL2_DEVICES];
+ 
++/*
++ * Chassis Types values were obtained from SMBIOS reference
++ * specification version 3.00. A complete list of system enclosures
++ * and chassis types is available in Table 17.
++ */
++static const char * const tablet_chassis_types[] = {
++	"30", /* Tablet */
++	"31", /* Convertible */
++	"32"  /* Detachable */
++};
++
++#define DEVICE_MODE_TABLET	0x06
++
+ /* map output size to the corresponding WMI method id */
+ static inline int encode_outsize_for_pvsz(int outsize)
+ {
+@@ -320,7 +330,7 @@ static int hp_wmi_get_fan_speed(int fan)
+ 	char fan_data[4] = { fan, 0, 0, 0 };
+ 
+ 	int ret = hp_wmi_perform_query(HPWMI_FAN_SPEED_GET_QUERY, HPWMI_GM,
+-				       &fan_data, sizeof(fan_data),
++				       &fan_data, sizeof(char),
+ 				       sizeof(fan_data));
+ 
+ 	if (ret != 0)
+@@ -345,14 +355,39 @@ static int hp_wmi_read_int(int query)
+ 	return val;
+ }
+ 
+-static int hp_wmi_hw_state(int mask)
++static int hp_wmi_get_dock_state(void)
+ {
+ 	int state = hp_wmi_read_int(HPWMI_HARDWARE_QUERY);
+ 
+ 	if (state < 0)
+ 		return state;
+ 
+-	return !!(state & mask);
++	return !!(state & HPWMI_DOCK_MASK);
++}
++
++static int hp_wmi_get_tablet_mode(void)
++{
++	char system_device_mode[4] = { 0 };
++	const char *chassis_type;
++	bool tablet_found;
++	int ret;
++
++	chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
++	if (!chassis_type)
++		return -ENODEV;
++
++	tablet_found = match_string(tablet_chassis_types,
++				    ARRAY_SIZE(tablet_chassis_types),
++				    chassis_type) >= 0;
++	if (!tablet_found)
++		return -ENODEV;
++
++	ret = hp_wmi_perform_query(HPWMI_SYSTEM_DEVICE_MODE, HPWMI_READ,
++				   system_device_mode, 0, sizeof(system_device_mode));
++	if (ret < 0)
++		return ret;
++
++	return system_device_mode[0] == DEVICE_MODE_TABLET;
+ }
+ 
+ static int omen_thermal_profile_set(int mode)
+@@ -364,7 +399,7 @@ static int omen_thermal_profile_set(int mode)
+ 		return -EINVAL;
+ 
+ 	ret = hp_wmi_perform_query(HPWMI_SET_PERFORMANCE_MODE, HPWMI_GM,
+-				   &buffer, sizeof(buffer), sizeof(buffer));
++				   &buffer, sizeof(buffer), 0);
+ 
+ 	if (ret)
+ 		return ret < 0 ? ret : -EINVAL;
+@@ -401,7 +436,7 @@ static int hp_wmi_fan_speed_max_set(int enabled)
+ 	int ret;
+ 
+ 	ret = hp_wmi_perform_query(HPWMI_FAN_SPEED_MAX_SET_QUERY, HPWMI_GM,
+-				   &enabled, sizeof(enabled), sizeof(enabled));
++				   &enabled, sizeof(enabled), 0);
+ 
+ 	if (ret)
+ 		return ret < 0 ? ret : -EINVAL;
+@@ -414,7 +449,7 @@ static int hp_wmi_fan_speed_max_get(void)
+ 	int val = 0, ret;
+ 
+ 	ret = hp_wmi_perform_query(HPWMI_FAN_SPEED_MAX_GET_QUERY, HPWMI_GM,
+-				   &val, sizeof(val), sizeof(val));
++				   &val, 0, sizeof(val));
+ 
+ 	if (ret)
+ 		return ret < 0 ? ret : -EINVAL;
+@@ -426,7 +461,7 @@ static int __init hp_wmi_bios_2008_later(void)
+ {
+ 	int state = 0;
+ 	int ret = hp_wmi_perform_query(HPWMI_FEATURE_QUERY, HPWMI_READ, &state,
+-				       sizeof(state), sizeof(state));
++				       0, sizeof(state));
+ 	if (!ret)
+ 		return 1;
+ 
+@@ -437,7 +472,7 @@ static int __init hp_wmi_bios_2009_later(void)
+ {
+ 	u8 state[128];
+ 	int ret = hp_wmi_perform_query(HPWMI_FEATURE2_QUERY, HPWMI_READ, &state,
+-				       sizeof(state), sizeof(state));
++				       0, sizeof(state));
+ 	if (!ret)
+ 		return 1;
+ 
+@@ -515,7 +550,7 @@ static int hp_wmi_rfkill2_refresh(void)
+ 	int err, i;
+ 
+ 	err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+-				   sizeof(state), sizeof(state));
++				   0, sizeof(state));
+ 	if (err)
+ 		return err;
+ 
+@@ -568,7 +603,7 @@ static ssize_t als_show(struct device *dev, struct device_attribute *attr,
+ static ssize_t dock_show(struct device *dev, struct device_attribute *attr,
+ 			 char *buf)
+ {
+-	int value = hp_wmi_hw_state(HPWMI_DOCK_MASK);
++	int value = hp_wmi_get_dock_state();
+ 	if (value < 0)
+ 		return value;
+ 	return sprintf(buf, "%d\n", value);
+@@ -577,7 +612,7 @@ static ssize_t dock_show(struct device *dev, struct device_attribute *attr,
+ static ssize_t tablet_show(struct device *dev, struct device_attribute *attr,
+ 			   char *buf)
+ {
+-	int value = hp_wmi_hw_state(HPWMI_TABLET_MASK);
++	int value = hp_wmi_get_tablet_mode();
+ 	if (value < 0)
+ 		return value;
+ 	return sprintf(buf, "%d\n", value);
+@@ -604,7 +639,7 @@ static ssize_t als_store(struct device *dev, struct device_attribute *attr,
+ 		return ret;
+ 
+ 	ret = hp_wmi_perform_query(HPWMI_ALS_QUERY, HPWMI_WRITE, &tmp,
+-				       sizeof(tmp), sizeof(tmp));
++				       sizeof(tmp), 0);
+ 	if (ret)
+ 		return ret < 0 ? ret : -EINVAL;
+ 
+@@ -625,9 +660,9 @@ static ssize_t postcode_store(struct device *dev, struct device_attribute *attr,
+ 	if (clear == false)
+ 		return -EINVAL;
+ 
+-	/* Clear the POST error code. It is kept until until cleared. */
++	/* Clear the POST error code. It is kept until cleared. */
+ 	ret = hp_wmi_perform_query(HPWMI_POSTCODEERROR_QUERY, HPWMI_WRITE, &tmp,
+-				       sizeof(tmp), sizeof(tmp));
++				       sizeof(tmp), 0);
+ 	if (ret)
+ 		return ret < 0 ? ret : -EINVAL;
+ 
+@@ -699,10 +734,10 @@ static void hp_wmi_notify(u32 value, void *context)
+ 	case HPWMI_DOCK_EVENT:
+ 		if (test_bit(SW_DOCK, hp_wmi_input_dev->swbit))
+ 			input_report_switch(hp_wmi_input_dev, SW_DOCK,
+-					    hp_wmi_hw_state(HPWMI_DOCK_MASK));
++					    hp_wmi_get_dock_state());
+ 		if (test_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit))
+ 			input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
+-					    hp_wmi_hw_state(HPWMI_TABLET_MASK));
++					    hp_wmi_get_tablet_mode());
+ 		input_sync(hp_wmi_input_dev);
+ 		break;
+ 	case HPWMI_PARK_HDD:
+@@ -780,19 +815,17 @@ static int __init hp_wmi_input_setup(void)
+ 	__set_bit(EV_SW, hp_wmi_input_dev->evbit);
+ 
+ 	/* Dock */
+-	val = hp_wmi_hw_state(HPWMI_DOCK_MASK);
++	val = hp_wmi_get_dock_state();
+ 	if (!(val < 0)) {
+ 		__set_bit(SW_DOCK, hp_wmi_input_dev->swbit);
+ 		input_report_switch(hp_wmi_input_dev, SW_DOCK, val);
+ 	}
+ 
+ 	/* Tablet mode */
+-	if (enable_tablet_mode_sw > 0) {
+-		val = hp_wmi_hw_state(HPWMI_TABLET_MASK);
+-		if (val >= 0) {
+-			__set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
+-			input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE, val);
+-		}
++	val = hp_wmi_get_tablet_mode();
++	if (!(val < 0)) {
++		__set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
++		input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE, val);
+ 	}
+ 
+ 	err = sparse_keymap_setup(hp_wmi_input_dev, hp_wmi_keymap, NULL);
+@@ -919,7 +952,7 @@ static int __init hp_wmi_rfkill2_setup(struct platform_device *device)
+ 	int err, i;
+ 
+ 	err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+-				   sizeof(state), sizeof(state));
++				   0, sizeof(state));
+ 	if (err)
+ 		return err < 0 ? err : -EINVAL;
+ 
+@@ -1227,10 +1260,10 @@ static int hp_wmi_resume_handler(struct device *device)
+ 	if (hp_wmi_input_dev) {
+ 		if (test_bit(SW_DOCK, hp_wmi_input_dev->swbit))
+ 			input_report_switch(hp_wmi_input_dev, SW_DOCK,
+-					    hp_wmi_hw_state(HPWMI_DOCK_MASK));
++					    hp_wmi_get_dock_state());
+ 		if (test_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit))
+ 			input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
+-					    hp_wmi_hw_state(HPWMI_TABLET_MASK));
++					    hp_wmi_get_tablet_mode());
+ 		input_sync(hp_wmi_input_dev);
+ 	}
+ 
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index 18a9db0df4b1f..335e12cc5e2f9 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -186,7 +186,6 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ 				   union power_supply_propval *val)
+ {
+ 	struct axp20x_batt_ps *axp20x_batt = power_supply_get_drvdata(psy);
+-	struct iio_channel *chan;
+ 	int ret = 0, reg, val1;
+ 
+ 	switch (psp) {
+@@ -266,12 +265,12 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ 		if (ret)
+ 			return ret;
+ 
+-		if (reg & AXP20X_PWR_STATUS_BAT_CHARGING)
+-			chan = axp20x_batt->batt_chrg_i;
+-		else
+-			chan = axp20x_batt->batt_dischrg_i;
+-
+-		ret = iio_read_channel_processed(chan, &val->intval);
++		if (reg & AXP20X_PWR_STATUS_BAT_CHARGING) {
++			ret = iio_read_channel_processed(axp20x_batt->batt_chrg_i, &val->intval);
++		} else {
++			ret = iio_read_channel_processed(axp20x_batt->batt_dischrg_i, &val1);
++			val->intval = -val1;
++		}
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index ec41f6cd3f93f..c498e62ab4e20 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -42,11 +42,11 @@
+ #define VBUS_ISPOUT_CUR_LIM_1500MA	0x1	/* 1500mA */
+ #define VBUS_ISPOUT_CUR_LIM_2000MA	0x2	/* 2000mA */
+ #define VBUS_ISPOUT_CUR_NO_LIM		0x3	/* 2500mA */
+-#define VBUS_ISPOUT_VHOLD_SET_MASK	0x31
++#define VBUS_ISPOUT_VHOLD_SET_MASK	0x38
+ #define VBUS_ISPOUT_VHOLD_SET_BIT_POS	0x3
+ #define VBUS_ISPOUT_VHOLD_SET_OFFSET	4000	/* 4000mV */
+ #define VBUS_ISPOUT_VHOLD_SET_LSB_RES	100	/* 100mV */
+-#define VBUS_ISPOUT_VHOLD_SET_4300MV	0x3	/* 4300mV */
++#define VBUS_ISPOUT_VHOLD_SET_4400MV	0x4	/* 4400mV */
+ #define VBUS_ISPOUT_VBUS_PATH_DIS	BIT(7)
+ 
+ #define CHRG_CCCV_CC_MASK		0xf		/* 4 bits */
+@@ -769,6 +769,16 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ 		ret = axp288_charger_vbus_path_select(info, true);
+ 		if (ret < 0)
+ 			return ret;
++	} else {
++		/* Set Vhold to the factory default / recommended 4.4V */
++		val = VBUS_ISPOUT_VHOLD_SET_4400MV << VBUS_ISPOUT_VHOLD_SET_BIT_POS;
++		ret = regmap_update_bits(info->regmap, AXP20X_VBUS_IPSOUT_MGMT,
++					 VBUS_ISPOUT_VHOLD_SET_MASK, val);
++		if (ret < 0) {
++			dev_err(&info->pdev->dev, "register(%x) write error(%d)\n",
++				AXP20X_VBUS_IPSOUT_MGMT, ret);
++			return ret;
++		}
+ 	}
+ 
+ 	/* Read current charge voltage and current limit */
+diff --git a/drivers/ptp/ptp_sysfs.c b/drivers/ptp/ptp_sysfs.c
+index 41b92dc2f011a..9233bfedeb174 100644
+--- a/drivers/ptp/ptp_sysfs.c
++++ b/drivers/ptp/ptp_sysfs.c
+@@ -14,7 +14,7 @@ static ssize_t clock_name_show(struct device *dev,
+ 			       struct device_attribute *attr, char *page)
+ {
+ 	struct ptp_clock *ptp = dev_get_drvdata(dev);
+-	return snprintf(page, PAGE_SIZE-1, "%s\n", ptp->info->name);
++	return sysfs_emit(page, "%s\n", ptp->info->name);
+ }
+ static DEVICE_ATTR_RO(clock_name);
+ 
+@@ -387,7 +387,7 @@ static ssize_t ptp_pin_show(struct device *dev, struct device_attribute *attr,
+ 
+ 	mutex_unlock(&ptp->pincfg_mux);
+ 
+-	return snprintf(page, PAGE_SIZE, "%u %u\n", func, chan);
++	return sysfs_emit(page, "%u %u\n", func, chan);
+ }
+ 
+ static ssize_t ptp_pin_store(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/regulator/atc260x-regulator.c b/drivers/regulator/atc260x-regulator.c
+index 05147d2c38428..485e58b264c04 100644
+--- a/drivers/regulator/atc260x-regulator.c
++++ b/drivers/regulator/atc260x-regulator.c
+@@ -292,6 +292,7 @@ enum atc2603c_reg_ids {
+ 	.bypass_mask = BIT(5), \
+ 	.active_discharge_reg = ATC2603C_PMU_SWITCH_CTL, \
+ 	.active_discharge_mask = BIT(1), \
++	.active_discharge_on = BIT(1), \
+ 	.owner = THIS_MODULE, \
+ }
+ 
+diff --git a/drivers/regulator/rtq2134-regulator.c b/drivers/regulator/rtq2134-regulator.c
+index f21e3f8b21f23..8e13dea354a21 100644
+--- a/drivers/regulator/rtq2134-regulator.c
++++ b/drivers/regulator/rtq2134-regulator.c
+@@ -285,6 +285,7 @@ static const unsigned int rtq2134_buck_ramp_delay_table[] = {
+ 		.enable_mask = RTQ2134_VOUTEN_MASK, \
+ 		.active_discharge_reg = RTQ2134_REG_BUCK##_id##_CFG0, \
+ 		.active_discharge_mask = RTQ2134_ACTDISCHG_MASK, \
++		.active_discharge_on = RTQ2134_ACTDISCHG_MASK, \
+ 		.ramp_reg = RTQ2134_REG_BUCK##_id##_RSPCFG, \
+ 		.ramp_mask = RTQ2134_RSPUP_MASK, \
+ 		.ramp_delay_table = rtq2134_buck_ramp_delay_table, \
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index dc3f8b0dde989..b90a603d6b12f 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -222,6 +222,8 @@ static inline void cmos_write_bank2(unsigned char val, unsigned char addr)
+ 
+ static int cmos_read_time(struct device *dev, struct rtc_time *t)
+ {
++	int ret;
++
+ 	/*
+ 	 * If pm_trace abused the RTC for storage, set the timespec to 0,
+ 	 * which tells the caller that this RTC value is unusable.
+@@ -229,7 +231,12 @@ static int cmos_read_time(struct device *dev, struct rtc_time *t)
+ 	if (!pm_trace_rtc_valid())
+ 		return -EIO;
+ 
+-	mc146818_get_time(t);
++	ret = mc146818_get_time(t);
++	if (ret < 0) {
++		dev_err_ratelimited(dev, "unable to read current time\n");
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -793,16 +800,14 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
+ 
+ 	rename_region(ports, dev_name(&cmos_rtc.rtc->dev));
+ 
+-	spin_lock_irq(&rtc_lock);
+-
+-	/* Ensure that the RTC is accessible. Bit 6 must be 0! */
+-	if ((CMOS_READ(RTC_VALID) & 0x40) != 0) {
+-		spin_unlock_irq(&rtc_lock);
+-		dev_warn(dev, "not accessible\n");
++	if (!mc146818_does_rtc_work()) {
++		dev_warn(dev, "broken or not accessible\n");
+ 		retval = -ENXIO;
+ 		goto cleanup1;
+ 	}
+ 
++	spin_lock_irq(&rtc_lock);
++
+ 	if (!(flags & CMOS_RTC_FLAGS_NOFREQ)) {
+ 		/* force periodic irq to CMOS reset default of 1024Hz;
+ 		 *
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index 04b05e3b68cb8..8abac672f3cbd 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -8,10 +8,36 @@
+ #include <linux/acpi.h>
+ #endif
+ 
+-unsigned int mc146818_get_time(struct rtc_time *time)
++/*
++ * If the UIP (Update-in-progress) bit of the RTC is set for more than
++ * 10ms, the RTC is apparently broken or not present.
++ */
++bool mc146818_does_rtc_work(void)
++{
++	int i;
++	unsigned char val;
++	unsigned long flags;
++
++	for (i = 0; i < 10; i++) {
++		spin_lock_irqsave(&rtc_lock, flags);
++		val = CMOS_READ(RTC_FREQ_SELECT);
++		spin_unlock_irqrestore(&rtc_lock, flags);
++
++		if ((val & RTC_UIP) == 0)
++			return true;
++
++		mdelay(1);
++	}
++
++	return false;
++}
++EXPORT_SYMBOL_GPL(mc146818_does_rtc_work);
++
++int mc146818_get_time(struct rtc_time *time)
+ {
+ 	unsigned char ctrl;
+ 	unsigned long flags;
++	unsigned int iter_count = 0;
+ 	unsigned char century = 0;
+ 	bool retry;
+ 
+@@ -20,13 +46,13 @@ unsigned int mc146818_get_time(struct rtc_time *time)
+ #endif
+ 
+ again:
+-	spin_lock_irqsave(&rtc_lock, flags);
+-	/* Ensure that the RTC is accessible. Bit 6 must be 0! */
+-	if (WARN_ON_ONCE((CMOS_READ(RTC_VALID) & 0x40) != 0)) {
+-		spin_unlock_irqrestore(&rtc_lock, flags);
+-		memset(time, 0xff, sizeof(*time));
+-		return 0;
++	if (iter_count > 10) {
++		memset(time, 0, sizeof(*time));
++		return -EIO;
+ 	}
++	iter_count++;
++
++	spin_lock_irqsave(&rtc_lock, flags);
+ 
+ 	/*
+ 	 * Check whether there is an update in progress during which the
+@@ -116,7 +142,7 @@ again:
+ 
+ 	time->tm_mon--;
+ 
+-	return RTC_24H;
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(mc146818_get_time);
+ 
+diff --git a/drivers/rtc/rtc-wm8350.c b/drivers/rtc/rtc-wm8350.c
+index 2018614f258f6..6eaa9321c0741 100644
+--- a/drivers/rtc/rtc-wm8350.c
++++ b/drivers/rtc/rtc-wm8350.c
+@@ -432,14 +432,21 @@ static int wm8350_rtc_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	wm8350_register_irq(wm8350, WM8350_IRQ_RTC_SEC,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_RTC_SEC,
+ 			    wm8350_rtc_update_handler, 0,
+ 			    "RTC Seconds", wm8350);
++	if (ret)
++		return ret;
++
+ 	wm8350_mask_irq(wm8350, WM8350_IRQ_RTC_SEC);
+ 
+-	wm8350_register_irq(wm8350, WM8350_IRQ_RTC_ALM,
++	ret = wm8350_register_irq(wm8350, WM8350_IRQ_RTC_ALM,
+ 			    wm8350_rtc_alarm_handler, 0,
+ 			    "RTC Alarm", wm8350);
++	if (ret) {
++		wm8350_free_irq(wm8350, WM8350_IRQ_RTC_SEC, wm8350);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/scsi/aha152x.c b/drivers/scsi/aha152x.c
+index d17880b57d17b..2449b4215b32d 100644
+--- a/drivers/scsi/aha152x.c
++++ b/drivers/scsi/aha152x.c
+@@ -3375,13 +3375,11 @@ static int __init aha152x_setup(char *str)
+ 	setup[setup_count].synchronous = ints[0] >= 6 ? ints[6] : 1;
+ 	setup[setup_count].delay       = ints[0] >= 7 ? ints[7] : DELAY_DEFAULT;
+ 	setup[setup_count].ext_trans   = ints[0] >= 8 ? ints[8] : 0;
+-	if (ints[0] > 8) {                                                /*}*/
++	if (ints[0] > 8)
+ 		printk(KERN_NOTICE "aha152x: usage: aha152x=<IOBASE>[,<IRQ>[,<SCSI ID>"
+ 		       "[,<RECONNECT>[,<PARITY>[,<SYNCHRONOUS>[,<DELAY>[,<EXT_TRANS>]]]]]]]\n");
+-	} else {
++	else
+ 		setup_count++;
+-		return 0;
+-	}
+ 
+ 	return 1;
+ }
+diff --git a/drivers/scsi/bfa/bfad_attr.c b/drivers/scsi/bfa/bfad_attr.c
+index c8b947c160698..0254e610ea8fd 100644
+--- a/drivers/scsi/bfa/bfad_attr.c
++++ b/drivers/scsi/bfa/bfad_attr.c
+@@ -711,7 +711,7 @@ bfad_im_serial_num_show(struct device *dev, struct device_attribute *attr,
+ 	char serial_num[BFA_ADAPTER_SERIAL_NUM_LEN];
+ 
+ 	bfa_get_adapter_serial_num(&bfad->bfa, serial_num);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", serial_num);
++	return sysfs_emit(buf, "%s\n", serial_num);
+ }
+ 
+ static ssize_t
+@@ -725,7 +725,7 @@ bfad_im_model_show(struct device *dev, struct device_attribute *attr,
+ 	char model[BFA_ADAPTER_MODEL_NAME_LEN];
+ 
+ 	bfa_get_adapter_model(&bfad->bfa, model);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", model);
++	return sysfs_emit(buf, "%s\n", model);
+ }
+ 
+ static ssize_t
+@@ -805,7 +805,7 @@ bfad_im_model_desc_show(struct device *dev, struct device_attribute *attr,
+ 		snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
+ 			"Invalid Model");
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n", model_descr);
++	return sysfs_emit(buf, "%s\n", model_descr);
+ }
+ 
+ static ssize_t
+@@ -819,7 +819,7 @@ bfad_im_node_name_show(struct device *dev, struct device_attribute *attr,
+ 	u64        nwwn;
+ 
+ 	nwwn = bfa_fcs_lport_get_nwwn(port->fcs_port);
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n", cpu_to_be64(nwwn));
++	return sysfs_emit(buf, "0x%llx\n", cpu_to_be64(nwwn));
+ }
+ 
+ static ssize_t
+@@ -836,7 +836,7 @@ bfad_im_symbolic_name_show(struct device *dev, struct device_attribute *attr,
+ 	bfa_fcs_lport_get_attr(&bfad->bfa_fcs.fabric.bport, &port_attr);
+ 	strlcpy(symname, port_attr.port_cfg.sym_name.symname,
+ 			BFA_SYMNAME_MAXLEN);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", symname);
++	return sysfs_emit(buf, "%s\n", symname);
+ }
+ 
+ static ssize_t
+@@ -850,14 +850,14 @@ bfad_im_hw_version_show(struct device *dev, struct device_attribute *attr,
+ 	char hw_ver[BFA_VERSION_LEN];
+ 
+ 	bfa_get_pci_chip_rev(&bfad->bfa, hw_ver);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", hw_ver);
++	return sysfs_emit(buf, "%s\n", hw_ver);
+ }
+ 
+ static ssize_t
+ bfad_im_drv_version_show(struct device *dev, struct device_attribute *attr,
+ 				char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_VERSION);
++	return sysfs_emit(buf, "%s\n", BFAD_DRIVER_VERSION);
+ }
+ 
+ static ssize_t
+@@ -871,7 +871,7 @@ bfad_im_optionrom_version_show(struct device *dev,
+ 	char optrom_ver[BFA_VERSION_LEN];
+ 
+ 	bfa_get_adapter_optrom_ver(&bfad->bfa, optrom_ver);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", optrom_ver);
++	return sysfs_emit(buf, "%s\n", optrom_ver);
+ }
+ 
+ static ssize_t
+@@ -885,7 +885,7 @@ bfad_im_fw_version_show(struct device *dev, struct device_attribute *attr,
+ 	char fw_ver[BFA_VERSION_LEN];
+ 
+ 	bfa_get_adapter_fw_ver(&bfad->bfa, fw_ver);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", fw_ver);
++	return sysfs_emit(buf, "%s\n", fw_ver);
+ }
+ 
+ static ssize_t
+@@ -897,7 +897,7 @@ bfad_im_num_of_ports_show(struct device *dev, struct device_attribute *attr,
+ 			(struct bfad_im_port_s *) shost->hostdata[0];
+ 	struct bfad_s *bfad = im_port->bfad;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return sysfs_emit(buf, "%d\n",
+ 			bfa_get_nports(&bfad->bfa));
+ }
+ 
+@@ -905,7 +905,7 @@ static ssize_t
+ bfad_im_drv_name_show(struct device *dev, struct device_attribute *attr,
+ 				char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_NAME);
++	return sysfs_emit(buf, "%s\n", BFAD_DRIVER_NAME);
+ }
+ 
+ static ssize_t
+@@ -924,14 +924,14 @@ bfad_im_num_of_discovered_ports_show(struct device *dev,
+ 	rports = kcalloc(nrports, sizeof(struct bfa_rport_qualifier_s),
+ 			 GFP_ATOMIC);
+ 	if (rports == NULL)
+-		return snprintf(buf, PAGE_SIZE, "Failed\n");
++		return sysfs_emit(buf, "Failed\n");
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	bfa_fcs_lport_get_rport_quals(port->fcs_port, rports, &nrports);
+ 	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+ 	kfree(rports);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", nrports);
++	return sysfs_emit(buf, "%d\n", nrports);
+ }
+ 
+ static          DEVICE_ATTR(serial_number, S_IRUGO,
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index b3bdbea3c9556..20763f1878860 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2390,17 +2390,25 @@ static irqreturn_t cq_interrupt_v3_hw(int irq_no, void *p)
+ 	return IRQ_WAKE_THREAD;
+ }
+ 
++static void hisi_sas_v3_free_vectors(void *data)
++{
++	struct pci_dev *pdev = data;
++
++	pci_free_irq_vectors(pdev);
++}
++
+ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
+ {
+ 	int vectors;
+ 	int max_msi = HISI_SAS_MSI_COUNT_V3_HW, min_msi;
+ 	struct Scsi_Host *shost = hisi_hba->shost;
++	struct pci_dev *pdev = hisi_hba->pci_dev;
+ 	struct irq_affinity desc = {
+ 		.pre_vectors = BASE_VECTORS_V3_HW,
+ 	};
+ 
+ 	min_msi = MIN_AFFINE_VECTORS_V3_HW;
+-	vectors = pci_alloc_irq_vectors_affinity(hisi_hba->pci_dev,
++	vectors = pci_alloc_irq_vectors_affinity(pdev,
+ 						 min_msi, max_msi,
+ 						 PCI_IRQ_MSI |
+ 						 PCI_IRQ_AFFINITY,
+@@ -2412,6 +2420,7 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
+ 	hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW;
+ 	shost->nr_hw_queues = hisi_hba->cq_nvecs;
+ 
++	devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
+ 	return 0;
+ }
+ 
+@@ -3959,6 +3968,54 @@ static const struct file_operations debugfs_bist_phy_v3_hw_fops = {
+ 	.owner = THIS_MODULE,
+ };
+ 
++static ssize_t debugfs_bist_cnt_v3_hw_write(struct file *filp,
++					const char __user *buf,
++					size_t count, loff_t *ppos)
++{
++	struct seq_file *m = filp->private_data;
++	struct hisi_hba *hisi_hba = m->private;
++	unsigned int cnt;
++	int val;
++
++	if (hisi_hba->debugfs_bist_enable)
++		return -EPERM;
++
++	val = kstrtouint_from_user(buf, count, 0, &cnt);
++	if (val)
++		return val;
++
++	if (cnt)
++		return -EINVAL;
++
++	hisi_hba->debugfs_bist_cnt = 0;
++	return count;
++}
++
++static int debugfs_bist_cnt_v3_hw_show(struct seq_file *s, void *p)
++{
++	struct hisi_hba *hisi_hba = s->private;
++
++	seq_printf(s, "%u\n", hisi_hba->debugfs_bist_cnt);
++
++	return 0;
++}
++
++static int debugfs_bist_cnt_v3_hw_open(struct inode *inode,
++					  struct file *filp)
++{
++	return single_open(filp, debugfs_bist_cnt_v3_hw_show,
++			   inode->i_private);
++}
++
++static const struct file_operations debugfs_bist_cnt_v3_hw_ops = {
++	.open = debugfs_bist_cnt_v3_hw_open,
++	.read = seq_read,
++	.write = debugfs_bist_cnt_v3_hw_write,
++	.llseek = seq_lseek,
++	.release = single_release,
++	.owner = THIS_MODULE,
++};
++
+ static const struct {
+ 	int		value;
+ 	char		*name;
+@@ -4596,8 +4653,8 @@ static void debugfs_bist_init_v3_hw(struct hisi_hba *hisi_hba)
+ 	debugfs_create_file("phy_id", 0600, hisi_hba->debugfs_bist_dentry,
+ 			    hisi_hba, &debugfs_bist_phy_v3_hw_fops);
+ 
+-	debugfs_create_u32("cnt", 0600, hisi_hba->debugfs_bist_dentry,
+-			   &hisi_hba->debugfs_bist_cnt);
++	debugfs_create_file("cnt", 0600, hisi_hba->debugfs_bist_dentry,
++			    hisi_hba, &debugfs_bist_cnt_v3_hw_ops);
+ 
+ 	debugfs_create_file("loopback_mode", 0600,
+ 			    hisi_hba->debugfs_bist_dentry,
+@@ -4763,7 +4820,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	rc = scsi_add_host(shost, dev);
+ 	if (rc)
+-		goto err_out_free_irq_vectors;
++		goto err_out_debugfs;
+ 
+ 	rc = sas_register_ha(sha);
+ 	if (rc)
+@@ -4792,8 +4849,6 @@ err_out_hw_init:
+ 	sas_unregister_ha(sha);
+ err_out_register_ha:
+ 	scsi_remove_host(shost);
+-err_out_free_irq_vectors:
+-	pci_free_irq_vectors(pdev);
+ err_out_debugfs:
+ 	debugfs_exit_v3_hw(hisi_hba);
+ err_out_ha:
+@@ -4817,7 +4872,6 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
+ 
+ 		devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq);
+ 	}
+-	pci_free_irq_vectors(pdev);
+ }
+ 
+ static void hisi_sas_v3_remove(struct pci_dev *pdev)
+diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
+index 841000445b9a1..aa223db4cf53c 100644
+--- a/drivers/scsi/libfc/fc_exch.c
++++ b/drivers/scsi/libfc/fc_exch.c
+@@ -1701,6 +1701,7 @@ static void fc_exch_abts_resp(struct fc_exch *ep, struct fc_frame *fp)
+ 	if (cancel_delayed_work_sync(&ep->timeout_work)) {
+ 		FC_EXCH_DBG(ep, "Exchange timer canceled due to ABTS response\n");
+ 		fc_exch_release(ep);	/* release from pending timer hold */
++		return;
+ 	}
+ 
+ 	spin_lock_bh(&ep->ex_lock);
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index 2daf633ea2955..9bb64a7b2af72 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -1275,7 +1275,7 @@ static void mpi3mr_free_op_req_q_segments(struct mpi3mr_ioc *mrioc, u16 q_idx)
+ 			    MPI3MR_MAX_SEG_LIST_SIZE,
+ 			    mrioc->req_qinfo[q_idx].q_segment_list,
+ 			    mrioc->req_qinfo[q_idx].q_segment_list_dma);
+-			mrioc->op_reply_qinfo[q_idx].q_segment_list = NULL;
++			mrioc->req_qinfo[q_idx].q_segment_list = NULL;
+ 		}
+ 	} else
+ 		size = mrioc->req_qinfo[q_idx].segment_qd *
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index fe10f257b5a4d..224a4155d9b4a 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -2204,6 +2204,8 @@ void mpi3mr_process_op_reply_desc(struct mpi3mr_ioc *mrioc,
+ 		scmd->result = DID_OK << 16;
+ 		goto out_success;
+ 	}
++
++	scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_count);
+ 	if (ioc_status == MPI3_IOCSTATUS_SCSI_DATA_UNDERRUN &&
+ 	    xfer_count == 0 && (scsi_status == MPI3_SCSI_STATUS_BUSY ||
+ 	    scsi_status == MPI3_SCSI_STATUS_RESERVATION_CONFLICT ||
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 00792767c620d..7e476f50935b8 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -11035,6 +11035,7 @@ _scsih_expander_node_remove(struct MPT3SAS_ADAPTER *ioc,
+ {
+ 	struct _sas_port *mpt3sas_port, *next;
+ 	unsigned long flags;
++	int port_id;
+ 
+ 	/* remove sibling ports attached to this expander */
+ 	list_for_each_entry_safe(mpt3sas_port, next,
+@@ -11055,6 +11056,8 @@ _scsih_expander_node_remove(struct MPT3SAS_ADAPTER *ioc,
+ 			    mpt3sas_port->hba_port);
+ 	}
+ 
++	port_id = sas_expander->port->port_id;
++
+ 	mpt3sas_transport_port_remove(ioc, sas_expander->sas_address,
+ 	    sas_expander->sas_address_parent, sas_expander->port);
+ 
+@@ -11062,7 +11065,7 @@ _scsih_expander_node_remove(struct MPT3SAS_ADAPTER *ioc,
+ 	    "expander_remove: handle(0x%04x), sas_addr(0x%016llx), port:%d\n",
+ 	    sas_expander->handle, (unsigned long long)
+ 	    sas_expander->sas_address,
+-	    sas_expander->port->port_id);
++	    port_id);
+ 
+ 	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+ 	list_del(&sas_expander->list);
+diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
+index dcae2d4464f90..44df7c03aab8d 100644
+--- a/drivers/scsi/mvsas/mv_init.c
++++ b/drivers/scsi/mvsas/mv_init.c
+@@ -696,7 +696,7 @@ static struct pci_driver mvs_pci_driver = {
+ static ssize_t driver_version_show(struct device *cdev,
+ 				   struct device_attribute *attr, char *buffer)
+ {
+-	return snprintf(buffer, PAGE_SIZE, "%s\n", DRV_VERSION);
++	return sysfs_emit(buffer, "%s\n", DRV_VERSION);
+ }
+ 
+ static DEVICE_ATTR_RO(driver_version);
+@@ -744,7 +744,7 @@ static ssize_t interrupt_coalescing_store(struct device *cdev,
+ static ssize_t interrupt_coalescing_show(struct device *cdev,
+ 					 struct device_attribute *attr, char *buffer)
+ {
+-	return snprintf(buffer, PAGE_SIZE, "%d\n", interrupt_coalescing);
++	return sysfs_emit(buffer, "%d\n", interrupt_coalescing);
+ }
+ 
+ static DEVICE_ATTR_RW(interrupt_coalescing);
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 6332cf23bf6b2..1a5338917a895 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1767,7 +1767,6 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	}
+ 
+ 	task = sas_alloc_slow_task(GFP_ATOMIC);
+-
+ 	if (!task) {
+ 		pm8001_dbg(pm8001_ha, FAIL, "cannot allocate task\n");
+ 		return;
+@@ -1776,8 +1775,10 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 	task->task_done = pm8001_task_done;
+ 
+ 	res = pm8001_tag_alloc(pm8001_ha, &ccb_tag);
+-	if (res)
++	if (res) {
++		sas_free_task(task);
+ 		return;
++	}
+ 
+ 	ccb = &pm8001_ha->ccb_info[ccb_tag];
+ 	ccb->device = pm8001_ha_dev;
+@@ -1794,8 +1795,10 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ 
+ 	ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort,
+ 			sizeof(task_abort), 0);
+-	if (ret)
++	if (ret) {
++		sas_free_task(task);
+ 		pm8001_tag_free(pm8001_ha, ccb_tag);
++	}
+ 
+ }
+ 
+@@ -3717,12 +3720,11 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	mb();
+ 
+ 	if (pm8001_dev->id & NCQ_ABORT_ALL_FLAG) {
+-		pm8001_tag_free(pm8001_ha, tag);
+ 		sas_free_task(t);
+-		/* clear the flag */
+-		pm8001_dev->id &= 0xBFFFFFFF;
+-	} else
++		pm8001_dev->id &= ~NCQ_ABORT_ALL_FLAG;
++	} else {
+ 		t->task_done(t);
++	}
+ 
+ 	return 0;
+ }
+@@ -4490,6 +4492,9 @@ static int pm8001_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
+ 		SAS_ADDR_SIZE);
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
++	if (rc)
++		pm8001_tag_free(pm8001_ha, tag);
++
+ 	return rc;
+ }
+ 
+@@ -4902,6 +4907,11 @@ pm8001_chip_fw_flash_update_req(struct pm8001_hba_info *pm8001_ha,
+ 	ccb->ccb_tag = tag;
+ 	rc = pm8001_chip_fw_flash_update_build(pm8001_ha, &flash_update_info,
+ 		tag);
++	if (rc) {
++		kfree(fw_control_context);
++		pm8001_tag_free(pm8001_ha, tag);
++	}
++
+ 	return rc;
+ }
+ 
+@@ -5006,6 +5016,9 @@ pm8001_chip_set_dev_state_req(struct pm8001_hba_info *pm8001_ha,
+ 	payload.nds = cpu_to_le32(state);
+ 	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ 			sizeof(payload), 0);
++	if (rc)
++		pm8001_tag_free(pm8001_ha, tag);
++
+ 	return rc;
+ 
+ }
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index c0b45b8a513d7..2a9ea08c7fe67 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -831,10 +831,10 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 
+ 		res = PM8001_CHIP_DISP->task_abort(pm8001_ha,
+ 			pm8001_dev, flag, task_tag, ccb_tag);
+-
+ 		if (res) {
+ 			del_timer(&task->slow_task->timer);
+ 			pm8001_dbg(pm8001_ha, FAIL, "Executing internal task failed\n");
++			pm8001_tag_free(pm8001_ha, ccb_tag);
+ 			goto ex_err;
+ 		}
+ 		wait_for_completion(&task->slow_task->completion);
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 48b0154211c75..d7c88d0f1899b 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -66,18 +66,16 @@ int pm80xx_bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shift_value)
+ }
+ 
+ static void pm80xx_pci_mem_copy(struct pm8001_hba_info  *pm8001_ha, u32 soffset,
+-				const void *destination,
++				__le32 *destination,
+ 				u32 dw_count, u32 bus_base_number)
+ {
+ 	u32 index, value, offset;
+-	u32 *destination1;
+-	destination1 = (u32 *)destination;
+ 
+-	for (index = 0; index < dw_count; index += 4, destination1++) {
++	for (index = 0; index < dw_count; index += 4, destination++) {
+ 		offset = (soffset + index);
+ 		if (offset < (64 * 1024)) {
+ 			value = pm8001_cr32(pm8001_ha, bus_base_number, offset);
+-			*destination1 =  cpu_to_le32(value);
++			*destination = cpu_to_le32(value);
+ 		}
+ 	}
+ 	return;
+@@ -4930,8 +4928,13 @@ static int pm80xx_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
+ 	payload.tag = cpu_to_le32(tag);
+ 	payload.phyop_phyid =
+ 		cpu_to_le32(((phy_op & 0xFF) << 8) | (phyId & 0xFF));
+-	return pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+-			sizeof(payload), 0);
++
++	rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
++				  sizeof(payload), 0);
++	if (rc)
++		pm8001_tag_free(pm8001_ha, tag);
++
++	return rc;
+ }
+ 
+ static u32 pm80xx_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha)
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index d0ce723299bf7..6bc0f5f511e12 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -223,6 +223,8 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
+ 	int ret;
+ 	struct sbitmap sb_backup;
+ 
++	depth = min_t(unsigned int, depth, scsi_device_max_queue_depth(sdev));
++
+ 	/*
+ 	 * realloc if new shift is calculated, which is caused by setting
+ 	 * up one new default queue depth after calling ->slave_configure
+@@ -245,6 +247,9 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
+ 				scsi_device_max_queue_depth(sdev),
+ 				new_shift, GFP_KERNEL,
+ 				sdev->request_queue->node, false, true);
++	if (!ret)
++		sbitmap_resize(&sdev->budget_map, depth);
++
+ 	if (need_free) {
+ 		if (ret)
+ 			sdev->budget_map = sb_backup;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index dfca484dd0c40..95f4788619ed7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3321,6 +3321,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ 			sd_read_block_limits(sdkp);
+ 			sd_read_block_characteristics(sdkp);
+ 			sd_zbc_read_zones(sdkp, buffer);
++			sd_read_cpr(sdkp);
+ 		}
+ 
+ 		sd_print_capacity(sdkp, old_capacity);
+@@ -3330,7 +3331,6 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ 		sd_read_app_tag_own(sdkp, buffer);
+ 		sd_read_write_same(sdkp, buffer);
+ 		sd_read_security(sdkp, buffer);
+-		sd_read_cpr(sdkp);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index f0897d587454a..f3749e5086737 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -2513,17 +2513,15 @@ static void pqi_remove_all_scsi_devices(struct pqi_ctrl_info *ctrl_info)
+ 	struct pqi_scsi_dev *device;
+ 	struct pqi_scsi_dev *next;
+ 
+-	spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
+-
+ 	list_for_each_entry_safe(device, next, &ctrl_info->scsi_device_list,
+ 		scsi_device_list_entry) {
+ 		if (pqi_is_device_added(device))
+ 			pqi_remove_device(ctrl_info, device);
++		spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
+ 		list_del(&device->scsi_device_list_entry);
+ 		pqi_free_device(device);
++		spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
+ 	}
+-
+-	spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
+ }
+ 
+ static int pqi_scan_scsi_devices(struct pqi_ctrl_info *ctrl_info)
+@@ -7857,6 +7855,21 @@ static int pqi_force_sis_mode(struct pqi_ctrl_info *ctrl_info)
+ 	return pqi_revert_to_sis_mode(ctrl_info);
+ }
+ 
++static void pqi_perform_lockup_action(void)
++{
++	switch (pqi_lockup_action) {
++	case PANIC:
++		panic("FATAL: Smart Family Controller lockup detected");
++		break;
++	case REBOOT:
++		emergency_restart();
++		break;
++	case NONE:
++	default:
++		break;
++	}
++}
++
+ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ {
+ 	int rc;
+@@ -7881,8 +7894,15 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ 	 * commands.
+ 	 */
+ 	rc = sis_wait_for_ctrl_ready(ctrl_info);
+-	if (rc)
++	if (rc) {
++		if (reset_devices) {
++			dev_err(&ctrl_info->pci_dev->dev,
++				"kdump init failed with error %d\n", rc);
++			pqi_lockup_action = REBOOT;
++			pqi_perform_lockup_action();
++		}
+ 		return rc;
++	}
+ 
+ 	/*
+ 	 * Get the controller properties.  This allows us to determine
+@@ -8607,21 +8627,6 @@ static int pqi_ofa_ctrl_restart(struct pqi_ctrl_info *ctrl_info, unsigned int de
+ 	return pqi_ctrl_init_resume(ctrl_info);
+ }
+ 
+-static void pqi_perform_lockup_action(void)
+-{
+-	switch (pqi_lockup_action) {
+-	case PANIC:
+-		panic("FATAL: Smart Family Controller lockup detected");
+-		break;
+-	case REBOOT:
+-		emergency_restart();
+-		break;
+-	case NONE:
+-	default:
+-		break;
+-	}
+-}
+-
+ static struct pqi_raid_error_info pqi_ctrl_offline_raid_error_info = {
+ 	.data_out_result = PQI_DATA_IN_OUT_HARDWARE_ERROR,
+ 	.status = SAM_STAT_CHECK_CONDITION,
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index f5a2eed543452..9f683fed8f2c9 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -579,7 +579,7 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
+ 
+ 	scsi_autopm_get_device(sdev);
+ 
+-	if (ret != CDROMCLOSETRAY && ret != CDROMEJECT) {
++	if (cmd != CDROMCLOSETRAY && cmd != CDROMEJECT) {
+ 		ret = cdrom_ioctl(&cd->cdi, bdev, mode, cmd, arg);
+ 		if (ret != -ENOSYS)
+ 			goto put;
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index f76692053ca17..e892b9feffb11 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -428,6 +428,12 @@ static int ufs_intel_adl_init(struct ufs_hba *hba)
+ 	return ufs_intel_common_init(hba);
+ }
+ 
++static int ufs_intel_mtl_init(struct ufs_hba *hba)
++{
++	hba->caps |= UFSHCD_CAP_CRYPTO | UFSHCD_CAP_WB_EN;
++	return ufs_intel_common_init(hba);
++}
++
+ static struct ufs_hba_variant_ops ufs_intel_cnl_hba_vops = {
+ 	.name                   = "intel-pci",
+ 	.init			= ufs_intel_common_init,
+@@ -465,6 +471,16 @@ static struct ufs_hba_variant_ops ufs_intel_adl_hba_vops = {
+ 	.device_reset		= ufs_intel_device_reset,
+ };
+ 
++static struct ufs_hba_variant_ops ufs_intel_mtl_hba_vops = {
++	.name                   = "intel-pci",
++	.init			= ufs_intel_mtl_init,
++	.exit			= ufs_intel_common_exit,
++	.hce_enable_notify	= ufs_intel_hce_enable_notify,
++	.link_startup_notify	= ufs_intel_link_startup_notify,
++	.resume			= ufs_intel_resume,
++	.device_reset		= ufs_intel_device_reset,
++};
++
+ #ifdef CONFIG_PM_SLEEP
+ static int ufshcd_pci_restore(struct device *dev)
+ {
+@@ -579,6 +595,7 @@ static const struct pci_device_id ufshcd_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x98FA), (kernel_ulong_t)&ufs_intel_lkf_hba_vops },
+ 	{ PCI_VDEVICE(INTEL, 0x51FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops },
+ 	{ PCI_VDEVICE(INTEL, 0x54FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops },
++	{ PCI_VDEVICE(INTEL, 0x7E47), (kernel_ulong_t)&ufs_intel_mtl_hba_vops },
+ 	{ }	/* terminate list */
+ };
+ 
+diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
+index ded5ba9b1466a..54c3b8f34c0ab 100644
+--- a/drivers/scsi/ufs/ufshpb.c
++++ b/drivers/scsi/ufs/ufshpb.c
+@@ -870,12 +870,6 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
+ 	struct ufshpb_region *rgn, *victim_rgn = NULL;
+ 
+ 	list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) {
+-		if (!rgn) {
+-			dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+-				"%s: no region allocated\n",
+-				__func__);
+-			return NULL;
+-		}
+ 		if (ufshpb_check_srgns_issue_state(hpb, rgn))
+ 			continue;
+ 
+@@ -891,6 +885,11 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
+ 		break;
+ 	}
+ 
++	if (!victim_rgn)
++		dev_err(&hpb->sdev_ufs_lu->sdev_dev,
++			"%s: no region allocated\n",
++			__func__);
++
+ 	return victim_rgn;
+ }
+ 
+diff --git a/drivers/scsi/zorro7xx.c b/drivers/scsi/zorro7xx.c
+index 27b9e2baab1a6..7acf9193a9e80 100644
+--- a/drivers/scsi/zorro7xx.c
++++ b/drivers/scsi/zorro7xx.c
+@@ -159,6 +159,8 @@ static void zorro7xx_remove_one(struct zorro_dev *z)
+ 	scsi_remove_host(host);
+ 
+ 	NCR_700_release(host);
++	if (host->base > 0x01000000)
++		iounmap(hostdata->base);
+ 	kfree(hostdata);
+ 	free_irq(host->irq, host);
+ 	zorro_release_device(z);
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 44744c55304e1..c943908e52810 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1173,7 +1173,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ 	addr = op->addr.val;
+ 	len = op->data.nbytes;
+ 
+-	if (bcm_qspi_bspi_ver_three(qspi) == true) {
++	if (has_bspi(qspi) && bcm_qspi_bspi_ver_three(qspi) == true) {
+ 		/*
+ 		 * The address coming into this function is a raw flash offset.
+ 		 * But for BSPI <= V3, we need to convert it to a remapped BSPI
+@@ -1192,7 +1192,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ 	    len < 4)
+ 		mspi_read = true;
+ 
+-	if (mspi_read)
++	if (!has_bspi(qspi) || mspi_read)
+ 		return bcm_qspi_mspi_exec_mem_op(spi, op);
+ 
+ 	ret = bcm_qspi_bspi_set_mode(qspi, op, 0);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 4e459516b74c0..501339121050b 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1151,11 +1151,15 @@ static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
+ 
+ 	if (ctlr->dma_tx)
+ 		tx_dev = ctlr->dma_tx->device->dev;
++	else if (ctlr->dma_map_dev)
++		tx_dev = ctlr->dma_map_dev;
+ 	else
+ 		tx_dev = ctlr->dev.parent;
+ 
+ 	if (ctlr->dma_rx)
+ 		rx_dev = ctlr->dma_rx->device->dev;
++	else if (ctlr->dma_map_dev)
++		rx_dev = ctlr->dma_map_dev;
+ 	else
+ 		rx_dev = ctlr->dev.parent;
+ 
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index b9505bb51f45c..de25140159e39 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -1209,6 +1209,9 @@ int vchiq_dump_platform_instances(void *dump_context)
+ 	int len;
+ 	int i;
+ 
++	if (!state)
++		return -ENOTCONN;
++
+ 	/*
+ 	 * There is no list of instances, so instead scan all services,
+ 	 * marking those that have been dumped.
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 7fe20d4b7ba28..b7295236671c1 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -2306,6 +2306,9 @@ void vchiq_msg_queue_push(unsigned int handle, struct vchiq_header *header)
+ 	struct vchiq_service *service = find_service_by_handle(handle);
+ 	int pos;
+ 
++	if (!service)
++		return;
++
+ 	while (service->msg_queue_write == service->msg_queue_read +
+ 		VCHIQ_MAX_SLOTS) {
+ 		if (wait_for_completion_interruptible(&service->msg_queue_pop))
+@@ -2326,6 +2329,9 @@ struct vchiq_header *vchiq_msg_hold(unsigned int handle)
+ 	struct vchiq_header *header;
+ 	int pos;
+ 
++	if (!service)
++		return NULL;
++
+ 	if (service->msg_queue_write == service->msg_queue_read)
+ 		return NULL;
+ 
+diff --git a/drivers/staging/wfx/main.c b/drivers/staging/wfx/main.c
+index 858d778cc5897..e3999e95ce851 100644
+--- a/drivers/staging/wfx/main.c
++++ b/drivers/staging/wfx/main.c
+@@ -322,7 +322,8 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ 	wdev->pdata.gpio_wakeup = devm_gpiod_get_optional(dev, "wakeup",
+ 							  GPIOD_OUT_LOW);
+ 	if (IS_ERR(wdev->pdata.gpio_wakeup))
+-		return NULL;
++		goto err;
++
+ 	if (wdev->pdata.gpio_wakeup)
+ 		gpiod_set_consumer_name(wdev->pdata.gpio_wakeup, "wfx wakeup");
+ 
+@@ -341,6 +342,10 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ 		return NULL;
+ 
+ 	return wdev;
++
++err:
++	ieee80211_free_hw(hw);
++	return NULL;
+ }
+ 
+ int wfx_probe(struct wfx_dev *wdev)
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index ca084c10d0bb4..78f01ddab1c65 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -922,11 +922,8 @@ static void s3c24xx_serial_tx_chars(struct s3c24xx_uart_port *ourport)
+ 		return;
+ 	}
+ 
+-	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) {
+-		spin_unlock(&port->lock);
++	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+-		spin_lock(&port->lock);
+-	}
+ 
+ 	if (uart_circ_empty(xmit))
+ 		s3c24xx_serial_stop_tx(port);
+diff --git a/drivers/usb/cdns3/cdnsp-debug.h b/drivers/usb/cdns3/cdnsp-debug.h
+index a8776df2d4e0c..f0ca865cce2a0 100644
+--- a/drivers/usb/cdns3/cdnsp-debug.h
++++ b/drivers/usb/cdns3/cdnsp-debug.h
+@@ -182,208 +182,211 @@ static inline const char *cdnsp_decode_trb(char *str, size_t size, u32 field0,
+ 	int ep_id = TRB_TO_EP_INDEX(field3) - 1;
+ 	int type = TRB_FIELD_TO_TYPE(field3);
+ 	unsigned int ep_num;
+-	int ret = 0;
++	int ret;
+ 	u32 temp;
+ 
+ 	ep_num = DIV_ROUND_UP(ep_id, 2);
+ 
+ 	switch (type) {
+ 	case TRB_LINK:
+-		ret += snprintf(str, size,
+-				"LINK %08x%08x intr %ld type '%s' flags %c:%c:%c:%c",
+-				field1, field0, GET_INTR_TARGET(field2),
+-				cdnsp_trb_type_string(type),
+-				field3 & TRB_IOC ? 'I' : 'i',
+-				field3 & TRB_CHAIN ? 'C' : 'c',
+-				field3 & TRB_TC ? 'T' : 't',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "LINK %08x%08x intr %ld type '%s' flags %c:%c:%c:%c",
++			       field1, field0, GET_INTR_TARGET(field2),
++			       cdnsp_trb_type_string(type),
++			       field3 & TRB_IOC ? 'I' : 'i',
++			       field3 & TRB_CHAIN ? 'C' : 'c',
++			       field3 & TRB_TC ? 'T' : 't',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_TRANSFER:
+ 	case TRB_COMPLETION:
+ 	case TRB_PORT_STATUS:
+ 	case TRB_HC_EVENT:
+-		ret += snprintf(str, size,
+-				"ep%d%s(%d) type '%s' TRB %08x%08x status '%s'"
+-				" len %ld slot %ld flags %c:%c",
+-				ep_num, ep_id % 2 ? "out" : "in",
+-				TRB_TO_EP_INDEX(field3),
+-				cdnsp_trb_type_string(type), field1, field0,
+-				cdnsp_trb_comp_code_string(GET_COMP_CODE(field2)),
+-				EVENT_TRB_LEN(field2), TRB_TO_SLOT_ID(field3),
+-				field3 & EVENT_DATA ? 'E' : 'e',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "ep%d%s(%d) type '%s' TRB %08x%08x status '%s'"
++			       " len %ld slot %ld flags %c:%c",
++			       ep_num, ep_id % 2 ? "out" : "in",
++			       TRB_TO_EP_INDEX(field3),
++			       cdnsp_trb_type_string(type), field1, field0,
++			       cdnsp_trb_comp_code_string(GET_COMP_CODE(field2)),
++			       EVENT_TRB_LEN(field2), TRB_TO_SLOT_ID(field3),
++			       field3 & EVENT_DATA ? 'E' : 'e',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_MFINDEX_WRAP:
+-		ret += snprintf(str, size, "%s: flags %c",
+-				cdnsp_trb_type_string(type),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size, "%s: flags %c",
++			       cdnsp_trb_type_string(type),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SETUP:
+-		ret += snprintf(str, size,
+-				"type '%s' bRequestType %02x bRequest %02x "
+-				"wValue %02x%02x wIndex %02x%02x wLength %d "
+-				"length %ld TD size %ld intr %ld Setup ID %ld "
+-				"flags %c:%c:%c",
+-				cdnsp_trb_type_string(type),
+-				field0 & 0xff,
+-				(field0 & 0xff00) >> 8,
+-				(field0 & 0xff000000) >> 24,
+-				(field0 & 0xff0000) >> 16,
+-				(field1 & 0xff00) >> 8,
+-				field1 & 0xff,
+-				(field1 & 0xff000000) >> 16 |
+-				(field1 & 0xff0000) >> 16,
+-				TRB_LEN(field2), GET_TD_SIZE(field2),
+-				GET_INTR_TARGET(field2),
+-				TRB_SETUPID_TO_TYPE(field3),
+-				field3 & TRB_IDT ? 'D' : 'd',
+-				field3 & TRB_IOC ? 'I' : 'i',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "type '%s' bRequestType %02x bRequest %02x "
++			       "wValue %02x%02x wIndex %02x%02x wLength %d "
++			       "length %ld TD size %ld intr %ld Setup ID %ld "
++			       "flags %c:%c:%c",
++			       cdnsp_trb_type_string(type),
++			       field0 & 0xff,
++			       (field0 & 0xff00) >> 8,
++			       (field0 & 0xff000000) >> 24,
++			       (field0 & 0xff0000) >> 16,
++			       (field1 & 0xff00) >> 8,
++			       field1 & 0xff,
++			       (field1 & 0xff000000) >> 16 |
++			       (field1 & 0xff0000) >> 16,
++			       TRB_LEN(field2), GET_TD_SIZE(field2),
++			       GET_INTR_TARGET(field2),
++			       TRB_SETUPID_TO_TYPE(field3),
++			       field3 & TRB_IDT ? 'D' : 'd',
++			       field3 & TRB_IOC ? 'I' : 'i',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DATA:
+-		ret += snprintf(str, size,
+-				"type '%s' Buffer %08x%08x length %ld TD size %ld "
+-				"intr %ld flags %c:%c:%c:%c:%c:%c:%c",
+-				cdnsp_trb_type_string(type),
+-				field1, field0, TRB_LEN(field2),
+-				GET_TD_SIZE(field2),
+-				GET_INTR_TARGET(field2),
+-				field3 & TRB_IDT ? 'D' : 'i',
+-				field3 & TRB_IOC ? 'I' : 'i',
+-				field3 & TRB_CHAIN ? 'C' : 'c',
+-				field3 & TRB_NO_SNOOP ? 'S' : 's',
+-				field3 & TRB_ISP ? 'I' : 'i',
+-				field3 & TRB_ENT ? 'E' : 'e',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "type '%s' Buffer %08x%08x length %ld TD size %ld "
++			       "intr %ld flags %c:%c:%c:%c:%c:%c:%c",
++			       cdnsp_trb_type_string(type),
++			       field1, field0, TRB_LEN(field2),
++			       GET_TD_SIZE(field2),
++			       GET_INTR_TARGET(field2),
++			       field3 & TRB_IDT ? 'D' : 'i',
++			       field3 & TRB_IOC ? 'I' : 'i',
++			       field3 & TRB_CHAIN ? 'C' : 'c',
++			       field3 & TRB_NO_SNOOP ? 'S' : 's',
++			       field3 & TRB_ISP ? 'I' : 'i',
++			       field3 & TRB_ENT ? 'E' : 'e',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_STATUS:
+-		ret += snprintf(str, size,
+-				"Buffer %08x%08x length %ld TD size %ld intr"
+-				"%ld type '%s' flags %c:%c:%c:%c",
+-				field1, field0, TRB_LEN(field2),
+-				GET_TD_SIZE(field2),
+-				GET_INTR_TARGET(field2),
+-				cdnsp_trb_type_string(type),
+-				field3 & TRB_IOC ? 'I' : 'i',
+-				field3 & TRB_CHAIN ? 'C' : 'c',
+-				field3 & TRB_ENT ? 'E' : 'e',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "Buffer %08x%08x length %ld TD size %ld intr"
++			       "%ld type '%s' flags %c:%c:%c:%c",
++			       field1, field0, TRB_LEN(field2),
++			       GET_TD_SIZE(field2),
++			       GET_INTR_TARGET(field2),
++			       cdnsp_trb_type_string(type),
++			       field3 & TRB_IOC ? 'I' : 'i',
++			       field3 & TRB_CHAIN ? 'C' : 'c',
++			       field3 & TRB_ENT ? 'E' : 'e',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_NORMAL:
+ 	case TRB_ISOC:
+ 	case TRB_EVENT_DATA:
+ 	case TRB_TR_NOOP:
+-		ret += snprintf(str, size,
+-				"type '%s' Buffer %08x%08x length %ld "
+-				"TD size %ld intr %ld "
+-				"flags %c:%c:%c:%c:%c:%c:%c:%c:%c",
+-				cdnsp_trb_type_string(type),
+-				field1, field0, TRB_LEN(field2),
+-				GET_TD_SIZE(field2),
+-				GET_INTR_TARGET(field2),
+-				field3 & TRB_BEI ? 'B' : 'b',
+-				field3 & TRB_IDT ? 'T' : 't',
+-				field3 & TRB_IOC ? 'I' : 'i',
+-				field3 & TRB_CHAIN ? 'C' : 'c',
+-				field3 & TRB_NO_SNOOP ? 'S' : 's',
+-				field3 & TRB_ISP ? 'I' : 'i',
+-				field3 & TRB_ENT ? 'E' : 'e',
+-				field3 & TRB_CYCLE ? 'C' : 'c',
+-				!(field3 & TRB_EVENT_INVALIDATE) ? 'V' : 'v');
++		ret = snprintf(str, size,
++			       "type '%s' Buffer %08x%08x length %ld "
++			       "TD size %ld intr %ld "
++			       "flags %c:%c:%c:%c:%c:%c:%c:%c:%c",
++			       cdnsp_trb_type_string(type),
++			       field1, field0, TRB_LEN(field2),
++			       GET_TD_SIZE(field2),
++			       GET_INTR_TARGET(field2),
++			       field3 & TRB_BEI ? 'B' : 'b',
++			       field3 & TRB_IDT ? 'T' : 't',
++			       field3 & TRB_IOC ? 'I' : 'i',
++			       field3 & TRB_CHAIN ? 'C' : 'c',
++			       field3 & TRB_NO_SNOOP ? 'S' : 's',
++			       field3 & TRB_ISP ? 'I' : 'i',
++			       field3 & TRB_ENT ? 'E' : 'e',
++			       field3 & TRB_CYCLE ? 'C' : 'c',
++			       !(field3 & TRB_EVENT_INVALIDATE) ? 'V' : 'v');
+ 		break;
+ 	case TRB_CMD_NOOP:
+ 	case TRB_ENABLE_SLOT:
+-		ret += snprintf(str, size, "%s: flags %c",
+-				cdnsp_trb_type_string(type),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size, "%s: flags %c",
++			       cdnsp_trb_type_string(type),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DISABLE_SLOT:
+-		ret += snprintf(str, size, "%s: slot %ld flags %c",
+-				cdnsp_trb_type_string(type),
+-				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size, "%s: slot %ld flags %c",
++			       cdnsp_trb_type_string(type),
++			       TRB_TO_SLOT_ID(field3),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_ADDR_DEV:
+-		ret += snprintf(str, size,
+-				"%s: ctx %08x%08x slot %ld flags %c:%c",
+-				cdnsp_trb_type_string(type), field1, field0,
+-				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_BSR ? 'B' : 'b',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "%s: ctx %08x%08x slot %ld flags %c:%c",
++			       cdnsp_trb_type_string(type), field1, field0,
++			       TRB_TO_SLOT_ID(field3),
++			       field3 & TRB_BSR ? 'B' : 'b',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_CONFIG_EP:
+-		ret += snprintf(str, size,
+-				"%s: ctx %08x%08x slot %ld flags %c:%c",
+-				cdnsp_trb_type_string(type), field1, field0,
+-				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_DC ? 'D' : 'd',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "%s: ctx %08x%08x slot %ld flags %c:%c",
++			       cdnsp_trb_type_string(type), field1, field0,
++			       TRB_TO_SLOT_ID(field3),
++			       field3 & TRB_DC ? 'D' : 'd',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_EVAL_CONTEXT:
+-		ret += snprintf(str, size,
+-				"%s: ctx %08x%08x slot %ld flags %c",
+-				cdnsp_trb_type_string(type), field1, field0,
+-				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "%s: ctx %08x%08x slot %ld flags %c",
++			       cdnsp_trb_type_string(type), field1, field0,
++			       TRB_TO_SLOT_ID(field3),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_EP:
+ 	case TRB_HALT_ENDPOINT:
+ 	case TRB_FLUSH_ENDPOINT:
+-		ret += snprintf(str, size,
+-				"%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c",
+-				cdnsp_trb_type_string(type),
+-				ep_num, ep_id % 2 ? "out" : "in",
+-				TRB_TO_EP_INDEX(field3), field1, field0,
+-				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c",
++			       cdnsp_trb_type_string(type),
++			       ep_num, ep_id % 2 ? "out" : "in",
++			       TRB_TO_EP_INDEX(field3), field1, field0,
++			       TRB_TO_SLOT_ID(field3),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_STOP_RING:
+-		ret += snprintf(str, size,
+-				"%s: ep%d%s(%d) slot %ld sp %d flags %c",
+-				cdnsp_trb_type_string(type),
+-				ep_num, ep_id % 2 ? "out" : "in",
+-				TRB_TO_EP_INDEX(field3),
+-				TRB_TO_SLOT_ID(field3),
+-				TRB_TO_SUSPEND_PORT(field3),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "%s: ep%d%s(%d) slot %ld sp %d flags %c",
++			       cdnsp_trb_type_string(type),
++			       ep_num, ep_id % 2 ? "out" : "in",
++			       TRB_TO_EP_INDEX(field3),
++			       TRB_TO_SLOT_ID(field3),
++			       TRB_TO_SUSPEND_PORT(field3),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SET_DEQ:
+-		ret += snprintf(str, size,
+-				"%s: ep%d%s(%d) deq %08x%08x stream %ld slot %ld  flags %c",
+-				cdnsp_trb_type_string(type),
+-				ep_num, ep_id % 2 ? "out" : "in",
+-				TRB_TO_EP_INDEX(field3), field1, field0,
+-				TRB_TO_STREAM_ID(field2),
+-				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size,
++			       "%s: ep%d%s(%d) deq %08x%08x stream %ld slot %ld  flags %c",
++			       cdnsp_trb_type_string(type),
++			       ep_num, ep_id % 2 ? "out" : "in",
++			       TRB_TO_EP_INDEX(field3), field1, field0,
++			       TRB_TO_STREAM_ID(field2),
++			       TRB_TO_SLOT_ID(field3),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_DEV:
+-		ret += snprintf(str, size, "%s: slot %ld flags %c",
+-				cdnsp_trb_type_string(type),
+-				TRB_TO_SLOT_ID(field3),
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		ret = snprintf(str, size, "%s: slot %ld flags %c",
++			       cdnsp_trb_type_string(type),
++			       TRB_TO_SLOT_ID(field3),
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_ENDPOINT_NRDY:
+-		temp  = TRB_TO_HOST_STREAM(field2);
+-
+-		ret += snprintf(str, size,
+-				"%s: ep%d%s(%d) H_SID %x%s%s D_SID %lx flags %c:%c",
+-				cdnsp_trb_type_string(type),
+-				ep_num, ep_id % 2 ? "out" : "in",
+-				TRB_TO_EP_INDEX(field3), temp,
+-				temp == STREAM_PRIME_ACK ? "(PRIME)" : "",
+-				temp == STREAM_REJECTED ? "(REJECTED)" : "",
+-				TRB_TO_DEV_STREAM(field0),
+-				field3 & TRB_STAT ? 'S' : 's',
+-				field3 & TRB_CYCLE ? 'C' : 'c');
++		temp = TRB_TO_HOST_STREAM(field2);
++
++		ret = snprintf(str, size,
++			       "%s: ep%d%s(%d) H_SID %x%s%s D_SID %lx flags %c:%c",
++			       cdnsp_trb_type_string(type),
++			       ep_num, ep_id % 2 ? "out" : "in",
++			       TRB_TO_EP_INDEX(field3), temp,
++			       temp == STREAM_PRIME_ACK ? "(PRIME)" : "",
++			       temp == STREAM_REJECTED ? "(REJECTED)" : "",
++			       TRB_TO_DEV_STREAM(field0),
++			       field3 & TRB_STAT ? 'S' : 's',
++			       field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	default:
+-		ret += snprintf(str, size,
+-				"type '%s' -> raw %08x %08x %08x %08x",
+-				cdnsp_trb_type_string(type),
+-				field0, field1, field2, field3);
++		ret = snprintf(str, size,
++			       "type '%s' -> raw %08x %08x %08x %08x",
++			       cdnsp_trb_type_string(type),
++			       field0, field1, field2, field3);
+ 	}
+ 
++	if (ret >= size)
++		pr_info("CDNSP: buffer overflowed.\n");
++
+ 	return str;
+ }
+ 
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index e196673f5c647..efaf0db595f46 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -242,7 +242,7 @@ static void dwc3_omap_set_mailbox(struct dwc3_omap *omap,
+ 		break;
+ 
+ 	case OMAP_DWC3_ID_FLOAT:
+-		if (omap->vbus_reg)
++		if (omap->vbus_reg && regulator_is_enabled(omap->vbus_reg))
+ 			regulator_disable(omap->vbus_reg);
+ 		val = dwc3_omap_read_utmi_ctrl(omap);
+ 		val |= USBOTGSS_UTMI_OTG_CTRL_IDDIG;
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 06d0e88ec8af9..4d9608cc55f73 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -185,7 +185,8 @@ static const struct software_node dwc3_pci_amd_mr_swnode = {
+ 	.properties = dwc3_pci_mr_properties,
+ };
+ 
+-static int dwc3_pci_quirks(struct dwc3_pci *dwc)
++static int dwc3_pci_quirks(struct dwc3_pci *dwc,
++			   const struct software_node *swnode)
+ {
+ 	struct pci_dev			*pdev = dwc->pci;
+ 
+@@ -242,7 +243,7 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc)
+ 		}
+ 	}
+ 
+-	return 0;
++	return device_add_software_node(&dwc->dwc3->dev, swnode);
+ }
+ 
+ #ifdef CONFIG_PM
+@@ -307,11 +308,7 @@ static int dwc3_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 	dwc->dwc3->dev.parent = dev;
+ 	ACPI_COMPANION_SET(&dwc->dwc3->dev, ACPI_COMPANION(dev));
+ 
+-	ret = device_add_software_node(&dwc->dwc3->dev, (void *)id->driver_data);
+-	if (ret < 0)
+-		goto err;
+-
+-	ret = dwc3_pci_quirks(dwc);
++	ret = dwc3_pci_quirks(dwc, (void *)id->driver_data);
+ 	if (ret)
+ 		goto err;
+ 
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 43f1b0d461c1e..be76f891b9c52 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -32,9 +32,6 @@
+ #include <linux/workqueue.h>
+ 
+ /* XUSB_DEV registers */
+-#define SPARAM 0x000
+-#define  SPARAM_ERSTMAX_MASK GENMASK(20, 16)
+-#define  SPARAM_ERSTMAX(x) (((x) << 16) & SPARAM_ERSTMAX_MASK)
+ #define DB 0x004
+ #define  DB_TARGET_MASK GENMASK(15, 8)
+ #define  DB_TARGET(x) (((x) << 8) & DB_TARGET_MASK)
+@@ -275,8 +272,10 @@ BUILD_EP_CONTEXT_RW(deq_hi, deq_hi, 0, 0xffffffff)
+ BUILD_EP_CONTEXT_RW(avg_trb_len, tx_info, 0, 0xffff)
+ BUILD_EP_CONTEXT_RW(max_esit_payload, tx_info, 16, 0xffff)
+ BUILD_EP_CONTEXT_RW(edtla, rsvd[0], 0, 0xffffff)
+-BUILD_EP_CONTEXT_RW(seq_num, rsvd[0], 24, 0xff)
++BUILD_EP_CONTEXT_RW(rsvd, rsvd[0], 24, 0x1)
+ BUILD_EP_CONTEXT_RW(partial_td, rsvd[0], 25, 0x1)
++BUILD_EP_CONTEXT_RW(splitxstate, rsvd[0], 26, 0x1)
++BUILD_EP_CONTEXT_RW(seq_num, rsvd[0], 27, 0x1f)
+ BUILD_EP_CONTEXT_RW(cerrcnt, rsvd[1], 18, 0x3)
+ BUILD_EP_CONTEXT_RW(data_offset, rsvd[2], 0, 0x1ffff)
+ BUILD_EP_CONTEXT_RW(numtrbs, rsvd[2], 22, 0x1f)
+@@ -1557,6 +1556,9 @@ static int __tegra_xudc_ep_set_halt(struct tegra_xudc_ep *ep, bool halt)
+ 		ep_reload(xudc, ep->index);
+ 
+ 		ep_ctx_write_state(ep->context, EP_STATE_RUNNING);
++		ep_ctx_write_rsvd(ep->context, 0);
++		ep_ctx_write_partial_td(ep->context, 0);
++		ep_ctx_write_splitxstate(ep->context, 0);
+ 		ep_ctx_write_seq_num(ep->context, 0);
+ 
+ 		ep_reload(xudc, ep->index);
+@@ -2812,7 +2814,10 @@ static void tegra_xudc_reset(struct tegra_xudc *xudc)
+ 	xudc->setup_seq_num = 0;
+ 	xudc->queued_setup_packet = false;
+ 
+-	ep_ctx_write_seq_num(ep0->context, xudc->setup_seq_num);
++	ep_ctx_write_rsvd(ep0->context, 0);
++	ep_ctx_write_partial_td(ep0->context, 0);
++	ep_ctx_write_splitxstate(ep0->context, 0);
++	ep_ctx_write_seq_num(ep0->context, 0);
+ 
+ 	deq_ptr = trb_virt_to_phys(ep0, &ep0->transfer_ring[ep0->deq_ptr]);
+ 
+@@ -3295,11 +3300,6 @@ static void tegra_xudc_init_event_ring(struct tegra_xudc *xudc)
+ 	unsigned int i;
+ 	u32 val;
+ 
+-	val = xudc_readl(xudc, SPARAM);
+-	val &= ~(SPARAM_ERSTMAX_MASK);
+-	val |= SPARAM_ERSTMAX(XUDC_NR_EVENT_RINGS);
+-	xudc_writel(xudc, val, SPARAM);
+-
+ 	for (i = 0; i < ARRAY_SIZE(xudc->event_ring); i++) {
+ 		memset(xudc->event_ring[i], 0, XUDC_EVENT_RING_SIZE *
+ 		       sizeof(*xudc->event_ring[i]));
+diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
+index e87cf3a00fa4b..638f03b897394 100644
+--- a/drivers/usb/host/ehci-pci.c
++++ b/drivers/usb/host/ehci-pci.c
+@@ -21,6 +21,9 @@ static const char hcd_name[] = "ehci-pci";
+ /* defined here to avoid adding to pci_ids.h for single instance use */
+ #define PCI_DEVICE_ID_INTEL_CE4100_USB	0x2e70
+ 
++#define PCI_VENDOR_ID_ASPEED		0x1a03
++#define PCI_DEVICE_ID_ASPEED_EHCI	0x2603
++
+ /*-------------------------------------------------------------------------*/
+ #define PCI_DEVICE_ID_INTEL_QUARK_X1000_SOC		0x0939
+ static inline bool is_intel_quark_x1000(struct pci_dev *pdev)
+@@ -222,6 +225,12 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
+ 			ehci->has_synopsys_hc_bug = 1;
+ 		}
+ 		break;
++	case PCI_VENDOR_ID_ASPEED:
++		if (pdev->device == PCI_DEVICE_ID_ASPEED_EHCI) {
++			ehci_info(ehci, "applying Aspeed HC workaround\n");
++			ehci->is_aspeed = 1;
++		}
++		break;
+ 	}
+ 
+ 	/* optional debug port, normally in the first BAR */
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 57bed3d740608..be50c15609b7a 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -164,6 +164,7 @@ struct mlx5_vdpa_net {
+ 	u32 cur_num_vqs;
+ 	struct notifier_block nb;
+ 	struct vdpa_callback config_cb;
++	struct mlx5_vdpa_wq_ent cvq_ent;
+ };
+ 
+ static void free_resources(struct mlx5_vdpa_net *ndev);
+@@ -1627,10 +1628,10 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
+ 	ndev = to_mlx5_vdpa_ndev(mvdev);
+ 	cvq = &mvdev->cvq;
+ 	if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
+-		goto out;
++		return;
+ 
+ 	if (!cvq->ready)
+-		goto out;
++		return;
+ 
+ 	while (true) {
+ 		err = vringh_getdesc_iotlb(&cvq->vring, &cvq->riov, &cvq->wiov, &cvq->head,
+@@ -1664,9 +1665,10 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
+ 
+ 		if (vringh_need_notify_iotlb(&cvq->vring))
+ 			vringh_notify(&cvq->vring);
++
++		queue_work(mvdev->wq, &wqent->work);
++		break;
+ 	}
+-out:
+-	kfree(wqent);
+ }
+ 
+ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+@@ -1674,7 +1676,6 @@ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+ 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+ 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+ 	struct mlx5_vdpa_virtqueue *mvq;
+-	struct mlx5_vdpa_wq_ent *wqent;
+ 
+ 	if (!is_index_valid(mvdev, idx))
+ 		return;
+@@ -1683,13 +1684,7 @@ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+ 		if (!mvdev->wq || !mvdev->cvq.ready)
+ 			return;
+ 
+-		wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
+-		if (!wqent)
+-			return;
+-
+-		wqent->mvdev = mvdev;
+-		INIT_WORK(&wqent->work, mlx5_cvq_kick_handler);
+-		queue_work(mvdev->wq, &wqent->work);
++		queue_work(mvdev->wq, &ndev->cvq_ent.work);
+ 		return;
+ 	}
+ 
+@@ -2634,6 +2629,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ 	if (err)
+ 		goto err_mr;
+ 
++	ndev->cvq_ent.mvdev = mvdev;
++	INIT_WORK(&ndev->cvq_ent.work, mlx5_cvq_kick_handler);
+ 	mvdev->wq = create_singlethread_workqueue("mlx5_vdpa_wq");
+ 	if (!mvdev->wq) {
+ 		err = -ENOMEM;
+diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
+index 57d3b2cbbd8e5..82ac1569deb05 100644
+--- a/drivers/vfio/pci/vfio_pci_rdwr.c
++++ b/drivers/vfio/pci/vfio_pci_rdwr.c
+@@ -288,6 +288,7 @@ out:
+ 	return done;
+ }
+ 
++#ifdef CONFIG_VFIO_PCI_VGA
+ ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+ 			       size_t count, loff_t *ppos, bool iswrite)
+ {
+@@ -355,6 +356,7 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+ 
+ 	return done;
+ }
++#endif
+ 
+ static void vfio_pci_ioeventfd_do_write(struct vfio_pci_ioeventfd *ioeventfd,
+ 					bool test_mem)
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 28ef323882fb2..792ab5f236471 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -473,6 +473,7 @@ static void vhost_tx_batch(struct vhost_net *net,
+ 		goto signal_used;
+ 
+ 	msghdr->msg_control = &ctl;
++	msghdr->msg_controllen = sizeof(ctl);
+ 	err = sock->ops->sendmsg(sock, msghdr, 0);
+ 	if (unlikely(err < 0)) {
+ 		vq_err(&nvq->vq, "Fail to batch sending packets\n");
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index b585339509b0e..66da07f870651 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1581,7 +1581,14 @@ static void do_remove_conflicting_framebuffers(struct apertures_struct *a,
+ 			 * If it's not a platform device, at least print a warning. A
+ 			 * fix would add code to remove the device from the system.
+ 			 */
+-			if (dev_is_platform(device)) {
++			if (!device) {
++				/* TODO: Represent each OF framebuffer as its own
++				 * device in the device hierarchy. For now, offb
++				 * doesn't have such a device, so unregister the
++				 * framebuffer as before without warning.
++				 */
++				do_unregister_framebuffer(registered_fb[i]);
++			} else if (dev_is_platform(device)) {
+ 				registered_fb[i]->forced_out = true;
+ 				platform_device_unregister(to_platform_device(device));
+ 			} else {
+diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
+index ca70c5f032060..9cbeeb4923ecf 100644
+--- a/drivers/w1/slaves/w1_therm.c
++++ b/drivers/w1/slaves/w1_therm.c
+@@ -2090,16 +2090,20 @@ static ssize_t w1_seq_show(struct device *device,
+ 		if (sl->reg_num.id == reg_num->id)
+ 			seq = i;
+ 
++		if (w1_reset_bus(sl->master))
++			goto error;
++
++		/* Put the device into chain DONE state */
++		w1_write_8(sl->master, W1_MATCH_ROM);
++		w1_write_block(sl->master, (u8 *)&rn, 8);
+ 		w1_write_8(sl->master, W1_42_CHAIN);
+ 		w1_write_8(sl->master, W1_42_CHAIN_DONE);
+ 		w1_write_8(sl->master, W1_42_CHAIN_DONE_INV);
+-		w1_read_block(sl->master, &ack, sizeof(ack));
+ 
+ 		/* check for acknowledgment */
+ 		ack = w1_read_8(sl->master);
+ 		if (ack != W1_42_SUCCESS_CONFIRM_BYTE)
+ 			goto error;
+-
+ 	}
+ 
+ 	/* Exit from CHAIN state */
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 0399cf8e3c32c..151e9da5da2dc 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -118,7 +118,7 @@ struct btrfs_bio_ctrl {
+  */
+ struct extent_changeset {
+ 	/* How many bytes are set/cleared in this operation */
+-	unsigned int bytes_changed;
++	u64 bytes_changed;
+ 
+ 	/* Changed ranges */
+ 	struct ulist range_changed;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 68f5a94c82b78..96a8f3d87f687 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4462,6 +4462,13 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ 			   dest->root_key.objectid);
+ 		return -EPERM;
+ 	}
++	if (atomic_read(&dest->nr_swapfiles)) {
++		spin_unlock(&dest->root_item_lock);
++		btrfs_warn(fs_info,
++			   "attempt to delete subvolume %llu with active swapfile",
++			   root->root_key.objectid);
++		return -EPERM;
++	}
+ 	root_flags = btrfs_root_flags(&dest->root_item);
+ 	btrfs_set_root_flags(&dest->root_item,
+ 			     root_flags | BTRFS_ROOT_SUBVOL_DEAD);
+@@ -10764,8 +10771,23 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 	 * set. We use this counter to prevent snapshots. We must increment it
+ 	 * before walking the extents because we don't want a concurrent
+ 	 * snapshot to run after we've already checked the extents.
++	 *
++	 * It is possible that subvolume is marked for deletion but still not
++	 * removed yet. To prevent this race, we check the root status before
++	 * activating the swapfile.
+ 	 */
++	spin_lock(&root->root_item_lock);
++	if (btrfs_root_dead(root)) {
++		spin_unlock(&root->root_item_lock);
++
++		btrfs_exclop_finish(fs_info);
++		btrfs_warn(fs_info,
++		"cannot activate swapfile because subvolume %llu is being deleted",
++			root->root_key.objectid);
++		return -EPERM;
++	}
+ 	atomic_inc(&root->nr_swapfiles);
++	spin_unlock(&root->root_item_lock);
+ 
+ 	isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 3d3d6e2f110ae..8fdb68ac45d20 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1189,7 +1189,7 @@ static u32 get_extent_max_capacity(const struct extent_map *em)
+ }
+ 
+ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+-				     bool locked)
++				     u32 extent_thresh, u64 newer_than, bool locked)
+ {
+ 	struct extent_map *next;
+ 	bool ret = false;
+@@ -1199,11 +1199,12 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+ 		return false;
+ 
+ 	/*
+-	 * We want to check if the next extent can be merged with the current
+-	 * one, which can be an extent created in a past generation, so we pass
+-	 * a minimum generation of 0 to defrag_lookup_extent().
++	 * Here we need to pass @newer_then when checking the next extent, or
++	 * we will hit a case we mark current extent for defrag, but the next
++	 * one will not be a target.
++	 * This will just cause extra IO without really reducing the fragments.
+ 	 */
+-	next = defrag_lookup_extent(inode, em->start + em->len, 0, locked);
++	next = defrag_lookup_extent(inode, em->start + em->len, newer_than, locked);
+ 	/* No more em or hole */
+ 	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+ 		goto out;
+@@ -1215,6 +1216,13 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+ 	 */
+ 	if (next->len >= get_extent_max_capacity(em))
+ 		goto out;
++	/* Skip older extent */
++	if (next->generation < newer_than)
++		goto out;
++	/* Also check extent size */
++	if (next->len >= extent_thresh)
++		goto out;
++
+ 	ret = true;
+ out:
+ 	free_extent_map(next);
+@@ -1420,7 +1428,7 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 			goto next;
+ 
+ 		next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em,
+-							  locked);
++						extent_thresh, newer_than, locked);
+ 		if (!next_mergeable) {
+ 			struct defrag_target_range *last;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 683822b7b194f..dffbde3f05708 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1942,23 +1942,18 @@ static void update_dev_time(const char *device_path)
+ 	path_put(&path);
+ }
+ 
+-static int btrfs_rm_dev_item(struct btrfs_device *device)
++static int btrfs_rm_dev_item(struct btrfs_trans_handle *trans,
++			     struct btrfs_device *device)
+ {
+ 	struct btrfs_root *root = device->fs_info->chunk_root;
+ 	int ret;
+ 	struct btrfs_path *path;
+ 	struct btrfs_key key;
+-	struct btrfs_trans_handle *trans;
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+ 		return -ENOMEM;
+ 
+-	trans = btrfs_start_transaction(root, 0);
+-	if (IS_ERR(trans)) {
+-		btrfs_free_path(path);
+-		return PTR_ERR(trans);
+-	}
+ 	key.objectid = BTRFS_DEV_ITEMS_OBJECTID;
+ 	key.type = BTRFS_DEV_ITEM_KEY;
+ 	key.offset = device->devid;
+@@ -1969,21 +1964,12 @@ static int btrfs_rm_dev_item(struct btrfs_device *device)
+ 	if (ret) {
+ 		if (ret > 0)
+ 			ret = -ENOENT;
+-		btrfs_abort_transaction(trans, ret);
+-		btrfs_end_transaction(trans);
+ 		goto out;
+ 	}
+ 
+ 	ret = btrfs_del_item(trans, root, path);
+-	if (ret) {
+-		btrfs_abort_transaction(trans, ret);
+-		btrfs_end_transaction(trans);
+-	}
+-
+ out:
+ 	btrfs_free_path(path);
+-	if (!ret)
+-		ret = btrfs_commit_transaction(trans);
+ 	return ret;
+ }
+ 
+@@ -2124,6 +2110,7 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ 		    struct btrfs_dev_lookup_args *args,
+ 		    struct block_device **bdev, fmode_t *mode)
+ {
++	struct btrfs_trans_handle *trans;
+ 	struct btrfs_device *device;
+ 	struct btrfs_fs_devices *cur_devices;
+ 	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
+@@ -2139,7 +2126,7 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ 
+ 	ret = btrfs_check_raid_min_devices(fs_info, num_devices - 1);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+ 	device = btrfs_find_device(fs_info->fs_devices, args);
+ 	if (!device) {
+@@ -2147,27 +2134,22 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ 			ret = BTRFS_ERROR_DEV_MISSING_NOT_FOUND;
+ 		else
+ 			ret = -ENOENT;
+-		goto out;
++		return ret;
+ 	}
+ 
+ 	if (btrfs_pinned_by_swapfile(fs_info, device)) {
+ 		btrfs_warn_in_rcu(fs_info,
+ 		  "cannot remove device %s (devid %llu) due to active swapfile",
+ 				  rcu_str_deref(device->name), device->devid);
+-		ret = -ETXTBSY;
+-		goto out;
++		return -ETXTBSY;
+ 	}
+ 
+-	if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) {
+-		ret = BTRFS_ERROR_DEV_TGT_REPLACE;
+-		goto out;
+-	}
++	if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state))
++		return BTRFS_ERROR_DEV_TGT_REPLACE;
+ 
+ 	if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state) &&
+-	    fs_info->fs_devices->rw_devices == 1) {
+-		ret = BTRFS_ERROR_DEV_ONLY_WRITABLE;
+-		goto out;
+-	}
++	    fs_info->fs_devices->rw_devices == 1)
++		return BTRFS_ERROR_DEV_ONLY_WRITABLE;
+ 
+ 	if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
+ 		mutex_lock(&fs_info->chunk_mutex);
+@@ -2182,14 +2164,22 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ 	if (ret)
+ 		goto error_undo;
+ 
+-	/*
+-	 * TODO: the superblock still includes this device in its num_devices
+-	 * counter although write_all_supers() is not locked out. This
+-	 * could give a filesystem state which requires a degraded mount.
+-	 */
+-	ret = btrfs_rm_dev_item(device);
+-	if (ret)
++	trans = btrfs_start_transaction(fs_info->chunk_root, 0);
++	if (IS_ERR(trans)) {
++		ret = PTR_ERR(trans);
+ 		goto error_undo;
++	}
++
++	ret = btrfs_rm_dev_item(trans, device);
++	if (ret) {
++		/* Any error in dev item removal is critical */
++		btrfs_crit(fs_info,
++			   "failed to remove device item for devid %llu: %d",
++			   device->devid, ret);
++		btrfs_abort_transaction(trans, ret);
++		btrfs_end_transaction(trans);
++		return ret;
++	}
+ 
+ 	clear_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state);
+ 	btrfs_scrub_cancel_dev(device);
+@@ -2272,7 +2262,8 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ 		free_fs_devices(cur_devices);
+ 	}
+ 
+-out:
++	ret = btrfs_commit_transaction(trans);
++
+ 	return ret;
+ 
+ error_undo:
+@@ -2284,7 +2275,7 @@ error_undo:
+ 		device->fs_devices->rw_devices++;
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 	}
+-	goto out;
++	return ret;
+ }
+ 
+ void btrfs_rm_dev_replace_remove_srcdev(struct btrfs_device *srcdev)
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index e2ccbb339499c..e3bf3fc1cbbfc 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1936,18 +1936,19 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group)
+ 
+ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
+ {
++	struct btrfs_fs_info *fs_info = fs_devices->fs_info;
+ 	struct btrfs_device *device;
+ 	bool ret = false;
+ 
+-	if (!btrfs_is_zoned(fs_devices->fs_info))
++	if (!btrfs_is_zoned(fs_info))
+ 		return true;
+ 
+ 	/* Non-single profiles are not supported yet */
+ 	ASSERT((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0);
+ 
+ 	/* Check if there is a device with active zones left */
+-	mutex_lock(&fs_devices->device_list_mutex);
+-	list_for_each_entry(device, &fs_devices->devices, dev_list) {
++	mutex_lock(&fs_info->chunk_mutex);
++	list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) {
+ 		struct btrfs_zoned_device_info *zinfo = device->zone_info;
+ 
+ 		if (!device->bdev)
+@@ -1959,7 +1960,7 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
+ 			break;
+ 		}
+ 	}
+-	mutex_unlock(&fs_devices->device_list_mutex);
++	mutex_unlock(&fs_info->chunk_mutex);
+ 
+ 	return ret;
+ }
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index 133dbd9338e73..d91fa53e12b33 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -478,8 +478,11 @@ more:
+ 					2 : (fpos_off(rde->offset) + 1);
+ 			err = note_last_dentry(dfi, rde->name, rde->name_len,
+ 					       next_offset);
+-			if (err)
++			if (err) {
++				ceph_mdsc_put_request(dfi->last_readdir);
++				dfi->last_readdir = NULL;
+ 				return err;
++			}
+ 		} else if (req->r_reply_info.dir_end) {
+ 			dfi->next_offset = 2;
+ 			/* keep last name */
+@@ -520,6 +523,12 @@ more:
+ 		if (!dir_emit(ctx, rde->name, rde->name_len,
+ 			      ceph_present_ino(inode->i_sb, le64_to_cpu(rde->inode.in->ino)),
+ 			      le32_to_cpu(rde->inode.in->mode) >> 12)) {
++			/*
++			 * NOTE: Here no need to put the 'dfi->last_readdir',
++			 * because when dir_emit stops us it's most likely
++			 * doesn't have enough memory, etc. So for next readdir
++			 * it will continue.
++			 */
+ 			dout("filldir stopping us...\n");
+ 			return 0;
+ 		}
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index e3322fcb2e8d9..6d87991cf67ea 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -87,13 +87,13 @@ struct inode *ceph_get_snapdir(struct inode *parent)
+ 	if (!S_ISDIR(parent->i_mode)) {
+ 		pr_warn_once("bad snapdir parent type (mode=0%o)\n",
+ 			     parent->i_mode);
+-		return ERR_PTR(-ENOTDIR);
++		goto err;
+ 	}
+ 
+ 	if (!(inode->i_state & I_NEW) && !S_ISDIR(inode->i_mode)) {
+ 		pr_warn_once("bad snapdir inode type (mode=0%o)\n",
+ 			     inode->i_mode);
+-		return ERR_PTR(-ENOTDIR);
++		goto err;
+ 	}
+ 
+ 	inode->i_mode = parent->i_mode;
+@@ -113,6 +113,12 @@ struct inode *ceph_get_snapdir(struct inode *parent)
+ 	}
+ 
+ 	return inode;
++err:
++	if ((inode->i_state & I_NEW))
++		discard_new_inode(inode);
++	else
++		iput(inode);
++	return ERR_PTR(-ENOTDIR);
+ }
+ 
+ const struct inode_operations ceph_file_iops = {
+diff --git a/fs/file_table.c b/fs/file_table.c
+index 45437f8e1003e..e8c9016703ad6 100644
+--- a/fs/file_table.c
++++ b/fs/file_table.c
+@@ -375,6 +375,7 @@ void __fput_sync(struct file *file)
+ }
+ 
+ EXPORT_SYMBOL(fput);
++EXPORT_SYMBOL(__fput_sync);
+ 
+ void __init files_init(void)
+ {
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 49cafa0a8b8fb..4eb8581625599 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -619,10 +619,10 @@ struct io_epoll {
+ 
+ struct io_splice {
+ 	struct file			*file_out;
+-	struct file			*file_in;
+ 	loff_t				off_out;
+ 	loff_t				off_in;
+ 	u64				len;
++	int				splice_fd_in;
+ 	unsigned int			flags;
+ };
+ 
+@@ -1524,14 +1524,6 @@ static void io_prep_async_work(struct io_kiocb *req)
+ 		if (def->unbound_nonreg_file)
+ 			req->work.flags |= IO_WQ_WORK_UNBOUND;
+ 	}
+-
+-	switch (req->opcode) {
+-	case IORING_OP_SPLICE:
+-	case IORING_OP_TEE:
+-		if (!S_ISREG(file_inode(req->splice.file_in)->i_mode))
+-			req->work.flags |= IO_WQ_WORK_UNBOUND;
+-		break;
+-	}
+ }
+ 
+ static void io_prep_async_link(struct io_kiocb *req)
+@@ -1622,12 +1614,11 @@ static __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
+ 	__must_hold(&ctx->completion_lock)
+ {
+ 	u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
++	struct io_kiocb *req, *tmp;
+ 
+ 	spin_lock_irq(&ctx->timeout_lock);
+-	while (!list_empty(&ctx->timeout_list)) {
++	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+ 		u32 events_needed, events_got;
+-		struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
+-						struct io_kiocb, timeout.list);
+ 
+ 		if (io_is_timeout_noseq(req))
+ 			break;
+@@ -1644,7 +1635,6 @@ static __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
+ 		if (events_got < events_needed)
+ 			break;
+ 
+-		list_del_init(&req->timeout.list);
+ 		io_kill_timeout(req, 0);
+ 	}
+ 	ctx->cq_last_tm_flush = seq;
+@@ -4054,18 +4044,11 @@ static int __io_splice_prep(struct io_kiocb *req,
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+ 
+-	sp->file_in = NULL;
+ 	sp->len = READ_ONCE(sqe->len);
+ 	sp->flags = READ_ONCE(sqe->splice_flags);
+-
+ 	if (unlikely(sp->flags & ~valid_flags))
+ 		return -EINVAL;
+-
+-	sp->file_in = io_file_get(req->ctx, req, READ_ONCE(sqe->splice_fd_in),
+-				  (sp->flags & SPLICE_F_FD_IN_FIXED));
+-	if (!sp->file_in)
+-		return -EBADF;
+-	req->flags |= REQ_F_NEED_CLEANUP;
++	sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
+ 	return 0;
+ }
+ 
+@@ -4080,20 +4063,27 @@ static int io_tee_prep(struct io_kiocb *req,
+ static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ 	struct io_splice *sp = &req->splice;
+-	struct file *in = sp->file_in;
+ 	struct file *out = sp->file_out;
+ 	unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
++	struct file *in;
+ 	long ret = 0;
+ 
+ 	if (issue_flags & IO_URING_F_NONBLOCK)
+ 		return -EAGAIN;
++
++	in = io_file_get(req->ctx, req, sp->splice_fd_in,
++				  (sp->flags & SPLICE_F_FD_IN_FIXED));
++	if (!in) {
++		ret = -EBADF;
++		goto done;
++	}
++
+ 	if (sp->len)
+ 		ret = do_tee(in, out, sp->len, flags);
+ 
+ 	if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
+ 		io_put_file(in);
+-	req->flags &= ~REQ_F_NEED_CLEANUP;
+-
++done:
+ 	if (ret != sp->len)
+ 		req_set_fail(req);
+ 	io_req_complete(req, ret);
+@@ -4112,15 +4102,22 @@ static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ 	struct io_splice *sp = &req->splice;
+-	struct file *in = sp->file_in;
+ 	struct file *out = sp->file_out;
+ 	unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
+ 	loff_t *poff_in, *poff_out;
++	struct file *in;
+ 	long ret = 0;
+ 
+ 	if (issue_flags & IO_URING_F_NONBLOCK)
+ 		return -EAGAIN;
+ 
++	in = io_file_get(req->ctx, req, sp->splice_fd_in,
++				  (sp->flags & SPLICE_F_FD_IN_FIXED));
++	if (!in) {
++		ret = -EBADF;
++		goto done;
++	}
++
+ 	poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
+ 	poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
+ 
+@@ -4129,8 +4126,7 @@ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 	if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
+ 		io_put_file(in);
+-	req->flags &= ~REQ_F_NEED_CLEANUP;
+-
++done:
+ 	if (ret != sp->len)
+ 		req_set_fail(req);
+ 	io_req_complete(req, ret);
+@@ -4155,9 +4151,6 @@ static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 
+-	if (!req->file)
+-		return -EBADF;
+-
+ 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+ 	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
+@@ -6228,6 +6221,7 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 	if (data->ts.tv_sec < 0 || data->ts.tv_nsec < 0)
+ 		return -EINVAL;
+ 
++	INIT_LIST_HEAD(&req->timeout.list);
+ 	data->mode = io_translate_timeout_mode(flags);
+ 	hrtimer_init(&data->timer, io_timeout_get_clock(data), data->mode);
+ 
+@@ -6633,11 +6627,6 @@ static void io_clean_op(struct io_kiocb *req)
+ 			kfree(io->free_iov);
+ 			break;
+ 			}
+-		case IORING_OP_SPLICE:
+-		case IORING_OP_TEE:
+-			if (!(req->splice.flags & SPLICE_F_FD_IN_FIXED))
+-				io_put_file(req->splice.file_in);
+-			break;
+ 		case IORING_OP_OPENAT:
+ 		case IORING_OP_OPENAT2:
+ 			if (req->open.filename)
+@@ -8176,8 +8165,12 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ 		refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+ 		skb_queue_head(&sk->sk_receive_queue, skb);
+ 
+-		for (i = 0; i < nr_files; i++)
+-			fput(fpl->fp[i]);
++		for (i = 0; i < nr; i++) {
++			struct file *file = io_file_from_index(ctx, i + offset);
++
++			if (file)
++				fput(file);
++		}
+ 	} else {
+ 		kfree_skb(skb);
+ 		free_uid(fpl->user);
+@@ -8640,7 +8633,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ 				err = -EBADF;
+ 				break;
+ 			}
+-			*io_get_tag_slot(data, up->offset + done) = tag;
++			*io_get_tag_slot(data, i) = tag;
+ 			io_fixed_file_set(file_slot, file);
+ 			err = io_sqe_file_register(ctx, file, i);
+ 			if (err) {
+@@ -10805,7 +10798,15 @@ static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
+ 	if (len > cpumask_size())
+ 		len = cpumask_size();
+ 
+-	if (copy_from_user(new_mask, arg, len)) {
++	if (in_compat_syscall()) {
++		ret = compat_get_bitmap(cpumask_bits(new_mask),
++					(const compat_ulong_t __user *)arg,
++					len * 8 /* CHAR_BIT */);
++	} else {
++		ret = copy_from_user(new_mask, arg, len);
++	}
++
++	if (ret) {
+ 		free_cpumask_var(new_mask);
+ 		return -EFAULT;
+ 	}
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 57ab424c05ff0..072821b50ab91 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -146,12 +146,13 @@ void jfs_evict_inode(struct inode *inode)
+ 		dquot_initialize(inode);
+ 
+ 		if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
++			struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
+ 			truncate_inode_pages_final(&inode->i_data);
+ 
+ 			if (test_cflag(COMMIT_Freewmap, inode))
+ 				jfs_free_zero_link(inode);
+ 
+-			if (JFS_SBI(inode->i_sb)->ipimap)
++			if (ipimap && JFS_IP(ipimap)->i_imap)
+ 				diFree(inode);
+ 
+ 			/*
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index a71f1cf894b9f..d4bd94234ef73 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -447,7 +447,8 @@ static const struct address_space_operations minix_aops = {
+ 	.writepage = minix_writepage,
+ 	.write_begin = minix_write_begin,
+ 	.write_end = generic_write_end,
+-	.bmap = minix_bmap
++	.bmap = minix_bmap,
++	.direct_IO = noop_direct_IO
+ };
+ 
+ static const struct inode_operations minix_symlink_inode_operations = {
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 877f72433f435..503b9eef45331 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1830,16 +1830,6 @@ const struct dentry_operations nfs4_dentry_operations = {
+ };
+ EXPORT_SYMBOL_GPL(nfs4_dentry_operations);
+ 
+-static fmode_t flags_to_mode(int flags)
+-{
+-	fmode_t res = (__force fmode_t)flags & FMODE_EXEC;
+-	if ((flags & O_ACCMODE) != O_WRONLY)
+-		res |= FMODE_READ;
+-	if ((flags & O_ACCMODE) != O_RDONLY)
+-		res |= FMODE_WRITE;
+-	return res;
+-}
+-
+ static struct nfs_open_context *create_nfs_open_context(struct dentry *dentry, int open_flags, struct file *filp)
+ {
+ 	return alloc_nfs_open_context(dentry, flags_to_mode(open_flags), filp);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 9cff8709c80ae..737b30792ac4b 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -172,8 +172,8 @@ ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 	VM_BUG_ON(iov_iter_count(iter) != PAGE_SIZE);
+ 
+ 	if (iov_iter_rw(iter) == READ)
+-		return nfs_file_direct_read(iocb, iter);
+-	return nfs_file_direct_write(iocb, iter);
++		return nfs_file_direct_read(iocb, iter, true);
++	return nfs_file_direct_write(iocb, iter, true);
+ }
+ 
+ static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
+@@ -424,6 +424,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+  * nfs_file_direct_read - file direct read operation for NFS files
+  * @iocb: target I/O control block
+  * @iter: vector of user buffers into which to read data
++ * @swap: flag indicating this is swap IO, not O_DIRECT IO
+  *
+  * We use this function for direct reads instead of calling
+  * generic_file_aio_read() in order to avoid gfar's check to see if
+@@ -439,7 +440,8 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+  * client must read the updated atime from the server back into its
+  * cache.
+  */
+-ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
++ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter,
++			     bool swap)
+ {
+ 	struct file *file = iocb->ki_filp;
+ 	struct address_space *mapping = file->f_mapping;
+@@ -481,12 +483,14 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (iter_is_iovec(iter))
+ 		dreq->flags = NFS_ODIRECT_SHOULD_DIRTY;
+ 
+-	nfs_start_io_direct(inode);
++	if (!swap)
++		nfs_start_io_direct(inode);
+ 
+ 	NFS_I(inode)->read_io += count;
+ 	requested = nfs_direct_read_schedule_iovec(dreq, iter, iocb->ki_pos);
+ 
+-	nfs_end_io_direct(inode);
++	if (!swap)
++		nfs_end_io_direct(inode);
+ 
+ 	if (requested > 0) {
+ 		result = nfs_direct_wait(dreq);
+@@ -789,7 +793,7 @@ static const struct nfs_pgio_completion_ops nfs_direct_write_completion_ops = {
+  */
+ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ 					       struct iov_iter *iter,
+-					       loff_t pos)
++					       loff_t pos, int ioflags)
+ {
+ 	struct nfs_pageio_descriptor desc;
+ 	struct inode *inode = dreq->inode;
+@@ -797,7 +801,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ 	size_t requested_bytes = 0;
+ 	size_t wsize = max_t(size_t, NFS_SERVER(inode)->wsize, PAGE_SIZE);
+ 
+-	nfs_pageio_init_write(&desc, inode, FLUSH_COND_STABLE, false,
++	nfs_pageio_init_write(&desc, inode, ioflags, false,
+ 			      &nfs_direct_write_completion_ops);
+ 	desc.pg_dreq = dreq;
+ 	get_dreq(dreq);
+@@ -875,6 +879,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+  * nfs_file_direct_write - file direct write operation for NFS files
+  * @iocb: target I/O control block
+  * @iter: vector of user buffers from which to write data
++ * @swap: flag indicating this is swap IO, not O_DIRECT IO
+  *
+  * We use this function for direct writes instead of calling
+  * generic_file_aio_write() in order to avoid taking the inode
+@@ -891,7 +896,8 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+  * Note that O_APPEND is not supported for NFS direct writes, as there
+  * is no atomic O_APPEND write facility in the NFS protocol.
+  */
+-ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
++ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter,
++			      bool swap)
+ {
+ 	ssize_t result, requested;
+ 	size_t count;
+@@ -905,7 +911,11 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
+ 	dfprintk(FILE, "NFS: direct write(%pD2, %zd@%Ld)\n",
+ 		file, iov_iter_count(iter), (long long) iocb->ki_pos);
+ 
+-	result = generic_write_checks(iocb, iter);
++	if (swap)
++		/* bypass generic checks */
++		result =  iov_iter_count(iter);
++	else
++		result = generic_write_checks(iocb, iter);
+ 	if (result <= 0)
+ 		return result;
+ 	count = result;
+@@ -936,16 +946,22 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
+ 		dreq->iocb = iocb;
+ 	pnfs_init_ds_commit_info_ops(&dreq->ds_cinfo, inode);
+ 
+-	nfs_start_io_direct(inode);
++	if (swap) {
++		requested = nfs_direct_write_schedule_iovec(dreq, iter, pos,
++							    FLUSH_STABLE);
++	} else {
++		nfs_start_io_direct(inode);
+ 
+-	requested = nfs_direct_write_schedule_iovec(dreq, iter, pos);
++		requested = nfs_direct_write_schedule_iovec(dreq, iter, pos,
++							    FLUSH_COND_STABLE);
+ 
+-	if (mapping->nrpages) {
+-		invalidate_inode_pages2_range(mapping,
+-					      pos >> PAGE_SHIFT, end);
+-	}
++		if (mapping->nrpages) {
++			invalidate_inode_pages2_range(mapping,
++						      pos >> PAGE_SHIFT, end);
++		}
+ 
+-	nfs_end_io_direct(inode);
++		nfs_end_io_direct(inode);
++	}
+ 
+ 	if (requested > 0) {
+ 		result = nfs_direct_wait(dreq);
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 24e7dccce3559..5105416652192 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -161,7 +161,7 @@ nfs_file_read(struct kiocb *iocb, struct iov_iter *to)
+ 	ssize_t result;
+ 
+ 	if (iocb->ki_flags & IOCB_DIRECT)
+-		return nfs_file_direct_read(iocb, to);
++		return nfs_file_direct_read(iocb, to, false);
+ 
+ 	dprintk("NFS: read(%pD2, %zu@%lu)\n",
+ 		iocb->ki_filp,
+@@ -616,7 +616,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 		return result;
+ 
+ 	if (iocb->ki_flags & IOCB_DIRECT)
+-		return nfs_file_direct_write(iocb, from);
++		return nfs_file_direct_write(iocb, from, false);
+ 
+ 	dprintk("NFS: write(%pD2, %zu@%Ld)\n",
+ 		file, iov_iter_count(from), (long long) iocb->ki_pos);
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index a09d3ff627c20..4da8a4a7bad73 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1180,7 +1180,6 @@ int nfs_open(struct inode *inode, struct file *filp)
+ 	nfs_fscache_open_file(inode, filp);
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(nfs_open);
+ 
+ /*
+  * This function is called whenever some part of NFS notices that
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 12f6acb483bbd..dd62f0525e807 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -42,6 +42,16 @@ static inline bool nfs_lookup_is_soft_revalidate(const struct dentry *dentry)
+ 	return true;
+ }
+ 
++static inline fmode_t flags_to_mode(int flags)
++{
++	fmode_t res = (__force fmode_t)flags & FMODE_EXEC;
++	if ((flags & O_ACCMODE) != O_WRONLY)
++		res |= FMODE_READ;
++	if ((flags & O_ACCMODE) != O_RDONLY)
++		res |= FMODE_WRITE;
++	return res;
++}
++
+ /*
+  * Note: RFC 1813 doesn't limit the number of auth flavors that
+  * a server can return, so make something up.
+@@ -572,6 +582,13 @@ nfs_write_match_verf(const struct nfs_writeverf *verf,
+ 		!nfs_write_verifier_cmp(&req->wb_verf, &verf->verifier);
+ }
+ 
++static inline gfp_t nfs_io_gfp_mask(void)
++{
++	if (current->flags & PF_WQ_WORKER)
++		return GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
++	return GFP_KERNEL;
++}
++
+ /* unlink.c */
+ extern struct rpc_task *
+ nfs_async_rename(struct inode *old_dir, struct inode *new_dir,
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 8b21ff1be7175..438de4f93ce79 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -592,8 +592,10 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ 
+ 	ctx = get_nfs_open_context(nfs_file_open_context(src));
+ 	l_ctx = nfs_get_lock_context(ctx);
+-	if (IS_ERR(l_ctx))
+-		return PTR_ERR(l_ctx);
++	if (IS_ERR(l_ctx)) {
++		status = PTR_ERR(l_ctx);
++		goto out;
++	}
+ 
+ 	status = nfs4_set_rw_stateid(&args->cna_src_stateid, ctx, l_ctx,
+ 				     FMODE_READ);
+@@ -601,7 +603,7 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ 	if (status) {
+ 		if (status == -EAGAIN)
+ 			status = -NFS4ERR_BAD_STATEID;
+-		return status;
++		goto out;
+ 	}
+ 
+ 	status = nfs4_call_sync(src_server->client, src_server, &msg,
+@@ -610,6 +612,7 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ 	if (status == -ENOTSUPP)
+ 		src_server->caps &= ~NFS_CAP_COPY_NOTIFY;
+ 
++out:
+ 	put_nfs_open_context(nfs_file_open_context(src));
+ 	return status;
+ }
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index e79ae4cbc395e..e34af48fb4f41 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -32,6 +32,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ 	struct dentry *parent = NULL;
+ 	struct inode *dir;
+ 	unsigned openflags = filp->f_flags;
++	fmode_t f_mode;
+ 	struct iattr attr;
+ 	int err;
+ 
+@@ -50,8 +51,9 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ 	if (err)
+ 		return err;
+ 
++	f_mode = filp->f_mode;
+ 	if ((openflags & O_ACCMODE) == 3)
+-		return nfs_open(inode, filp);
++		f_mode |= flags_to_mode(openflags);
+ 
+ 	/* We can't create new files here */
+ 	openflags &= ~(O_CREAT|O_EXCL);
+@@ -59,7 +61,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ 	parent = dget_parent(dentry);
+ 	dir = d_inode(parent);
+ 
+-	ctx = alloc_nfs_open_context(file_dentry(filp), filp->f_mode, filp);
++	ctx = alloc_nfs_open_context(file_dentry(filp), f_mode, filp);
+ 	err = PTR_ERR(ctx);
+ 	if (IS_ERR(ctx))
+ 		goto out;
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 499bef9fe1186..94f1876afab21 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -49,6 +49,7 @@
+ #include <linux/workqueue.h>
+ #include <linux/bitops.h>
+ #include <linux/jiffies.h>
++#include <linux/sched/mm.h>
+ 
+ #include <linux/sunrpc/clnt.h>
+ 
+@@ -2560,9 +2561,17 @@ static void nfs4_layoutreturn_any_run(struct nfs_client *clp)
+ 
+ static void nfs4_state_manager(struct nfs_client *clp)
+ {
++	unsigned int memflags;
+ 	int status = 0;
+ 	const char *section = "", *section_sep = "";
+ 
++	/*
++	 * State recovery can deadlock if the direct reclaim code tries
++	 * start NFS writeback. So ensure memory allocations are all
++	 * GFP_NOFS.
++	 */
++	memflags = memalloc_nofs_save();
++
+ 	/* Ensure exclusive access to NFSv4 state */
+ 	do {
+ 		trace_nfs4_state_mgr(clp);
+@@ -2657,6 +2666,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 			clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state);
+ 		}
+ 
++		memalloc_nofs_restore(memflags);
+ 		nfs4_end_drain_session(clp);
+ 		nfs4_clear_state_manager_bit(clp);
+ 
+@@ -2674,6 +2684,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 			return;
+ 		if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
+ 			return;
++		memflags = memalloc_nofs_save();
+ 	} while (refcount_read(&clp->cl_count) > 1 && !signalled());
+ 	goto out_drain;
+ 
+@@ -2686,6 +2697,7 @@ out_error:
+ 			clp->cl_hostname, -status);
+ 	ssleep(1);
+ out_drain:
++	memalloc_nofs_restore(memflags);
+ 	nfs4_end_drain_session(clp);
+ 	nfs4_clear_state_manager_bit(clp);
+ }
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 815d630802451..9157dd19b8b4f 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -90,10 +90,10 @@ void nfs_set_pgio_error(struct nfs_pgio_header *hdr, int error, loff_t pos)
+ 	}
+ }
+ 
+-static inline struct nfs_page *
+-nfs_page_alloc(void)
++static inline struct nfs_page *nfs_page_alloc(void)
+ {
+-	struct nfs_page	*p = kmem_cache_zalloc(nfs_page_cachep, GFP_KERNEL);
++	struct nfs_page *p =
++		kmem_cache_zalloc(nfs_page_cachep, nfs_io_gfp_mask());
+ 	if (p)
+ 		INIT_LIST_HEAD(&p->wb_list);
+ 	return p;
+@@ -892,7 +892,7 @@ int nfs_generic_pgio(struct nfs_pageio_descriptor *desc,
+ 	struct nfs_commit_info cinfo;
+ 	struct nfs_page_array *pg_array = &hdr->page_array;
+ 	unsigned int pagecount, pageused;
+-	gfp_t gfp_flags = GFP_KERNEL;
++	gfp_t gfp_flags = nfs_io_gfp_mask();
+ 
+ 	pagecount = nfs_page_array_len(mirror->pg_base, mirror->pg_count);
+ 	pg_array->npages = pagecount;
+@@ -979,7 +979,7 @@ nfs_pageio_alloc_mirrors(struct nfs_pageio_descriptor *desc,
+ 	desc->pg_mirrors_dynamic = NULL;
+ 	if (mirror_count == 1)
+ 		return desc->pg_mirrors_static;
+-	ret = kmalloc_array(mirror_count, sizeof(*ret), GFP_KERNEL);
++	ret = kmalloc_array(mirror_count, sizeof(*ret), nfs_io_gfp_mask());
+ 	if (ret != NULL) {
+ 		for (i = 0; i < mirror_count; i++)
+ 			nfs_pageio_mirror_init(&ret[i], desc->pg_bsize);
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 316f68f96e573..657c242a18ff1 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -419,7 +419,7 @@ static struct nfs_commit_data *
+ pnfs_bucket_fetch_commitdata(struct pnfs_commit_bucket *bucket,
+ 			     struct nfs_commit_info *cinfo)
+ {
+-	struct nfs_commit_data *data = nfs_commitdata_alloc(false);
++	struct nfs_commit_data *data = nfs_commitdata_alloc();
+ 
+ 	if (!data)
+ 		return NULL;
+@@ -515,7 +515,11 @@ pnfs_generic_commit_pagelist(struct inode *inode, struct list_head *mds_pages,
+ 	unsigned int nreq = 0;
+ 
+ 	if (!list_empty(mds_pages)) {
+-		data = nfs_commitdata_alloc(true);
++		data = nfs_commitdata_alloc();
++		if (!data) {
++			nfs_retry_commit(mds_pages, NULL, cinfo, -1);
++			return -ENOMEM;
++		}
+ 		data->ds_commit_index = -1;
+ 		list_splice_init(mds_pages, &data->pages);
+ 		list_add_tail(&data->list, &list);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index e86aff429993e..fdaca2a55f366 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -70,27 +70,17 @@ static mempool_t *nfs_wdata_mempool;
+ static struct kmem_cache *nfs_cdata_cachep;
+ static mempool_t *nfs_commit_mempool;
+ 
+-struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail)
++struct nfs_commit_data *nfs_commitdata_alloc(void)
+ {
+ 	struct nfs_commit_data *p;
+ 
+-	if (never_fail)
+-		p = mempool_alloc(nfs_commit_mempool, GFP_NOIO);
+-	else {
+-		/* It is OK to do some reclaim, not no safe to wait
+-		 * for anything to be returned to the pool.
+-		 * mempool_alloc() cannot handle that particular combination,
+-		 * so we need two separate attempts.
+-		 */
++	p = kmem_cache_zalloc(nfs_cdata_cachep, nfs_io_gfp_mask());
++	if (!p) {
+ 		p = mempool_alloc(nfs_commit_mempool, GFP_NOWAIT);
+-		if (!p)
+-			p = kmem_cache_alloc(nfs_cdata_cachep, GFP_NOIO |
+-					     __GFP_NOWARN | __GFP_NORETRY);
+ 		if (!p)
+ 			return NULL;
++		memset(p, 0, sizeof(*p));
+ 	}
+-
+-	memset(p, 0, sizeof(*p));
+ 	INIT_LIST_HEAD(&p->pages);
+ 	return p;
+ }
+@@ -104,9 +94,15 @@ EXPORT_SYMBOL_GPL(nfs_commit_free);
+ 
+ static struct nfs_pgio_header *nfs_writehdr_alloc(void)
+ {
+-	struct nfs_pgio_header *p = mempool_alloc(nfs_wdata_mempool, GFP_KERNEL);
++	struct nfs_pgio_header *p;
+ 
+-	memset(p, 0, sizeof(*p));
++	p = kmem_cache_zalloc(nfs_wdata_cachep, nfs_io_gfp_mask());
++	if (!p) {
++		p = mempool_alloc(nfs_wdata_mempool, GFP_NOWAIT);
++		if (!p)
++			return NULL;
++		memset(p, 0, sizeof(*p));
++	}
+ 	p->rw_mode = FMODE_WRITE;
+ 	return p;
+ }
+@@ -1825,7 +1821,11 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how,
+ 	if (list_empty(head))
+ 		return 0;
+ 
+-	data = nfs_commitdata_alloc(true);
++	data = nfs_commitdata_alloc();
++	if (!data) {
++		nfs_retry_commit(head, NULL, cinfo, -1);
++		return -ENOMEM;
++	}
+ 
+ 	/* Set up the argument struct */
+ 	nfs_init_commit(data, head, NULL, cinfo);
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index a673a359e20b1..7b2a13f03cc74 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -218,6 +218,15 @@ struct gpio_irq_chip {
+ 	 */
+ 	bool per_parent_data;
+ 
++	/**
++	 * @initialized:
++	 *
++	 * Flag to track GPIO chip irq member's initialization.
++	 * This flag will make sure GPIO chip irq members are not used
++	 * before they are initialized.
++	 */
++	bool initialized;
++
+ 	/**
+ 	 * @init_hw: optional routine to initialize hardware before
+ 	 * an IRQ chip will be added. This is quite useful when
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index a59d25f193857..b8641dc0ee661 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -51,7 +51,7 @@ struct ipv6_devconf {
+ 	__s32		use_optimistic;
+ #endif
+ #ifdef CONFIG_IPV6_MROUTE
+-	__s32		mc_forwarding;
++	atomic_t	mc_forwarding;
+ #endif
+ 	__s32		disable_ipv6;
+ 	__s32		drop_unicast_in_l2_multicast;
+diff --git a/include/linux/mc146818rtc.h b/include/linux/mc146818rtc.h
+index 0661af17a7584..37c74e25f53dd 100644
+--- a/include/linux/mc146818rtc.h
++++ b/include/linux/mc146818rtc.h
+@@ -123,7 +123,8 @@ struct cmos_rtc_board_info {
+ #define RTC_IO_EXTENT_USED      RTC_IO_EXTENT
+ #endif /* ARCH_RTC_LOCATION */
+ 
+-unsigned int mc146818_get_time(struct rtc_time *time);
++bool mc146818_does_rtc_work(void);
++int mc146818_get_time(struct rtc_time *time);
+ int mc146818_set_time(struct rtc_time *time);
+ 
+ #endif /* _MC146818RTC_H */
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index aed44e9b5d899..c7a0d500b396f 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1389,13 +1389,16 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
+ 
+ static inline struct mem_section *__nr_to_section(unsigned long nr)
+ {
++	unsigned long root = SECTION_NR_TO_ROOT(nr);
++
++	if (unlikely(root >= NR_SECTION_ROOTS))
++		return NULL;
++
+ #ifdef CONFIG_SPARSEMEM_EXTREME
+-	if (!mem_section)
++	if (!mem_section || !mem_section[root])
+ 		return NULL;
+ #endif
+-	if (!mem_section[SECTION_NR_TO_ROOT(nr)])
+-		return NULL;
+-	return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
++	return &mem_section[root][nr & SECTION_ROOT_MASK];
+ }
+ extern size_t mem_section_usage_size(void);
+ 
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 29a2ab5de1daa..445e9343be3e3 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -512,10 +512,10 @@ static inline const struct cred *nfs_file_cred(struct file *file)
+  * linux/fs/nfs/direct.c
+  */
+ extern ssize_t nfs_direct_IO(struct kiocb *, struct iov_iter *);
+-extern ssize_t nfs_file_direct_read(struct kiocb *iocb,
+-			struct iov_iter *iter);
+-extern ssize_t nfs_file_direct_write(struct kiocb *iocb,
+-			struct iov_iter *iter);
++ssize_t nfs_file_direct_read(struct kiocb *iocb,
++			     struct iov_iter *iter, bool swap);
++ssize_t nfs_file_direct_write(struct kiocb *iocb,
++			      struct iov_iter *iter, bool swap);
+ 
+ /*
+  * linux/fs/nfs/dir.c
+@@ -584,7 +584,7 @@ extern int nfs_wb_all(struct inode *inode);
+ extern int nfs_wb_page(struct inode *inode, struct page *page);
+ extern int nfs_wb_page_cancel(struct inode *inode, struct page* page);
+ extern int  nfs_commit_inode(struct inode *, int);
+-extern struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail);
++extern struct nfs_commit_data *nfs_commitdata_alloc(void);
+ extern void nfs_commit_free(struct nfs_commit_data *data);
+ bool nfs_commit_end(struct nfs_mds_commit_info *cinfo);
+ 
+diff --git a/include/linux/static_call.h b/include/linux/static_call.h
+index 3e56a9751c062..fcc5b48989b3c 100644
+--- a/include/linux/static_call.h
++++ b/include/linux/static_call.h
+@@ -248,10 +248,7 @@ static inline int static_call_text_reserved(void *start, void *end)
+ 	return 0;
+ }
+ 
+-static inline long __static_call_return0(void)
+-{
+-	return 0;
+-}
++extern long __static_call_return0(void);
+ 
+ #define EXPORT_STATIC_CALL(name)					\
+ 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
+diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
+index ef9a44b6cf5d5..ae6f4838ab755 100644
+--- a/include/linux/vfio_pci_core.h
++++ b/include/linux/vfio_pci_core.h
+@@ -159,8 +159,17 @@ extern ssize_t vfio_pci_config_rw(struct vfio_pci_core_device *vdev,
+ extern ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+ 			       size_t count, loff_t *ppos, bool iswrite);
+ 
++#ifdef CONFIG_VFIO_PCI_VGA
+ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+ 			       size_t count, loff_t *ppos, bool iswrite);
++#else
++static inline ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev,
++				      char __user *buf, size_t count,
++				      loff_t *ppos, bool iswrite)
++{
++	return -EINVAL;
++}
++#endif
+ 
+ extern long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
+ 			       uint64_t data, int count, int fd);
+diff --git a/include/net/arp.h b/include/net/arp.h
+index 4950191f6b2bf..4a23a97195f33 100644
+--- a/include/net/arp.h
++++ b/include/net/arp.h
+@@ -71,6 +71,7 @@ void arp_send(int type, int ptype, __be32 dest_ip,
+ 	      const unsigned char *src_hw, const unsigned char *th);
+ int arp_mc_map(__be32 addr, u8 *haddr, struct net_device *dev, int dir);
+ void arp_ifdown(struct net_device *dev);
++int arp_invalidate(struct net_device *dev, __be32 ip, bool force);
+ 
+ struct sk_buff *arp_create(int type, int ptype, __be32 dest_ip,
+ 			   struct net_device *dev, __be32 src_ip,
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index a1093994e5e43..7203164671271 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -204,19 +204,21 @@ void bt_err_ratelimited(const char *fmt, ...);
+ #define BT_DBG(fmt, ...)	pr_debug(fmt "\n", ##__VA_ARGS__)
+ #endif
+ 
++#define bt_dev_name(hdev) ((hdev) ? (hdev)->name : "null")
++
+ #define bt_dev_info(hdev, fmt, ...)				\
+-	BT_INFO("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_INFO("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_warn(hdev, fmt, ...)				\
+-	BT_WARN("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_WARN("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_err(hdev, fmt, ...)				\
+-	BT_ERR("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_ERR("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_dbg(hdev, fmt, ...)				\
+-	BT_DBG("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	BT_DBG("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ 
+ #define bt_dev_warn_ratelimited(hdev, fmt, ...)			\
+-	bt_warn_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	bt_warn_ratelimited("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_err_ratelimited(hdev, fmt, ...)			\
+-	bt_err_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++	bt_err_ratelimited("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ 
+ /* Connection and socket states */
+ enum {
+diff --git a/include/net/mctp.h b/include/net/mctp.h
+index 7e35ec79b909c..204ae3aebc0da 100644
+--- a/include/net/mctp.h
++++ b/include/net/mctp.h
+@@ -36,8 +36,6 @@ struct mctp_hdr {
+ #define MCTP_HDR_TAG_SHIFT	0
+ #define MCTP_HDR_TAG_MASK	GENMASK(2, 0)
+ 
+-#define MCTP_HEADER_MAXLEN	4
+-
+ #define MCTP_INITIAL_DEFAULT_NET	1
+ 
+ static inline bool mctp_address_ok(mctp_eid_t eid)
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index bb5fa59140321..2ba326f9e004d 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -479,4 +479,10 @@ static inline void fnhe_genid_bump(struct net *net)
+ 	atomic_inc(&net->fnhe_genid);
+ }
+ 
++#ifdef CONFIG_NET
++void net_ns_init(void);
++#else
++static inline void net_ns_init(void) {}
++#endif
++
+ #endif /* __NET_NET_NAMESPACE_H */
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 54eb1654d9ecc..5c6a5e11c4bc4 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -993,7 +993,6 @@ DEFINE_RPC_XPRT_LIFETIME_EVENT(connect);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_auto);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_done);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_force);
+-DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_cleanup);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(destroy);
+ 
+ DECLARE_EVENT_CLASS(rpc_xprt_event,
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 2b3c3f83076c8..7cb2e58a67fdb 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -5414,7 +5414,8 @@ struct bpf_sock {
+ 	__u32 src_ip4;
+ 	__u32 src_ip6[4];
+ 	__u32 src_port;		/* host byte order */
+-	__u32 dst_port;		/* network byte order */
++	__be16 dst_port;	/* network byte order */
++	__u16 :16;		/* zero padding */
+ 	__u32 dst_ip4;
+ 	__u32 dst_ip6[4];
+ 	__u32 state;
+@@ -6292,7 +6293,8 @@ struct bpf_sk_lookup {
+ 	__u32 protocol;		/* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */
+ 	__u32 remote_ip4;	/* Network byte order */
+ 	__u32 remote_ip6[4];	/* Network byte order */
+-	__u32 remote_port;	/* Network byte order */
++	__be16 remote_port;	/* Network byte order */
++	__u16 :16;		/* Zero padding */
+ 	__u32 local_ip4;	/* Network byte order */
+ 	__u32 local_ip6[4];	/* Network byte order */
+ 	__u32 local_port;	/* Host byte order */
+diff --git a/include/uapi/linux/can/isotp.h b/include/uapi/linux/can/isotp.h
+index c55935b64ccc8..590f8aea2b6d2 100644
+--- a/include/uapi/linux/can/isotp.h
++++ b/include/uapi/linux/can/isotp.h
+@@ -137,20 +137,16 @@ struct can_isotp_ll_options {
+ #define CAN_ISOTP_WAIT_TX_DONE	0x400	/* wait for tx completion */
+ #define CAN_ISOTP_SF_BROADCAST	0x800	/* 1-to-N functional addressing */
+ 
+-/* default values */
++/* protocol machine default values */
+ 
+ #define CAN_ISOTP_DEFAULT_FLAGS		0
+ #define CAN_ISOTP_DEFAULT_EXT_ADDRESS	0x00
+ #define CAN_ISOTP_DEFAULT_PAD_CONTENT	0xCC /* prevent bit-stuffing */
+-#define CAN_ISOTP_DEFAULT_FRAME_TXTIME	0
++#define CAN_ISOTP_DEFAULT_FRAME_TXTIME	50000 /* 50 micro seconds */
+ #define CAN_ISOTP_DEFAULT_RECV_BS	0
+ #define CAN_ISOTP_DEFAULT_RECV_STMIN	0x00
+ #define CAN_ISOTP_DEFAULT_RECV_WFTMAX	0
+ 
+-#define CAN_ISOTP_DEFAULT_LL_MTU	CAN_MTU
+-#define CAN_ISOTP_DEFAULT_LL_TX_DL	CAN_MAX_DLEN
+-#define CAN_ISOTP_DEFAULT_LL_TX_FLAGS	0
+-
+ /*
+  * Remark on CAN_ISOTP_DEFAULT_RECV_* values:
+  *
+@@ -162,4 +158,24 @@ struct can_isotp_ll_options {
+  * consistency and copied directly into the flow control (FC) frame.
+  */
+ 
++/* link layer default values => make use of Classical CAN frames */
++
++#define CAN_ISOTP_DEFAULT_LL_MTU	CAN_MTU
++#define CAN_ISOTP_DEFAULT_LL_TX_DL	CAN_MAX_DLEN
++#define CAN_ISOTP_DEFAULT_LL_TX_FLAGS	0
++
++/*
++ * The CAN_ISOTP_DEFAULT_FRAME_TXTIME has become a non-zero value as
++ * it only makes sense for isotp implementation tests to run without
++ * a N_As value. As user space applications usually do not set the
++ * frame_txtime element of struct can_isotp_options the new in-kernel
++ * default is very likely overwritten with zero when the sockopt()
++ * CAN_ISOTP_OPTS is invoked.
++ * To make sure that a N_As value of zero is only set intentional the
++ * value '0' is now interpreted as 'do not change the current value'.
++ * When a frame_txtime of zero is required for testing purposes this
++ * CAN_ISOTP_FRAME_TXTIME_ZERO u32 value has to be set in frame_txtime.
++ */
++#define CAN_ISOTP_FRAME_TXTIME_ZERO	0xFFFFFFFF
++
+ #endif /* !_UAPI_CAN_ISOTP_H */
+diff --git a/init/main.c b/init/main.c
+index bb984ed79de0e..792a8d9cc5602 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -99,6 +99,7 @@
+ #include <linux/kcsan.h>
+ #include <linux/init_syscalls.h>
+ #include <linux/stackdepot.h>
++#include <net/net_namespace.h>
+ 
+ #include <asm/io.h>
+ #include <asm/bugs.h>
+@@ -1113,6 +1114,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	key_init();
+ 	security_init();
+ 	dbg_late_init();
++	net_ns_init();
+ 	vfs_caches_init();
+ 	pagecache_init();
+ 	signals_init();
+@@ -1187,7 +1189,7 @@ static int __init initcall_blacklist(char *str)
+ 		}
+ 	} while (str_entry);
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static bool __init_or_module initcall_blacklisted(initcall_t fn)
+@@ -1449,7 +1451,9 @@ static noinline void __init kernel_init_freeable(void);
+ bool rodata_enabled __ro_after_init = true;
+ static int __init set_debug_rodata(char *str)
+ {
+-	return strtobool(str, &rodata_enabled);
++	if (strtobool(str, &rodata_enabled))
++		pr_warn("Invalid option string for rodata: '%s'\n", str);
++	return 1;
+ }
+ __setup("rodata=", set_debug_rodata);
+ #endif
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 186c49582f45b..8020b4a07b873 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -112,7 +112,8 @@ obj-$(CONFIG_CPU_PM) += cpu_pm.o
+ obj-$(CONFIG_BPF) += bpf/
+ obj-$(CONFIG_KCSAN) += kcsan/
+ obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
+-obj-$(CONFIG_HAVE_STATIC_CALL_INLINE) += static_call.o
++obj-$(CONFIG_HAVE_STATIC_CALL) += static_call.o
++obj-$(CONFIG_HAVE_STATIC_CALL_INLINE) += static_call_inline.o
+ obj-$(CONFIG_CFI_CLANG) += cfi.o
+ 
+ obj-$(CONFIG_PERF_EVENTS) += events/
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index a0e196f973b95..85929109e588f 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11626,6 +11626,9 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ 
+ 	event->state		= PERF_EVENT_STATE_INACTIVE;
+ 
++	if (parent_event)
++		event->event_caps = parent_event->event_caps;
++
+ 	if (event->attr.sigtrap)
+ 		atomic_set(&event->event_limit, 1);
+ 
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 0e4b8214af31c..4810503f79a57 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5889,7 +5889,7 @@ static bool try_steal_cookie(int this, int that)
+ 		if (p == src->core_pick || p == src->curr)
+ 			goto next;
+ 
+-		if (!cpumask_test_cpu(this, &p->cpus_mask))
++		if (!is_cpu_allowed(p, this))
+ 			goto next;
+ 
+ 		if (p->core_occupation > dst->idle->core_occupation)
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index 43ba0b1e0edbb..e9c3e69f38379 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -1,548 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <linux/init.h>
+ #include <linux/static_call.h>
+-#include <linux/bug.h>
+-#include <linux/smp.h>
+-#include <linux/sort.h>
+-#include <linux/slab.h>
+-#include <linux/module.h>
+-#include <linux/cpu.h>
+-#include <linux/processor.h>
+-#include <asm/sections.h>
+-
+-extern struct static_call_site __start_static_call_sites[],
+-			       __stop_static_call_sites[];
+-extern struct static_call_tramp_key __start_static_call_tramp_key[],
+-				    __stop_static_call_tramp_key[];
+-
+-static bool static_call_initialized;
+-
+-/* mutex to protect key modules/sites */
+-static DEFINE_MUTEX(static_call_mutex);
+-
+-static void static_call_lock(void)
+-{
+-	mutex_lock(&static_call_mutex);
+-}
+-
+-static void static_call_unlock(void)
+-{
+-	mutex_unlock(&static_call_mutex);
+-}
+-
+-static inline void *static_call_addr(struct static_call_site *site)
+-{
+-	return (void *)((long)site->addr + (long)&site->addr);
+-}
+-
+-static inline unsigned long __static_call_key(const struct static_call_site *site)
+-{
+-	return (long)site->key + (long)&site->key;
+-}
+-
+-static inline struct static_call_key *static_call_key(const struct static_call_site *site)
+-{
+-	return (void *)(__static_call_key(site) & ~STATIC_CALL_SITE_FLAGS);
+-}
+-
+-/* These assume the key is word-aligned. */
+-static inline bool static_call_is_init(struct static_call_site *site)
+-{
+-	return __static_call_key(site) & STATIC_CALL_SITE_INIT;
+-}
+-
+-static inline bool static_call_is_tail(struct static_call_site *site)
+-{
+-	return __static_call_key(site) & STATIC_CALL_SITE_TAIL;
+-}
+-
+-static inline void static_call_set_init(struct static_call_site *site)
+-{
+-	site->key = (__static_call_key(site) | STATIC_CALL_SITE_INIT) -
+-		    (long)&site->key;
+-}
+-
+-static int static_call_site_cmp(const void *_a, const void *_b)
+-{
+-	const struct static_call_site *a = _a;
+-	const struct static_call_site *b = _b;
+-	const struct static_call_key *key_a = static_call_key(a);
+-	const struct static_call_key *key_b = static_call_key(b);
+-
+-	if (key_a < key_b)
+-		return -1;
+-
+-	if (key_a > key_b)
+-		return 1;
+-
+-	return 0;
+-}
+-
+-static void static_call_site_swap(void *_a, void *_b, int size)
+-{
+-	long delta = (unsigned long)_a - (unsigned long)_b;
+-	struct static_call_site *a = _a;
+-	struct static_call_site *b = _b;
+-	struct static_call_site tmp = *a;
+-
+-	a->addr = b->addr  - delta;
+-	a->key  = b->key   - delta;
+-
+-	b->addr = tmp.addr + delta;
+-	b->key  = tmp.key  + delta;
+-}
+-
+-static inline void static_call_sort_entries(struct static_call_site *start,
+-					    struct static_call_site *stop)
+-{
+-	sort(start, stop - start, sizeof(struct static_call_site),
+-	     static_call_site_cmp, static_call_site_swap);
+-}
+-
+-static inline bool static_call_key_has_mods(struct static_call_key *key)
+-{
+-	return !(key->type & 1);
+-}
+-
+-static inline struct static_call_mod *static_call_key_next(struct static_call_key *key)
+-{
+-	if (!static_call_key_has_mods(key))
+-		return NULL;
+-
+-	return key->mods;
+-}
+-
+-static inline struct static_call_site *static_call_key_sites(struct static_call_key *key)
+-{
+-	if (static_call_key_has_mods(key))
+-		return NULL;
+-
+-	return (struct static_call_site *)(key->type & ~1);
+-}
+-
+-void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+-{
+-	struct static_call_site *site, *stop;
+-	struct static_call_mod *site_mod, first;
+-
+-	cpus_read_lock();
+-	static_call_lock();
+-
+-	if (key->func == func)
+-		goto done;
+-
+-	key->func = func;
+-
+-	arch_static_call_transform(NULL, tramp, func, false);
+-
+-	/*
+-	 * If uninitialized, we'll not update the callsites, but they still
+-	 * point to the trampoline and we just patched that.
+-	 */
+-	if (WARN_ON_ONCE(!static_call_initialized))
+-		goto done;
+-
+-	first = (struct static_call_mod){
+-		.next = static_call_key_next(key),
+-		.mod = NULL,
+-		.sites = static_call_key_sites(key),
+-	};
+-
+-	for (site_mod = &first; site_mod; site_mod = site_mod->next) {
+-		bool init = system_state < SYSTEM_RUNNING;
+-		struct module *mod = site_mod->mod;
+-
+-		if (!site_mod->sites) {
+-			/*
+-			 * This can happen if the static call key is defined in
+-			 * a module which doesn't use it.
+-			 *
+-			 * It also happens in the has_mods case, where the
+-			 * 'first' entry has no sites associated with it.
+-			 */
+-			continue;
+-		}
+-
+-		stop = __stop_static_call_sites;
+-
+-		if (mod) {
+-#ifdef CONFIG_MODULES
+-			stop = mod->static_call_sites +
+-			       mod->num_static_call_sites;
+-			init = mod->state == MODULE_STATE_COMING;
+-#endif
+-		}
+-
+-		for (site = site_mod->sites;
+-		     site < stop && static_call_key(site) == key; site++) {
+-			void *site_addr = static_call_addr(site);
+-
+-			if (!init && static_call_is_init(site))
+-				continue;
+-
+-			if (!kernel_text_address((unsigned long)site_addr)) {
+-				/*
+-				 * This skips patching built-in __exit, which
+-				 * is part of init_section_contains() but is
+-				 * not part of kernel_text_address().
+-				 *
+-				 * Skipping built-in __exit is fine since it
+-				 * will never be executed.
+-				 */
+-				WARN_ONCE(!static_call_is_init(site),
+-					  "can't patch static call site at %pS",
+-					  site_addr);
+-				continue;
+-			}
+-
+-			arch_static_call_transform(site_addr, NULL, func,
+-						   static_call_is_tail(site));
+-		}
+-	}
+-
+-done:
+-	static_call_unlock();
+-	cpus_read_unlock();
+-}
+-EXPORT_SYMBOL_GPL(__static_call_update);
+-
+-static int __static_call_init(struct module *mod,
+-			      struct static_call_site *start,
+-			      struct static_call_site *stop)
+-{
+-	struct static_call_site *site;
+-	struct static_call_key *key, *prev_key = NULL;
+-	struct static_call_mod *site_mod;
+-
+-	if (start == stop)
+-		return 0;
+-
+-	static_call_sort_entries(start, stop);
+-
+-	for (site = start; site < stop; site++) {
+-		void *site_addr = static_call_addr(site);
+-
+-		if ((mod && within_module_init((unsigned long)site_addr, mod)) ||
+-		    (!mod && init_section_contains(site_addr, 1)))
+-			static_call_set_init(site);
+-
+-		key = static_call_key(site);
+-		if (key != prev_key) {
+-			prev_key = key;
+-
+-			/*
+-			 * For vmlinux (!mod) avoid the allocation by storing
+-			 * the sites pointer in the key itself. Also see
+-			 * __static_call_update()'s @first.
+-			 *
+-			 * This allows architectures (eg. x86) to call
+-			 * static_call_init() before memory allocation works.
+-			 */
+-			if (!mod) {
+-				key->sites = site;
+-				key->type |= 1;
+-				goto do_transform;
+-			}
+-
+-			site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
+-			if (!site_mod)
+-				return -ENOMEM;
+-
+-			/*
+-			 * When the key has a direct sites pointer, extract
+-			 * that into an explicit struct static_call_mod, so we
+-			 * can have a list of modules.
+-			 */
+-			if (static_call_key_sites(key)) {
+-				site_mod->mod = NULL;
+-				site_mod->next = NULL;
+-				site_mod->sites = static_call_key_sites(key);
+-
+-				key->mods = site_mod;
+-
+-				site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
+-				if (!site_mod)
+-					return -ENOMEM;
+-			}
+-
+-			site_mod->mod = mod;
+-			site_mod->sites = site;
+-			site_mod->next = static_call_key_next(key);
+-			key->mods = site_mod;
+-		}
+-
+-do_transform:
+-		arch_static_call_transform(site_addr, NULL, key->func,
+-				static_call_is_tail(site));
+-	}
+-
+-	return 0;
+-}
+-
+-static int addr_conflict(struct static_call_site *site, void *start, void *end)
+-{
+-	unsigned long addr = (unsigned long)static_call_addr(site);
+-
+-	if (addr <= (unsigned long)end &&
+-	    addr + CALL_INSN_SIZE > (unsigned long)start)
+-		return 1;
+-
+-	return 0;
+-}
+-
+-static int __static_call_text_reserved(struct static_call_site *iter_start,
+-				       struct static_call_site *iter_stop,
+-				       void *start, void *end, bool init)
+-{
+-	struct static_call_site *iter = iter_start;
+-
+-	while (iter < iter_stop) {
+-		if (init || !static_call_is_init(iter)) {
+-			if (addr_conflict(iter, start, end))
+-				return 1;
+-		}
+-		iter++;
+-	}
+-
+-	return 0;
+-}
+-
+-#ifdef CONFIG_MODULES
+-
+-static int __static_call_mod_text_reserved(void *start, void *end)
+-{
+-	struct module *mod;
+-	int ret;
+-
+-	preempt_disable();
+-	mod = __module_text_address((unsigned long)start);
+-	WARN_ON_ONCE(__module_text_address((unsigned long)end) != mod);
+-	if (!try_module_get(mod))
+-		mod = NULL;
+-	preempt_enable();
+-
+-	if (!mod)
+-		return 0;
+-
+-	ret = __static_call_text_reserved(mod->static_call_sites,
+-			mod->static_call_sites + mod->num_static_call_sites,
+-			start, end, mod->state == MODULE_STATE_COMING);
+-
+-	module_put(mod);
+-
+-	return ret;
+-}
+-
+-static unsigned long tramp_key_lookup(unsigned long addr)
+-{
+-	struct static_call_tramp_key *start = __start_static_call_tramp_key;
+-	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
+-	struct static_call_tramp_key *tramp_key;
+-
+-	for (tramp_key = start; tramp_key != stop; tramp_key++) {
+-		unsigned long tramp;
+-
+-		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
+-		if (tramp == addr)
+-			return (long)tramp_key->key + (long)&tramp_key->key;
+-	}
+-
+-	return 0;
+-}
+-
+-static int static_call_add_module(struct module *mod)
+-{
+-	struct static_call_site *start = mod->static_call_sites;
+-	struct static_call_site *stop = start + mod->num_static_call_sites;
+-	struct static_call_site *site;
+-
+-	for (site = start; site != stop; site++) {
+-		unsigned long s_key = __static_call_key(site);
+-		unsigned long addr = s_key & ~STATIC_CALL_SITE_FLAGS;
+-		unsigned long key;
+-
+-		/*
+-		 * Is the key is exported, 'addr' points to the key, which
+-		 * means modules are allowed to call static_call_update() on
+-		 * it.
+-		 *
+-		 * Otherwise, the key isn't exported, and 'addr' points to the
+-		 * trampoline so we need to lookup the key.
+-		 *
+-		 * We go through this dance to prevent crazy modules from
+-		 * abusing sensitive static calls.
+-		 */
+-		if (!kernel_text_address(addr))
+-			continue;
+-
+-		key = tramp_key_lookup(addr);
+-		if (!key) {
+-			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+-				static_call_addr(site));
+-			return -EINVAL;
+-		}
+-
+-		key |= s_key & STATIC_CALL_SITE_FLAGS;
+-		site->key = key - (long)&site->key;
+-	}
+-
+-	return __static_call_init(mod, start, stop);
+-}
+-
+-static void static_call_del_module(struct module *mod)
+-{
+-	struct static_call_site *start = mod->static_call_sites;
+-	struct static_call_site *stop = mod->static_call_sites +
+-					mod->num_static_call_sites;
+-	struct static_call_key *key, *prev_key = NULL;
+-	struct static_call_mod *site_mod, **prev;
+-	struct static_call_site *site;
+-
+-	for (site = start; site < stop; site++) {
+-		key = static_call_key(site);
+-		if (key == prev_key)
+-			continue;
+-
+-		prev_key = key;
+-
+-		for (prev = &key->mods, site_mod = key->mods;
+-		     site_mod && site_mod->mod != mod;
+-		     prev = &site_mod->next, site_mod = site_mod->next)
+-			;
+-
+-		if (!site_mod)
+-			continue;
+-
+-		*prev = site_mod->next;
+-		kfree(site_mod);
+-	}
+-}
+-
+-static int static_call_module_notify(struct notifier_block *nb,
+-				     unsigned long val, void *data)
+-{
+-	struct module *mod = data;
+-	int ret = 0;
+-
+-	cpus_read_lock();
+-	static_call_lock();
+-
+-	switch (val) {
+-	case MODULE_STATE_COMING:
+-		ret = static_call_add_module(mod);
+-		if (ret) {
+-			WARN(1, "Failed to allocate memory for static calls");
+-			static_call_del_module(mod);
+-		}
+-		break;
+-	case MODULE_STATE_GOING:
+-		static_call_del_module(mod);
+-		break;
+-	}
+-
+-	static_call_unlock();
+-	cpus_read_unlock();
+-
+-	return notifier_from_errno(ret);
+-}
+-
+-static struct notifier_block static_call_module_nb = {
+-	.notifier_call = static_call_module_notify,
+-};
+-
+-#else
+-
+-static inline int __static_call_mod_text_reserved(void *start, void *end)
+-{
+-	return 0;
+-}
+-
+-#endif /* CONFIG_MODULES */
+-
+-int static_call_text_reserved(void *start, void *end)
+-{
+-	bool init = system_state < SYSTEM_RUNNING;
+-	int ret = __static_call_text_reserved(__start_static_call_sites,
+-			__stop_static_call_sites, start, end, init);
+-
+-	if (ret)
+-		return ret;
+-
+-	return __static_call_mod_text_reserved(start, end);
+-}
+-
+-int __init static_call_init(void)
+-{
+-	int ret;
+-
+-	if (static_call_initialized)
+-		return 0;
+-
+-	cpus_read_lock();
+-	static_call_lock();
+-	ret = __static_call_init(NULL, __start_static_call_sites,
+-				 __stop_static_call_sites);
+-	static_call_unlock();
+-	cpus_read_unlock();
+-
+-	if (ret) {
+-		pr_err("Failed to allocate memory for static_call!\n");
+-		BUG();
+-	}
+-
+-	static_call_initialized = true;
+-
+-#ifdef CONFIG_MODULES
+-	register_module_notifier(&static_call_module_nb);
+-#endif
+-	return 0;
+-}
+-early_initcall(static_call_init);
+ 
+ long __static_call_return0(void)
+ {
+ 	return 0;
+ }
+-
+-#ifdef CONFIG_STATIC_CALL_SELFTEST
+-
+-static int func_a(int x)
+-{
+-	return x+1;
+-}
+-
+-static int func_b(int x)
+-{
+-	return x+2;
+-}
+-
+-DEFINE_STATIC_CALL(sc_selftest, func_a);
+-
+-static struct static_call_data {
+-      int (*func)(int);
+-      int val;
+-      int expect;
+-} static_call_data [] __initdata = {
+-      { NULL,   2, 3 },
+-      { func_b, 2, 4 },
+-      { func_a, 2, 3 }
+-};
+-
+-static int __init test_static_call_init(void)
+-{
+-      int i;
+-
+-      for (i = 0; i < ARRAY_SIZE(static_call_data); i++ ) {
+-	      struct static_call_data *scd = &static_call_data[i];
+-
+-              if (scd->func)
+-                      static_call_update(sc_selftest, scd->func);
+-
+-              WARN_ON(static_call(sc_selftest)(scd->val) != scd->expect);
+-      }
+-
+-      return 0;
+-}
+-early_initcall(test_static_call_init);
+-
+-#endif /* CONFIG_STATIC_CALL_SELFTEST */
++EXPORT_SYMBOL_GPL(__static_call_return0);
+diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
+new file mode 100644
+index 0000000000000..dc5665b628140
+--- /dev/null
++++ b/kernel/static_call_inline.c
+@@ -0,0 +1,543 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <linux/init.h>
++#include <linux/static_call.h>
++#include <linux/bug.h>
++#include <linux/smp.h>
++#include <linux/sort.h>
++#include <linux/slab.h>
++#include <linux/module.h>
++#include <linux/cpu.h>
++#include <linux/processor.h>
++#include <asm/sections.h>
++
++extern struct static_call_site __start_static_call_sites[],
++			       __stop_static_call_sites[];
++extern struct static_call_tramp_key __start_static_call_tramp_key[],
++				    __stop_static_call_tramp_key[];
++
++static bool static_call_initialized;
++
++/* mutex to protect key modules/sites */
++static DEFINE_MUTEX(static_call_mutex);
++
++static void static_call_lock(void)
++{
++	mutex_lock(&static_call_mutex);
++}
++
++static void static_call_unlock(void)
++{
++	mutex_unlock(&static_call_mutex);
++}
++
++static inline void *static_call_addr(struct static_call_site *site)
++{
++	return (void *)((long)site->addr + (long)&site->addr);
++}
++
++static inline unsigned long __static_call_key(const struct static_call_site *site)
++{
++	return (long)site->key + (long)&site->key;
++}
++
++static inline struct static_call_key *static_call_key(const struct static_call_site *site)
++{
++	return (void *)(__static_call_key(site) & ~STATIC_CALL_SITE_FLAGS);
++}
++
++/* These assume the key is word-aligned. */
++static inline bool static_call_is_init(struct static_call_site *site)
++{
++	return __static_call_key(site) & STATIC_CALL_SITE_INIT;
++}
++
++static inline bool static_call_is_tail(struct static_call_site *site)
++{
++	return __static_call_key(site) & STATIC_CALL_SITE_TAIL;
++}
++
++static inline void static_call_set_init(struct static_call_site *site)
++{
++	site->key = (__static_call_key(site) | STATIC_CALL_SITE_INIT) -
++		    (long)&site->key;
++}
++
++static int static_call_site_cmp(const void *_a, const void *_b)
++{
++	const struct static_call_site *a = _a;
++	const struct static_call_site *b = _b;
++	const struct static_call_key *key_a = static_call_key(a);
++	const struct static_call_key *key_b = static_call_key(b);
++
++	if (key_a < key_b)
++		return -1;
++
++	if (key_a > key_b)
++		return 1;
++
++	return 0;
++}
++
++static void static_call_site_swap(void *_a, void *_b, int size)
++{
++	long delta = (unsigned long)_a - (unsigned long)_b;
++	struct static_call_site *a = _a;
++	struct static_call_site *b = _b;
++	struct static_call_site tmp = *a;
++
++	a->addr = b->addr  - delta;
++	a->key  = b->key   - delta;
++
++	b->addr = tmp.addr + delta;
++	b->key  = tmp.key  + delta;
++}
++
++static inline void static_call_sort_entries(struct static_call_site *start,
++					    struct static_call_site *stop)
++{
++	sort(start, stop - start, sizeof(struct static_call_site),
++	     static_call_site_cmp, static_call_site_swap);
++}
++
++static inline bool static_call_key_has_mods(struct static_call_key *key)
++{
++	return !(key->type & 1);
++}
++
++static inline struct static_call_mod *static_call_key_next(struct static_call_key *key)
++{
++	if (!static_call_key_has_mods(key))
++		return NULL;
++
++	return key->mods;
++}
++
++static inline struct static_call_site *static_call_key_sites(struct static_call_key *key)
++{
++	if (static_call_key_has_mods(key))
++		return NULL;
++
++	return (struct static_call_site *)(key->type & ~1);
++}
++
++void __static_call_update(struct static_call_key *key, void *tramp, void *func)
++{
++	struct static_call_site *site, *stop;
++	struct static_call_mod *site_mod, first;
++
++	cpus_read_lock();
++	static_call_lock();
++
++	if (key->func == func)
++		goto done;
++
++	key->func = func;
++
++	arch_static_call_transform(NULL, tramp, func, false);
++
++	/*
++	 * If uninitialized, we'll not update the callsites, but they still
++	 * point to the trampoline and we just patched that.
++	 */
++	if (WARN_ON_ONCE(!static_call_initialized))
++		goto done;
++
++	first = (struct static_call_mod){
++		.next = static_call_key_next(key),
++		.mod = NULL,
++		.sites = static_call_key_sites(key),
++	};
++
++	for (site_mod = &first; site_mod; site_mod = site_mod->next) {
++		bool init = system_state < SYSTEM_RUNNING;
++		struct module *mod = site_mod->mod;
++
++		if (!site_mod->sites) {
++			/*
++			 * This can happen if the static call key is defined in
++			 * a module which doesn't use it.
++			 *
++			 * It also happens in the has_mods case, where the
++			 * 'first' entry has no sites associated with it.
++			 */
++			continue;
++		}
++
++		stop = __stop_static_call_sites;
++
++		if (mod) {
++#ifdef CONFIG_MODULES
++			stop = mod->static_call_sites +
++			       mod->num_static_call_sites;
++			init = mod->state == MODULE_STATE_COMING;
++#endif
++		}
++
++		for (site = site_mod->sites;
++		     site < stop && static_call_key(site) == key; site++) {
++			void *site_addr = static_call_addr(site);
++
++			if (!init && static_call_is_init(site))
++				continue;
++
++			if (!kernel_text_address((unsigned long)site_addr)) {
++				/*
++				 * This skips patching built-in __exit, which
++				 * is part of init_section_contains() but is
++				 * not part of kernel_text_address().
++				 *
++				 * Skipping built-in __exit is fine since it
++				 * will never be executed.
++				 */
++				WARN_ONCE(!static_call_is_init(site),
++					  "can't patch static call site at %pS",
++					  site_addr);
++				continue;
++			}
++
++			arch_static_call_transform(site_addr, NULL, func,
++						   static_call_is_tail(site));
++		}
++	}
++
++done:
++	static_call_unlock();
++	cpus_read_unlock();
++}
++EXPORT_SYMBOL_GPL(__static_call_update);
++
++static int __static_call_init(struct module *mod,
++			      struct static_call_site *start,
++			      struct static_call_site *stop)
++{
++	struct static_call_site *site;
++	struct static_call_key *key, *prev_key = NULL;
++	struct static_call_mod *site_mod;
++
++	if (start == stop)
++		return 0;
++
++	static_call_sort_entries(start, stop);
++
++	for (site = start; site < stop; site++) {
++		void *site_addr = static_call_addr(site);
++
++		if ((mod && within_module_init((unsigned long)site_addr, mod)) ||
++		    (!mod && init_section_contains(site_addr, 1)))
++			static_call_set_init(site);
++
++		key = static_call_key(site);
++		if (key != prev_key) {
++			prev_key = key;
++
++			/*
++			 * For vmlinux (!mod) avoid the allocation by storing
++			 * the sites pointer in the key itself. Also see
++			 * __static_call_update()'s @first.
++			 *
++			 * This allows architectures (eg. x86) to call
++			 * static_call_init() before memory allocation works.
++			 */
++			if (!mod) {
++				key->sites = site;
++				key->type |= 1;
++				goto do_transform;
++			}
++
++			site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
++			if (!site_mod)
++				return -ENOMEM;
++
++			/*
++			 * When the key has a direct sites pointer, extract
++			 * that into an explicit struct static_call_mod, so we
++			 * can have a list of modules.
++			 */
++			if (static_call_key_sites(key)) {
++				site_mod->mod = NULL;
++				site_mod->next = NULL;
++				site_mod->sites = static_call_key_sites(key);
++
++				key->mods = site_mod;
++
++				site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
++				if (!site_mod)
++					return -ENOMEM;
++			}
++
++			site_mod->mod = mod;
++			site_mod->sites = site;
++			site_mod->next = static_call_key_next(key);
++			key->mods = site_mod;
++		}
++
++do_transform:
++		arch_static_call_transform(site_addr, NULL, key->func,
++				static_call_is_tail(site));
++	}
++
++	return 0;
++}
++
++static int addr_conflict(struct static_call_site *site, void *start, void *end)
++{
++	unsigned long addr = (unsigned long)static_call_addr(site);
++
++	if (addr <= (unsigned long)end &&
++	    addr + CALL_INSN_SIZE > (unsigned long)start)
++		return 1;
++
++	return 0;
++}
++
++static int __static_call_text_reserved(struct static_call_site *iter_start,
++				       struct static_call_site *iter_stop,
++				       void *start, void *end, bool init)
++{
++	struct static_call_site *iter = iter_start;
++
++	while (iter < iter_stop) {
++		if (init || !static_call_is_init(iter)) {
++			if (addr_conflict(iter, start, end))
++				return 1;
++		}
++		iter++;
++	}
++
++	return 0;
++}
++
++#ifdef CONFIG_MODULES
++
++static int __static_call_mod_text_reserved(void *start, void *end)
++{
++	struct module *mod;
++	int ret;
++
++	preempt_disable();
++	mod = __module_text_address((unsigned long)start);
++	WARN_ON_ONCE(__module_text_address((unsigned long)end) != mod);
++	if (!try_module_get(mod))
++		mod = NULL;
++	preempt_enable();
++
++	if (!mod)
++		return 0;
++
++	ret = __static_call_text_reserved(mod->static_call_sites,
++			mod->static_call_sites + mod->num_static_call_sites,
++			start, end, mod->state == MODULE_STATE_COMING);
++
++	module_put(mod);
++
++	return ret;
++}
++
++static unsigned long tramp_key_lookup(unsigned long addr)
++{
++	struct static_call_tramp_key *start = __start_static_call_tramp_key;
++	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
++	struct static_call_tramp_key *tramp_key;
++
++	for (tramp_key = start; tramp_key != stop; tramp_key++) {
++		unsigned long tramp;
++
++		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
++		if (tramp == addr)
++			return (long)tramp_key->key + (long)&tramp_key->key;
++	}
++
++	return 0;
++}
++
++static int static_call_add_module(struct module *mod)
++{
++	struct static_call_site *start = mod->static_call_sites;
++	struct static_call_site *stop = start + mod->num_static_call_sites;
++	struct static_call_site *site;
++
++	for (site = start; site != stop; site++) {
++		unsigned long s_key = __static_call_key(site);
++		unsigned long addr = s_key & ~STATIC_CALL_SITE_FLAGS;
++		unsigned long key;
++
++		/*
++		 * Is the key is exported, 'addr' points to the key, which
++		 * means modules are allowed to call static_call_update() on
++		 * it.
++		 *
++		 * Otherwise, the key isn't exported, and 'addr' points to the
++		 * trampoline so we need to lookup the key.
++		 *
++		 * We go through this dance to prevent crazy modules from
++		 * abusing sensitive static calls.
++		 */
++		if (!kernel_text_address(addr))
++			continue;
++
++		key = tramp_key_lookup(addr);
++		if (!key) {
++			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
++				static_call_addr(site));
++			return -EINVAL;
++		}
++
++		key |= s_key & STATIC_CALL_SITE_FLAGS;
++		site->key = key - (long)&site->key;
++	}
++
++	return __static_call_init(mod, start, stop);
++}
++
++static void static_call_del_module(struct module *mod)
++{
++	struct static_call_site *start = mod->static_call_sites;
++	struct static_call_site *stop = mod->static_call_sites +
++					mod->num_static_call_sites;
++	struct static_call_key *key, *prev_key = NULL;
++	struct static_call_mod *site_mod, **prev;
++	struct static_call_site *site;
++
++	for (site = start; site < stop; site++) {
++		key = static_call_key(site);
++		if (key == prev_key)
++			continue;
++
++		prev_key = key;
++
++		for (prev = &key->mods, site_mod = key->mods;
++		     site_mod && site_mod->mod != mod;
++		     prev = &site_mod->next, site_mod = site_mod->next)
++			;
++
++		if (!site_mod)
++			continue;
++
++		*prev = site_mod->next;
++		kfree(site_mod);
++	}
++}
++
++static int static_call_module_notify(struct notifier_block *nb,
++				     unsigned long val, void *data)
++{
++	struct module *mod = data;
++	int ret = 0;
++
++	cpus_read_lock();
++	static_call_lock();
++
++	switch (val) {
++	case MODULE_STATE_COMING:
++		ret = static_call_add_module(mod);
++		if (ret) {
++			WARN(1, "Failed to allocate memory for static calls");
++			static_call_del_module(mod);
++		}
++		break;
++	case MODULE_STATE_GOING:
++		static_call_del_module(mod);
++		break;
++	}
++
++	static_call_unlock();
++	cpus_read_unlock();
++
++	return notifier_from_errno(ret);
++}
++
++static struct notifier_block static_call_module_nb = {
++	.notifier_call = static_call_module_notify,
++};
++
++#else
++
++static inline int __static_call_mod_text_reserved(void *start, void *end)
++{
++	return 0;
++}
++
++#endif /* CONFIG_MODULES */
++
++int static_call_text_reserved(void *start, void *end)
++{
++	bool init = system_state < SYSTEM_RUNNING;
++	int ret = __static_call_text_reserved(__start_static_call_sites,
++			__stop_static_call_sites, start, end, init);
++
++	if (ret)
++		return ret;
++
++	return __static_call_mod_text_reserved(start, end);
++}
++
++int __init static_call_init(void)
++{
++	int ret;
++
++	if (static_call_initialized)
++		return 0;
++
++	cpus_read_lock();
++	static_call_lock();
++	ret = __static_call_init(NULL, __start_static_call_sites,
++				 __stop_static_call_sites);
++	static_call_unlock();
++	cpus_read_unlock();
++
++	if (ret) {
++		pr_err("Failed to allocate memory for static_call!\n");
++		BUG();
++	}
++
++	static_call_initialized = true;
++
++#ifdef CONFIG_MODULES
++	register_module_notifier(&static_call_module_nb);
++#endif
++	return 0;
++}
++early_initcall(static_call_init);
++
++#ifdef CONFIG_STATIC_CALL_SELFTEST
++
++static int func_a(int x)
++{
++	return x+1;
++}
++
++static int func_b(int x)
++{
++	return x+2;
++}
++
++DEFINE_STATIC_CALL(sc_selftest, func_a);
++
++static struct static_call_data {
++      int (*func)(int);
++      int val;
++      int expect;
++} static_call_data [] __initdata = {
++      { NULL,   2, 3 },
++      { func_b, 2, 4 },
++      { func_a, 2, 3 }
++};
++
++static int __init test_static_call_init(void)
++{
++      int i;
++
++      for (i = 0; i < ARRAY_SIZE(static_call_data); i++ ) {
++	      struct static_call_data *scd = &static_call_data[i];
++
++              if (scd->func)
++                      static_call_update(sc_selftest, scd->func);
++
++              WARN_ON(static_call(sc_selftest)(scd->val) != scd->expect);
++      }
++
++      return 0;
++}
++early_initcall(test_static_call_init);
++
++#endif /* CONFIG_STATIC_CALL_SELFTEST */
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 5e14e32056add..166e67b985063 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -416,7 +416,8 @@ config SECTION_MISMATCH_WARN_ONLY
+ 	  If unsure, say Y.
+ 
+ config DEBUG_FORCE_FUNCTION_ALIGN_64B
+-	bool "Force all function address 64B aligned" if EXPERT
++	bool "Force all function address 64B aligned"
++	depends on EXPERT && (X86_64 || ARM64 || PPC32 || PPC64 || ARC)
+ 	help
+ 	  There are cases that a commit from one domain changes the function
+ 	  address alignment of other domains, and cause magic performance
+diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan
+index e5372a13511df..236c5cefc4cc5 100644
+--- a/lib/Kconfig.ubsan
++++ b/lib/Kconfig.ubsan
+@@ -112,19 +112,6 @@ config UBSAN_UNREACHABLE
+ 	  This option enables -fsanitize=unreachable which checks for control
+ 	  flow reaching an expected-to-be-unreachable position.
+ 
+-config UBSAN_OBJECT_SIZE
+-	bool "Perform checking for accesses beyond the end of objects"
+-	default UBSAN
+-	# gcc hugely expands stack usage with -fsanitize=object-size
+-	# https://lore.kernel.org/lkml/CAHk-=wjPasyJrDuwDnpHJS2TuQfExwe=px-SzLeN8GFMAQJPmQ@mail.gmail.com/
+-	depends on !CC_IS_GCC
+-	depends on $(cc-option,-fsanitize=object-size)
+-	help
+-	  This option enables -fsanitize=object-size which checks for accesses
+-	  beyond the end of objects where the optimizer can determine both the
+-	  object being operated on and its size, usually seen with bad downcasts,
+-	  or access to struct members from NULL pointers.
+-
+ config UBSAN_BOOL
+ 	bool "Perform checking for non-boolean values used as boolean"
+ 	default UBSAN
+diff --git a/lib/logic_iomem.c b/lib/logic_iomem.c
+index 549b22d4bcde1..e7ea9b28d8db5 100644
+--- a/lib/logic_iomem.c
++++ b/lib/logic_iomem.c
+@@ -68,7 +68,7 @@ int logic_iomem_add_region(struct resource *resource,
+ }
+ EXPORT_SYMBOL(logic_iomem_add_region);
+ 
+-#ifndef CONFIG_LOGIC_IOMEM_FALLBACK
++#ifndef CONFIG_INDIRECT_IOMEM_FALLBACK
+ static void __iomem *real_ioremap(phys_addr_t offset, size_t size)
+ {
+ 	WARN(1, "invalid ioremap(0x%llx, 0x%zx)\n",
+@@ -81,7 +81,7 @@ static void real_iounmap(void __iomem *addr)
+ 	WARN(1, "invalid iounmap for addr 0x%llx\n",
+ 	     (unsigned long long)(uintptr_t __force)addr);
+ }
+-#endif /* CONFIG_LOGIC_IOMEM_FALLBACK */
++#endif /* CONFIG_INDIRECT_IOMEM_FALLBACK */
+ 
+ void __iomem *ioremap(phys_addr_t offset, size_t size)
+ {
+@@ -168,7 +168,7 @@ void iounmap(void __iomem *addr)
+ }
+ EXPORT_SYMBOL(iounmap);
+ 
+-#ifndef CONFIG_LOGIC_IOMEM_FALLBACK
++#ifndef CONFIG_INDIRECT_IOMEM_FALLBACK
+ #define MAKE_FALLBACK(op, sz) 						\
+ static u##sz real_raw_read ## op(const volatile void __iomem *addr)	\
+ {									\
+@@ -213,7 +213,7 @@ static void real_memcpy_toio(volatile void __iomem *addr, const void *buffer,
+ 	WARN(1, "Invalid memcpy_toio at address 0x%llx\n",
+ 	     (unsigned long long)(uintptr_t __force)addr);
+ }
+-#endif /* CONFIG_LOGIC_IOMEM_FALLBACK */
++#endif /* CONFIG_INDIRECT_IOMEM_FALLBACK */
+ 
+ #define MAKE_OP(op, sz) 						\
+ u##sz __raw_read ## op(const volatile void __iomem *addr)		\
+diff --git a/lib/lz4/lz4_decompress.c b/lib/lz4/lz4_decompress.c
+index 926f4823d5eac..fd1728d94babb 100644
+--- a/lib/lz4/lz4_decompress.c
++++ b/lib/lz4/lz4_decompress.c
+@@ -271,8 +271,12 @@ static FORCE_INLINE int LZ4_decompress_generic(
+ 			ip += length;
+ 			op += length;
+ 
+-			/* Necessarily EOF, due to parsing restrictions */
+-			if (!partialDecoding || (cpy == oend))
++			/* Necessarily EOF when !partialDecoding.
++			 * When partialDecoding, it is EOF if we've either
++			 * filled the output buffer or
++			 * can't proceed with reading an offset for following match.
++			 */
++			if (!partialDecoding || (cpy == oend) || (ip >= (iend - 2)))
+ 				break;
+ 		} else {
+ 			/* may overwrite up to WILDCOPYLENGTH beyond cpy */
+diff --git a/lib/test_ubsan.c b/lib/test_ubsan.c
+index 7e7bbd0f3fd27..2062be1f2e80f 100644
+--- a/lib/test_ubsan.c
++++ b/lib/test_ubsan.c
+@@ -79,15 +79,6 @@ static void test_ubsan_load_invalid_value(void)
+ 	eval2 = eval;
+ }
+ 
+-static void test_ubsan_null_ptr_deref(void)
+-{
+-	volatile int *ptr = NULL;
+-	int val;
+-
+-	UBSAN_TEST(CONFIG_UBSAN_OBJECT_SIZE);
+-	val = *ptr;
+-}
+-
+ static void test_ubsan_misaligned_access(void)
+ {
+ 	volatile char arr[5] __aligned(4) = {1, 2, 3, 4, 5};
+@@ -98,29 +89,16 @@ static void test_ubsan_misaligned_access(void)
+ 	*ptr = val;
+ }
+ 
+-static void test_ubsan_object_size_mismatch(void)
+-{
+-	/* "((aligned(8)))" helps this not into be misaligned for ptr-access. */
+-	volatile int val __aligned(8) = 4;
+-	volatile long long *ptr, val2;
+-
+-	UBSAN_TEST(CONFIG_UBSAN_OBJECT_SIZE);
+-	ptr = (long long *)&val;
+-	val2 = *ptr;
+-}
+-
+ static const test_ubsan_fp test_ubsan_array[] = {
+ 	test_ubsan_shift_out_of_bounds,
+ 	test_ubsan_out_of_bounds,
+ 	test_ubsan_load_invalid_value,
+ 	test_ubsan_misaligned_access,
+-	test_ubsan_object_size_mismatch,
+ };
+ 
+ /* Excluded because they Oops the module. */
+ static const test_ubsan_fp skip_ubsan_array[] = {
+ 	test_ubsan_divrem_overflow,
+-	test_ubsan_null_ptr_deref,
+ };
+ 
+ static int __init test_ubsan_init(void)
+diff --git a/mm/highmem.c b/mm/highmem.c
+index 762679050c9a0..916b66e0776c2 100644
+--- a/mm/highmem.c
++++ b/mm/highmem.c
+@@ -624,7 +624,7 @@ void __kmap_local_sched_out(void)
+ 
+ 		/* With debug all even slots are unmapped and act as guard */
+ 		if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {
+-			WARN_ON_ONCE(!pte_none(pteval));
++			WARN_ON_ONCE(pte_val(pteval) != 0);
+ 			continue;
+ 		}
+ 		if (WARN_ON_ONCE(pte_none(pteval)))
+@@ -661,7 +661,7 @@ void __kmap_local_sched_in(void)
+ 
+ 		/* With debug all even slots are unmapped and act as guard */
+ 		if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {
+-			WARN_ON_ONCE(!pte_none(pteval));
++			WARN_ON_ONCE(pte_val(pteval) != 0);
+ 			continue;
+ 		}
+ 		if (WARN_ON_ONCE(pte_none(pteval)))
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index dd5daddb6257a..4f724469b7dfe 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2653,6 +2653,7 @@ alloc_new:
+ 	mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
+ 	if (!mpol_new)
+ 		goto err_out;
++	atomic_set(&mpol_new->refcnt, 1);
+ 	goto restart;
+ }
+ 
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 002eec83e91e5..0e175aef536e1 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -486,6 +486,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 	pmd_t *old_pmd, *new_pmd;
+ 	pud_t *old_pud, *new_pud;
+ 
++	if (!len)
++		return 0;
++
+ 	old_end = old_addr + len;
+ 	flush_cache_range(vma, old_addr, old_end);
+ 
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 163ac4e6bceed..26d3d4824c3ae 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1570,7 +1570,30 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 
+ 			/* MADV_FREE page check */
+ 			if (!PageSwapBacked(page)) {
+-				if (!PageDirty(page)) {
++				int ref_count, map_count;
++
++				/*
++				 * Synchronize with gup_pte_range():
++				 * - clear PTE; barrier; read refcount
++				 * - inc refcount; barrier; read PTE
++				 */
++				smp_mb();
++
++				ref_count = page_ref_count(page);
++				map_count = page_mapcount(page);
++
++				/*
++				 * Order reads for page refcount and dirty flag
++				 * (see comments in __remove_mapping()).
++				 */
++				smp_rmb();
++
++				/*
++				 * The only page refs must be one from isolation
++				 * plus the rmap(s) (dropped by discard:).
++				 */
++				if (ref_count == 1 + map_count &&
++				    !PageDirty(page)) {
+ 					/* Invalidate as we cleared the pte */
+ 					mmu_notifier_invalidate_range(mm,
+ 						address, address + PAGE_SIZE);
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index f4004cf0ff6fb..9f311fddfaf9a 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -134,7 +134,7 @@ static u8 batadv_mcast_mla_rtr_flags_softif_get_ipv6(struct net_device *dev)
+ {
+ 	struct inet6_dev *in6_dev = __in6_dev_get(dev);
+ 
+-	if (in6_dev && in6_dev->cnf.mc_forwarding)
++	if (in6_dev && atomic_read(&in6_dev->cnf.mc_forwarding))
+ 		return BATADV_NO_FLAGS;
+ 	else
+ 		return BATADV_MCAST_WANT_NO_RTR6;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 816a0f6823a3a..e06802d314690 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5203,8 +5203,9 @@ static void hci_disconn_phylink_complete_evt(struct hci_dev *hdev,
+ 	hci_dev_lock(hdev);
+ 
+ 	hcon = hci_conn_hash_lookup_handle(hdev, ev->phy_handle);
+-	if (hcon) {
++	if (hcon && hcon->type == AMP_LINK) {
+ 		hcon->state = BT_CLOSED;
++		hci_disconn_cfm(hcon, ev->reason);
+ 		hci_conn_del(hcon);
+ 	}
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 4f8f375999625..e06baffc0dc6a 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1436,6 +1436,7 @@ static void l2cap_ecred_connect(struct l2cap_chan *chan)
+ 
+ 	l2cap_ecred_init(chan, 0);
+ 
++	memset(&data, 0, sizeof(data));
+ 	data.pdu.req.psm     = chan->psm;
+ 	data.pdu.req.mtu     = cpu_to_le16(chan->imtu);
+ 	data.pdu.req.mps     = cpu_to_le16(chan->mps);
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 46dd957559672..eb43aaac0392c 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -960,7 +960,7 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kat
+ 	if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx)))
+ 		goto out;
+ 
+-	if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) {
++	if (user_ctx->local_port > U16_MAX) {
+ 		ret = -ERANGE;
+ 		goto out;
+ 	}
+@@ -968,7 +968,7 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kat
+ 	ctx.family = (u16)user_ctx->family;
+ 	ctx.protocol = (u16)user_ctx->protocol;
+ 	ctx.dport = (u16)user_ctx->local_port;
+-	ctx.sport = (__force __be16)user_ctx->remote_port;
++	ctx.sport = user_ctx->remote_port;
+ 
+ 	switch (ctx.family) {
+ 	case AF_INET:
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index a95d171b3a64b..5bce7c66c1219 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -141,6 +141,7 @@ struct isotp_sock {
+ 	struct can_isotp_options opt;
+ 	struct can_isotp_fc_options rxfc, txfc;
+ 	struct can_isotp_ll_options ll;
++	u32 frame_txtime;
+ 	u32 force_tx_stmin;
+ 	u32 force_rx_stmin;
+ 	struct tpcon rx, tx;
+@@ -360,7 +361,7 @@ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
+ 
+ 		so->tx_gap = ktime_set(0, 0);
+ 		/* add transmission time for CAN frame N_As */
+-		so->tx_gap = ktime_add_ns(so->tx_gap, so->opt.frame_txtime);
++		so->tx_gap = ktime_add_ns(so->tx_gap, so->frame_txtime);
+ 		/* add waiting time for consecutive frames N_Cs */
+ 		if (so->opt.flags & CAN_ISOTP_FORCE_TXSTMIN)
+ 			so->tx_gap = ktime_add_ns(so->tx_gap,
+@@ -1247,6 +1248,14 @@ static int isotp_setsockopt_locked(struct socket *sock, int level, int optname,
+ 		/* no separate rx_ext_address is given => use ext_address */
+ 		if (!(so->opt.flags & CAN_ISOTP_RX_EXT_ADDR))
+ 			so->opt.rx_ext_address = so->opt.ext_address;
++
++		/* check for frame_txtime changes (0 => no changes) */
++		if (so->opt.frame_txtime) {
++			if (so->opt.frame_txtime == CAN_ISOTP_FRAME_TXTIME_ZERO)
++				so->frame_txtime = 0;
++			else
++				so->frame_txtime = so->opt.frame_txtime;
++		}
+ 		break;
+ 
+ 	case CAN_ISOTP_RECV_FC:
+@@ -1448,6 +1457,7 @@ static int isotp_init(struct sock *sk)
+ 	so->opt.rxpad_content = CAN_ISOTP_DEFAULT_PAD_CONTENT;
+ 	so->opt.txpad_content = CAN_ISOTP_DEFAULT_PAD_CONTENT;
+ 	so->opt.frame_txtime = CAN_ISOTP_DEFAULT_FRAME_TXTIME;
++	so->frame_txtime = CAN_ISOTP_DEFAULT_FRAME_TXTIME;
+ 	so->rxfc.bs = CAN_ISOTP_DEFAULT_RECV_BS;
+ 	so->rxfc.stmin = CAN_ISOTP_DEFAULT_RECV_STMIN;
+ 	so->rxfc.wftmax = CAN_ISOTP_DEFAULT_RECV_WFTMAX;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2078d04c6482f..8c47e0f2075db 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -11403,8 +11403,7 @@ static int __net_init netdev_init(struct net *net)
+ 	BUILD_BUG_ON(GRO_HASH_BUCKETS >
+ 		     8 * sizeof_field(struct napi_struct, gro_bitmask));
+ 
+-	if (net != &init_net)
+-		INIT_LIST_HEAD(&net->dev_base_head);
++	INIT_LIST_HEAD(&net->dev_base_head);
+ 
+ 	net->dev_name_head = netdev_create_hash();
+ 	if (net->dev_name_head == NULL)
+diff --git a/net/core/filter.c b/net/core/filter.c
+index d4cdf11656b3f..734defee989f0 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6719,24 +6719,33 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len
+ 	if (!th->ack || th->rst || th->syn)
+ 		return -ENOENT;
+ 
++	if (unlikely(iph_len < sizeof(struct iphdr)))
++		return -EINVAL;
++
+ 	if (tcp_synq_no_recent_overflow(sk))
+ 		return -ENOENT;
+ 
+ 	cookie = ntohl(th->ack_seq) - 1;
+ 
+-	switch (sk->sk_family) {
+-	case AF_INET:
+-		if (unlikely(iph_len < sizeof(struct iphdr)))
++	/* Both struct iphdr and struct ipv6hdr have the version field at the
++	 * same offset so we can cast to the shorter header (struct iphdr).
++	 */
++	switch (((struct iphdr *)iph)->version) {
++	case 4:
++		if (sk->sk_family == AF_INET6 && ipv6_only_sock(sk))
+ 			return -EINVAL;
+ 
+ 		ret = __cookie_v4_check((struct iphdr *)iph, th, cookie);
+ 		break;
+ 
+ #if IS_BUILTIN(CONFIG_IPV6)
+-	case AF_INET6:
++	case 6:
+ 		if (unlikely(iph_len < sizeof(struct ipv6hdr)))
+ 			return -EINVAL;
+ 
++		if (sk->sk_family != AF_INET6)
++			return -EINVAL;
++
+ 		ret = __cookie_v6_check((struct ipv6hdr *)iph, th, cookie);
+ 		break;
+ #endif /* CONFIG_IPV6 */
+@@ -7975,6 +7984,7 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ 			      struct bpf_insn_access_aux *info)
+ {
+ 	const int size_default = sizeof(__u32);
++	int field_size;
+ 
+ 	if (off < 0 || off >= sizeof(struct bpf_sock))
+ 		return false;
+@@ -7986,7 +7996,6 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ 	case offsetof(struct bpf_sock, family):
+ 	case offsetof(struct bpf_sock, type):
+ 	case offsetof(struct bpf_sock, protocol):
+-	case offsetof(struct bpf_sock, dst_port):
+ 	case offsetof(struct bpf_sock, src_port):
+ 	case offsetof(struct bpf_sock, rx_queue_mapping):
+ 	case bpf_ctx_range(struct bpf_sock, src_ip4):
+@@ -7995,6 +8004,14 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ 	case bpf_ctx_range_till(struct bpf_sock, dst_ip6[0], dst_ip6[3]):
+ 		bpf_ctx_record_field_size(info, size_default);
+ 		return bpf_ctx_narrow_access_ok(off, size, size_default);
++	case bpf_ctx_range(struct bpf_sock, dst_port):
++		field_size = size == size_default ?
++			size_default : sizeof_field(struct bpf_sock, dst_port);
++		bpf_ctx_record_field_size(info, field_size);
++		return bpf_ctx_narrow_access_ok(off, size, field_size);
++	case offsetofend(struct bpf_sock, dst_port) ...
++	     offsetof(struct bpf_sock, dst_ip4) - 1:
++		return false;
+ 	}
+ 
+ 	return size == size_default;
+@@ -10546,7 +10563,8 @@ static bool sk_lookup_is_valid_access(int off, int size,
+ 	case bpf_ctx_range(struct bpf_sk_lookup, local_ip4):
+ 	case bpf_ctx_range_till(struct bpf_sk_lookup, remote_ip6[0], remote_ip6[3]):
+ 	case bpf_ctx_range_till(struct bpf_sk_lookup, local_ip6[0], local_ip6[3]):
+-	case bpf_ctx_range(struct bpf_sk_lookup, remote_port):
++	case offsetof(struct bpf_sk_lookup, remote_port) ...
++	     offsetof(struct bpf_sk_lookup, local_ip4) - 1:
+ 	case bpf_ctx_range(struct bpf_sk_lookup, local_port):
+ 		bpf_ctx_record_field_size(info, sizeof(__u32));
+ 		return bpf_ctx_narrow_access_ok(off, size, sizeof(__u32));
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index 9702d2b0d9207..9745cb6fdf516 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -44,13 +44,7 @@ EXPORT_SYMBOL_GPL(net_rwsem);
+ static struct key_tag init_net_key_domain = { .usage = REFCOUNT_INIT(1) };
+ #endif
+ 
+-struct net init_net = {
+-	.ns.count	= REFCOUNT_INIT(1),
+-	.dev_base_head	= LIST_HEAD_INIT(init_net.dev_base_head),
+-#ifdef CONFIG_KEYS
+-	.key_domain	= &init_net_key_domain,
+-#endif
+-};
++struct net init_net;
+ EXPORT_SYMBOL(init_net);
+ 
+ static bool init_net_initialized;
+@@ -1081,7 +1075,7 @@ out:
+ 	rtnl_set_sk_err(net, RTNLGRP_NSID, err);
+ }
+ 
+-static int __init net_ns_init(void)
++void __init net_ns_init(void)
+ {
+ 	struct net_generic *ng;
+ 
+@@ -1102,6 +1096,9 @@ static int __init net_ns_init(void)
+ 
+ 	rcu_assign_pointer(init_net.gen, ng);
+ 
++#ifdef CONFIG_KEYS
++	init_net.key_domain = &init_net_key_domain;
++#endif
+ 	down_write(&pernet_ops_rwsem);
+ 	if (setup_net(&init_net, &init_user_ns))
+ 		panic("Could not setup the initial network namespace");
+@@ -1116,12 +1113,8 @@ static int __init net_ns_init(void)
+ 		      RTNL_FLAG_DOIT_UNLOCKED);
+ 	rtnl_register(PF_UNSPEC, RTM_GETNSID, rtnl_net_getid, rtnl_net_dumpid,
+ 		      RTNL_FLAG_DOIT_UNLOCKED);
+-
+-	return 0;
+ }
+ 
+-pure_initcall(net_ns_init);
+-
+ static void free_exit_list(struct pernet_operations *ops, struct list_head *net_exit_list)
+ {
+ 	ops_pre_exit_list(ops, net_exit_list);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 8b5c5703d7582..ef56dc8d7c44c 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3631,13 +3631,24 @@ static int rtnl_alt_ifname(int cmd, struct net_device *dev, struct nlattr *attr,
+ 			   bool *changed, struct netlink_ext_ack *extack)
+ {
+ 	char *alt_ifname;
++	size_t size;
+ 	int err;
+ 
+ 	err = nla_validate(attr, attr->nla_len, IFLA_MAX, ifla_policy, extack);
+ 	if (err)
+ 		return err;
+ 
+-	alt_ifname = nla_strdup(attr, GFP_KERNEL);
++	if (cmd == RTM_NEWLINKPROP) {
++		size = rtnl_prop_list_size(dev);
++		size += nla_total_size(ALTIFNAMSIZ);
++		if (size >= U16_MAX) {
++			NL_SET_ERR_MSG(extack,
++				       "effective property list too long");
++			return -EINVAL;
++		}
++	}
++
++	alt_ifname = nla_strdup(attr, GFP_KERNEL_ACCOUNT);
+ 	if (!alt_ifname)
+ 		return -ENOMEM;
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fdd8041206001..001152c8def9a 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5369,11 +5369,18 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
+ 	if (skb_cloned(to))
+ 		return false;
+ 
+-	/* The page pool signature of struct page will eventually figure out
+-	 * which pages can be recycled or not but for now let's prohibit slab
+-	 * allocated and page_pool allocated SKBs from being coalesced.
++	/* In general, avoid mixing slab allocated and page_pool allocated
++	 * pages within the same SKB. However when @to is not pp_recycle and
++	 * @from is cloned, we can transition frag pages from page_pool to
++	 * reference counted.
++	 *
++	 * On the other hand, don't allow coalescing two pp_recycle SKBs if
++	 * @from is cloned, in case the SKB is using page_pool fragment
++	 * references (PP_FLAG_PAGE_FRAG). Since we only take full page
++	 * references for cloned SKBs at the moment that would result in
++	 * inconsistent reference counts.
+ 	 */
+-	if (to->pp_recycle != from->pp_recycle)
++	if (to->pp_recycle != (from->pp_recycle && !skb_cloned(from)))
+ 		return false;
+ 
+ 	if (len <= skb_tailroom(to)) {
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 857a144b1ea96..5ee382309a9d7 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -1116,13 +1116,18 @@ static int arp_req_get(struct arpreq *r, struct net_device *dev)
+ 	return err;
+ }
+ 
+-static int arp_invalidate(struct net_device *dev, __be32 ip)
++int arp_invalidate(struct net_device *dev, __be32 ip, bool force)
+ {
+ 	struct neighbour *neigh = neigh_lookup(&arp_tbl, &ip, dev);
+ 	int err = -ENXIO;
+ 	struct neigh_table *tbl = &arp_tbl;
+ 
+ 	if (neigh) {
++		if ((neigh->nud_state & NUD_VALID) && !force) {
++			neigh_release(neigh);
++			return 0;
++		}
++
+ 		if (neigh->nud_state & ~NUD_NOARP)
+ 			err = neigh_update(neigh, NULL, NUD_FAILED,
+ 					   NEIGH_UPDATE_F_OVERRIDE|
+@@ -1169,7 +1174,7 @@ static int arp_req_delete(struct net *net, struct arpreq *r,
+ 		if (!dev)
+ 			return -EINVAL;
+ 	}
+-	return arp_invalidate(dev, ip);
++	return arp_invalidate(dev, ip, true);
+ }
+ 
+ /*
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 4d61ddd8a0ecf..1eb7795edb9dc 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1112,9 +1112,11 @@ void fib_add_ifaddr(struct in_ifaddr *ifa)
+ 		return;
+ 
+ 	/* Add broadcast address, if it is explicitly assigned. */
+-	if (ifa->ifa_broadcast && ifa->ifa_broadcast != htonl(0xFFFFFFFF))
++	if (ifa->ifa_broadcast && ifa->ifa_broadcast != htonl(0xFFFFFFFF)) {
+ 		fib_magic(RTM_NEWROUTE, RTN_BROADCAST, ifa->ifa_broadcast, 32,
+ 			  prim, 0);
++		arp_invalidate(dev, ifa->ifa_broadcast, false);
++	}
+ 
+ 	if (!ipv4_is_zeronet(prefix) && !(ifa->ifa_flags & IFA_F_SECONDARY) &&
+ 	    (prefix != addr || ifa->ifa_prefixlen < 32)) {
+@@ -1128,6 +1130,7 @@ void fib_add_ifaddr(struct in_ifaddr *ifa)
+ 		if (ifa->ifa_prefixlen < 31) {
+ 			fib_magic(RTM_NEWROUTE, RTN_BROADCAST, prefix | ~mask,
+ 				  32, prim, 0);
++			arp_invalidate(dev, prefix | ~mask, false);
+ 		}
+ 	}
+ }
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index d244c57b73031..b5563f5ff1765 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -887,8 +887,13 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
+ 	}
+ 
+ 	if (cfg->fc_oif || cfg->fc_gw_family) {
+-		struct fib_nh *nh = fib_info_nh(fi, 0);
++		struct fib_nh *nh;
++
++		/* cannot match on nexthop object attributes */
++		if (fi->nh)
++			return 1;
+ 
++		nh = fib_info_nh(fi, 0);
+ 		if (cfg->fc_encap) {
+ 			if (fib_encap_match(net, cfg->fc_encap_type,
+ 					    cfg->fc_encap, nh, cfg, extack))
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 75737267746f8..7bd1e10086f0a 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -637,7 +637,9 @@ int __inet_hash(struct sock *sk, struct sock *osk)
+ 	int err = 0;
+ 
+ 	if (sk->sk_state != TCP_LISTEN) {
++		local_bh_disable();
+ 		inet_ehash_nolisten(sk, osk, NULL);
++		local_bh_enable();
+ 		return 0;
+ 	}
+ 	WARN_ON(!sk_unhashed(sk));
+@@ -669,45 +671,54 @@ int inet_hash(struct sock *sk)
+ {
+ 	int err = 0;
+ 
+-	if (sk->sk_state != TCP_CLOSE) {
+-		local_bh_disable();
++	if (sk->sk_state != TCP_CLOSE)
+ 		err = __inet_hash(sk, NULL);
+-		local_bh_enable();
+-	}
+ 
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(inet_hash);
+ 
+-void inet_unhash(struct sock *sk)
++static void __inet_unhash(struct sock *sk, struct inet_listen_hashbucket *ilb)
+ {
+-	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
+-	struct inet_listen_hashbucket *ilb = NULL;
+-	spinlock_t *lock;
+-
+ 	if (sk_unhashed(sk))
+ 		return;
+ 
+-	if (sk->sk_state == TCP_LISTEN) {
+-		ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
+-		lock = &ilb->lock;
+-	} else {
+-		lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
+-	}
+-	spin_lock_bh(lock);
+-	if (sk_unhashed(sk))
+-		goto unlock;
+-
+ 	if (rcu_access_pointer(sk->sk_reuseport_cb))
+ 		reuseport_stop_listen_sock(sk);
+ 	if (ilb) {
++		struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
++
+ 		inet_unhash2(hashinfo, sk);
+ 		ilb->count--;
+ 	}
+ 	__sk_nulls_del_node_init_rcu(sk);
+ 	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+-unlock:
+-	spin_unlock_bh(lock);
++}
++
++void inet_unhash(struct sock *sk)
++{
++	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
++
++	if (sk_unhashed(sk))
++		return;
++
++	if (sk->sk_state == TCP_LISTEN) {
++		struct inet_listen_hashbucket *ilb;
++
++		ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
++		/* Don't disable bottom halves while acquiring the lock to
++		 * avoid circular locking dependency on PREEMPT_RT.
++		 */
++		spin_lock(&ilb->lock);
++		__inet_unhash(sk, ilb);
++		spin_unlock(&ilb->lock);
++	} else {
++		spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
++
++		spin_lock_bh(lock);
++		__inet_unhash(sk, NULL);
++		spin_unlock_bh(lock);
++	}
+ }
+ EXPORT_SYMBOL_GPL(inet_unhash);
+ 
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index e92ca415756a5..4f64fb285af71 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -554,7 +554,7 @@ static int inet6_netconf_fill_devconf(struct sk_buff *skb, int ifindex,
+ #ifdef CONFIG_IPV6_MROUTE
+ 	if ((all || type == NETCONFA_MC_FORWARDING) &&
+ 	    nla_put_s32(skb, NETCONFA_MC_FORWARDING,
+-			devconf->mc_forwarding) < 0)
++			atomic_read(&devconf->mc_forwarding)) < 0)
+ 		goto nla_put_failure;
+ #endif
+ 	if ((all || type == NETCONFA_PROXY_NEIGH) &&
+@@ -5539,7 +5539,7 @@ static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
+ 	array[DEVCONF_USE_OPTIMISTIC] = cnf->use_optimistic;
+ #endif
+ #ifdef CONFIG_IPV6_MROUTE
+-	array[DEVCONF_MC_FORWARDING] = cnf->mc_forwarding;
++	array[DEVCONF_MC_FORWARDING] = atomic_read(&cnf->mc_forwarding);
+ #endif
+ 	array[DEVCONF_DISABLE_IPV6] = cnf->disable_ipv6;
+ 	array[DEVCONF_ACCEPT_DAD] = cnf->accept_dad;
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 67c9114835c84..0a2e7f2283911 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -333,11 +333,8 @@ int inet6_hash(struct sock *sk)
+ {
+ 	int err = 0;
+ 
+-	if (sk->sk_state != TCP_CLOSE) {
+-		local_bh_disable();
++	if (sk->sk_state != TCP_CLOSE)
+ 		err = __inet_hash(sk, NULL);
+-		local_bh_enable();
+-	}
+ 
+ 	return err;
+ }
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index 80256717868e6..d4b1e2c5aa76d 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -508,7 +508,7 @@ int ip6_mc_input(struct sk_buff *skb)
+ 	/*
+ 	 *      IPv6 multicast router mode is now supported ;)
+ 	 */
+-	if (dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding &&
++	if (atomic_read(&dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding) &&
+ 	    !(ipv6_addr_type(&hdr->daddr) &
+ 	      (IPV6_ADDR_LOOPBACK|IPV6_ADDR_LINKLOCAL)) &&
+ 	    likely(!(IP6CB(skb)->flags & IP6SKB_FORWARDED))) {
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 6a4065d81aa91..91f1c5f56d5fa 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -739,7 +739,7 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
+ 
+ 	in6_dev = __in6_dev_get(dev);
+ 	if (in6_dev) {
+-		in6_dev->cnf.mc_forwarding--;
++		atomic_dec(&in6_dev->cnf.mc_forwarding);
+ 		inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ 					     NETCONFA_MC_FORWARDING,
+ 					     dev->ifindex, &in6_dev->cnf);
+@@ -907,7 +907,7 @@ static int mif6_add(struct net *net, struct mr_table *mrt,
+ 
+ 	in6_dev = __in6_dev_get(dev);
+ 	if (in6_dev) {
+-		in6_dev->cnf.mc_forwarding++;
++		atomic_inc(&in6_dev->cnf.mc_forwarding);
+ 		inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ 					     NETCONFA_MC_FORWARDING,
+ 					     dev->ifindex, &in6_dev->cnf);
+@@ -1557,7 +1557,7 @@ static int ip6mr_sk_init(struct mr_table *mrt, struct sock *sk)
+ 	} else {
+ 		rcu_assign_pointer(mrt->mroute_sk, sk);
+ 		sock_set_flag(sk, SOCK_RCU_FREE);
+-		net->ipv6.devconf_all->mc_forwarding++;
++		atomic_inc(&net->ipv6.devconf_all->mc_forwarding);
+ 	}
+ 	write_unlock_bh(&mrt_lock);
+ 
+@@ -1590,7 +1590,7 @@ int ip6mr_sk_done(struct sock *sk)
+ 			 * so the RCU grace period before sk freeing
+ 			 * is guaranteed by sk_destruct()
+ 			 */
+-			net->ipv6.devconf_all->mc_forwarding--;
++			atomic_dec(&net->ipv6.devconf_all->mc_forwarding);
+ 			write_unlock_bh(&mrt_lock);
+ 			inet6_netconf_notify_devconf(net, RTM_NEWNETCONF,
+ 						     NETCONFA_MC_FORWARDING,
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 75f916b7460c7..cac0d65ed124b 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4509,7 +4509,7 @@ static int ip6_pkt_drop(struct sk_buff *skb, u8 code, int ipstats_mib_noroutes)
+ 	struct inet6_dev *idev;
+ 	int type;
+ 
+-	if (netif_is_l3_master(skb->dev) &&
++	if (netif_is_l3_master(skb->dev) ||
+ 	    dst->dev == net->loopback_dev)
+ 		idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif));
+ 	else
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index 871cf62661258..102851189bbfd 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -90,13 +90,13 @@ out_release:
+ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ {
+ 	DECLARE_SOCKADDR(struct sockaddr_mctp *, addr, msg->msg_name);
+-	const int hlen = MCTP_HEADER_MAXLEN + sizeof(struct mctp_hdr);
+ 	int rc, addrlen = msg->msg_namelen;
+ 	struct sock *sk = sock->sk;
+ 	struct mctp_sock *msk = container_of(sk, struct mctp_sock, sk);
+ 	struct mctp_skb_cb *cb;
+ 	struct mctp_route *rt;
+-	struct sk_buff *skb;
++	struct sk_buff *skb = NULL;
++	int hlen;
+ 
+ 	if (addr) {
+ 		if (addrlen < sizeof(struct sockaddr_mctp))
+@@ -119,6 +119,34 @@ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	if (addr->smctp_network == MCTP_NET_ANY)
+ 		addr->smctp_network = mctp_default_net(sock_net(sk));
+ 
++	/* direct addressing */
++	if (msk->addr_ext && addrlen >= sizeof(struct sockaddr_mctp_ext)) {
++		DECLARE_SOCKADDR(struct sockaddr_mctp_ext *,
++				 extaddr, msg->msg_name);
++		struct net_device *dev;
++
++		rc = -EINVAL;
++		rcu_read_lock();
++		dev = dev_get_by_index_rcu(sock_net(sk), extaddr->smctp_ifindex);
++		/* check for correct halen */
++		if (dev && extaddr->smctp_halen == dev->addr_len) {
++			hlen = LL_RESERVED_SPACE(dev) + sizeof(struct mctp_hdr);
++			rc = 0;
++		}
++		rcu_read_unlock();
++		if (rc)
++			goto err_free;
++		rt = NULL;
++	} else {
++		rt = mctp_route_lookup(sock_net(sk), addr->smctp_network,
++				       addr->smctp_addr.s_addr);
++		if (!rt) {
++			rc = -EHOSTUNREACH;
++			goto err_free;
++		}
++		hlen = LL_RESERVED_SPACE(rt->dev->dev) + sizeof(struct mctp_hdr);
++	}
++
+ 	skb = sock_alloc_send_skb(sk, hlen + 1 + len,
+ 				  msg->msg_flags & MSG_DONTWAIT, &rc);
+ 	if (!skb)
+@@ -137,8 +165,8 @@ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	cb = __mctp_cb(skb);
+ 	cb->net = addr->smctp_network;
+ 
+-	/* direct addressing */
+-	if (msk->addr_ext && addrlen >= sizeof(struct sockaddr_mctp_ext)) {
++	if (!rt) {
++		/* fill extended address in cb */
+ 		DECLARE_SOCKADDR(struct sockaddr_mctp_ext *,
+ 				 extaddr, msg->msg_name);
+ 
+@@ -149,17 +177,9 @@ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		}
+ 
+ 		cb->ifindex = extaddr->smctp_ifindex;
++		/* smctp_halen is checked above */
+ 		cb->halen = extaddr->smctp_halen;
+ 		memcpy(cb->haddr, extaddr->smctp_haddr, cb->halen);
+-
+-		rt = NULL;
+-	} else {
+-		rt = mctp_route_lookup(sock_net(sk), addr->smctp_network,
+-				       addr->smctp_addr.s_addr);
+-		if (!rt) {
+-			rc = -EHOSTUNREACH;
+-			goto err_free;
+-		}
+ 	}
+ 
+ 	rc = mctp_local_output(sk, rt, skb, addr->smctp_addr.s_addr,
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index f8c0cb2de98be..caa042ea777e7 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -500,6 +500,11 @@ static int mctp_route_output(struct mctp_route *route, struct sk_buff *skb)
+ 
+ 	if (cb->ifindex) {
+ 		/* direct route; use the hwaddr we stashed in sendmsg */
++		if (cb->halen != skb->dev->addr_len) {
++			/* sanity check, sendmsg should have already caught this */
++			kfree_skb(skb);
++			return -EMSGSIZE;
++		}
+ 		daddr = cb->haddr;
+ 	} else {
+ 		/* If lookup fails let the device handle daddr==NULL */
+@@ -509,7 +514,7 @@ static int mctp_route_output(struct mctp_route *route, struct sk_buff *skb)
+ 
+ 	rc = dev_hard_header(skb, skb->dev, ntohs(skb->protocol),
+ 			     daddr, skb->dev->dev_addr, skb->len);
+-	if (rc) {
++	if (rc < 0) {
+ 		kfree_skb(skb);
+ 		return -EHOSTUNREACH;
+ 	}
+@@ -709,7 +714,7 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ {
+ 	const unsigned int hlen = sizeof(struct mctp_hdr);
+ 	struct mctp_hdr *hdr, *hdr2;
+-	unsigned int pos, size;
++	unsigned int pos, size, headroom;
+ 	struct sk_buff *skb2;
+ 	int rc;
+ 	u8 seq;
+@@ -723,6 +728,9 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ 		return -EMSGSIZE;
+ 	}
+ 
++	/* keep same headroom as the original skb */
++	headroom = skb_headroom(skb);
++
+ 	/* we've got the header */
+ 	skb_pull(skb, hlen);
+ 
+@@ -730,7 +738,7 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ 		/* size of message payload */
+ 		size = min(mtu - hlen, skb->len - pos);
+ 
+-		skb2 = alloc_skb(MCTP_HEADER_MAXLEN + hlen + size, GFP_KERNEL);
++		skb2 = alloc_skb(headroom + hlen + size, GFP_KERNEL);
+ 		if (!skb2) {
+ 			rc = -ENOMEM;
+ 			break;
+@@ -746,7 +754,7 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ 			skb_set_owner_w(skb2, skb->sk);
+ 
+ 		/* establish packet */
+-		skb_reserve(skb2, MCTP_HEADER_MAXLEN);
++		skb_reserve(skb2, headroom);
+ 		skb_reset_network_header(skb2);
+ 		skb_put(skb2, hlen + size);
+ 		skb2->transport_header = skb2->network_header + hlen;
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 917e708a45611..3a98a1316307c 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -66,6 +66,8 @@ EXPORT_SYMBOL_GPL(nf_conntrack_hash);
+ struct conntrack_gc_work {
+ 	struct delayed_work	dwork;
+ 	u32			next_bucket;
++	u32			avg_timeout;
++	u32			start_time;
+ 	bool			exiting;
+ 	bool			early_drop;
+ };
+@@ -77,8 +79,19 @@ static __read_mostly bool nf_conntrack_locks_all;
+ /* serialize hash resizes and nf_ct_iterate_cleanup */
+ static DEFINE_MUTEX(nf_conntrack_mutex);
+ 
+-#define GC_SCAN_INTERVAL	(120u * HZ)
++#define GC_SCAN_INTERVAL_MAX	(60ul * HZ)
++#define GC_SCAN_INTERVAL_MIN	(1ul * HZ)
++
++/* clamp timeouts to this value (TCP unacked) */
++#define GC_SCAN_INTERVAL_CLAMP	(300ul * HZ)
++
++/* large initial bias so that we don't scan often just because we have
++ * three entries with a 1s timeout.
++ */
++#define GC_SCAN_INTERVAL_INIT	INT_MAX
++
+ #define GC_SCAN_MAX_DURATION	msecs_to_jiffies(10)
++#define GC_SCAN_EXPIRED_MAX	(64000u / HZ)
+ 
+ #define MIN_CHAINLEN	8u
+ #define MAX_CHAINLEN	(32u - MIN_CHAINLEN)
+@@ -1420,16 +1433,28 @@ static bool gc_worker_can_early_drop(const struct nf_conn *ct)
+ 
+ static void gc_worker(struct work_struct *work)
+ {
+-	unsigned long end_time = jiffies + GC_SCAN_MAX_DURATION;
+ 	unsigned int i, hashsz, nf_conntrack_max95 = 0;
+-	unsigned long next_run = GC_SCAN_INTERVAL;
++	u32 end_time, start_time = nfct_time_stamp;
+ 	struct conntrack_gc_work *gc_work;
++	unsigned int expired_count = 0;
++	unsigned long next_run;
++	s32 delta_time;
++
+ 	gc_work = container_of(work, struct conntrack_gc_work, dwork.work);
+ 
+ 	i = gc_work->next_bucket;
+ 	if (gc_work->early_drop)
+ 		nf_conntrack_max95 = nf_conntrack_max / 100u * 95u;
+ 
++	if (i == 0) {
++		gc_work->avg_timeout = GC_SCAN_INTERVAL_INIT;
++		gc_work->start_time = start_time;
++	}
++
++	next_run = gc_work->avg_timeout;
++
++	end_time = start_time + GC_SCAN_MAX_DURATION;
++
+ 	do {
+ 		struct nf_conntrack_tuple_hash *h;
+ 		struct hlist_nulls_head *ct_hash;
+@@ -1446,6 +1471,7 @@ static void gc_worker(struct work_struct *work)
+ 
+ 		hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) {
+ 			struct nf_conntrack_net *cnet;
++			unsigned long expires;
+ 			struct net *net;
+ 
+ 			tmp = nf_ct_tuplehash_to_ctrack(h);
+@@ -1455,11 +1481,29 @@ static void gc_worker(struct work_struct *work)
+ 				continue;
+ 			}
+ 
++			if (expired_count > GC_SCAN_EXPIRED_MAX) {
++				rcu_read_unlock();
++
++				gc_work->next_bucket = i;
++				gc_work->avg_timeout = next_run;
++
++				delta_time = nfct_time_stamp - gc_work->start_time;
++
++				/* re-sched immediately if total cycle time is exceeded */
++				next_run = delta_time < (s32)GC_SCAN_INTERVAL_MAX;
++				goto early_exit;
++			}
++
+ 			if (nf_ct_is_expired(tmp)) {
+ 				nf_ct_gc_expired(tmp);
++				expired_count++;
+ 				continue;
+ 			}
+ 
++			expires = clamp(nf_ct_expires(tmp), GC_SCAN_INTERVAL_MIN, GC_SCAN_INTERVAL_CLAMP);
++			next_run += expires;
++			next_run /= 2u;
++
+ 			if (nf_conntrack_max95 == 0 || gc_worker_skip_ct(tmp))
+ 				continue;
+ 
+@@ -1477,8 +1521,10 @@ static void gc_worker(struct work_struct *work)
+ 				continue;
+ 			}
+ 
+-			if (gc_worker_can_early_drop(tmp))
++			if (gc_worker_can_early_drop(tmp)) {
+ 				nf_ct_kill(tmp);
++				expired_count++;
++			}
+ 
+ 			nf_ct_put(tmp);
+ 		}
+@@ -1491,33 +1537,38 @@ static void gc_worker(struct work_struct *work)
+ 		cond_resched();
+ 		i++;
+ 
+-		if (time_after(jiffies, end_time) && i < hashsz) {
++		delta_time = nfct_time_stamp - end_time;
++		if (delta_time > 0 && i < hashsz) {
++			gc_work->avg_timeout = next_run;
+ 			gc_work->next_bucket = i;
+ 			next_run = 0;
+-			break;
++			goto early_exit;
+ 		}
+ 	} while (i < hashsz);
+ 
++	gc_work->next_bucket = 0;
++
++	next_run = clamp(next_run, GC_SCAN_INTERVAL_MIN, GC_SCAN_INTERVAL_MAX);
++
++	delta_time = max_t(s32, nfct_time_stamp - gc_work->start_time, 1);
++	if (next_run > (unsigned long)delta_time)
++		next_run -= delta_time;
++	else
++		next_run = 1;
++
++early_exit:
+ 	if (gc_work->exiting)
+ 		return;
+ 
+-	/*
+-	 * Eviction will normally happen from the packet path, and not
+-	 * from this gc worker.
+-	 *
+-	 * This worker is only here to reap expired entries when system went
+-	 * idle after a busy period.
+-	 */
+-	if (next_run) {
++	if (next_run)
+ 		gc_work->early_drop = false;
+-		gc_work->next_bucket = 0;
+-	}
++
+ 	queue_delayed_work(system_power_efficient_wq, &gc_work->dwork, next_run);
+ }
+ 
+ static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work)
+ {
+-	INIT_DEFERRABLE_WORK(&gc_work->dwork, gc_worker);
++	INIT_DELAYED_WORK(&gc_work->dwork, gc_worker);
+ 	gc_work->exiting = false;
+ }
+ 
+diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
+index beb0e573266d0..54c0830039470 100644
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -885,6 +885,8 @@ int netlbl_bitmap_walk(const unsigned char *bitmap, u32 bitmap_len,
+ 	unsigned char bitmask;
+ 	unsigned char byte;
+ 
++	if (offset >= bitmap_len)
++		return -1;
+ 	byte_offset = offset / 8;
+ 	byte = bitmap[byte_offset];
+ 	bit_spot = offset;
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 780d9e2246f39..8955f31fa47e9 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -1051,7 +1051,7 @@ static int clone(struct datapath *dp, struct sk_buff *skb,
+ 	int rem = nla_len(attr);
+ 	bool dont_clone_flow_key;
+ 
+-	/* The first action is always 'OVS_CLONE_ATTR_ARG'. */
++	/* The first action is always 'OVS_CLONE_ATTR_EXEC'. */
+ 	clone_arg = nla_data(attr);
+ 	dont_clone_flow_key = nla_get_u32(clone_arg);
+ 	actions = nla_next(clone_arg, &rem);
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 0d677c9c2c805..c591b923016a6 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2288,6 +2288,62 @@ static struct sw_flow_actions *nla_alloc_flow_actions(int size)
+ 	return sfa;
+ }
+ 
++static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len);
++
++static void ovs_nla_free_check_pkt_len_action(const struct nlattr *action)
++{
++	const struct nlattr *a;
++	int rem;
++
++	nla_for_each_nested(a, action, rem) {
++		switch (nla_type(a)) {
++		case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_LESS_EQUAL:
++		case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_GREATER:
++			ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
++			break;
++		}
++	}
++}
++
++static void ovs_nla_free_clone_action(const struct nlattr *action)
++{
++	const struct nlattr *a = nla_data(action);
++	int rem = nla_len(action);
++
++	switch (nla_type(a)) {
++	case OVS_CLONE_ATTR_EXEC:
++		/* The real list of actions follows this attribute. */
++		a = nla_next(a, &rem);
++		ovs_nla_free_nested_actions(a, rem);
++		break;
++	}
++}
++
++static void ovs_nla_free_dec_ttl_action(const struct nlattr *action)
++{
++	const struct nlattr *a = nla_data(action);
++
++	switch (nla_type(a)) {
++	case OVS_DEC_TTL_ATTR_ACTION:
++		ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
++		break;
++	}
++}
++
++static void ovs_nla_free_sample_action(const struct nlattr *action)
++{
++	const struct nlattr *a = nla_data(action);
++	int rem = nla_len(action);
++
++	switch (nla_type(a)) {
++	case OVS_SAMPLE_ATTR_ARG:
++		/* The real list of actions follows this attribute. */
++		a = nla_next(a, &rem);
++		ovs_nla_free_nested_actions(a, rem);
++		break;
++	}
++}
++
+ static void ovs_nla_free_set_action(const struct nlattr *a)
+ {
+ 	const struct nlattr *ovs_key = nla_data(a);
+@@ -2301,25 +2357,54 @@ static void ovs_nla_free_set_action(const struct nlattr *a)
+ 	}
+ }
+ 
+-void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
++static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len)
+ {
+ 	const struct nlattr *a;
+ 	int rem;
+ 
+-	if (!sf_acts)
++	/* Whenever new actions are added, the need to update this
++	 * function should be considered.
++	 */
++	BUILD_BUG_ON(OVS_ACTION_ATTR_MAX != 23);
++
++	if (!actions)
+ 		return;
+ 
+-	nla_for_each_attr(a, sf_acts->actions, sf_acts->actions_len, rem) {
++	nla_for_each_attr(a, actions, len, rem) {
+ 		switch (nla_type(a)) {
+-		case OVS_ACTION_ATTR_SET:
+-			ovs_nla_free_set_action(a);
++		case OVS_ACTION_ATTR_CHECK_PKT_LEN:
++			ovs_nla_free_check_pkt_len_action(a);
++			break;
++
++		case OVS_ACTION_ATTR_CLONE:
++			ovs_nla_free_clone_action(a);
+ 			break;
++
+ 		case OVS_ACTION_ATTR_CT:
+ 			ovs_ct_free_action(a);
+ 			break;
++
++		case OVS_ACTION_ATTR_DEC_TTL:
++			ovs_nla_free_dec_ttl_action(a);
++			break;
++
++		case OVS_ACTION_ATTR_SAMPLE:
++			ovs_nla_free_sample_action(a);
++			break;
++
++		case OVS_ACTION_ATTR_SET:
++			ovs_nla_free_set_action(a);
++			break;
+ 		}
+ 	}
++}
++
++void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
++{
++	if (!sf_acts)
++		return;
+ 
++	ovs_nla_free_nested_actions(sf_acts->actions, sf_acts->actions_len);
+ 	kfree(sf_acts);
+ }
+ 
+@@ -3429,7 +3514,9 @@ static int clone_action_to_attr(const struct nlattr *attr,
+ 	if (!start)
+ 		return -EMSGSIZE;
+ 
+-	err = ovs_nla_put_actions(nla_data(attr), rem, skb);
++	/* Skipping the OVS_CLONE_ATTR_EXEC that is always the first attribute. */
++	attr = nla_next(nla_data(attr), &rem);
++	err = ovs_nla_put_actions(attr, rem, skb);
+ 
+ 	if (err)
+ 		nla_nest_cancel(skb, start);
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index 25bbc4cc8b135..f15d6942da453 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -113,8 +113,8 @@ static __net_exit void rxrpc_exit_net(struct net *net)
+ 	struct rxrpc_net *rxnet = rxrpc_net(net);
+ 
+ 	rxnet->live = false;
+-	del_timer_sync(&rxnet->peer_keepalive_timer);
+ 	cancel_work_sync(&rxnet->peer_keepalive_work);
++	del_timer_sync(&rxnet->peer_keepalive_timer);
+ 	rxrpc_destroy_all_calls(rxnet);
+ 	rxrpc_destroy_all_connections(rxnet);
+ 	rxrpc_destroy_all_peers(rxnet);
+diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c
+index ff47091c385e7..b3950963fc8f0 100644
+--- a/net/sctp/outqueue.c
++++ b/net/sctp/outqueue.c
+@@ -911,6 +911,7 @@ static void sctp_outq_flush_ctrl(struct sctp_flush_ctx *ctx)
+ 				ctx->asoc->base.sk->sk_err = -error;
+ 				return;
+ 			}
++			ctx->asoc->stats.octrlchunks++;
+ 			break;
+ 
+ 		case SCTP_CID_ABORT:
+@@ -935,7 +936,10 @@ static void sctp_outq_flush_ctrl(struct sctp_flush_ctx *ctx)
+ 
+ 		case SCTP_CID_HEARTBEAT:
+ 			if (chunk->pmtu_probe) {
+-				sctp_packet_singleton(ctx->transport, chunk, ctx->gfp);
++				error = sctp_packet_singleton(ctx->transport,
++							      chunk, ctx->gfp);
++				if (!error)
++					ctx->asoc->stats.octrlchunks++;
+ 				break;
+ 			}
+ 			fallthrough;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index a0fb596459a36..88bf88b7ed788 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2621,8 +2621,8 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ 		    sk->sk_state != SMC_CLOSED) {
+ 			if (val) {
+ 				SMC_STAT_INC(smc, ndly_cnt);
+-				mod_delayed_work(smc->conn.lgr->tx_wq,
+-						 &smc->conn.tx_work, 0);
++				smc_tx_pending(&smc->conn);
++				cancel_delayed_work(&smc->conn.tx_work);
+ 			}
+ 		}
+ 		break;
+@@ -2632,8 +2632,8 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ 		    sk->sk_state != SMC_CLOSED) {
+ 			if (!val) {
+ 				SMC_STAT_INC(smc, cork_cnt);
+-				mod_delayed_work(smc->conn.lgr->tx_wq,
+-						 &smc->conn.tx_work, 0);
++				smc_tx_pending(&smc->conn);
++				cancel_delayed_work(&smc->conn.tx_work);
+ 			}
+ 		}
+ 		break;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index d1cdc891c2114..f1dc5b914771b 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1904,7 +1904,7 @@ static struct smc_buf_desc *smc_buf_get_slot(int compressed_bufsize,
+  */
+ static inline int smc_rmb_wnd_update_limit(int rmbe_size)
+ {
+-	return min_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
++	return max_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
+ }
+ 
+ /* map an rmb buf to a link */
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index be241d53020f1..7b0b6e24582f9 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -597,27 +597,32 @@ int smc_tx_sndbuf_nonempty(struct smc_connection *conn)
+ 	return rc;
+ }
+ 
+-/* Wakeup sndbuf consumers from process context
+- * since there is more data to transmit
+- */
+-void smc_tx_work(struct work_struct *work)
++void smc_tx_pending(struct smc_connection *conn)
+ {
+-	struct smc_connection *conn = container_of(to_delayed_work(work),
+-						   struct smc_connection,
+-						   tx_work);
+ 	struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+ 	int rc;
+ 
+-	lock_sock(&smc->sk);
+ 	if (smc->sk.sk_err)
+-		goto out;
++		return;
+ 
+ 	rc = smc_tx_sndbuf_nonempty(conn);
+ 	if (!rc && conn->local_rx_ctrl.prod_flags.write_blocked &&
+ 	    !atomic_read(&conn->bytes_to_rcv))
+ 		conn->local_rx_ctrl.prod_flags.write_blocked = 0;
++}
++
++/* Wakeup sndbuf consumers from process context
++ * since there is more data to transmit
++ */
++void smc_tx_work(struct work_struct *work)
++{
++	struct smc_connection *conn = container_of(to_delayed_work(work),
++						   struct smc_connection,
++						   tx_work);
++	struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+ 
+-out:
++	lock_sock(&smc->sk);
++	smc_tx_pending(conn);
+ 	release_sock(&smc->sk);
+ }
+ 
+diff --git a/net/smc/smc_tx.h b/net/smc/smc_tx.h
+index 07e6ad76224a0..a59f370b8b432 100644
+--- a/net/smc/smc_tx.h
++++ b/net/smc/smc_tx.h
+@@ -27,6 +27,7 @@ static inline int smc_tx_prepared_sends(struct smc_connection *conn)
+ 	return smc_curs_diff(conn->sndbuf_desc->len, &sent, &prep);
+ }
+ 
++void smc_tx_pending(struct smc_connection *conn);
+ void smc_tx_work(struct work_struct *work);
+ void smc_tx_init(struct smc_sock *smc);
+ int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index b36d235d2d6d9..0222ad4523a9d 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2197,6 +2197,7 @@ call_transmit_status(struct rpc_task *task)
+ 		 * socket just returned a connection error,
+ 		 * then hold onto the transport lock.
+ 		 */
++	case -ENOMEM:
+ 	case -ENOBUFS:
+ 		rpc_delay(task, HZ>>2);
+ 		fallthrough;
+@@ -2280,6 +2281,7 @@ call_bc_transmit_status(struct rpc_task *task)
+ 	case -ENOTCONN:
+ 	case -EPIPE:
+ 		break;
++	case -ENOMEM:
+ 	case -ENOBUFS:
+ 		rpc_delay(task, HZ>>2);
+ 		fallthrough;
+@@ -2362,6 +2364,11 @@ call_status(struct rpc_task *task)
+ 	case -EPIPE:
+ 	case -EAGAIN:
+ 		break;
++	case -ENFILE:
++	case -ENOBUFS:
++	case -ENOMEM:
++		rpc_delay(task, HZ>>2);
++		break;
+ 	case -EIO:
+ 		/* shutdown or soft timeout */
+ 		goto out_exit;
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index ae295844ac55a..9020cedb7c95a 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -186,11 +186,6 @@ static void __rpc_add_wait_queue_priority(struct rpc_wait_queue *queue,
+ 
+ /*
+  * Add new request to wait queue.
+- *
+- * Swapper tasks always get inserted at the head of the queue.
+- * This should avoid many nasty memory deadlocks and hopefully
+- * improve overall performance.
+- * Everyone else gets appended to the queue to ensure proper FIFO behavior.
+  */
+ static void __rpc_add_wait_queue(struct rpc_wait_queue *queue,
+ 		struct rpc_task *task,
+@@ -199,8 +194,6 @@ static void __rpc_add_wait_queue(struct rpc_wait_queue *queue,
+ 	INIT_LIST_HEAD(&task->u.tk_wait.timer_list);
+ 	if (RPC_IS_PRIORITY(queue))
+ 		__rpc_add_wait_queue_priority(queue, task, queue_priority);
+-	else if (RPC_IS_SWAPPER(task))
+-		list_add(&task->u.tk_wait.list, &queue->tasks[0]);
+ 	else
+ 		list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]);
+ 	task->tk_waitqueue = queue;
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 478f857cdaed4..6ea3d87e11475 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1096,7 +1096,9 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
+ 	int ret;
+ 
+ 	*sentp = 0;
+-	xdr_alloc_bvec(xdr, GFP_KERNEL);
++	ret = xdr_alloc_bvec(xdr, GFP_KERNEL);
++	if (ret < 0)
++		return ret;
+ 
+ 	ret = kernel_sendmsg(sock, &msg, &rm, 1, rm.iov_len);
+ 	if (ret < 0)
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 396a74974f60f..d557a3cb2ad4a 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -929,12 +929,7 @@ void xprt_connect(struct rpc_task *task)
+ 	if (!xprt_lock_write(xprt, task))
+ 		return;
+ 
+-	if (test_and_clear_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
+-		trace_xprt_disconnect_cleanup(xprt);
+-		xprt->ops->close(xprt);
+-	}
+-
+-	if (!xprt_connected(xprt)) {
++	if (!xprt_connected(xprt) && !test_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
+ 		task->tk_rqstp->rq_connect_cookie = xprt->connect_cookie;
+ 		rpc_sleep_on_timeout(&xprt->pending, task, NULL,
+ 				xprt_request_timeout(task->tk_rqstp));
+@@ -1354,17 +1349,6 @@ xprt_request_enqueue_transmit(struct rpc_task *task)
+ 				INIT_LIST_HEAD(&req->rq_xmit2);
+ 				goto out;
+ 			}
+-		} else if (RPC_IS_SWAPPER(task)) {
+-			list_for_each_entry(pos, &xprt->xmit_queue, rq_xmit) {
+-				if (pos->rq_cong || pos->rq_bytes_sent)
+-					continue;
+-				if (RPC_IS_SWAPPER(pos->rq_task))
+-					continue;
+-				/* Note: req is added _before_ pos */
+-				list_add_tail(&req->rq_xmit, &pos->rq_xmit);
+-				INIT_LIST_HEAD(&req->rq_xmit2);
+-				goto out;
+-			}
+ 		} else if (!req->rq_seqno) {
+ 			list_for_each_entry(pos, &xprt->xmit_queue, rq_xmit) {
+ 				if (pos->rq_task->tk_owner != task->tk_owner)
+@@ -1690,12 +1674,15 @@ out:
+ static struct rpc_rqst *xprt_dynamic_alloc_slot(struct rpc_xprt *xprt)
+ {
+ 	struct rpc_rqst *req = ERR_PTR(-EAGAIN);
++	gfp_t gfp_mask = GFP_KERNEL;
+ 
+ 	if (xprt->num_reqs >= xprt->max_reqs)
+ 		goto out;
+ 	++xprt->num_reqs;
+ 	spin_unlock(&xprt->reserve_lock);
+-	req = kzalloc(sizeof(struct rpc_rqst), GFP_NOFS);
++	if (current->flags & PF_WQ_WORKER)
++		gfp_mask |= __GFP_NORETRY | __GFP_NOWARN;
++	req = kzalloc(sizeof(*req), gfp_mask);
+ 	spin_lock(&xprt->reserve_lock);
+ 	if (req != NULL)
+ 		goto out;
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 6268af7e03101..256b06a92391b 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -525,7 +525,7 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
+ 	return;
+ 
+ out_sleep:
+-	task->tk_status = -EAGAIN;
++	task->tk_status = -ENOMEM;
+ 	xprt_add_backlog(xprt, task);
+ }
+ 
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 37d961c1a5c9b..a2fda99c648fb 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -763,12 +763,12 @@ xs_stream_start_connect(struct sock_xprt *transport)
+ /**
+  * xs_nospace - handle transmit was incomplete
+  * @req: pointer to RPC request
++ * @transport: pointer to struct sock_xprt
+  *
+  */
+-static int xs_nospace(struct rpc_rqst *req)
++static int xs_nospace(struct rpc_rqst *req, struct sock_xprt *transport)
+ {
+-	struct rpc_xprt *xprt = req->rq_xprt;
+-	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
++	struct rpc_xprt *xprt = &transport->xprt;
+ 	struct sock *sk = transport->inet;
+ 	int ret = -EAGAIN;
+ 
+@@ -779,25 +779,49 @@ static int xs_nospace(struct rpc_rqst *req)
+ 
+ 	/* Don't race with disconnect */
+ 	if (xprt_connected(xprt)) {
++		struct socket_wq *wq;
++
++		rcu_read_lock();
++		wq = rcu_dereference(sk->sk_wq);
++		set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
++		rcu_read_unlock();
++
+ 		/* wait for more buffer space */
++		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ 		sk->sk_write_pending++;
+ 		xprt_wait_for_buffer_space(xprt);
+ 	} else
+ 		ret = -ENOTCONN;
+ 
+ 	spin_unlock(&xprt->transport_lock);
++	return ret;
++}
+ 
+-	/* Race breaker in case memory is freed before above code is called */
+-	if (ret == -EAGAIN) {
+-		struct socket_wq *wq;
++static int xs_sock_nospace(struct rpc_rqst *req)
++{
++	struct sock_xprt *transport =
++		container_of(req->rq_xprt, struct sock_xprt, xprt);
++	struct sock *sk = transport->inet;
++	int ret = -EAGAIN;
+ 
+-		rcu_read_lock();
+-		wq = rcu_dereference(sk->sk_wq);
+-		set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
+-		rcu_read_unlock();
++	lock_sock(sk);
++	if (!sock_writeable(sk))
++		ret = xs_nospace(req, transport);
++	release_sock(sk);
++	return ret;
++}
+ 
+-		sk->sk_write_space(sk);
+-	}
++static int xs_stream_nospace(struct rpc_rqst *req)
++{
++	struct sock_xprt *transport =
++		container_of(req->rq_xprt, struct sock_xprt, xprt);
++	struct sock *sk = transport->inet;
++	int ret = -EAGAIN;
++
++	lock_sock(sk);
++	if (!sk_stream_memory_free(sk))
++		ret = xs_nospace(req, transport);
++	release_sock(sk);
+ 	return ret;
+ }
+ 
+@@ -856,7 +880,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 
+ 	/* Close the stream if the previous transmission was incomplete */
+ 	if (xs_send_request_was_aborted(transport, req)) {
+-		xs_close(xprt);
++		xprt_force_disconnect(xprt);
+ 		return -ENOTCONN;
+ 	}
+ 
+@@ -887,14 +911,14 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 	case -ENOBUFS:
+ 		break;
+ 	case -EAGAIN:
+-		status = xs_nospace(req);
++		status = xs_stream_nospace(req);
+ 		break;
+ 	default:
+ 		dprintk("RPC:       sendmsg returned unrecognized error %d\n",
+ 			-status);
+ 		fallthrough;
+ 	case -EPIPE:
+-		xs_close(xprt);
++		xprt_force_disconnect(xprt);
+ 		status = -ENOTCONN;
+ 	}
+ 
+@@ -963,7 +987,7 @@ process_status:
+ 		/* Should we call xs_close() here? */
+ 		break;
+ 	case -EAGAIN:
+-		status = xs_nospace(req);
++		status = xs_sock_nospace(req);
+ 		break;
+ 	case -ENETUNREACH:
+ 	case -ENOBUFS:
+@@ -1083,7 +1107,7 @@ static int xs_tcp_send_request(struct rpc_rqst *req)
+ 		/* Should we call xs_close() here? */
+ 		break;
+ 	case -EAGAIN:
+-		status = xs_nospace(req);
++		status = xs_stream_nospace(req);
+ 		break;
+ 	case -ECONNRESET:
+ 	case -ECONNREFUSED:
+@@ -1179,6 +1203,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
+ 
+ 	if (sk == NULL)
+ 		return;
++	/*
++	 * Make sure we're calling this in a context from which it is safe
++	 * to call __fput_sync(). In practice that means rpciod and the
++	 * system workqueue.
++	 */
++	if (!(current->flags & PF_WQ_WORKER)) {
++		WARN_ON_ONCE(1);
++		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
++		return;
++	}
+ 
+ 	if (atomic_read(&transport->xprt.swapper))
+ 		sk_clear_memalloc(sk);
+@@ -1202,7 +1236,7 @@ static void xs_reset_transport(struct sock_xprt *transport)
+ 	mutex_unlock(&transport->recv_mutex);
+ 
+ 	trace_rpc_socket_close(xprt, sock);
+-	fput(filp);
++	__fput_sync(filp);
+ 
+ 	xprt_disconnect_done(xprt);
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index dfe623a4e72f4..3aecd770ef990 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1495,7 +1495,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
+ 	if (prot->version == TLS_1_3_VERSION ||
+ 	    prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305)
+ 		memcpy(iv + iv_offset, tls_ctx->rx.iv,
+-		       crypto_aead_ivsize(ctx->aead_recv));
++		       prot->iv_size + prot->salt_size);
+ 	else
+ 		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 22e92be619388..ea0b768def5ed 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -702,8 +702,12 @@ static bool cfg80211_find_ssid_match(struct cfg80211_colocated_ap *ap,
+ 
+ 	for (i = 0; i < request->n_ssids; i++) {
+ 		/* wildcard ssid in the scan request */
+-		if (!request->ssids[i].ssid_len)
++		if (!request->ssids[i].ssid_len) {
++			if (ap->multi_bss && !ap->transmitted_bssid)
++				continue;
++
+ 			return true;
++		}
+ 
+ 		if (ap->ssid_len &&
+ 		    ap->ssid_len == request->ssids[i].ssid_len) {
+@@ -829,6 +833,9 @@ static int cfg80211_scan_6ghz(struct cfg80211_registered_device *rdev)
+ 		    !cfg80211_find_ssid_match(ap, request))
+ 			continue;
+ 
++		if (!request->n_ssids && ap->multi_bss && !ap->transmitted_bssid)
++			continue;
++
+ 		cfg80211_scan_req_add_chan(request, chan, true);
+ 		memcpy(scan_6ghz_params->bssid, ap->bssid, ETH_ALEN);
+ 		scan_6ghz_params->short_ssid = ap->short_ssid;
+diff --git a/scripts/Makefile.ubsan b/scripts/Makefile.ubsan
+index 9e2092fd5206c..7099c603ff0ad 100644
+--- a/scripts/Makefile.ubsan
++++ b/scripts/Makefile.ubsan
+@@ -8,7 +8,6 @@ ubsan-cflags-$(CONFIG_UBSAN_LOCAL_BOUNDS)	+= -fsanitize=local-bounds
+ ubsan-cflags-$(CONFIG_UBSAN_SHIFT)		+= -fsanitize=shift
+ ubsan-cflags-$(CONFIG_UBSAN_DIV_ZERO)		+= -fsanitize=integer-divide-by-zero
+ ubsan-cflags-$(CONFIG_UBSAN_UNREACHABLE)	+= -fsanitize=unreachable
+-ubsan-cflags-$(CONFIG_UBSAN_OBJECT_SIZE)	+= -fsanitize=object-size
+ ubsan-cflags-$(CONFIG_UBSAN_BOOL)		+= -fsanitize=bool
+ ubsan-cflags-$(CONFIG_UBSAN_ENUM)		+= -fsanitize=enum
+ ubsan-cflags-$(CONFIG_UBSAN_TRAP)		+= -fsanitize-undefined-trap-on-error
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index 1480910c792e2..de66e1cc07348 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -217,9 +217,16 @@ strip-libs = $(filter-out -l%,$(1))
+ PERL_EMBED_LDOPTS = $(shell perl -MExtUtils::Embed -e ldopts 2>/dev/null)
+ PERL_EMBED_LDFLAGS = $(call strip-libs,$(PERL_EMBED_LDOPTS))
+ PERL_EMBED_LIBADD = $(call grep-libs,$(PERL_EMBED_LDOPTS))
+-PERL_EMBED_CCOPTS = `perl -MExtUtils::Embed -e ccopts 2>/dev/null`
++PERL_EMBED_CCOPTS = $(shell perl -MExtUtils::Embed -e ccopts 2>/dev/null)
+ FLAGS_PERL_EMBED=$(PERL_EMBED_CCOPTS) $(PERL_EMBED_LDOPTS)
+ 
++ifeq ($(CC_NO_CLANG), 0)
++  PERL_EMBED_LDOPTS := $(filter-out -specs=%,$(PERL_EMBED_LDOPTS))
++  PERL_EMBED_CCOPTS := $(filter-out -flto=auto -ffat-lto-objects, $(PERL_EMBED_CCOPTS))
++  PERL_EMBED_CCOPTS := $(filter-out -specs=%,$(PERL_EMBED_CCOPTS))
++  FLAGS_PERL_EMBED += -Wno-compound-token-split-by-macro
++endif
++
+ $(OUTPUT)test-libperl.bin:
+ 	$(BUILD) $(FLAGS_PERL_EMBED)
+ 
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index b393b5e823804..d1b474f3c586f 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -129,7 +129,7 @@ GLOBAL_SYM_COUNT = $(shell readelf -s --wide $(BPF_IN_SHARED) | \
+ 			   sort -u | wc -l)
+ VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+ 			      sed 's/\[.*\]//' | \
+-			      awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
++			      awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}' | \
+ 			      grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l)
+ 
+ CMD_TARGETS = $(LIB_TARGET) $(PC_FILE)
+@@ -192,7 +192,7 @@ check_abi: $(OUTPUT)libbpf.so $(VERSION_SCRIPT)
+ 		    sort -u > $(OUTPUT)libbpf_global_syms.tmp;		 \
+ 		readelf --dyn-syms --wide $(OUTPUT)libbpf.so |		 \
+ 		    sed 's/\[.*\]//' |					 \
+-		    awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'|  \
++		    awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}'|  \
+ 		    grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 |		 \
+ 		    sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; 	 \
+ 		diff -u $(OUTPUT)libbpf_global_syms.tmp			 \
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 83c04ebc940c9..fecd7e842ffd6 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -270,6 +270,9 @@ ifdef PYTHON_CONFIG
+   PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil
+   PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null)
+   FLAGS_PYTHON_EMBED := $(PYTHON_EMBED_CCOPTS) $(PYTHON_EMBED_LDOPTS)
++  ifeq ($(CC_NO_CLANG), 0)
++    PYTHON_EMBED_CCOPTS := $(filter-out -ffat-lto-objects, $(PYTHON_EMBED_CCOPTS))
++  endif
+ endif
+ 
+ FEATURE_CHECK_CFLAGS-libpython := $(PYTHON_EMBED_CCOPTS)
+@@ -785,6 +788,9 @@ else
+     LDFLAGS += $(PERL_EMBED_LDFLAGS)
+     EXTLIBS += $(PERL_EMBED_LIBADD)
+     CFLAGS += -DHAVE_LIBPERL_SUPPORT
++    ifeq ($(CC_NO_CLANG), 0)
++      CFLAGS += -Wno-compound-token-split-by-macro
++    endif
+     $(call detected,CONFIG_LIBPERL)
+   endif
+ endif
+diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
+index 2100d46ccf5e6..bb4ab99afa7f8 100644
+--- a/tools/perf/arch/arm64/util/arm-spe.c
++++ b/tools/perf/arch/arm64/util/arm-spe.c
+@@ -239,6 +239,12 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
+ 		arm_spe_set_timestamp(itr, arm_spe_evsel);
+ 	}
+ 
++	/*
++	 * Set this only so that perf report knows that SPE generates memory info. It has no effect
++	 * on the opening of the event or the SPE data produced.
++	 */
++	evsel__set_sample_bit(arm_spe_evsel, DATA_SRC);
++
+ 	/* Add dummy event to keep tracking */
+ 	err = parse_events(evlist, "dummy:u", NULL);
+ 	if (err)
+diff --git a/tools/perf/perf.c b/tools/perf/perf.c
+index 2f6b67189b426..6aae7b6c376b4 100644
+--- a/tools/perf/perf.c
++++ b/tools/perf/perf.c
+@@ -434,7 +434,7 @@ void pthread__unblock_sigwinch(void)
+ static int libperf_print(enum libperf_print_level level,
+ 			 const char *fmt, va_list ap)
+ {
+-	return eprintf(level, verbose, fmt, ap);
++	return veprintf(level, verbose, fmt, ap);
+ }
+ 
+ int main(int argc, const char **argv)
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index d8857d1b6d7c1..c64953dabcf54 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -2082,6 +2082,7 @@ prefetch_event(char *buf, u64 head, size_t mmap_size,
+ 	       bool needs_swap, union perf_event *error)
+ {
+ 	union perf_event *event;
++	u16 event_size;
+ 
+ 	/*
+ 	 * Ensure we have enough space remaining to read
+@@ -2094,15 +2095,23 @@ prefetch_event(char *buf, u64 head, size_t mmap_size,
+ 	if (needs_swap)
+ 		perf_event_header__bswap(&event->header);
+ 
+-	if (head + event->header.size <= mmap_size)
++	event_size = event->header.size;
++	if (head + event_size <= mmap_size)
+ 		return event;
+ 
+ 	/* We're not fetching the event so swap back again */
+ 	if (needs_swap)
+ 		perf_event_header__bswap(&event->header);
+ 
+-	pr_debug("%s: head=%#" PRIx64 " event->header_size=%#x, mmap_size=%#zx:"
+-		 " fuzzed or compressed perf.data?\n",__func__, head, event->header.size, mmap_size);
++	/* Check if the event fits into the next mmapped buf. */
++	if (event_size <= mmap_size - head % page_size) {
++		/* Remap buf and fetch again. */
++		return NULL;
++	}
++
++	/* Invalid input. Event size should never exceed mmap_size. */
++	pr_debug("%s: head=%#" PRIx64 " event->header.size=%#x, mmap_size=%#zx:"
++		 " fuzzed or compressed perf.data?\n", __func__, head, event_size, mmap_size);
+ 
+ 	return error;
+ }
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 483f05004e682..c255a2c90cd67 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -1,12 +1,14 @@
+-from os import getenv
++from os import getenv, path
+ from subprocess import Popen, PIPE
+ from re import sub
+ 
+ cc = getenv("CC")
+ cc_is_clang = b"clang version" in Popen([cc.split()[0], "-v"], stderr=PIPE).stderr.readline()
++src_feature_tests  = getenv('srctree') + '/tools/build/feature'
+ 
+ def clang_has_option(option):
+-    return [o for o in Popen([cc, option], stderr=PIPE).stderr.readlines() if b"unknown argument" in o] == [ ]
++    cc_output = Popen([cc, option, path.join(src_feature_tests, "test-hello.c") ], stderr=PIPE).stderr.readlines()
++    return [o for o in cc_output if ((b"unknown argument" in o) or (b"is not supported" in o))] == [ ]
+ 
+ if cc_is_clang:
+     from distutils.sysconfig import get_config_vars
+@@ -23,6 +25,8 @@ if cc_is_clang:
+             vars[var] = sub("-fstack-protector-strong", "", vars[var])
+         if not clang_has_option("-fno-semantic-interposition"):
+             vars[var] = sub("-fno-semantic-interposition", "", vars[var])
++        if not clang_has_option("-ffat-lto-objects"):
++            vars[var] = sub("-ffat-lto-objects", "", vars[var])
+ 
+ from distutils.core import setup, Extension
+ 
+diff --git a/tools/testing/selftests/bpf/progs/test_sk_lookup.c b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+index 19d2465d94425..4e38234d80c49 100644
+--- a/tools/testing/selftests/bpf/progs/test_sk_lookup.c
++++ b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+@@ -404,8 +404,7 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
+ 
+ 	/* Narrow loads from remote_port field. Expect SRC_PORT. */
+ 	if (LSB(ctx->remote_port, 0) != ((SRC_PORT >> 0) & 0xff) ||
+-	    LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff) ||
+-	    LSB(ctx->remote_port, 2) != 0 || LSB(ctx->remote_port, 3) != 0)
++	    LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff))
+ 		return SK_DROP;
+ 	if (LSW(ctx->remote_port, 0) != SRC_PORT)
+ 		return SK_DROP;
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
+index 37d4873d9a2ed..e475d0327c811 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.c
++++ b/tools/testing/selftests/bpf/xdpxceiver.c
+@@ -260,22 +260,24 @@ static int xsk_configure_umem(struct xsk_umem_info *umem, void *buffer, u64 size
+ }
+ 
+ static int xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
+-				struct ifobject *ifobject, u32 qid)
++				struct ifobject *ifobject, bool shared)
+ {
+-	struct xsk_socket_config cfg;
++	struct xsk_socket_config cfg = {};
+ 	struct xsk_ring_cons *rxr;
+ 	struct xsk_ring_prod *txr;
+ 
+ 	xsk->umem = umem;
+ 	cfg.rx_size = xsk->rxqsize;
+ 	cfg.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS;
+-	cfg.libbpf_flags = 0;
++	cfg.libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD;
+ 	cfg.xdp_flags = ifobject->xdp_flags;
+ 	cfg.bind_flags = ifobject->bind_flags;
++	if (shared)
++		cfg.bind_flags |= XDP_SHARED_UMEM;
+ 
+ 	txr = ifobject->tx_on ? &xsk->tx : NULL;
+ 	rxr = ifobject->rx_on ? &xsk->rx : NULL;
+-	return xsk_socket__create(&xsk->xsk, ifobject->ifname, qid, umem->umem, rxr, txr, &cfg);
++	return xsk_socket__create(&xsk->xsk, ifobject->ifname, 0, umem->umem, rxr, txr, &cfg);
+ }
+ 
+ static struct option long_options[] = {
+@@ -381,7 +383,6 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
+ 	for (i = 0; i < MAX_INTERFACES; i++) {
+ 		struct ifobject *ifobj = i ? ifobj_rx : ifobj_tx;
+ 
+-		ifobj->umem = &ifobj->umem_arr[0];
+ 		ifobj->xsk = &ifobj->xsk_arr[0];
+ 		ifobj->use_poll = false;
+ 		ifobj->pacing_on = true;
+@@ -395,11 +396,12 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
+ 			ifobj->tx_on = false;
+ 		}
+ 
++		memset(ifobj->umem, 0, sizeof(*ifobj->umem));
++		ifobj->umem->num_frames = DEFAULT_UMEM_BUFFERS;
++		ifobj->umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
++
+ 		for (j = 0; j < MAX_SOCKETS; j++) {
+-			memset(&ifobj->umem_arr[j], 0, sizeof(ifobj->umem_arr[j]));
+ 			memset(&ifobj->xsk_arr[j], 0, sizeof(ifobj->xsk_arr[j]));
+-			ifobj->umem_arr[j].num_frames = DEFAULT_UMEM_BUFFERS;
+-			ifobj->umem_arr[j].frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+ 			ifobj->xsk_arr[j].rxqsize = XSK_RING_CONS__DEFAULT_NUM_DESCS;
+ 		}
+ 	}
+@@ -946,7 +948,10 @@ static void tx_stats_validate(struct ifobject *ifobject)
+ 
+ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
+ {
++	u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
+ 	int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE;
++	int ret, ifindex;
++	void *bufs;
+ 	u32 i;
+ 
+ 	ifobject->ns_fd = switch_namespace(ifobject->nsname);
+@@ -954,23 +959,20 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
+ 	if (ifobject->umem->unaligned_mode)
+ 		mmap_flags |= MAP_HUGETLB;
+ 
+-	for (i = 0; i < test->nb_sockets; i++) {
+-		u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
+-		u32 ctr = 0;
+-		void *bufs;
+-		int ret;
++	bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
++	if (bufs == MAP_FAILED)
++		exit_with_error(errno);
+ 
+-		bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
+-		if (bufs == MAP_FAILED)
+-			exit_with_error(errno);
++	ret = xsk_configure_umem(ifobject->umem, bufs, umem_sz);
++	if (ret)
++		exit_with_error(-ret);
+ 
+-		ret = xsk_configure_umem(&ifobject->umem_arr[i], bufs, umem_sz);
+-		if (ret)
+-			exit_with_error(-ret);
++	for (i = 0; i < test->nb_sockets; i++) {
++		u32 ctr = 0;
+ 
+ 		while (ctr++ < SOCK_RECONF_CTR) {
+-			ret = xsk_configure_socket(&ifobject->xsk_arr[i], &ifobject->umem_arr[i],
+-						   ifobject, i);
++			ret = xsk_configure_socket(&ifobject->xsk_arr[i], ifobject->umem,
++						   ifobject, !!i);
+ 			if (!ret)
+ 				break;
+ 
+@@ -981,8 +983,22 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
+ 		}
+ 	}
+ 
+-	ifobject->umem = &ifobject->umem_arr[0];
+ 	ifobject->xsk = &ifobject->xsk_arr[0];
++
++	if (!ifobject->rx_on)
++		return;
++
++	ifindex = if_nametoindex(ifobject->ifname);
++	if (!ifindex)
++		exit_with_error(errno);
++
++	ret = xsk_setup_xdp_prog(ifindex, &ifobject->xsk_map_fd);
++	if (ret)
++		exit_with_error(-ret);
++
++	ret = xsk_socket__update_xskmap(ifobject->xsk->xsk, ifobject->xsk_map_fd);
++	if (ret)
++		exit_with_error(-ret);
+ }
+ 
+ static void testapp_cleanup_xsk_res(struct ifobject *ifobj)
+@@ -1138,14 +1154,16 @@ static void testapp_bidi(struct test_spec *test)
+ 
+ static void swap_xsk_resources(struct ifobject *ifobj_tx, struct ifobject *ifobj_rx)
+ {
++	int ret;
++
+ 	xsk_socket__delete(ifobj_tx->xsk->xsk);
+-	xsk_umem__delete(ifobj_tx->umem->umem);
+ 	xsk_socket__delete(ifobj_rx->xsk->xsk);
+-	xsk_umem__delete(ifobj_rx->umem->umem);
+-	ifobj_tx->umem = &ifobj_tx->umem_arr[1];
+ 	ifobj_tx->xsk = &ifobj_tx->xsk_arr[1];
+-	ifobj_rx->umem = &ifobj_rx->umem_arr[1];
+ 	ifobj_rx->xsk = &ifobj_rx->xsk_arr[1];
++
++	ret = xsk_socket__update_xskmap(ifobj_rx->xsk->xsk, ifobj_rx->xsk_map_fd);
++	if (ret)
++		exit_with_error(-ret);
+ }
+ 
+ static void testapp_bpf_res(struct test_spec *test)
+@@ -1404,13 +1422,13 @@ static struct ifobject *ifobject_create(void)
+ 	if (!ifobj->xsk_arr)
+ 		goto out_xsk_arr;
+ 
+-	ifobj->umem_arr = calloc(MAX_SOCKETS, sizeof(*ifobj->umem_arr));
+-	if (!ifobj->umem_arr)
+-		goto out_umem_arr;
++	ifobj->umem = calloc(1, sizeof(*ifobj->umem));
++	if (!ifobj->umem)
++		goto out_umem;
+ 
+ 	return ifobj;
+ 
+-out_umem_arr:
++out_umem:
+ 	free(ifobj->xsk_arr);
+ out_xsk_arr:
+ 	free(ifobj);
+@@ -1419,7 +1437,7 @@ out_xsk_arr:
+ 
+ static void ifobject_delete(struct ifobject *ifobj)
+ {
+-	free(ifobj->umem_arr);
++	free(ifobj->umem);
+ 	free(ifobj->xsk_arr);
+ 	free(ifobj);
+ }
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
+index 2f705f44b7483..62a3e63886325 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.h
++++ b/tools/testing/selftests/bpf/xdpxceiver.h
+@@ -125,10 +125,10 @@ struct ifobject {
+ 	struct xsk_socket_info *xsk;
+ 	struct xsk_socket_info *xsk_arr;
+ 	struct xsk_umem_info *umem;
+-	struct xsk_umem_info *umem_arr;
+ 	thread_func_t func_ptr;
+ 	struct pkt_stream *pkt_stream;
+ 	int ns_fd;
++	int xsk_map_fd;
+ 	u32 dst_ip;
+ 	u32 src_ip;
+ 	u32 xdp_flags;
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 74798e50e3f97..6896d384858aa 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -439,8 +439,8 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
+ 
+ void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
+ {
+-	kvm_dirty_ring_free(&vcpu->dirty_ring);
+ 	kvm_arch_vcpu_destroy(vcpu);
++	kvm_dirty_ring_free(&vcpu->dirty_ring);
+ 
+ 	/*
+ 	 * No need for rcu_read_lock as VCPU_RUN is the only place that changes

